Compare commits

...

22 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Dave Conroy | 955a08a21b | Release 1.23.0 - See CHANGELOG.md | 2020-06-15 09:44:07 -07:00 |
| Dave Conroy | bf97c3ab97 | Update README.md | 2020-06-10 05:48:03 -07:00 |
| Dave Conroy | 11969da1ea | Release 1.22.0 - See CHANGELOG.md | 2020-06-10 05:45:49 -07:00 |
| Dave Conroy | 7998156576 | Release 1.21.3 - See CHANGELOG.md | 2020-06-10 05:19:24 -07:00 |
| Dave Conroy | 6655d5a12a | Release 1.21.2 - See CHANGELOG.md | 2020-06-08 21:29:54 -07:00 |
| Dave Conroy | bd141cc865 | Release 1.21.1 - See CHANGELOG.md | 2020-06-04 05:59:29 -07:00 |
| Dave Conroy | abf2a877f7 | Fix example | 2020-06-03 20:41:17 -07:00 |
| Dave Conroy | 3115cb3440 | Release 1.21.0 - See CHANGELOG.md | 2020-06-03 05:55:46 -07:00 |
| Dave Conroy | 859ce5ffa3 | Release 1.20.1 - See CHANGELOG.md | 2020-04-24 15:45:52 -07:00 |
| Dave Conroy | 4d1577e553 | Fix malformed backtick | 2020-04-22 14:39:09 -07:00 |
| Dave Conroy | 4f103a5b36 | Release 1.20.0 - See CHANGELOG.md | 2020-04-22 14:19:20 -07:00 |
| Dave Conroy | 0472cba83d | Update README.md | 2020-04-22 05:36:48 -07:00 |
| Dave Conroy | fe6fab857f | Merge branch 'master' of https://github.com/tiredofit/docker-db-backup | 2020-04-22 05:36:08 -07:00 |
| Dave Conroy | 6113bf64b2 | Update README.md | 2020-04-22 05:36:03 -07:00 |
| Dave Conroy | 95d1129a12 | Merge pull request #22 from pascalberger/patch-1 (Fix typo) | 2020-04-22 05:34:48 -07:00 |
| Dave Conroy | f8bab5f045 | Release 1.19.0 - See CHANGELOG.md | 2020-04-22 05:21:02 -07:00 |
| Dave Conroy | 71802d2a28 | Release 1.18.2 - See CHANGELOG.md | 2020-04-08 12:08:42 -07:00 |
| Dave Conroy | c96a2179b5 | Merge pull request #27 from hyun007/master (changed mysql password to env variable) | 2020-04-08 12:04:55 -07:00 |
| Hyun Jo | 42d3aa0fef | changed mysql password to env variable | 2020-03-19 20:17:49 -04:00 |
| Dave Conroy | e6009e7a1e | Release 1.18.1 - See CHANGELOG.md | 2020-03-14 07:59:31 -07:00 |
| Pascal Berger | b5466d5b97 | Fix typo | 2020-03-01 19:43:49 +01:00 |
| Dave Conroy | 06b6e685c7 | Support new tiredofit/alpine base image | 2019-12-30 07:40:13 -08:00 |
8 changed files with 311 additions and 70 deletions

CHANGELOG.md

@@ -1,3 +1,97 @@
+## 1.23.0 2020-06-15 <dave at tiredofit dot ca>
+### Added
+- Add zstd compression support
+- Add choice of compression level
+## 1.22.0 2020-06-10 <dave at tiredofit dot ca>
+### Added
+- Added EXTRA_OPTS variable to all backup commands to pass extra arguments
+## 1.21.3 2020-06-10 <dave at tiredofit dot ca>
+### Changed
+- Fix `backup-now` manual script due to services.available change
+## 1.21.2 2020-06-08 <dave at tiredofit dot ca>
+### Added
+- Change to support tiredofit/alpine base image 5.0.0
+## 1.21.1 2020-06-04 <dave at tiredofit dot ca>
+### Changed
+- Bugfix to initialization routine
+## 1.21.0 2020-06-03 <dave at tiredofit dot ca>
+### Added
+- Add S3 Compatible Storage Support
+### Changed
+- Switch some variables to support tiredofit/alpine base image better
+- Fix issue with parallel compression not working correctly
+## 1.20.1 2020-04-24 <dave at tiredofit dot ca>
+### Changed
+- Fix Auto Cleanup routines when using `root` as username
+## 1.20.0 2020-04-22 <dave at tiredofit dot ca>
+### Added
+- Docker Secrets Support for DB_USER and DB_PASS variables
+## 1.19.0 2020-04-22 <dave at tiredofit dot ca>
+### Added
+- Custom Script support to execute upon completion of backup
+## 1.18.2 2020-04-08 <hyun007 @ github>
+### Changed
+- Rework to allow passwords with spaces in them for MariaDB / MySQL
+## 1.18.1 2020-03-14 <dave at tiredofit dot ca>
+### Changed
+- Allow for passwords with spaces in them for MariaDB / MySQL
+## 1.18.0 2019-12-29 <dave at tiredofit dot ca>
+### Added
+- Update image to support new tiredofit/alpine base images
+## 1.17.3 2019-12-12 <dave at tiredofit dot ca>
+### Changed
+- Quiet down Zabbix Agent
+## 1.17.2 2019-12-12 <dave at tiredofit dot ca>
+### Changed
+- Re Enable ZABBIX
+## 1.17.1 2019-12-10 <dave at tiredofit dot ca>
+### Changed
+- Fix spelling mistake in container initialization
 ## 1.17.0 2019-12-09 <dave at tiredofit dot ca>
 ### Changed

Dockerfile

@@ -4,7 +4,7 @@ LABEL maintainer="Dave Conroy (dave at tiredofit dot ca)"
 ### Set Environment Variables
 ENV ENABLE_CRON=FALSE \
     ENABLE_SMTP=FALSE \
-    ENABLE_ZABBIX=FALSE \
+    ENABLE_ZABBIX=TRUE \
     ZABBIX_HOSTNAME=db-backup
 ### Dependencies
@@ -30,6 +30,7 @@ RUN set -ex && \
     postgresql-client \
     redis \
     xz \
+    zstd \
     && \
     \
     apk add \
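
The `zstd` package added to this `RUN` layer backs the new `ZSTD` compression option from 1.23.0. For orientation, the flags the run script builds (`zstd --rm -<level>`) behave like this sketch; the dump filename is made up:

```bash
# Compress a finished dump; --rm removes the source file on success,
# leaving only the .zst artifact the image then moves or uploads.
zstd --rm -3 mysql_example_example-db_20200615-094400.sql
# Higher levels (zstd accepts 1-19 here) trade CPU time for smaller files:
zstd --rm -19 mysql_example_example-db_20200615-094400.sql
```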

LICENSE

@@ -1,6 +1,6 @@
 The MIT License (MIT)
-Copyright (c) 2019 Dave Conroy
+Copyright (c) 2020 Dave Conroy
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md

@@ -10,17 +10,18 @@
 This will build a container for backing up multiple type of DB Servers
-Currently backs up CouchDB, InfluxDB, MySQL, MongoDB Postgres, Redis, Rethink servers.
+Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis, Rethink servers.
-* dump to local filesystem
+* dump to local filesystem or backup to S3 Compatible services
 * select database user and password
 * backup all databases
 * choose to have an MD5 sum after backup for verification
 * delete old backups after specific amount of time
-* choose compression type (none, gz, bz, xz)
+* choose compression type (none, gz, bz, xz, zstd)
 * connect to any container running on the same system
 * select how often to run a dump
 * select when to start the first dump, whether time of day or relative to container start time
+* Execute script after backup for monitoring/alerting purposes
 * This Container uses a [customized Alpine Linux base](https://hub.docker.com/r/tiredofit/alpine) which includes [s6 overlay](https://github.com/just-containers/s6-overlay) enabled for PID 1 Init capabilities, [zabbix-agent](https://zabbix.org) for individual container monitoring, Cron also installed along with other tools (bash,curl, less, logrotate, nano, vim) for easier management. It also supports sending to external SMTP servers.
@@ -42,6 +43,7 @@ Currently backs up CouchDB, InfluxDB, MySQL, MongoDB Postgres, Redis, Rethink se
 - [Environment Variables](#environmentvariables)
 - [Maintenance](#maintenance)
 - [Shell Access](#shell-access)
+- [Custom Scripts](#custom-scripts)
 # Prerequisites
@@ -76,16 +78,21 @@ The following directories are used for configuration and can be mapped for persi
 | Directory | Description |
 |-----------|-------------|
 | `/backup` | Backups |
+| `/assets/custom-scripts` | *Optional* Put custom scripts in this directory to execute after backup operations |
 ## Environment Variables
+*If you are trying to backup a database that doesn't have a user or a password (you should!) make sure you set `CONTAINER_ENABLE_DOCKER_SECRETS=FALSE`*
 Along with the Environment Variables from the [Base image](https://hub.docker.com/r/tiredofit/alpine), below is the complete list of available options that can be used to customize your installation.
 | Parameter | Description |
 |-----------|-------------|
-| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, or none `NONE` - Default `GZ` |
+| `BACKUP_LOCATION` | Backup to `FILESYSTEM` or `S3` compatible services like S3, Minio, Wasabi - Default `FILESYSTEM` |
+| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` - Default `GZ` |
+| `COMPRESSION_LEVEL` | Numerical value of what level of compression to use, most allow `1` to `9` except for `ZSTD` which allows for `1` to `19` - Default `3` |
 | `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `rethink` |
 | `DB_HOST` | Server Hostname e.g. `mariadb` |
 | `DB_NAME` | Schema Name e.g. `database` |
@@ -98,14 +105,30 @@ Along with the Environment Variables from the [Base image](https://hub.docker.co
 | | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half |
 | `DB_CLEANUP_TIME` | Value in minutes to delete old backups (only fired when dump freqency fires). 1440 would delete anything above 1 day old. You don't need to set this variable if you want to hold onto everything. |
 | `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed. |
+| `EXTRA_OPTS` | If you need to pass extra arguments to the backup command, add them here e.g. "--extra-command" |
 | `MD5` | Generate MD5 Sum in Directory, `TRUE` or `FALSE` - Default `TRUE` |
 | `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` - Default `TRUE` |
 | `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create Seperate DB Backups instead of all in one. - Default `FALSE` |
+**Backing Up to S3 Compatible Services**
+If `BACKUP_LOCATION` = `S3` then the following options are used.
+| Parameter | Description |
+|-----------|-------------|
+| `S3_BUCKET` | S3 Bucket name e.g. 'mybucket' |
+| `S3_HOSTNAME` | Hostname of S3 Server e.g "s3.amazonaws.com" - You can also include a port if necessary |
+| `S3_KEY_ID` | S3 Key ID |
+| `S3_KEY_SECRET` | S3 Key Secret |
+| `S3_PATH` | S3 Pathname to save to e.g. '`backup`' |
+| `S3_PROTOCOL` | Use either `http` or `https` to access service - Default `https` |
+| `S3_URI_STYLE` | Choose either `VIRTUALHOST` or `PATH` style - Default `VIRTUALHOST` |
 ## Maintenance
-Manual Backups can be perforemd by entering the container and typing `backup-now`
+Manual Backups can be performed by entering the container and typing `backup-now`
 #### Shell Access
@@ -115,3 +138,28 @@ For debugging and maintenance purposes you may want access the containers shell.
 docker exec -it (whatever your container name is e.g.) db-backup bash
 ```
+#### Custom Scripts
+If you want to execute a custom script at the end of backup, you can drop bash scripts with the extension of `.sh` into `/assets/custom-scripts`. See the following example to utilize:
+````bash
+$ cat post-script.sh
+#!/bin/bash
+## Example Post Script
+## $1=DB_TYPE (Type of Backup)
+## $2=DB_HOST (Backup Host)
+## $3=DB_NAME (Name of Database backed up)
+## $4=DATE (Date of Backup)
+## $5=TIME (Time of Backup)
+## $6=BACKUP_FILENAME (Filename of Backup)
+## $7=FILESIZE (Filesize of backup)
+## $8=MD5_RESULT (MD5Sum if enabled)
+echo "${1} Backup Completed on ${2} for ${3} on ${4} ${5}. Filename: ${6} Size: ${7} bytes MD5: ${8}"
+````
+Outputs the following on the console:
+`mysql Backup Completed on example-db for example on 2020-04-22 05:19:10. Filename: mysql_example_example-db_20200422-051910.sql.bz2 Size: 7795 bytes MD5: 952fbaafa30437494fdf3989a662cd40`
+If you wish to change the size value from bytes to megabytes set environment variable `SIZE_VALUE=megabytes`
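
Putting the new table to work, a minimal S3 invocation might look like the sketch below. All values are placeholders, the image tag is assumed from the repository name, and because the run script's sanity checks read `S3_HOST` while this table documents `S3_HOSTNAME`, the sketch sets both:

```bash
# Sketch only: ship backups to an S3-compatible endpoint instead of a volume.
docker run -d --name db-backup \
  -e DB_TYPE=mysql \
  -e DB_HOST=example-db \
  -e DB_USER=root \
  -e DB_PASS=password \
  -e BACKUP_LOCATION=S3 \
  -e S3_HOSTNAME=s3.amazonaws.com \
  -e S3_HOST=s3.amazonaws.com \
  -e S3_BUCKET=mybucket \
  -e S3_PATH=backup \
  -e S3_KEY_ID=AKIAIOSFODNN7EXAMPLE \
  -e S3_KEY_SECRET=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
  tiredofit/db-backup
```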

examples/docker-compose.yml

@@ -20,6 +20,7 @@ services:
       - example-db
     volumes:
       - ./backups:/backup
+      - ./post-script.sh:/assets/custom-scripts/post-script.sh
     environment:
       - DB_TYPE=mariadb
      - DB_HOST=example-db

examples/post-script.sh (new executable file, 13 lines)

@@ -0,0 +1,13 @@
+#!/bin/bash
+## Example Post Script
+## $1=DB_TYPE (Type of Backup)
+## $2=DB_HOST (Backup Host)
+## $3=DB_NAME (Name of Database backed up)
+## $4=DATE (Date of Backup)
+## $5=TIME (Time of Backup)
+## $6=BACKUP_FILENAME (Filename of Backup)
+## $7=FILESIZE (Filesize of backup)
+## $8=MD5_RESULT (MD5Sum if enabled)
+echo "${1} Backup Completed on ${2} for ${3} on ${4} ${5}. Filename: ${6} Size: ${7} bytes MD5: ${8}"
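
Because the hook receives all of its context as positional arguments, it can be dry-run outside the container; the values below are the made-up ones from the README example:

```bash
./post-script.sh mysql example-db example 2020-04-22 05:19:10 \
    mysql_example_example-db_20200422-051910.sql.bz2 7795 \
    952fbaafa30437494fdf3989a662cd40
# -> mysql Backup Completed on example-db for example on 2020-04-22 05:19:10. ...
```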

install/etc/services.available/10-db-backup/run

@@ -1,56 +1,68 @@
 #!/usr/bin/with-contenv bash
+source /assets/functions/00-container
+PROCESS_NAME="db-backup"
 date >/dev/null
 if [ "$1" != "NOW" ]; then
     sleep 10
 fi
-### Set Debug Mode
-if [ "$DEBUG_MODE" = "TRUE" ] || [ "$DEBUG_MODE" = "true" ]; then
-    set -x
-fi
 ### Sanity Test
-if [ ! -n "$DB_TYPE" ]; then
-    echo '** [db-backup] ERROR: No Database Type Selected! '
-    exit 1
-fi
-if [ ! -n "$DB_HOST" ]; then
-    echo '** [db-backup] ERROR: No Database Host Entered! '
-    exit 1
-fi
+sanity_var DB_TYPE "Database Type"
+sanity_var DB_HOST "Database Host"
+file_env 'DB_USER'
+file_env 'DB_PASS'
 ### Set Defaults
+BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
 COMPRESSION=${COMPRESSION:-GZ}
-PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
-DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
+COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
 DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
+DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
 DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backup}
 DBHOST=${DB_HOST}
 DBNAME=${DB_NAME}
 DBPASS=${DB_PASS}
-DBUSER=${DB_USER}
 DBTYPE=${DB_TYPE}
+DBUSER=${DB_USER}
 MD5=${MD5:-TRUE}
+PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
+SIZE_VALUE=${SIZE_VALUE:-"bytes"}
 SPLIT_DB=${SPLIT_DB:-FALSE}
 TMPDIR=/tmp/backups
if [ "BACKUP_TYPE" = "S3" ] || [ "BACKUP_TYPE" = "s3" ] || [ "BACKUP_TYPE" = "MINIO" ] || [ "BACKUP_TYPE" = "minio" ] ; then
S3_PROTOCOL=${S3_PROTOCOL:-"https"}
sanity_var S3_HOST "S3 Host"
sanity_var S3_BUCKET "S3 Bucket"
sanity_var S3_KEY_ID "S3 Key ID"
sanity_var S3_KEY_SECRET "S3 Key Secret"
sanity_var S3_URI_STYLE "S3 URI Style (Virtualhost or Path)"
sanity_var S3_PATH "S3 Path"
file_env 'S3_KEY_ID'
file_env 'S3_KEY_SECRET'
fi
if [ "$1" = "NOW" ]; then if [ "$1" = "NOW" ]; then
DB_DUMP_BEGIN=+0 DB_DUMP_BEGIN=+0
MANUAL=TRUE MANUAL=TRUE
fi fi
### Set Compression Options ### Set Compression Options
if [ "$PARALLEL_COMPRESSION" = "TRUE " ]; then if var_true $PARALLEL_COMPRESSION ; then
BZIP="pbzip2" BZIP="pbzip2 -${COMPRESSION_LEVEL}"
GZIP="pigz" GZIP="pigz -${COMPRESSION_LEVEL}"
XZIP="pixz" XZIP="pixz -${COMPRESSION_LEVEL}"
ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
else else
BZIP="bzip2" BZIP="bzip2 -${COMPRESSION_LEVEL}"
GZIP="gzip" GZIP="gzip -${COMPRESSION_LEVEL}"
XZIP="xz" XZIP="xz -${COMPRESSION_LEVEL} "
ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
fi fi
@@ -74,7 +86,7 @@ fi
 "mysql" | "MYSQL" | "mariadb" | "MARIADB")
     DBTYPE=mysql
     DBPORT=${DB_PORT:-3306}
-    [[ ( -n "${DB_PASS}" ) ]] && MYSQL_PASS_STR=" -p${DBPASS}"
+    [[ ( -n "${DB_PASS}" ) ]] && export MYSQL_PWD=${DBPASS}
     ;;
 "postgres" | "postgresql" | "pgsql" | "POSTGRES" | "POSTGRESQL" | "PGSQL" )
     DBTYPE=pgsql
@@ -104,21 +116,21 @@ function backup_couch() {
 }
 function backup_mysql() {
-    if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
-        DATABASES=`mysql -h $DBHOST -P $DBPORT -u$DBUSER -p$DBPASS --batch -e "SHOW DATABASES;" | grep -v Database|grep -v schema`
+    if var_true $SPLIT_DB ; then
+        DATABASES=`mysql -h ${DBHOST} -P $DBPORT -u$DBUSER --batch -e "SHOW DATABASES;" | grep -v Database|grep -v schema`
         for db in $DATABASES; do
             if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
                 echo "** [db-backup] Dumping database: $db"
                 TARGET=mysql_${db}_${DBHOST}_${now}.sql
-                mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER ${MYSQL_PASS_STR} --databases $db > ${TMPDIR}/${TARGET}
+                mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} --databases $db > ${TMPDIR}/${TARGET}
                 generate_md5
                 compression
                 move_backup
             fi
         done
     else
-        mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER ${MYSQL_PASS_STR} > ${TMPDIR}/${TARGET}
+        mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
         generate_md5
         compression
         move_backup
@@ -145,20 +157,20 @@ function backup_mongo() {
 }
 function backup_pgsql() {
-    if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
+    if var_true $SPLIT_DB ; then
         export PGPASSWORD=${DBPASS}
         DATABASES=`psql -h $DBHOST -U $DBUSER -p ${DBPORT} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' `
         for db in $DATABASES; do
-            echo "** [db-backup] Dumping database: $db"
+            print_info "Dumping database: $db"
             TARGET=pgsql_${db}_${DBHOST}_${now}.sql
-            pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db > ${TMPDIR}/${TARGET}
+            pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
             generate_md5
             compression
             move_backup
         done
     else
         export PGPASSWORD=${DBPASS}
-        pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} > ${TMPDIR}/${TARGET}
+        pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
         generate_md5
         compression
         move_backup
@@ -167,18 +179,18 @@ function backup_pgsql() {
 function backup_redis() {
     TARGET=redis_${db}_${DBHOST}_${now}.rdb
-    echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET}
-    echo "** [db-backup] Dumping Redis - Flushing Redis Cache First"
+    echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET} ${EXTRA_OPTS}
+    print_info "Dumping Redis - Flushing Redis Cache First"
     sleep 10
     try=5
     while [ $try -gt 0 ] ; do
         saved=$(echo 'info Persistence' | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} | awk '/rdb_bgsave_in_progress:0/{print "saved"}')
         ok=$(echo 'info Persistence' | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} | awk '/rdb_last_bgsave_status:ok/{print "ok"}')
         if [[ "$saved" = "saved" ]] && [[ "$ok" = "ok" ]]; then
-            echo "** [db-backup] Redis Backup Complete"
+            print_info "Redis Backup Complete"
         fi
         try=$((try - 1))
-        echo "** [db-backup] Redis Busy - Waiting and retrying in 5 seconds"
+        print_info "Redis Busy - Waiting and retrying in 5 seconds"
         sleep 5
     done
     generate_md5
@@ -188,8 +200,8 @@ function backup_redis() {
 function backup_rethink() {
     TARGET=rethink_${db}_${DBHOST}_${now}.tar.gz
-    echo "** [db-backup] Dumping rethink Database: $db"
-    rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR}
+    print_info "Dumping rethink Database: $db"
+    rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR} ${EXTRA_OPTS}
     move_backup
 }
@@ -201,7 +213,7 @@ function check_availability() {
     while ! (nc -z ${DBHOST} ${DBPORT}) ; do
         sleep 5
         let COUNTER+=5
-        echo "** [db-backup] CouchDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
+        print_warn "CouchDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
     done
     ;;
 "influx" )
@@ -209,7 +221,7 @@ function check_availability() {
     while ! (nc -z ${DBHOST} ${DBPORT}) ; do
         sleep 5
         let COUNTER+=5
-        echo "** [db-backup] InfluxDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
+        print_warn "InfluxDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
     done
     ;;
 "mongo" )
@@ -217,7 +229,7 @@ function check_availability() {
     while ! (nc -z ${DBHOST} ${DBPORT}) ; do
         sleep 5
         let COUNTER+=5
-        echo "** [db-backup] Mongo Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
+        print_warn "Mongo Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
     done
     ;;
 "mysql" )
@@ -230,7 +242,7 @@ function check_availability() {
         :
         break
     fi
-    echo "** [db-backup] MySQL/MariaDB Server "$DBHOST" is not accessible, retrying.. ($COUNTER seconds so far)"
+    print_warn "MySQL/MariaDB Server "$DBHOST" is not accessible, retrying.. ($COUNTER seconds so far)"
     sleep 5
     let COUNTER+=5
 done
@@ -243,7 +255,7 @@ function check_availability() {
 do
     sleep 5
     let COUNTER+=5
-    echo "** [db-backup] Postgres Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
+    print_warn "Postgres Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
 done
 ;;
 "redis" )
@@ -251,7 +263,7 @@ function check_availability() {
 while ! (nc -z ${DBHOST} ${DBPORT}) ; do
     sleep 5
     let COUNTER+=5
-    echo "** [db-backup] Redis Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
+    print_warn "Redis Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
 done
 ;;
 "rethink" )
@@ -259,7 +271,7 @@ function check_availability() {
 while ! (nc -z ${DBHOST} ${DBPORT}) ; do
     sleep 5
     let COUNTER+=5
-    echo "** [db-backup] RethinkDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
+    print_warn "RethinkDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
 done
 ;;
 esac
@@ -279,27 +291,86 @@ function compression() {
         $XZIP ${TMPDIR}/${TARGET}
         TARGET=${TARGET}.xz
         ;;
+    "ZSTD" | "zstd" | "ZST" | "zst" )
+        $ZSTD ${TMPDIR}/${TARGET}
+        TARGET=${TARGET}.zst
+        ;;
     "NONE" | "none" | "FALSE" | "false")
         ;;
     esac
 }
 function generate_md5() {
-    if [ "$MD5" = "TRUE" ] || [ "$MD5" = "true" ] ; then
+    if var_true $MD5 ; then
         cd $TMPDIR
         md5sum ${TARGET} > ${TARGET}.md5
+        MD5VALUE=$(md5sum ${TARGET} | awk '{ print $1}')
     fi
 }
 function move_backup() {
+    case "$SIZE_VALUE" in
+        b | bytes )
+            SIZE_VALUE=1
+            ;;
+        [kK] | [kK][bB] | kilobytes | [mM] | [mM][bB] | megabytes )
+            SIZE_VALUE="-h"
+            ;;
+        *)
+            SIZE_VALUE=1
+            ;;
+    esac
+    if [ "$SIZE_VALUE" = "1" ] ; then
+        FILESIZE=$(stat -c%s "${DB_DUMP_TARGET}/${TARGET}")
+    else
+        FILESIZE=$(du -h "${DB_DUMP_TARGET}/${TARGET}" | awk '{ print $1}')
+    fi
+    case "${BACKUP_LOCATION}" in
+    "FILE" | "file" | "filesystem" | "FILESYSTEM" )
         mkdir -p ${DB_DUMP_TARGET}
         mv ${TMPDIR}/*.md5 ${DB_DUMP_TARGET}/
         mv ${TMPDIR}/${TARGET} ${DB_DUMP_TARGET}/${TARGET}
+        ;;
+    "S3" | "s3" | "MINIO" | "minio" )
+        s3_content_type="application/octet-stream"
+        if [ "$S3_URI_STYLE" = "VIRTUALHOST" ] || [ "$S3_URI_STYLE" = "VHOST" ] || [ "$S3_URI_STYLE" = "virtualhost" ] || [ "$S3_URI_STYLE" = "vhost" ] ; then
+            s3_url="${S3_BUCKET}.${S3_HOST}"
+        else
+            s3_url="${S3_HOST}/${S3_BUCKET}"
+        fi
+        if var_true $MD5 ; then
+            s3_date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
+            s3_md5="$(libressl md5 -binary < "${TMPDIR}/${TARGET}.md5" | base64)"
+            sig="$(printf "PUT\n$s3_md5\n${s3_content_type}\n$s3_date\n/$S3_BUCKET/$S3_PATH/${TARGET}.md5" | libressl sha1 -binary -hmac "${S3_KEY_SECRET}" | base64)"
+            print_debug "Uploading ${TARGET}.md5 to S3"
+            curl -T "${TMPDIR}/${TARGET}.md5" ${S3_PROTOCOL}://${s3_url}/${S3_PATH}/${TARGET}.md5 \
+                -H "Date: $s3_date" \
+                -H "Authorization: AWS ${S3_KEY_ID}:$sig" \
+                -H "Content-Type: ${s3_content_type}" \
+                -H "Content-MD5: ${s3_md5}"
+        fi
+        s3_date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
+        s3_md5="$(libressl md5 -binary < "${TMPDIR}/${TARGET}" | base64)"
+        sig="$(printf "PUT\n$s3_md5\n${s3_content_type}\n$s3_date\n/$S3_BUCKET/$S3_PATH/${TARGET}" | libressl sha1 -binary -hmac "${S3_KEY_SECRET}" | base64)"
+        print_debug "Uploading ${TARGET} to S3"
+        curl -T ${TMPDIR}/${TARGET} ${S3_PROTOCOL}://${s3_url}/${S3_PATH}/${TARGET} \
+            -H "Date: $s3_date" \
+            -H "Authorization: AWS ${S3_KEY_ID}:$sig" \
+            -H "Content-Type: ${s3_content_type}" \
+            -H "Content-MD5: ${s3_md5}"
+        rm -rf ${TMPDIR}/*.md5
+        rm -rf ${TMPDIR}/${TARGET}
+        ;;
+    esac
 }
 ### Container Startup
-echo '** [db-backup] Initialized at at '$(date)
+print_info "Initialized on `date`"
 ### Wait for Next time to start backup
 current_time=$(date +"%s")
@@ -325,6 +396,8 @@ echo '** [db-backup] Initialized at at '$(date)
 ### Define Target name
 now=$(date +"%Y%m%d-%H%M%S")
+now_time=$(date +"%H:%M:%S")
+now_date=$(date +"%Y-%m-%d")
 TARGET=${DBTYPE}_${DBNAME}_${DBHOST}_${now}.sql
 ### Take a Dump
@@ -360,18 +433,29 @@ echo '** [db-backup] Initialized at at '$(date)
 esac
 ### Zabbix
-if [ "$ENABLE_ZABBIX" = "TRUE" ] || [ "$ENABLE_ZABBIX" = "true" ]; then
-    zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o `stat -c%s ${DB_DUMP_TARGET}/${TARGET}` silent
-    zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o `date -r ${DB_DUMP_TARGET}/${TARGET} +'%s'` silent
+if var_true $ENABLE_ZABBIX ; then
+    zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o `stat -c%s ${DB_DUMP_TARGET}/${TARGET}`
+    zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o `date -r ${DB_DUMP_TARGET}/${TARGET} +'%s'`
 fi
 ### Automatic Cleanup
 if [[ -n "$DB_CLEANUP_TIME" ]]; then
-    find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "$DBTYPE_$DBNAME_*.*" -exec rm {} \;
+    find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "*" -exec rm {} \;
 fi
+### Post Backup Custom Script Support
+if [ -d /assets/custom-scripts/ ] ; then
+    print_info "Found Custom Scripts to Execute"
+    for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
+        print_info "Running Script ${f}"
+        ## script DB_TYPE DB_HOST DB_NAME DATE TIME BACKUP_FILENAME FILESIZE MD5_VALUE
+        chmod +x ${f}
+        ${f} "${DBTYPE}" "${DBHOST}" "${DBNAME}" "${now_date}" "${now_time}" "${TARGET}" "${FILESIZE}" "${MD5VALUE}"
+    done
+fi
 ### Go back to Sleep until next Backup time
-if [ "$MANUAL" = "TRUE" ]; then
+if var_true $MANUAL ; then
     exit 1;
 else
     sleep $(($DB_DUMP_FREQ*60))
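
The rewrite leans on helpers sourced from `/assets/functions/00-container` in the tiredofit/alpine base image: `sanity_var`, `file_env`, `var_true`, and the `print_*` loggers. Their real implementations live in the base image; the sketch below only approximates their behavior so this diff can be read on its own:

```bash
# Rough, assumed equivalents of the base-image helpers (not the actual code).
var_true() {   # true when the value is TRUE/true, the values this image documents
    [ "${1}" = "TRUE" ] || [ "${1}" = "true" ]
}

sanity_var() { # abort startup when a required variable is unset
    local var="${1}" description="${2}"
    if [ -z "${!var}" ]; then
        echo "** [${PROCESS_NAME}] ERROR: No ${description} Entered! (set \$${var})"
        exit 1
    fi
}

file_env() {   # Docker secrets support: accept VAR or VAR_FILE, not both
    local var="${1}" file_var="${1}_FILE"
    if [ -n "${!var:-}" ] && [ -n "${!file_var:-}" ]; then
        echo "** [${PROCESS_NAME}] ERROR: ${var} and ${file_var} are mutually exclusive"
        exit 1
    fi
    if [ -n "${!file_var:-}" ]; then
        export "${var}"="$(cat "${!file_var}")"
    fi
}

print_info() { echo "** [${PROCESS_NAME}] ${1}"; }
print_warn() { echo "** [${PROCESS_NAME}] WARN: ${1}"; }
print_debug() {
    if [ "${DEBUG_MODE}" = "TRUE" ] || [ "${DEBUG_MODE}" = "true" ]; then
        echo "** [${PROCESS_NAME}] DEBUG: ${1}"
    fi
}
```

This is also why the old inline `### Set Debug Mode` block could be dropped: the sourced functions take over argument checking, secrets handling, and logging.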

install/usr/local/bin/backup-now

@@ -1,4 +1,4 @@
 #!/usr/bin/with-contenv bash
 echo '** Performing Manual Backup'
-/etc/s6/services/10-db-backup/run NOW
+/etc/services.available/10-db-backup/run NOW
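
With the service path corrected to `services.available`, the manual trigger works again; from the host that is simply:

```bash
# Container name assumed from the examples above.
docker exec -it db-backup backup-now
```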