Compare commits

..

4 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Dave Conroy | 955a08a21b | Release 1.23.0 - See CHANGELOG.md | 2020-06-15 09:44:07 -07:00 |
| Dave Conroy | bf97c3ab97 | Update README.md | 2020-06-10 05:48:03 -07:00 |
| Dave Conroy | 11969da1ea | Release 1.22.0 - See CHANGELOG.md | 2020-06-10 05:45:49 -07:00 |
| Dave Conroy | 7998156576 | Release 1.21.3 - See CHANGELOG.md | 2020-06-10 05:19:24 -07:00 |
5 changed files with 56 additions and 24 deletions

File: CHANGELOG.md

```diff
@@ -1,3 +1,22 @@
+## 1.23.0 2020-06-15 <dave at tiredofit dot ca>
+### Added
+- Add zstd compression support
+- Add choice of compression level
+
+## 1.22.0 2020-06-10 <dave at tiredofit dot ca>
+### Added
+- Added EXTRA_OPTS variable to all backup commands to pass extra arguments
+
+## 1.21.3 2020-06-10 <dave at tiredofit dot ca>
+### Changed
+- Fix `backup-now` manual script due to services.available change
+
 ## 1.21.2 2020-06-08 <dave at tiredofit dot ca>
 ### Added
```

File: Dockerfile

```diff
@@ -4,7 +4,7 @@ LABEL maintainer="Dave Conroy (dave at tiredofit dot ca)"
 ### Set Environment Variables
 ENV ENABLE_CRON=FALSE \
     ENABLE_SMTP=FALSE \
-    ENABLE_ZABBIX=FALSE \
+    ENABLE_ZABBIX=TRUE \
     ZABBIX_HOSTNAME=db-backup

 ### Dependencies
@@ -30,6 +30,7 @@ RUN set -ex && \
        postgresql-client \
        redis \
        xz \
+       zstd \
        && \
        \
    apk add \
```

File: README.md

```diff
@@ -17,7 +17,7 @@ Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis, Rethink s
 * backup all databases
 * choose to have an MD5 sum after backup for verification
 * delete old backups after specific amount of time
-* choose compression type (none, gz, bz, xz)
+* choose compression type (none, gz, bz, xz, zstd)
 * connect to any container running on the same system
 * select how often to run a dump
 * select when to start the first dump, whether time of day or relative to container start time
```
```diff
@@ -83,13 +83,16 @@ The following directories are used for configuration and can be mapped for persi
 ## Environment Variables
+
+*If you are trying to back up a database that doesn't have a user or a password (you should!) make sure you set `CONTAINER_ENABLE_DOCKER_SECRETS=FALSE`*
+
 Along with the Environment Variables from the [Base image](https://hub.docker.com/r/tiredofit/alpine), below is the complete list of available options that can be used to customize your installation.

 | Parameter | Description |
 |-----------|-------------|
 | `BACKUP_LOCATION` | Backup to `FILESYSTEM` or `S3` compatible services like S3, Minio, Wasabi - Default `FILESYSTEM` |
-| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, or none `NONE` - Default `GZ` |
+| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD`, or none `NONE` - Default `GZ` |
+| `COMPRESSION_LEVEL` | Numerical value of what level of compression to use; most allow `1` to `9`, except for `ZSTD`, which allows `1` to `19` - Default `3` |
 | `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `rethink` |
 | `DB_HOST` | Server Hostname e.g. `mariadb` |
 | `DB_NAME` | Schema Name e.g. `database` |
```
```diff
@@ -102,10 +105,12 @@ Along with the Environment Variables from the [Base image](https://hub.docker.co
 | | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half |
 | `DB_CLEANUP_TIME` | Value in minutes to delete old backups (only fired when dump frequency fires). 1440 would delete anything above 1 day old. You don't need to set this variable if you want to hold onto everything. |
 | `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed. |
+| `EXTRA_OPTS` | If you need to pass extra arguments to the backup command, add them here e.g. `--extra-command` |
 | `MD5` | Generate MD5 Sum in Directory, `TRUE` or `FALSE` - Default `TRUE` |
 | `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` - Default `TRUE` |
 | `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create separate DB Backups instead of all in one. - Default `FALSE` |

 **Backing Up to S3 Compatible Services**

 If `BACKUP_LOCATION` = `S3` then the following options are used.
```
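The new variables slot into a normal container launch alongside the existing ones. A hypothetical invocation is sketched below; the image name (`tiredofit/db-backup`), host paths, and credential values are assumptions for illustration, not taken from this diff:

```shell
# Hypothetical example: nightly MySQL backups compressed with zstd level 10,
# passing one extra flag through to mysqldump via EXTRA_OPTS.
docker run -d --name db-backup \
  -e DB_TYPE=mysql \
  -e DB_HOST=mariadb \
  -e DB_NAME=mydb \
  -e DB_USER=root \
  -e DB_PASS=secret \
  -e COMPRESSION=ZSTD \
  -e COMPRESSION_LEVEL=10 \
  -e EXTRA_OPTS="--single-transaction" \
  -v /srv/backups:/backup \
  tiredofit/db-backup
```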

File: 10-db-backup/run (backup service script)

```diff
@@ -13,12 +13,14 @@ fi
 ### Sanity Test
 sanity_var DB_TYPE "Database Type"
 sanity_var DB_HOST "Database Host"
 file_env 'DB_USER'
 file_env 'DB_PASS'

 ### Set Defaults
 BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
 COMPRESSION=${COMPRESSION:-GZ}
+COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
 DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
 DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
 DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backup}
```
```diff
@@ -43,7 +45,6 @@ if [ "BACKUP_TYPE" = "S3" ] || [ "BACKUP_TYPE" = "s3" ] || [ "BACKUP_TYPE" = "MI
     sanity_var S3_PATH "S3 Path"
     file_env 'S3_KEY_ID'
     file_env 'S3_KEY_SECRET'
 fi

 if [ "$1" = "NOW" ]; then
```
```diff
@@ -53,13 +54,15 @@ fi
 ### Set Compression Options
 if var_true $PARALLEL_COMPRESSION ; then
-    BZIP="pbzip2"
-    GZIP="pigz"
-    XZIP="pixz"
+    BZIP="pbzip2 -${COMPRESSION_LEVEL}"
+    GZIP="pigz -${COMPRESSION_LEVEL}"
+    XZIP="pixz -${COMPRESSION_LEVEL}"
+    ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
 else
-    BZIP="bzip2"
-    GZIP="gzip"
-    XZIP="xz"
+    BZIP="bzip2 -${COMPRESSION_LEVEL}"
+    GZIP="gzip -${COMPRESSION_LEVEL}"
+    XZIP="xz -${COMPRESSION_LEVEL}"
+    ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
 fi
```
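The compressor selection above can be exercised on its own. The sketch below reproduces it, with the script's `var_true` helper (not shown in this diff) replaced by a plain string comparison so the snippet is self-contained:

```shell
#!/usr/bin/env bash
# Sketch of the compressor-command selection: when PARALLEL_COMPRESSION is
# enabled, the multi-core tools (pbzip2/pigz/pixz) are used instead of the
# single-threaded ones, and every command carries the chosen level flag.
PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-3}

if [ "${PARALLEL_COMPRESSION}" = "TRUE" ]; then
  BZIP="pbzip2 -${COMPRESSION_LEVEL}"
  GZIP="pigz -${COMPRESSION_LEVEL}"
  XZIP="pixz -${COMPRESSION_LEVEL}"
  ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
else
  BZIP="bzip2 -${COMPRESSION_LEVEL}"
  GZIP="gzip -${COMPRESSION_LEVEL}"
  XZIP="xz -${COMPRESSION_LEVEL}"
  ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
fi

echo "${GZIP}"   # e.g. "pigz -3" with the defaults
```

Note that `zstd` is the same binary in both branches (it is multi-threaded via its own flags), and `--rm` makes it delete the source file after compressing, matching the behaviour of gzip/bzip2/xz.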
```diff
@@ -120,14 +123,14 @@ function backup_mysql() {
         if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
           echo "** [db-backup] Dumping database: $db"
           TARGET=mysql_${db}_${DBHOST}_${now}.sql
-          mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER --databases $db > ${TMPDIR}/${TARGET}
+          mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} --databases $db > ${TMPDIR}/${TARGET}
           generate_md5
           compression
           move_backup
         fi
       done
     else
-      mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER > ${TMPDIR}/${TARGET}
+      mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
       generate_md5
       compression
       move_backup
```
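Because `${EXTRA_OPTS}` is expanded unquoted in the commands above, each whitespace-separated token in it becomes its own argument to the dump command. A self-contained illustration (the mysqldump flags here are examples, not taken from this diff):

```shell
#!/usr/bin/env bash
# Show how an unquoted ${EXTRA_OPTS} expansion word-splits into separate
# argv entries; no command is actually executed, we only build the argv.
EXTRA_OPTS="--single-transaction --quick"

set -- mysqldump -h mariadb -uroot ${EXTRA_OPTS} --databases mydb

echo "argc=$#"    # 8 words in total
echo "arg5=$5"    # --single-transaction landed as its own argument
```

This also means `EXTRA_OPTS` cannot carry a single argument containing spaces; each token is passed separately.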
```diff
@@ -160,14 +163,14 @@ function backup_pgsql() {
       for db in $DATABASES; do
         print_info "Dumping database: $db"
         TARGET=pgsql_${db}_${DBHOST}_${now}.sql
-        pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db > ${TMPDIR}/${TARGET}
+        pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
         generate_md5
         compression
         move_backup
       done
     else
       export PGPASSWORD=${DBPASS}
-      pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} > ${TMPDIR}/${TARGET}
+      pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
       generate_md5
       compression
       move_backup
```
```diff
@@ -176,7 +179,7 @@ function backup_pgsql() {
 function backup_redis() {
     TARGET=redis_${db}_${DBHOST}_${now}.rdb
-    echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET}
+    echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET} ${EXTRA_OPTS}
     print_info "Dumping Redis - Flushing Redis Cache First"
     sleep 10
     try=5
```
```diff
@@ -198,7 +201,7 @@ function backup_redis() {
 function backup_rethink() {
     TARGET=rethink_${db}_${DBHOST}_${now}.tar.gz
     print_info "Dumping rethink Database: $db"
-    rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR}
+    rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR} ${EXTRA_OPTS}
     move_backup
 }
```
```diff
@@ -288,6 +291,10 @@ function compression() {
         $XZIP ${TMPDIR}/${TARGET}
         TARGET=${TARGET}.xz
         ;;
+      "ZSTD" | "zstd" | "ZST" | "zst" )
+        $ZSTD ${TMPDIR}/${TARGET}
+        TARGET=${TARGET}.zst
+        ;;
       "NONE" | "none" | "FALSE" | "false")
         ;;
     esac
```
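The new case arm maps the `ZSTD` setting to a `.zst` suffix after running the compressor. Below is a standalone sketch of the suffix mapping in `compression()`, with the compressor invocation stubbed out so it runs anywhere; the spellings accepted for the non-ZSTD arms are assumptions patterned on the arms visible in this diff:

```shell
#!/usr/bin/env bash
# Standalone sketch of compression()'s suffix handling. The real function
# also invokes $GZIP/$BZIP/$XZIP/$ZSTD on the file; here we only track the
# filename so the logic is testable without the compressors installed.
COMPRESSION=${COMPRESSION:-ZSTD}
TARGET="mysql_mydb_mariadb_20200615.sql"

case "${COMPRESSION}" in
  "GZ"   | "gz" )                        TARGET="${TARGET}.gz" ;;
  "BZ"   | "bz" )                        TARGET="${TARGET}.bz2" ;;
  "XZ"   | "xz" )                        TARGET="${TARGET}.xz" ;;
  "ZSTD" | "zstd" | "ZST" | "zst" )      TARGET="${TARGET}.zst" ;;
  "NONE" | "none" | "FALSE" | "false" )  ;;  # leave the dump uncompressed
esac

echo "${TARGET}"
```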

File: backup-now (manual backup script)

```diff
@@ -1,4 +1,4 @@
 #!/usr/bin/with-contenv bash

 echo '** Performing Manual Backup'
-/etc/s6/services/10-db-backup/run NOW
+/etc/services.available/10-db-backup/run NOW
```
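With the path corrected to the `services.available` layout, the `backup-now` wrapper can again trigger an immediate dump. A hypothetical invocation from the host, assuming the script is on the container's PATH and the container is named `db-backup`:

```shell
# Hypothetical manual trigger; "db-backup" is an assumed container name.
docker exec db-backup backup-now
```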