Compare commits

...

8 Commits

Author SHA1 Message Date
Dave Conroy 11969da1ea Release 1.22.0 - See CHANGELOG.md 2020-06-10 05:45:49 -07:00
Dave Conroy 7998156576 Release 1.21.3 - See CHANGELOG.md 2020-06-10 05:19:24 -07:00
Dave Conroy 6655d5a12a Release 1.21.2 - See CHANGELOG.md 2020-06-08 21:29:54 -07:00
Dave Conroy bd141cc865 Release 1.21.1 - See CHANGELOG.md 2020-06-04 05:59:29 -07:00
Dave Conroy abf2a877f7 Fix example 2020-06-03 20:41:17 -07:00
Dave Conroy 3115cb3440 Release 1.21.0 - See CHANGELOG.md 2020-06-03 05:55:46 -07:00
Dave Conroy 859ce5ffa3 Release 1.20.1 - See CHANGELOG.md 2020-04-24 15:45:52 -07:00
Dave Conroy 4d1577e553 Fix malformed backtick 2020-04-22 14:39:09 -07:00
5 changed files with 137 additions and 30 deletions

View File

@@ -1,3 +1,43 @@
## 1.22.0 2020-06-10 <dave at tiredofit dot ca>
### Added
- Added EXTRA_OPTS variable to all backup commands to pass extra arguments
## 1.21.3 2020-06-10 <dave at tiredofit dot ca>
### Changed
- Fix `backup-now` manual script due to services.available change
## 1.21.2 2020-06-08 <dave at tiredofit dot ca>
### Added
- Change to support tiredofit/alpine base image 5.0.0
## 1.21.1 2020-06-04 <dave at tiredofit dot ca>
### Changed
- Bugfix to initialization routine
## 1.21.0 2020-06-03 <dave at tiredofit dot ca>
### Added
- Add S3 Compatible Storage Support
### Changed
- Switch some variables to support tiredofit/alpine base image better
- Fix issue with parallel compression not working correctly
## 1.20.1 2020-04-24 <dave at tiredofit dot ca>
### Changed
- Fix Auto Cleanup routines when using `root` as username
## 1.20.0 2020-04-22 <dave at tiredofit dot ca>
### Added
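
The new `EXTRA_OPTS` variable from 1.22.0 is simply appended to the dump commands (`mysqldump`, `pg_dump`, `redis-cli`, `rethinkdb dump`) further down in this changeset. A minimal sketch of how it might be used; the container name and the extra option are illustrative only, not taken from the diff:

```bash
# Hypothetical example: pass --single-transaction through to mysqldump
docker run -d --name db-backup \
  -e DB_TYPE=mysql \
  -e DB_HOST=mariadb \
  -e DB_USER=root \
  -e DB_PASS=examplerootpassword \
  -e EXTRA_OPTS="--single-transaction" \
  tiredofit/db-backup
```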

View File

@@ -12,7 +12,7 @@ This will build a container for backing up multiple type of DB Servers
Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis, Rethink servers.
- * dump to local filesystem
+ * dump to local filesystem or backup to S3 Compatible services
* select database user and password
* backup all databases
* choose to have an MD5 sum after backup for verification
@@ -78,7 +78,7 @@ The following directories are used for configuration and can be mapped for persi
| Directory | Description |
|-----------|-------------|
| `/backup` | Backups |
- | `/assets/custom-scripts | *Optional* Put custom scripts in this directory to execute after backup operations` |
+ | `/assets/custom-scripts` | *Optional* Put custom scripts in this directory to execute after backup operations |
## Environment Variables
@@ -88,6 +88,7 @@ Along with the Environment Variables from the [Base image](https://hub.docker.co
| Parameter | Description |
|-----------|-------------|
| `BACKUP_LOCATION` | Backup to `FILESYSTEM` or `S3` compatible services like S3, Minio, Wasabi - Default `FILESYSTEM`
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, or none `NONE` - Default `GZ`
| `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `rethink`
| `DB_HOST` | Server Hostname e.g. `mariadb`
@@ -105,6 +106,19 @@ Along with the Environment Variables from the [Base image](https://hub.docker.co
| `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` - Default `TRUE` |
| `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create Separate DB Backups instead of all in one. - Default `FALSE` |
**Backing Up to S3 Compatible Services**
If `BACKUP_LOCATION` = `S3` then the following options are used.
| Parameter | Description |
|-----------|-------------|
| `S3_BUCKET` | S3 Bucket name e.g. 'mybucket' |
| `S3_HOSTNAME` | Hostname of S3 Server e.g "s3.amazonaws.com" - You can also include a port if necessary
| `S3_KEY_ID` | S3 Key ID |
| `S3_KEY_SECRET` | S3 Key Secret |
| `S3_PATH` | S3 Pathname to save to e.g. '`backup`' |
| `S3_PROTOCOL` | Use either `http` or `https` to access service - Default `https` |
| `S3_URI_STYLE` | Choose either `VIRTUALHOST` or `PATH` style - Default `VIRTUALHOST`
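
Putting the new settings together, a hypothetical `docker run` sketch for shipping backups to an S3-compatible endpoint. All values are placeholders; note that the table above documents `S3_HOSTNAME` while the script in this changeset reads `S3_HOST`, so verify which name your image version expects:

```bash
docker run -d --name db-backup \
  -e DB_TYPE=mysql \
  -e DB_HOST=mariadb \
  -e DB_USER=root \
  -e DB_PASS=examplerootpassword \
  -e BACKUP_LOCATION=S3 \
  -e S3_HOST=s3.amazonaws.com \
  -e S3_BUCKET=mybucket \
  -e S3_PATH=backup \
  -e S3_KEY_ID=AKIAEXAMPLE \
  -e S3_KEY_SECRET=examplesecret \
  tiredofit/db-backup
```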
## Maintenance

View File

@@ -6,7 +6,6 @@ services:
image: mariadb:latest
volumes:
- ./db:/var/lib/mysql
- - ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- MYSQL_ROOT_PASSWORD=examplerootpassword
- MYSQL_DATABASE=example
@@ -21,6 +20,7 @@ services:
- example-db
volumes:
- ./backups:/backup
+ - ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- DB_TYPE=mariadb
- DB_HOST=example-db
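
The compose fix moves the `post-script.sh` mount from the database service to the backup service, since `/assets/custom-scripts` is only scanned inside the backup container. A minimal, hypothetical post-backup script for that mount:

```bash
#!/usr/bin/env bash
# Hypothetical custom script: everything in /assets/custom-scripts runs after each backup pass
echo "$(date) - backup run finished" >> /backup/post-script.log
```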

View File

@@ -1,6 +1,7 @@
#!/usr/bin/with-contenv bash
- for s in /assets/functions/*; do source $s; done
+ source /assets/functions/00-container
PROCESS_NAME="db-backup"
date >/dev/null
@@ -16,28 +17,42 @@ file_env 'DB_USER'
file_env 'DB_PASS'
### Set Defaults
+ BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
COMPRESSION=${COMPRESSION:-GZ}
- PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
- DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
+ DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backup}
DBHOST=${DB_HOST}
DBNAME=${DB_NAME}
DBPASS=${DB_PASS}
- DBUSER=${DB_USER}
DBTYPE=${DB_TYPE}
+ DBUSER=${DB_USER}
MD5=${MD5:-TRUE}
+ PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
SIZE_VALUE=${SIZE_VALUE:-"bytes"}
SPLIT_DB=${SPLIT_DB:-FALSE}
TMPDIR=/tmp/backups
if [ "BACKUP_TYPE" = "S3" ] || [ "BACKUP_TYPE" = "s3" ] || [ "BACKUP_TYPE" = "MINIO" ] || [ "BACKUP_TYPE" = "minio" ] ; then
S3_PROTOCOL=${S3_PROTOCOL:-"https"}
sanity_var S3_HOST "S3 Host"
sanity_var S3_BUCKET "S3 Bucket"
sanity_var S3_KEY_ID "S3 Key ID"
sanity_var S3_KEY_SECRET "S3 Key Secret"
sanity_var S3_URI_STYLE "S3 URI Style (Virtualhost or Path)"
sanity_var S3_PATH "S3 Path"
file_env 'S3_KEY_ID'
file_env 'S3_KEY_SECRET'
fi
if [ "$1" = "NOW" ]; then if [ "$1" = "NOW" ]; then
DB_DUMP_BEGIN=+0 DB_DUMP_BEGIN=+0
MANUAL=TRUE MANUAL=TRUE
fi fi
### Set Compression Options ### Set Compression Options
if [ "$PARALLEL_COMPRESSION" = "TRUE " ]; then if var_true $PARALLEL_COMPRESSION ; then
BZIP="pbzip2" BZIP="pbzip2"
GZIP="pigz" GZIP="pigz"
XZIP="pixz" XZIP="pixz"
@@ -98,21 +113,21 @@ function backup_couch() {
}
function backup_mysql() {
- if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
+ if var_true $SPLIT_DB ; then
DATABASES=`mysql -h ${DBHOST} -P $DBPORT -u$DBUSER --batch -e "SHOW DATABASES;" | grep -v Database|grep -v schema`
for db in $DATABASES; do
if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
echo "** [db-backup] Dumping database: $db"
TARGET=mysql_${db}_${DBHOST}_${now}.sql
- mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER --databases $db > ${TMPDIR}/${TARGET}
+ mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} --databases $db > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
done
else
- mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER > ${TMPDIR}/${TARGET}
+ mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
@@ -139,20 +154,20 @@ function backup_mongo() {
}
function backup_pgsql() {
- if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
+ if var_true $SPLIT_DB ; then
export PGPASSWORD=${DBPASS}
DATABASES=`psql -h $DBHOST -U $DBUSER -p ${DBPORT} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' `
for db in $DATABASES; do
print_info "Dumping database: $db"
TARGET=pgsql_${db}_${DBHOST}_${now}.sql
- pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db > ${TMPDIR}/${TARGET}
+ pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
done
else
export PGPASSWORD=${DBPASS}
- pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} > ${TMPDIR}/${TARGET}
+ pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
@@ -161,7 +176,7 @@ function backup_pgsql() {
function backup_redis() {
TARGET=redis_${db}_${DBHOST}_${now}.rdb
- echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET}
+ echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET} ${EXTRA_OPTS}
print_info "Dumping Redis - Flushing Redis Cache First"
sleep 10
try=5
@@ -183,7 +198,7 @@ function backup_redis() {
function backup_rethink() {
TARGET=rethink_${db}_${DBHOST}_${now}.tar.gz
print_info "Dumping rethink Database: $db"
- rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR}
+ rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR} ${EXTRA_OPTS}
move_backup
}
@@ -262,16 +277,16 @@ function check_availability() {
function compression() {
case "$COMPRESSION" in
"GZ" | "gz" | "gzip" | "GZIP")
$GZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.gz
;;
"BZ" | "bz" | "bzip2" | "BZIP2" | "bzip" | "BZIP" | "bz2" | "BZ2")
$BZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.bz2
;;
"XZ" | "xz" | "XZIP" | "xzip" )
$XZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.xz
;;
"NONE" | "none" | "FALSE" | "false")
;;
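
When `PARALLEL_COMPRESSION` is enabled, the script swaps in `pbzip2`, `pigz`, and `pixz`, which are multi-core drop-in replacements for `bzip2`, `gzip`, and `xz`. A quick illustration, assuming the tools are installed and using a placeholder file name:

```bash
# pigz behaves like gzip but spreads compression across CPU cores
pigz -p "$(nproc)" mysql_example_mariadb_20200610.sql   # produces .sql.gz using all cores
```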
@@ -279,7 +294,7 @@ function compression() {
}
function generate_md5() {
- if [ "$MD5" = "TRUE" ] || [ "$MD5" = "true" ] ; then
+ if var_true $MD5 ; then
cd $TMPDIR
md5sum ${TARGET} > ${TARGET}.md5
MD5VALUE=$(md5sum ${TARGET} | awk '{ print $1}')
@@ -287,9 +302,6 @@ fi
}
function move_backup() {
- mkdir -p ${DB_DUMP_TARGET}
- mv ${TMPDIR}/*.md5 ${DB_DUMP_TARGET}/
- mv ${TMPDIR}/${TARGET} ${DB_DUMP_TARGET}/${TARGET}
case "$SIZE_VALUE" in
"b" | "bytes" )
SIZE_VALUE=1
@@ -306,6 +318,47 @@ function move_backup() {
else
FILESIZE=$(du -h "${DB_DUMP_TARGET}/${TARGET}" | awk '{ print $1}')
fi
case "${BACKUP_LOCATION}" in
"FILE" | "file" | "filesystem" | "FILESYSTEM" )
mkdir -p ${DB_DUMP_TARGET}
mv ${TMPDIR}/*.md5 ${DB_DUMP_TARGET}/
mv ${TMPDIR}/${TARGET} ${DB_DUMP_TARGET}/${TARGET}
;;
"S3" | "s3" | "MINIO" | "minio" )
s3_content_type="application/octet-stream"
if [ "$S3_URI_STYLE" = "VIRTUALHOST" ] || [ "$S3_URI_STYLE" = "VHOST" ] [ "$S3_URI_STYLE" = "virtualhost" ] [ "$S3_URI_STYLE" = "vhost" ] ; then
s3_url="${S3_BUCKET}.${S3_HOST}"
else
s3_url="${S3_HOST}/${S3_BUCKET}"
fi
if var_true $MD5 ; then
s3_date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
s3_md5="$(libressl md5 -binary < "${TMPDIR}/${TARGET}.md5" | base64)"
sig="$(printf "PUT\n$s3_md5\n${s3_content_type}\n$s3_date\n/$S3_BUCKET/$S3_PATH/${TARGET}.md5" | libressl sha1 -binary -hmac "${S3_KEY_SECRET}" | base64)"
print_debug "Uploading ${TARGET}.md5 to S3"
curl -T "${TMPDIR}/${TARGET}.md5" ${S3_PROTOCOL}://${s3_url}/${S3_PATH}/${TARGET}.md5 \
-H "Date: $date" \
-H "Authorization: AWS ${S3_KEY_ID}:$sig" \
-H "Content-Type: ${s3_content_type}" \
-H "Content-MD5: ${s3_md5}"
fi
s3_date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
s3_md5="$(libressl md5 -binary < "${TMPDIR}/${TARGET}" | base64)"
sig="$(printf "PUT\n$s3_md5\n${s3_content_type}\n$s3_date\n/$S3_BUCKET/$S3_PATH/${TARGET}" | libressl sha1 -binary -hmac "${S3_KEY_SECRET}" | base64)"
print_debug "Uploading ${TARGET} to S3"
curl -T ${TMPDIR}/${TARGET} ${S3_PROTOCOL}://${s3_url}/${S3_PATH}/${TARGET} \
-H "Date: $s3_date" \
-H "Authorization: AWS ${S3_KEY_ID}:$sig" \
-H "Content-Type: ${s3_content_type}" \
-H "Content-MD5: ${s3_md5}"
rm -rf ${TMPDIR}/*.md5
rm -rf ${TMPDIR}/${TARGET}
;;
esac
}
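
The upload above is a hand-rolled AWS Signature Version 2 request: the string to sign is `PUT\n<Content-MD5>\n<Content-Type>\n<Date>\n/<bucket>/<path>/<file>`, HMAC-SHA1'd with the secret key, base64-encoded, and sent as `Authorization: AWS <key id>:<signature>`. A standalone sketch of the same signing step, using placeholder values and `openssl` in place of the container's `libressl` binary:

```bash
#!/usr/bin/env bash
# Hypothetical standalone reproduction of the Signature v2 computation used above
file="backup.sql.gz"
bucket="mybucket" path="backup" key_id="AKIAEXAMPLE" key_secret="examplesecret"
content_type="application/octet-stream"
date_hdr="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
content_md5="$(openssl md5 -binary < "$file" | base64)"
string_to_sign="PUT\n${content_md5}\n${content_type}\n${date_hdr}\n/${bucket}/${path}/${file}"
signature="$(printf "$string_to_sign" | openssl sha1 -binary -hmac "$key_secret" | base64)"
echo "Authorization: AWS ${key_id}:${signature}"
```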
@@ -373,14 +426,14 @@ print_info "Initialized on `date`"
esac
### Zabbix
- if [ "$ENABLE_ZABBIX" = "TRUE" ] || [ "$ENABLE_ZABBIX" = "true" ]; then
+ if var_true $ENABLE_ZABBIX ; then
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o `stat -c%s ${DB_DUMP_TARGET}/${TARGET}`
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o `date -r ${DB_DUMP_TARGET}/${TARGET} +'%s'`
fi
### Automatic Cleanup
if [[ -n "$DB_CLEANUP_TIME" ]]; then
- find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "$DBTYPE_$DBNAME_*.*" -exec rm {} \;
+ find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "*" -exec rm {} \;
fi
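
`DB_CLEANUP_TIME` is expressed in minutes, and with the pattern relaxed to `*` the cleanup prunes every file in `DB_DUMP_TARGET` older than that age; the old pattern `$DBTYPE_$DBNAME_*.*` did not expand as intended, since bash reads `$DBTYPE_` and `$DBNAME_` as (unset) variable names. A hedged worked example, keeping roughly one week of backups:

```bash
# Equivalent standalone command for DB_CLEANUP_TIME=10080 (7 days * 24 h * 60 min) and DB_DUMP_TARGET=/backup
find /backup/ -mmin +10080 -iname "*" -exec rm {} \;
```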
### Post Backup Custom Script Support
@@ -395,7 +448,7 @@ print_info "Initialized on `date`"
fi
### Go back to Sleep until next Backup time
- if [ "$MANUAL" = "TRUE" ]; then
+ if var_true $MANUAL ; then
exit 1;
else
sleep $(($DB_DUMP_FREQ*60))
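
Throughout this refactor, literal `TRUE`/`true` string comparisons are replaced with the `var_true` helper, and `file_env` / `sanity_var` come from the same `tiredofit/alpine` base-image function library now loaded via `/assets/functions/00-container`. Those helpers are not part of this diff; the following is only a rough sketch of how they presumably behave, for orientation:

```bash
# Assumed behaviour of the base-image helpers (not the actual implementations)
var_true() {           # true for TRUE/true in any case
    [ "${1,,}" = "true" ]
}

file_env() {           # allow VAR or VAR_FILE (Docker-secrets style)
    local var="$1" file_var="${1}_FILE"
    if [ -n "${!file_var:-}" ]; then
        export "$var"="$(cat "${!file_var}")"
    fi
}

sanity_var() {         # abort if a required variable is unset
    if [ -z "${!1:-}" ]; then
        echo "** [db-backup] ERROR: Missing $2 ($1)"
        exit 1
    fi
}
```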

View File

@@ -1,4 +1,4 @@
#!/usr/bin/with-contenv bash
echo '** Performing Manual Backup'
- /etc/s6/services/10-db-backup/run NOW
+ /etc/services.available/10-db-backup/run NOW
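
This is the `backup-now` helper updated for the new s6 `services.available` layout (the 1.21.3 fix). A hypothetical invocation from the host, assuming the container is named `db-backup` and the script is on the container's PATH:

```bash
docker exec db-backup backup-now
```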