Compare commits

...

10 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Dave Conroy | 955a08a21b | Release 1.23.0 - See CHANGELOG.md | 2020-06-15 09:44:07 -07:00 |
| Dave Conroy | bf97c3ab97 | Update README.md | 2020-06-10 05:48:03 -07:00 |
| Dave Conroy | 11969da1ea | Release 1.22.0 - See CHANGELOG.md | 2020-06-10 05:45:49 -07:00 |
| Dave Conroy | 7998156576 | Release 1.21.3 - See CHANGELOG.md | 2020-06-10 05:19:24 -07:00 |
| Dave Conroy | 6655d5a12a | Release 1.21.2 - See CHANGELOG.md | 2020-06-08 21:29:54 -07:00 |
| Dave Conroy | bd141cc865 | Release 1.21.1 - See CHANGELOG.md | 2020-06-04 05:59:29 -07:00 |
| Dave Conroy | abf2a877f7 | Fix example | 2020-06-03 20:41:17 -07:00 |
| Dave Conroy | 3115cb3440 | Release 1.21.0 - See CHANGELOG.md | 2020-06-03 05:55:46 -07:00 |
| Dave Conroy | 859ce5ffa3 | Release 1.20.1 - See CHANGELOG.md | 2020-04-24 15:45:52 -07:00 |
| Dave Conroy | 4d1577e553 | Fix malformed backtick | 2020-04-22 14:39:09 -07:00 |
6 changed files with 167 additions and 40 deletions

View File

@@ -1,3 +1,50 @@
## 1.23.0 2020-06-15 <dave at tiredofit dot ca>
### Added
- Add zstd compression support
- Add choice of compression level
## 1.22.0 2020-06-10 <dave at tiredofit dot ca>
### Added
- Added EXTRA_OPTS variable to all backup commands to pass extra arguments
## 1.21.3 2020-06-10 <dave at tiredofit dot ca>
### Changed
- Fix `backup-now` manual script due to services.available change
## 1.21.2 2020-06-08 <dave at tiredofit dot ca>
### Added
- Change to support tiredofit/alpine base image 5.0.0
## 1.21.1 2020-06-04 <dave at tiredofit dot ca>
### Changed
- Bugfix to initialization routine
## 1.21.0 2020-06-03 <dave at tiredofit dot ca>
### Added
- Add S3 Compatible Storage Support
### Changed
- Switch some variables to support tiredofit/alpine base image better
- Fix issue with parallel compression not working correctly
## 1.20.1 2020-04-24 <dave at tiredofit dot ca>
### Changed
- Fix Auto Cleanup routines when using `root` as username
## 1.20.0 2020-04-22 <dave at tiredofit dot ca>
### Added

View File

@@ -4,7 +4,7 @@ LABEL maintainer="Dave Conroy (dave at tiredofit dot ca)"
### Set Environment Variables
ENV ENABLE_CRON=FALSE \
ENABLE_SMTP=FALSE \
ENABLE_ZABBIX=FALSE \
ENABLE_ZABBIX=TRUE \
ZABBIX_HOSTNAME=db-backup
### Dependencies
@@ -30,12 +30,13 @@ RUN set -ex && \
postgresql-client \
redis \
xz \
zstd \
&& \
\
apk add \
pixz@testing \
&& \
\
\
mkdir -p /usr/src/pbzip2 && \
curl -ssL https://launchpad.net/pbzip2/1.1/1.1.13/+download/pbzip2-1.1.13.tar.gz | tar xvfz - --strip=1 -C /usr/src/pbzip2 && \
cd /usr/src/pbzip2 && \

View File

@@ -12,12 +12,12 @@ This will build a container for backing up multiple type of DB Servers
Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis, Rethink servers.
* dump to local filesystem
* dump to local filesystem or back up to S3 compatible services
* select database user and password
* backup all databases
* choose to have an MD5 sum after backup for verification
* delete old backups after specific amount of time
* choose compression type (none, gz, bz, xz)
* choose compression type (none, gz, bz, xz, zstd)
* connect to any container running on the same system
* select how often to run a dump
* select when to start the first dump, whether time of day or relative to container start time
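As a quick orientation to the feature list above, here is a minimal sketch of running the image against a database on the same Docker network; the image tag, network name, and credentials are illustrative assumptions, not taken from this diff:

```bash
# Hypothetical quick start: dump a MariaDB container reachable on a
# shared network, writing zstd-compressed backups to ./backups daily.
docker run -d \
  --name db-backup \
  --network my-app-net \
  -v "$(pwd)/backups:/backup" \
  -e DB_TYPE=mysql \
  -e DB_HOST=mariadb \
  -e DB_USER=root \
  -e DB_PASS=examplerootpassword \
  -e COMPRESSION=ZSTD \
  -e DB_DUMP_FREQ=1440 \
  tiredofit/db-backup
```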
@@ -78,17 +78,21 @@ The following directories are used for configuration and can be mapped for persi
| Directory | Description |
|-----------|-------------|
| `/backup` | Backups |
| `/assets/custom-scripts | *Optional* Put custom scripts in this directory to execute after backup operations`
| `/assets/custom-scripts` | *Optional* Put custom scripts in this directory to execute after backup operations
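For illustration, a post-backup hook is just an executable script placed in that directory; this one only logs a line, and the filename is a hypothetical example:

```bash
#!/usr/bin/with-contenv bash
# Hypothetical /assets/custom-scripts/post-script.sh, executed after
# each backup run; replace the echo with pruning or notification logic.
echo "** [post-script] backup finished at $(date)"
```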
## Environment Variables
*If you are trying to back up a database that doesn't have a user or a password (you should!), make sure you set `CONTAINER_ENABLE_DOCKER_SECRETS=FALSE`*
Along with the Environment Variables from the [Base image](https://hub.docker.com/r/tiredofit/alpine), below is the complete list of available options that can be used to customize your installation.
| Parameter | Description |
|-----------|-------------|
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, or none `NONE` - Default `GZ`
| `BACKUP_LOCATION` | Backup to `FILESYSTEM` or `S3` compatible services like S3, Minio, Wasabi - Default `FILESYSTEM`
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` - Default `GZ`
| `COMPRESSION_LEVEL` | Numerical compression level; most types accept `1` to `9`, while `ZSTD` accepts `1` to `19` - Default `3` |
| `DB_TYPE` | Type of DB server to back up: `couch` `influx` `mysql` `pgsql` `mongo` `redis` `rethink`
| `DB_HOST` | Server Hostname e.g. `mariadb`
| `DB_NAME` | Schema Name e.g. `database`
@@ -101,11 +105,26 @@ Along with the Environment Variables from the [Base image](https://hub.docker.co
| | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half
| `DB_CLEANUP_TIME` | Value in minutes to delete old backups (only fired when the dump frequency fires). `1440` would delete anything older than 1 day. Leave unset to keep everything.
| `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed.
| `EXTRA_OPTS` | If you need to pass extra arguments to the backup command, add them here e.g. "--extra-command"
| `MD5` | Generate MD5 Sum in Directory, `TRUE` or `FALSE` - Default `TRUE`
| `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` - Default `TRUE` |
| `SPLIT_DB` | If using root as the username with multiple DBs on the system, set to `TRUE` to create separate backups per database instead of all in one. - Default `FALSE` |
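Putting several of these options together, a sketch (image tag and credentials assumed, not from this diff) that splits per-database dumps, compresses with level-19 zstd on all cores, and forwards an extra flag to the dump command via `EXTRA_OPTS`:

```bash
# Hypothetical settings: separate dump per database, parallel level-19
# zstd compression, an extra mysqldump flag, and 1-day retention.
docker run -d \
  -v "$(pwd)/backups:/backup" \
  -e DB_TYPE=mysql \
  -e DB_HOST=mariadb \
  -e DB_USER=root \
  -e DB_PASS=examplerootpassword \
  -e SPLIT_DB=TRUE \
  -e COMPRESSION=ZSTD \
  -e COMPRESSION_LEVEL=19 \
  -e PARALLEL_COMPRESSION=TRUE \
  -e EXTRA_OPTS=--single-transaction \
  -e DB_CLEANUP_TIME=1440 \
  tiredofit/db-backup
```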
**Backing Up to S3 Compatible Services**
If `BACKUP_LOCATION` = `S3` then the following options are used.
| Parameter | Description |
|-----------|-------------|
| `S3_BUCKET` | S3 Bucket name e.g. 'mybucket' |
| `S3_HOST` | Hostname of S3 Server e.g "s3.amazonaws.com" - You can also include a port if necessary
| `S3_KEY_ID` | S3 Key ID |
| `S3_KEY_SECRET` | S3 Key Secret |
| `S3_PATH` | S3 Pathname to save to e.g. '`backup`' |
| `S3_PROTOCOL` | Use either `http` or `https` to access service - Default `https` |
| `S3_URI_STYLE` | Choose either `VIRTUALHOST` or `PATH` style - Default `VIRTUALHOST`
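For example, a hedged S3 configuration (bucket, keys, and endpoint are placeholders) that would upload each dump to `https://mybucket.s3.amazonaws.com/backup/`:

```bash
# Hypothetical S3 target; with BACKUP_LOCATION=S3 the finished dump
# (and its .md5, if enabled) is PUT to the bucket and removed locally.
docker run -d \
  -e BACKUP_LOCATION=S3 \
  -e S3_HOST=s3.amazonaws.com \
  -e S3_BUCKET=mybucket \
  -e S3_PATH=backup \
  -e S3_KEY_ID=AKIAEXAMPLEKEY \
  -e S3_KEY_SECRET=examplesecret \
  -e S3_PROTOCOL=https \
  -e S3_URI_STYLE=VIRTUALHOST \
  -e DB_TYPE=mysql \
  -e DB_HOST=mariadb \
  tiredofit/db-backup
```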
## Maintenance
Manual Backups can be performed by entering the container and typing `backup-now`
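From the Docker host, this amounts to (container name assumed for illustration):

```bash
# Trigger an immediate dump; "db-backup" is an assumed container name.
docker exec db-backup backup-now
```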

View File

@@ -6,7 +6,6 @@ services:
image: mariadb:latest
volumes:
- ./db:/var/lib/mysql
- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- MYSQL_ROOT_PASSWORD=examplerootpassword
- MYSQL_DATABASE=example
@@ -21,6 +20,7 @@ services:
- example-db
volumes:
- ./backups:/backup
- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- DB_TYPE=mariadb
- DB_HOST=example-db

View File

@@ -1,6 +1,7 @@
#!/usr/bin/with-contenv bash
for s in /assets/functions/*; do source $s; done
source /assets/functions/00-container
PROCESS_NAME="db-backup"
date >/dev/null
@@ -12,39 +13,56 @@ fi
### Sanity Test
sanity_var DB_TYPE "Database Type"
sanity_var DB_HOST "Database Host"
file_env 'DB_USER'
file_env 'DB_PASS'
### Set Defaults
BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
COMPRESSION=${COMPRESSION:-GZ}
PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backup}
DBHOST=${DB_HOST}
DBNAME=${DB_NAME}
DBPASS=${DB_PASS}
DBUSER=${DB_USER}
DBTYPE=${DB_TYPE}
DBUSER=${DB_USER}
MD5=${MD5:-TRUE}
PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
SIZE_VALUE=${SIZE_VALUE:-"bytes"}
SPLIT_DB=${SPLIT_DB:-FALSE}
TMPDIR=/tmp/backups
if [ "BACKUP_TYPE" = "S3" ] || [ "BACKUP_TYPE" = "s3" ] || [ "BACKUP_TYPE" = "MINIO" ] || [ "BACKUP_TYPE" = "minio" ] ; then
S3_PROTOCOL=${S3_PROTOCOL:-"https"}
sanity_var S3_HOST "S3 Host"
sanity_var S3_BUCKET "S3 Bucket"
sanity_var S3_KEY_ID "S3 Key ID"
sanity_var S3_KEY_SECRET "S3 Key Secret"
sanity_var S3_URI_STYLE "S3 URI Style (Virtualhost or Path)"
sanity_var S3_PATH "S3 Path"
file_env 'S3_KEY_ID'
file_env 'S3_KEY_SECRET'
fi
if [ "$1" = "NOW" ]; then
DB_DUMP_BEGIN=+0
MANUAL=TRUE
fi
### Set Compression Options
if [ "$PARALLEL_COMPRESSION" = "TRUE " ]; then
BZIP="pbzip2"
GZIP="pigz"
XZIP="pixz"
if var_true $PARALLEL_COMPRESSION ; then
BZIP="pbzip2 -${COMPRESSION_LEVEL}"
GZIP="pigz -${COMPRESSION_LEVEL}"
XZIP="pixz -${COMPRESSION_LEVEL}"
ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
else
BZIP="bzip2"
GZIP="gzip"
XZIP="xz"
BZIP="bzip2 -${COMPRESSION_LEVEL}"
GZIP="gzip -${COMPRESSION_LEVEL}"
XZIP="xz -${COMPRESSION_LEVEL} "
ZSTD="zstd --rm -${COMPRESSION_LEVEL}"
fi
@@ -98,21 +116,21 @@ function backup_couch() {
}
function backup_mysql() {
if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
if var_true $SPLIT_DB ; then
DATABASES=`mysql -h ${DBHOST} -P $DBPORT -u$DBUSER --batch -e "SHOW DATABASES;" | grep -v Database|grep -v schema`
for db in $DATABASES; do
if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
echo "** [db-backup] Dumping database: $db"
TARGET=mysql_${db}_${DBHOST}_${now}.sql
mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER --databases $db > ${TMPDIR}/${TARGET}
mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} --databases $db > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
done
else
mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER > ${TMPDIR}/${TARGET}
mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
@@ -139,20 +157,20 @@ function backup_mongo() {
}
function backup_pgsql() {
if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
if var_true $SPLIT_DB ; then
export PGPASSWORD=${DBPASS}
DATABASES=`psql -h $DBHOST -U $DBUSER -p ${DBPORT} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' `
for db in $DATABASES; do
print_info "Dumping database: $db"
TARGET=pgsql_${db}_${DBHOST}_${now}.sql
pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db > ${TMPDIR}/${TARGET}
pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
done
else
export PGPASSWORD=${DBPASS}
pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} > ${TMPDIR}/${TARGET}
pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} ${EXTRA_OPTS} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
@@ -161,7 +179,7 @@ function backup_pgsql() {
function backup_redis() {
TARGET=redis_${db}_${DBHOST}_${now}.rdb
echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET}
echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET} ${EXTRA_OPTS}
print_info "Dumping Redis - Flushing Redis Cache First"
sleep 10
try=5
@@ -183,7 +201,7 @@ function backup_redis() {
function backup_rethink() {
TARGET=rethink_${db}_${DBHOST}_${now}.tar.gz
print_info "Dumping rethink Database: $db"
rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR}
rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR} ${EXTRA_OPTS}
move_backup
}
@@ -262,16 +280,20 @@ function check_availability() {
function compression() {
case "$COMPRESSION" in
"GZ" | "gz" | "gzip" | "GZIP")
$GZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.gz
$GZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.gz
;;
"BZ" | "bz" | "bzip2" | "BZIP2" | "bzip" | "BZIP" | "bz2" | "BZ2")
$BZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.bz2
$BZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.bz2
;;
"XZ" | "xz" | "XZIP" | "xzip" )
$XZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.xz
$XZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.xz
;;
"ZSTD" | "zstd" | "ZST" | "zst" )
$ZSTD ${TMPDIR}/${TARGET}
TARGET=${TARGET}.zst
;;
"NONE" | "none" | "FALSE" | "false")
;;
@@ -279,7 +301,7 @@ function compression() {
}
function generate_md5() {
if [ "$MD5" = "TRUE" ] || [ "$MD5" = "true" ] ; then
if var_true $MD5 ; then
cd $TMPDIR
md5sum ${TARGET} > ${TARGET}.md5
MD5VALUE=$(md5sum ${TARGET} | awk '{ print $1}')
@@ -287,9 +309,6 @@ fi
}
function move_backup() {
mkdir -p ${DB_DUMP_TARGET}
mv ${TMPDIR}/*.md5 ${DB_DUMP_TARGET}/
mv ${TMPDIR}/${TARGET} ${DB_DUMP_TARGET}/${TARGET}
case "$SIZE_VALUE" in
"b" | "bytes" )
SIZE_VALUE=1
@@ -306,6 +325,47 @@ function move_backup() {
else
FILESIZE=$(du -h "${DB_DUMP_TARGET}/${TARGET}" | awk '{ print $1}')
fi
case "${BACKUP_LOCATION}" in
"FILE" | "file" | "filesystem" | "FILESYSTEM" )
mkdir -p ${DB_DUMP_TARGET}
mv ${TMPDIR}/*.md5 ${DB_DUMP_TARGET}/
mv ${TMPDIR}/${TARGET} ${DB_DUMP_TARGET}/${TARGET}
;;
"S3" | "s3" | "MINIO" | "minio" )
s3_content_type="application/octet-stream"
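# Build the endpoint according to the URI style, then sign each PUT with
# an AWS v2-style signature: HMAC-SHA1 over
# "PUT\n<Content-MD5>\n<Content-Type>\n<Date>\n/<bucket>/<key>".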
if [ "$S3_URI_STYLE" = "VIRTUALHOST" ] || [ "$S3_URI_STYLE" = "VHOST" ] [ "$S3_URI_STYLE" = "virtualhost" ] [ "$S3_URI_STYLE" = "vhost" ] ; then
s3_url="${S3_BUCKET}.${S3_HOST}"
else
s3_url="${S3_HOST}/${S3_BUCKET}"
fi
if var_true $MD5 ; then
s3_date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
s3_md5="$(libressl md5 -binary < "${TMPDIR}/${TARGET}.md5" | base64)"
sig="$(printf "PUT\n$s3_md5\n${s3_content_type}\n$s3_date\n/$S3_BUCKET/$S3_PATH/${TARGET}.md5" | libressl sha1 -binary -hmac "${S3_KEY_SECRET}" | base64)"
print_debug "Uploading ${TARGET}.md5 to S3"
curl -T "${TMPDIR}/${TARGET}.md5" ${S3_PROTOCOL}://${s3_url}/${S3_PATH}/${TARGET}.md5 \
-H "Date: $date" \
-H "Authorization: AWS ${S3_KEY_ID}:$sig" \
-H "Content-Type: ${s3_content_type}" \
-H "Content-MD5: ${s3_md5}"
fi
s3_date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
s3_md5="$(libressl md5 -binary < "${TMPDIR}/${TARGET}" | base64)"
sig="$(printf "PUT\n$s3_md5\n${s3_content_type}\n$s3_date\n/$S3_BUCKET/$S3_PATH/${TARGET}" | libressl sha1 -binary -hmac "${S3_KEY_SECRET}" | base64)"
print_debug "Uploading ${TARGET} to S3"
curl -T ${TMPDIR}/${TARGET} ${S3_PROTOCOL}://${s3_url}/${S3_PATH}/${TARGET} \
-H "Date: $s3_date" \
-H "Authorization: AWS ${S3_KEY_ID}:$sig" \
-H "Content-Type: ${s3_content_type}" \
-H "Content-MD5: ${s3_md5}"
rm -rf ${TMPDIR}/*.md5
rm -rf ${TMPDIR}/${TARGET}
;;
esac
}
@@ -373,14 +433,14 @@ print_info "Initialized on `date`"
esac
### Zabbix
if [ "$ENABLE_ZABBIX" = "TRUE" ] || [ "$ENABLE_ZABBIX" = "true" ]; then
if var_true $ENABLE_ZABBIX ; then
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o `stat -c%s ${DB_DUMP_TARGET}/${TARGET}`
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o `date -r ${DB_DUMP_TARGET}/${TARGET} +'%s'`
fi
### Automatic Cleanup
if [[ -n "$DB_CLEANUP_TIME" ]]; then
find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "$DBTYPE_$DBNAME_*.*" -exec rm {} \;
find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "*" -exec rm {} \;
fi
### Post Backup Custom Script Support
@@ -395,7 +455,7 @@ print_info "Initialized on `date`"
fi
### Go back to Sleep until next Backup time
if [ "$MANUAL" = "TRUE" ]; then
if var_true $MANUAL ; then
exit 1;
else
sleep $(($DB_DUMP_FREQ*60))

View File

@@ -1,4 +1,4 @@
#!/usr/bin/with-contenv bash
echo '** Performing Manual Backup'
/etc/s6/services/10-db-backup/run NOW
/etc/services.available/10-db-backup/run NOW