Compare commits


29 Commits
3.0.1 ... 3.1.1

Author SHA1 Message Date
Dave Conroy
24ed769429 Release 3.1.1 - See CHANGELOG.md 2022-03-28 10:29:00 -07:00
Dave Conroy
cbd87a5ede Update README.md 2022-03-23 19:16:51 -07:00
Dave Conroy
13214665c9 Release 3.1.0 - See CHANGELOG.md 2022-03-23 16:21:12 -07:00
Dave Conroy
2e71f617a1 Merge pull request #107 from piemonkey/mongo-restore
Add Mongo support to restore script
2022-03-23 12:24:43 -07:00
Dave Conroy
fbe9dde4a1 Release 3.0.16 - See CHANGELOG.md 2022-03-23 07:57:28 -07:00
Dave Conroy
eb2a18672b Release 3.0.15 - See CHANGELOG.md 2022-03-22 18:27:57 -07:00
Dave Conroy
5f784ed156 Tweak Example 2022-03-22 09:57:28 -07:00
Dave Conroy
d9a4690ea2 Release 3.0.14 - See CHANGELOG.md 2022-03-22 07:52:15 -07:00
Rich
e7eb88c32a Give feedback if restore script doesn't support db type 2022-03-22 09:12:34 +01:00
Rich
52dc510b89 Add auto restore support for mongodb 2022-03-22 09:12:16 +01:00
Rich
06677dbc8b Fix typo setting dbhost in restore script 2022-03-22 09:12:05 +01:00
Rich
e0dd2bc91b Set db type from env vars correctly during restore 2022-03-22 09:11:54 +01:00
Dave Conroy
baba842373 Release 3.0.13 - See CHANGELOG.md 2022-03-21 16:26:45 -07:00
Dave Conroy
108938c17a Release 3.0.12 - See CHANGELOG.md 2022-03-21 13:51:01 -07:00
Dave Conroy
b0b39fa8c1 Release 3.0.11 - See CHANGELOG.md 2022-03-21 12:34:33 -07:00
Dave Conroy
fa8f43132c Release 3.0.10 - See CHANGELOG.md 2022-03-21 11:19:17 -07:00
Dave Conroy
3f693feefc Release 3.0.9 - See CHANGELOG.md 2022-03-21 10:57:17 -07:00
Dave Conroy
bc32b7d084 Release 3.0.8 - See CHANGELOG.md 2022-03-21 10:47:18 -07:00
Dave Conroy
f7f6a646a0 Release 3.0.7 - See CHANGELOG.md 2022-03-21 10:32:57 -07:00
Dave Conroy
b755497062 Release 3.0.6 - See CHANGELOG.md 2022-03-21 10:32:27 -07:00
Dave Conroy
656bca02cd Release 3.0.5 - See CHANGELOG.md 2022-03-21 09:44:05 -07:00
Dave Conroy
da0c7f9a03 Release 3.0.4 - See CHANGELOG.md 2022-03-21 09:04:52 -07:00
Dave Conroy
b8d7832145 Release 3.0.3 - See CHANGELOG.md 2022-03-21 08:07:26 -07:00
Dave Conroy
a4d7d833b7 Release 3.0.2 - See CHANGELOG.md 2022-03-18 06:37:19 -07:00
Dave Conroy
24b3239e9f Send proper Zabbix value for Exit Code 2022-03-18 06:34:16 -07:00
Dave Conroy
ac5a09361a Add updated Zabbix template 2022-03-18 06:33:38 -07:00
Dave Conroy
179b39e7d5 Don't fail the script when there is an error code 1 2022-03-18 06:10:33 -07:00
Dave Conroy
22db6d79a7 Cleanup backup start time argument for POST_SCRIPT environment var 2022-03-18 06:01:24 -07:00
Dave Conroy
f725b59c5c Cleanup DEBUG Mode backup duration output 2022-03-18 06:00:39 -07:00
9 changed files with 747 additions and 512 deletions

View File

@@ -1,3 +1,123 @@
## 3.1.1 2022-03-28 <dave at tiredofit dot ca>
### Changed
- Resolve some issues with backups of Mongo and others not saving the proper timestamp
## 3.1.0 2022-03-23 <dave at tiredofit dot ca>
### Added
- Backup multiple databases by separating with commas e.g. db1,db2
- Backup ALL databases by setting DB_NAME to ALL
- Exclude databases from being backed up, comma separated, when DB_NAME is ALL e.g. DB_NAME_EXCLUDE=db3,db4
- Backup timers execute per database, not per the whole script run
- Post scripts run after each database backup
- Checksum does not occur when a database backup fails
- Database cleanup does not occur when any database backups fail throughout the session
- MongoDB now supported with 'restore' script - Credit to piemonkey@github
- Lots of reshuffling and optimizations in the script due to the botched 3.0 release
### Changed
- ZSTD replaces GZ as default compression type
- Output is cleaner when backups are occurring
## 3.0.16 2022-03-23 <dave at tiredofit dot ca>
### Changed
- Fix for SPLIT_DB not looping through all database names properly
## 3.0.15 2022-03-22 <dave at tiredofit dot ca>
### Changed
- Rework compression function
- Fix for Bzip compression failing
## 3.0.14 2022-03-22 <dave at tiredofit dot ca>
### Changed
- Rearrange Notice stating when next backup is going to start
## 3.0.13 2022-03-21 <dave at tiredofit dot ca>
### Added
- Add compression levels to debug mode
## 3.0.12 2022-03-21 <dave at tiredofit dot ca>
### Added
- Throw Errors for MANUAL mode when certain other CONTAINER_* services are enabled
## 3.0.11 2022-03-21 <dave at tiredofit dot ca>
### Changed
- Fix for Parallel Compression
## 3.0.10 2022-03-21 <dave at tiredofit dot ca>
### Changed
- Fix for restore script not taking "custom" usernames or passwords
## 3.0.9 2022-03-21 <dave at tiredofit dot ca>
### Changed
- Switch to using parallel versions of compression tools all the time, yet explicitly state the threads in use (1 or ++)
## 3.0.8 2022-03-21 <dave at tiredofit dot ca>
### Added
- Add PARALLEL_COMPRESSION_THREADS environment variable to limit the number of threads when compressing - Currently autodetects however many processors are available to the container
## 3.0.7 2022-03-21 <dave at tiredofit dot ca>
### Reverted
- Strip unused LOG directives
## 3.0.6 2022-03-21 <dave at tiredofit dot ca>
### Changed
- Fix for parallel compression
## 3.0.5 2022-03-21 <dave at tiredofit dot ca>
### Changed
- Add more detail regarding manual modes
## 3.0.4 2022-03-21 <dave at tiredofit dot ca>
### Changed
- Fix for 3.0.3
## 3.0.3 2022-03-21 <dave at tiredofit dot ca>
### Changed
- Add documentation for Manual mode
- Revert Compression variables
## 3.0.2 2022-03-18 <dave at tiredofit dot ca>
### Changed
- Cleanup of Zabbix Agent options
- Updated Zabbix template
- Split apart S3 options for better debugging and also cleaned up their variables
- Fixed issue with post scripts not outputting proper backup start time
- Cleaned up some notifications
- Rearranged code
## 3.0.1 2022-03-17 <dave at tiredofit dot ca>
### Changed

View File

@@ -16,7 +16,8 @@ Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis servers.
* dump to local filesystem or backup to S3 Compatible services
* select database user and password
* backup all databases
* backup all, a single, or multiple databases
* backup all to separate files or one single file
* choose to have an MD5 or SHA1 sum after backup for verification
* delete old backups after specific amount of time
* choose compression type (none, gz, bz, xz, zstd)
@@ -47,10 +48,15 @@ Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis servers.
- [Persistent Storage](#persistent-storage-1)
- [Environment Variables](#environment-variables)
- [Base Images used](#base-images-used)
- [Container Options](#container-options)
- [Database Specific Options](#database-specific-options)
- [Scheduling Options](#scheduling-options)
- [Backup Options](#backup-options)
- [Backing Up to S3 Compatible Services](#backing-up-to-s3-compatible-services)
- [Maintenance](#maintenance)
- [Shell Access](#shell-access)
- [Manual Backups](#manual-backups)
- [Restoring Databases](#restoring-databases)
- [Custom Scripts](#custom-scripts)
- [Support](#support)
- [Usage](#usage)
@@ -117,28 +123,30 @@ Be sure to view the following repositories to understand all the customizable op
#### Container Options
| Parameter | Description | Default |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------- | --------------- |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------- | --------------- |
| `BACKUP_LOCATION` | Backup to `FILESYSTEM` or `S3` compatible services like S3, Minio, Wasabi | `FILESYSTEM` |
| `MODE` | `AUTO` mode to use internal scheduling routines or `MANUAL` to use this image for manual backups only, executed by your own means | `AUTO` |
| `MANUAL_RUN_FOREVER` | `TRUE` to keep the container running after a backup, or `FALSE` to make the container exit once the backup completes | `TRUE` |
| `TEMP_LOCATION` | Perform Backups and Compression in this temporary directory | `/tmp/backups/` |
| `DB_AUTH` | (Mongo Only - Optional) Authentication Database | |
| `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed. | `FALSE` |
| `POST_SCRIPT` | Fill this variable in with a command to execute after the backup completes (see the sketch below) | |
| `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create Separate DB Backups instead of all in one. | `FALSE` |
| `SPLIT_DB` | Create a separate archive for each database backed up. `TRUE` or `FALSE` (MySQL and PostgreSQL Only) | `TRUE` |
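As a sketch of how a post-backup hook receives its arguments — the order follows the comment embedded in the backup script (`EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE`); the file name and log text here are illustrative only:

```bash
#!/bin/bash
# post-script.sh - hypothetical hook, mounted to /assets/custom-scripts/post-script.sh
# Argument order per the backup script's custom-scripts comment:
# 1=EXIT_CODE 2=DB_TYPE 3=DB_HOST 4=DB_NAME 5=STARTEPOCH 6=FINISHEPOCH
# 7=DURATIONEPOCH 8=BACKUP_FILENAME 9=FILESIZE 10=CHECKSUMVALUE
exit_code="$1"; db_type="$2"; db_host="$3"; db_name="$4"
duration="$7"; filename="$8"; filesize="$9"; checksum="${10}"

echo "Backup of '${db_name}' on '${db_host}' (${db_type}) exited ${exit_code}"
echo "  ${filename}: ${filesize} bytes in ${duration}s, checksum ${checksum}"
```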
### Database Specific Options
| Parameter | Description | Default |
| --------- | --------------------------------------------------------------------------------------------- | ------- |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `DB_AUTH` | (Mongo Only - Optional) Authentication Database | |
| `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `sqlite3` | |
| `DB_HOST` | Server Hostname e.g. `mariadb`. For `sqlite3`, full path to DB file e.g. `/backup/db.sqlite3` | |
| `DB_NAME` | Schema Name e.g. `database` | |
| `DB_USER` | username for the database - use `root` to backup all MySQL databases. | |
| `DB_NAME` | Schema Name e.g. `database` or `ALL` to backup all databases the user has access to. Backup multiple by separating with commas e.g. `db1,db2` | |
| `DB_NAME_EXCLUDE` | If using `ALL`, use this to exclude databases, separated by commas, from being backed up e.g. `db3,db4` (see the example below) | |
| `DB_USER` | username for the database(s) - Can use `root` for MySQL | |
| `DB_PASS` | (optional if DB doesn't require it) password for the database | |
| `DB_PORT` | (optional) Set port to connect to DB_HOST. Defaults are provided | varies |
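For instance, the same container can back up two named schemas, or every schema except a few — a minimal sketch assuming the `tiredofit/db-backup` image and placeholder credentials:

```bash
# Back up two specific databases
docker run -d -v $(pwd)/backups:/backup \
  -e DB_TYPE=mysql -e DB_HOST=mariadb \
  -e DB_NAME=db1,db2 \
  -e DB_USER=root -e DB_PASS=password \
  tiredofit/db-backup

# Back up all databases the user can see, except db3 and db4
docker run -d -v $(pwd)/backups:/backup \
  -e DB_TYPE=mysql -e DB_HOST=mariadb \
  -e DB_NAME=ALL -e DB_NAME_EXCLUDE=db3,db4 \
  -e DB_USER=root -e DB_PASS=password \
  tiredofit/db-backup
```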
### Scheduling Options
| Parameter | Description | Default |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `DB_DUMP_FREQ` | How often to do a dump, in minutes. Defaults to 1440 minutes, or once per day. | `1440` |
| `DB_DUMP_FREQ` | How often to do a dump, in minutes after the first backup. Defaults to 1440 minutes, or once per day. | `1440` |
| `DB_DUMP_BEGIN` | What time to do the first dump. Defaults to immediate. Must be in one of two formats | |
| | Absolute HHMM, e.g. `2330` or `0415` | |
| | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half | |
@@ -147,16 +155,18 @@ Be sure to view the following repositories to understand all the customizable op
- You may need to wrap your `DB_DUMP_BEGIN` value in quotes for it to properly parse. There have been reports of values that start with a `0` being converted into a different format, which will not allow the timer to start at the correct time (see the example below).
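A minimal sketch of a quoted, leading-zero start time (image name and credentials are placeholders):

```bash
# First dump at 04:15; the quotes keep the leading zero intact
docker run -d -v $(pwd)/backups:/backup \
  -e DB_DUMP_BEGIN="0415" -e DB_DUMP_FREQ=1440 \
  -e DB_TYPE=mysql -e DB_HOST=mariadb -e DB_NAME=db1 \
  -e DB_USER=root -e DB_PASS=password \
  tiredofit/db-backup
```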
### Backup Options
| Parameter | Description | Default |
| ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | ------- |
| `ENABLE_COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` | `GZ` |
| `ENABLE_PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` | `TRUE` |
| ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------- | -------------- |
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` | `ZSTD` |
| `COMPRESSION_LEVEL` | Numerical value of the compression level to use; most allow `1` to `9`, except for `ZSTD` which allows `1` to `19` | `3` |
| `ENABLE_PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` | `TRUE` |
| `PARALLEL_COMPRESSION_THREADS` | Maximum amount of threads to use when compressing - Integer value e.g. `8` | `autodetected` |
| `ENABLE_CHECKSUM` | Generate either an MD5 or SHA1 checksum in the backup directory, `TRUE` or `FALSE` | `TRUE` |
| `CHECKSUM` | Either `MD5` or `SHA1` | `MD5` |
| `EXTRA_OPTS` | If you need to pass extra arguments to the backup command, add them here e.g. `--extra-command` | |
| `MYSQL_MAX_ALLOWED_PACKET` | Max allowed packet if backing up MySQL / MariaDB | `512M` |
| `MYSQL_SINGLE_TRANSACTION` | Backup in a single transaction with MySQL / MariaDB | `TRUE` |
| `MYSQL_STORED_PROCEDURES` | Backup stored procedures with MySQL / MariaDB | `TRUE` |
- When using compression with MongoDB, only `GZ` compression is possible.
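As an illustration of the options above, a sketch selecting zstd at a higher level with a capped thread count and SHA1 checksums (image name and database settings are placeholders):

```bash
docker run -d -v $(pwd)/backups:/backup \
  -e COMPRESSION=ZSTD -e COMPRESSION_LEVEL=10 \
  -e ENABLE_PARALLEL_COMPRESSION=TRUE -e PARALLEL_COMPRESSION_THREADS=4 \
  -e ENABLE_CHECKSUM=TRUE -e CHECKSUM=SHA1 \
  -e DB_TYPE=mysql -e DB_HOST=mariadb -e DB_NAME=db1 \
  -e DB_USER=root -e DB_PASS=password \
  tiredofit/db-backup
```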
#### Backing Up to S3 Compatible Services
@@ -179,7 +189,6 @@ If `BACKUP_LOCATION` = `S3` then the following options are used.
## Maintenance
### Shell Access
For debugging and maintenance purposes you may want to access the container's shell.
@@ -190,8 +199,10 @@ docker exec -it (whatever your container name is) bash
### Manual Backups
Manual Backups can be performed by entering the container and typing `backup-now`
- Recently there was a request to have the container work with Kubernetes cron scheduling. This can theoretically be accomplished by setting the container `MODE=MANUAL` and then setting `MANUAL_RUN_FOREVER=FALSE` - You would also want to disable a few features from the upstream base images, specifically `CONTAINER_ENABLE_SCHEDULING` and `CONTAINER_ENABLE_MONITORING`. This should allow the container to start, execute a backup, and then exit cleanly, as sketched below. An alternative way to run the script is to execute `/etc/services.available/10-db-backup/run`.
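A sketch of that one-shot configuration (image name and database settings are placeholders):

```bash
# Run a single backup, then exit cleanly - suitable for external cron scheduling
docker run --rm -v $(pwd)/backups:/backup \
  -e MODE=MANUAL -e MANUAL_RUN_FOREVER=FALSE \
  -e CONTAINER_ENABLE_SCHEDULING=FALSE -e CONTAINER_ENABLE_MONITORING=FALSE \
  -e DB_TYPE=mysql -e DB_HOST=mariadb -e DB_NAME=db1 \
  -e DB_USER=root -e DB_PASS=password \
  tiredofit/db-backup
```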
### Restoring Databases
Entering the container and executing `restore` will execute a menu-based script to restore your backups.
Entering the container and executing `restore` will execute a menu-based script to restore your backups - MariaDB, Postgres, and Mongo supported.
You will be presented with a series of menus allowing you to choose:
- What file to restore
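For example, launched from the host (container name is a placeholder):

```bash
docker exec -it (whatever your container name is) restore
```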

View File

@@ -20,7 +20,7 @@ services:
- example-db
volumes:
- ./backups:/backup
- ./post-script.sh:/assets/custom-scripts/post-script.sh
#- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- DB_TYPE=mariadb
- DB_HOST=example-db
@@ -30,8 +30,8 @@ services:
- DB_DUMP_FREQ=1440
- DB_DUMP_BEGIN=0000
- DB_CLEANUP_TIME=8640
- CHECKSUM=MD5
- COMPRESSION=XZ
- CHECKSUM=SHA1
- COMPRESSION=ZSTD
- SPLIT_DB=FALSE
restart: always

View File

@@ -2,20 +2,19 @@
BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
CHECKSUM=${CHECKSUM:-"MD5"}
COMPRESSION=${COMPRESSION:-"ZSTD"}
COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_TARGET=${DB_DUMP_TARGET:-"/backup"}
ENABLE_CHECKSUM=${ENABLE_CHECKSUM:-"TRUE"}
ENABLE_COMPRESSION=${ENABLE_COMPRESSION:-"GZ"}
ENABLE_PARALLEL_COMPRESSION={ENABLE_PARALLEL_COMPRESSION:-"TRUE"}
LOG_PATH=${LOG_PATH:-"/logs/"}
LOG_TYPE=${LOG_TYPE:-"BOTH"}
ENABLE_PARALLEL_COMPRESSION=${ENABLE_PARALLEL_COMPRESSION:-"TRUE"}
MANUAL_RUN_FOREVER=${MANUAL_RUN_FOREVER:-"TRUE"}
MODE=${MODE:-"AUTO"}
MYSQL_MAX_ALLOWED_PACKET=${MYSQL_MAX_ALLOWED_PACKET:-"512M"}
MYSQL_SINGLE_TRANSACTION=${MYSQL_SINGLE_TRANSACTION:-"TRUE"}
MYSQL_STORED_PROCEDURES=${MYSQL_STORED_PROCEDURES:-"TRUE"}
PARALLEL_COMPRESSION_THREADS=${PARALLEL_COMPRESSION_THREADS:-"$(nproc)"}
S3_CERT_SKIP_VERIFY=${S3_CERT_SKIP_VERIFY:-"TRUE"}
S3_PROTOCOL=${S3_PROTOCOL:-"https"}
SIZE_VALUE=${SIZE_VALUE:-"bytes"}

View File

@@ -1,20 +1,5 @@
#!/command/with-contenv bash
bootstrap_compression() {
### Set Compression Options
if var_true "${ENABLE_PARALLEL_COMPRESSION}" ; then
bzip="pbzip2 -${COMPRESSION_LEVEL}"
gzip="pigz -${COMPRESSION_LEVEL}"
xzip="pixz -${COMPRESSION_LEVEL}"
zstd="zstd --rm -${COMPRESSION_LEVEL}"
else
bzip="bzip2 -${COMPRESSION_LEVEL}"
gzip="gzip -${COMPRESSION_LEVEL}"
xzip="xz -${COMPRESSION_LEVEL} "
zstd="zstd --rm -${COMPRESSION_LEVEL}"
fi
}
bootstrap_variables() {
case "${dbtype,,}" in
couch* )
@@ -39,6 +24,7 @@ bootstrap_variables() {
dbtype=mysql
dbport=${DB_PORT:-3306}
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
;;
"mssql" | "microsoftsql" )
apkArch="$(apk --print-arch)"; \
@@ -53,6 +39,7 @@ bootstrap_variables() {
dbtype=pgsql
dbport=${DB_PORT:-5432}
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
;;
"redis" )
dbtype=redis
@@ -90,59 +77,68 @@ bootstrap_variables() {
}
backup_couch() {
pre_dbbackup
target=couch_${dbname}_${dbhost}_${now}.txt
compression
print_notice "Dumping CouchDB database: '${dbname}'"
curl -X GET http://${dbhost}:${dbport}/${dbname}/_all_docs?include_docs=true ${dumpoutput} | $dumpoutput > ${TEMP_LOCATION}/${target}
print_notice "Dumping CouchDB database: '${dbname}' ${compression_string}"
curl -X GET http://${dbhost}:${dbport}/${dbname}/_all_docs?include_docs=true ${compress_cmd} | $compress_cmd > ${TEMP_LOCATION}/${target}
exit_code=$?
check_exit_code
check_exit_code $target
generate_checksum
move_backup
move_dbbackup
post_dbbackup
}
backup_influx() {
if [ "${ENABLE_COMPRESSION,,}" = "none" ] || [ "${ENABLE_COMPRESSION,,}" = "false" ] ; then
:
else
print_notice "Compressing InfluxDB backup with gzip"
influx_compression="-portable"
compression_string=" and compressing with gzip"
fi
for DB in ${DB_NAME}; do
print_notice "Dumping Influx database: '${DB}'"
target=influx_${DB}_${dbhost}_${now}
influxd backup ${influx_compression} -database $DB -host ${dbhost}:${dbport} ${TEMP_LOCATION}/${target}
for db in ${DB_NAME}; do
pre_dbbackup
target=influx_${db}_${dbhost}_${now}
print_notice "Dumping Influx database: '${db}' ${compression_string}"
influxd backup ${influx_compression} -database $db -host ${dbhost}:${dbport} ${TEMP_LOCATION}/${target}
exit_code=$?
check_exit_code
check_exit_code $target
generate_checksum
move_backup
move_dbbackup
post_dbbackup
done
}
backup_mongo() {
pre_dbbackup
if [ "${ENABLE_COMPRESSION,,}" = "none" ] || [ "${ENABLE_COMPRESSION,,}" = "false" ] ; then
target=${dbtype}_${dbname}_${dbhost}_${now}.archive
else
print_notice "Compressing MongoDB backup with gzip"
target=${dbtype}_${dbname}_${dbhost}_${now}.archive.gz
mongo_compression="--gzip"
compression_string="and compressing with gzip"
fi
print_notice "Dumping MongoDB database: '${DB_NAME}'"
print_notice "Dumping MongoDB database: '${DB_NAME}' ${compression_string}"
mongodump --archive=${TEMP_LOCATION}/${target} ${mongo_compression} --host ${dbhost} --port ${dbport} ${MONGO_USER_STR}${MONGO_PASS_STR}${MONGO_AUTH_STR}${MONGO_DB_STR} ${EXTRA_OPTS}
exit_code=$?
check_exit_code
check_exit_code $target
cd "${TEMP_LOCATION}"
generate_checksum
move_backup
move_dbbackup
post_dbbackup
}
backup_mssql() {
pre_dbbackup
target=mssql_${dbname}_${dbhost}_${now}.bak
print_notice "Dumping MSSQL database: '${dbname}'"
/opt/mssql-tools/bin/sqlcmd -E -C -S ${dbhost}\,${dbport} -U ${dbuser} -P ${dbpass} -Q "BACKUP DATABASE \[${dbname}\] TO DISK = N'${TEMP_LOCATION}/${target}' WITH NOFORMAT, NOINIT, NAME = '${dbname}-full', SKIP, NOREWIND, NOUNLOAD, STATS = 10"
exit_code=$?
check_exit_code
check_exit_code $target
generate_checksum
move_backup
move_dbbackup
post_dbbackup
}
backup_mysql() {
@@ -152,64 +148,111 @@ backup_mysql() {
if var_true "${MYSQL_STORED_PROCEDURES}" ; then
stored_procedures="--routines"
fi
if [ "${dbname,,}" = "all" ] ; then
print_debug "Preparing to back up everything except for information_schema and _* prefixes"
db_names=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema )
if [ -n "${DB_NAME_EXCLUDE}" ] ; then
db_names_exclusions=$(echo "${DB_NAME_EXCLUDE}" | tr ',' '\n')
for db_exclude in ${db_names_exclusions} ; do
print_debug "Excluding '${db_exclude}' from ALL DB_NAME backups"
db_names=$(echo "$db_names" | sed "/${db_exclude}/d" )
done
fi
else
db_names=$(echo "${dbname}" | tr ',' '\n')
fi
print_debug "Databases Found: $(echo ${db_names} | xargs | tr ' ' ',')"
if var_true "${SPLIT_DB}" ; then
DATABASES=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema)
for db in "${DATABASES}" ; do
if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
print_debug "Backing up everything except for information_schema and _* prefixes"
print_notice "Dumping MySQL/MariaDB database: '${db}'"
for db in ${db_names} ; do
pre_dbbackup
target=mysql_${db}_${dbhost}_${now}.sql
compression
mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} --databases $db | $dumpoutput > ${TEMP_LOCATION}/${target}
print_notice "Dumping MySQL/MariaDB database: '${db}' ${compression_string}"
mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} --databases $db | $compress_cmd > "${TEMP_LOCATION}"/"${target}"
exit_code=$?
check_exit_code
check_exit_code $target
generate_checksum
move_backup
fi
move_dbbackup
post_dbbackup
done
else
print_debug "Not splitting database dumps into their own files"
pre_dbbackup
target=mysql_all_${dbhost}_${now}.sql
compression
print_notice "Dumping MySQL/MariaDB database: '${DB_NAME}'"
mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -A -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
print_notice "Dumping all MySQL / MariaDB databases: '$(echo ${db_names} | xargs | tr ' ' ',')' ${compression_string}"
mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} --databases $(echo ${db_names} | xargs) | $compress_cmd > "${TEMP_LOCATION}"/"${target}"
exit_code=$?
check_exit_code
check_exit_code $target
generate_checksum
move_backup
move_dbbackup
post_dbbackup
fi
}
backup_pgsql() {
export PGPASSWORD=${dbpass}
if var_true "${SPLIT_DB}" ; then
authdb=${DB_USER}
[ -n "${DB_NAME}" ] && authdb=${DB_NAME}
DATABASES=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
for db in "${DATABASES}"; do
print_notice "Dumping Postgresql database: $db"
if [ "${dbname,,}" = "all" ] ; then
print_debug "Preparing to back up all databases"
db_names=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
if [ -n "${DB_NAME_EXCLUDE}" ] ; then
db_names_exclusions=$(echo "${DB_NAME_EXCLUDE}" | tr ',' '\n')
for db_exclude in ${db_names_exclusions} ; do
print_debug "Excluding '${db_exclude}' from ALL DB_NAME backups"
db_names=$(echo "$db_names" | sed "/${db_exclude}/d" )
done
fi
else
db_names=$(echo "${dbname}" | tr ',' '\n')
fi
print_debug "Databases Found: $(echo ${db_names} | xargs | tr ' ' ',')"
if var_true "${SPLIT_DB}" ; then
for db in ${db_names} ; do
pre_dbbackup
target=pgsql_${db}_${dbhost}_${now}.sql
compression
pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
print_notice "Dumping PostgresSQL database: '${db}' ${compression_string}"
pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $compress_cmd > ${TEMP_LOCATION}/${target}
exit_code=$?
check_exit_code
check_exit_code $target
generate_checksum
move_backup
move_dbbackup
post_dbbackup
done
else
print_debug "Not splitting database dumps into their own files"
pre_dbbackup
target=pgsql_all_${dbhost}_${now}.sql
compression
print_notice "Dumping PostgreSQL: '${DB_NAME}'"
pg_dump -h ${dbhost} -U ${dbuser} -p ${dbport} ${dbname} ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
print_notice "Dumping all PostgreSQL databases: '$(echo ${db_names} | xargs | tr ' ' ',')' ${compression_string}"
tmp_db_names=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
for r_db_name in $(echo $db_names | xargs); do
tmp_db_names=$(echo "$tmp_db_names" | xargs | sed "s|${r_db_name}||g" )
done
sleep 5
for x_db_name in ${tmp_db_names} ; do
pgexclude_arg=$(echo ${pgexclude_arg} --exclude-database=${x_db_name})
done
pg_dumpall -h ${dbhost} -U ${dbuser} -p ${dbport} ${pgexclude_arg} ${EXTRA_OPTS} | $compress_cmd > ${TEMP_LOCATION}/${target}
exit_code=$?
check_exit_code
check_exit_code $target
generate_checksum
move_backup
move_dbbackup
post_dbbackup
fi
}
backup_redis() {
pre_dbbackup
print_notice "Dumping Redis - Flushing Redis Cache First"
target=redis_${db}_${dbhost}_${now}.rdb
echo bgsave | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} --rdb ${TEMP_LOCATION}/${target} ${EXTRA_OPTS}
print_notice "Dumping Redis - Flushing Redis Cache First"
sleep 10
try=5
while [ $try -gt 0 ] ; do
@@ -225,24 +268,26 @@ backup_redis() {
done
target_original=${target}
compression
$dumpoutput "${TEMP_LOCATION}/${target_original}"
$compress_cmd "${TEMP_LOCATION}/${target_original}"
generate_checksum
move_backup
move_dbbackup
post_dbbackup
}
backup_sqlite3() {
pre_dbbackup
db=$(basename "$dbhost")
db="${db%.*}"
target=sqlite3_${db}_${now}.sqlite3
compression
print_notice "Dumping sqlite3 database: '${dbhost}'"
print_notice "Dumping sqlite3 database: '${dbhost}' ${compression_string}"
sqlite3 "${dbhost}" ".backup '${TEMP_LOCATION}/backup.sqlite3'"
exit_code=$?
check_exit_code
cat "${TEMP_LOCATION}"/backup.sqlite3 | $dumpoutput > "${TEMP_LOCATION}/${target}"
check_exit_code $target
cat "${TEMP_LOCATION}"/backup.sqlite3 | $compress_cmd > "${TEMP_LOCATION}/${target}"
generate_checksum
move_backup
move_dbbackup
post_dbbackup
}
check_availability() {
@@ -326,49 +371,82 @@ check_availability() {
}
check_exit_code() {
print_debug "Exit Code is ${exit_code}"
print_debug "DB Backup Exit Code is ${exit_code}"
case "${exit_code}" in
0 )
print_info "Backup completed successfully"
print_info "DB Backup of '${1}' completed successfully"
;;
* )
print_error "Backup reported errors - Aborting"
exit 1
print_error "DB Backup of '${1}' reported errors"
master_exit_code=1
;;
esac
}
cleanup_old_data() {
if [ -n "${DB_CLEANUP_TIME}" ]; then
if [ "${master_exit_code}" != 1 ]; then
print_info "Cleaning up old backups"
mkdir -p "${DB_DUMP_TARGET}"
find "${DB_DUMP_TARGET}"/ -mmin +"${DB_CLEANUP_TIME}" -iname "*" -exec rm {} \;
else
print_info "Skipping Cleaning up old backups because there were errors in backing up"
fi
fi
}
compression() {
case "${ENABLE_COMPRESSION,,}" in
if var_false "${ENABLE_PARALLEL_COMPRESSION}" ; then
PARALLEL_COMPRESSION_THREADS=1
fi
case "${COMPRESSION,,}" in
gz* )
print_notice "Compressing backup with gzip"
compress_cmd="pigz -${COMPRESSION_LEVEL} -p ${PARALLEL_COMPRESSION_THREADS} "
compression_type="gzip"
target=${target}.gz
dumpoutput="$gzip "
;;
bz* )
print_notice "Compressing backup with bzip2"
compress_cmd="pbzip2 -${COMPRESSION_LEVEL} -p${PARALLEL_COMPRESSION_THREADS} "
compression_type="bzip2"
target=${target}.bz2
dumpoutput="$bzip "
;;
xz* )
print_notice "Compressing backup with xzip"
compress_cmd="pixz -${COMPRESSION_LEVEL} -p ${PARALLEL_COMPRESSION_THREADS} "
compression_type="xzip"
target=${target}.xz
dumpoutput="$xzip "
;;
zst* )
print_notice "Compressing backup with zstd"
compress_cmd="zstd --rm -${COMPRESSION_LEVEL} -T${PARALLEL_COMPRESSION_THREADS}"
compression_type="zstd"
target=${target}.zst
dumpoutput="$zstd "
;;
"none" | "false")
print_notice "Not compressing backups"
dumpoutput="cat "
compress_cmd="cat "
compression_type="none"
;;
esac
case "${CONTAINER_LOG_LEVEL,,}" in
"debug" )
if [ "${compression_type}" = "none" ] ; then
compression_string="with '${PARALLEL_COMPRESSION_THREADS}' threads"
else
compression_string="and compressing with '${compression_type}:${COMPRESSION_LEVEL}' with '${PARALLEL_COMPRESSION_THREADS}' threads"
fi
;;
* )
if [ "${compression_type}" != "none" ] ; then
compression_string="and compressing with '${compression_type}'"
fi
;;
esac
}
generate_checksum() {
if var_true "${ENABLE_CHECKSUM}" ;then
if [ "${exit_code}" = "0" ] ; then
case "${CHECKSUM,,}" in
"md5" )
checksum_command="md5sum"
@@ -385,10 +463,13 @@ generate_checksum() {
${checksum_command} "${target}" > "${target}"."${checksum_extension}"
checksum_value=$(${checksum_command} "${target}" | awk ' { print $1}')
print_debug "${checksum_extension^^}: ${checksum_value} - ${target}"
else
print_warn "Skipping Checksum creation because backup did not complete successfully"
fi
fi
}
move_backup() {
move_dbbackup() {
case "$SIZE_VALUE" in
"b" | "bytes" )
SIZE_VALUE=1
@@ -422,29 +503,78 @@ move_backup() {
export AWS_DEFAULT_REGION=${S3_REGION}
if [ -f "${S3_CERT_CA_FILE}" ] ; then
print_debug "Using Custom CA for S3 Backups"
s3_ssl=" --ca-bundle ${S3_CERT_CA_FILE}"
s3_ca_cert="--ca-bundle ${S3_CERT_CA_FILE}"
fi
if var_true "${S3_CERT_SKIP_VERIFY}" ; then
print_debug "Skipping SSL verification for HTTPS S3 Hosts"
s3_ssl="${s3_ssl} --no-verify-ssl"
s3_ssl="--no-verify-ssl"
fi
[[ ( -n "${S3_HOST}" ) ]] && PARAM_AWS_ENDPOINT_URL=" --endpoint-url ${S3_PROTOCOL}://${S3_HOST}"
aws ${PARAM_AWS_ENDPOINT_URL} s3 cp ${TEMP_LOCATION}/${target} s3://${S3_BUCKET}/${S3_PATH}/${target} ${s3_ssl} ${S3_EXTRA_OPTS}
aws ${PARAM_AWS_ENDPOINT_URL} s3 cp ${TEMP_LOCATION}/${target} s3://${S3_BUCKET}/${S3_PATH}/${target} ${s3_ssl} ${s3_ca_cert} ${S3_EXTRA_OPTS}
unset s3_ssl
unset s3_ca_cert
rm -rf "${TEMP_LOCATION}"/*."${checksum_extension}"
rm -rf "${TEMP_LOCATION}"/"${target}"
;;
esac
}
pre_dbbackup() {
dbbackup_start_time=$(date +"%s")
now=$(date +"%Y%m%d-%H%M%S")
now_time=$(date +"%H:%M:%S")
now_date=$(date +"%Y-%m-%d")
target=${dbtype}_${dbname}_${dbhost}_${now}.sql
}
post_dbbackup() {
dbbackup_finish_time=$(date +"%s")
dbbackup_total_time=$(echo $((dbbackup_finish_time-dbbackup_start_time)))
if var_true "${CONTAINER_ENABLE_MONITORING}" ; then
print_notice "Sending Backup Statistics to Zabbix"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o "$(stat -c%s "${DB_DUMP_TARGET}"/"${target}")"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o "$(date -r "${DB_DUMP_TARGET}"/"${target}" +'%s')"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.status -o "${exit_code}"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.backup_duration -o "$(echo $((dbbackup_finish_time-dbbackup_start_time)))"
fi
### Post Script Support
if [ -n "${POST_SCRIPT}" ] ; then
print_notice "Found POST_SCRIPT environment variable. Executing '${POST_SCRIPT}"
eval "${POST_SCRIPT}" "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${dbbackup_start_time}" "${dbbackup_finish_time}" "${dbbackup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
fi
### Post Backup Custom Script Support
if [ -d "/assets/custom-scripts/" ] ; then
print_notice "Found Post Backup Custom Script to execute"
for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
print_notice "Running Script: '${f}'"
## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE
${f} "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${dbbackup_start_time}" "${dbbackup_finish_time}" "${dbbackup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
done
fi
print_notice "DB Backup for '${db}' time taken: $(echo ${dbbackup_total_time} | awk '{printf "Hours: %d Minutes: %02d Seconds: %02d", $1/3600, ($1/60)%60, $1%60}')"
}
sanity_test() {
sanity_var DB_TYPE "Database Type"
sanity_var DB_HOST "Database Host"
file_env 'DB_USER'
file_env 'DB_PASS'
case "${dbtype,,}" in
"mysql" | "mariadb" )
sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
;;
postgres* | "pgsql" )
sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
;;
esac
if [ "${BACKUP_LOCATION,,}" = "s3" ] || [ "${BACKUP_LOCATION,,}" = "minio" ] ; then
sanity_var S3_BUCKET "S3 Bucket"
sanity_var S3_PATH "S3 Path"
@@ -458,7 +588,7 @@ setup_mode() {
if [ "${MODE,,}" = "auto" ] || [ ${MODE,,} = "default" ] ; then
print_debug "Running in Auto / Default Mode - Letting Image control scheduling"
else
print_info "Running in Manual mode - Execute 'backup_now' to run a manual backup"
print_info "Running in Manual mode - Execute 'backup_now' or '/etc/services.available/10-db-backup/run' to perform a manual backup"
service_stop 10-db-backup
if var_true "${MANUAL_RUN_FOREVER}" ; then
mkdir -p /etc/services.d/99-run_forever
@@ -470,6 +600,20 @@ do
done
EOF
chmod +x /etc/services.d/99-run_forever/run
else
if var_true "${CONTAINER_ENABLE_SCHEDULING}" ; then
print_error "Manual / Exit after execution mode doesn't work with 'CONTAINER_ENABLE_SCHEDULING=TRUE'"
exit 1
fi
if var_true "${CONTAINER_ENABLE_MONITORING}" ; then
print_error "Manual / Exit after execution mode doesn't work with 'CONTAINER_ENABLE_MONITORING=TRUE'"
exit 1
fi
if var_true "${CONTAINER_ENABLE_LOGSHIPPING}" ; then
print_error "Manual / Exit after execution mode doesn't work with 'CONTAINER_ENABLE_LOGSHIPPING=TRUE'"
exit 1
fi
fi
fi
}

View File

@@ -4,29 +4,16 @@ source /assets/functions/00-container
source /assets/functions/10-db-backup
source /assets/defaults/10-db-backup
PROCESS_NAME="db-backup"
CONTAINER_LOG_LEVEL=DEBUG
case "${1,,}" in
"now" | "manual" )
DB_DUMP_BEGIN=+0
manual=TRUE
;;
* )
sleep 5
;;
esac
bootstrap_compression
bootstrap_variables
### Container Startup
print_debug "Backup routines Initialized on $(date)"
### Wait for Next time to start backup
case "${1,,}" in
"now" | "manual" )
:
;;
* )
if [ "${MODE,,}" = "manual" ] || [ "${1,,}" = "manual" ] || [ "${1,,}" = "now" ]; then
DB_DUMP_BEGIN=+0
manual=TRUE
print_debug "Detected Manual Mode"
else
sleep 5
current_time=$(date +"%s")
today=$(date +"%Y%m%d")
@@ -43,19 +30,12 @@ case "${1,,}" in
print_debug "Wait Time: ${waittime} Target time: ${target_time} Current Time: ${current_time}"
print_info "Next Backup at $(date -d @${target_time} +"%Y-%m-%d %T %Z")"
sleep $waittime
;;
esac
fi
### Commence Backup
while true; do
mkdir -p "${TEMP_LOCATION}"
backup_start_time=$(date +"%s")
now=$(date +"%Y%m%d-%H%M%S")
now_time=$(date +"%H:%M:%S")
now_date=$(date +"%Y-%m-%d")
target=${dbtype}_${dbname}_${dbhost}_${now}.sql
### Take a Dump
print_debug "Backup routines started time: $(date +'%Y-%m-%d %T %Z')"
case "${dbtype,,}" in
"couch" )
check_availability
@@ -93,48 +73,17 @@ while true; do
backup_finish_time=$(date +"%s")
backup_total_time=$(echo $((backup_finish_time-backup_start_time)))
if [ -z "$master_exit_code" ] ; then master_exit_code="0" ; fi
print_info "Backup routines finish time: $(date -d @${backup_finish_time} +"%Y-%m-%d %T %Z") with overall exit code ${master_exit_code}"
print_notice "Backup routines time taken: $(echo ${backup_total_time} | awk '{printf "Hours: %d Minutes: %02d Seconds: %02d", $1/3600, ($1/60)%60, $1%60}')"
print_info "Backup finish time: $(date -d @${backup_finish_time} +"%Y-%m-%d %T %Z")"
print_notice "Backup time elapsed: $(echo ${backup_total_time} | awk '{printf "Hours: *%d* Minutes: *%02d* Seconds: *%02d*", $1/3600, ($1/60)%60, $1%60}')"
### Zabbix / Monitoring stats
if var_true "${CONTAINER_ENABLE_MONITORING}" ; then
print_notice "Sending Backup Statistics to Zabbix"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o "$(stat -c%s "${DB_DUMP_TARGET}"/"${target}")"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o "$(date -r "${DB_DUMP_TARGET}"/"${target}" +'%s')"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.status -o "$(date -r "${DB_DUMP_TARGET}"/"${target}" +'%s')"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.backup_duration -o "$(echo $((backup_finish_time-backup_start_time)))"
fi
### Automatic Cleanup
if [ -n "${DB_CLEANUP_TIME}" ]; then
print_info "Cleaning up old backups"
mkdir -p "${DB_DUMP_TARGET}"
find "${DB_DUMP_TARGET}"/ -mmin +"${DB_CLEANUP_TIME}" -iname "*" -exec rm {} \;
fi
### Post Script Support
if [ -n "${POST_SCRIPT}" ] ; then
print_notice "Found POST_SCRIPT environment variable. Executing '${POST_SCRIPT}"
eval "${POST_SCRIPT}" "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${backup_start_timme}" "${backup_finish_time}" "${backup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
fi
### Post Backup Custom Script Support
if [ -d "/assets/custom-scripts/" ] ; then
print_notice "Found Post Backup Custom Script to execute"
for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
print_notice "Running Script: '${f}'"
## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE
${f} "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${backup_start_timme}" "${backup_finish_time}" "${backup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
done
fi
cleanup_old_data
if var_true "${manual}" ; then
print_debug "Exitting due to manual mode"
exit ${exit_code};
exit ${master_exit_code};
else
### Go back to sleep until next backup time
sleep $(($DB_DUMP_FREQ*60-backup_total_time))
print_notice "Sleeping for another $(($DB_DUMP_FREQ*60-backup_total_time)) seconds. Waking up at $(date -d@"$(( $(date +%s)+$(($DB_DUMP_FREQ*60-backup_total_time))))" +"%Y-%m-%d %T %Z") "
sleep $(($DB_DUMP_FREQ*60-backup_total_time))
fi
done

View File

@@ -263,6 +263,10 @@ get_dbtype() {
p_dbtype=$(basename -- "${r_filename}" | cut -d _ -f 1)
case "${p_dbtype}" in
mongo* )
parsed_type=true
print_debug "Parsed DBType: MongoDB"
;;
mariadb | mysql )
parsed_type=true
print_debug "Parsed DBType: MariaDB/MySQL"
@@ -320,7 +324,9 @@ EOF
What Database Type are you looking to restore?
${q_dbtype_menu}
M ) MySQL / MariaDB
O ) MongoDB
P ) Postgresql
Q ) Quit
@@ -335,6 +341,10 @@ EOF
r_dbtype=mysql
break
;;
o* )
r_dbtype=mongo
break
;;
p* )
r_dbtype=postgresql
break
@@ -351,13 +361,17 @@ EOF
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}E${cdgy}\) \| \(${cwh}M${cdgy}\) \| \(${cwh}P${cdgy}\) : ${cwh}${coff}) " q_dbtype
case "${q_dbtype,,}" in
e* | "" )
r_dbtype=${db_name}
r_dbtype=${DB_TYPE}
break
;;
m* )
r_dbtype=mysql
break
;;
o* )
r_dbtype=mongo
break
;;
p* )
r_dbtype=postgresql
break
@@ -381,6 +395,10 @@ EOF
r_dbtype=mysql
break
;;
o* )
r_dbtype=mongo
break
;;
p* )
r_dbtype=postgresql
break
@@ -398,7 +416,7 @@ EOF
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}E${cdgy}\) \| \(${cwh}F${cdgy}\) \| \(${cwh}M${cdgy}\) \| \(${cwh}P${cdgy}\) : ${cwh}${coff}) " q_dbtype
case "${q_dbtype,,}" in
e* | "" )
r_dbtype=${dbtype}
r_dbtype=${DB_TYPE}
break
;;
f* )
@@ -697,9 +715,9 @@ EOF
c* )
counter=1
q_dbuser=" "
while [[ $q_dbname = *" "* ]]; do
while [[ $q_dbuser = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Usernames can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB User do you wish to use:\ ${coff})" q_dbname
read -e -p "$(echo -e ${clg}** ${cdgy}What DB User do you wish to use:\ ${coff})" q_dbuser
(( counter+=1 ))
done
r_dbuser=${q_dbuser}
@@ -735,7 +753,7 @@ EOF
q_dbpass_menu=$(cat <<EOF
C ) Custom Entered Database Password
E ) Environment Variable DB_PASS: '${DB_PASS}'
E ) Environment Variable DB_PASS
EOF
)
fi
@@ -766,9 +784,9 @@ EOF
c* )
counter=1
q_dbpass=" "
while [[ $q_dbname = *" "* ]]; do
while [[ $q_dbpass = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Passwords can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Password do you wish to use:\ ${coff})" q_dbname
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Password do you wish to use:\ ${coff})" q_dbpass
(( counter+=1 ))
done
r_dbpass=${q_dbpass}
@@ -826,7 +844,7 @@ if [ -n "${3}" ]; then
if [ ! -f "${3}" ]; then
get_dbhost
else
r_dbtype="${3}"
r_dbhost="${3}"
fi
else
get_dbhost
@@ -920,8 +938,23 @@ case "${r_dbtype}" in
pv ${r_filename} | ${decompress_cmd}cat | psql -d ${r_dbname} -h ${r_dbhost} -p ${r_dbport} -U ${r_dbuser}
exit_code=$?
;;
mongo )
print_info "Restoring '${r_filename}' into '${r_dbhost}'/'${r_dbname}'"
if [ "${ENABLE_COMPRESSION,,}" != "none" ] && [ "${ENABLE_COMPRESSION,,}" != "false" ] ; then
mongo_compression="--gzip"
fi
if [ -n "${r_dbuser}" ] ; then
mongo_user="-u ${r_dbuser}"
fi
if [ -n "${r_dbpass}" ] ; then
mongo_pass="-u ${r_dbpass}"
fi
mongorestore ${mongo_compression} -d ${r_dbname} -h ${r_dbhost} --port ${r_dbport} ${mongo_user} ${mongo_pass} --archive=${r_filename}
exit_code=$?
;;
* )
exit 3
print_info "Unable to restore DB of type '${r_dbtype}'"
exit_code=3
;;
esac

View File

@@ -0,0 +1,249 @@
{
"zabbix_export": {
"version": "6.0",
"date": "2022-03-18T13:32:12Z",
"groups": [
{
"uuid": "fa56524b5dbb4ec09d9777a6f7ccfbe4",
"name": "DB/Backup"
},
{
"uuid": "748ad4d098d447d492bb935c907f652f",
"name": "Templates/Databases"
}
],
"templates": [
{
"uuid": "5fc64d517afb4cc5bc09a3ef58b43ef7",
"template": "DB Backup",
"name": "DB Backup",
"description": "Template for Docker DB Backup Image\n\nMeant for use specifically with https://github.com/tiredofit/docker-db-backup\nLast tested with version 3.0.2",
"groups": [
{
"name": "DB/Backup"
},
{
"name": "Templates/Databases"
}
],
"items": [
{
"uuid": "72fd00fa2dd24e479f5affe03e8711d8",
"name": "DB Backup: Backup Duration",
"type": "TRAP",
"key": "dbbackup.backup_duration",
"delay": "0",
"history": "7d",
"units": "uptime",
"description": "How long the backup took",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
]
},
{
"uuid": "3549a2c9d56849babc6dc3c855484c1e",
"name": "DB Backup: Backup Time",
"type": "TRAP",
"key": "dbbackup.datetime",
"delay": "0",
"history": "7d",
"units": "unixtime",
"request_method": "POST",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
],
"triggers": [
{
"uuid": "3ac1e074ffea46eb8002c9c08a85e7b4",
"expression": "nodata(/DB Backup/dbbackup.datetime,2d)=1",
"name": "DB-Backup: No backups detected in 2 days",
"priority": "DISASTER",
"manual_close": "YES"
},
{
"uuid": "b8b5933dfa1a488c9c37dd7f4784c1ff",
"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)=0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)<>0",
"name": "DB Backup: No Backups occurred in 2 days",
"priority": "AVERAGE"
},
{
"uuid": "35c5f420d0e142cc9601bae38decdc40",
"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)=0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)<>0",
"name": "DB Backup: No Backups occurred in 3 days",
"priority": "AVERAGE"
},
{
"uuid": "03c3719d82c241e886a0383c7d908a77",
"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)=0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)<>0",
"name": "DB Backup: No Backups occurred in 4 days",
"priority": "AVERAGE"
},
{
"uuid": "1634a03e44964e42b7e0101f5f68499c",
"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)=0",
"name": "DB Backup: No Backups occurred in 5 days or more",
"priority": "HIGH"
}
]
},
{
"uuid": "467dfec952b34f5aa4cc890b4351b62d",
"name": "DB Backup: Backup Size",
"type": "TRAP",
"key": "dbbackup.size",
"delay": "0",
"history": "7d",
"units": "B",
"request_method": "POST",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
],
"triggers": [
{
"uuid": "a41eb49b8a3541afb6de247dca750e38",
"expression": "last(/DB Backup/dbbackup.size)/last(/DB Backup/dbbackup.size,#2)>1.2",
"name": "DB Backup: 20% Greater in Size",
"priority": "WARNING",
"manual_close": "YES"
},
{
"uuid": "422f66be5049403293f3d96fc53f20cd",
"expression": "last(/DB Backup/dbbackup.size)/last(/DB Backup/dbbackup.size,#2)<0.2",
"name": "DB Backup: 20% Smaller in Size",
"priority": "WARNING",
"manual_close": "YES"
},
{
"uuid": "d6d9d875b92f4d799d4bc89aabd4e90e",
"expression": "last(/DB Backup/dbbackup.size)<1K",
"name": "DB Backup: empty",
"priority": "HIGH"
}
]
},
{
"uuid": "a6b13e8b46a64abab64a4d44d620d272",
"name": "DB Backup: Last Backup Status",
"type": "TRAP",
"key": "dbbackup.status",
"delay": "0",
"history": "7d",
"description": "Maps Exit Codes received by backup applications",
"valuemap": {
"name": "DB Backup Status"
},
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
],
"triggers": [
{
"uuid": "23d71e356f96493180f02d4b84a79fd6",
"expression": "last(/DB Backup/dbbackup.status)=1",
"name": "DB Backup: Failed Backup Detected",
"priority": "HIGH",
"manual_close": "YES"
}
]
}
],
"tags": [
{
"tag": "Service",
"value": "Backup"
},
{
"tag": "Service",
"value": "Database"
}
],
"dashboards": [
{
"uuid": "90c81bb47184401ca9663626784a6f30",
"name": "DB Backup",
"pages": [
{
"widgets": [
{
"type": "GRAPH_CLASSIC",
"name": "Backup Size",
"width": "23",
"height": "5",
"fields": [
{
"type": "GRAPH",
"name": "graphid",
"value": {
"name": "DB Backup: Backup Size",
"host": "DB Backup"
}
}
]
}
]
}
]
}
],
"valuemaps": [
{
"uuid": "82f3a3d01b3c42b8942b59d2363724e0",
"name": "DB Backup Status",
"mappings": [
{
"value": "0",
"newvalue": "OK"
},
{
"type": "GREATER_OR_EQUAL",
"value": "1",
"newvalue": "FAIL"
}
]
}
]
}
],
"graphs": [
{
"uuid": "6e02c200b76046bab76062cd1ab086b2",
"name": "DB Backup: Backup Duration",
"graph_items": [
{
"color": "199C0D",
"item": {
"host": "DB Backup",
"key": "dbbackup.backup_duration"
}
}
]
},
{
"uuid": "b881ee18f05c4f4c835982c9dfbb55d6",
"name": "DB Backup: Backup Size",
"type": "STACKED",
"graph_items": [
{
"sortorder": "1",
"color": "1A7C11",
"item": {
"host": "DB Backup",
"key": "dbbackup.size"
}
}
]
}
]
}
}

View File

@@ -1,270 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<zabbix_export>
<version>3.4</version>
<date>2018-02-02T19:03:49Z</date>
<groups>
<group>
<name>DB - Backup</name>
</group>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>Service - DB Backup</template>
<name>Service - DB Backup</name>
<description/>
<groups>
<group>
<name>DB - Backup</name>
</group>
<group>
<name>Templates</name>
</group>
</groups>
<applications>
<application>
<name>DB Backup</name>
</application>
</applications>
<items>
<item>
<name>Backup Time</name>
<type>2</type>
<snmp_community/>
<snmp_oid/>
<key>dbbackup.datetime</key>
<delay>0</delay>
<history>365d</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units>unixtime</units>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>DB Backup</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Backup Size</name>
<type>2</type>
<snmp_community/>
<snmp_oid/>
<key>dbbackup.size</key>
<delay>0</delay>
<history>365d</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units>byte</units>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>DB Backup</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
</items>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
</template>
</templates>
<triggers>
<trigger>
<expression>{Service - DB Backup:dbbackup.size.change()}&gt;20</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>DB Backup is 20% Greater in Size</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>2</priority>
<description/>
<type>0</type>
<manual_close>1</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - DB Backup:dbbackup.size.change()}&lt;20</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>DB Backup is 20% Smaller in Size</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>2</priority>
<description/>
<type>0</type>
<manual_close>1</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - DB Backup:dbbackup.size.last()}&lt;1K</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>DB Backup is empty</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>4</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - DB Backup:dbbackup.datetime.fuzzytime(172800)}=0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(259200)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(345600)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(432800)}&lt;&gt;0</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>No Backups occurred in 2 days</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>3</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - DB Backup:dbbackup.datetime.fuzzytime(172800)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(259200)}=0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(345600)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(432800)}&lt;&gt;0</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>No Backups occurred in 3 days</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>3</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - DB Backup:dbbackup.datetime.fuzzytime(172800)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(259200)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(345600)}=0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(432800)}&lt;&gt;0</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>No Backups occurred in 4 days</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>3</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - DB Backup:dbbackup.datetime.fuzzytime(172800)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(259200)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(345600)}&lt;&gt;0 and {Service - DB Backup:dbbackup.datetime.fuzzytime(432800)}=0</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>No Backups occurred in 5 days or more</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>4</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
</triggers>
<graphs>
<graph>
<name>Backup Size</name>
<width>900</width>
<height>200</height>
<yaxismin>0.0000</yaxismin>
<yaxismax>100.0000</yaxismax>
<show_work_period>1</show_work_period>
<show_triggers>1</show_triggers>
<type>1</type>
<show_legend>1</show_legend>
<show_3d>0</show_3d>
<percent_left>0.0000</percent_left>
<percent_right>0.0000</percent_right>
<ymin_type_1>0</ymin_type_1>
<ymax_type_1>0</ymax_type_1>
<ymin_item_1>0</ymin_item_1>
<ymax_item_1>0</ymax_item_1>
<graph_items>
<graph_item>
<sortorder>0</sortorder>
<drawtype>0</drawtype>
<color>1A7C11</color>
<yaxisside>0</yaxisside>
<calc_fnc>2</calc_fnc>
<type>0</type>
<item>
<host>Service - DB Backup</host>
<key>dbbackup.size</key>
</item>
</graph_item>
</graph_items>
</graph>
</graphs>
</zabbix_export>