Compare commits


13 Commits

Author       SHA1        Message                                                      Date
Dave Conroy  d843d21a1b  Release 3.1.2 - See CHANGELOG.md                             2022-03-29 08:09:36 -07:00
Dave Conroy  24ed769429  Release 3.1.1 - See CHANGELOG.md                             2022-03-28 10:29:00 -07:00
Dave Conroy  cbd87a5ede  Update README.md                                             2022-03-23 19:16:51 -07:00
Dave Conroy  13214665c9  Release 3.1.0 - See CHANGELOG.md                             2022-03-23 16:21:12 -07:00
Dave Conroy  2e71f617a1  Merge pull request #107 from piemonkey/mongo-restore (Add Mongo support to restore script)  2022-03-23 12:24:43 -07:00
Dave Conroy  fbe9dde4a1  Release 3.0.16 - See CHANGELOG.md                            2022-03-23 07:57:28 -07:00
Dave Conroy  eb2a18672b  Release 3.0.15 - See CHANGELOG.md                            2022-03-22 18:27:57 -07:00
Dave Conroy  5f784ed156  Tweak Example                                                2022-03-22 09:57:28 -07:00
Dave Conroy  d9a4690ea2  Release 3.0.14 - See CHANGELOG.md                            2022-03-22 07:52:15 -07:00
Rich         e7eb88c32a  Give feedback if restore script doesn't support db type      2022-03-22 09:12:34 +01:00
Rich         52dc510b89  Add auto restore support for mongodb                         2022-03-22 09:12:16 +01:00
Rich         06677dbc8b  Fix typo setting dbhost in restore script                    2022-03-22 09:12:05 +01:00
Rich         e0dd2bc91b  Set db type from env vars correctly during restore           2022-03-22 09:11:54 +01:00
7 changed files with 372 additions and 224 deletions

View File: CHANGELOG.md

@@ -1,3 +1,51 @@
+## 3.1.2 2022-03-29 <dave at tiredofit dot ca>
+### Changed
+- Fix for blank Notice when an individual backup completed (time taken)
+## 3.1.1 2022-03-28 <dave at tiredofit dot ca>
+### Changed
+- Resolve some issues with backups of Mongo and others not saving the proper timestamp
+## 3.1.0 2022-03-23 <dave at tiredofit dot ca>
+### Added
+- Backup multiple databases by separating them with commas e.g. `db1,db2`
+- Backup ALL databases by setting DB_NAME to ALL
+- Exclude databases from being backed up (comma separated) when DB_NAME is ALL e.g. DB_NAME_EXCLUDE=db3,db4
+- Backup timers execute per database, not per whole script run
+- Post scripts run after each database backup
+- Checksum generation does not occur when a database backup fails
+- Database cleanup does not occur when any database backup fails throughout the session
+- MongoDB now supported with 'restore' script - Credit to piemonkey@github
+- Lots of reshuffling and optimizations of the script due to the botched 3.0 release
+### Changed
+- ZSTD replaces GZ as the default compression type
+- Output is cleaner when backups are occurring
+## 3.0.16 2022-03-23 <dave at tiredofit dot ca>
+### Changed
+- Fix for SPLIT_DB not looping through all database names properly
+## 3.0.15 2022-03-22 <dave at tiredofit dot ca>
+### Changed
+- Rework compression function
+- Fix for Bzip compression failing
+## 3.0.14 2022-03-22 <dave at tiredofit dot ca>
+### Changed
+- Rearrange Notice stating when the next backup is going to start
 ## 3.0.13 2022-03-21 <dave at tiredofit dot ca>
 ### Added

View File: README.md

@@ -16,7 +16,8 @@ Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis servers.
 * dump to local filesystem or backup to S3 Compatible services
 * select database user and password
-* backup all databases
+* backup all databases, a single database, or multiple databases
+* backup all to separate files or one single file
 * choose to have an MD5 or SHA1 sum after backup for verification
 * delete old backups after specific amount of time
 * choose compression type (none, gz, bz, xz, zstd)
@@ -127,24 +128,25 @@ Be sure to view the following repositories to understand all the customizable op
 | `MODE` | `AUTO` mode to use internal scheduling routines or `MANUAL` to simply use this as manual backups only executed by your own means | `AUTO` |
 | `MANUAL_RUN_FOREVER` | `TRUE` or `FALSE` if you wish to make the container exit after the backup | `TRUE` |
 | `TEMP_LOCATION` | Perform backups and compression in this temporary directory | `/tmp/backups/` |
-| `DB_AUTH` | (Mongo Only - Optional) Authentication Database | |
 | `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed. | `FALSE` |
 | `POST_SCRIPT` | Fill this variable in with a command to execute after the script finishes backing up | |
-| `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create Seperate DB Backups instead of all in one. | `FALSE` |
+| `SPLIT_DB` | For each backup, create a new archive. `TRUE` or `FALSE` (MySQL and PostgreSQL only) | `TRUE` |
 ### Database Specific Options
 | Parameter | Description | Default |
 | --------- | ----------- | ------- |
+| `DB_AUTH` | (Mongo Only - Optional) Authentication Database | |
 | `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `sqlite3` | |
 | `DB_HOST` | Server Hostname e.g. `mariadb`. For `sqlite3`, full path to DB file e.g. `/backup/db.sqlite3` | |
-| `DB_NAME` | Schema Name e.g. `database` | |
-| `DB_USER` | username for the database - use `root` to backup all MySQL of them. | |
+| `DB_NAME` | Schema Name e.g. `database`, or `ALL` to backup all databases the user has access to. Backup multiple by separating with commas e.g. `db1,db2` | |
+| `DB_NAME_EXCLUDE` | If using `ALL` - use this to exclude databases (comma separated) from being backed up | |
+| `DB_USER` | username for the database(s) - Can use `root` for MySQL | |
 | `DB_PASS` | (optional if DB doesn't require it) password for the database | |
 | `DB_PORT` | (optional) Set port to connect to DB_HOST. Defaults are provided | varies |
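
As a sketch of how these options combine, the following run backs up every database the user can see except `db3` and `db4`, one archive per database. The image name, hostname, and credentials are illustrative only:

    docker run -d --name db-backup \
      -v "$(pwd)/backups:/backup" \
      -e DB_TYPE=mariadb \
      -e DB_HOST=example-db \
      -e DB_USER=root \
      -e DB_PASS=password \
      -e DB_NAME=ALL \
      -e DB_NAME_EXCLUDE=db3,db4 \
      -e SPLIT_DB=TRUE \
      tiredofit/db-backup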
 ### Scheduling Options
 | Parameter | Description | Default |
 | --------- | ----------- | ------- |
-| `DB_DUMP_FREQ` | How often to do a dump, in minutes. Defaults to 1440 minutes, or once per day. | `1440` |
+| `DB_DUMP_FREQ` | How often to do a dump, in minutes after the first backup. Defaults to 1440 minutes, or once per day. | `1440` |
 | `DB_DUMP_BEGIN` | What time to do the first dump. Defaults to immediate. Must be in one of two formats | |
 | | Absolute HHMM, e.g. `2330` or `0415` | |
 | | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half | |
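
For example, to take the first dump at 23:30 and a new dump every 12 hours thereafter (values illustrative):

    DB_DUMP_BEGIN=2330   # absolute HHMM: first dump at 23:30
    DB_DUMP_FREQ=720     # then every 720 minutes (12 hours)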
@@ -154,7 +156,7 @@ Be sure to view the following repositories to understand all the customizable op
 ### Backup Options
 | Parameter | Description | Default |
 | --------- | ----------- | ------- |
-| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` | `GZ` |
+| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` | `ZSTD` |
 | `COMPRESSION_LEVEL` | Numerical value of what level of compression to use, most allow `1` to `9` except for `ZSTD` which allows for `1` to `19` | `3` |
 | `ENABLE_PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` | `TRUE` |
 | `PARALLEL_COMPRESSION_THREADS` | Maximum amount of threads to use when compressing - Integer value e.g. `8` | `autodetected` |
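
Since the checksum is written alongside the archive with a matching `.md5` or `.sha1` extension, a backup can be verified later with the standard tools; the filename below is illustrative:

    cd backups
    md5sum -c mysql_db1_example-db_20220329-080936.sql.zst.md5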
@@ -187,7 +189,6 @@ If `BACKUP_LOCATION` = `S3` then the following options are used.
 ## Maintenance
 ### Shell Access
 For debugging and maintenance purposes you may want to access the container's shell.
@@ -201,7 +202,7 @@ Manual Backups can be performed by entering the container and typing `backup-now`
 - Recently there was a request to have the container work with Kubernetes cron scheduling. This can theoretically be accomplished by setting the container `MODE=MANUAL` and then setting `MANUAL_RUN_FOREVER=FALSE` - You would also want to disable a few features from the upstream base images, specifically `CONTAINER_ENABLE_SCHEDULING` and `CONTAINER_ENABLE_MONITORING`. This should allow the container to start, execute a backup, and then exit cleanly. An alternative way to run the script is to execute `/etc/services.available/10-db-backup/run`. A sketch of such a one-shot run follows below.
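
Assuming the image name and connection values are replaced with your own, a one-shot run would look like:

    docker run --rm \
      -e MODE=MANUAL \
      -e MANUAL_RUN_FOREVER=FALSE \
      -e CONTAINER_ENABLE_SCHEDULING=FALSE \
      -e CONTAINER_ENABLE_MONITORING=FALSE \
      -e DB_TYPE=mariadb \
      -e DB_HOST=example-db \
      -e DB_USER=root \
      -e DB_PASS=password \
      -e DB_NAME=db1 \
      -v "$(pwd)/backups:/backup" \
      tiredofit/db-backup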
 ### Restoring Databases
-Entering in the container and executing `restore` will execute a menu based script to restore your backups.
+Entering the container and executing `restore` will launch a menu based script to restore your backups - MariaDB, Postgres, and Mongo supported.
 You will be presented with a series of menus allowing you to choose:
 - What file to restore
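
For example, to open the interactive restore menu in a running container (container name assumed):

    docker exec -it db-backup restore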

View File: example docker-compose file

@@ -20,7 +20,7 @@ services:
       - example-db
     volumes:
       - ./backups:/backup
-      - ./post-script.sh:/assets/custom-scripts/post-script.sh
+      #- ./post-script.sh:/assets/custom-scripts/post-script.sh
     environment:
       - DB_TYPE=mariadb
       - DB_HOST=example-db
@@ -30,8 +30,8 @@ services:
       - DB_DUMP_FREQ=1440
       - DB_DUMP_BEGIN=0000
       - DB_CLEANUP_TIME=8640
-      - CHECKSUM=MD5
-      - COMPRESSION=XZ
+      - CHECKSUM=SHA1
+      - COMPRESSION=ZSTD
       - SPLIT_DB=FALSE
     restart: always

View File: /assets/defaults/10-db-backup

@@ -2,7 +2,7 @@
 BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
 CHECKSUM=${CHECKSUM:-"MD5"}
-COMPRESSION=${COMPRESSION:-"GZ"}
+COMPRESSION=${COMPRESSION:-"ZSTD"}
 COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
 DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
 DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}

View File: /assets/functions/10-db-backup

@@ -1,22 +1,5 @@
 #!/command/with-contenv bash
-bootstrap_compression() {
-    ### Set Compression Options
-    if var_true "${ENABLE_PARALLEL_COMPRESSION}" ; then
-        print_debug "Utilizing '${PARALLEL_COMPRESSION_THREADS}' compression threads"
-        bzip="pbzip2 -${COMPRESSION_LEVEL} -p ${PARALLEL_COMPRESSION_THREADS}"
-        gzip="pigz -${COMPRESSION_LEVEL} -p ${PARALLEL_COMPRESSION_THREADS}"
-        xzip="pixz -${COMPRESSION_LEVEL} -p ${PARALLEL_COMPRESSION_THREADS}"
-        zstd="zstd --rm -${COMPRESSION_LEVEL} -T${PARALLEL_COMPRESSION_THREADS}"
-    else
-        print_debug "Utilizing single compression thread"
-        bzip="pbzip2 -${COMPRESSION_LEVEL} -p 1"
-        gzip="pigz -${COMPRESSION_LEVEL} -p 1"
-        xzip="pixz -${COMPRESSION_LEVEL} -p 1"
-        zstd="zstd --rm -${COMPRESSION_LEVEL} -T1"
-    fi
-}
 bootstrap_variables() {
     case "${dbtype,,}" in
         couch* )
@@ -41,6 +24,7 @@ bootstrap_variables() {
             dbtype=mysql
             dbport=${DB_PORT:-3306}
             [[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
+            sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
         ;;
         "mssql" | "microsoftsql" )
             apkArch="$(apk --print-arch)"; \
@@ -55,6 +39,7 @@ bootstrap_variables() {
             dbtype=pgsql
             dbport=${DB_PORT:-5432}
             [[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
+            sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
         ;;
         "redis" )
             dbtype=redis
@@ -92,59 +77,68 @@ bootstrap_variables() {
 }
 backup_couch() {
+    pre_dbbackup
     target=couch_${dbname}_${dbhost}_${now}.txt
     compression
-    print_notice "Dumping CouchDB database: '${dbname}'"
-    curl -X GET http://${dbhost}:${dbport}/${dbname}/_all_docs?include_docs=true | $dumpoutput > ${TEMP_LOCATION}/${target}
+    print_notice "Dumping CouchDB database: '${dbname}' ${compression_string}"
+    curl -X GET http://${dbhost}:${dbport}/${dbname}/_all_docs?include_docs=true | $compress_cmd > ${TEMP_LOCATION}/${target}
     exit_code=$?
-    check_exit_code
+    check_exit_code $target
     generate_checksum
-    move_backup
+    move_dbbackup
+    post_dbbackup $dbname
 }
 backup_influx() {
     if [ "${ENABLE_COMPRESSION,,}" = "none" ] || [ "${ENABLE_COMPRESSION,,}" = "false" ] ; then
         :
     else
-        print_notice "Compressing InfluxDB backup with gzip"
         influx_compression="-portable"
+        compression_string=" and compressing with gzip"
     fi
-    for DB in ${DB_NAME}; do
-        print_notice "Dumping Influx database: '${DB}'"
-        target=influx_${DB}_${dbhost}_${now}
-        influxd backup ${influx_compression} -database $DB -host ${dbhost}:${dbport} ${TEMP_LOCATION}/${target}
+    for db in ${DB_NAME}; do
+        pre_dbbackup
+        target=influx_${db}_${dbhost}_${now}
+        print_notice "Dumping Influx database: '${db}' ${compression_string}"
+        influxd backup ${influx_compression} -database $db -host ${dbhost}:${dbport} ${TEMP_LOCATION}/${target}
         exit_code=$?
-        check_exit_code
+        check_exit_code $target
        generate_checksum
-        move_backup
+        move_dbbackup
+        post_dbbackup $db
    done
 }
 backup_mongo() {
+    pre_dbbackup
     if [ "${ENABLE_COMPRESSION,,}" = "none" ] || [ "${ENABLE_COMPRESSION,,}" = "false" ] ; then
         target=${dbtype}_${dbname}_${dbhost}_${now}.archive
     else
-        print_notice "Compressing MongoDB backup with gzip"
         target=${dbtype}_${dbname}_${dbhost}_${now}.archive.gz
         mongo_compression="--gzip"
+        compression_string="and compressing with gzip"
     fi
-    print_notice "Dumping MongoDB database: '${DB_NAME}'"
+    print_notice "Dumping MongoDB database: '${DB_NAME}' ${compression_string}"
     mongodump --archive=${TEMP_LOCATION}/${target} ${mongo_compression} --host ${dbhost} --port ${dbport} ${MONGO_USER_STR}${MONGO_PASS_STR}${MONGO_AUTH_STR}${MONGO_DB_STR} ${EXTRA_OPTS}
     exit_code=$?
-    check_exit_code
+    check_exit_code $target
     cd "${TEMP_LOCATION}"
     generate_checksum
-    move_backup
+    move_dbbackup
+    post_dbbackup
 }
 backup_mssql() {
+    pre_dbbackup
     target=mssql_${dbname}_${dbhost}_${now}.bak
     print_notice "Dumping MSSQL database: '${dbname}'"
     /opt/mssql-tools/bin/sqlcmd -E -C -S ${dbhost}\,${dbport} -U ${dbuser} -P ${dbpass} -Q "BACKUP DATABASE \[${dbname}\] TO DISK = N'${TEMP_LOCATION}/${target}' WITH NOFORMAT, NOINIT, NAME = '${dbname}-full', SKIP, NOREWIND, NOUNLOAD, STATS = 10"
     exit_code=$?
-    check_exit_code
+    check_exit_code $target
     generate_checksum
-    move_backup
+    move_dbbackup
+    post_dbbackup $dbname
 }
 backup_mysql() {
@@ -154,64 +148,111 @@ backup_mysql() {
     if var_true "${MYSQL_STORED_PROCEDURES}" ; then
         stored_procedures="--routines"
     fi
+    if [ "${dbname,,}" = "all" ] ; then
+        print_debug "Preparing to back up everything except for information_schema and _* prefixes"
+        db_names=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema )
+        if [ -n "${DB_NAME_EXCLUDE}" ] ; then
+            db_names_exclusions=$(echo "${DB_NAME_EXCLUDE}" | tr ',' '\n')
+            for db_exclude in ${db_names_exclusions} ; do
+                print_debug "Excluding '${db_exclude}' from ALL DB_NAME backups"
+                db_names=$(echo "$db_names" | sed "/${db_exclude}/d" )
+            done
+        fi
+    else
+        db_names=$(echo "${dbname}" | tr ',' '\n')
+    fi
+    print_debug "Databases Found: $(echo ${db_names} | xargs | tr ' ' ',')"
     if var_true "${SPLIT_DB}" ; then
-        DATABASES=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema)
-        for db in "${DATABASES}" ; do
-            if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
-                print_debug "Backing up everything except for information_schema and _* prefixes"
-                print_notice "Dumping MySQL/MariaDB database: '${db}'"
-                target=mysql_${db}_${dbhost}_${now}.sql
-                compression
-                mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} --databases $db | $dumpoutput > ${TEMP_LOCATION}/${target}
-                exit_code=$?
-                check_exit_code
-                generate_checksum
-                move_backup
-            fi
+        for db in ${db_names} ; do
+            pre_dbbackup
+            target=mysql_${db}_${dbhost}_${now}.sql
+            compression
+            print_notice "Dumping MySQL/MariaDB database: '${db}' ${compression_string}"
+            mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} --databases $db | $compress_cmd > "${TEMP_LOCATION}"/"${target}"
+            exit_code=$?
+            check_exit_code $target
+            generate_checksum
+            move_dbbackup
+            post_dbbackup $db
         done
     else
-        compression
-        print_notice "Dumping MySQL/MariaDB database: '${DB_NAME}'"
-        mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -A -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
+        print_debug "Not splitting database dumps into their own files"
+        pre_dbbackup
+        target=mysql_all_${dbhost}_${now}.sql
+        compression
+        print_notice "Dumping all MySQL / MariaDB databases: '$(echo ${db_names} | xargs | tr ' ' ',')' ${compression_string}"
+        mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -h $dbhost -P $dbport -u$dbuser ${single_transaction} ${stored_procedures} ${EXTRA_OPTS} --databases $(echo ${db_names} | xargs) | $compress_cmd > "${TEMP_LOCATION}"/"${target}"
         exit_code=$?
-        check_exit_code
+        check_exit_code $target
         generate_checksum
-        move_backup
+        move_dbbackup
+        post_dbbackup all
     fi
 }
 backup_pgsql() {
     export PGPASSWORD=${dbpass}
-    if var_true "${SPLIT_DB}" ; then
-        authdb=${DB_USER}
-        [ -n "${DB_NAME}" ] && authdb=${DB_NAME}
-        DATABASES=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
-        for db in "${DATABASES}"; do
-            print_notice "Dumping Postgresql database: $db"
+    authdb=${DB_USER}
+    if [ "${dbname,,}" = "all" ] ; then
+        print_debug "Preparing to back up all databases"
+        db_names=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
+        if [ -n "${DB_NAME_EXCLUDE}" ] ; then
+            db_names_exclusions=$(echo "${DB_NAME_EXCLUDE}" | tr ',' '\n')
+            for db_exclude in ${db_names_exclusions} ; do
+                print_debug "Excluding '${db_exclude}' from ALL DB_NAME backups"
+                db_names=$(echo "$db_names" | sed "/${db_exclude}/d" )
+            done
+        fi
+    else
+        db_names=$(echo "${dbname}" | tr ',' '\n')
+    fi
+    print_debug "Databases Found: $(echo ${db_names} | xargs | tr ' ' ',')"
+    if var_true "${SPLIT_DB}" ; then
+        for db in ${db_names} ; do
+            pre_dbbackup
             target=pgsql_${db}_${dbhost}_${now}.sql
             compression
-            pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
+            print_notice "Dumping PostgreSQL database: '${db}' ${compression_string}"
+            pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $compress_cmd > ${TEMP_LOCATION}/${target}
             exit_code=$?
-            check_exit_code
+            check_exit_code $target
             generate_checksum
-            move_backup
+            move_dbbackup
+            post_dbbackup $db
         done
     else
-        compression
-        print_notice "Dumping PostgreSQL: '${DB_NAME}'"
-        pg_dump -h ${dbhost} -U ${dbuser} -p ${dbport} ${dbname} ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
+        print_debug "Not splitting database dumps into their own files"
+        pre_dbbackup
+        target=pgsql_all_${dbhost}_${now}.sql
+        compression
+        print_notice "Dumping all PostgreSQL databases: '$(echo ${db_names} | xargs | tr ' ' ',')' ${compression_string}"
+        tmp_db_names=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
+        for r_db_name in $(echo $db_names | xargs); do
+            tmp_db_names=$(echo "$tmp_db_names" | xargs | sed "s|${r_db_name}||g" )
+        done
+        sleep 5
+        for x_db_name in ${tmp_db_names} ; do
+            pgexclude_arg=$(echo ${pgexclude_arg} --exclude-database=${x_db_name})
+        done
+        pg_dumpall -h ${dbhost} -U ${dbuser} -p ${dbport} ${pgexclude_arg} ${EXTRA_OPTS} | $compress_cmd > ${TEMP_LOCATION}/${target}
         exit_code=$?
-        check_exit_code
+        check_exit_code $target
         generate_checksum
-        move_backup
+        move_dbbackup
+        post_dbbackup all
     fi
 }
 backup_redis() {
-    target=redis_${db}_${dbhost}_${now}.rdb
-    echo bgsave | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} --rdb ${TEMP_LOCATION}/${target} ${EXTRA_OPTS}
+    pre_dbbackup
     print_notice "Dumping Redis - Flushing Redis Cache First"
+    target=redis_all_${dbhost}_${now}.rdb
+    echo bgsave | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} --rdb ${TEMP_LOCATION}/${target} ${EXTRA_OPTS}
     sleep 10
     try=5
     while [ $try -gt 0 ] ; do
@@ -227,24 +268,26 @@ backup_redis() {
     done
     target_original=${target}
     compression
-    $dumpoutput "${TEMP_LOCATION}/${target_original}"
+    $compress_cmd "${TEMP_LOCATION}/${target_original}"
     generate_checksum
-    move_backup
+    move_dbbackup
+    post_dbbackup all
 }
 backup_sqlite3() {
+    pre_dbbackup
     db=$(basename "$dbhost")
     db="${db%.*}"
     target=sqlite3_${db}_${now}.sqlite3
     compression
-    print_notice "Dumping sqlite3 database: '${dbhost}'"
+    print_notice "Dumping sqlite3 database: '${dbhost}' ${compression_string}"
     sqlite3 "${dbhost}" ".backup '${TEMP_LOCATION}/backup.sqlite3'"
     exit_code=$?
-    check_exit_code
-    cat "${TEMP_LOCATION}"/backup.sqlite3 | $dumpoutput > "${TEMP_LOCATION}/${target}"
+    check_exit_code $target
+    cat "${TEMP_LOCATION}"/backup.sqlite3 | $compress_cmd > "${TEMP_LOCATION}/${target}"
     generate_checksum
-    move_backup
+    move_dbbackup
+    post_dbbackup $db
 }
 check_availability() {
@@ -328,52 +371,82 @@ check_availability() {
 }
 check_exit_code() {
-    print_debug "Exit Code is ${exit_code}"
+    print_debug "DB Backup Exit Code is ${exit_code}"
     case "${exit_code}" in
         0 )
-            print_info "Backup completed successfully"
+            print_info "DB Backup of '${1}' completed successfully"
         ;;
         * )
-            print_error "Backup reported errors"
+            print_error "DB Backup of '${1}' reported errors"
+            master_exit_code=1
         ;;
     esac
 }
+cleanup_old_data() {
+    if [ -n "${DB_CLEANUP_TIME}" ]; then
+        if [ "${master_exit_code}" != 1 ]; then
+            print_info "Cleaning up old backups"
+            mkdir -p "${DB_DUMP_TARGET}"
+            find "${DB_DUMP_TARGET}"/ -mmin +"${DB_CLEANUP_TIME}" -iname "*" -exec rm {} \;
+        else
+            print_info "Skipping Cleaning up old backups because there were errors in backing up"
+        fi
+    fi
+}
 compression() {
+    if var_false "${ENABLE_PARALLEL_COMPRESSION}" ; then
+        PARALLEL_COMPRESSION_THREADS=1
+    fi
     case "${COMPRESSION,,}" in
         gz* )
-            print_notice "Compressing backup with gzip"
-            print_debug "Compression Level: '${COMPRESSION_LEVEL}'"
+            compress_cmd="pigz -${COMPRESSION_LEVEL} -p ${PARALLEL_COMPRESSION_THREADS} "
+            compression_type="gzip"
             target=${target}.gz
-            dumpoutput="$gzip "
         ;;
         bz* )
-            print_notice "Compressing backup with bzip2"
-            print_debug "Compression Level: '${COMPRESSION_LEVEL}'"
+            compress_cmd="pbzip2 -${COMPRESSION_LEVEL} -p${PARALLEL_COMPRESSION_THREADS} "
+            compression_type="bzip2"
             target=${target}.bz2
-            dumpoutput="$bzip "
         ;;
         xz* )
-            print_notice "Compressing backup with xzip"
-            print_debug "Compression Level: '${COMPRESSION_LEVEL}'"
+            compress_cmd="pixz -${COMPRESSION_LEVEL} -p ${PARALLEL_COMPRESSION_THREADS} "
+            compression_type="xzip"
             target=${target}.xz
-            dumpoutput="$xzip "
         ;;
         zst* )
-            print_notice "Compressing backup with zstd"
-            print_debug "Compression Level: '${COMPRESSION_LEVEL}'"
+            compress_cmd="zstd --rm -${COMPRESSION_LEVEL} -T${PARALLEL_COMPRESSION_THREADS}"
+            compression_type="zstd"
             target=${target}.zst
-            dumpoutput="$zstd "
         ;;
         "none" | "false")
-            print_notice "Not compressing backups"
-            dumpoutput="cat "
+            compress_cmd="cat "
+            compression_type="none"
         ;;
     esac
+    case "${CONTAINER_LOG_LEVEL,,}" in
+        "debug" )
+            if [ "${compression_type}" = "none" ] ; then
+                compression_string="with '${PARALLEL_COMPRESSION_THREADS}' threads"
+            else
+                compression_string="and compressing with '${compression_type}:${COMPRESSION_LEVEL}' with '${PARALLEL_COMPRESSION_THREADS}' threads"
+            fi
+        ;;
+        * )
+            if [ "${compression_type}" != "none" ] ; then
+                compression_string="and compressing with '${compression_type}'"
+            fi
+        ;;
+    esac
 }
 generate_checksum() {
     if var_true "${ENABLE_CHECKSUM}" ;then
+        if [ "${exit_code}" = "0" ] ; then
         case "${CHECKSUM,,}" in
             "md5" )
                 checksum_command="md5sum"
@@ -390,10 +463,13 @@ generate_checksum() {
         ${checksum_command} "${target}" > "${target}"."${checksum_extension}"
         checksum_value=$(${checksum_command} "${target}" | awk ' { print $1}')
         print_debug "${checksum_extension^^}: ${checksum_value} - ${target}"
+        else
+            print_warn "Skipping Checksum creation because backup did not complete successfully"
+        fi
     fi
 }
-move_backup() {
+move_dbbackup() {
     case "$SIZE_VALUE" in
         "b" | "bytes" )
             SIZE_VALUE=1
@@ -445,12 +521,60 @@ move_backup() {
     esac
 }
+pre_dbbackup() {
+    dbbackup_start_time=$(date +"%s")
+    now=$(date +"%Y%m%d-%H%M%S")
+    now_time=$(date +"%H:%M:%S")
+    now_date=$(date +"%Y-%m-%d")
+    target=${dbtype}_${dbname}_${dbhost}_${now}.sql
+}
+post_dbbackup() {
+    dbbackup_finish_time=$(date +"%s")
+    dbbackup_total_time=$(echo $((dbbackup_finish_time-dbbackup_start_time)))
+    if var_true "${CONTAINER_ENABLE_MONITORING}" ; then
+        print_notice "Sending Backup Statistics to Zabbix"
+        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o "$(stat -c%s "${DB_DUMP_TARGET}"/"${target}")"
+        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o "$(date -r "${DB_DUMP_TARGET}"/"${target}" +'%s')"
+        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.status -o "${exit_code}"
+        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.backup_duration -o "$(echo $((dbbackup_finish_time-dbbackup_start_time)))"
+    fi
+    ### Post Script Support
+    if [ -n "${POST_SCRIPT}" ] ; then
+        print_notice "Found POST_SCRIPT environment variable. Executing '${POST_SCRIPT}'"
+        eval "${POST_SCRIPT}" "${exit_code}" "${dbtype}" "${dbhost}" "${1}" "${dbbackup_start_time}" "${dbbackup_finish_time}" "${dbbackup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
+    fi
+    ### Post Backup Custom Script Support
+    if [ -d "/assets/custom-scripts/" ] ; then
+        print_notice "Found Post Backup Custom Script to execute"
+        for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
+            print_notice "Running Script: '${f}'"
+            ## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE
+            ${f} "${exit_code}" "${dbtype}" "${dbhost}" "${1}" "${dbbackup_start_time}" "${dbbackup_finish_time}" "${dbbackup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
+        done
+    fi
+    print_notice "DB Backup for '${1}' time taken: $(echo ${dbbackup_total_time} | awk '{printf "Hours: %d Minutes: %02d Seconds: %02d", $1/3600, ($1/60)%60, $1%60}')"
+}
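
Given the argument order documented in the comment above, a minimal custom script dropped into /assets/custom-scripts/ might look like the following sketch (the reporting itself is illustrative):

    #!/bin/bash
    # Arguments as passed by post_dbbackup:
    # $1 exit code    $2 db type      $3 db host       $4 db name
    # $5 start epoch  $6 finish epoch $7 duration (s)
    # $8 backup filename  $9 filesize  ${10} checksum value
    if [ "${1}" != "0" ]; then
        echo "Backup of '${4}' on '${3}' (${2}) failed with exit code ${1}" >&2
    else
        echo "Backup '${8}' of '${4}' took ${7}s (${9} bytes, checksum ${10})"
    fi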
 sanity_test() {
     sanity_var DB_TYPE "Database Type"
     sanity_var DB_HOST "Database Host"
     file_env 'DB_USER'
     file_env 'DB_PASS'
+    case "${dbtype,,}" in
+        "mysql" | "mariadb" )
+            sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
+        ;;
+        postgres* | "pgsql" )
+            sanity_var DB_NAME "Database Name to backup. Multiple separated by commas"
+        ;;
+    esac
     if [ "${BACKUP_LOCATION,,}" = "s3" ] || [ "${BACKUP_LOCATION,,}" = "minio" ] ; then
         sanity_var S3_BUCKET "S3 Bucket"
         sanity_var S3_PATH "S3 Path"
@@ -492,3 +616,4 @@ EOF
     fi
     fi
 }

View File: /etc/services.available/10-db-backup/run

@@ -4,36 +4,16 @@ source /assets/functions/00-container
 source /assets/functions/10-db-backup
 source /assets/defaults/10-db-backup
 PROCESS_NAME="db-backup"
-CONTAINER_LOG_LEVEL=DEBUG
-bootstrap_compression
 bootstrap_variables
-if [ "${MODE,,}" = "manual" ] ; then
+if [ "${MODE,,}" = "manual" ] || [ "${1,,}" = "manual" ] || [ "${1,,}" = "now" ]; then
     DB_DUMP_BEGIN=+0
     manual=TRUE
     print_debug "Detected Manual Mode"
-fi
-case "${1,,}" in
-    "now" | "manual" )
-        DB_DUMP_BEGIN=+0
-        manual=TRUE
-    ;;
-    * )
-        sleep 5
-    ;;
-esac
-### Container Startup
-print_debug "Backup routines Initialized on $(date)"
-### Wait for Next time to start backup
-case "${1,,}" in
-    "now" | "manual" )
-        :
-    ;;
-    * )
+else
+    sleep 5
+fi
+if [ "${manual,,}" != "true" ]; then
     current_time=$(date +"%s")
     today=$(date +"%Y%m%d")
@@ -51,19 +31,11 @@
     print_info "Next Backup at $(date -d @${target_time} +"%Y-%m-%d %T %Z")"
     sleep $waittime
     fi
-    ;;
-esac
-### Commence Backup
 while true; do
     mkdir -p "${TEMP_LOCATION}"
     backup_start_time=$(date +"%s")
-    now=$(date +"%Y%m%d-%H%M%S")
-    now_time=$(date +"%H:%M:%S")
-    now_date=$(date +"%Y-%m-%d")
-    target=${dbtype}_${dbname}_${dbhost}_${now}.sql
-    ### Take a Dump
+    print_debug "Backup routines started time: $(date +'%Y-%m-%d %T %Z')"
     case "${dbtype,,}" in
         "couch" )
             check_availability
@@ -101,48 +73,17 @@
     backup_finish_time=$(date +"%s")
     backup_total_time=$(echo $((backup_finish_time-backup_start_time)))
-    print_info "Backup finish time: $(date -d @${backup_finish_time} +"%Y-%m-%d %T %Z") with exit code ${exit_code}"
-    print_notice "Backup time elapsed: $(echo ${backup_total_time} | awk '{printf "Hours: %d Minutes: %02d Seconds: %02d", $1/3600, ($1/60)%60, $1%60}')"
-    ### Zabbix / Monitoring stats
-    if var_true "${CONTAINER_ENABLE_MONITORING}" ; then
-        print_notice "Sending Backup Statistics to Zabbix"
-        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o "$(stat -c%s "${DB_DUMP_TARGET}"/"${target}")"
-        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o "$(date -r "${DB_DUMP_TARGET}"/"${target}" +'%s')"
-        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.status -o "${exit_code}"
-        silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.backup_duration -o "$(echo $((backup_finish_time-backup_start_time)))"
-    fi
-    ### Automatic Cleanup
-    if [ -n "${DB_CLEANUP_TIME}" ]; then
-        print_info "Cleaning up old backups"
-        mkdir -p "${DB_DUMP_TARGET}"
-        find "${DB_DUMP_TARGET}"/ -mmin +"${DB_CLEANUP_TIME}" -iname "*" -exec rm {} \;
-    fi
-    ### Post Script Support
-    if [ -n "${POST_SCRIPT}" ] ; then
-        print_notice "Found POST_SCRIPT environment variable. Executing '${POST_SCRIPT}'"
-        eval "${POST_SCRIPT}" "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${backup_start_time}" "${backup_finish_time}" "${backup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
-    fi
-    ### Post Backup Custom Script Support
-    if [ -d "/assets/custom-scripts/" ] ; then
-        print_notice "Found Post Backup Custom Script to execute"
-        for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
-            print_notice "Running Script: '${f}'"
-            ## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE
-            ${f} "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${backup_start_time}" "${backup_finish_time}" "${backup_total_time}" "${target}" "${FILESIZE}" "${checksum_value}"
-        done
-    fi
+    if [ -z "$master_exit_code" ] ; then master_exit_code="0" ; fi
+    print_info "Backup routines finish time: $(date -d @${backup_finish_time} +"%Y-%m-%d %T %Z") with overall exit code ${master_exit_code}"
+    print_notice "Backup routines time taken: $(echo ${backup_total_time} | awk '{printf "Hours: %d Minutes: %02d Seconds: %02d", $1/3600, ($1/60)%60, $1%60}')"
+    cleanup_old_data
     if var_true "${manual}" ; then
         print_debug "Exiting due to manual mode"
-        exit ${exit_code};
+        exit ${master_exit_code};
     else
-        ### Go back to sleep until next backup time
-        sleep $(($DB_DUMP_FREQ*60-backup_total_time))
         print_notice "Sleeping for another $(($DB_DUMP_FREQ*60-backup_total_time)) seconds. Waking up at $(date -d@"$(( $(date +%s)+$(($DB_DUMP_FREQ*60-backup_total_time))))" +"%Y-%m-%d %T %Z") "
+        sleep $(($DB_DUMP_FREQ*60-backup_total_time))
     fi
 done

View File: restore script

@@ -263,6 +263,10 @@ get_dbtype() {
     p_dbtype=$(basename -- "${r_filename}" | cut -d _ -f 1)
     case "${p_dbtype}" in
+        mongo* )
+            parsed_type=true
+            print_debug "Parsed DBType: MongoDB"
+        ;;
         mariadb | mysql )
             parsed_type=true
             print_debug "Parsed DBType: MariaDB/MySQL"
@@ -320,7 +324,9 @@ EOF
 What Database Type are you looking to restore?
 ${q_dbtype_menu}
 M ) MySQL / MariaDB
+O ) MongoDB
 P ) Postgresql
 Q ) Quit
@@ -335,6 +341,10 @@ EOF
             r_dbtype=mysql
             break
         ;;
+        o* )
+            r_dbtype=mongo
+            break
+        ;;
         p* )
             r_dbtype=postgresql
             break
@@ -351,13 +361,17 @@ EOF
     read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}E${cdgy}\) \| \(${cwh}M${cdgy}\) \| \(${cwh}P${cdgy}\) : ${cwh}${coff}) " q_dbtype
     case "${q_dbtype,,}" in
         e* | "" )
-            r_dbtype=${db_name}
+            r_dbtype=${DB_TYPE}
             break
         ;;
         m* )
             r_dbtype=mysql
             break
         ;;
+        o* )
+            r_dbtype=mongo
+            break
+        ;;
         p* )
             r_dbtype=postgresql
             break
@@ -381,6 +395,10 @@ EOF
             r_dbtype=mysql
             break
         ;;
+        o* )
+            r_dbtype=mongo
+            break
+        ;;
         p* )
             r_dbtype=postgresql
             break
@@ -398,7 +416,7 @@ EOF
     read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}E${cdgy}\) \| \(${cwh}F${cdgy}\) \| \(${cwh}M${cdgy}\) \| \(${cwh}P${cdgy}\) : ${cwh}${coff}) " q_dbtype
     case "${q_dbtype,,}" in
         e* | "" )
-            r_dbtype=${dbtype}
+            r_dbtype=${DB_TYPE}
             break
         ;;
         f* )
@@ -735,7 +753,7 @@ EOF
 q_dbpass_menu=$(cat <<EOF
 C ) Custom Entered Database Password
-E ) Environment Variable DB_PASS: '${DB_PASS}'
+E ) Environment Variable DB_PASS
 EOF
 )
 fi
@@ -826,7 +844,7 @@ if [ -n "${3}" ]; then
     if [ ! -f "${3}" ]; then
         get_dbhost
     else
-        r_dbtype="${3}"
+        r_dbhost="${3}"
     fi
 else
     get_dbhost
@@ -920,8 +938,23 @@
         pv ${r_filename} | ${decompress_cmd}cat | psql -d ${r_dbname} -h ${r_dbhost} -p ${r_dbport} -U ${r_dbuser}
         exit_code=$?
     ;;
+    mongo )
+        print_info "Restoring '${r_filename}' into '${r_dbhost}'/'${r_dbname}'"
+        if [ "${ENABLE_COMPRESSION,,}" != "none" ] && [ "${ENABLE_COMPRESSION,,}" != "false" ] ; then
+            mongo_compression="--gzip"
+        fi
+        if [ -n "${r_dbuser}" ] ; then
+            mongo_user="-u ${r_dbuser}"
+        fi
+        if [ -n "${r_dbpass}" ] ; then
+            mongo_pass="-p ${r_dbpass}"
+        fi
+        mongorestore ${mongo_compression} -d ${r_dbname} -h ${r_dbhost} --port ${r_dbport} ${mongo_user} ${mongo_pass} --archive=${r_filename}
+        exit_code=$?
+    ;;
     * )
-        exit 3
+        print_info "Unable to restore DB of type '${r_dbtype}'"
+        exit_code=3
     ;;
 esac