Compare commits


4 Commits
4.0.18 ... 3

Author | SHA1 | Message | Date
dave@tiredofit.ca | d48a15d37f | Release 3-3.12.2 - See CHANGELOG.md | 2023-12-03 22:05:28 -08:00
Dave Conroy | 6b7b16c42b | Merge pull request #303 from alwynpan/feature/remove-deprecation-warning (#298 Use pip to install awscli to remove the deprecation warning) | 2023-12-03 22:03:40 -08:00
Alwyn Pan | b64bd3168d | #298 Use pip to install awscli to remove the deprecation warning | 2023-12-04 15:32:42 +11:00
dave@tiredofit.ca | 35c806c369 | Release 3-3.12.1 - See CHANGELOG.md | 2023-11-30 07:21:58 -08:00
18 changed files with 1482 additions and 3519 deletions


@@ -1,185 +1,14 @@

Removed (4.x CHANGELOG entries):

## 4.0.18 2023-11-18 <joergmschulz@github>
### Changed
- Fix loading msmtp configuration
## 4.0.17 2023-11-17 <dave at tiredofit dot ca>
### Changed
- Provide more details when notifying via instant messages
## 4.0.16 2023-11-17 <dave at tiredofit dot ca>
### Changed
- Switch to using msmtp instead of s-mail for notify()
## 4.0.15 2023-11-16 <dave at tiredofit dot ca>
### Changed
- Fix cleanup of old backups
## 4.0.14 2023-11-13 <dave at tiredofit dot ca>
### Changed
- Bugfix when PRE/POST scripts found not giving legacy warning
- Run pre / post scripts as root
## 4.0.13 2023-11-12 <dave at tiredofit dot ca>
### Changed
- Check for any quotes if using MONGO_CUSTOM_URI and remove
## 4.0.12 2023-11-12 <dave at tiredofit dot ca>
### Changed
- Allow creating schedulers if _MONGO_CUSTOM_URI is set and _DB_HOST blank
## 4.0.11 2023-11-11 <dave at tiredofit dot ca>
### Changed
- Resolve issue with backing up ALL databases with PGSQL and MySQL
## 4.0.10 2023-11-11 <dave at tiredofit dot ca>
### Changed
- Change environment variable parsing routines to properly accommodate passwords containing '=='
## 4.0.9 2023-11-11 <dave at tiredofit dot ca>
### Changed
- Fix issue with quotes being wrapped around _PASS variables
## 4.0.8 2023-11-11 <dave at tiredofit dot ca>
### Changed
- Tidy up file_encryption() routines
- Change environment variable _ENCRYPT_PUBKEY to _ENCRYPT_PUBLIC_KEY
- Add new environment variable _ENCRYPT_PRIVATE_KEY
## 4.0.7 2023-11-11 <dave at tiredofit dot ca>
### Added
- Add separate permissions for _FILESYSTEM_PATH
### Changed
- More output and debugging additions
- SQLite3 now backs up without running into file permission/access problems
- Clean up old SQLite backups from temp directory
- Handle multiple SQLite3 backups concurrently
## 4.0.6 2023-11-10 <dave at tiredofit dot ca>
### Added
- Add additional DEBUG_ statements
### Changed
- Fix issue with InfluxDB not properly detecting the correct version
## 4.0.5 2023-11-10 <dave at tiredofit dot ca>
### Added
- Add undocumented DBBACKUP_USER|GROUP environment variables for troubleshooting permissions
- Add more verbosity when using DEBUG_ statements
### Changed
- Change _FILESYSTEM_PERMISSION to 600 from 700
## 4.0.4 2023-11-09 <dave at tiredofit dot ca>
### Added
- Add support for restoring from different DB_ variables in restore script
## 4.0.3 2023-11-09 <dave at tiredofit dot ca>
### Changed
- Resolve issue with _MYSQL_TLS_CERT_FILE not being read
## 4.0.2 2023-11-09 <dave at tiredofit dot ca>
### Changed
- Properly use custom _S3_HOST variables
## 4.0.1 2023-11-09 <dave at tiredofit dot ca>
### Changed
- Restore - Stop using DB_DUMP_TARGET and instead browse using DEFAULT_BACKUP_PATH
## 4.0.0 2023-11-08 <dave at tiredofit dot ca>
This is the fourth major release of the DB Backup image, which started as a basic MySQL backup service in early 2017. Each major release brings enhancements, bugfixes, and removals along with breaking changes, and this one is no different.
This release brings functionality requested by the community, such as multiple-host backup support by means of independent scheduler tasks, blackout periods, better resource usage, better security via file encryption and file permissions, and more verbosity via log files. It also merges contributions from other developers.
Upgrading to this image should for the most part work for most users, but will involve upgrading environment variables, as the format has changed significantly. Old variables should continue to work, however they are unsupported and will be removed with the `4.3.0` release, whenever that will be.
A significant amount of development hours were put in to accommodate feature requests by the community. If you are using this in a commercial setting or find this image valuable, please consider sponsoring my work for a period of time or engaging in a private support offering. More details at https://www.tiredofit.ca/sponsor
### Added
- Backup multiple hosts in the same image, each with different options (scheduling, compression, destination, cleanup) (use `DBXX_option` variables)
- Backup limits how many backup jobs run concurrently
- Backup scheduling now allows using a timestamp (e.g. `Dec 12 2023 03:00:00`) - credit benvia@github
- Backup scheduling now allows using a cron expression (e.g. `00 03 * * *`)
- Backup blackout period to skip backing up during a period of time
- Backup runs as a dedicated user (no longer root)
- Backups can have specific file permissions set upon completion (e.g. `700` or `rwx------`)
- Backups can run with reduced priority to allow for fair scheduling across the system
- Backups - MySQL/MariaDB now has the ability to back up events
- Backups - Microsoft SQL Server now has the option to back up transaction logs
- Backups - Postgres now backs up globals - credit oscarsiles@github
- Backups with Azure synchronize storage before upload - credit eoehen@github
- Ability to encrypt backup files with a passphrase or a GPG public key
- Log backup jobs to file, along with log rotation
- Notification support upon job failure via Email, Mattermost, Matrix, Rocketchat
- Zabbix metrics now auto-discover new jobs
- Zabbix metrics send the backed-up filename, checksum hash, and the durations of backup/compression, checksum, and encryption
- New debug capabilities
### Changed
- Reworked documentation
- Reworked all functions and renamed all variables
- Many variables now use a prefix of `DEFAULT_` to operate on all backup jobs
- Can be overridden per backup job by setting `DB_<option>`, or unset with `DB_<option>=unset`
- Renamed variables and terms
- `_DUMP_LOCATION` -> `_BACKUP_LOCATION`
- `_DUMP_BEGIN` -> `_BACKUP_BEGIN`
- `_DUMP_FREQ` -> `_BACKUP_INTERVAL`
- `_DUMP_TARGET` -> `_FILESYSTEM_PATH`
- `_DUMP_ARCHIVE` -> `_FILESYSTEM_PATH`
- `EXTRA_DUMP_OPTS` -> `_EXTRA_BACKUP_OPTS`
- `TEMP_LOCATION` -> `TEMP_PATH`
- Backups - AWS CLI updated to 1.29.78
- Backups - InfluxDB 2 client version updated to 2.7.3
- Backups - Microsoft SQL Server now compresses files after the initial backup
- Backups - Manual backups handle aborting gracefully
- Checksum routines now complete in half the time
- Checksum variable now supports "NONE"
- Zabbix metrics are now sent in one process instead of individually
- Cleanup - Only clean up files that match the same backup name pattern
- Cleanup/Archive uses a relative path instead of absolute when creating the latest symlink
- A handful of code optimizations and cleanup
### Removed
- `ENABLE_CHECKSUM` - has been wrapped into `_CHECKSUM=none`
## 3.12.0 2023-10-29 <alwynpan@github>

Added (3.x CHANGELOG entries):

## 3-3.12.2 2023-12-03 <dave at tiredofit dot ca>
### Added
- Update AWS CLI 1.31.5
- Use pip to install awscli as opposed to via git repo
## 3-3.12.1 2023-11-30 <dave at tiredofit dot ca>
### Added
- Update AWS CLI to 1.31.4
## 3.12.0 2023-10-29 <alwynpan@github>
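For orientation, here is a minimal compose-style fragment sketching the 4.x-style variables described above. Variable names follow the changelog; the host, credentials, and cron values are illustrative only:

```yaml
services:
  db-backup:
    image: tiredofit/db-backup
    environment:
      # DEFAULT_ values apply to every backup job...
      - DEFAULT_COMPRESSION=ZSTD
      - DEFAULT_CHECKSUM=MD5
      # ...and DBXX_ values define a single job, overriding defaults per job
      - DB01_TYPE=mariadb
      - DB01_HOST=example-db-host        # illustrative hostname
      - DB01_NAME=example
      - DB01_USER=example
      - DB01_PASS=examplepassword
      - DB01_BACKUP_BEGIN=00 03 * * *    # cron-expression scheduling (per changelog)
      - DB01_CHECKSUM=SHA1               # per-job override of DEFAULT_CHECKSUM
```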


@@ -5,69 +5,64 @@ FROM docker.io/tiredofit/${DISTRO}:${DISTRO_VARIANT}
 LABEL maintainer="Dave Conroy (github.com/tiredofit)"

 ### Set Environment Variables
-ENV INFLUX1_CLIENT_VERSION=1.8.0 \
-    INFLUX2_CLIENT_VERSION=2.7.3 \
+ENV INFLUX_VERSION=1.8.0 \
+    INFLUX2_VERSION=2.4.0 \
     MSODBC_VERSION=18.3.2.1-1 \
     MSSQL_VERSION=18.3.1.1-1 \
-    AWS_CLI_VERSION=1.29.78 \
-    CONTAINER_ENABLE_MESSAGING=TRUE \
+    AWS_CLI_VERSION=1.31.5 \
+    CONTAINER_ENABLE_MESSAGING=FALSE \
     CONTAINER_ENABLE_MONITORING=TRUE \
-    CONTAINER_PROCESS_RUNAWAY_PROTECTOR=FALSE \
     IMAGE_NAME="tiredofit/db-backup" \
     IMAGE_REPO_URL="https://github.com/tiredofit/docker-db-backup/"

 ### Dependencies
 RUN source /assets/functions/00-container && \
     set -ex && \
-    addgroup -S -g 10000 dbbackup && \
-    adduser -S -D -H -u 10000 -G dbbackup -g "Tired of I.T! DB Backup" dbbackup && \
-    \
     package update && \
     package upgrade && \
     package install .db-backup-build-deps \
         build-base \
         bzip2-dev \
         cargo \
         git \
         go \
         libarchive-dev \
         openssl-dev \
         libffi-dev \
         python3-dev \
         py3-pip \
         xz-dev \
         && \
     \
     package install .db-backup-run-deps \
         bzip2 \
-        coreutils \
-        gpg \
-        gpg-agent \
         groff \
         libarchive \
         mariadb-client \
         mariadb-connector-c \
         mongodb-tools \
         openssl \
         pigz \
         postgresql16 \
         postgresql16-client \
         pv \
         py3-botocore \
         py3-colorama \
         py3-cryptography \
         py3-docutils \
         py3-jmespath \
         py3-rsa \
         py3-setuptools \
         py3-s3transfer \
         py3-yaml \
         python3 \
         redis \
         sqlite \
         xz \
         zip \
         zstd \
         && \
     \
     apkArch="$(uname -m)"; \
     case "$apkArch" in \
@@ -77,10 +72,9 @@ RUN source /assets/functions/00-container && \
     esac; \
     \
     if [ $mssql = "true" ] ; then curl -O https://download.microsoft.com/download/3/5/5/355d7943-a338-41a7-858d-53b259ea33f5/msodbcsql18_${MSODBC_VERSION}_${mssql_arch}.apk ; curl -O https://download.microsoft.com/download/3/5/5/355d7943-a338-41a7-858d-53b259ea33f5/mssql-tools18_${MSSQL_VERSION}_${mssql_arch}.apk ; ls -l ; echo y | apk add --allow-untrusted msodbcsql18_${MSODBC_VERSION}_${mssql_arch}.apk mssql-tools18_${MSSQL_VERSION}_${mssql_arch}.apk ; else echo >&2 "Detected non x86_64 or ARM64 build variant, skipping MSSQL installation" ; fi; \
-    if [ $influx2 = "true" ] ; then curl -sSL https://dl.influxdata.com/influxdb/releases/influxdb2-client-${INFLUX2_CLIENT_VERSION}-linux-${influx_arch}.tar.gz | tar xvfz - --strip=1 -C /usr/src/ ; chmod +x /usr/src/influx ; mv /usr/src/influx /usr/sbin/ ; else echo >&2 "Unable to build Influx 2 on this system" ; fi ; \
-    clone_git_repo https://github.com/aws/aws-cli "${AWS_CLI_VERSION}" && \
-    python3 setup.py install --prefix=/usr && \
-    clone_git_repo https://github.com/influxdata/influxdb "${INFLUX1_CLIENT_VERSION}" && \
+    if [ $influx2 = "true" ] ; then curl -sSL https://dl.influxdata.com/influxdb/releases/influxdb2-client-${INFLUX2_VERSION}-linux-${influx_arch}.tar.gz | tar xvfz - --strip=1 -C /usr/src/ ; chmod +x /usr/src/influx ; mv /usr/src/influx /usr/sbin/ ; else echo >&2 "Unable to build Influx 2 on this system" ; fi ; \
+    pip3 install --break-system-packages awscli==${AWS_CLI_VERSION} && \
+    clone_git_repo https://github.com/influxdata/influxdb "${INFLUX_VERSION}" && \
     go build -o /usr/sbin/influxd ./cmd/influxd && \
     strip /usr/sbin/influxd && \
     mkdir -p /usr/src/pbzip2 && \
@@ -111,4 +105,5 @@ RUN source /assets/functions/00-container && \
     /tmp/* \
     /usr/src/*

 COPY install /


@@ -1,6 +1,6 @@
@@ -1,6 +1,6 @@
 The MIT License (MIT)

-Copyright (c) 2023 Dave Conroy
+Copyright (c) 2022 Dave Conroy

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md (936 changes)

File diff suppressed because it is too large


@@ -1,67 +0,0 @@
services:
example-db:
hostname: example-db-host
container_name: example-db
image: tiredofit/mariadb:10.11
ports:
- 3306:3306
volumes:
- ./db:/var/lib/mysql
environment:
- ROOT_PASS=examplerootpassword
- DB_NAME=example
- DB_USER=example
- DB_PASS=examplepassword
restart: always
networks:
- example-db-network
example-db-backup:
container_name: example-db-backup
image: tiredofit/db-backup
volumes:
- ./backups:/backup
#- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- TIMEZONE=America/Vancouver
- CONTAINER_NAME=example-db-backup
- CONTAINER_ENABLE_MONITORING=FALSE
# - DEBUG_MODE=TRUE
- BACKUP_JOB_CONCURRENCY=1 # Only run one job at a time
- DEFAULT_CHECKSUM=NONE # Don't create checksums
- DEFAULT_COMPRESSION=ZSTD # Compress all with ZSTD
- DEFAULT_DUMP_INTERVAL=1440 # Backup every 1440 minutes
- DEFAULT_DUMP_BEGIN=0000 # Start backing up at midnight
- DEFAULT_CLEANUP_TIME=8640 # Cleanup backups after a week
- DB01_TYPE=mariadb
- DB01_HOST=example-db-host
- DB01_NAME=example
- DB01_USER=example
- DB01_PASS=examplepassword
- DB01_DUMP_INTERVAL=30 # (override) Backup every 30 minutes
- DB01_DUMP_BEGIN=+1 # (override) Backup starts immediately
- DB01_CLEANUP_TIME=180 # (override) Clean up backups older than 180 minutes
- DB01_CHECKSUM=SHA1 # (override) Create a SHA1 checksum
- DB01_COMPRESSION=GZ # (override) Compress with GZIP
#- DB02_TYPE=postgres
#- DB02_HOST=example-postgres-host
#- DB02_NAME=example
#- DB02_USER=example
#- DB02_PASS=examplepassword
#- DB02_DUMP_INTERVAL=60 # (override) Backup every 60 minutes
#- DB02_DUMP_BEGIN=+10 # (override) Backup starts in ten minutes
#- DB02_CLEANUP_TIME=240 # (override) Clean up backups older than 240 minutes
#- DB02_CHECKSUM=MD5 # (override) Create an MD5 checksum
#- DB02_COMPRESSION=BZ # (override) Compress with BZIP2
restart: always
networks:
- example-db-network
networks:
example-db-network:
name: example-db-network


@@ -3,6 +3,12 @@
 # upload with blobxfer to azure storage
 #
+version: '2'
+
+networks:
+  example-mssql-blobxfer-net:
+    name: example-mssql-blobxfer-net
+
 services:
   example-mssql-s3-db:
     hostname: example-db-host
@@ -26,7 +32,7 @@ services:
    # execute in terminal --> docker build -t tiredofit/db-backup-mssql-blobxfer .
    # replace --> image: tiredofit/db-backup-mssql
    # image: tiredofit/db-backup
-    image: tiredofit/db-backup
+    image: tiredofit/db-backup-mssql-blobxfer
     links:
      - example-mssql-s3-db
     volumes:
@@ -34,35 +40,30 @@ services:
      - ./tmp/backups:/tmp/backups # shared tmp backup directory
      #- ./post-script.sh:/assets/custom-scripts/post-script.sh
     environment:
-     - TIMEZONE=America/Vancouver
-     - CONTAINER_ENABLE_MONITORING=FALSE
-     - CONTAINER_NAME=example-mssql-blobxfer-db-backup
    # - DEBUG_MODE=TRUE
-     - DB01_TYPE=mssql
-     - DB01_HOST=example-db-host
-   # - DB01_PORT=1488
+     - DB_TYPE=mssql
+     - DB_HOST=example-db-host
+   # - DB_PORT=1488
+   # - DB_NAME=ALL # [ALL] not working on sql server.
    # create database with name `test1` manually first
-     - DB01_NAME=test1 # Create this database
-     - DB01_USER=sa
-     - DB01_PASS=5hQa0utRFBpIY3yhoIyE
-     - DB01_DUMP_INTERVAL=5 # backup every 5 minutes
-   # - DB01_DUMP_BEGIN=0000 # backup starts at midnight; if unset, immediately
-     - DB01_CLEANUP_TIME=60 # clean backups older than 60 minutes
-     - DB01_CHECKSUM=SHA1 # Set checksum to SHA1
-     - DB01_COMPRESSION=GZ # Set compression to GZIP
+     - DB_NAME=test1 # Create this database
+     - DB_USER=sa
+     - DB_PASS=5hQa0utRFBpIY3yhoIyE
+     - DB_DUMP_FREQ=1 # backup every minute
+   # - DB_DUMP_BEGIN=0000 # backup starts immediately
+     - DB_CLEANUP_TIME=3 # clean backups older than 3 minutes
+     - ENABLE_CHECKSUM=TRUE
+     - CHECKSUM=SHA1
+     - COMPRESSION=GZ
+     - SPLIT_DB=FALSE
+     - CONTAINER_ENABLE_MONITORING=FALSE
     # === S3 Blobxfer ===
-     - DB01_BACKUP_LOCATION=blobxfer
+     - BACKUP_LOCATION=blobxfer
     # Add here azure storage account
-     - DB01_BLOBXFER_STORAGE_ACCOUNT={TODO Add Storage Name}
+     - BLOBXFER_STORAGE_ACCOUNT={TODO Add Storage Name}
     # Add here azure storage account key
-     - DB01_BLOBXFER_STORAGE_ACCOUNT_KEY={TODO Add Key}
-     - DB01_BLOBXFER_REMOTE_PATH=docker-db-backup
+     - BLOBXFER_STORAGE_ACCOUNT_KEY={TODO Add Key}
+     - BLOBXFER_REMOTE_PATH=docker-db-backup
     restart: always
     networks:
      example-mssql-blobxfer-net:
-
-networks:
-  example-mssql-blobxfer-net:
-    name: example-mssql-blobxfer-net


@@ -2,6 +2,12 @@
 # Example for Microsoft SQL Server
 #
+version: '2'
+
+networks:
+  example-mssql-net:
+    name: example-mssql-net
+
 services:
   example-mssql-db:
     hostname: example-db-host
@@ -25,7 +31,7 @@ services:
    # execute in terminal --> docker build -t tiredofit/db-backup-mssql .
    # replace --> image: tiredofit/db-backup-mssql
    # image: tiredofit/db-backup
-    image: tiredofit/db-backup
+    image: tiredofit/db-backup-mssql
     links:
      - example-mssql-db
     volumes:
@@ -33,28 +39,23 @@ services:
      - ./tmp/backups:/tmp/backups # shared tmp backup directory
      #- ./post-script.sh:/assets/custom-scripts/post-script.sh
     environment:
-     - TIMEZONE=America/Vancouver
-     - CONTAINER_ENABLE_MONITORING=FALSE
-     - CONTAINER_NAME=example-mssql-blobxfer-db-backup
    # - DEBUG_MODE=TRUE
-     - DB01_TYPE=mssql
-     - DB01_HOST=example-db-host
+     - DB_TYPE=mssql
+     - DB_HOST=example-db-host
    # - DB_PORT=1488
    # - DB_NAME=ALL # [ALL] not working on sql server.
    # create database with name `test1` manually first
-     - DB01_NAME=test1
-     - DB01_USER=sa
-     - DB01_PASS=5hQa0utRFBpIY3yhoIyE
-     - DB01_DUMP_INTERVAL=1 # backup every minute
-   # - DB01_DUMP_BEGIN=0000 # backup starts at midnight; if unset, immediately
-     - DB01_CLEANUP_TIME=5 # clean backups older than 5 minutes
-     - DB01_CHECKSUM=NONE
-     - DB01_COMPRESSION=GZ
+     - DB_NAME=test1
+     - DB_USER=sa
+     - DB_PASS=5hQa0utRFBpIY3yhoIyE
+     - DB_DUMP_FREQ=1 # backup every minute
+   # - DB_DUMP_BEGIN=0000 # backup starts immediately
+     - DB_CLEANUP_TIME=5 # clean backups older than 5 minutes
+     - ENABLE_CHECKSUM=FALSE
+     - CHECKSUM=SHA1
+     - COMPRESSION=GZ
+     - SPLIT_DB=FALSE
+     - CONTAINER_ENABLE_MONITORING=FALSE
     restart: always
     networks:
      example-mssql-net:
-
-networks:
-  example-mssql-net:
-    name: example-mssql-net


@@ -0,0 +1,53 @@
version: '2'
networks:
example-db-network:
name: example-db-network
services:
example-db:
hostname: example-db-host
container_name: example-db
image: mariadb:latest
ports:
- 13306:3306
volumes:
- ./db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=examplerootpassword
- MYSQL_DATABASE=example
- MYSQL_USER=example
- MYSQL_PASSWORD=examplepassword
restart: always
networks:
- example-db-network
example-db-backup:
container_name: example-db-backup
image: tiredofit/db-backup
links:
- example-db
volumes:
- ./backups:/backup
#- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- TIMEZONE=America/Vancouver
- CONTAINER_ENABLE_MONITORING=FALSE
# - DEBUG_MODE=TRUE
- DB_TYPE=mariadb
- DB_HOST=example-db-host
- DB_NAME=example
- DB_USER=example
- DB_PASS=examplepassword
- DB_DUMP_FREQ=1 # backup every minute
# - DB_DUMP_BEGIN=0000 # backup starts immediately
- DB_CLEANUP_TIME=5 # clean backups older than 5 minutes
- CHECKSUM=SHA1
- COMPRESSION=GZ
- SPLIT_DB=FALSE
restart: always
networks:
- example-db-network


@@ -4,7 +4,7 @@
 # #### $1=EXIT_CODE (After running backup routine)
 # #### $2=DB_TYPE (Type of Backup)
 # #### $3=DB_HOST (Backup Host)
-# #### $4=DB_NAME (Name of Database backed up)
+# #### $4=DB_NAME (Name of Database backed up
 # #### $5=BACKUP START TIME (Seconds since Epoch)
 # #### $6=BACKUP FINISH TIME (Seconds since Epoch)
 # #### $7=BACKUP TOTAL TIME (Seconds between Start and Finish)


@@ -1,111 +0,0 @@
#!/command/with-contenv bash
source /assets/functions/00-container
PROCESS_NAME="db-backup{{BACKUP_NUMBER}}-scheduler"
check_container_initialized
check_service_initialized init 10-db-backup
source /assets/functions/10-db-backup
source /assets/defaults/10-db-backup
bootstrap_variables backup_init {{BACKUP_NUMBER}}
bootstrap_variables parse_variables {{BACKUP_NUMBER}}
if [ -z "${backup_job_db_name}" ]; then
PROCESS_NAME="{{BACKUP_NUMBER}}${backup_job_db_host//\//_}"
else
PROCESS_NAME="{{BACKUP_NUMBER}}-${backup_job_db_host//\//_}__${backup_job_db_name}"
fi
trap ctrl_c INT
if [[ "${MODE,,}" =~ "standalone" ]] || [ "${1,,}" = "manual" ] || [ "${1,,}" = "now" ]; then
print_debug "Detected Manual Mode"
persist=false
backup_job_backup_begin=+0
else
silent sleep {{BACKUP_NUMBER}}
time_last_run=0
time_current=$(date +'%s')
if [[ "${backup_job_backup_begin}" =~ ^\+(.*)$ ]]; then
print_debug "BACKUP_BEGIN is a jump of minute starting with +"
timer plusvalue
elif [[ "${backup_job_backup_begin}" =~ ^[0-9]{4}$ ]]; then
print_debug "BACKUP_BEGIN is a HHMM value"
timer time
elif [[ "${backup_job_backup_begin}" =~ ([0-9]{4})-([0-9]{2})-([0-9]{2})[[:space:]]([0-9]{2}):([0-9]{2}):([0-9]{2}) ]]; then
print_debug "BACKUP_BEGIN is a full date timestamp"
timer datetime
elif echo "${backup_job_backup_begin//\*/#}" | grep -qP "^(((\d+,)+\d+|(\d+(\/|-)\d+)|\d+|#) ?){5}$" ; then
print_debug "BACKUP_BEGIN is a cron expression"
time_last_run=$(date +"%s")
timer cron "${backup_job_backup_begin}" "${time_current}" "${time_last_run}"
else
print_error "_BACKUP_BEGIN is invalid - Unable to perform scheduling"
cat <<EOF
Valid Methods:
+(number) - Start in however many minutes
HHMM - Start at hour (00-24) and minute (00-59)
YYYY-MM-DD HH:mm:ss - Start at a specific date and time
0 23 * * * - Cron expression
EOF
print_error "Stopping backup_scheduler {{BACKUP_NUMBER}} due to detected errors. Fix and restart container."
stop_scheduler_backup=true
s6-svc -d /var/run/s6/legacy-services/dbbackup-{{BACKUP_NUMBER}}
fi
print_debug "Wait Time: ${time_wait} Future execution time: ${time_future} Current Time: ${time_current}"
print_info "Next Backup at $(date -d @"${time_future}" +'%Y-%m-%d %T %Z')"
silent sleep "${time_wait}"
fi
while true; do
if [ -n "${backup_job_blackout_start}" ] && [ -n "${backup_job_blackout_finish}" ] ; then
time_current_hour_minute=$(date +%H%M)
if [[ "${time_current_hour_minute}" > "${backup_job_blackout_start}" ]] && [[ "${time_current_hour_minute}" < "${backup_job_blackout_finish}" ]] ; then
blackout=true
else
blackout=false
fi
fi
if var_true "${blackout}" ; then
print_notice "Detected Blackout Period - Not performing backup operations"
else
timer job start
process_limiter
echo "{{BACKUP_NUMBER}}" >> /tmp/.container/db-backup-backups
print_debug "Backup {{BACKUP_NUMBER}} routines started time: $(date +'%Y-%m-%d %T %Z')"
bootstrap_filesystem
check_availability
backup_"${dbtype,,}"
timer job stop
if [ -z "${exitcode_backup}" ] ; then exitcode_backup="0" ; fi
print_info "Backup {{BACKUP_NUMBER}} routines finish time: $(date -d @"${backup_job_finish_time}" +'%Y-%m-%d %T %Z') with exit code ${exitcode_backup}"
print_notice "Backup {{BACKUP_NUMBER}} routines time taken: $(echo "${backup_job_total_time}" | awk '{printf "Hours: %d Minutes: %02d Seconds: %02d", $1/3600, ($1/60)%60, $1%60}')"
sed -i "/^{{BACKUP_NUMBER}}/d" /tmp/.container/db-backup-backups
fi
symlink_log
cleanup_old_data
if var_false "${persist}" ; then
print_debug "Exiting due to manual mode"
exit "${exitcode_backup}";
else
if var_true "${stop_scheduler_backup}" ; then
print_error "Stopping backup_scheduler {{BACKUP_NUMBER}} due to detected errors. Fix and restart container."
s6-svc -d /var/run/s6/legacy-services/dbbackup-{{BACKUP_NUMBER}}
else
if [ ! "${time_cron}" = "true" ]; then
print_notice "Sleeping for another $(($backup_job_backup_interval*60-backup_job_total_time)) seconds. Waking up at $(date -d@"$(( $(date +%s)+$(($backup_job_backup_interval*60-backup_job_total_time))))" +'%Y-%m-%d %T %Z') "
silent sleep $(($backup_job_backup_interval*60-backup_job_total_time))
else
time_last_run=$(date +"%s")
timer cron "${backup_job_backup_begin}" "${time_current}" "${time_last_run}"
print_notice "Sleeping for another ${time_wait} seconds. Waking up at $(date -d@"${time_future}" +'%Y-%m-%d %T %Z') "
silent sleep "${time_wait}"
fi
fi
fi
done
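The blackout check inside the loop above leans on a property worth calling out: zero-padded HHMM strings sort chronologically under plain lexicographic comparison, so bash's `[[ < ]]` and `[[ > ]]` are enough. A standalone sketch, with illustrative window values:

```shell
# Blackout window in HHMM form; fixed-width zero-padded strings compare
# correctly as strings, no arithmetic needed.
blackout_start="0100"
blackout_finish="0330"

in_blackout() {
  # Prints "true" when the HHMM argument falls strictly inside the window.
  local t="$1"
  if [[ "$t" > "$blackout_start" ]] && [[ "$t" < "$blackout_finish" ]]; then
    echo true
  else
    echo false
  fi
}

in_blackout "0215"   # true: inside the window
in_blackout "0745"   # false: outside the window
```

Note the comparison is strict, so a backup landing exactly on `blackout_start` still runs, matching the script above.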


@@ -1,46 +1,32 @@
 #!/command/with-contenv bash

-BACKUP_JOB_CONCURRENCY=${BACKUP_JOB_CONCURRENCY:-"1"}
-DBBACKUP_USER=${DBBACKUP_USER:-"dbbackup"}
-DBBACKUP_GROUP=${DBBACKUP_GROUP:-"${DBBACKUP_USER}"} # Must go after DBBACKUP_USER
-DEFAULT_BACKUP_BEGIN=${DEFAULT_BACKUP_BEGIN:-+0}
-DEFAULT_BACKUP_INTERVAL=${DEFAULT_BACKUP_INTERVAL:-1440}
-DEFAULT_BACKUP_INTERVAL=${DEFAULT_BACKUP_INTERVAL:-1440}
-DEFAULT_BACKUP_LOCATION=${DEFAULT_BACKUP_LOCATION:-"FILESYSTEM"}
-DEFAULT_BLOBXFER_REMOTE_PATH=${DEFAULT_BLOBXFER_REMOTE_PATH:-"/docker-db-backup"}
-DEFAULT_CHECKSUM=${DEFAULT_CHECKSUM:-"MD5"}
-DEFAULT_COMPRESSION=${DEFAULT_COMPRESSION:-"ZSTD"}
-DEFAULT_COMPRESSION_LEVEL=${DEFAULT_COMPRESSION_LEVEL:-"3"}
-DEFAULT_CREATE_LATEST_SYMLINK=${DEFAULT_CREATE_LATEST_SYMLINK:-"TRUE"}
-DEFAULT_ENABLE_PARALLEL_COMPRESSION=${DEFAULT_ENABLE_PARALLEL_COMPRESSION:-"TRUE"}
-DEFAULT_ENCRYPT=${DEFAULT_ENCRYPT:-"FALSE"}
-DEFAULT_FILESYSTEM_PATH=${DEFAULT_FILESYSTEM_PATH:-"/backup"}
-DEFAULT_FILESYSTEM_PATH_PERMISSION=${DEFAULT_FILESYSTEM_PATH_PERMISSION:-"700"}
-DEFAULT_FILESYSTEM_PERMISSION=${DEFAULT_FILESYSTEM_PERMISSION:-"600"}
-DEFAULT_FILESYSTEM_ARCHIVE_PATH=${DEFAULT_FILESYSTEM_ARCHIVE_PATH:-"${DEFAULT_FILESYSTEM_PATH}/archive/"}
-DEFAULT_LOG_LEVEL=${DEFAULT_LOG_LEVEL:-"notice"}
-DEFAULT_MYSQL_ENABLE_TLS=${DEFAULT_MYSQL_ENABLE_TLS:-"FALSE"}
-DEFAULT_MYSQL_EVENTS=${DEFAULT_MYSQL_EVENTS:-"TRUE"}
-DEFAULT_MYSQL_MAX_ALLOWED_PACKET=${DEFAULT_MYSQL_MAX_ALLOWED_PACKET:-"512M"}
-DEFAULT_MYSQL_SINGLE_TRANSACTION=${DEFAULT_MYSQL_SINGLE_TRANSACTION:-"TRUE"}
-DEFAULT_MYSQL_STORED_PROCEDURES=${DEFAULT_MYSQL_STORED_PROCEDURES:-"TRUE"}
-DEFAULT_MYSQL_TLS_CA_FILE=${DEFAULT_MYSQL_TLS_CA_FILE:-"/etc/ssl/cert.pem"}
-DEFAULT_MYSQL_TLS_VERIFY=${DEFAULT_MYSQL_TLS_VERIFY:-"FALSE"}
-DEFAULT_MYSQL_TLS_VERSION=${DEFAULT_MYSQL_TLS_VERSION:-"TLSv1.1,TLSv1.2,TLSv1.3"}
-DEFAULT_MSSQL_MODE=${DEFAULT_MSSQL_MODE:-"database"}
-DEFAULT_PARALLEL_COMPRESSION_THREADS=${DEFAULT_PARALLEL_COMPRESSION_THREADS:-"$(nproc)"}
-DEFAULT_RESOURCE_OPTIMIZED=${DEFAULT_RESOURCE_OPTIMIZED:-"FALSE"}
-DEFAULT_S3_CERT_SKIP_VERIFY=${DEFAULT_S3_CERT_SKIP_VERIFY:-"TRUE"}
-DEFAULT_S3_PROTOCOL=${DEFAULT_S3_PROTOCOL:-"https"}
-DEFAULT_SCRIPT_LOCATION_PRE=${DEFAULT_SCRIPT_LOCATION_PRE:-"/assets/scripts/pre/"}
-DEFAULT_SCRIPT_LOCATION_POST=${DEFAULT_SCRIPT_LOCATION_POST:-"/assets/scripts/post/"}
-DEFAULT_SIZE_VALUE=${DEFAULT_SIZE_VALUE:-"bytes"}
-DEFAULT_SKIP_AVAILABILITY_CHECK=${DEFAULT_SKIP_AVAILABILITY_CHECK:-"FALSE"}
-DEFAULT_SPLIT_DB=${DEFAULT_SPLIT_DB:-"TRUE"}
-LOG_PATH=${LOG_PATH:-"/logs"}
+BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
+BLOBXFER_REMOTE_PATH=${BLOBXFER_REMOTE_PATH:-"/docker-db-backup"}
+CHECKSUM=${CHECKSUM:-"MD5"}
+COMPRESSION=${COMPRESSION:-"ZSTD"}
+COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
+CREATE_LATEST_SYMLINK=${CREATE_LATEST_SYMLINK:-"TRUE"}
+DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
+DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
+DB_DUMP_TARGET=${DB_DUMP_TARGET:-"/backup"}
+DB_DUMP_TARGET_ARCHIVE=${DB_DUMP_TARGET_ARCHIVE:-"${DB_DUMP_TARGET}/archive/"}
+ENABLE_CHECKSUM=${ENABLE_CHECKSUM:-"TRUE"}
+ENABLE_PARALLEL_COMPRESSION=${ENABLE_PARALLEL_COMPRESSION:-"TRUE"}
 MANUAL_RUN_FOREVER=${MANUAL_RUN_FOREVER:-"TRUE"}
 MODE=${MODE:-"AUTO"}
+MYSQL_ENABLE_TLS=${MYSQL_ENABLE_TLS:-"FALSE"}
+MYSQL_MAX_ALLOWED_PACKET=${MYSQL_MAX_ALLOWED_PACKET:-"512M"}
+MYSQL_SINGLE_TRANSACTION=${MYSQL_SINGLE_TRANSACTION:-"TRUE"}
+MYSQL_STORED_PROCEDURES=${MYSQL_STORED_PROCEDURES:-"TRUE"}
+MYSQL_TLS_CA_FILE=${MYSQL_TLS_CA_FILE:-"/etc/ssl/cert.pem"}
+MYSQL_TLS_VERIFY=${MYSQL_TLS_VERIFY:-"FALSE"}
+MYSQL_TLS_VERSION=${MYSQL_TLS_VERSION:-"TLSv1.1,TLSv1.2,TLSv1.3"}
+PARALLEL_COMPRESSION_THREADS=${PARALLEL_COMPRESSION_THREADS:-"$(nproc)"}
+S3_CERT_SKIP_VERIFY=${S3_CERT_SKIP_VERIFY:-"TRUE"}
+S3_PROTOCOL=${S3_PROTOCOL:-"https"}
+SCRIPT_LOCATION_PRE=${SCRIPT_LOCATION_PRE:-"/assets/scripts/pre/"}
+SCRIPT_LOCATION_POST=${SCRIPT_LOCATION_POST:-"/assets/scripts/post/"}
+SIZE_VALUE=${SIZE_VALUE:-"bytes"}
+SKIP_AVAILABILITY_CHECK=${SKIP_AVAILABILITY_CHECK:-"FALSE"}
+SPLIT_DB=${SPLIT_DB:-"TRUE"}
+TEMP_LOCATION=${TEMP_LOCATION:-"/tmp/backups"}
-TEMP_PATH=${TEMP_PATH:-"/tmp/backups"}
-if [ -n "${TEMP_LOCATION}" ] ; then TEMP_PATH=${TEMP_LOCATION:-"/tmp/backups"} ; fi # To be removed 4.3.0
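Both versions of this defaults file rely on the same POSIX parameter-expansion idiom: `${VAR:-default}` keeps a value the operator supplied and falls back to the built-in default otherwise. A quick standalone illustration:

```shell
# ${VAR:-default}: use $VAR if it is set and non-empty, otherwise the default.
unset COMPRESSION
COMPRESSION=${COMPRESSION:-"ZSTD"}
echo "$COMPRESSION"   # ZSTD (fallback used)

COMPRESSION="GZ"
COMPRESSION=${COMPRESSION:-"ZSTD"}
echo "$COMPRESSION"   # GZ (operator-supplied value wins)
```

This is why the defaults file can be sourced unconditionally after the container's environment is loaded: assignments never clobber values set in the compose file.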

File diff suppressed because it is too large


@@ -6,9 +6,9 @@ prepare_service 03-monitoring
 PROCESS_NAME="db-backup"

 output_off
-bootstrap_variables
-sanity_test
 setup_mode
-db_backup_container_init
-create_schedulers backup
-create_zabbix dbbackup4
+create_zabbix dbbackup
 liftoff


@@ -0,0 +1,88 @@
#!/command/with-contenv bash
source /assets/functions/00-container
source /assets/functions/10-db-backup
source /assets/defaults/10-db-backup
PROCESS_NAME="db-backup"
bootstrap_variables
if [ "${MODE,,}" = "manual" ] || [ "${1,,}" = "manual" ] || [ "${1,,}" = "now" ]; then
DB_DUMP_BEGIN=+0
manual=TRUE
print_debug "Detected Manual Mode"
else
sleep 5
current_time=$(date +"%s")
today=$(date +"%Y%m%d")
if [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then
waittime=$(( ${BASH_REMATCH[1]} * 60 ))
target_time=$(($current_time + $waittime))
else
target_time=$(date --date="${today}${DB_DUMP_BEGIN}" +"%s")
if [[ "$target_time" < "$current_time" ]]; then
target_time=$(($target_time + 24*60*60))
fi
waittime=$(($target_time - $current_time))
fi
print_debug "Wait Time: ${waittime} Target time: ${target_time} Current Time: ${current_time}"
print_info "Next Backup at $(date -d @${target_time} +"%Y-%m-%d %T %Z")"
sleep $waittime
fi
while true; do
mkdir -p "${TEMP_LOCATION}"
backup_start_time=$(date +"%s")
print_debug "Backup routines started time: $(date +'%Y-%m-%d %T %Z')"
case "${dbtype,,}" in
"couch" )
check_availability
backup_couch
;;
"influx" )
check_availability
backup_influx
;;
"mssql" )
check_availability
backup_mssql
;;
"mysql" )
check_availability
backup_mysql
;;
"mongo" )
check_availability
backup_mongo
;;
"pgsql" )
check_availability
backup_pgsql
;;
"redis" )
check_availability
backup_redis
;;
"sqlite3" )
check_availability
backup_sqlite3
;;
esac
backup_finish_time=$(date +"%s")
backup_total_time=$((backup_finish_time-backup_start_time))
if [ -z "$master_exit_code" ] ; then master_exit_code="0" ; fi
print_info "Backup routines finish time: $(date -d @${backup_finish_time} +"%Y-%m-%d %T %Z") with overall exit code ${master_exit_code}"
print_notice "Backup routines time taken: $(echo ${backup_total_time} | awk '{printf "Hours: %d Minutes: %02d Seconds: %02d", $1/3600, ($1/60)%60, $1%60}')"
cleanup_old_data
if var_true "${manual}" ; then
print_debug "Exiting due to manual mode"
exit ${master_exit_code};
else
print_notice "Sleeping for another $(($DB_DUMP_FREQ*60-backup_total_time)) seconds. Waking up at $(date -d@"$(( $(date +%s)+$(($DB_DUMP_FREQ*60-backup_total_time))))" +"%Y-%m-%d %T %Z") "
sleep $(($DB_DUMP_FREQ*60-backup_total_time))
fi
done
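The scheduling block above turns `DB_DUMP_BEGIN` into a sleep duration: `+N` means N minutes from now, while `HHMM` means the next occurrence of that wall-clock time. A standalone sketch of that computation (`compute_waittime` is a hypothetical helper name, not part of the image; GNU date is assumed for the `HHMM` branch):

```shell
#!/bin/sh
# Sketch of the DB_DUMP_BEGIN handling in the scheduler above.
compute_waittime() {
    begin="$1"   # e.g. "+5" or "2330"
    now="$2"     # current epoch seconds
    today="$3"   # e.g. "20231203"
    case "$begin" in
        +*)
            # Relative form: minutes from now
            echo $(( ${begin#+} * 60 ))
            ;;
        *)
            target=$(date --date="${today}${begin}" +"%s")
            # Already passed today? Schedule for tomorrow instead.
            if [ "$target" -lt "$now" ]; then
                target=$(( target + 24*60*60 ))
            fi
            echo $(( target - now ))
            ;;
    esac
}

compute_waittime "+5" "$(date +%s)" "$(date +%Y%m%d)"   # prints 300
```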


@@ -0,0 +1,4 @@
#!/command/with-contenv bash
echo '** Performing Manual Backup'
/etc/services.available/10-db-backup/run manual


@@ -1,24 +0,0 @@
#!/command/with-contenv bash
source /assets/functions/00-container
source /assets/defaults/05-logging
source /assets/defaults/10-db-backup
## Compress each log file that is 2 days old
timestamp_2dayold_unixtime="$(stat -c %Y "${LOG_PATH}"/"$(date --date='2 days ago' +'%Y%m%d')")"
for logfile in "${LOG_PATH}"/"$(date --date='2 days ago' +'%Y%m%d')"/"$(date --date='2 days ago' +'%Y%m%d')"_*.log ; do
sudo -u restic zstd --rm --rsyncable "${logfile}"
done
touch -t $(date -d"@${timestamp_2dayold_unixtime}" +'%Y%m%d%H%M.%S') "${LOG_PATH}"/"$(date --date='2 days ago' +'%Y%m%d')"
# Look for files older than a certain number of days and delete
if [ -n "${LOG_PATH}" ] && [ -d "${LOG_PATH}" ] ; then
find "${LOG_PATH}" -mtime +"${LOGROTATE_RETAIN_DAYS}" -type d -exec rm -rf {} +
fi
# Look for stale symbolic links and delete accordingly
for symbolic_link in "${LOG_PATH}"/latest*.log ; do
if [ ! -e "${symbolic_link}" ] ; then
rm -rf "${symbolic_link}"
fi
done
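The retention pass above guards the `find` sweep so an unset or missing `LOG_PATH` can never expand into deleting from the filesystem root. A hedged sketch of that guard (paths and the retention value are examples; `-mindepth 1` is added here so the sketch cannot remove the log root itself):

```shell
# Only sweep when LOG_PATH is set and is a real directory
LOG_PATH=$(mktemp -d)
LOGROTATE_RETAIN_DAYS=7
mkdir -p "${LOG_PATH}/20231201"
if [ -n "${LOG_PATH}" ] && [ -d "${LOG_PATH}" ] ; then
    # -mtime +7 matches only directories untouched for more than 7 days,
    # so the freshly created one survives this pass
    find "${LOG_PATH}" -mindepth 1 -mtime +"${LOGROTATE_RETAIN_DAYS}" -type d -exec rm -rf {} +
fi
ls "${LOG_PATH}"
```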


@@ -29,6 +29,7 @@ bdgy="\e[100m" # Background Color Dark Gray
blr="\e[101m" # Background Color Light Red blr="\e[101m" # Background Color Light Red
boff="\e[49m" # Background Color Off boff="\e[49m" # Background Color Off
bootstrap_variables
if [ -z "${1}" ] ; then if [ -z "${1}" ] ; then
interactive_mode=true interactive_mode=true
@@ -37,7 +38,7 @@ else
"-h" ) "-h" )
cat <<EOF cat <<EOF
${IMAGE_NAME} Restore Tool ${IMAGE_VERSION} ${IMAGE_NAME} Restore Tool ${IMAGE_VERSION}
(c) 2023 Dave Conroy (https://github.com/tiredofit) (https://www.tiredofit.ca) (c) 2022 Dave Conroy (https://github.com/tiredofit)
This script will assist you in recovering databases taken by the Docker image. This script will assist you in recovering databases taken by the Docker image.
You will be presented with a series of menus allowing you to choose: You will be presented with a series of menus allowing you to choose:
@@ -74,17 +75,10 @@ EOF
esac esac
fi fi
control_c() {
if [ -f "${restore_vars}" ] ; then rm -rf "${restore_vars}" ; fi
print_warn "User aborted"
exit
}
get_filename() { get_filename() {
COLUMNS=12 COLUMNS=12
prompt="Please select a file to restore:" prompt="Please select a file to restore:"
options=( $(find "${DEFAULT_FILESYSTEM_PATH}" -type f -maxdepth 2 -not -name '*.md5' -not -name '*.sha1' -print0 | sort -z | xargs -0) ) options=( $(find "${DB_DUMP_TARGET}" -type f -maxdepth 1 -not -name '*.md5' -not -name '*.sha1' -print0 | sort -z | xargs -0) )
PS3="$prompt " PS3="$prompt "
select opt in "${options[@]}" "Custom" "Quit" ; do select opt in "${options[@]}" "Custom" "Quit" ; do
if (( REPLY == 2 + ${#options[@]} )) ; then if (( REPLY == 2 + ${#options[@]} )) ; then
@@ -110,17 +104,13 @@ get_filename() {
get_dbhost() { get_dbhost() {
p_dbhost=$(basename -- "${r_filename}" | cut -d _ -f 3) p_dbhost=$(basename -- "${r_filename}" | cut -d _ -f 3)
if [ -n "${p_dbhost}" ]; then if [ -n "${p_dbhost}" ]; then
parsed_host=true parsed_host=true
print_debug "Parsed DBHost: ${p_dbhost}" print_debug "Parsed DBHost: ${p_dbhost}"
if grep -q "${p_dbhost}" "${restore_vars}" ; then
detected_host_num=$(grep "${p_dbhost}" "${restore_vars}" | head -n1 | cut -c 3,4)
detected_host_value=$(grep "${p_dbhost}" "${restore_vars}" | head -n1 | cut -d '=' -f 2)
fi
fi fi
if [ -z "${detected_host_value}" ] && [ -z "${parsed_host}" ]; then if [ -z "${DB_HOST}" ] && [ -z "${parsed_host}" ]; then
print_debug "Parsed DBHost Variant: 1 - No Env, No Parsed Filename" print_debug "Parsed DBHost Variant: 1 - No Env, No Parsed Filename"
q_dbhost_variant=1 q_dbhost_variant=1
q_dbhost_menu=$(cat <<EOF q_dbhost_menu=$(cat <<EOF
@@ -129,18 +119,18 @@ EOF
) )
fi fi
if [ -n "${detected_host_value}" ] && [ -z "${parsed_host}" ]; then if [ -n "${DB_HOST}" ] && [ -z "${parsed_host}" ]; then
print_debug "Parsed DBHost Variant: 2 - Env, No Parsed Filename" print_debug "Parsed DBHost Variant: 2 - Env, No Parsed Filename"
q_dbhost_variant=2 q_dbhost_variant=2
q_dbhost_menu=$(cat <<EOF q_dbhost_menu=$(cat <<EOF
C ) Custom Entered Hostname C ) Custom Entered Hostname
E ) Environment Variable DB${detected_host_num}_HOST: '${detected_host_value}' E ) Environment Variable DB_HOST: '${DB_HOST}'
EOF EOF
) )
fi fi
if [ -z "${detected_host_value}" ] && [ -n "${parsed_host}" ]; then if [ -z "${DB_HOST}" ] && [ -n "${parsed_host}" ]; then
print_debug "Parsed DBHost Variant: 3 - No Env, Parsed Filename" print_debug "Parsed DBHost Variant: 3 - No Env, Parsed Filename"
q_dbhost_variant=3 q_dbhost_variant=3
q_dbhost_menu=$(cat <<EOF q_dbhost_menu=$(cat <<EOF
@@ -151,13 +141,13 @@ EOF
) )
fi fi
if [ -n "${detected_host_value}" ] && [ -n "${parsed_host}" ]; then if [ -n "${DB_HOST}" ] && [ -n "${parsed_host}" ]; then
print_debug "Parsed DBHost Variant: 4 - Env, Parsed Filename" print_debug "Parsed DBHost Variant: 4 - Env, Parsed Filename"
q_dbhost_variant=4 q_dbhost_variant=4
q_dbhost_menu=$(cat <<EOF q_dbhost_menu=$(cat <<EOF
C ) Custom Entered Hostname C ) Custom Entered Hostname
E ) Environment Variable DB${detected_host_num}_HOST: '${detected_host_value}' E ) Environment Variable DB_HOST: '${DB_HOST}'
F ) Parsed Filename Host: '${p_dbhost}' F ) Parsed Filename Host: '${p_dbhost}'
EOF EOF
) )
@@ -184,7 +174,7 @@ EOF
;; ;;
2 ) 2 )
while true; do while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E\*${cdgy}\) : ${cwh}${coff}) " q_dbhost_menu read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E${cdgy}\) : ${cwh}${coff}) " q_dbhost_menu
case "${q_dbhost_menu,,}" in case "${q_dbhost_menu,,}" in
c* ) c* )
counter=1 counter=1
@@ -198,7 +188,7 @@ EOF
break break
;; ;;
e* | "" ) e* | "" )
r_dbhost=${detected_host_value} r_dbhost=${DB_HOST}
break break
;; ;;
q* ) q* )
@@ -210,7 +200,7 @@ EOF
;; ;;
3 ) 3 )
while true; do while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}F\*${cdgy}\) : ${cwh}${coff}) " q_dbhost_menu read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}F${cdgy}\) : ${cwh}${coff}) " q_dbhost_menu
case "${q_dbhost_menu,,}" in case "${q_dbhost_menu,,}" in
c* ) c* )
counter=1 counter=1
@@ -237,7 +227,7 @@ EOF
4 ) 4 )
while true; do while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E\*${cdgy}\) \| \(${cwh}F${cdgy}\) : ${cwh}${coff}) " q_dbhost_menu read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E${cdgy}\) \| \(${cwh}F${cdgy}\) : ${cwh}${coff}) " q_dbhost_menu
case "${q_dbhost_menu,,}" in case "${q_dbhost_menu,,}" in
c* ) c* )
counter=1 counter=1
@@ -251,7 +241,7 @@ EOF
break break
;; ;;
e* | "" ) e* | "" )
r_dbhost=${detected_host_value} r_dbhost=${DB_HOST}
break break
;; ;;
f* ) f* )
@@ -268,337 +258,6 @@ EOF
esac esac
} }
get_dbname() {
p_dbname=$(basename -- "${r_filename}" | cut -d _ -f 2)
if [ -n "${p_dbname}" ]; then
parsed_name=true
print_debug "Parsed DBName: ${p_dbname}"
fi
if grep -q "^DB${detected_host_num}_NAME=${p_dbname}" "${restore_vars}" ; then
detected_name_value=$(grep -q "^DB${detected_host_num}_NAME=${p_dbname}" "${restore_vars}" | head -n1 | cut -d '=' -f 2)
fi
if [ -z "${detected_name_value}" ] && [ -z "${parsed_name}" ]; then
print_debug "Parsed DBName Variant: 1 - No Env, No Parsed Filename"
q_dbname_variant=1
q_dbname_menu=$(cat <<EOF
EOF
)
fi
if [ -n "${detected_name_value}" ] && [ -z "${parsed_name}" ]; then
print_debug "Parsed DBName Variant: 2 - Env, No Parsed Filename"
q_dbname_variant=2
q_dbname_menu=$(cat <<EOF
C ) Custom Entered Database Name
E ) Environment Variable DB${detected_host_num}_NAME: '${detected_name_value}'
EOF
)
fi
if [ -z "${detected_name_value}" ] && [ -n "${parsed_name}" ]; then
print_debug "Parsed DBName Variant: 3 - No Env, Parsed Filename"
q_dbname_variant=3
q_dbname_menu=$(cat <<EOF
C ) Custom Entered Database Name
F ) Parsed Filename DB Name: '${p_dbname}'
EOF
)
fi
if [ -n "${detected_name_value}" ] && [ -n "${parsed_name}" ]; then
print_debug "Parsed DBname Variant: 4 - Env, Parsed Filename"
q_dbname_variant=4
q_dbname_menu=$(cat <<EOF
C ) Custom Entered Database Name
E ) Environment Variable DB${detected_host_num}_NAME: '${detected_name_value}'
F ) Parsed Filename DB Name: '${p_dbname}'
EOF
)
fi
cat << EOF
What Database Name do you want to restore to?
${q_dbname_menu}
Q ) Quit
EOF
case "${q_dbname_variant}" in
1 )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
;;
2 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E\*${cdgy}\) : ${cwh}${coff}) " q_dbname_menu
case "${q_dbname_menu,,}" in
c* )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
break
;;
e* | "" )
r_dbname=${detected_name_value}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
3 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}F\*${cdgy}\) : ${cwh}${coff}) " q_dbname_menu
case "${q_dbname_menu,,}" in
c* )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
break
;;
f* | "" )
r_dbname=${p_dbname}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
4 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E\*${cdgy}\) \| \(${cwh}F${cdgy}\) : ${cwh}${coff}) " q_dbname_menu
case "${q_dbname_menu,,}" in
c* )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
break
;;
e* | "" )
r_dbname=${detected_name_value}
break
;;
f* )
r_dbname=${p_dbname}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
esac
}
get_dbpass() {
if grep -q "^DB${detected_host_num}_PASS=" "${restore_vars}" ; then
detected_pass_value=$(grep "^DB${detected_host_num}_PASS=" "${restore_vars}" | head -n1 | cut -d '=' -f 2)
fi
if [ -z "${detected_pass_value}" ] ; then
print_debug "Parsed DBPass Variant: 1 - No Env"
q_dbpass_variant=1
q_dbpass_menu=$(cat <<EOF
EOF
)
fi
if [ -n "${detected_pass_value}" ] ; then
print_debug "Parsed DBPass Variant: 2 - Env"
q_dbpass_variant=2
q_dbpass_menu=$(cat <<EOF
C ) Custom Entered Database Password
E ) Environment Variable DB${detected_host_num}_PASS
EOF
)
fi
cat << EOF
What Database Password will be used to restore?
${q_dbpass_menu}
Q ) Quit
EOF
case "${q_dbpass_variant}" in
1 )
counter=1
q_dbpass=" "
while [[ $q_dbpass = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Passwords can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Password do you wish to use:\ ${coff})" q_dbpass
(( counter+=1 ))
done
r_dbpass=${q_dbpass}
;;
2 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E\*${cdgy}\) : ${cwh}${coff}) " q_dbpass_menu
case "${q_dbpass_menu,,}" in
c* )
counter=1
q_dbpass=" "
while [[ $q_dbpass = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Passwords can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Password do you wish to use:\ ${coff})" q_dbpass
(( counter+=1 ))
done
r_dbpass=${q_dbpass}
break
;;
e* | "" )
r_dbpass=${detected_pass_value}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
esac
}
get_dbport() {
if grep -q "^DB${detected_host_num}_PORT=" "${restore_vars}" ; then
detected_port_value=$(grep "^DB${detected_host_num}_PORT=" "${restore_vars}" | head -n1 | cut -d '=' -f 2)
fi
if [ -z "${detected_port_value}" ] ; then
print_debug "Parsed DBPort Variant: 1 - No Env"
q_dbport_variant=1
q_dbport_menu_opt_default="| (${cwh}D${cdgy}) * "
q_dbport_menu=$(cat <<EOF
C ) Custom Entered Database Port
D ) Default Port for Database type '${r_dbtype}': '${DEFAULT_PORT}'
EOF
)
fi
if [ -n "${detected_port_value}" ] ; then
print_debug "Parsed DBPort Variant: 2 - Env"
q_dbport_variant=2
q_dbport_menu=$(cat <<EOF
C ) Custom Entered Database Port
D ) Default Port for Database type '${r_dbtype}': '${DEFAULT_PORT}'
E ) Environment Variable DB${detected_host_num}_PORT: '${detected_port_value}'
EOF
)
fi
cat << EOF
What Database Port do you wish to use? MySQL/MariaDB typically listens on port 3306, PostgreSQL on 5432, MongoDB on 27017.
${q_dbport_menu}
Q ) Quit
EOF
case "${q_dbport_variant}" in
1 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}D\*${cdgy}\) : ${cwh}${coff}) " q_dbport_menu
case "${q_dbport_menu,,}" in
c* )
counter=1
q_dbport=" "
q_dbportre='^[0-9]+$'
while ! [[ $q_dbport =~ ${q_dbportre} ]]; do
if [ $counter -gt 1 ] ; then print_error "Must be a port number, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Port do you want to use:\ ${coff})" q_dbport
(( counter+=1 ))
done
r_dbport=${q_dbport}
break
;;
d* | "" )
r_dbport=${DEFAULT_PORT}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
2 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}D${cdgy}\) \| \(${cwh}E\*${cdgy}\) : ${cwh}${coff}) " q_dbport_menu
case "${q_dbport_menu,,}" in
c* )
counter=1
q_dbport=" "
q_dbportre='^[0-9]+$'
while ! [[ $q_dbport =~ ${q_dbportre} ]]; do
if [ $counter -gt 1 ] ; then print_error "Must be a port number, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Port do you want to use:\ ${coff})" q_dbport
(( counter+=1 ))
done
r_dbport=${q_dbport}
break
;;
d* )
r_dbport=${DEFAULT_PORT}
break
;;
e* | "" )
r_dbport=${detected_port_value}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
esac
}
get_dbtype() { get_dbtype() {
p_dbtype=$(basename -- "${r_filename}" | cut -d _ -f 1) p_dbtype=$(basename -- "${r_filename}" | cut -d _ -f 1)
@@ -606,17 +265,14 @@ get_dbtype() {
case "${p_dbtype}" in case "${p_dbtype}" in
mongo* ) mongo* )
parsed_type=true parsed_type=true
DEFAULT_PORT=${DEFAULT_PORT:-"27017"}
print_debug "Parsed DBType: MongoDB" print_debug "Parsed DBType: MongoDB"
;; ;;
mariadb | mysql ) mariadb | mysql )
parsed_type=true parsed_type=true
DEFAULT_PORT=${DEFAULT_PORT:-"3306"}
print_debug "Parsed DBType: MariaDB/MySQL" print_debug "Parsed DBType: MariaDB/MySQL"
;; ;;
pgsql | postgres* ) pgsql | postgres* )
parsed_type=true parsed_type=true
DEFAULT_PORT=${DEFAULT_PORT:-"5432"}
print_debug "Parsed DBType: Postgresql" print_debug "Parsed DBType: Postgresql"
;; ;;
* ) * )
@@ -683,17 +339,14 @@ EOF
case "${q_dbtype,,}" in case "${q_dbtype,,}" in
m* ) m* )
r_dbtype=mysql r_dbtype=mysql
DEFAULT_PORT=${DEFAULT_PORT:-"3306"}
break break
;; ;;
o* ) o* )
r_dbtype=mongo r_dbtype=mongo
DEFAULT_PORT=${DEFAULT_PORT:-"27017"}
break break
;; ;;
p* ) p* )
r_dbtype=postgresql r_dbtype=postgresql
DEFAULT_PORT=${DEFAULT_PORT:-"5432"}
break break
;; ;;
q* ) q* )
@@ -713,17 +366,14 @@ EOF
;; ;;
m* ) m* )
r_dbtype=mysql r_dbtype=mysql
DEFAULT_PORT=${DEFAULT_PORT:-"3306"}
break break
;; ;;
o* ) o* )
r_dbtype=mongo r_dbtype=mongo
DEFAULT_PORT=${DEFAULT_PORT:-"27017"}
break break
;; ;;
p* ) p* )
r_dbtype=postgresql r_dbtype=postgresql
DEFAULT_PORT=${DEFAULT_PORT:-"5432"}
break break
;; ;;
q* ) q* )
@@ -735,36 +385,22 @@ EOF
;; ;;
3 ) 3 )
while true; do while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}F${cdgy}\) \(Default\) \| \(${cwh}M${cdgy}\) \| \(${cwh}O${cdgy}\) \| \(${cwh}P${cdgy}\) : ${cwh}${coff}) " q_dbtype read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}F${cdgy}\) \| \(${cwh}M${cdgy}\) \| \(${cwh}O${cdgy}\) \| \(${cwh}P${cdgy}\) : ${cwh}${coff}) " q_dbtype
case "${q_dbtype,,}" in case "${q_dbtype,,}" in
f* | "" ) f* | "" )
r_dbtype=${p_dbtype} r_dbtype=${p_dbtype}
case "${r_dbtype}" in
mongo )
DEFAULT_PORT=${DEFAULT_PORT:-"27017"}
;;
mysql )
DEFAULT_PORT=${DEFAULT_PORT:-"3306"}
;;
pgsql )
DEFAULT_PORT=${DEFAULT_PORT:-"5432"}
;;
esac
break break
;; ;;
m* ) m* )
r_dbtype=mysql r_dbtype=mysql
DEFAULT_PORT=${DEFAULT_PORT:-"3306"}
break break
;; ;;
o* ) o* )
r_dbtype=mongo r_dbtype=mongo
DEFAULT_PORT=${DEFAULT_PORT:-"27017"}
break break
;; ;;
p* ) p* )
r_dbtype=postgresql r_dbtype=postgresql
DEFAULT_PORT=${DEFAULT_PORT:-"5432"}
break break
;; ;;
q* ) q* )
@@ -789,17 +425,14 @@ EOF
;; ;;
m* ) m* )
r_dbtype=mysql r_dbtype=mysql
DEFAULT_PORT=${DEFAULT_PORT:-"3306"}
break break
;; ;;
o* ) o* )
r_dbtype=mongo r_dbtype=mongo
DEFAULT_PORT=${DEFAULT_PORT:-"27017"}
break break
;; ;;
p* ) p* )
r_dbtype=postgresql r_dbtype=postgresql
DEFAULT_PORT=${DEFAULT_PORT:-"5432"}
break break
;; ;;
q* ) q* )
@@ -812,12 +445,235 @@ EOF
esac esac
} }
get_dbuser() { get_dbname() {
if grep -q "^DB${detected_host_num}_USER=" "${restore_vars}" ; then p_dbname=$(basename -- "${r_filename}" | cut -d _ -f 2)
detected_user_value=$(grep "^DB${detected_host_num}_USER=" "${restore_vars}" | head -n1 | cut -d '=' -f 2)
if [ -n "${p_dbname}" ]; then
parsed_name=true
print_debug "Parsed DBName: ${p_dbname}"
fi fi
if [ -z "${detected_user_value}" ] ; then if [ -z "${DB_NAME}" ] && [ -z "${parsed_name}" ]; then
print_debug "Parsed DBName Variant: 1 - No Env, No Parsed Filename"
q_dbname_variant=1
q_dbname_menu=$(cat <<EOF
EOF
)
fi
if [ -n "${DB_NAME}" ] && [ -z "${parsed_name}" ]; then
print_debug "Parsed DBName Variant: 2 - Env, No Parsed Filename"
q_dbname_variant=2
q_dbname_menu=$(cat <<EOF
C ) Custom Entered Database Name
E ) Environment Variable DB_NAME: '${DB_NAME}'
EOF
)
fi
if [ -z "${DB_NAME}" ] && [ -n "${parsed_name}" ]; then
print_debug "Parsed DBName Variant: 3 - No Env, Parsed Filename"
q_dbname_variant=3
q_dbname_menu=$(cat <<EOF
C ) Custom Entered Database Name
F ) Parsed Filename DB Name: '${p_dbname}'
EOF
)
fi
if [ -n "${DB_NAME}" ] && [ -n "${parsed_name}" ]; then
print_debug "Parsed DBname Variant: 4 - Env, Parsed Filename"
q_dbname_variant=4
q_dbname_menu=$(cat <<EOF
C ) Custom Entered Database Name
E ) Environment Variable DB_NAME: '${DB_NAME}'
F ) Parsed Filename DB Name: '${p_dbname}'
EOF
)
fi
cat << EOF
What Database Name do you want to restore to?
${q_dbname_menu}
Q ) Quit
EOF
case "${q_dbname_variant}" in
1 )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
;;
2 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E${cdgy}\) : ${cwh}${coff}) " q_dbname_menu
case "${q_dbname_menu,,}" in
c* )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
break
;;
e* | "" )
r_dbname=${DB_NAME}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
3 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}F${cdgy}\) : ${cwh}${coff}) " q_dbname_menu
case "${q_dbname_menu,,}" in
c* )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
break
;;
f* | "" )
r_dbname=${p_dbname}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
4 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E${cdgy}\) \| \(${cwh}F${cdgy}\) : ${cwh}${coff}) " q_dbname_menu
case "${q_dbname_menu,,}" in
c* )
counter=1
q_dbname=" "
while [[ $q_dbname = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB names can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB name do you want to restore to:\ ${coff})" q_dbname
(( counter+=1 ))
done
r_dbname=${q_dbname}
break
;;
e* | "" )
r_dbname=${DB_NAME}
break
;;
f* )
r_dbname=${p_dbname}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
esac
}
get_dbport() {
if [ -z "${DB_PORT}" ] ; then
print_debug "Parsed DBPort Variant: 1 - No Env"
q_dbport_variant=1
q_dbport_menu=$(cat <<EOF
EOF
)
fi
if [ -n "${DB_PORT}" ] ; then
print_debug "Parsed DBPort Variant: 2 - Env"
q_dbport_variant=2
q_dbport_menu=$(cat <<EOF
C ) Custom Entered Database Port
E ) Environment Variable DB_PORT: '${DB_PORT}'
EOF
)
fi
cat << EOF
What Database Port do you wish to use? MySQL/MariaDB typically listens on port 3306, PostgreSQL on 5432, MongoDB on 27017.
${q_dbport_menu}
Q ) Quit
EOF
case "${q_dbport_variant}" in
1 )
counter=1
q_dbport=" "
q_dbportre='^[0-9]+$'
while ! [[ $q_dbport =~ ${q_dbportre} ]]; do
if [ $counter -gt 1 ] ; then print_error "Must be a port number, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Port do you want to use:\ ${coff})" q_dbport
(( counter+=1 ))
done
r_dbport=${q_dbport}
;;
2 )
while true; do
read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E${cdgy}\) : ${cwh}${coff}) " q_dbport_menu
case "${q_dbport_menu,,}" in
c* )
counter=1
q_dbport=" "
q_dbportre='^[0-9]+$'
while ! [[ $q_dbport =~ ${q_dbportre} ]]; do
if [ $counter -gt 1 ] ; then print_error "Must be a port number, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Port do you want to use:\ ${coff})" q_dbport
(( counter+=1 ))
done
r_dbport=${q_dbport}
break
;;
e* | "" )
r_dbport=${DB_PORT}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
esac
}
get_dbuser() {
if [ -z "${DB_USER}" ] ; then
print_debug "Parsed DBUser Variant: 1 - No Env" print_debug "Parsed DBUser Variant: 1 - No Env"
q_dbuser_variant=1 q_dbuser_variant=1
q_dbuser_menu=$(cat <<EOF q_dbuser_menu=$(cat <<EOF
@@ -826,13 +682,13 @@ EOF
) )
fi fi
if [ -n "${detected_user_value}" ] ; then if [ -n "${DB_USER}" ] ; then
print_debug "Parsed DBUser Variant: 2 - Env" print_debug "Parsed DBUser Variant: 2 - Env"
q_dbuser_variant=2 q_dbuser_variant=2
q_dbuser_menu=$(cat <<EOF q_dbuser_menu=$(cat <<EOF
C ) Custom Entered Database User C ) Custom Entered Database User
E ) Environment Variable DB${detected_host_num}_USER: '${detected_user_value}' E ) Environment Variable DB_USER: '${DB_USER}'
EOF EOF
) )
fi fi
@@ -872,7 +728,7 @@ EOF
break break
;; ;;
e* | "" ) e* | "" )
r_dbuser=${detected_user_value} r_dbuser=${DB_USER}
break break
;; ;;
q* ) q* )
@@ -885,37 +741,76 @@ EOF
esac esac
} }
get_filename() { get_dbpass() {
COLUMNS=12 if [ -z "${DB_PASS}" ] ; then
prompt="Please select a file to restore:" print_debug "Parsed DBPass Variant: 1 - No Env"
options=( $(find "${DEFAULT_FILESYSTEM_PATH}" -type f -maxdepth 2 -not -name '*.md5' -not -name '*.sha1' -not -name '*.gpg' -print0 | sort -z | xargs -0) ) q_dbpass_variant=1
PS3="$prompt " q_dbpass_menu=$(cat <<EOF
select opt in "${options[@]}" "Custom" "Quit" ; do
if (( REPLY == 2 + ${#options[@]} )) ; then EOF
echo "Bye!" )
exit 2 fi
elif (( REPLY == 1 + ${#options[@]} )) ; then
while [ ! -f "${opt}" ] ; do if [ -n "${DB_PASS}" ] ; then
read -p "What path and filename to restore: " opt print_debug "Parsed DBPass Variant: 2 - Env"
if [ ! -f "${opt}" ] ; then q_dbpass_variant=2
print_error "File not found. Please retry.." q_dbpass_menu=$(cat <<EOF
fi
C ) Custom Entered Database Password
E ) Environment Variable DB_PASS
EOF
)
fi
cat << EOF
What Database Password will be used to restore?
${q_dbpass_menu}
Q ) Quit
EOF
case "${q_dbpass_variant}" in
1 )
counter=1
q_dbpass=" "
while [[ $q_dbpass = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Passwords can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Password do you wish to use:\ ${coff})" q_dbpass
(( counter+=1 ))
done done
break r_dbpass=${q_dbpass}
elif (( REPLY > 0 && REPLY <= ${#options[@]} )) ; then ;;
break 2 )
else while true; do
echo "Invalid option. Try another one." read -p "$(echo -e ${clg}** ${cdgy}Enter Value \(${cwh}C${cdgy}\) \| \(${cwh}E${cdgy}\) : ${cwh}${coff}) " q_dbpass_menu
fi case "${q_dbpass_menu,,}" in
done c* )
COLUMNS=$oldcolumns counter=1
r_filename=${opt} q_dbpass=" "
while [[ $q_dbpass = *" "* ]]; do
if [ $counter -gt 1 ] ; then print_error "DB Passwords can't have spaces in them, please re-enter." ; fi ;
read -e -p "$(echo -e ${clg}** ${cdgy}What DB Password do you wish to use:\ ${coff})" q_dbpass
(( counter+=1 ))
done
r_dbpass=${q_dbpass}
break
;;
e* | "" )
r_dbpass=${DB_PASS}
break
;;
q* )
print_info "Quitting Script"
exit 1
;;
esac
done
;;
esac
} }
 #### SCRIPT START
-trap control_c INT
-bootstrap_variables
+restore_init
 cat << EOF
 ## ${IMAGE_NAME} Restore Script


@@ -1,9 +1,10 @@
{ {
"zabbix_export": { "zabbix_export": {
"version": "6.4", "version": "6.0",
"template_groups": [ "date": "2022-03-18T13:32:12Z",
"groups": [
{ {
"uuid": "10b88d2b3a3a4c72b43bdce9310e1162", "uuid": "fa56524b5dbb4ec09d9777a6f7ccfbe4",
"name": "DB/Backup" "name": "DB/Backup"
}, },
{ {
@@ -13,10 +14,10 @@
], ],
"templates": [ "templates": [
{ {
"uuid": "5a16c1bd694145389eed5ee803d954cc", "uuid": "5fc64d517afb4cc5bc09a3ef58b43ef7",
"template": "DB Backup4", "template": "DB Backup",
"name": "DB Backup4", "name": "DB Backup",
"description": "Template for Docker DB Backup Image\n\nMeant for use specifically with https://github.com/tiredofit/docker-db-backup Version > 4.0.0\n\nSupports auto discovery of backup jobs and creates graphs and triggers", "description": "Template for Docker DB Backup Image\n\nMeant for use specifically with https://github.com/tiredofit/docker-db-backup\nLast tested with version 3.0.2",
"groups": [ "groups": [
{ {
"name": "DB/Backup" "name": "DB/Backup"
@@ -25,260 +26,134 @@
"name": "Templates/Databases" "name": "Templates/Databases"
} }
], ],
"discovery_rules": [ "items": [
{ {
"uuid": "94bb6f862e1841f8b2834b04c41c1d86", "uuid": "72fd00fa2dd24e479f5affe03e8711d8",
"name": "Backup", "name": "DB Backup: Backup Duration",
"type": "TRAP", "type": "TRAP",
"key": "dbbackup.backup", "key": "dbbackup.backup_duration",
"delay": "0", "delay": "0",
"item_prototypes": [ "history": "7d",
"units": "uptime",
"description": "How long the backup took",
"tags": [
{ {
"uuid": "5a2c4d1cacf844829bc1fbf912e071c5", "tag": "Application",
"name": "[{#NAME}] Checksum - Duration", "value": "DB Backup"
"type": "TRAP", }
"key": "dbbackup.backup.checksum.duration.[{#NAME}]", ]
"delay": "0", },
"history": "7d", {
"units": "uptime", "uuid": "3549a2c9d56849babc6dc3c855484c1e",
"tags": [ "name": "DB Backup: Backup Time",
{ "type": "TRAP",
"tag": "Application", "key": "dbbackup.datetime",
"value": "DB Backup" "delay": "0",
} "history": "7d",
] "units": "unixtime",
}, "request_method": "POST",
"tags": [
{ {
"uuid": "6e49769ec07344a4974b13dab00c3539", "tag": "Application",
"name": "[{#NAME}] Checksum - Hash", "value": "DB Backup"
"type": "TRAP",
"key": "dbbackup.backup.checksum.hash.[{#NAME}]",
"delay": "0",
"history": "30d",
"trends": "0",
"value_type": "TEXT",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
]
},
{
"uuid": "bb6472e30bff4d9c908b1d34b893e622",
"name": "[{#NAME}] Backup - Last Backup",
"type": "TRAP",
"key": "dbbackup.backup.datetime.[{#NAME}]",
"delay": "0",
"history": "7d",
"units": "unixtime",
"description": "Datestamp of last database backup",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
],
"trigger_prototypes": [
{
"uuid": "3681b56bb882466fb304a48b4beb15f0",
"expression": "fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],172800s)=0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],259200s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],345600s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],432800s)<>0",
"name": "[{#NAME}] No backups detected in 2 days",
"priority": "HIGH",
"manual_close": "YES"
},
{
"uuid": "6c70136c84994197b6396a143b4e956f",
"expression": "fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],172800s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],259200s)=0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],345600s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],432800s)<>0",
"name": "[{#NAME}] No backups detected in 3 days",
"priority": "DISASTER",
"manual_close": "YES"
},
{
"uuid": "d2038025cab643019cb9610c301f0cb9",
"expression": "fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],172800s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],259200s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],345600s)=0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],432800s)<>0",
"name": "[{#NAME}] No backups detected in 4 days",
"priority": "DISASTER",
"manual_close": "YES"
},
{
"uuid": "ea85f02d032c4a1dbc1b6e91a3b2b37b",
"expression": "fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],172800s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],259200s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],345600s)<>0 and fuzzytime(/DB Backup4/dbbackup.backup.datetime.[{#NAME}],432800s)=0",
"name": "[{#NAME}] No backups detected in 5 days",
"priority": "DISASTER",
"manual_close": "YES"
}
]
},
{
"uuid": "8ec2b2f44ddf4f36b3dbb2aa15e3a32f",
"name": "[{#NAME}] Backup - Duration",
"type": "TRAP",
"key": "dbbackup.backup.duration.[{#NAME}]",
"delay": "0",
"history": "7d",
"units": "uptime",
"description": "How long the DB Backup job took",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
]
},
{
"uuid": "3f0dc3c75261447c93482815c3d69524",
"name": "[{#NAME}] Encrypt - Duration",
"type": "TRAP",
"key": "dbbackup.backup.encrypt.duration.[{#NAME}]",
"delay": "0",
"history": "7d",
"units": "uptime",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
]
},
{
"uuid": "c3d5ad0789c443859d6a673e03db9cec",
"name": "[{#NAME}] Backup - Filename",
"type": "TRAP",
"key": "dbbackup.backup.filename.[{#NAME}]",
"delay": "0",
"history": "30d",
"trends": "0",
"value_type": "TEXT",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
]
},
{
"uuid": "43b700c03897465eb7e49bbfe8fc9fc5",
"name": "[{#NAME}] Backup - Size",
"type": "TRAP",
"key": "dbbackup.backup.size.[{#NAME}]",
"delay": "0",
"history": "7d",
"description": "Backup Size",
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
],
"trigger_prototypes": [
{
"uuid": "849f8660bee04427aff55af47b6f509c",
"expression": "last(/DB Backup4/dbbackup.backup.size.[{#NAME}])/last(/DB Backup4/dbbackup.backup.size.[{#NAME}],#2)>1.2",
"name": "[{#NAME}] Backup 20% Greater in size",
"priority": "WARNING",
"manual_close": "YES"
},
{
"uuid": "74d16a7680544c65af22cc568ce3d59d",
"expression": "last(/DB Backup4/dbbackup.backup.size.[{#NAME}])/last(/DB Backup4/dbbackup.backup.size.[{#NAME}],#2)<0.2",
"name": "[{#NAME}] Backup 20% Smaller in Size",
"priority": "WARNING",
"manual_close": "YES"
},
{
"uuid": "5595d769c73f4eaeadda95c84c2c0f17",
"expression": "last(/DB Backup4/dbbackup.backup.size.[{#NAME}])<1K",
"name": "[{#NAME}] Backup Empty",
"priority": "HIGH",
"manual_close": "YES"
}
]
},
{
"uuid": "a6fc542a565c4baba8429ed9ab31b5ae",
"name": "[{#NAME}] Backup - Status",
"type": "TRAP",
"key": "dbbackup.backup.status.[{#NAME}]",
"delay": "0",
"history": "7d",
"description": "Maps exit code by DB Backup procedure",
"valuemap": {
"name": "Backup Status"
},
"tags": [
{
"tag": "Application",
"value": "DB Backup"
}
],
"trigger_prototypes": [
{
"uuid": "74b91e28453b4c2a84743f5e371495c1",
"expression": "last(/DB Backup4/dbbackup.backup.status.[{#NAME}])=1",
"name": "[{#NAME}] Backup - Failed with errors",
"priority": "WARNING",
"manual_close": "YES"
}
]
 }
 ],
-"graph_prototypes": [
+"triggers": [
 {
-"uuid": "b5e8e9fe0c474fedba2b06366234afdf",
-"name": "[{#NAME}] Backup Duration",
-"graph_items": [
-{
-"color": "199C0D",
-"calc_fnc": "ALL",
-"item": {
-"host": "DB Backup4",
-"key": "dbbackup.backup.duration.[{#NAME}]"
-}
-}
-]
+"uuid": "3ac1e074ffea46eb8002c9c08a85e7b4",
+"expression": "nodata(/DB Backup/dbbackup.datetime,2d)=1",
+"name": "DB-Backup: No backups detected in 2 days",
+"priority": "DISASTER",
+"manual_close": "YES"
 },
 {
-"uuid": "99b5deb4e28f40059c50846c7be2ef26",
-"name": "[{#NAME}] Backup Size",
-"graph_items": [
-{
-"color": "199C0D",
-"calc_fnc": "ALL",
-"item": {
-"host": "DB Backup4",
-"key": "dbbackup.backup.size.[{#NAME}]"
-}
-}
-]
+"uuid": "b8b5933dfa1a488c9c37dd7f4784c1ff",
+"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)=0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)<>0",
+"name": "DB Backup: No Backups occurred in 2 days",
+"priority": "AVERAGE"
 },
 {
-"uuid": "8c641e33659e4c8b866da64e252cfc2a",
-"name": "[{#NAME}] Checksum Duration",
-"graph_items": [
-{
-"color": "199C0D",
-"calc_fnc": "ALL",
-"item": {
-"host": "DB Backup4",
-"key": "dbbackup.backup.checksum.duration.[{#NAME}]"
-}
-}
-]
+"uuid": "35c5f420d0e142cc9601bae38decdc40",
+"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)=0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)<>0",
+"name": "DB Backup: No Backups occurred in 3 days",
+"priority": "AVERAGE"
 },
 {
-"uuid": "65b8770f71ed4cff9111b82c42b17571",
-"name": "[{#NAME}] Encrypt Duration",
-"graph_items": [
-{
-"color": "199C0D",
-"calc_fnc": "ALL",
-"item": {
-"host": "DB Backup4",
-"key": "dbbackup.backup.encrypt.duration.[{#NAME}]"
-}
+"uuid": "03c3719d82c241e886a0383c7d908a77",
+"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)=0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)<>0",
+"name": "DB Backup: No Backups occurred in 4 days",
+"priority": "AVERAGE"
+},
+{
+"uuid": "1634a03e44964e42b7e0101f5f68499c",
+"expression": "fuzzytime(/DB Backup/dbbackup.datetime,172800s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,259200s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,345600s)<>0 and fuzzytime(/DB Backup/dbbackup.datetime,432800s)=0",
+"name": "DB Backup: No Backups occurred in 5 days or more",
+"priority": "HIGH"
 }
 ]
+},
+{
+"uuid": "467dfec952b34f5aa4cc890b4351b62d",
+"name": "DB Backup: Backup Size",
+"type": "TRAP",
+"key": "dbbackup.size",
+"delay": "0",
+"history": "7d",
+"units": "B",
+"request_method": "POST",
+"tags": [
+{
+"tag": "Application",
+"value": "DB Backup"
+}
+],
+"triggers": [
+{
+"uuid": "a41eb49b8a3541afb6de247dca750e38",
+"expression": "last(/DB Backup/dbbackup.size)/last(/DB Backup/dbbackup.size,#2)>1.2",
+"name": "DB Backup: 20% Greater in Size",
+"priority": "WARNING",
+"manual_close": "YES"
+},
+{
+"uuid": "422f66be5049403293f3d96fc53f20cd",
+"expression": "last(/DB Backup/dbbackup.size)/last(/DB Backup/dbbackup.size,#2)<0.2",
+"name": "DB Backup: 20% Smaller in Size",
+"priority": "WARNING",
+"manual_close": "YES"
+},
+{
+"uuid": "d6d9d875b92f4d799d4bc89aabd4e90e",
+"expression": "last(/DB Backup/dbbackup.size)<1K",
+"name": "DB Backup: empty",
+"priority": "HIGH"
+}
+]
+},
+{
+"uuid": "a6b13e8b46a64abab64a4d44d620d272",
+"name": "DB Backup: Last Backup Status",
+"type": "TRAP",
+"key": "dbbackup.status",
+"delay": "0",
+"history": "7d",
+"description": "Maps Exit Codes received by backup applications",
+"valuemap": {
+"name": "DB Backup Status"
+},
+"tags": [
+{
+"tag": "Application",
+"value": "DB Backup"
+}
+],
+"triggers": [
+{
+"uuid": "23d71e356f96493180f02d4b84a79fd6",
+"expression": "last(/DB Backup/dbbackup.status)=1",
+"name": "DB Backup: Failed Backup Detected",
+"priority": "HIGH",
+"manual_close": "YES"
+}
 }
 ]
 }
@@ -293,10 +168,38 @@
 "value": "Database"
 }
 ],
+"dashboards": [
+{
+"uuid": "90c81bb47184401ca9663626784a6f30",
+"name": "DB Backup",
+"pages": [
+{
+"widgets": [
+{
+"type": "GRAPH_CLASSIC",
+"name": "Backup Size",
+"width": "23",
+"height": "5",
+"fields": [
+{
+"type": "GRAPH",
+"name": "graphid",
+"value": {
+"name": "DB Backup: Backup Size",
+"host": "DB Backup"
+}
+}
+]
+}
+]
+}
+]
+}
+],
 "valuemaps": [
 {
-"uuid": "92a87279388b4fd1ac51c1e417e1776e",
-"name": "Backup Status",
+"uuid": "82f3a3d01b3c42b8942b59d2363724e0",
+"name": "DB Backup Status",
 "mappings": [
 {
 "value": "0",
@@ -311,6 +214,36 @@
 }
 ]
 }
+],
+"graphs": [
+{
+"uuid": "6e02c200b76046bab76062cd1ab086b2",
+"name": "DB Backup: Backup Duration",
+"graph_items": [
+{
+"color": "199C0D",
+"item": {
+"host": "DB Backup",
+"key": "dbbackup.backup_duration"
+}
+}
+]
+},
+{
+"uuid": "b881ee18f05c4f4c835982c9dfbb55d6",
+"name": "DB Backup: Backup Size",
+"type": "STACKED",
+"graph_items": [
+{
+"sortorder": "1",
+"color": "1A7C11",
+"item": {
+"host": "DB Backup",
+"key": "dbbackup.size"
+}
+}
+]
+}
 ]
 }
 }
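The 3.x template's staleness triggers chain Zabbix `fuzzytime()` checks, where `fuzzytime(item, N)` returns 1 when the item's timestamp lies within N seconds of server time and 0 otherwise. The sketch below is illustrative only (not part of the template): `classify_backup_age` is a hypothetical helper, the window values are copied verbatim from the template (including the 432800s figure), and the template's exact boolean combinations are simplified down to one tier per age band.

```python
from typing import Optional

# Windows taken verbatim from the template's trigger expressions:
# 2, 3, 4 days in seconds, plus the template's 432800s value.
WINDOWS = [
    (172800, 259200, "No Backups occurred in 2 days"),   # AVERAGE
    (259200, 345600, "No Backups occurred in 3 days"),   # AVERAGE
    (345600, 432800, "No Backups occurred in 4 days"),   # AVERAGE
]

def classify_backup_age(age_seconds: int) -> Optional[str]:
    """Return the alert the tiered triggers would raise, or None if fresh."""
    def fuzzytime(window: int) -> int:
        # Mirrors fuzzytime(): 1 if the last backup timestamp is within
        # `window` seconds of now, else 0.
        return 1 if age_seconds <= window else 0

    for lower, upper, alert in WINDOWS:
        # Stale beyond `lower` but still within `upper` -> that tier fires.
        if fuzzytime(lower) == 0 and fuzzytime(upper) != 0:
            return alert
    if fuzzytime(432800) == 0:
        return "No Backups occurred in 5 days or more"    # HIGH
    return None  # backup is recent; no trigger fires
```

For example, an age of 200000 seconds (just past two days) falls in the first band, while anything beyond 432800 seconds hits the final HIGH-priority tier.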