Compare commits


146 Commits
1.10 ... 2.11.3

Author SHA1 Message Date
Dave Conroy
fa1dde53c0 Release 2.11.3 - See CHANGELOG.md 2022-02-09 19:34:27 -08:00
Dave Conroy
789a9e5f5a Release 2.11.2 - See CHANGELOG.md 2022-02-09 17:14:25 -08:00
Dave Conroy
b80d61f997 Merge pull request #102 from jacksgt/fix-s3-variable
Fix S3 variables (again)
2022-02-08 13:58:33 -08:00
Jack Henschel
9e23def7b4 Fix S3 variables (again)
S3_ENDPOINT is not used anywhere in the code, but S3_HOST.
2022-02-08 21:52:53 +01:00
Dave Conroy
ca31eb840d 2.11.1 - See CHANGELOG.md 2022-02-06 17:03:36 -08:00
Dave Conroy
6bec188633 Merge pull request #101 from jacksgt/add-s3-endpoint
Correct several variables for S3 backups
2022-02-06 17:01:18 -08:00
Dave Conroy
43562077b0 Merge pull request #100 from jacksgt/patch-1
Use exit code 0 when running in manual mode
2022-02-06 16:59:00 -08:00
Jack Henschel
c39bdeebae Correct several variables for S3 backups
* S3_URI_STYLE is not used anywhere (anymore)
* BACKUP_TYPE is not documented anywhere, redundant with BACKUP_LOCATION
* S3_HOST is not used anymore, document S3_ENDPOINT instead

ref 4d6419fd18
2022-02-06 00:50:35 +01:00
Jack Henschel
36b5909483 Use exit code 0 when running in manual mode
When running this container as a Kubernetes cronjob, it is really inconvenient that the script exits with return code "1", thereby marking the job as a failure.
Instead, the script should return "0" because everything was successful.
2022-02-06 00:20:30 +01:00
Dave Conroy
6033b1b0a9 Update README to talk about quoting the values 2022-01-31 13:59:53 -08:00
Dave Conroy
88218915e1 Release 2.11.0 - See CHANGELOG.md 2022-01-20 09:23:06 -08:00
Dave Conroy
065887f789 Release 2.10.3 - See CHANGELOG.md 2022-01-07 06:33:46 -08:00
Dave Conroy
5aba713b73 Release 2.10.2 - See CHANGELOG.md 2021-12-28 14:16:46 -08:00
Dave Conroy
9a6039d71d Release 2.10.1 - See CHANGELOG.md 2021-12-24 17:43:18 -08:00
Dave Conroy
c5f1618231 Merge pull request #93 from milenkara/master
Provide region when using S3
2021-12-24 17:42:16 -08:00
milenkara
7dd9fa890f Provide region when using S3 2021-12-24 16:26:03 +01:00
Dave Conroy
b62554ceff Release 2.10.0 - See CHANGELOG.md 2021-12-22 14:29:35 -08:00
Dave Conroy
7729743ccf Release 2.9.7 - See CHANGELOG.md 2021-12-15 07:17:57 -08:00
Dave Conroy
d56efc0ee9 Release 2.9.6 - See CHANGELOG.md 2021-12-13 17:40:43 -08:00
Dave Conroy
7d87e474e0 Merge branch 'master' of https://github.com/tiredofit/docker-db-backup 2021-12-13 17:38:25 -08:00
Dave Conroy
342c252d9a Release 2.9.5 - See CHANGELOG.md 2021-12-13 17:33:14 -08:00
Dave Conroy
25f3cab21f Merge pull request #91 from alexbarcelo/minio_fix
MINIO support by reacting to S3_HOST
2021-12-13 17:30:50 -08:00
Dave Conroy
e63d56c753 Merge pull request #92 from alexbarcelo/targettimeprint
Fixing the print_notice for Next Backup when `DB_DUMP_BEGIN` is +XXX
2021-12-13 17:27:58 -08:00
Alex Barcelo
86722a8e8a defining target_time variable in that branch 2021-12-13 21:56:45 +01:00
Alex Barcelo
4d6419fd18 reacting to S3_HOST config envvar by setting the --endpoint-url parameter on AWS CLI 2021-12-13 21:49:02 +01:00
Dave Conroy
99153ac6d1 Release 2.9.5 2021-12-07 15:10:28 -08:00
Dave Conroy
142967135d Release 2.9.4 - See CHANGELOG.md 2021-12-07 15:02:47 -08:00
Dave Conroy
1df66853fb Release 2.9.3 - See CHANGELOG.md 2021-11-24 09:53:49 -08:00
Dave Conroy
c019efeb74 Release 2.9.2 - See CHANGELOG.md 2021-10-22 07:12:05 -07:00
Dave Conroy
f276af2512 Merge pull request #86 from teenigma/backup-redis
Fix compression on Redis backup - Fixes #58
2021-10-22 07:10:00 -07:00
Teerapatr K
4e41e66eff Fix compression on Redis backup file 2021-10-22 14:06:37 +07:00
Teerapatr K
4488d113ef Jump out loop when complete 2021-10-22 14:04:54 +07:00
Dave Conroy
1cd014b165 Add manual CI pipeline 2021-10-17 09:41:53 -07:00
Dave Conroy
39bd8537ff Add manual CI pipeline 2021-10-17 09:41:18 -07:00
Dave Conroy
75ded7599c Release 2.9.1 - See CHANGELOG.md 2021-10-15 16:27:19 -07:00
Dave Conroy
5e5986db69 Merge pull request #83 from sbrunecker/bug/79/fix-mysql-8
fix: not able to connect to mysql 8 db with caching_sha2_password
2021-10-15 16:25:20 -07:00
Dave Conroy
1d61f40d0c Release 2.9.1 - See CHANGELOG.md 2021-10-15 16:24:46 -07:00
Dave Conroy
ee8bbb370b Merge pull request #84 from sbrunecker/bug/80/empty-password
fix: db available check getting stuck with empty password
2021-10-15 16:23:26 -07:00
Stefan Brunecker
ae201814fa fix: db available check getting stuck with empty password 2021-10-16 00:50:15 +02:00
Stefan Brunecker
bf8bce0893 fix: not able to connect to mysql 8 db with caching_sha2_password 2021-10-16 00:49:27 +02:00
Dave Conroy
3e8585394d Release 2.9.0 - See CHANGELOG.md 2021-10-15 10:08:27 -07:00
Dave Conroy
2e0c0d9248 Release 2.8.2 - See CHANGELOG.md 2021-10-15 09:52:25 -07:00
Dave Conroy
c81d8e2713 Release 2.8.1 - See CHANGELOG.md 2021-09-01 07:43:01 -07:00
Dave Conroy
b808b35624 Release 2.8.0 - See CHANGELOG.md 2021-08-27 07:22:59 -07:00
Dave Conroy
e060aeb0e5 Merge pull request #70 from the1ts/master
Fix syntax error - Issue #69
2021-06-22 08:26:58 -07:00
Paul Mansfield
03c16cc582 Fix syntax error
Case statement is missing double semi-colon.
2021-06-22 14:06:07 +01:00
Dave Conroy
e45d916b00 Release 2.7.0 - See CHANGELOG.md 2021-06-17 08:45:57 -07:00
Dave Conroy
eb0ee61662 Mongo DB Authentication Database 2021-06-17 07:22:38 -07:00
Dave Conroy
8e737eb579 Update to Alpine 3.14 2021-06-17 07:17:34 -07:00
Dave Conroy
5d2043a603 Merge pull request #63 from james-song/master
Fixed #62
2021-06-12 09:03:41 -07:00
Dave Conroy
89fe251321 Update CHANGELOG.md 2021-06-08 14:16:31 -07:00
Dave Conroy
e9052f1cd6 Release 2.6.1 - See CHANGELOG.md 2021-06-08 13:36:22 -07:00
Dave Conroy
a73528c5b2 Merge pull request #66 from jwillmer/patch-1
Fix for issue #14 - Use db_name in query
2021-06-08 13:33:35 -07:00
Jens Willmer
7d1e112dfc Fix for issue #14 - Use db_name in query
As pointed out by @wglambert the db_name needs to be specified when POSTGRES_DB was set: https://github.com/docker-library/postgres/issues/854#issuecomment-856877311
2021-06-08 22:10:58 +02:00
Dave Conroy
e0bb4c313b Merge pull request #65 from MadWalnut/master
Fix Docker image name and URL
2021-06-01 16:22:32 -07:00
MadWalnut
cb5333a59c Fix Docker image name and URL 2021-06-02 00:34:00 +02:00
Dave Conroy
7b59632378 Update README 2021-05-27 08:40:48 -07:00
james-song
353b1af7b4 Fixed #62, uploading backup files to S3 using awscli 2021-05-10 16:13:21 +09:00
Dave Conroy
a27b46a43a Update README 2021-05-02 16:18:28 -07:00
Dave Conroy
e5b9ee2cc0 Add Issue Templates 2021-05-02 14:48:24 -07:00
Dave Conroy
6ddb749bc6 Add Issue Templates 2021-05-02 14:47:56 -07:00
Dave Conroy
fb299c41dd Update README.md 2021-03-15 17:48:00 -07:00
Dave Conroy
d7d4f1cc19 Merge branch 'master' of https://github.com/tiredofit/docker-db-backup 2021-02-19 08:33:48 -08:00
Dave Conroy
c8c9a80533 Release 2.6.0 - See CHANGELOG.md 2021-02-19 08:33:43 -08:00
Dave Conroy
018234b9bc Merge pull request #56 from tpansino/add-sqlite-support
Add sqlite support
2021-02-19 08:32:52 -08:00
Tom Pansino
912e60edd8 Exit on failed file checks 2021-02-18 00:35:06 -08:00
Tom Pansino
46fddb533c Add sqlite3 to README 2021-02-18 00:09:17 -08:00
Tom Pansino
e8a1859d1a Add initial sqlite3 support 2021-02-17 23:55:22 -08:00
Dave Conroy
30fe2f181c #55 - Fix xz parallel compression 2021-02-14 09:06:21 -08:00
Dave Conroy
f57ce461e9 GitHub CI 2021-01-25 17:05:25 -08:00
Dave Conroy
34aab69cc2 Release 2.5.0 - See CHANGELOG.md 2021-01-25 16:39:42 -08:00
Dave Conroy
1930358775 Multi Arch CI 2021-01-21 15:35:16 -08:00
Dave Conroy
f207f375cc Release 2.4.0 - See CHANGELOG.md 2020-12-07 15:27:20 -08:00
Dave Conroy
88b58bffc5 Release 2.3.2 - See CHANGELOG.md 2020-11-14 12:37:58 -08:00
Dave Conroy
738f7fad25 Release 2.3.1 - See CHANGELOG.md 2020-11-11 13:45:05 -08:00
Dave Conroy
8c4733bf7f Merge pull request #52 from bambi73/master
#51 Fix backup of multiple InfluxDB databases failure
2020-11-11 13:43:56 -08:00
Bambi125
be4d8c0747 #51 Fix backup of multiple InfluxDB databases failure 2020-11-11 22:38:04 +01:00
Dave Conroy
a13849df0a Release 2.3.0 - See CHANGELOG.md 2020-10-15 08:15:10 -07:00
Dave Conroy
cb5347afe5 Release 2.2.2 - See CHANGELOG.md 2020-09-22 21:14:37 -07:00
Dave Conroy
ca03c5369d Merge pull request #47 from tpansino/bug/46-fix-docker-secrets
Fix Docker Secrets injection from DB_USER_FILE/DB_PASS_FILE
2020-09-22 21:02:05 -07:00
Tom Pansino
3008d9125f Fix Docker Secrets injection from DB_USER_FILE/DB_PASS_FILE 2020-09-22 20:32:09 -07:00
Dave Conroy
19cf3d007f Release 2.2.1 - See CHANGELOG.md 2020-09-17 21:39:27 -07:00
Dave Conroy
0bbf142349 Merge pull request #45 from alwynpan/fix-backup-now-date-error-message
Fix backup now date error message
2020-09-17 21:38:10 -07:00
Yao (Alwyn) Pan
1bc357866f #42 Update README 2020-09-18 14:34:06 +10:00
Yao (Alwyn) Pan
b38ad7a5cc #44 Remove 'invalid date' error message when performing backup-now 2020-09-18 14:32:08 +10:00
Dave Conroy
8bc02ee6c8 Release 2.2.0 - See CHANGELOG.md 2020-09-14 07:07:44 -07:00
Dave Conroy
3e71c377c6 Merge pull request #43 from alwynpan/fix-optional-vars
#42 Make DB_USER and DB_PASS optional for some dbtypes; update alpine repo URI
2020-09-14 07:05:29 -07:00
Yao (Alwyn) Pan
76a857239f #42 Make DB_USER and DB_PASS optional for some dbtypes; update alpine repo URI 2020-09-14 19:22:57 +10:00
Dave Conroy
02880d6541 Release 5.1.1 - See CHANGELOG.md 2020-09-01 09:57:58 -07:00
Dave Conroy
564613f329 Merge pull request #41 from zicklag/patch-1
Fix POST_SCRIPT Environment Variable Run
2020-09-01 09:55:37 -07:00
Zicklag
2606d3c4d5 Fix POST_SCRIPT Environment Variable Run 2020-09-01 09:46:43 -05:00
Dave Conroy
51f0206e17 Release 2.1.0 - See CHANGELOG.md 2020-08-29 07:43:24 -07:00
Dave Conroy
8d7bea3315 Merge branch 'master' of https://github.com/tiredofit/docker-db-backup into master 2020-08-29 07:37:10 -07:00
Dave Conroy
30c56229cf Release 2.1.0 - See CHANGELOG.md 2020-08-29 07:37:03 -07:00
Dave Conroy
04594087ed Create FUNDING.yml 2020-06-24 17:08:09 -07:00
Dave Conroy
b57683e992 Update README.md 2020-06-17 08:53:22 -07:00
Dave Conroy
1323966e22 Update README.md 2020-06-17 08:21:12 -07:00
Dave Conroy
310edda88c Reduce size of temporary files
Changed the way backups are performed to reduce temporary files
Removed Rethink Support
Rework MongoDB compression
Remove function prefix from functions
Rename case on variables for easier reading
2020-06-17 08:15:34 -07:00
Dave Conroy
955a08a21b Release 1.23.0 - See CHANGELOG.md 2020-06-15 09:44:07 -07:00
Dave Conroy
bf97c3ab97 Update README.md 2020-06-10 05:48:03 -07:00
Dave Conroy
11969da1ea Release 1.22.0 - See CHANGELOG.md 2020-06-10 05:45:49 -07:00
Dave Conroy
7998156576 Release 1.21.3 - See CHANGELOG.md 2020-06-10 05:19:24 -07:00
Dave Conroy
6655d5a12a Release 1.21.2 - See CHANGELOG.md 2020-06-08 21:29:54 -07:00
Dave Conroy
bd141cc865 Release 1.21.1 - See CHANGELOG.md 2020-06-04 05:59:29 -07:00
Dave Conroy
abf2a877f7 Fix example 2020-06-03 20:41:17 -07:00
Dave Conroy
3115cb3440 Release 1.21.0 - See CHANGELOG.md 2020-06-03 05:55:46 -07:00
Dave Conroy
859ce5ffa3 Release 1.20.1 - See CHANGELOG.md 2020-04-24 15:45:52 -07:00
Dave Conroy
4d1577e553 Fix malformed backtick 2020-04-22 14:39:09 -07:00
Dave Conroy
4f103a5b36 Release 1.20.0 - See CHANGELOG.md 2020-04-22 14:19:20 -07:00
Dave Conroy
0472cba83d Update README.md 2020-04-22 05:36:48 -07:00
Dave Conroy
fe6fab857f Merge branch 'master' of https://github.com/tiredofit/docker-db-backup 2020-04-22 05:36:08 -07:00
Dave Conroy
6113bf64b2 Update README.md 2020-04-22 05:36:03 -07:00
Dave Conroy
95d1129a12 Merge pull request #22 from pascalberger/patch-1
Fix typo
2020-04-22 05:34:48 -07:00
Dave Conroy
f8bab5f045 Release 1.19.0 - See CHANGELOG.md 2020-04-22 05:21:02 -07:00
Dave Conroy
71802d2a28 Release 1.18.2 - See CHANGELOG.md 2020-04-08 12:08:42 -07:00
Dave Conroy
c96a2179b5 Merge pull request #27 from hyun007/master
changed mysql password to env variable
2020-04-08 12:04:55 -07:00
Hyun Jo
42d3aa0fef changed mysql password to env variable 2020-03-19 20:17:49 -04:00
Dave Conroy
e6009e7a1e Release 1.18.1 - See CHANGELOG.md 2020-03-14 07:59:31 -07:00
Pascal Berger
b5466d5b97 Fix typo 2020-03-01 19:43:49 +01:00
Dave Conroy
06b6e685c7 Support new tiredofit/alpine base image 2019-12-30 07:40:13 -08:00
Dave Conroy
d7bdcbd0dc Release 1.17.0 - See CHANGELOG.md 2019-12-09 13:58:11 -08:00
Dave Conroy
62ee7ad3dc Update README.md 2019-10-19 09:09:25 -07:00
Dave Conroy
65c879172f Fix docker-compose.yml example 2019-10-19 09:08:20 -07:00
Dave Conroy
33c942551a Merge pull request #18 from alwynpan/1.16.1
Fix couchdb backup endpoint; Set ENABLE_ZABBIX to FALSE by default
2019-10-01 18:22:55 -07:00
Yao (Alwyn) Pan
36e4d9a2a2 Fix couchdb backup endpoint; Set ENABLE_ZABBIX to FALSE by default 2019-10-02 10:27:26 +10:00
Dave Conroy
aa1c8b3591 Merge pull request #16 from tito/tito-patch-1
Fix usage of DB_PORT for single mariadb database
2019-09-16 07:12:54 -07:00
Mathieu Virbel
243bbb9709 Fix usage of DB_PORT for single mariadb database
Fixes #15
2019-09-16 12:32:16 +02:00
Dave Conroy
78e7434a85 Check for host availability before backup 2019-06-17 14:54:29 -07:00
Dave Conroy
d1d093b87d Merge pull request #13 from spumer/master
Allow override MYSQL DB_PORT
2019-06-16 08:40:20 -07:00
spumer
ea00709aa1 Allow override MYSQL DB_PORT 2019-06-16 16:13:34 +05:00
Dave Conroy
219a6463f3 Merge pull request #12 from claudioaltamura/master
Update README
2019-05-29 07:46:30 -07:00
Claudio Altamura
0ed89369a9 changed DB_SERVER into DB_HOST 2019-05-29 13:26:09 +02:00
Dave Conroy
6bd534258e Add support to backup password protected Redis Hosts 2019-05-24 13:58:54 -07:00
Dave Conroy
48bea7aeee Merge pull request #11 from claudioaltamura/master
chg: added AUTH for redis
2019-05-24 13:55:59 -07:00
Claudio Altamura
c3179d58ba chg: added AUTH for redis 2019-05-22 13:03:57 +02:00
Dave Conroy
fcafa1753d Update Changelog for 1.14 2019-04-20 07:13:52 -07:00
Dave Conroy
d74a516967 Switch to locally installed MongoDB packages 2019-04-20 07:10:13 -07:00
Dave Conroy
b0a5fafc4c Update README.md 2019-03-09 07:32:13 -08:00
Dave Conroy
9051ba559a Postgres backup fixes 2019-03-09 07:29:51 -08:00
Dave Conroy
bf672c0fda Fix for XZ compression failing 2019-03-01 14:31:10 -08:00
Dave Conroy
9993ad2970 Merge pull request #9 from steve-todorov/issue-2
Issue 2: Fixing typo which was causing the script to fail.
2019-03-01 14:30:20 -08:00
Steve Todorov
dc17d60b7b Issue 2: Fixing typo which was causing the script to fail. 2019-03-02 00:16:43 +02:00
Dave Conroy
49356629fe Update Changelog 2018-11-18 15:18:46 -08:00
Dave Conroy
b938c8a761 Merge pull request #6 from skylord123/skylord123-patch-1
Fix influx args as well as some errors in the log
2018-11-18 15:16:03 -08:00
Skylar Sadlier
5807ba07e3 Update run
influx command actually doesn't accept a port argument, so instead pass it in as part of the host (to keep a uniform config with the other DB types)
2018-11-18 16:09:57 -07:00
Skylar Sadlier
96f3120e35 Update run
Fix "unary operator expected" that appears in the first couple lines of the log
2018-11-18 16:08:07 -07:00
17 changed files with 1435 additions and 932 deletions

1 .github/FUNDING.yml vendored Normal file

@@ -0,0 +1 @@
github: [tiredofit]

42 .github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,42 @@
---
name: Bug report
about: If something isn't working right..
title: ''
labels: bug
assignees: ''
---
### Summary
<!-- Summarize the bug encountered -->
### Steps to reproduce
<!-- Describe how one can reproduce the issue - this is very important. Please use an ordered list. -->
### What is the expected *correct* behavior?
<!-- Describe what should be seen instead. -->
### Relevant logs and/or screenshots
<!-- Paste any relevant logs - please use code blocks (```) to format console output, logs, and code as it's tough to read otherwise. -->
### Environment
<!--Your Configuration (please complete the following information): -->
- Image version / tag:
- Host OS:
<details>
<summary>Any logs | docker-compose.yml</summary>
</details>
<!-- Include anything additional -->
### Possible fixes
<!-- If you can, provide details to the root cause that might be responsible for the problem. -->


@@ -0,0 +1,23 @@
---
name: Feature request
about: Suggest an idea or feature
title: ''
labels: enhancement
assignees: ''
---
---
name: Feature Request
about: Suggest an idea for this project
---
**Description of the feature**
<!-- A clear description of the feature you'd like implemented -->
**Benefits of feature**
<!-- Explain the measurable benefits this feature would achieve. -->
**Additional context**
<!--Add any other context or screenshots about the feature request here. -->

110 .github/workflows/main.yml vendored Normal file

@@ -0,0 +1,110 @@
### Application Level Image CI
### Dave Conroy <dave at tiredofit dot ca>
name: 'build'
on:
push:
paths:
- '**'
- '!README.md'
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=${GITHUB_REPOSITORY/docker-/}
set -x
if [[ $GITHUB_REF == refs/heads/* ]]; then
if [[ $GITHUB_REF == refs/heads/*/* ]] ; then
BRANCH="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed "s|refs/heads/||g" | sed "s|/|-|g")"
else
BRANCH=${GITHUB_REF#refs/heads/}
fi
case ${BRANCH} in
"main" | "master" )
BRANCHTAG="${DOCKER_IMAGE}:latest"
;;
"develop" )
BRANCHTAG="${DOCKER_IMAGE}:develop"
;;
* )
if [ -n "${{ secrets.LATEST }}" ] ; then
if [ "${BRANCHTAG}" = "${{ secrets.LATEST }}" ]; then
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest,${DOCKER_IMAGE}:latest"
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
;;
esac
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
GITTAG="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed 's|refs/tags/||g')"
fi
if [ -n "${BRANCHTAG}" ] && [ -n "${GITTAG}" ]; then
TAGS=${BRANCHTAG},${GITTAG}
else
TAGS="${BRANCHTAG}${GITTAG}"
fi
echo ::set-output name=tags::${TAGS}
echo ::set-output name=docker_image::${DOCKER_IMAGE}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Label
id: Label
run: |
if [ -f "Dockerfile" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_repository=\"https://github.com/${GITHUB_REPOSITORY}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_commit=\"${GITHUB_SHA}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_committed_by=\"${GITHUB_ACTOR}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.image_build_date=\"$(date +'%Y-%m-%d %H:%M:%S')\"" Dockerfile
if [ -f "CHANGELOG.md" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_changelog_version=\"$(head -n1 ./CHANGELOG.md | awk '{print $2}')\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_tag=\"${GITHUB_REF#refs/tags/v}\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/heads/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_branch=\"${GITHUB_REF#refs/heads/}\"" Dockerfile
fi
fi
- name: Build
uses: docker/build-push-action@v2
with:
builder: ${{ steps.buildx.outputs.name }}
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm/v7,linux/arm64
push: true
tags: ${{ steps.prep.outputs.tags }}
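The Prepare step above derives the image name from `GITHUB_REPOSITORY` (stripping the `docker-` prefix) and maps the Git ref to one or more image tags. A rough local sketch of that mapping, with the `secrets.LATEST` and nested-branch cases omitted and the variables set by hand rather than injected by Actions:

```bash
#!/bin/bash
# Dry run of the tag-derivation logic from the Prepare step (simplified sketch).
export GITHUB_REPOSITORY="tiredofit/docker-db-backup"

for GITHUB_REF in refs/heads/master refs/heads/develop refs/tags/2.11.3; do
  DOCKER_IMAGE=${GITHUB_REPOSITORY/docker-/}   # -> tiredofit/db-backup
  BRANCHTAG="" GITTAG=""
  if [[ $GITHUB_REF == refs/heads/* ]]; then
    BRANCH=${GITHUB_REF#refs/heads/}
    case ${BRANCH} in
      main | master ) BRANCHTAG="${DOCKER_IMAGE}:latest" ;;
      develop )       BRANCHTAG="${DOCKER_IMAGE}:develop" ;;
      * )             BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest" ;;
    esac
  elif [[ $GITHUB_REF == refs/tags/* ]]; then
    GITTAG="${DOCKER_IMAGE}:${GITHUB_REF#refs/tags/}"
  fi
  echo "${GITHUB_REF} -> ${BRANCHTAG}${GITTAG}"
done
```

For `master` this yields `tiredofit/db-backup:latest`, for `develop` the `:develop` tag, and for a tag ref the version itself.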

110 .github/workflows/manual.yml vendored Normal file

@@ -0,0 +1,110 @@
# Manual Workflow (Application)
name: manual
on:
workflow_dispatch:
inputs:
Manual Build:
description: 'Manual Build'
required: false
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=${GITHUB_REPOSITORY/docker-/}
set -x
if [[ $GITHUB_REF == refs/heads/* ]]; then
if [[ $GITHUB_REF == refs/heads/*/* ]] ; then
BRANCH="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed "s|refs/heads/||g" | sed "s|/|-|g")"
else
BRANCH=${GITHUB_REF#refs/heads/}
fi
case ${BRANCH} in
"main" | "master" )
BRANCHTAG="${DOCKER_IMAGE}:latest"
;;
"develop" )
BRANCHTAG="${DOCKER_IMAGE}:develop"
;;
* )
if [ -n "${{ secrets.LATEST }}" ] ; then
if [ "${BRANCHTAG}" = "${{ secrets.LATEST }}" ]; then
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest,${DOCKER_IMAGE}:latest"
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
;;
esac
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
GITTAG="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed 's|refs/tags/||g')"
fi
if [ -n "${BRANCHTAG}" ] && [ -n "${GITTAG}" ]; then
TAGS=${BRANCHTAG},${GITTAG}
else
TAGS="${BRANCHTAG}${GITTAG}"
fi
echo ::set-output name=tags::${TAGS}
echo ::set-output name=docker_image::${DOCKER_IMAGE}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Label
id: Label
run: |
if [ -f "Dockerfile" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_repository=\"https://github.com/${GITHUB_REPOSITORY}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_commit=\"${GITHUB_SHA}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_committed_by=\"${GITHUB_ACTOR}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image_build_date=\"$(date +'%Y-%m-%d %H:%M:%S')\"" Dockerfile
if [ -f "CHANGELOG.md" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_changelog_version=\"$(head -n1 ./CHANGELOG.md | awk '{print $2}')\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_tag=\"${GITHUB_REF#refs/tags/v}\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/heads/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_branch=\"${GITHUB_REF#refs/heads/}\"" Dockerfile
fi
fi
- name: Build
uses: docker/build-push-action@v2
with:
builder: ${{ steps.buildx.outputs.name }}
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm/v7,linux/arm64
push: true
tags: ${{ steps.prep.outputs.tags }}


@@ -1,3 +1,355 @@
## 2.11.3 2022-02-09 <dave at tiredofit dot ca>
### Changed
- Rework to support new base image
## 2.11.2 2022-02-09 <dave at tiredofit dot ca>
### Changed
- Refresh base image
## 2.11.1 2022-01-20 <jacksgt@github>
### Changed
- Modernized S3 variables and sanity checks
- Change exit code to 0 when executing a manual backup
## 2.11.0 2022-01-20 <dave at tiredofit dot ca>
### Added
- Add capability to select `TEMP_LOCATION` for initial backup and compression before backup completes to avoid filling system memory
### Changed
- Cleanup for MariaDB/MySQL DB ready routines that half worked in 2.10.3
- Code cleanup
## 2.10.3 2022-01-07 <dave at tiredofit dot ca>
### Changed
- Change the way the MariaDB/MySQL connectivity check is performed to allow for better compatibility without requiring the DB_USER to have PROCESS privileges
## 2.10.2 2021-12-28 <dave at tiredofit dot ca>
### Changed
- Remove logrotate configuration for redis which shouldn't exist in the first place
## 2.10.1 2021-12-22 <milenkara@github>
### Added
- Allow for choosing region when backing up to S3
## 2.10.0 2021-12-22 <dave at tiredofit dot ca>
### Changed
- Revert back to Postgresql 14 from packages as it's now in the repositories
- Fix for Zabbix Monitoring
## 2.9.7 2021-12-15 <dave at tiredofit dot ca>
### Changed
- Fixup for Zabbix Autoagent registration
## 2.9.6 2021-12-03 <alexbarcelo@github>
### Changed
- Fix for S3 Minio backup targets
- Fix for annoying output on certain target time print conditions
## 2.9.5 2021-12-07 <dave at tiredofit dot ca>
### Changed
- Fix for 2.9.3
## 2.9.4 2021-12-07 <dave at tiredofit dot ca>
### Added
- Add Zabbix auto register support for templates
## 2.9.3 2021-11-24 <dave at tiredofit dot ca>
### Added
- Alpine 3.15 base
## 2.9.2 2021-10-22 <teenigma@github>
### Fixed
- Fix compression failing on Redis backup
## 2.9.1 2021-10-15 <sbrunecker@github>
### Fixed
- Allow MySQL 8.0 servers to be backed up
- Fixed DB available check getting stuck with empty password
## 2.9.0 2021-10-15 <dave at tiredofit dot ca>
### Added
- Postgresql 14 Support (compiled)
- MSSQL 17.8.1.1
## 2.8.2 2021-10-15 <dave at tiredofit dot ca>
### Changed
- Change to using aws cli from Alpine repositories (fixes #81)
## 2.8.1 2021-09-01 <dave at tiredofit dot ca>
### Changed
- Modernize image with updated environment variables from upstream
## 2.8.0 2021-08-27 <dave at tiredofit dot ca>
### Added
- Alpine 3.14 Base
### Changed
- Fix for syntax error in 2.7.0 Release (Credit the1ts@github)
- Cleanup image and leftover cache with AWS CLI installation
## 2.7.0 2021-06-17 <dave at tiredofit dot ca>
### Added
- MongoDB Authentication Database support (DB_AUTH)
## 2.6.1 2021-06-08 <jwillmer@github>
### Changed
- Fix for Issue #14 - SPLIT_DB=TRUE was not working for Postgres DB server
## 2.6.0 2021-02-19 <tpansino@github>
### Added
- SQLite support
## 2.5.1 2021-02-14 <dave at tiredofit dot ca>
### Changed
- Fix xz backups with `PARALLEL_COMPRESSION=TRUE`
## 2.5.0 2021-01-25 <dave at tiredofit dot ca>
### Added
- Multi Platform Build Variants (ARMv7 AMD64 AArch64)
### Changed
- Alpine 3.13 Base
- Compile Pixz as opposed to relying on testing repository
- MSSQL support is only available under AMD64. The container exits if MSSQL is set to be backed up on any other platform.
## 2.4.0 2020-12-07 <dave at tiredofit dot ca>
### Added
- Switch back to packages for Postgresql (now 13.1)
## 2.3.2 2020-11-14 <dave at tiredofit dot ca>
### Changed
- Reapply S6-Overlay into the filesystem, as the Postgresql build removes S6 files due to edge containing the S6 overlay
## 2.3.1 2020-11-11 <bambi73@github>
### Fixed
- Multiple InfluxDB databases not being backed up correctly
## 2.3.0 2020-10-15 <dave at tiredofit dot ca>
### Added
- Microsoft SQL Server support (experimental)
### Changed
- Compiled Postgresql 13 from source to backup psql/13 hosts
## 2.2.2 2020-09-22 <tpansino@github>
### Fixed
- Patch for 2.2.0 release fixing Docker Secrets Support. Was skipping password check.
## 2.2.1 2020-09-17 <alwynpan@github>
### Fixed
- On-demand/manual backup with `backup-now` was throwing errors about not being able to find a proper date
## 2.2.0 2020-09-14 <alwynpan@github>
### Fixed
- Allow MariaDB and MongoDB to be used with no username and password while still allowing Docker Secrets
- Changed source of Alpine package repositories
## 2.1.1 2020-09-01 <zicklag@github>
### Fixed
- Add eval to POST_SCRIPT execution
## 2.1.0 2020-08-29 <dave at tiredofit dot ca>
### Added
- Add Exit Code variable to be used for custom scripts - See README.md for placement
- Add POST_SCRIPT environment variable to execute command instead of relying on custom script
## 2.0.0 2020-06-17 <dave at tiredofit dot ca>
### Added
- Reworked compression routines to remove dependency on temporary files
- Changed the way that MongoDB compression works - only supports GZ going forward
### Changed
- Code cleanup (removed function prefixes, added verbosity)
### Reverted
- Removed Rethink Support
## 1.23.0 2020-06-15 <dave at tiredofit dot ca>
### Added
- Add zstd compression support
- Add choice of compression level
## 1.22.0 2020-06-10 <dave at tiredofit dot ca>
### Added
- Added EXTRA_OPTS variable to all backup commands to pass extra arguments
## 1.21.3 2020-06-10 <dave at tiredofit dot ca>
### Changed
- Fix `backup-now` manual script due to services.available change
## 1.21.2 2020-06-08 <dave at tiredofit dot ca>
### Added
- Change to support tiredofit/alpine base image 5.0.0
## 1.21.1 2020-06-04 <dave at tiredofit dot ca>
### Changed
- Bugfix to initialization routine
## 1.21.0 2020-06-03 <dave at tiredofit dot ca>
### Added
- Add S3 Compatible Storage Support
### Changed
- Switch some variables to support tiredofit/alpine base image better
- Fix issue with parallel compression not working correctly
## 1.20.1 2020-04-24 <dave at tiredofit dot ca>
### Changed
- Fix Auto Cleanup routines when using `root` as username
## 1.20.0 2020-04-22 <dave at tiredofit dot ca>
### Added
- Docker Secrets Support for DB_USER and DB_PASS variables
## 1.19.0 2020-04-22 <dave at tiredofit dot ca>
### Added
- Custom Script support to execute upon completion of backup
## 1.18.2 2020-04-08 <hyun007 @ github>
### Changed
- Rework to allow passwords with spaces in them for MariaDB / MySQL
## 1.18.1 2020-03-14 <dave at tiredofit dot ca>
### Changed
- Allow for passwords with spaces in them for MariaDB / MySQL
## 1.18.0 2019-12-29 <dave at tiredofit dot ca>
### Added
- Update image to support new tiredofit/alpine base images
## 1.17.3 2019-12-12 <dave at tiredofit dot ca>
### Changed
- Quiet down Zabbix Agent
## 1.17.2 2019-12-12 <dave at tiredofit dot ca>
### Changed
- Re Enable ZABBIX
## 1.17.1 2019-12-10 <dave at tiredofit dot ca>
### Changed
- Fix spelling mistake in container initialization
## 1.17.0 2019-12-09 <dave at tiredofit dot ca>
### Changed
- Stop compiling mongodb-tools as it is back in Alpine:edge repositories
- Cleanup Code
## 1.16 - 2019-06-16 - <dave at tiredofit dot ca>
* Check to see if Database Exists before performing backup
* Fix for MySQL/MariaDB custom ports - Credit to <spumer@github>
## 1.15 - 2019-05-24 - <claudioaltamura @ github>
* Added ability to backup password protected Redis Hosts
## 1.14 - 2019-04-20 - <dave at tiredofit dot ca>
* Switch to using locally built mongodb-tools from tiredofit/mongo-builder due to Alpine removing precompiled packages from repositories
## 1.13 - 2019-03-09 - <dave at tiredofit dot ca>
* Fixed Postgres backup without SPLIT_DB enabled (credit MelwinKfr@github)
* Added DB_PORT reference to properly backup Postgres with non default ports (thanks Maxximus007@github)
## 1.12 - 2019-03-01 - <stevetodorov at github>
* Fix for XZ Compression failing
## 1.11 - 2018-11-19 - <skylord123 at github>
* Fix for Unary Operator Error
## 1.10 - 2018-11-19 - <dave at tiredofit dot ca>
* Fix for InfluxDB for backing up and supporting DB_PORT variable - Thanks skylord123@github
@@ -47,3 +399,4 @@
* Initial Release
* Alpine:Edge


@@ -1,49 +1,70 @@
FROM tiredofit/alpine:edge
LABEL maintainer="Dave Conroy (dave at tiredofit dot ca)"
FROM docker.io/tiredofit/alpine:3.15
### Set Environment Variables
ENV ENABLE_CRON=FALSE \
ENABLE_SMTP=FALSE
ENV MSSQL_VERSION=17.8.1.1-1 \
CONTAINER_ENABLE_MESSAGING=FALSE \
CONTAINER_ENABLE_MONITORING=TRUE \
IMAGE_NAME="tiredofit/db-backup" \
IMAGE_REPO_URL="https://github.com/tiredofit/docker-db-backup/"
### Dependencies
RUN set -ex && \
echo "@testing http://nl.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && \
apk update && \
apk upgrade && \
apk add --virtual .db-backup-build-deps \
build-base \
bzip2-dev \
git \
xz-dev \
&& \
\
apk add --virtual .db-backup-run-deps \
bzip2 \
mongodb-tools \
mariadb-client \
libressl \
pigz \
postgresql \
postgresql-client \
redis \
xz \
&& \
apk add \
influxdb@testing \
pixz@testing \
&& \
\
cd /usr/src && \
mkdir -p pbzip2 && \
curl -ssL https://launchpad.net/pbzip2/1.1/1.1.13/+download/pbzip2-1.1.13.tar.gz | tar xvfz - --strip=1 -C /usr/src/pbzip2 && \
cd pbzip2 && \
make && \
make install && \
\
# Cleanup
rm -rf /usr/src/* && \
apk del .db-backup-build-deps && \
rm -rf /tmp/* /var/cache/apk/*
RUN set -ex && \
apk update && \
apk upgrade && \
apk add -t .db-backup-build-deps \
build-base \
bzip2-dev \
git \
libarchive-dev \
xz-dev \
&& \
\
apk add --no-cache -t .db-backup-run-deps \
aws-cli \
bzip2 \
influxdb \
libarchive \
mariadb-client \
mariadb-connector-c \
mongodb-tools \
libressl \
pigz \
postgresql \
postgresql-client \
redis \
sqlite \
xz \
zstd \
&& \
\
cd /usr/src && \
\
apkArch="$(apk --print-arch)"; \
case "$apkArch" in \
x86_64) mssql=true ; curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/msodbcsql17_${MSSQL_VERSION}_amd64.apk ; curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/mssql-tools_${MSSQL_VERSION}_amd64.apk ; echo y | apk add --allow-untrusted msodbcsql17_${MSSQL_VERSION}_amd64.apk mssql-tools_${MSSQL_VERSION}_amd64.apk ;; \
*) echo >&2 "Detected non x86_64 build variant, skipping MSSQL installation" ;; \
esac; \
mkdir -p /usr/src/pbzip2 && \
curl -sSL https://launchpad.net/pbzip2/1.1/1.1.13/+download/pbzip2-1.1.13.tar.gz | tar xvfz - --strip=1 -C /usr/src/pbzip2 && \
cd /usr/src/pbzip2 && \
make && \
make install && \
mkdir -p /usr/src/pixz && \
curl -sSL https://github.com/vasi/pixz/releases/download/v1.0.7/pixz-1.0.7.tar.xz | tar xvfJ - --strip 1 -C /usr/src/pixz && \
cd /usr/src/pixz && \
./configure \
--prefix=/usr \
--sysconfdir=/etc \
--localstatedir=/var \
&& \
make && \
make install && \
\
### Cleanup
apk del .db-backup-build-deps && \
rm -rf /usr/src/* && \
rm -rf /etc/logrotate.d/redis && \
rm -rf /root/.cache /tmp/* /var/cache/apk/*
### S6 Setup
ADD install /


@@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2016 Dave Conroy
Copyright (c) 2022 Dave Conroy
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

233 README.md

@@ -1,65 +1,86 @@
# tiredofit/db-backup
# github.com/tiredofit/docker-db-backup
[![GitHub release](https://img.shields.io/github/v/tag/tiredofit/docker-db-backup?style=flat-square)](https://github.com/tiredofit/docker-db-backup/releases/latest)
[![Build Status](https://img.shields.io/github/workflow/status/tiredofit/docker-db-backup/build?style=flat-square)](https://github.com/tiredofit/docker-db-backup/actions?query=workflow%3Abuild)
[![Docker Stars](https://img.shields.io/docker/stars/tiredofit/db-backup.svg?style=flat-square&logo=docker)](https://hub.docker.com/r/tiredofit/db-backup/)
[![Docker Pulls](https://img.shields.io/docker/pulls/tiredofit/db-backup.svg?style=flat-square&logo=docker)](https://hub.docker.com/r/tiredofit/db-backup/)
[![Become a sponsor](https://img.shields.io/badge/sponsor-tiredofit-181717.svg?logo=github&style=flat-square)](https://github.com/sponsors/tiredofit)
[![Paypal Donate](https://img.shields.io/badge/donate-paypal-00457c.svg?logo=paypal&style=flat-square)](https://www.paypal.me/tiredofit)
[![Build Status](https://img.shields.io/docker/build/tiredofit/db-backup.svg)](https://hub.docker.com/r/tiredofit/db-backup)
[![Docker Pulls](https://img.shields.io/docker/pulls/tiredofit/db-backup.svg)](https://hub.docker.com/r/tiredofit/db-backup)
[![Docker Stars](https://img.shields.io/docker/stars/tiredofit/db-backup.svg)](https://hub.docker.com/r/tiredofit/db-backup)
[![Docker Layers](https://images.microbadger.com/badges/image/tiredofit/db-backup.svg)](https://microbadger.com/images/tiredofit/db-backup)
* * *
## About
This will build a container for backing up multiple types of DB Servers
# Introduction
Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis servers.
This will build a container for backing up multiple types of DB Servers
Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis, Rethink servers.
* dump to local filesystem
* dump to local filesystem or backup to S3 Compatible services
* select database user and password
* backup all databases
* choose to have an MD5 sum after backup for verification
* delete old backups after specific amount of time
* choose compression type (none, gz, bz, xz)
* choose compression type (none, gz, bz, xz, zstd)
* connect to any container running on the same system
* select how often to run a dump
* select when to start the first dump, whether time of day or relative to container start time
* Execute script after backup for monitoring/alerting purposes
This Container uses Alpine:Edge as a base.
[Changelog](CHANGELOG.md)
# Authors
## Maintainer
- [Dave Conroy](https://github.com/tiredofit)
# Table of Contents
## Table of Contents
- [Introduction](#introduction)
- [Changelog](CHANGELOG.md)
- [Prerequisites](#prerequisites)
- [About](#about)
- [Maintainer](#maintainer)
- [Table of Contents](#table-of-contents)
- [Prerequisites and Assumptions](#prerequisites-and-assumptions)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Build from Source](#build-from-source)
- [Prebuilt Images](#prebuilt-images)
- [Configuration](#configuration)
- [Data Volumes](#data-volumes)
- [Environment Variables](#environmentvariables)
- [Quick Start](#quick-start)
- [Persistent Storage](#persistent-storage)
- [Environment Variables](#environment-variables)
- [Base Images used](#base-images-used)
- [Backing Up to S3 Compatible Services](#backing-up-to-s3-compatible-services)
- [Maintenance](#maintenance)
- [Shell Access](#shell-access)
- [Manual Backups](#manual-backups)
- [Custom Scripts](#custom-scripts)
- [Support](#support)
- [Usage](#usage)
- [Bugfixes](#bugfixes)
- [Feature Requests](#feature-requests)
- [Updates](#updates)
- [License](#license)
# Prerequisites
## Prerequisites and Assumptions
You must have a working DB server or container available for this to work properly; this image does not provide server functionality!
## Installation
# Installation
Automated builds of the image are available on [Docker Hub](https://hub.docker.com/r/tiredofit/db-backup) and are the recommended method of installation.
### Build from Source
Clone this repository and build the image with `docker build -t (imagename) .`
### Prebuilt Images
Builds of the image are available on [Docker Hub](https://hub.docker.com/r/tiredofit/db-backup) and are the recommended method of installation.
```bash
docker pull tiredofit/db-backup
docker pull tiredofit/db-backup:(imagetag)
```
# Quick Start
The following image tags are available along with their tagged release based on what's written in the [Changelog](CHANGELOG.md):
| Container OS | Tag |
| ------------ | --------- |
| Alpine | `:latest` |
## Configuration
### Quick Start
* The quickest way to get started is using [docker-compose](https://docs.docker.com/compose/). See the examples folder for a working [docker-compose.yml](examples/docker-compose.yml) that can be modified for development or production use.
@@ -67,53 +88,131 @@ docker pull tiredofit/db-backup
* Map [persistent storage](#data-volumes) for access to configuration and data files for backup.
> **NOTE**: If you are using this with a docker-compose file along with a separate SQL container, take care not to set the variables to back up immediately; instead, delay execution for a minute, otherwise you will get a failed first backup.
# Configuration
## Data-Volumes
### Persistent Storage
The following directories are used for configuration and can be mapped for persistent storage.
| Directory | Description |
|-----------|-------------|
| `/backup` | Backups |
| Directory | Description |
| ------------------------ | ---------------------------------------------------------------------------------- |
| `/backup` | Backups |
| `/assets/custom-scripts` | *Optional* Put custom scripts in this directory to execute after backup operations |
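As a sketch, mapping both directories when starting the container could look like the following (the host paths are placeholders):

```bash
# Hypothetical run command mapping both persistent-storage paths;
# ./backups and ./scripts on the host are illustrative names.
docker run -d --name db-backup \
  -v $(pwd)/backups:/backup \
  -v $(pwd)/scripts:/assets/custom-scripts \
  tiredofit/db-backup
```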
### Environment Variables
## Environment Variables
#### Base Images used
Along with the Environment Variables from the [Base image](https://hub.docker.com/r/tiredofit/alpine), below is the complete list of available options that can be used to customize your installation.
This image is built on an [Alpine Linux](https://hub.docker.com/r/tiredofit/alpine) base image that relies on an [init system](https://github.com/just-containers/s6-overlay) for added capabilities. Outgoing SMTP capabilities are handled via `msmtp`. Individual container performance monitoring is performed by [zabbix-agent](https://zabbix.org). Additional tools include: `bash`, `curl`, `less`, `logrotate`, `nano`, `vim`.
Be sure to view the following repositories to understand all the customizable options:
| Image | Description |
| ------------------------------------------------------ | -------------------------------------- |
| [OS Base](https://github.com/tiredofit/docker-alpine/) | Customized Image based on Alpine Linux |
| Parameter | Description |
|-----------|-------------|
| `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `rethink`
| `DB_SERVER` | Server Hostname e.g. `mariadb`
| `DB_NAME` | Schema Name e.g. `database`
| `DB_USER` | username for the database - use `root` to back up all of the MySQL databases.
| `DB_PASS` | (optional if DB doesn't require it) password for the database
| `DB_DUMP_FREQ` | How often to do a dump, in minutes. Defaults to 1440 minutes, or once per day.
| `DB_DUMP_BEGIN` | What time to do the first dump. Defaults to immediate. Must be in one of two formats
| | Absolute HHMM, e.g. `2330` or `0415`
| | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half
| `DB_DUMP_DEBUG` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed.
| `DB_DUMP_TARGET` | Where to put the dump file, should be a directory. Supports three formats |
| | Local If the value of `DB_DUMP_TARGET` starts with a `/` character, will dump to a local path, which should be volume-mounted.
| `DB_CLEANUP_TIME` | Value in minutes to delete old backups (only fired when dump frequency fires). 1440 would delete anything above 1 day old. You don't need to set this variable if you want to hold onto everything.
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, or none `NONE` - Default `GZ`
| `MD5` | Generate MD5 Sum in Directory, `TRUE` or `FALSE` - Default `TRUE`
| `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create separate DB backups instead of all in one. - Default `FALSE` |
| `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` - Default `TRUE` |
| Parameter | Description | Default |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- |
| `BACKUP_LOCATION` | Backup to `FILESYSTEM` or `S3` compatible services like S3, Minio, Wasabi | `FILESYSTEM` |
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` | `GZ` |
| `COMPRESSION_LEVEL` | Numerical value of the compression level to use; most allow `1` to `9`, except `ZSTD`, which allows `1` to `19` | `3` |
| `DB_AUTH` | (Mongo Only - Optional) Authentication Database | |
| `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `sqlite3` | |
| `DB_HOST` | Server Hostname e.g. `mariadb`. For `sqlite3`, full path to DB file e.g. `/backup/db.sqlite3` | |
| `DB_NAME` | Schema Name e.g. `database` | |
| `DB_USER` | username for the database - use `root` to back up all of the MySQL databases. | |
| `DB_PASS` | (optional if DB doesn't require it) password for the database | |
| `DB_PORT` | (optional) Set port to connect to DB_HOST. Defaults are provided | varies |
| `DB_DUMP_FREQ` | How often to do a dump, in minutes. Defaults to 1440 minutes, or once per day. | `1440` |
| `DB_DUMP_BEGIN` | What time to do the first dump. Defaults to immediate. Must be in one of two formats | |
| | Absolute HHMM, e.g. `2330` or `0415` | |
| | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half | |
| `DB_CLEANUP_TIME` | Value in minutes to delete old backups (only fired when dump frequency fires). 1440 would delete anything above 1 day old. You don't need to set this variable if you want to hold onto everything. | `FALSE` |
| `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed. | |
| `EXTRA_OPTS` | If you need to pass extra arguments to the backup command, add them here e.g. `--extra-command` | |
| `MYSQL_MAX_ALLOWED_PACKET` | Max allowed packet if backing up MySQL / MariaDB | `512M` |
| `MD5` | Generate MD5 Sum in Directory, `TRUE` or `FALSE` | `TRUE` |
| `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` | `TRUE` |
| `POST_SCRIPT` | Fill this variable in with a command to execute post the script backing up | |
| `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create separate DB backups instead of all in one. | `FALSE` |
| `TEMP_LOCATION` | Perform Backups and Compression in this temporary directory | `/tmp/backups/` |
- When using compression with MongoDB, only `GZ` compression is possible.
- You may need to wrap your `DB_DUMP_BEGIN` value in quotes for it to parse properly. There have been reports of values that start with a `0` being converted into a different format, which will not allow the timer to start at the correct time.
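As an illustrative sketch, a minimal run command with a quoted `DB_DUMP_BEGIN` (all values are examples):

```bash
# Quoting "0415" keeps the leading zero intact so the script can parse
# it as an absolute HHMM start time.
docker run -d --name db-backup \
  -v $(pwd)/backups:/backup \
  -e DB_TYPE=mariadb \
  -e DB_HOST=example-db \
  -e DB_USER=root \
  -e DB_PASS=password \
  -e DB_DUMP_BEGIN="0415" \
  -e DB_DUMP_FREQ=1440 \
  tiredofit/db-backup
```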
#### Backing Up to S3 Compatible Services
If `BACKUP_LOCATION` = `S3` then the following options are used.
| Parameter | Description |
| --------------- | --------------------------------------------------------------------------------------- |
| `S3_BUCKET` | S3 Bucket name e.g. `mybucket` |
| `S3_KEY_ID` | S3 Key ID |
| `S3_KEY_SECRET` | S3 Key Secret |
| `S3_PATH` | S3 Pathname to save to e.g. '`backup`' |
| `S3_REGION` | Define region in which bucket is defined. Example: `ap-northeast-2` |
| `S3_HOST` | Hostname (and port) of S3-compatible service, e.g. `minio:8080`. Defaults to AWS. |
| `S3_PROTOCOL` | Protocol to connect to `S3_HOST`. Either `http` or `https`. Defaults to `https`. |
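A hypothetical configuration backing up to an S3-compatible service (every value below is a placeholder, not a default):

```bash
# Sketch only: bucket, region, and credentials are illustrative.
docker run -d --name db-backup \
  -e DB_TYPE=mysql \
  -e DB_HOST=example-db \
  -e DB_USER=root \
  -e DB_PASS=password \
  -e BACKUP_LOCATION=S3 \
  -e S3_BUCKET=mybucket \
  -e S3_PATH=backup \
  -e S3_REGION=ap-northeast-2 \
  -e S3_KEY_ID=myaccesskeyid \
  -e S3_KEY_SECRET=myaccesskeysecret \
  tiredofit/db-backup
```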
## Maintenance
### Shell Access
For debugging and maintenance purposes you may want to access the container's shell.
```bash
docker exec -it (whatever your container name is, e.g. db-backup) bash
```
### Manual Backups
Manual Backups can be performed by entering the container and typing `backup-now`
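Assuming the container is named `db-backup`, the same can be done in one step from the host:

```bash
# Trigger an immediate backup without opening an interactive shell first
docker exec db-backup backup-now
```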
### Custom Scripts
If you want to execute a custom script at the end of a backup, drop bash scripts with the `.sh` extension into the `/assets/custom-scripts` directory. See the following example:
````bash
$ cat post-script.sh
#!/bin/bash
# #### Example Post Script
# #### $1=EXIT_CODE (After running backup routine)
# #### $2=DB_TYPE (Type of Backup)
# #### $3=DB_HOST (Backup Host)
# #### $4=DB_NAME (Name of Database backed up)
# #### $5=DATE (Date of Backup)
# #### $6=TIME (Time of Backup)
# #### $7=BACKUP_FILENAME (Filename of Backup)
# #### $8=FILESIZE (Filesize of backup)
# #### $9=MD5_RESULT (MD5Sum if enabled)
echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ${6}. Filename: ${7} Size: ${8} bytes MD5: ${9}"
````
Outputs the following on the console:
`0 mysql Backup Completed on example-db for example on 2020-04-22 05:19:10. Filename: mysql_example_example-db_20200422-051910.sql.bz2 Size: 7795 bytes MD5: 952fbaafa30437494fdf3989a662cd40`
If you wish to change the size value from bytes to megabytes, set the environment variable `SIZE_VALUE=megabytes`.
## Support
These images were built to serve a specific need in a production environment and gradually have had more functionality added based on requests from the community.
### Usage
- The [Discussions board](../../discussions) is a great place for working with the community on tips and tricks of using this image.
- Consider [sponsoring me](https://github.com/sponsors/tiredofit) personalized support.
### Bugfixes
- Please submit a [Bug Report](issues/new) if something isn't working as expected. I'll do my best to issue a fix in short order.
### Feature Requests
- Feel free to submit a feature request, however there is no guarantee that it will be added, or on what timeline.
- Consider [sponsoring me](https://github.com/sponsors/tiredofit) regarding development of features.
### Updates
- Best effort to track upstream changes; higher priority if I am actively using the image in a production environment.
- Consider [sponsoring me](https://github.com/sponsors/tiredofit) for up to date releases.
## License
MIT. See [LICENSE](LICENSE) for more details.

4 examples/docker-compose.yml Normal file → Executable file

@@ -19,7 +19,8 @@ services:
links:
- example-db
volumes:
- ./backups:/backups
- ./backups:/backup
- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- DB_TYPE=mariadb
- DB_HOST=example-db
@@ -32,7 +33,6 @@ services:
- MD5=TRUE
- COMPRESSION=XZ
- SPLIT_DB=FALSE
restart: always

14 examples/post-script.sh Executable file

@@ -0,0 +1,14 @@
#!/bin/bash
## Example Post Script
## $1=EXIT_CODE (After running backup routine)
## $2=DB_TYPE (Type of Backup)
## $3=DB_HOST (Backup Host)
## $4=DB_NAME (Name of Database backed up)
## $5=DATE (Date of Backup)
## $6=TIME (Time of Backup)
## $7=BACKUP_FILENAME (Filename of Backup)
## $8=FILESIZE (Filesize of backup)
## $9=MD5_RESULT (MD5Sum if enabled)
echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ${6}. Filename: ${7} Size: ${8} bytes MD5: ${9}"

3 install/etc/cont-finish.d/10-db-backup Normal file → Executable file

@@ -1,4 +1,3 @@
#!/usr/bin/with-contenv bash
#!/command/with-contenv bash
pkill bash


@@ -0,0 +1,11 @@
#!/command/with-contenv bash
source /assets/functions/00-container
prepare_service single
prepare_service 03-monitoring
PROCESS_NAME="db-backup"
output_off
create_zabbix dbbackup
liftoff


@@ -1,301 +0,0 @@
#!/usr/bin/with-contenv bash
date >/dev/null
if [ $1 != "NOW" ]; then
sleep 10
fi
### Set Debug Mode
if [ "$DEBUG_MODE" = "TRUE" ] || [ "$DEBUG_MODE" = "true" ]; then
set -x
fi
### Sanity Test
if [ ! -n "$DB_TYPE" ]; then
echo '** [db-backup] ERROR: No Database Type Selected! '
exit 1
fi
if [ ! -n "$DB_HOST" ]; then
echo '** [db-backup] ERROR: No Database Host Entered! '
exit 1
fi
### Set Defaults
COMPRESSION=${COMPRESSION:-GZ}
PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backup}
DBHOST=${DB_HOST}
DBNAME=${DB_NAME}
DBPASS=${DB_PASS}
DBUSER=${DB_USER}
DBTYPE=${DB_TYPE}
MD5=${MD5:-TRUE}
SPLIT_DB=${SPLIT_DB:-FALSE}
TMPDIR=/tmp/backups
if [ $1 = "NOW" ]; then
DB_DUMP_BEGIN=+0
MANUAL=TRUE
fi
### Set Compression Options
if [ "$PARALLEL_COMPRESSION" = "TRUE " ]; then
BZIP="pbzip2"
GZIP="pigz"
XZIP="pixz"
else
BZIP="bzip2"
GZIP="gzip"
XZIP=="xz"
fi
### Set the Database Type
case "$DBTYPE" in
"couch" | "couchdb" | "COUCH" | "COUCHDB" )
DBTYPE=couch
DBPORT=${DB_PORT:-5984}
;;
"influx" | "influxdb" | "INFLUX" | "INFLUXDB" )
DBTYPE=influx
DBPORT=${DB_PORT:-8088}
;;
"mongo" | "mongodb" | "MONGO" | "MONGODB" )
DBTYPE=mongo
DBPORT=${DB_PORT:-27017}
[[ ( -n "${DB_USER}" ) ]] && MONGO_USER_STR=" --username ${DBUSER}"
[[ ( -n "${DB_PASS}" ) ]] && MONGO_PASS_STR=" --password ${DBPASS}"
[[ ( -n "${DB_NAME}" ) ]] && MONGO_DB_STR=" --db ${DBNAME}"
;;
"mysql" | "MYSQL" | "mariadb" | "MARIADB")
DBTYPE=mysql
DBPORT=${DB_PORT:-3306}
[[ ( -n "${DB_PASS}" ) ]] && MYSQL_PASS_STR=" -p${DBPASS}"
;;
"postgres" | "postgresql" | "pgsql" | "POSTGRES" | "POSTGRESQL" | "PGSQL" )
DBTYPE=pgsql
DBPORT=${DB_PORT:-5432}
[[ ( -n "${DB_PASS}" ) ]] && POSTGRES_PASS_STR="PGPASSWORD=${DBPASS}"
;;
"redis" | "REDIS" )
DBTYPE=redis
DBPORT=${DB_PORT:-6379}
;;
"rethink" | "RETHINK" )
DBTYPE=rethink
DBPORT=${DB_PORT:-28015}
[[ ( -n "${DB_PASS}" ) ]] && echo $DB_PASS>/tmp/.rethink.auth; RETHINK_PASS_STR=" --password-file /tmp/.rethink.auth"
[[ ( -n "${DB_NAME}" ) ]] && RETHINK_DB_STR=" -e ${DBNAME}"
;;
esac
### Functions
function backup_couch() {
TARGET=couch_${DBNAME}_${DBHOST}_${now}.txt
curl -X GET http://${DBHOST}:${DBPORT}/${DBNAME}/_all_docs?include_docs=true >${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
}
function backup_mysql() {
if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
DATABASES=`mysql -h $DBHOST -u$DBUSER -p$DBPASS --batch -e "SHOW DATABASES;" | grep -v Database|grep -v schema`
for db in $DATABASES; do
if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
echo "** [db-backup] Dumping database: $db"
TARGET=mysql_${db}_${DBHOST}_${now}.sql
mysqldump --max-allowed-packet=512M -h $DBHOST -u$DBUSER ${MYSQL_PASS_STR} --databases $db > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
done
else
mysqldump --max-allowed-packet=512M -A -h $DBHOST -u$DBUSER ${MYSQL_PASS_STR} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
}
function backup_influx() {
for DB in $DB_NAME; do
influxd backup -database $DB -host ${DBHOST} -p ${DBPORT} ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
done
}
function backup_mongo() {
mongodump --out ${TMPDIR}/${TARGET} --host ${DBHOST} --port ${DBPORT} ${MONGO_USER_STR}${MONGO_PASS_STR}${MONGO_DB_STR} ${EXTRA_OPTS}
cd ${TMPDIR}
tar cf ${TARGET}.tar ${TARGET}/*
TARGET=${TARGET}.tar
generate_md5
compression
move_backup
}
function backup_pgsql() {
if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
export PGPASSWORD=${DBPASS}
DATABASES=`psql -h $DBHOST -U $DBUSER -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' `
for db in $DATABASES; do
echo "** [db-backup] Dumping database: $db"
TARGET=pgsql_${db}_${DBHOST}_${now}.sql
pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
done
else
export PGPASSWORD=${DBPASS}
pg_dump -h ${DBHOST} -U ${DBUSER} $db > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
}
function backup_redis() {
TARGET=redis_${db}_${DBHOST}_${now}.rdb
echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} --rdb ${TMPDIR}/${TARGET}
echo "** [db-backup] Dumping Redis - Flushing Redis Cache First"
sleep 10
try=5
while [ $try -gt 0 ] ; do
saved=$(echo 'info Persistence' | redis-cli -h ${DBHOST} -p ${DBPORT} | awk '/rdb_bgsave_in_progress:0/{print "saved"}')
ok=$(echo 'info Persistence' | redis-cli -h ${DBHOST} -p ${DBPORT} | awk '/rdb_last_bgsave_status:ok/{print "ok"}')
if [[ "$saved" = "saved" ]] && [[ "$ok" = "ok" ]]; then
echo "** [db-backup] Redis Backup Complete"
fi
try=$((try - 1))
echo "** [db-backup] Redis Busy - Waiting and retrying in 5 seconds"
sleep 5
done
generate_md5
compression
move_backup
}
function backup_rethink() {
TARGET=rethink_${db}_${DBHOST}_${now}.tar.gz
echo "** [db-backup] Dumping rethink Database: $db"
rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR}
move_backup
}
function compression() {
case "$COMPRESSION" in
"GZ" | "gz" | "gzip" | "GZIP")
$GZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.gz
;;
"BZ" | "bz" | "bzip2" | "BZIP2" | "bzip" | "BZIP" | "bz2" | "BZ2")
$BZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.bz2
;;
"XZ" | "xz" | "XZIP" | "xzip" )
$XZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.xz
;;
"NONE" | "none" | "FALSE" | "false")
;;
esac
}
function generate_md5() {
if [ "$MD5" = "TRUE" ] || [ "$MD5" = "true" ] ; then
cd $TMPDIR
md5sum ${TARGET} > ${TARGET}.md5
fi
}
function move_backup() {
mkdir -p ${DB_DUMP_TARGET}
mv ${TMPDIR}/*.md5 ${DB_DUMP_TARGET}/
mv ${TMPDIR}/${TARGET} ${DB_DUMP_TARGET}/${TARGET}
}
### Container Startup
echo '** [db-backup] Initialized at '$(date)
### Wait for Next time to start backup
current_time=$(date +"%s")
today=$(date +"%Y%m%d")
if [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then
waittime=$(( ${BASH_REMATCH[1]} * 60 ))
else
target_time=$(date --date="${today}${DB_DUMP_BEGIN}" +"%s")
if [[ "$target_time" < "$current_time" ]]; then
target_time=$(($target_time + 24*60*60))
fi
waittime=$(($target_time - $current_time))
fi
sleep $waittime
### Commence Backup
while true; do
# make sure the directory exists
mkdir -p $TMPDIR
### Define Target name
now=$(date +"%Y%m%d-%H%M%S")
TARGET=${DBTYPE}_${DBNAME}_${DBHOST}_${now}.sql
### Take a Dump
case "$DBTYPE" in
"couch" )
backup_couch
;;
"influx" )
backup_influx
;;
"mysql" )
backup_mysql
;;
"mongo" )
backup_mongo
;;
"pgsql" )
backup_pgsql
;;
"redis" )
backup_redis
;;
"rethink" )
backup_rethink
;;
esac
### Zabbix
if [ "$ENABLE_ZABBIX" = "TRUE" ] || [ "$ENABLE_ZABBIX" = "true" ]; then
zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o `stat -c%s ${DB_DUMP_TARGET}/${TARGET}`
zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o `date -r ${DB_DUMP_TARGET}/${TARGET} +'%s'`
fi
### Automatic Cleanup
if [[ -n "$DB_CLEANUP_TIME" ]]; then
find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "$DBTYPE_$DBNAME_*.*" -exec rm {} \;
fi
### Go back to Sleep until next Backup time
if [ "$MANUAL" = "TRUE" ]; then
exit 1;
else
sleep $(($DB_DUMP_FREQ*60))
fi
done
fi


@@ -0,0 +1,536 @@
#!/command/with-contenv bash
source /assets/functions/00-container
PROCESS_NAME="db-backup"
date >/dev/null
if [ "$1" != "NOW" ]; then
sleep 10
fi
### Sanity Test
sanity_var DB_TYPE "Database Type"
sanity_var DB_HOST "Database Host"
### Set the Database Type
dbtype=${DB_TYPE}
case "$dbtype" in
"couch" | "couchdb" | "COUCH" | "COUCHDB" )
dbtype=couch
dbport=${DB_PORT:-5984}
file_env 'DB_USER'
file_env 'DB_PASS'
;;
"influx" | "influxdb" | "INFLUX" | "INFLUXDB" )
dbtype=influx
dbport=${DB_PORT:-8088}
file_env 'DB_USER'
file_env 'DB_PASS'
;;
"mongo" | "mongodb" | "MONGO" | "MONGODB" )
dbtype=mongo
dbport=${DB_PORT:-27017}
[[ ( -n "${DB_USER}" ) || ( -n "${DB_USER_FILE}" ) ]] && file_env 'DB_USER'
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
;;
"mysql" | "MYSQL" | "mariadb" | "MARIADB")
dbtype=mysql
dbport=${DB_PORT:-3306}
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
MYSQL_MAX_ALLOWED_PACKET=${MYSQL_MAX_ALLOWED_PACKET:-"512M"}
;;
"mssql" | "MSSQL" | "microsoftsql" | "MICROSOFTSQL")
apkArch="$(apk --print-arch)"; \
case "$apkArch" in
x86_64) mssql=true ;;
*) print_error "MSSQL cannot operate on $apkArch processor!" ; exit 1 ;;
esac
dbtype=mssql
dbport=${DB_PORT:-1433}
;;
"postgres" | "postgresql" | "pgsql" | "POSTGRES" | "POSTGRESQL" | "PGSQL" )
dbtype=pgsql
dbport=${DB_PORT:-5432}
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
;;
"redis" | "REDIS" )
dbtype=redis
dbport=${DB_PORT:-6379}
[[ ( -n "${DB_PASS}" || ( -n "${DB_PASS_FILE}" ) ) ]] && file_env 'DB_PASS'
;;
"sqlite" | "sqlite3" | "SQLITE" | "SQLITE3" )
dbtype=sqlite3
;;
esac
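# Normalization examples (illustrative): DB_TYPE=mariadb resolves to
# dbtype=mysql with default port 3306, and DB_TYPE=postgres resolves to
# dbtype=pgsql with default port 5432; setting DB_PORT overrides the default.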
### Set Defaults
BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
COMPRESSION=${COMPRESSION:-"GZ"}
COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_TARGET=${DB_DUMP_TARGET:-"/backup"}
dbhost=${DB_HOST}
dbname=${DB_NAME}
dbpass=${DB_PASS}
dbuser=${DB_USER}
MD5=${MD5:-TRUE}
PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-"TRUE"}
SIZE_VALUE=${SIZE_VALUE:-"bytes"}
SPLIT_DB=${SPLIT_DB:-"FALSE"}
TEMP_LOCATION=${TEMP_LOCATION:-"/tmp/backups"}
if [ "$BACKUP_LOCATION" = "S3" ] || [ "$BACKUP_LOCATION" = "s3" ] || [ "$BACKUP_LOCATION" = "MINIO" ] || [ "$BACKUP_LOCATION" = "minio" ] ; then
S3_PROTOCOL=${S3_PROTOCOL:-"https"}
sanity_var S3_BUCKET "S3 Bucket"
sanity_var S3_KEY_ID "S3 Key ID"
sanity_var S3_KEY_SECRET "S3 Key Secret"
sanity_var S3_PATH "S3 Path"
sanity_var S3_REGION "S3 Region"
file_env 'S3_KEY_ID'
file_env 'S3_KEY_SECRET'
fi
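# A minimal S3/MinIO configuration sketch (hypothetical values, shown as
# comments only):
#   BACKUP_LOCATION=S3
#   S3_BUCKET=backups
#   S3_PATH=db
#   S3_REGION=us-east-1
#   S3_KEY_ID=minioadmin        # or supply via a _FILE secret, per file_env above
#   S3_KEY_SECRET=minioadmin
#   S3_HOST=minio:9000          # optional, for S3-compatible endpoints
#   S3_PROTOCOL=http            # defaults to https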
if [ "$1" = "NOW" ]; then
DB_DUMP_BEGIN=+0
MANUAL=TRUE
fi
### Set Compression Options
if var_true "${PARALLEL_COMPRESSION}" ; then
bzip="pbzip2 -${COMPRESSION_LEVEL}"
gzip="pigz -${COMPRESSION_LEVEL}"
xzip="pixz -${COMPRESSION_LEVEL}"
zstd="zstd --rm -${COMPRESSION_LEVEL}"
else
bzip="bzip2 -${COMPRESSION_LEVEL}"
gzip="gzip -${COMPRESSION_LEVEL}"
xzip="xz -${COMPRESSION_LEVEL} "
zstd="zstd --rm -${COMPRESSION_LEVEL}"
fi
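# Resolved examples (illustrative): with the defaults PARALLEL_COMPRESSION=TRUE
# and COMPRESSION_LEVEL=3, $gzip is "pigz -3" and $bzip is "pbzip2 -3"; with
# PARALLEL_COMPRESSION=FALSE they fall back to the single-threaded "gzip -3"
# and "bzip2 -3" equivalents.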
### Set the Database Authentication Details
case "$dbtype" in
"mongo" )
[[ ( -n "${DB_USER}" ) ]] && MONGO_USER_STR=" --username ${dbuser}"
[[ ( -n "${DB_PASS}" ) ]] && MONGO_PASS_STR=" --password ${dbpass}"
[[ ( -n "${DB_NAME}" ) ]] && MONGO_DB_STR=" --db ${dbname}"
[[ ( -n "${DB_AUTH}" ) ]] && MONGO_AUTH_STR=" --authenticationDatabase ${DB_AUTH}"
;;
"mysql" )
[[ ( -n "${DB_PASS}" ) ]] && export MYSQL_PWD=${dbpass}
;;
"postgres" )
[[ ( -n "${DB_PASS}" ) ]] && POSTGRES_PASS_STR="PGPASSWORD=${dbpass}"
;;
"redis" )
[[ ( -n "${DB_PASS}" ) ]] && REDIS_PASS_STR=" -a ${dbpass}"
;;
esac
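# Example (illustrative): for mongo with DB_USER=admin and DB_PASS=secret,
# MONGO_USER_STR becomes " --username admin" and MONGO_PASS_STR becomes
# " --password secret"; both are concatenated onto the mongodump command
# line in backup_mongo below.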
### Functions
backup_couch() {
target=couch_${dbname}_${dbhost}_${now}.txt
compression
curl -X GET http://${dbhost}:${dbport}/${dbname}/_all_docs?include_docs=true | $dumpoutput > ${TEMP_LOCATION}/${target}
exit_code=$?
generate_md5
move_backup
}
backup_influx() {
if [ "${COMPRESSION}" = "NONE" ] || [ "${COMPRESSION}" = "none" ] || [ "${COMPRESSION}" = "FALSE" ] || [ "${COMPRESSION}" = "false" ] ; then
:
else
print_notice "Compressing InfluxDB backup with gzip"
influx_compression="-portable"
fi
for DB in ${DB_NAME}; do
target=influx_${DB}_${dbhost}_${now}
influxd backup ${influx_compression} -database $DB -host ${dbhost}:${dbport} ${TEMP_LOCATION}/${target}
exit_code=$?
generate_md5
move_backup
done
}
backup_mongo() {
if [ "${COMPRESSION}" = "NONE" ] || [ "${COMPRESSION}" = "none" ] || [ "${COMPRESSION}" = "FALSE" ] || [ "${COMPRESSION}" = "false" ] ; then
target=${dbtype}_${dbname}_${dbhost}_${now}.archive
else
print_notice "Compressing MongoDB backup with gzip"
target=${dbtype}_${dbname}_${dbhost}_${now}.archive.gz
mongo_compression="--gzip"
fi
mongodump --archive=${TEMP_LOCATION}/${target} ${mongo_compression} --host ${dbhost} --port ${dbport} ${MONGO_USER_STR}${MONGO_PASS_STR}${MONGO_AUTH_STR}${MONGO_DB_STR} ${EXTRA_OPTS}
exit_code=$?
cd ${TEMP_LOCATION}
generate_md5
move_backup
}
backup_mssql() {
target=mssql_${dbname}_${dbhost}_${now}.bak
/opt/mssql-tools/bin/sqlcmd -C -S ${dbhost},${dbport} -U ${dbuser} -P ${dbpass} -Q "BACKUP DATABASE [${dbname}] TO DISK = N'${TEMP_LOCATION}/${target}' WITH NOFORMAT, NOINIT, NAME = '${dbname}-full', SKIP, NOREWIND, NOUNLOAD, STATS = 10"
exit_code=$?
generate_md5
move_backup
}
backup_mysql() {
if var_true "${SPLIT_DB}" ; then
DATABASES=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema)
for db in $DATABASES; do
if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
print_notice "Dumping MariaDB database: $db"
target=mysql_${db}_${dbhost}_${now}.sql
compression
mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} --databases $db | $dumpoutput > ${TEMP_LOCATION}/${target}
exit_code=$?
generate_md5
move_backup
fi
done
else
compression
mysqldump --max-allowed-packet=${MYSQL_MAX_ALLOWED_PACKET} -A -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
exit_code=$?
generate_md5
move_backup
fi
}
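# Example (illustrative): with SPLIT_DB=TRUE and databases "app" and "blog",
# the loop above writes mysql_app_<DB_HOST>_<timestamp>.sql.gz and
# mysql_blog_<DB_HOST>_<timestamp>.sql.gz (suffix depends on COMPRESSION);
# with SPLIT_DB=FALSE a single all-databases (-A) dump is written instead.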
backup_pgsql() {
if var_true "${SPLIT_DB}" ; then
export PGPASSWORD=${dbpass}
authdb=${DB_USER}
[ -n "${DB_NAME}" ] && authdb=${DB_NAME}
DATABASES=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
for db in $DATABASES; do
print_info "Dumping database: $db"
target=pgsql_${db}_${dbhost}_${now}.sql
compression
pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
exit_code=$?
generate_md5
move_backup
done
else
export PGPASSWORD=${dbpass}
compression
pg_dump -h ${dbhost} -U ${dbuser} -p ${dbport} ${dbname} ${EXTRA_OPTS} | $dumpoutput > ${TEMP_LOCATION}/${target}
exit_code=$?
generate_md5
move_backup
fi
}
backup_redis() {
target=redis_${dbname}_${dbhost}_${now}.rdb
echo bgsave | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} --rdb ${TEMP_LOCATION}/${target} ${EXTRA_OPTS}
print_info "Dumping Redis - Flushing Redis Cache First"
sleep 10
try=5
while [ $try -gt 0 ] ; do
saved=$(echo 'info Persistence' | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} | awk '/rdb_bgsave_in_progress:0/{print "saved"}')
ok=$(echo 'info Persistence' | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} | awk '/rdb_last_bgsave_status:ok/{print "ok"}')
if [[ "$saved" = "saved" ]] && [[ "$ok" = "ok" ]]; then
print_info "Redis Backup Complete"
break
fi
try=$((try - 1))
print_info "Redis Busy - Waiting and retrying in 5 seconds"
sleep 5
done
target_original=${target}
compression
$dumpoutput "${TEMP_LOCATION}/${target_original}"
generate_md5
move_backup
}
backup_sqlite3() {
db=$(basename "$dbhost")
db="${db%.*}"
target=sqlite3_${db}_${now}.sqlite3
compression
print_info "Dumping sqlite3 database: ${dbhost}"
sqlite3 "${dbhost}" ".backup '${TEMP_LOCATION}/backup.sqlite3'"
exit_code=$?
cat "${TEMP_LOCATION}/backup.sqlite3" | $dumpoutput > "${TEMP_LOCATION}/${target}"
generate_md5
move_backup
}
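# Example (illustrative): for sqlite3, DB_HOST is a file path rather than a
# hostname, so DB_TYPE=sqlite3 DB_HOST=/data/app.db yields a target named
# sqlite3_app_<timestamp>.sqlite3 via the basename and suffix stripping above.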
check_availability() {
### Set the Database Type
case "$dbtype" in
"couch" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "CouchDB Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"influx" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "InfluxDB Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"mongo" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "Mongo Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"mysql" )
COUNTER=0
export MYSQL_PWD=${dbpass}
while ! (mysqladmin -u"${dbuser}" -P"${dbport}" -h"${dbhost}" status > /dev/null 2>&1) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "MySQL/MariaDB Server '${dbhost}' is not accessible, retrying.. (${COUNTER} seconds so far)"
done
;;
"mssql" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "MSSQL Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"pgsql" )
COUNTER=0
export PGPASSWORD=${dbpass}
until pg_isready --dbname=${dbname} --host=${dbhost} --port=${dbport} --username=${dbuser} -q
do
sleep 5
(( COUNTER+=5 ))
print_warn "Postgres Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"redis" )
COUNTER=0
while ! (nc -z "${dbhost}" "${dbport}") ; do
sleep 5
(( COUNTER+=5 ))
print_warn "Redis Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"sqlite3" )
if [[ ! -e "${dbhost}" ]]; then
print_error "File '${dbhost}' does not exist."
exit_code=2
exit $exit_code
elif [[ ! -f "${dbhost}" ]]; then
print_error "File '${dbhost}' is not a file."
exit_code=2
exit $exit_code
elif [[ ! -r "${dbhost}" ]]; then
print_error "File '${dbhost}' is not readable."
exit_code=2
exit $exit_code
fi
;;
esac
}
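# Note: the network checks above retry every 5 seconds indefinitely, so an
# unreachable host blocks the backup loop rather than failing; sqlite3 is the
# only type that exits immediately (exit code 2) when its file is missing,
# not a regular file, or unreadable.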
compression() {
case "$COMPRESSION" in
"GZ" | "gz" | "gzip" | "GZIP")
print_notice "Compressing backup with gzip"
target=${target}.gz
dumpoutput="$gzip "
;;
"BZ" | "bz" | "bzip2" | "BZIP2" | "bzip" | "BZIP" | "bz2" | "BZ2")
print_notice "Compressing backup with bzip2"
target=${target}.bz2
dumpoutput="$bzip "
;;
"XZ" | "xz" | "XZIP" | "xzip" )
print_notice "Compressing backup with xzip"
target=${target}.xz
dumpoutput="$xzip "
;;
"ZSTD" | "zstd" | "ZST" | "zst" )
print_notice "Compressing backup with zstd"
target=${target}.zst
dumpoutput="$zstd "
;;
"NONE" | "none" | "FALSE" | "false")
dumpoutput="cat "
;;
esac
}
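# compression() only selects a filter; callers pipe through it, e.g. (sketch):
#   pg_dump ... | $dumpoutput > ${TEMP_LOCATION}/${target}
# where $dumpoutput is "pigz -3", "pbzip2 -3", etc., or "cat" when
# COMPRESSION=NONE so the stream passes through unchanged.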
generate_md5() {
if var_true "$MD5" ; then
print_notice "Generating MD5 for ${target}"
cd ${TEMP_LOCATION}
md5sum "${target}" > "${target}".md5
MD5VALUE=$(md5sum "${target}" | awk '{ print $1}')
fi
}
move_backup() {
case "$SIZE_VALUE" in
"b" | "bytes" )
SIZE_VALUE=1
;;
"[kK]" | "[kK][bB]" | "kilobytes" | "[mM]" | "[mM][bB]" | "megabytes" )
SIZE_VALUE="-h"
;;
*)
SIZE_VALUE=1
;;
esac
if [ "$SIZE_VALUE" = "1" ] ; then
FILESIZE=$(stat -c%s "${TEMP_LOCATION}/${target}")
print_notice "Backup of ${target} created with the size of ${FILESIZE} bytes"
else
FILESIZE=$(du -h "${TEMP_LOCATION}/${target}" | awk '{ print $1}')
print_notice "Backup of ${target} created with the size of ${FILESIZE}"
fi
case "${BACKUP_LOCATION}" in
"FILE" | "file" | "filesystem" | "FILESYSTEM" )
mkdir -p "${DB_DUMP_TARGET}"
mv ${TEMP_LOCATION}/*.md5 "${DB_DUMP_TARGET}"/
mv ${TEMP_LOCATION}/"${target}" "${DB_DUMP_TARGET}"/"${target}"
;;
"S3" | "s3" | "MINIO" | "minio" )
export AWS_ACCESS_KEY_ID=${S3_KEY_ID}
export AWS_SECRET_ACCESS_KEY=${S3_KEY_SECRET}
export AWS_DEFAULT_REGION=${S3_REGION}
[[ ( -n "${S3_HOST}" ) ]] && PARAM_AWS_ENDPOINT_URL=" --endpoint-url ${S3_PROTOCOL}://${S3_HOST}"
aws ${PARAM_AWS_ENDPOINT_URL} s3 cp ${TEMP_LOCATION}/${target} s3://${S3_BUCKET}/${S3_PATH}/${target}
rm -rf ${TEMP_LOCATION}/*.md5
rm -rf ${TEMP_LOCATION}/"${target}"
;;
esac
}
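# Sketch of the S3 branch above, assuming S3_HOST=minio:9000 and
# S3_PROTOCOL=http (illustrative values): the upload expands to
#   aws --endpoint-url http://minio:9000 s3 cp ${TEMP_LOCATION}/${target} s3://${S3_BUCKET}/${S3_PATH}/${target}
# When S3_HOST is unset, no --endpoint-url is passed and the AWS CLI targets
# the standard endpoint for AWS_DEFAULT_REGION.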
### Container Startup
print_debug "Backup routines Initialized on $(date)"
### Wait for Next time to start backup
if [ "$1" != "NOW" ]; then
current_time=$(date +"%s")
today=$(date +"%Y%m%d")
if [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then
waittime=$(( ${BASH_REMATCH[1]} * 60 ))
target_time=$(($current_time + $waittime))
else
target_time=$(date --date="${today}${DB_DUMP_BEGIN}" +"%s")
if [[ "$target_time" < "$current_time" ]]; then
target_time=$(($target_time + 24*60*60))
fi
waittime=$(($target_time - $current_time))
fi
print_notice "Next Backup at $(date -d @${target_time} +"%Y-%m-%d %T %Z")"
sleep $waittime
fi
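# Example (illustrative): DB_DUMP_BEGIN=+120 sleeps 7200 seconds; because
# target_time is now also set in the "+" branch, the notice above prints a
# concrete timestamp such as "Next Backup at 2022-02-10 01:30:00 PST".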
### Commence Backup
while true; do
# make sure the directory exists
mkdir -p $TEMP_LOCATION
### Define Target name
now=$(date +"%Y%m%d-%H%M%S")
now_time=$(date +"%H:%M:%S")
now_date=$(date +"%Y-%m-%d")
target=${dbtype}_${dbname}_${dbhost}_${now}.sql
### Take a Dump
case "$dbtype" in
"couch" )
check_availability
backup_couch
;;
"influx" )
check_availability
backup_influx
;;
"mssql" )
check_availability
backup_mssql
;;
"mysql" )
check_availability
backup_mysql
;;
"mongo" )
check_availability
backup_mongo
;;
"pgsql" )
check_availability
backup_pgsql
;;
"redis" )
check_availability
backup_redis
;;
"sqlite3" )
check_availability
backup_sqlite3
;;
esac
### Zabbix
if var_true "${CONTAINER_ENABLE_MONITORING}" ; then
print_notice "Sending Backup Statistics to Zabbix"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o "$(stat -c%s "${DB_DUMP_TARGET}"/"${target}")"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o "$(date -r "${DB_DUMP_TARGET}"/"${target}" +'%s')"
fi
### Automatic Cleanup
if [[ -n "$DB_CLEANUP_TIME" ]]; then
print_notice "Cleaning up old backups"
find "${DB_DUMP_TARGET}"/ -mmin +"${DB_CLEANUP_TIME}" -iname "*" -exec rm {} \;
fi
if [ -n "$POST_SCRIPT" ] ; then
print_notice "Found POST_SCRIPT environment variable. Executing"
eval "${POST_SCRIPT}"
fi
### Post Backup Custom Script Support
if [ -d /assets/custom-scripts/ ] ; then
print_notice "Found Custom Filesystem Scripts to Execute"
for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
print_notice "Running Script ${f}"
## script EXIT_CODE DB_TYPE DB_HOST DB_NAME DATE TIME BACKUP_FILENAME FILESIZE MD5_VALUE
chmod +x "${f}"
${f} "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${now_date}" "${now_time}" "${target}" "${FILESIZE}" "${MD5VALUE}"
done
fi
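# A minimal example hook (hypothetical /assets/custom-scripts/notify.sh),
# shown as comments, illustrating the argument order passed above:
#   #!/bin/bash
#   # $1 exit code, $2 db type, $3 db host, $4 db name, $5 date, $6 time,
#   # $7 backup filename, $8 file size, $9 md5 value
#   echo "Backup ${7} of ${4} finished with code ${1} (size ${8}, md5 ${9})"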
### Go back to Sleep until next Backup time
if var_true $MANUAL ; then
exit 0;
else
sleep $(($DB_DUMP_FREQ*60))
fi
done
fi


@@ -1,4 +1,4 @@
-#!/usr/bin/with-contenv bash
+#!/command/with-contenv bash
 echo '** Performing Manual Backup'
-/etc/s6/services/10-db-backup/run NOW
+/etc/services.available/10-db-backup/run NOW


@@ -1,515 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<zabbix_export>
<version>3.4</version>
<date>2018-02-02T19:04:27Z</date>
<groups>
<group>
<name>Discovered Containers</name>
</group>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>Service - ICMP</template>
<name>Service - ICMP (Ping)</name>
<description/>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<items>
<item>
<name>ICMP ping</name>
<type>3</type>
<snmp_community/>
<snmp_oid/>
<key>icmpping</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<valuemap>
<name>Service state</name>
</valuemap>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>ICMP loss</name>
<type>3</type>
<snmp_community/>
<snmp_oid/>
<key>icmppingloss</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>0</value_type>
<allowed_hosts/>
<units>%</units>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>ICMP response time</name>
<type>3</type>
<snmp_community/>
<snmp_oid/>
<key>icmppingsec</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>0</value_type>
<allowed_hosts/>
<units>s</units>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
</items>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
</template>
<template>
<template>Zabbix - Container Agent</template>
<name>Zabbix - Container Agent</name>
<description/>
<groups>
<group>
<name>Discovered Containers</name>
</group>
<group>
<name>Templates</name>
</group>
</groups>
<applications>
<application>
<name>Packages</name>
</application>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<items>
<item>
<name>Hostname of Container</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.hostname</key>
<delay>1h</delay>
<history>1w</history>
<trends>0</trends>
<status>0</status>
<value_type>1</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>3</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Contaner OS</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.os</key>
<delay>6h</delay>
<history>30d</history>
<trends>0</trends>
<status>0</status>
<value_type>1</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>5</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Zabbix Agent ping</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.ping</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description>The agent always returns 1 for this item. It could be used in combination with nodata() for availability check.</description>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap>
<name>Zabbix agent ping status</name>
</valuemap>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Zabbix Agent Version</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.version</key>
<delay>1h</delay>
<history>1w</history>
<trends>0</trends>
<status>0</status>
<value_type>1</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Upgradable Packages</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>packages.upgradable</key>
<delay>6h</delay>
<history>90d</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>Packages</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
</items>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
</template>
</templates>
<triggers>
<trigger>
<expression>{Service - ICMP:icmpping.max(3m)}=3</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Cannot be pinged</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>5</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - ICMP:icmppingloss.min(10m)}&gt;50</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Ping loss is too high</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>4</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies>
<dependency>
<name>Cannot be pinged</name>
<expression>{Service - ICMP:icmpping.max(3m)}=3</expression>
<recovery_expression/>
</dependency>
</dependencies>
<tags/>
</trigger>
<trigger>
<expression>{Service - ICMP:icmppingsec.avg(2m)}&gt;100</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Ping Response time is too high</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>4</priority>
<description/>
<type>1</type>
<manual_close>0</manual_close>
<dependencies>
<dependency>
<name>Cannot be pinged</name>
<expression>{Service - ICMP:icmpping.max(3m)}=3</expression>
<recovery_expression/>
</dependency>
</dependencies>
<tags/>
</trigger>
<trigger>
<expression>{Zabbix - Container Agent:packages.upgradable.last()}&gt;0</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Upgraded Packages in Container Available</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>1</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Zabbix - Container Agent:agent.ping.nodata(3m)}=1</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Zabbix agent is unreachable</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>5</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
</triggers>
<value_maps>
<value_map>
<name>Service state</name>
<mappings>
<mapping>
<value>0</value>
<newvalue>Down</newvalue>
</mapping>
<mapping>
<value>1</value>
<newvalue>Up</newvalue>
</mapping>
</mappings>
</value_map>
<value_map>
<name>Zabbix agent ping status</name>
<mappings>
<mapping>
<value>1</value>
<newvalue>Up</newvalue>
</mapping>
</mappings>
</value_map>
</value_maps>
</zabbix_export>