Compare commits


95 Commits

Author SHA1 Message Date
Dave Conroy
9a6039d71d Release 2.10.1 - See CHANGELOG.md 2021-12-24 17:43:18 -08:00
Dave Conroy
c5f1618231 Merge pull request #93 from milenkara/master
Provide region when using S3
2021-12-24 17:42:16 -08:00
milenkara
7dd9fa890f Provide region when using S3 2021-12-24 16:26:03 +01:00
Dave Conroy
b62554ceff Release 2.10.0 - See CHANGELOG.md 2021-12-22 14:29:35 -08:00
Dave Conroy
7729743ccf Release 2.9.7 - See CHANGELOG.md 2021-12-15 07:17:57 -08:00
Dave Conroy
d56efc0ee9 Release 2.9.6 - See CHANGELOG.md 2021-12-13 17:40:43 -08:00
Dave Conroy
7d87e474e0 Merge branch 'master' of https://github.com/tiredofit/docker-db-backup 2021-12-13 17:38:25 -08:00
Dave Conroy
342c252d9a Release 2.9.5 - See CHANGELOG.md 2021-12-13 17:33:14 -08:00
Dave Conroy
25f3cab21f Merge pull request #91 from alexbarcelo/minio_fix
MINIO support by reacting to S3_HOST
2021-12-13 17:30:50 -08:00
Dave Conroy
e63d56c753 Merge pull request #92 from alexbarcelo/targettimeprint
Fixing the print_notice for Next Backup when `DB_DUMP_BEGIN` is +XXX
2021-12-13 17:27:58 -08:00
Alex Barcelo
86722a8e8a defining target_time variable in that branch 2021-12-13 21:56:45 +01:00
Alex Barcelo
4d6419fd18 reacting to S3_HOST config envvar by setting the --endpoint-url parameter on AWS CLI 2021-12-13 21:49:02 +01:00
Dave Conroy
99153ac6d1 Release 2.9.5 2021-12-07 15:10:28 -08:00
Dave Conroy
142967135d Release 2.9.4 - See CHANGELOG.md 2021-12-07 15:02:47 -08:00
Dave Conroy
1df66853fb Release 2.9.3 - See CHANGELOG.md 2021-11-24 09:53:49 -08:00
Dave Conroy
c019efeb74 Release 2.9.2 - See CHANGELOG.md 2021-10-22 07:12:05 -07:00
Dave Conroy
f276af2512 Merge pull request #86 from teenigma/backup-redis
Fix compression on Redis backup - Fixes #58
2021-10-22 07:10:00 -07:00
Teerapatr K
4e41e66eff Fix compression on Redis backup file 2021-10-22 14:06:37 +07:00
Teerapatr K
4488d113ef Jump out loop when complete 2021-10-22 14:04:54 +07:00
Dave Conroy
1cd014b165 Add manual CI pipeline 2021-10-17 09:41:53 -07:00
Dave Conroy
39bd8537ff Add manual CI pipeline 2021-10-17 09:41:18 -07:00
Dave Conroy
75ded7599c Release 2.9.1 - See CHANGELOG.md 2021-10-15 16:27:19 -07:00
Dave Conroy
5e5986db69 Merge pull request #83 from sbrunecker/bug/79/fix-mysql-8
fix: not able to connect to mysql 8 db with caching_sha2_password
2021-10-15 16:25:20 -07:00
Dave Conroy
1d61f40d0c Release 2.9.1 - See CHANGELOG.md 2021-10-15 16:24:46 -07:00
Dave Conroy
ee8bbb370b Merge pull request #84 from sbrunecker/bug/80/empty-password
fix: db available check getting stuck with empty password
2021-10-15 16:23:26 -07:00
Stefan Brunecker
ae201814fa fix: db available check getting stuck with empty password 2021-10-16 00:50:15 +02:00
Stefan Brunecker
bf8bce0893 fix: not able to connect to mysql 8 db with caching_sha2_password 2021-10-16 00:49:27 +02:00
Dave Conroy
3e8585394d Release 2.9.0 - See CHANGELOG.md 2021-10-15 10:08:27 -07:00
Dave Conroy
2e0c0d9248 Release 2.8.2 - See CHANGELOG.md 2021-10-15 09:52:25 -07:00
Dave Conroy
c81d8e2713 Release 2.8.1 - See CHANGELOG.md 2021-09-01 07:43:01 -07:00
Dave Conroy
b808b35624 Release 2.8.0 - See CHANGELOG.md 2021-08-27 07:22:59 -07:00
Dave Conroy
e060aeb0e5 Merge pull request #70 from the1ts/master
Fix syntax error - Issue #69
2021-06-22 08:26:58 -07:00
Paul Mansfield
03c16cc582 Fix syntax error
Case statement is missing double semi-colon.
2021-06-22 14:06:07 +01:00
Dave Conroy
e45d916b00 Release 2.7.0 - See CHANGELOG.md 2021-06-17 08:45:57 -07:00
Dave Conroy
eb0ee61662 Mongo DB Authentication Database 2021-06-17 07:22:38 -07:00
Dave Conroy
8e737eb579 Update to Alpine 3.14 2021-06-17 07:17:34 -07:00
Dave Conroy
5d2043a603 Merge pull request #63 from james-song/master
Fixed #62
2021-06-12 09:03:41 -07:00
Dave Conroy
89fe251321 Update CHANGELOG.md 2021-06-08 14:16:31 -07:00
Dave Conroy
e9052f1cd6 Release 2.6.1 - See CHANGELOG.md 2021-06-08 13:36:22 -07:00
Dave Conroy
a73528c5b2 Merge pull request #66 from jwillmer/patch-1
Fix for issue #14 - Use db_name in query
2021-06-08 13:33:35 -07:00
Jens Willmer
7d1e112dfc Fix for issue #14 - Use db_name in query
As pointed out by @wglambert the db_name needs to be specified when POSTGRES_DB was set: https://github.com/docker-library/postgres/issues/854#issuecomment-856877311
2021-06-08 22:10:58 +02:00
Dave Conroy
e0bb4c313b Merge pull request #65 from MadWalnut/master
Fix Docker image name and URL
2021-06-01 16:22:32 -07:00
MadWalnut
cb5333a59c Fix Docker image name and URL 2021-06-02 00:34:00 +02:00
Dave Conroy
7b59632378 Update README 2021-05-27 08:40:48 -07:00
james-song
353b1af7b4 Fixed #62, for upload backup files at S3 using awscli 2021-05-10 16:13:21 +09:00
Dave Conroy
a27b46a43a Update README 2021-05-02 16:18:28 -07:00
Dave Conroy
e5b9ee2cc0 Add Issue Templates 2021-05-02 14:48:24 -07:00
Dave Conroy
6ddb749bc6 Add Issue Templates 2021-05-02 14:47:56 -07:00
Dave Conroy
fb299c41dd Update README.md 2021-03-15 17:48:00 -07:00
Dave Conroy
d7d4f1cc19 Merge branch 'master' of https://github.com/tiredofit/docker-db-backup 2021-02-19 08:33:48 -08:00
Dave Conroy
c8c9a80533 Release 2.6.0 - See CHANGELOG.md 2021-02-19 08:33:43 -08:00
Dave Conroy
018234b9bc Merge pull request #56 from tpansino/add-sqlite-support
Add sqlite support
2021-02-19 08:32:52 -08:00
Tom Pansino
912e60edd8 Exit on failed file checks 2021-02-18 00:35:06 -08:00
Tom Pansino
46fddb533c Add sqlite3 to README 2021-02-18 00:09:17 -08:00
Tom Pansino
e8a1859d1a Add initial sqlite3 support 2021-02-17 23:55:22 -08:00
Dave Conroy
30fe2f181c #55 - Fix xz parallel compression 2021-02-14 09:06:21 -08:00
Dave Conroy
f57ce461e9 GitHub CI 2021-01-25 17:05:25 -08:00
Dave Conroy
34aab69cc2 Release 2.5.0 - See CHANGELOG.md 2021-01-25 16:39:42 -08:00
Dave Conroy
1930358775 Multi Arch CI 2021-01-21 15:35:16 -08:00
Dave Conroy
f207f375cc Release 2.4.0 - See CHANGELOG.md 2020-12-07 15:27:20 -08:00
Dave Conroy
88b58bffc5 Release 2.3.2 - See CHANGELOG.md 2020-11-14 12:37:58 -08:00
Dave Conroy
738f7fad25 Release 2.3.1 - See CHANGELOG.md 2020-11-11 13:45:05 -08:00
Dave Conroy
8c4733bf7f Merge pull request #52 from bambi73/master
#51 Fix backup of multiple InfluxDB databases failure
2020-11-11 13:43:56 -08:00
Bambi125
be4d8c0747 #51 Fix backup of multiple InfluxDB databases failure 2020-11-11 22:38:04 +01:00
Dave Conroy
a13849df0a Release 2.3.0 - See CHANGELOG.md 2020-10-15 08:15:10 -07:00
Dave Conroy
cb5347afe5 Release 2.2.2 - See CHANGELOG.md 2020-09-22 21:14:37 -07:00
Dave Conroy
ca03c5369d Merge pull request #47 from tpansino/bug/46-fix-docker-secrets
Fix Docker Secrets injection from DB_USER_FILE/DB_PASS_FILE
2020-09-22 21:02:05 -07:00
Tom Pansino
3008d9125f Fix Docker Secrets injection from DB_USER_FILE/DB_PASS_FILE 2020-09-22 20:32:09 -07:00
Dave Conroy
19cf3d007f Release 2.2.1 - See CHANGELOG.md 2020-09-17 21:39:27 -07:00
Dave Conroy
0bbf142349 Merge pull request #45 from alwynpan/fix-backup-now-date-error-message
Fix backup now date error message
2020-09-17 21:38:10 -07:00
Yao (Alwyn) Pan
1bc357866f #42 Update README 2020-09-18 14:34:06 +10:00
Yao (Alwyn) Pan
b38ad7a5cc #44 Remove 'invalid date' error message when performing backup-now 2020-09-18 14:32:08 +10:00
Dave Conroy
8bc02ee6c8 Release 2.2.0 - See CHANGELOG.md 2020-09-14 07:07:44 -07:00
Dave Conroy
3e71c377c6 Merge pull request #43 from alwynpan/fix-optional-vars
#42 Make DB_USER and DB_PASS optional for some dbtypes; update alpine repo URI
2020-09-14 07:05:29 -07:00
Yao (Alwyn) Pan
76a857239f #42 Make DB_USER and DB_PASS optional for some dbtypes; update alpine repo URI 2020-09-14 19:22:57 +10:00
Dave Conroy
02880d6541 Release 5.1.1 - See CHANGELOG.md 2020-09-01 09:57:58 -07:00
Dave Conroy
564613f329 Merge pull request #41 from zicklag/patch-1
Fix POST_SCRIPT Environment Vairable Run
2020-09-01 09:55:37 -07:00
Zicklag
2606d3c4d5 Fix POST_SCRIPT Environment Vairable Run 2020-09-01 09:46:43 -05:00
Dave Conroy
51f0206e17 Release 2.1.0 - See CHANGELOG.md 2020-08-29 07:43:24 -07:00
Dave Conroy
8d7bea3315 Merge branch 'master' of https://github.com/tiredofit/docker-db-backup into master 2020-08-29 07:37:10 -07:00
Dave Conroy
30c56229cf Release 2.1.0 - See CHANGELOG.md 2020-08-29 07:37:03 -07:00
Dave Conroy
04594087ed Create FUNDING.yml 2020-06-24 17:08:09 -07:00
Dave Conroy
b57683e992 Update README.md 2020-06-17 08:53:22 -07:00
Dave Conroy
1323966e22 Update README.md 2020-06-17 08:21:12 -07:00
Dave Conroy
310edda88c Reduce size of temporarily files
Changed way backups are performed to reduce temporary files
Removed Rethink Support
Rework MongoDB compression
Remove function prefix from functions
Rename case on variables for easier reading
2020-06-17 08:15:34 -07:00
Dave Conroy
955a08a21b Release 1.23.0 - See CHANGELOG.md 2020-06-15 09:44:07 -07:00
Dave Conroy
bf97c3ab97 Update README.md 2020-06-10 05:48:03 -07:00
Dave Conroy
11969da1ea Release 1.22.0 - See CHANGELOG.md 2020-06-10 05:45:49 -07:00
Dave Conroy
7998156576 Release 1.21.3 - See CHANGELOG.md 2020-06-10 05:19:24 -07:00
Dave Conroy
6655d5a12a Release 1.21.2 - See CHANGELOG.md 2020-06-08 21:29:54 -07:00
Dave Conroy
bd141cc865 Release 1.21.1 - See CHANGELOG.md 2020-06-04 05:59:29 -07:00
Dave Conroy
abf2a877f7 Fix example 2020-06-03 20:41:17 -07:00
Dave Conroy
3115cb3440 Release 1.21.0 - See CHANGELOG.md 2020-06-03 05:55:46 -07:00
Dave Conroy
859ce5ffa3 Release 1.20.1 - See CHANGELOG.md 2020-04-24 15:45:52 -07:00
Dave Conroy
4d1577e553 Fix malformed backtick 2020-04-22 14:39:09 -07:00
17 changed files with 1280 additions and 1028 deletions

.github/FUNDING.yml (vendored, new file)

@@ -0,0 +1 @@
github: [tiredofit]

.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file)

@@ -0,0 +1,42 @@
---
name: Bug report
about: If something isn't working right..
title: ''
labels: bug
assignees: ''
---
### Summary
<!-- Summarize the bug encountered -->
### Steps to reproduce
<!-- Describe how one can reproduce the issue - this is very important. Please use an ordered list. -->
### What is the expected *correct* behavior?
<!-- Describe what should be seen instead. -->
### Relevant logs and/or screenshots
<!-- Paste any relevant logs - please use code blocks (```) to format console output, logs, and code as it's tough to read otherwise. -->
### Environment
<!--Your Configuration (please complete the following information): -->
- Image version / tag:
- Host OS:
<details>
<summary>Any logs | docker-compose.yml</summary>
</details>
<!-- Include anything additional -->
### Possible fixes
<!-- If you can, provide details to the root cause that might be responsible for the problem. -->

.github/ISSUE_TEMPLATE/feature_request.md (vendored, new file)

@@ -0,0 +1,23 @@
---
name: Feature request
about: Suggest an idea or feature
title: ''
labels: enhancement
assignees: ''
---
---
name: Feature Request
about: Suggest an idea for this project
---
**Description of the feature**
<!-- A clear description of the feature you'd like implemented -->
**Benefits of feature**
<!-- Explain the measurable benefits this feature would achieve. -->
**Additional context**
<!--Add any other context or screenshots about the feature request here. -->

.github/workflows/main.yml (vendored, new file)

@@ -0,0 +1,110 @@
### Application Level Image CI
### Dave Conroy <dave at tiredofit dot ca>
name: 'build'
on:
push:
paths:
- '**'
- '!README.md'
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=${GITHUB_REPOSITORY/docker-/}
set -x
if [[ $GITHUB_REF == refs/heads/* ]]; then
if [[ $GITHUB_REF == refs/heads/*/* ]] ; then
BRANCH="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed "s|refs/heads/||g" | sed "s|/|-|g")"
else
BRANCH=${GITHUB_REF#refs/heads/}
fi
case ${BRANCH} in
"main" | "master" )
BRANCHTAG="${DOCKER_IMAGE}:latest"
;;
"develop" )
BRANCHTAG="${DOCKER_IMAGE}:develop"
;;
* )
if [ -n "${{ secrets.LATEST }}" ] ; then
if [ "${BRANCHTAG}" = "${{ secrets.LATEST }}" ]; then
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest,${DOCKER_IMAGE}:latest"
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
;;
esac
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
GITTAG="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed 's|refs/tags/||g')"
fi
if [ -n "${BRANCHTAG}" ] && [ -n "${GITTAG}" ]; then
TAGS=${BRANCHTAG},${GITTAG}
else
TAGS="${BRANCHTAG}${GITTAG}"
fi
echo ::set-output name=tags::${TAGS}
echo ::set-output name=docker_image::${DOCKER_IMAGE}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Label
id: Label
run: |
if [ -f "Dockerfile" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_repository=\"https://github.com/${GITHUB_REPOSITORY}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_commit=\"${GITHUB_SHA}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_committed_by=\"${GITHUB_ACTOR}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.image_build_date=\"$(date +'%Y-%m-%d %H:%M:%S')\"" Dockerfile
if [ -f "CHANGELOG.md" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_changelog_version=\"$(head -n1 ./CHANGELOG.md | awk '{print $2}')\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_tag=\"${GITHUB_REF#refs/tags/v}\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/heads/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_branch=\"${GITHUB_REF#refs/heads/}\"" Dockerfile
fi
fi
- name: Build
uses: docker/build-push-action@v2
with:
builder: ${{ steps.buildx.outputs.name }}
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64
push: true
tags: ${{ steps.prep.outputs.tags }}

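The `prep` step derives all Docker tags from `GITHUB_REF` with plain string manipulation. A sketch of the expected outcomes, assuming this repository (`tiredofit/docker-db-backup`, so `DOCKER_IMAGE` becomes `tiredofit/db-backup`); the exact tag set also depends on the optional `LATEST` secret:

```bash
# Hypothetical refs -> tags emitted by the prep step above:
#   refs/heads/master   -> tiredofit/db-backup:latest
#   refs/heads/develop  -> tiredofit/db-backup:develop
#   refs/tags/2.10.1    -> tiredofit/db-backup:2.10.1
#   refs/heads/a/b      -> slashes collapse to dashes (a-b) before tagging
GITHUB_REPOSITORY=tiredofit/docker-db-backup
GITHUB_REF=refs/heads/master
DOCKER_IMAGE=${GITHUB_REPOSITORY/docker-/}   # tiredofit/db-backup
BRANCH=${GITHUB_REF#refs/heads/}             # master
echo "${DOCKER_IMAGE}:latest"                # the tag pushed for master
```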
.github/workflows/manual.yml (vendored, new file)

@@ -0,0 +1,110 @@
# Manual Workflow (Application)
name: manual
on:
workflow_dispatch:
inputs:
Manual Build:
description: 'Manual Build'
required: false
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=${GITHUB_REPOSITORY/docker-/}
set -x
if [[ $GITHUB_REF == refs/heads/* ]]; then
if [[ $GITHUB_REF == refs/heads/*/* ]] ; then
BRANCH="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed "s|refs/heads/||g" | sed "s|/|-|g")"
else
BRANCH=${GITHUB_REF#refs/heads/}
fi
case ${BRANCH} in
"main" | "master" )
BRANCHTAG="${DOCKER_IMAGE}:latest"
;;
"develop" )
BRANCHTAG="${DOCKER_IMAGE}:develop"
;;
* )
if [ -n "${{ secrets.LATEST }}" ] ; then
if [ "${BRANCHTAG}" = "${{ secrets.LATEST }}" ]; then
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest,${DOCKER_IMAGE}:latest"
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
else
BRANCHTAG="${DOCKER_IMAGE}:${BRANCH},${DOCKER_IMAGE}:${BRANCH}-latest"
fi
;;
esac
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
GITTAG="${DOCKER_IMAGE}:$(echo $GITHUB_REF | sed 's|refs/tags/||g')"
fi
if [ -n "${BRANCHTAG}" ] && [ -n "${GITTAG}" ]; then
TAGS=${BRANCHTAG},${GITTAG}
else
TAGS="${BRANCHTAG}${GITTAG}"
fi
echo ::set-output name=tags::${TAGS}
echo ::set-output name=docker_image::${DOCKER_IMAGE}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Label
id: Label
run: |
if [ -f "Dockerfile" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_repository=\"https://github.com/${GITHUB_REPOSITORY}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_commit=\"${GITHUB_SHA}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image.git_committed_by=\"${GITHUB_ACTOR}\"" Dockerfile
sed -i "/FROM .*/a LABEL tiredofit.image_build_date=\"$(date +'%Y-%m-%d %H:%M:%S')\"" Dockerfile
if [ -f "CHANGELOG.md" ] ; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_changelog_version=\"$(head -n1 ./CHANGELOG.md | awk '{print $2}')\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_tag=\"${GITHUB_REF#refs/tags/v}\"" Dockerfile
fi
if [[ $GITHUB_REF == refs/heads/* ]]; then
sed -i "/FROM .*/a LABEL tiredofit.image.git_branch=\"${GITHUB_REF#refs/heads/}\"" Dockerfile
fi
fi
- name: Build
uses: docker/build-push-action@v2
with:
builder: ${{ steps.buildx.outputs.name }}
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64
push: true
tags: ${{ steps.prep.outputs.tags }}

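`manual.yml` mirrors `main.yml` nearly line for line; the functional difference is the `workflow_dispatch` trigger, which allows a build to be started by hand. Assuming the GitHub CLI is installed and authenticated, a dispatch could look like:

```bash
# Kick off the manual workflow for the master branch (illustrative).
gh workflow run manual --ref master
```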
CHANGELOG.md

@@ -1,3 +1,234 @@
## 2.10.1 2021-12-24 <milenkara@github>
### Added
- Allow for choosing region when backing up to S3
## 2.10.0 2021-12-22 <dave at tiredofit dot ca>
### Changed
- Revert back to Postgresql 14 from packages as it's now in the repositories
- Fix for Zabbix Monitoring
## 2.9.7 2021-12-15 <dave at tiredofit dot ca>
### Changed
- Fixup for Zabbix Autoagent registration
## 2.9.6 2021-12-13 <alexbarcelo@github>
### Changed
- Fix for S3 Minio backup targets
- Fix for annoying output on certain target time print conditions
## 2.9.5 2021-12-07 <dave at tiredofit dot ca>
### Changed
- Fix for 2.9.3
## 2.9.4 2021-12-07 <dave at tiredofit dot ca>
### Added
- Add Zabbix auto register support for templates
## 2.9.3 2021-11-24 <dave at tiredofit dot ca>
### Added
- Alpine 3.15 base
## 2.9.2 2021-10-22 <teenigma@github>
### Fixed
- Fix compression failing on Redis backup
## 2.9.1 2021-10-15 <sbrunecker@github>
### Fixed
- Allow MySQL 8.0 servers to be backed up
- Fixed DB available check getting stuck with empty password
## 2.9.0 2021-10-15 <dave at tiredofit dot ca>
### Added
- Postgresql 14 Support (compiled)
- MSSQL 17.8.1.1
## 2.8.2 2021-10-15 <dave at tiredofit dot ca>
### Changed
- Change to using aws cli from Alpine repositories (fixes #81)
## 2.8.1 2021-09-01 <dave at tiredofit dot ca>
### Changed
- Modernize image with updated environment variables from upstream
## 2.8.0 2021-08-27 <dave at tiredofit dot ca>
### Added
- Alpine 3.14 Base
### Changed
- Fix for syntax error in 2.7.0 Release (Credit the1ts@github)
- Cleanup image and leftover cache with AWS CLI installation
## 2.7.0 2021-06-17 <dave at tiredofit dot ca>
### Added
- MongoDB Authentication Database support (DB_AUTH)
## 2.6.1 2021-06-08 <jwillmer@github>
### Changed
- Fix for Issue #14 - SPLIT_DB=TRUE was not working for Postgres DB server
## 2.6.0 2021-02-19 <tpansino@github>
### Added
- SQLite support
## 2.5.1 2021-02-14 <dave at tiredofit dot ca>
### Changed
- Fix xz backups with `PARALLEL_COMPRESSION=TRUE`
## 2.5.0 2021-01-25 <dave at tiredofit dot ca>
### Added
- Multi Platform Build Variants (ARMv7 AMD64 AArch64)
### Changed
- Alpine 3.13 Base
- Compile Pixz as opposed to relying on testing repository
- MSSQL Support only available under AMD64. Container exits if any other platform detected when MSSQL set to be backed up.
## 2.4.0 2020-12-07 <dave at tiredofit dot ca>
### Added
- Switch back to packages for Postgresql (now 13.1)
## 2.3.2 2020-11-14 <dave at tiredofit dot ca>
### Changed
- Reapply S6-Overlay into filesystem as Postgresql build is removing S6 files due to edge containing S6 overlay
## 2.3.1 2020-11-11 <bambi73@github>
### Fixed
- Multiple Influx DB's not being backed up correctly
## 2.3.0 2020-10-15 <dave at tiredofit dot ca>
### Added
- Microsoft SQL Server support (experimental)
### Changed
- Compiled Postgresql 13 from source to backup psql/13 hosts
## 2.2.2 2020-09-22 <tpansino@github>
### Fixed
- Patch for 2.2.0 release fixing Docker Secrets Support. Was skipping password check.
## 2.2.1 2020-09-17 <alwynpan@github>
### Fixed
- On-demand/manual backup with `backup-now` was throwing errors about not being able to find a proper date
## 2.2.0 2020-09-14 <alwynpan@github>
### Fixed
- Allow MariaDB and MongoDB to be used with no username and password while still supporting Docker Secrets
- Changed source of Alpine package repositories
## 2.1.1 2020-09-01 <zicklag@github>
### Fixed
- Add eval to POST_SCRIPT execution
## 2.1.0 2020-08-29 <dave at tiredofit dot ca>
### Added
- Add Exit Code variable to be used for custom scripts - See README.md for placement
- Add POST_SCRIPT environment variable to execute command instead of relying on custom script
## 2.0.0 2020-06-17 <dave at tiredofit dot ca>
### Added
- Reworked compression routines to remove dependency on temporary files
- Changed the way that MongoDB compression works - only supports GZ going forward
### Changed
- Code cleanup (removed function prefixes, added verbosity)
### Reverted
- Removed Rethink Support
## 1.23.0 2020-06-15 <dave at tiredofit dot ca>
### Added
- Add zstd compression support
- Add choice of compression level
## 1.22.0 2020-06-10 <dave at tiredofit dot ca>
### Added
- Added EXTRA_OPTS variable to all backup commands to pass extra arguments
## 1.21.3 2020-06-10 <dave at tiredofit dot ca>
### Changed
- Fix `backup-now` manual script due to services.available change
## 1.21.2 2020-06-08 <dave at tiredofit dot ca>
### Added
- Change to support tiredofit/alpine base image 5.0.0
## 1.21.1 2020-06-04 <dave at tiredofit dot ca>
### Changed
- Bugfix to initialization routine
## 1.21.0 2020-06-03 <dave at tiredofit dot ca>
### Added
- Add S3 Compatible Storage Support
### Changed
- Switch some variables to support tiredofit/alpine base image better
- Fix issue with parallel compression not working correctly
## 1.20.1 2020-04-24 <dave at tiredofit dot ca>
### Changed
- Fix Auto Cleanup routines when using `root` as username
## 1.20.0 2020-04-22 <dave at tiredofit dot ca>
### Added

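The 2.0.0 entry's "reworked compression routines" is visible in the rewritten `10-db-backup` script later in this diff: each `backup_*` function calls `compression` and then pipes the dump straight through `$dumpoutput` instead of compressing a temporary file. A minimal sketch of that pattern; the `compression` helper here is a reconstruction for illustration, not the repository's exact function:

```bash
#!/bin/bash
# Choose a streaming compressor once; every dump is piped through it,
# so no uncompressed temporary file ever hits the disk.
compression() {
  case "${COMPRESSION:-GZ}" in
    GZ|gz)     dumpoutput="gzip -${COMPRESSION_LEVEL:-3}"  ; target="${target}.gz"  ;;
    BZ|bz)     dumpoutput="bzip2 -${COMPRESSION_LEVEL:-3}" ; target="${target}.bz2" ;;
    XZ|xz)     dumpoutput="xz -${COMPRESSION_LEVEL:-3}"    ; target="${target}.xz"  ;;
    ZSTD|zstd) dumpoutput="zstd -${COMPRESSION_LEVEL:-3}"  ; target="${target}.zst" ;;
    *)         dumpoutput="cat" ;;   # NONE: pass the stream through unchanged
  esac
}

target="mysql_example_$(date +%Y%m%d-%H%M%S).sql"
compression
mkdir -p /tmp/backups
# The dump streams directly into the compressor and onto its final name.
mysqldump -A | $dumpoutput > "/tmp/backups/${target}"
```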
Dockerfile

@@ -1,51 +1,67 @@
FROM tiredofit/alpine:edge
LABEL maintainer="Dave Conroy (dave at tiredofit dot ca)"
FROM docker.io/tiredofit/alpine:3.15
### Set Environment Variables
ENV ENABLE_CRON=FALSE \
ENABLE_SMTP=FALSE \
ENABLE_ZABBIX=FALSE \
ZABBIX_HOSTNAME=db-backup
ENV MSSQL_VERSION=17.8.1.1-1 \
CONTAINER_ENABLE_MESSAGING=FALSE \
CONTAINER_ENABLE_MONITORING=TRUE
### Dependencies
RUN set -ex && \
echo "@testing http://nl.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && \
apk update && \
apk upgrade && \
apk add -t .db-backup-build-deps \
build-base \
bzip2-dev \
git \
libarchive-dev \
xz-dev \
&& \
\
apk add -t .db-backup-run-deps \
bzip2 \
apk add --no-cache -t .db-backup-run-deps \
aws-cli \
bzip2 \
influxdb \
libarchive \
mariadb-client \
mariadb-connector-c \
mongodb-tools \
libressl \
pigz \
postgresql \
postgresql-client \
redis \
sqlite \
xz \
zstd \
&& \
\
apk add \
pixz@testing \
&& \
\
cd /usr/src && \
\
apkArch="$(apk --print-arch)"; \
case "$apkArch" in \
x86_64) mssql=true ; curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/msodbcsql17_${MSSQL_VERSION}_amd64.apk ; curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/mssql-tools_${MSSQL_VERSION}_amd64.apk ; echo y | apk add --allow-untrusted msodbcsql17_${MSSQL_VERSION}_amd64.apk mssql-tools_${MSSQL_VERSION}_amd64.apk ;; \
*) echo >&2 "Detected non x86_64 build variant, skipping MSSQL installation" ;; \
esac; \
mkdir -p /usr/src/pbzip2 && \
curl -ssL https://launchpad.net/pbzip2/1.1/1.1.13/+download/pbzip2-1.1.13.tar.gz | tar xvfz - --strip=1 -C /usr/src/pbzip2 && \
curl -sSL https://launchpad.net/pbzip2/1.1/1.1.13/+download/pbzip2-1.1.13.tar.gz | tar xvfz - --strip=1 -C /usr/src/pbzip2 && \
cd /usr/src/pbzip2 && \
make && \
make install && \
mkdir -p /usr/src/pixz && \
curl -sSL https://github.com/vasi/pixz/releases/download/v1.0.7/pixz-1.0.7.tar.xz | tar xvfJ - --strip 1 -C /usr/src/pixz && \
cd /usr/src/pixz && \
./configure \
--prefix=/usr \
--sysconfdir=/etc \
--localstatedir=/var \
&& \
make && \
make install && \
\
### Cleanup
apk del .db-backup-build-deps && \
rm -rf /usr/src/* && \
rm -rf /tmp/* /var/cache/apk/*
rm -rf /root/.cache /tmp/* /var/cache/apk/*
### S6 Setup
ADD install /

LICENSE

@@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2020 Dave Conroy
Copyright (c) 2021 Dave Conroy
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md

@@ -1,66 +1,97 @@
# hub.docker.com/r/tiredofit/db-backup
# github.com/tiredofit/docker-db-backup
[![GitHub release](https://img.shields.io/github/v/tag/tiredofit/docker-db-backup?style=flat-square)](https://github.com/tiredofit/docker-db-backup/releases/latest)
[![Build Status](https://img.shields.io/github/workflow/status/tiredofit/docker-db-backup/build?style=flat-square)](https://github.com/tiredofit/docker-db-backup/actions?query=workflow%3Abuild)
[![Docker Stars](https://img.shields.io/docker/stars/tiredofit/db-backup.svg?style=flat-square&logo=docker)](https://hub.docker.com/r/tiredofit/db-backup/)
[![Docker Pulls](https://img.shields.io/docker/pulls/tiredofit/db-backup.svg?style=flat-square&logo=docker)](https://hub.docker.com/r/tiredofit/db-backup/)
[![Become a sponsor](https://img.shields.io/badge/sponsor-tiredofit-181717.svg?logo=github&style=flat-square)](https://github.com/sponsors/tiredofit)
[![Paypal Donate](https://img.shields.io/badge/donate-paypal-00457c.svg?logo=paypal&style=flat-square)](https://www.paypal.me/tiredofit)
[![Build Status](https://img.shields.io/docker/build/tiredofit/db-backup.svg)](https://hub.docker.com/r/tiredofit/db-backup)
[![Docker Pulls](https://img.shields.io/docker/pulls/tiredofit/db-backup.svg)](https://hub.docker.com/r/tiredofit/db-backup)
[![Docker Stars](https://img.shields.io/docker/stars/tiredofit/db-backup.svg)](https://hub.docker.com/r/tiredofit/db-backup)
[![Docker Layers](https://images.microbadger.com/badges/image/tiredofit/db-backup.svg)](https://microbadger.com/images/tiredofit/db-backup)
* * *
## About
# Introduction
This will build a container for backing up multiple types of DB Servers
This will build a container for backing up multiple type of DB Servers
Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis servers.
Currently backs up CouchDB, InfluxDB, MySQL, MongoDB, Postgres, Redis, Rethink servers.
* dump to local filesystem
* dump to local filesystem or backup to S3 Compatible services
* select database user and password
* backup all databases
* choose to have an MD5 sum after backup for verification
* delete old backups after specific amount of time
* choose compression type (none, gz, bz, xz)
* choose compression type (none, gz, bz, xz, zstd)
* connect to any container running on the same system
* select how often to run a dump
* select when to start the first dump, whether time of day or relative to container start time
* Execute script after backup for monitoring/alerting purposes
* This Container uses a [customized Alpine Linux base](https://hub.docker.com/r/tiredofit/alpine) which includes [s6 overlay](https://github.com/just-containers/s6-overlay) enabled for PID 1 Init capabilities, [zabbix-agent](https://zabbix.org) for individual container monitoring, Cron also installed along with other tools (bash,curl, less, logrotate, nano, vim) for easier management. It also supports sending to external SMTP servers.
[Changelog](CHANGELOG.md)
# Authors
## Maintainer
- [Dave Conroy](https://github.com/tiredofit)
# Table of Contents
## Table of Contents
- [Introduction](#introduction)
- [Changelog](CHANGELOG.md)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Data Volumes](#data-volumes)
- [Environment Variables](#environmentvariables)
- [Maintenance](#maintenance)
- [github.com/tiredofit/docker-db-backup](#githubcomtiredofitdocker-db-backup)
- [About](#about)
- [Maintainer](#maintainer)
- [Table of Contents](#table-of-contents)
- [Prerequisites and Assumptions](#prerequisites-and-assumptions)
- [Installation](#installation)
- [Build from Source](#build-from-source)
- [Prebuilt Images](#prebuilt-images)
- [Configuration](#configuration)
- [Quick Start](#quick-start)
- [Persistent Storage](#persistent-storage)
- [Environment Variables](#environment-variables)
- [Base Images used](#base-images-used)
- [Backing Up to S3 Compatible Services](#backing-up-to-s3-compatible-services)
- [Maintenance](#maintenance)
- [Shell Access](#shell-access)
- [Manual Backups](#manual-backups)
- [Custom Scripts](#custom-scripts)
- [#### Example Post Script](#-example-post-script)
- [#### $1=EXIT_CODE (After running backup routine)](#-1exit_code-after-running-backup-routine)
- [#### $2=DB_TYPE (Type of Backup)](#-2db_type-type-of-backup)
- [#### $3=DB_HOST (Backup Host)](#-3db_host-backup-host)
- [#### #4=DB_NAME (Name of Database backed up](#-4db_name-name-of-database-backed-up)
- [#### $5=DATE (Date of Backup)](#-5date-date-of-backup)
- [#### $6=TIME (Time of Backup)](#--6time-time-of-backup)
- [#### $7=BACKUP_FILENAME (Filename of Backup)](#--7backup_filename-filename-of-backup)
- [#### $8=FILESIZE (Filesize of backup)](#--8filesize-filesize-of-backup)
- [#### $9=MD5_RESULT (MD5Sum if enabled)](#--9md5_result-md5sum-if-enabled)
- [Support](#support)
- [Usage](#usage)
- [Bugfixes](#bugfixes)
- [Feature Requests](#feature-requests)
- [Updates](#updates)
- [License](#license)
# Prerequisites
## Prerequisites and Assumptions
You must have a working DB server or container available for this to work properly; it does not provide server functionality!
## Installation
# Installation
Automated builds of the image are available on [Docker Hub](https://hub.docker.com/r/tiredofit/db-backup) and are the recommended method of installation.
### Build from Source
Clone this repository and build the image with `docker build -t (imagename) .`
### Prebuilt Images
Builds of the image are available on [Docker Hub](https://hub.docker.com/r/tiredofit/db-backup) and are the recommended method of installation.
```bash
docker pull tiredofit/db-backup:latest
docker pull tiredofit/db-backup:(imagetag)
```
# Quick Start
The following image tags are available along with their tagged release based on what's written in the [Changelog](CHANGELOG.md):
| Container OS | Tag |
| ------------ | --------- |
| Alpine | `:latest` |
## Configuration
### Quick Start
* The quickest way to get started is using [docker-compose](https://docs.docker.com/compose/). See the examples folder for a working [docker-compose.yml](examples/docker-compose.yml) that can be modified for development or production use.
@@ -68,79 +99,126 @@ docker pull tiredofit/db-backup:latest
* Map [persistent storage](#data-volumes) for access to configuration and data files for backup.
> **NOTE**: If you are using this with a docker-compose file along with a separate SQL container, take care not to set the variables to back up immediately; instead, have it delay execution for a minute or so, otherwise the first backup will fail.
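A concrete starting point, with illustrative values (the full set of variables is documented below):

```bash
# Back up a MariaDB container named "example-db" once a day, one minute
# after startup, keeping dumps on the host under ./backups (illustrative).
docker run -d --name db-backup \
  --link example-db \
  -v "$(pwd)/backups:/backup" \
  -e DB_TYPE=mysql \
  -e DB_HOST=example-db \
  -e DB_USER=root \
  -e DB_PASS=examplerootpassword \
  -e DB_DUMP_FREQ=1440 \
  -e DB_DUMP_BEGIN=+1 \
  tiredofit/db-backup:latest
```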
# Configuration
## Data-Volumes
### Persistent Storage
The following directories are used for configuration and can be mapped for persistent storage.
| Directory | Description |
|-----------|-------------|
| `/backup` | Backups |
| `/assets/custom-scripts | *Optional* Put custom scripts in this directory to execute after backup operations`
| Directory | Description |
| ------------------------ | ---------------------------------------------------------------------------------- |
| `/backup` | Backups |
| `/assets/custom-scripts` | *Optional* Put custom scripts in this directory to execute after backup operations |
### Environment Variables
#### Base Images used
This image relies on an [Alpine Linux](https://hub.docker.com/r/tiredofit/alpine) base image built around an [init system](https://github.com/just-containers/s6-overlay) for added capabilities. Outgoing SMTP capabilities are handled via `msmtp`. Individual container performance monitoring is performed by [zabbix-agent](https://zabbix.org). Additional tools include: `bash`,`curl`,`less`,`logrotate`, `nano`,`vim`.
Be sure to view the following repositories to understand all the customizable options:
| Image | Description |
| ------------------------------------------------------ | -------------------------------------- |
| [OS Base](https://github.com/tiredofit/docker-alpine/) | Customized Image based on Alpine Linux |
## Environment Variables
| Parameter | Description |
| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `BACKUP_LOCATION` | Backup to `FILESYSTEM` or `S3` compatible services like S3, Minio, Wasabi - Default `FILESYSTEM` |
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, ZSTD `ZSTD` or none `NONE` - Default `GZ` |
| `COMPRESSION_LEVEL` | Numerical value of the compression level to use; most allow `1` to `9`, except for `ZSTD` which allows `1` to `19` - Default `3` |
| `DB_AUTH` | (Mongo Only - Optional) Authentication Database |
| `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `sqlite3` |
| `DB_HOST` | Server Hostname e.g. `mariadb`. For `sqlite3`, full path to DB file e.g. `/backup/db.sqlite3` |
| `DB_NAME` | Schema Name e.g. `database` |
| `DB_USER` | username for the database - use `root` to back up all MySQL databases. |
| `DB_PASS` | (optional if DB doesn't require it) password for the database |
| `DB_PORT` | (optional) Set port to connect to DB_HOST. Defaults are provided |
| `DB_DUMP_FREQ` | How often to do a dump, in minutes. Defaults to 1440 minutes, or once per day. |
| `DB_DUMP_BEGIN` | What time to do the first dump. Defaults to immediate. Must be in one of two formats |
| | Absolute HHMM, e.g. `2330` or `0415` |
| | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half |
| `DB_CLEANUP_TIME` | Value in minutes to delete old backups (only fired when dump frequency fires). 1440 would delete anything above 1 day old. You don't need to set this variable if you want to hold onto everything. |
| `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed. |
| `EXTRA_OPTS` | If you need to pass extra arguments to the backup command, add them here e.g. "--extra-command" |
| `MD5` | Generate MD5 Sum in Directory, `TRUE` or `FALSE` - Default `TRUE` |
| `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` - Default `TRUE` |
| `POST_SCRIPT` | Command to execute after the backup routine completes |
| `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create separate DB backups instead of all in one. - Default `FALSE` |
Along with the Environment Variables from the [Base image](https://hub.docker.com/r/tiredofit/alpine), below is the complete list of available options that can be used to customize your installation.
When using compression with MongoDB, only `GZ` compression is possible.
#### Backing Up to S3 Compatible Services
| Parameter | Description |
|-----------|-------------|
| `COMPRESSION` | Use either Gzip `GZ`, Bzip2 `BZ`, XZip `XZ`, or none `NONE` - Default `GZ`
| `DB_TYPE` | Type of DB Server to backup `couch` `influx` `mysql` `pgsql` `mongo` `redis` `rethink`
| `DB_HOST` | Server Hostname e.g. `mariadb`
| `DB_NAME` | Schema Name e.g. `database`
| `DB_USER` | username for the database - use `root` to back up all MySQL databases.
| `DB_PASS` | (optional if DB doesn't require it) password for the database
| `DB_PORT` | (optional) Set port to connect to DB_HOST. Defaults are provided
| `DB_DUMP_FREQ` | How often to do a dump, in minutes. Defaults to 1440 minutes, or once per day.
| `DB_DUMP_BEGIN` | What time to do the first dump. Defaults to immediate. Must be in one of two formats
| | Absolute HHMM, e.g. `2330` or `0415`
| | Relative +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half
| `DB_CLEANUP_TIME` | Value in minutes to delete old backups (only fired when dump frequency fires). 1440 would delete anything above 1 day old. You don't need to set this variable if you want to hold onto everything.
| `DEBUG_MODE` | If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed.
| `MD5` | Generate MD5 Sum in Directory, `TRUE` or `FALSE` - Default `TRUE`
| `PARALLEL_COMPRESSION` | Use multiple cores when compressing backups `TRUE` or `FALSE` - Default `TRUE` |
| `SPLIT_DB` | If using root as username and multiple DBs on system, set to TRUE to create separate DB backups instead of all in one. - Default `FALSE` |
If `BACKUP_LOCATION` = `S3` then the following options are used.
| Parameter | Description |
| --------------- | --------------------------------------------------------------------------------------- |
| `S3_BUCKET` | S3 Bucket name e.g. 'mybucket' |
| `S3_HOST` | Hostname of S3 Server e.g "s3.amazonaws.com" - You can also include a port if necessary |
| `S3_KEY_ID` | S3 Key ID |
| `S3_KEY_SECRET` | S3 Key Secret |
| `S3_PATH` | S3 Pathname to save to e.g. '`backup`' |
| `S3_PROTOCOL` | Use either `http` or `https` to access service - Default `https` |
| `S3_REGION` | Define region in which bucket is defined. Example: `ap-northeast-2` |
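The image ships `aws-cli` (see the Dockerfile changes below), and PRs #91 and #93 wire `S3_HOST` and `S3_REGION` through to it. A sketch of the kind of call this implies — the exact command the script assembles is not shown in this diff:

```bash
# Illustrative upload; variable names mirror the table above. Credentials
# from S3_KEY_ID/S3_KEY_SECRET would be exported as the standard AWS vars.
export AWS_ACCESS_KEY_ID="${S3_KEY_ID}" AWS_SECRET_ACCESS_KEY="${S3_KEY_SECRET}"
aws s3 cp "/backup/${BACKUP_FILENAME}" "s3://${S3_BUCKET}/${S3_PATH}/" \
  --endpoint-url "${S3_PROTOCOL:-https}://${S3_HOST}" \
  --region "${S3_REGION}"
```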
## Maintenance
Manual Backups can be performed by entering the container and typing `backup-now`
#### Shell Access
### Shell Access
For debugging and maintenance purposes you may want access the containers shell.
```bash
docker exec -it (whatever your container name is e.g.) db-backup bash
```
``bash
docker exec -it (whatever your container name is) bash
``
### Manual Backups
Manual Backups can be performed by entering the container and typing `backup-now`
#### Custom Scripts
### Custom Scripts
If you want to execute a custom script at the end of backup, you can drop bash scripts with the extension of `.sh` in this directory. See the following example to utilize:
````bash
$ cat post-script.sh
$ cat post-script.sh
#!/bin/bash
## Example Post Script
## $1=DB_TYPE (Type of Backup)
## $2=DB_HOST (Backup Host)
## $3=DB_NAME (Name of Database backed up)
## $4=DATE (Date of Backup)
## $5=TIME (Time of Backup)
## $6=BACKUP_FILENAME (Filename of Backup)
## $7=FILESIZE (Filesize of backup)
## $8=MD5_RESULT (MD5Sum if enabled)
# #### Example Post Script
# #### $1=EXIT_CODE (After running backup routine)
# #### $2=DB_TYPE (Type of Backup)
# #### $3=DB_HOST (Backup Host)
# #### $4=DB_NAME (Name of Database backed up)
# #### $5=DATE (Date of Backup)
# #### $6=TIME (Time of Backup)
# #### $7=BACKUP_FILENAME (Filename of Backup)
# #### $8=FILESIZE (Filesize of backup)
# #### $9=MD5_RESULT (MD5Sum if enabled)
echo "${1} Backup Completed on ${2} for ${3} on ${4} ${5}. Filename: ${6} Size: ${7} bytes MD5: ${8}"
echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ${6}. Filename: ${7} Size: ${8} bytes MD5: ${9}"
````
Outputs the following on the console:
`mysql Backup Completed on example-db for example on 2020-04-22 05:19:10. Filename: mysql_example_example-db_20200422-051910.sql.bz2 Size: 7795 bytes MD5: 952fbaafa30437494fdf3989a662cd40`
`0 mysql Backup Completed on example-db for example on 2020-04-22 05:19:10. Filename: mysql_example_example-db_20200422-051910.sql.bz2 Size: 7795 bytes MD5: 952fbaafa30437494fdf3989a662cd40`
If you wish to change the size value from bytes to megabytes, set environment variable `SIZE_VALUE=megabytes`
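Because `$1` is now the exit code, a custom script can alert on failed backups. A small sketch; the webhook URL is a placeholder:

```bash
#!/bin/bash
# /assets/custom-scripts/alert.sh - notify on a non-zero backup exit code.
# Arguments follow the list above: $1=EXIT_CODE $2=DB_TYPE $3=DB_HOST $4=DB_NAME ...
if [ "$1" -ne 0 ]; then
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "{\"text\": \"Backup of ${4} on ${3} failed with exit code ${1}\"}" \
    https://example.com/webhook   # placeholder endpoint
fi
```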
## Support
These images were built to serve a specific need in a production environment and gradually have had more functionality added based on requests from the community.
### Usage
- The [Discussions board](../../discussions) is a great place for working with the community on tips and tricks of using this image.
- Consider [sponsoring me](https://github.com/sponsors/tiredofit) for personalized support.
### Bugfixes
- Please submit a [Bug Report](issues/new) if something isn't working as expected. I'll do my best to issue a fix in short order.
### Feature Requests
- Feel free to submit a feature request; however, there is no guarantee that it will be added, or on what timeline.
- Consider [sponsoring me](https://github.com/sponsors/tiredofit) to support development of features.
### Updates
- Best effort is made to track upstream changes, with higher priority if I am actively using the image in a production environment.
- Consider [sponsoring me](https://github.com/sponsors/tiredofit) for up-to-date releases.
## License
MIT. See [LICENSE](LICENSE) for more details.

examples/docker-compose.yml (mode changed: Normal file → Executable file)

@@ -6,7 +6,6 @@ services:
image: mariadb:latest
volumes:
- ./db:/var/lib/mysql
- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- MYSQL_ROOT_PASSWORD=examplerootpassword
- MYSQL_DATABASE=example
@@ -21,6 +20,7 @@ services:
- example-db
volumes:
- ./backups:/backup
- ./post-script.sh:/assets/custom-scripts/post-script.sh
environment:
- DB_TYPE=mariadb
- DB_HOST=example-db

examples/post-script.sh

@@ -1,13 +1,14 @@
#!/bin/bash
## Example Post Script
## $1=DB_TYPE (Type of Backup)
## $2=DB_HOST (Backup Host)
## $3=DB_NAME (Name of Database backed up)
## $4=DATE (Date of Backup)
## $5=TIME (Time of Backup)
## $6=BACKUP_FILENAME (Filename of Backup)
## $7=FILESIZE (Filesize of backup)
## $8=MD5_RESULT (MD5Sum if enabled)
## $1=EXIT_CODE (After running backup routine)
## $2=DB_TYPE (Type of Backup)
## $3=DB_HOST (Backup Host)
## $4=DB_NAME (Name of Database backed up)
## $5=DATE (Date of Backup)
## $6=TIME (Time of Backup)
## $7=BACKUP_FILENAME (Filename of Backup)
## $8=FILESIZE (Filesize of backup)
## $9=MD5_RESULT (MD5Sum if enabled)
echo "${1} Backup Completed on ${2} for ${3} on ${4} ${5}. Filename: ${6} Size: ${7} bytes MD5: ${8}"
echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ${6}. Filename: ${7} Size: ${8} bytes MD5: ${9}"

install/etc/cont-finish.d/10-db-backup (mode changed: Normal file → Executable file)

@@ -1,4 +1,3 @@
#!/usr/bin/with-contenv bash
pkill bash


@@ -0,0 +1,17 @@
#!/usr/bin/with-contenv bash
source /assets/functions/00-container
prepare_service single
prepare_service 03-monitoring
PROCESS_NAME="db-backup"
output_off
if var_true "${CONTAINER_ENABLE_MONITORING}" && [ "${CONTAINER_MONITORING_BACKEND,,}" = "zabbix" ]; then
cat <<EOF > "${ZABBIX_CONFIG_PATH}"/"${ZABBIX_CONFIG_FILE}.d"/tiredofit_dbbackup.conf
# Zabbix DB Backup Configuration - Automatically Generated
# Find Companion Zabbix Server Templates at https://github.com/tiredofit/docker-dbbackup
# Autoregister=dbbackup
EOF
fi
liftoff


@@ -1,405 +0,0 @@
#!/usr/bin/with-contenv bash
for s in /assets/functions/*; do source $s; done
PROCESS_NAME="db-backup"
date >/dev/null
if [ "$1" != "NOW" ]; then
sleep 10
fi
### Sanity Test
sanity_var DB_TYPE "Database Type"
sanity_var DB_HOST "Database Host"
file_env 'DB_USER'
file_env 'DB_PASS'
### Set Defaults
COMPRESSION=${COMPRESSION:-GZ}
PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backup}
DBHOST=${DB_HOST}
DBNAME=${DB_NAME}
DBPASS=${DB_PASS}
DBUSER=${DB_USER}
DBTYPE=${DB_TYPE}
MD5=${MD5:-TRUE}
SIZE_VALUE=${SIZE_VALUE:-"bytes"}
SPLIT_DB=${SPLIT_DB:-FALSE}
TMPDIR=/tmp/backups
if [ "$1" = "NOW" ]; then
DB_DUMP_BEGIN=+0
MANUAL=TRUE
fi
### Set Compression Options
if [ "$PARALLEL_COMPRESSION" = "TRUE " ]; then
BZIP="pbzip2"
GZIP="pigz"
XZIP="pixz"
else
BZIP="bzip2"
GZIP="gzip"
XZIP="xz"
fi
### Set the Database Type
case "$DBTYPE" in
"couch" | "couchdb" | "COUCH" | "COUCHDB" )
DBTYPE=couch
DBPORT=${DB_PORT:-5984}
;;
"influx" | "influxdb" | "INFLUX" | "INFLUXDB" )
DBTYPE=influx
DBPORT=${DB_PORT:-8088}
;;
"mongo" | "mongodb" | "MONGO" | "MONGODB" )
DBTYPE=mongo
DBPORT=${DB_PORT:-27017}
[[ ( -n "${DB_USER}" ) ]] && MONGO_USER_STR=" --username ${DBUSER}"
[[ ( -n "${DB_PASS}" ) ]] && MONGO_PASS_STR=" --password ${DBPASS}"
[[ ( -n "${DB_NAME}" ) ]] && MONGO_DB_STR=" --db ${DBNAME}"
;;
"mysql" | "MYSQL" | "mariadb" | "MARIADB")
DBTYPE=mysql
DBPORT=${DB_PORT:-3306}
[[ ( -n "${DB_PASS}" ) ]] && export MYSQL_PWD=${DBPASS}
;;
"postgres" | "postgresql" | "pgsql" | "POSTGRES" | "POSTGRESQL" | "PGSQL" )
DBTYPE=pgsql
DBPORT=${DB_PORT:-5432}
[[ ( -n "${DB_PASS}" ) ]] && POSTGRES_PASS_STR="PGPASSWORD=${DBPASS}"
;;
"redis" | "REDIS" )
DBTYPE=redis
DBPORT=${DB_PORT:-6379}
[[ ( -n "${DB_PASS}" ) ]] && REDIS_PASS_STR=" -a ${DBPASS}"
;;
"rethink" | "RETHINK" )
DBTYPE=rethink
DBPORT=${DB_PORT:-28015}
[[ ( -n "${DB_PASS}" ) ]] && echo $DB_PASS>/tmp/.rethink.auth; RETHINK_PASS_STR=" --password-file /tmp/.rethink.auth"
[[ ( -n "${DB_NAME}" ) ]] && RETHINK_DB_STR=" -e ${DBNAME}"
;;
esac
### Functions
function backup_couch() {
TARGET=couch_${DBNAME}_${DBHOST}_${now}.txt
curl -X GET http://${DBHOST}:${DBPORT}/${DBNAME}/_all_docs?include_docs=true >${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
}
function backup_mysql() {
if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
DATABASES=`mysql -h ${DBHOST} -P $DBPORT -u$DBUSER --batch -e "SHOW DATABASES;" | grep -v Database|grep -v schema`
for db in $DATABASES; do
if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
echo "** [db-backup] Dumping database: $db"
TARGET=mysql_${db}_${DBHOST}_${now}.sql
mysqldump --max-allowed-packet=512M -h $DBHOST -P $DBPORT -u$DBUSER --databases $db > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
done
else
mysqldump --max-allowed-packet=512M -A -h $DBHOST -P $DBPORT -u$DBUSER > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
}
function backup_influx() {
for DB in $DB_NAME; do
influxd backup -database $DB -host ${DBHOST}:${DBPORT} ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
done
}
function backup_mongo() {
mongodump --out ${TMPDIR}/${TARGET} --host ${DBHOST} --port ${DBPORT} ${MONGO_USER_STR}${MONGO_PASS_STR}${MONGO_DB_STR} ${EXTRA_OPTS}
cd ${TMPDIR}
tar cf ${TARGET}.tar ${TARGET}/*
TARGET=${TARGET}.tar
generate_md5
compression
move_backup
}
function backup_pgsql() {
if [ "$SPLIT_DB" = "TRUE" ] || [ "$SPLIT_DB" = "true" ]; then
export PGPASSWORD=${DBPASS}
DATABASES=`psql -h $DBHOST -U $DBUSER -p ${DBPORT} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' `
for db in $DATABASES; do
print_info "Dumping database: $db"
TARGET=pgsql_${db}_${DBHOST}_${now}.sql
pg_dump -h ${DBHOST} -p ${DBPORT} -U ${DBUSER} $db > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
done
else
export PGPASSWORD=${DBPASS}
pg_dump -h ${DBHOST} -U ${DBUSER} -p ${DBPORT} ${DBNAME} > ${TMPDIR}/${TARGET}
generate_md5
compression
move_backup
fi
}
function backup_redis() {
TARGET=redis_${db}_${DBHOST}_${now}.rdb
echo bgsave | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} --rdb ${TMPDIR}/${TARGET}
print_info "Dumping Redis - Flushing Redis Cache First"
sleep 10
try=5
while [ $try -gt 0 ] ; do
saved=$(echo 'info Persistence' | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} | awk '/rdb_bgsave_in_progress:0/{print "saved"}')
ok=$(echo 'info Persistence' | redis-cli -h ${DBHOST} -p ${DBPORT} ${REDIS_PASS_STR} | awk '/rdb_last_bgsave_status:ok/{print "ok"}')
if [[ "$saved" = "saved" ]] && [[ "$ok" = "ok" ]]; then
print_info "Redis Backup Complete"
fi
try=$((try - 1))
print_info "Redis Busy - Waiting and retrying in 5 seconds"
sleep 5
done
generate_md5
compression
move_backup
}
function backup_rethink() {
TARGET=rethink_${db}_${DBHOST}_${now}.tar.gz
print_info "Dumping rethink Database: $db"
rethinkdb dump -f ${TMPDIR}/${TARGET} -c ${DBHOST}:${DBPORT} ${RETHINK_PASS_STR} ${RETHINK_DB_STR}
move_backup
}
function check_availability() {
### Set the Database Type
case "$DBTYPE" in
"couch" )
COUNTER=0
while ! (nc -z ${DBHOST} ${DBPORT}) ; do
sleep 5
let COUNTER+=5
print_warn "CouchDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"influx" )
COUNTER=0
while ! (nc -z ${DBHOST} ${DBPORT}) ; do
sleep 5
let COUNTER+=5
print_warn "InfluxDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"mongo" )
COUNTER=0
while ! (nc -z ${DBHOST} ${DBPORT}) ; do
sleep 5
let COUNTER+=5
print_warn "Mongo Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"mysql" )
COUNTER=0
while true; do
mysqlcmd='mysql -u'${DBUSER}' -P '${DBPORT}' -h '${DBHOST}' -p'${DBPASS}
out="`$mysqlcmd -e "SELECT COUNT(*) FROM information_schema.FILES;" 2>&1`"
echo "$out" | grep -E "COUNT|Enter" 2>&1 > /dev/null
if [ $? -eq 0 ]; then
:
break
fi
print_warn "MySQL/MariaDB Server "$DBHOST" is not accessible, retrying.. ($COUNTER seconds so far)"
sleep 5
let COUNTER+=5
done
;;
"pgsql" )
# Wait until postgres reports it's ready
COUNTER=0
export PGPASSWORD=${DBPASS}
until pg_isready --dbname=${DBNAME} --host=${DBHOST} --port=${DBPORT} --username=${DBUSER} -q
do
sleep 5
let COUNTER+=5
print_warn "Postgres Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"redis" )
COUNTER=0
while ! (nc -z ${DBHOST} ${DBPORT}) ; do
sleep 5
let COUNTER+=5
print_warn "Redis Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"rethink" )
COUNTER=0
while ! (nc -z ${DBHOST} ${DBPORT}) ; do
sleep 5
let COUNTER+=5
print_warn "RethinkDB Host '"$DBHOST"' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
esac
}
function compression() {
case "$COMPRESSION" in
"GZ" | "gz" | "gzip" | "GZIP")
$GZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.gz
;;
"BZ" | "bz" | "bzip2" | "BZIP2" | "bzip" | "BZIP" | "bz2" | "BZ2")
$BZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.bz2
;;
"XZ" | "xz" | "XZIP" | "xzip" )
$XZIP ${TMPDIR}/${TARGET}
TARGET=${TARGET}.xz
;;
"NONE" | "none" | "FALSE" | "false")
;;
esac
}
function generate_md5() {
if [ "$MD5" = "TRUE" ] || [ "$MD5" = "true" ] ; then
cd $TMPDIR
md5sum ${TARGET} > ${TARGET}.md5
MD5VALUE=$(md5sum ${TARGET} | awk '{ print $1}')
fi
}
function move_backup() {
mkdir -p ${DB_DUMP_TARGET}
mv ${TMPDIR}/*.md5 ${DB_DUMP_TARGET}/
mv ${TMPDIR}/${TARGET} ${DB_DUMP_TARGET}/${TARGET}
case "$SIZE_VALUE" in
"b" | "bytes" )
SIZE_VALUE=1
;;
"[kK]" | "[kK][bB]" | "kilobytes" | "[mM]" | "[mM][bB]" | "megabytes" )
SIZE_VALUE="-h"
;;
*)
SIZE_VALUE=1
;;
esac
if [ "$SIZE_VALUE" = "1" ] ; then
FILESIZE=$(stat -c%s "${DB_DUMP_TARGET}/${TARGET}")
else
FILESIZE=$(du -h "${DB_DUMP_TARGET}/${TARGET}" | awk '{ print $1}')
fi
}
### Container Startup
print_info "Initialized on `date`"
### Wait for Next time to start backup
current_time=$(date +"%s")
today=$(date +"%Y%m%d")
if [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then
waittime=$(( ${BASH_REMATCH[1]} * 60 ))
else
target_time=$(date --date="${today}${DB_DUMP_BEGIN}" +"%s")
if [[ "$target_time" < "$current_time" ]]; then
target_time=$(($target_time + 24*60*60))
fi
waittime=$(($target_time - $current_time))
fi
sleep $waittime
### Commence Backup
while true; do
# make sure the directory exists
mkdir -p $TMPDIR
### Define Target name
now=$(date +"%Y%m%d-%H%M%S")
now_time=$(date +"%H:%M:%S")
now_date=$(date +"%Y-%m-%d")
TARGET=${DBTYPE}_${DBNAME}_${DBHOST}_${now}.sql
### Take a Dump
case "$DBTYPE" in
"couch" )
check_availability
backup_couch
;;
"influx" )
check_availability
backup_influx
;;
"mysql" )
check_availability
backup_mysql
;;
"mongo" )
check_availability
backup_mongo
;;
"pgsql" )
check_availability
backup_pgsql
;;
"redis" )
check_availability
backup_redis
;;
"rethink" )
check_availability
backup_rethink
;;
esac
### Zabbix
if [ "$ENABLE_ZABBIX" = "TRUE" ] || [ "$ENABLE_ZABBIX" = "true" ]; then
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o `stat -c%s ${DB_DUMP_TARGET}/${TARGET}`
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o `date -r ${DB_DUMP_TARGET}/${TARGET} +'%s'`
fi
### Automatic Cleanup
if [[ -n "$DB_CLEANUP_TIME" ]]; then
find $DB_DUMP_TARGET/ -mmin +$DB_CLEANUP_TIME -iname "${DBTYPE}_${DBNAME}_*.*" -exec rm {} \;
fi
### Post Backup Custom Script Support
if [ -d /assets/custom-scripts/ ] ; then
print_info "Found Custom Scripts to Execute"
for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
print_info "Running Script ${f}"
## script DB_TYPE DB_HOST DB_NAME DATE TIME BACKUP_FILENAME FILESIZE MD5_VALUE
chmod +x ${f}
${f} "${DBTYPE}" "${DBHOST}" "${DBNAME}" "${now_date}" "${now_time}" "${TARGET}" "${FILESIZE}" "${MD5VALUE}"
done
fi
### Go back to Sleep until next Backup time
if [ "$MANUAL" = "TRUE" ]; then
exit 1;
else
sleep $(($DB_DUMP_FREQ*60))
fi
done


@@ -0,0 +1,544 @@
#!/usr/bin/with-contenv bash
source /assets/functions/00-container
PROCESS_NAME="db-backup"
date >/dev/null
if [ "$1" != "NOW" ]; then
sleep 10
fi
### Sanity Test
sanity_var DB_TYPE "Database Type"
sanity_var DB_HOST "Database Host"
### Set the Database Type
dbtype=${DB_TYPE}
case "$dbtype" in
"couch" | "couchdb" | "COUCH" | "COUCHDB" )
dbtype=couch
dbport=${DB_PORT:-5984}
file_env 'DB_USER'
file_env 'DB_PASS'
;;
"influx" | "influxdb" | "INFLUX" | "INFLUXDB" )
dbtype=influx
dbport=${DB_PORT:-8088}
file_env 'DB_USER'
file_env 'DB_PASS'
;;
"mongo" | "mongodb" | "MONGO" | "MONGODB" )
dbtype=mongo
dbport=${DB_PORT:-27017}
[[ ( -n "${DB_USER}" ) || ( -n "${DB_USER_FILE}" ) ]] && file_env 'DB_USER'
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
;;
"mysql" | "MYSQL" | "mariadb" | "MARIADB")
dbtype=mysql
dbport=${DB_PORT:-3306}
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
;;
"mssql" | "MSSQL" | "microsoftsql" | "MICROSOFTSQL")
apkArch="$(apk --print-arch)"; \
case "$apkArch" in
x86_64) mssql=true ;;
*) print_error "MSSQL cannot operate on $apkArch processor!" ; exit 1 ;;
esac
dbtype=mssql
dbport=${DB_PORT:-1433}
;;
"postgres" | "postgresql" | "pgsql" | "POSTGRES" | "POSTGRESQL" | "PGSQL" )
dbtype=pgsql
dbport=${DB_PORT:-5432}
[[ ( -n "${DB_PASS}" ) || ( -n "${DB_PASS_FILE}" ) ]] && file_env 'DB_PASS'
;;
"redis" | "REDIS" )
dbtype=redis
dbport=${DB_PORT:-6379}
[[ ( -n "${DB_PASS}" || ( -n "${DB_PASS_FILE}" ) ) ]] && file_env 'DB_PASS'
;;
"sqlite" | "sqlite3" | "SQLITE" | "SQLITE3" )
dbtype=sqlite3
;;
esac
### Set Defaults
BACKUP_LOCATION=${BACKUP_LOCATION:-"FILESYSTEM"}
COMPRESSION=${COMPRESSION:-GZ}
COMPRESSION_LEVEL=${COMPRESSION_LEVEL:-"3"}
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}
DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backup}
dbhost=${DB_HOST}
dbname=${DB_NAME}
dbpass=${DB_PASS}
dbuser=${DB_USER}
MD5=${MD5:-TRUE}
PARALLEL_COMPRESSION=${PARALLEL_COMPRESSION:-TRUE}
SIZE_VALUE=${SIZE_VALUE:-"bytes"}
SPLIT_DB=${SPLIT_DB:-FALSE}
tmpdir=/tmp/backups
if [ "$BACKUP_TYPE" = "S3" ] || [ "$BACKUP_TYPE" = "s3" ] || [ "$BACKUP_TYPE" = "MINIO" ] || [ "$BACKUP_TYPE" = "minio" ] ; then
S3_PROTOCOL=${S3_PROTOCOL:-"https"}
sanity_var S3_HOST "S3 Host"
sanity_var S3_BUCKET "S3 Bucket"
sanity_var S3_KEY_ID "S3 Key ID"
sanity_var S3_KEY_SECRET "S3 Key Secret"
sanity_var S3_URI_STYLE "S3 URI Style (Virtualhost or Path)"
sanity_var S3_PATH "S3 Path"
sanity_var S3_REGION "S3 Region"
file_env 'S3_KEY_ID'
file_env 'S3_KEY_SECRET'
fi
if [ "$1" = "NOW" ]; then
DB_DUMP_BEGIN=+0
MANUAL=TRUE
fi
### Set Compression Options
if var_true "$PARALLEL_COMPRESSION" ; then
bzip="pbzip2 -${COMPRESSION_LEVEL}"
gzip="pigz -${COMPRESSION_LEVEL}"
xzip="pixz -${COMPRESSION_LEVEL}"
zstd="zstd --rm -${COMPRESSION_LEVEL}"
else
bzip="bzip2 -${COMPRESSION_LEVEL}"
gzip="gzip -${COMPRESSION_LEVEL}"
xzip="xz -${COMPRESSION_LEVEL} "
zstd="zstd --rm -${COMPRESSION_LEVEL}"
fi
### Set the Database Authentication Details
case "$dbtype" in
"mongo" )
[[ ( -n "${DB_USER}" ) ]] && MONGO_USER_STR=" --username ${dbuser}"
[[ ( -n "${DB_PASS}" ) ]] && MONGO_PASS_STR=" --password ${dbpass}"
[[ ( -n "${DB_NAME}" ) ]] && MONGO_DB_STR=" --db ${dbname}"
[[ ( -n "${DB_AUTH}" ) ]] && MONGO_AUTH_STR=" --authenticationDatabase ${DB_AUTH}"
;;
"mysql" )
[[ ( -n "${DB_PASS}" ) ]] && export MYSQL_PWD=${dbpass}
;;
"postgres" )
[[ ( -n "${DB_PASS}" ) ]] && POSTGRES_PASS_STR="PGPASSWORD=${dbpass}"
;;
"redis" )
[[ ( -n "${DB_PASS}" ) ]] && REDIS_PASS_STR=" -a ${dbpass}"
;;
esac
### Functions
backup_couch() {
target=couch_${dbname}_${dbhost}_${now}.txt
compression
curl -X GET http://${dbhost}:${dbport}/${dbname}/_all_docs?include_docs=true | $dumpoutput > ${tmpdir}/${target}
exit_code=$?
generate_md5
move_backup
}
backup_influx() {
if [ "${COMPRESSION}" = "NONE" ] || [ "${COMPRESSION}" = "none" ] || [ "${COMPRESSION}" = "FALSE" ] || [ "${COMPRESSION}" = "false" ] ; then
:
else
print_notice "Compressing InfluxDB backup with gzip"
influx_compression="-portable"
fi
for DB in $DB_NAME; do
target=influx_${DB}_${dbhost}_${now}
influxd backup ${influx_compression} -database $DB -host ${dbhost}:${dbport} ${tmpdir}/${target}
exit_code=$?
generate_md5
move_backup
done
}
backup_mongo() {
if [ "${COMPRESSION}" = "NONE" ] || [ "${COMPRESSION}" = "none" ] || [ "${COMPRESSION}" = "FALSE" ] || [ "${COMPRESSION}" = "false" ] ; then
target=${dbtype}_${dbname}_${dbhost}_${now}.archive
else
print_notice "Compressing MongoDB backup with gzip"
target=${dbtype}_${dbname}_${dbhost}_${now}.archive.gz
mongo_compression="--gzip"
fi
mongodump --archive=${tmpdir}/${target} ${mongo_compression} --host ${dbhost} --port ${dbport} ${MONGO_USER_STR}${MONGO_PASS_STR}${MONGO_AUTH_STR}${MONGO_DB_STR} ${EXTRA_OPTS}
exit_code=$?
cd ${tmpdir}
generate_md5
move_backup
}
backup_mssql() {
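# Note: unlike its siblings, this routine only issues the T-SQL BACKUP; the .bak stays in ${tmpdir} with no compression, checksum, or move_backup step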
target=mssql_${dbname}_${dbhost}_${now}.bak
/opt/mssql-tools/bin/sqlcmd -C -S ${dbhost},${dbport} -U ${dbuser} -P ${dbpass} -Q "BACKUP DATABASE [${dbname}] TO DISK = N'${tmpdir}/${target}' WITH NOFORMAT, NOINIT, NAME = '${dbname}-full', SKIP, NOREWIND, NOUNLOAD, STATS = 10"
}
backup_mysql() {
if var_true "$SPLIT_DB" ; then
DATABASES=$(mysql -h ${dbhost} -P $dbport -u$dbuser --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema)
for db in $DATABASES; do
if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]] ; then
print_notice "Dumping MariaDB database: $db"
target=mysql_${db}_${dbhost}_${now}.sql
compression
mysqldump --max-allowed-packet=512M -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} --databases $db | $dumpoutput > ${tmpdir}/${target}
exit_code=$?
generate_md5
move_backup
fi
done
else
compression
mysqldump --max-allowed-packet=512M -A -h $dbhost -P $dbport -u$dbuser ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
exit_code=$?
generate_md5
move_backup
fi
}
backup_pgsql() {
if var_true "$SPLIT_DB" ; then
export PGPASSWORD=${dbpass}
authdb=${DB_USER}
[ -n "${DB_NAME}" ] && authdb=${DB_NAME}
DATABASES=$(psql -h $dbhost -U $dbuser -p ${dbport} -d ${authdb} -c 'COPY (SELECT datname FROM pg_database WHERE datistemplate = false) TO STDOUT;' )
for db in $DATABASES; do
print_info "Dumping database: $db"
target=pgsql_${db}_${dbhost}_${now}.sql
compression
pg_dump -h ${dbhost} -p ${dbport} -U ${dbuser} $db ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
exit_code=$?
generate_md5
move_backup
done
else
export PGPASSWORD=${dbpass}
compression
pg_dump -h ${dbhost} -U ${dbuser} -p ${dbport} ${dbname} ${EXTRA_OPTS} | $dumpoutput > ${tmpdir}/${target}
exit_code=$?
generate_md5
move_backup
fi
}
backup_redis() {
target=redis_${dbname}_${dbhost}_${now}.rdb
print_info "Dumping Redis - Flushing Redis Cache First"
echo bgsave | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} --rdb ${tmpdir}/${target} ${EXTRA_OPTS}
sleep 10
try=5
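# BGSAVE is asynchronous, so poll INFO Persistence until rdb_bgsave_in_progress clears and the last save reports ok, stopping after 5 attempts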
while [ $try -gt 0 ] ; do
saved=$(echo 'info Persistence' | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} | awk '/rdb_bgsave_in_progress:0/{print "saved"}')
ok=$(echo 'info Persistence' | redis-cli -h ${dbhost} -p ${dbport} ${REDIS_PASS_STR} | awk '/rdb_last_bgsave_status:ok/{print "ok"}')
if [[ "$saved" = "saved" ]] && [[ "$ok" = "ok" ]]; then
print_info "Redis Backup Complete"
break
fi
try=$((try - 1))
print_info "Redis Busy - Waiting and retrying in 5 seconds"
sleep 5
done
target_original=${target}
compression
$dumpoutput "${tmpdir}/${target_original}"
generate_md5
move_backup
}
backup_sqlite3() {
db=$(basename "$dbhost")
db="${db%.*}"
target=sqlite3_${db}_${now}.sqlite3
compression
print_info "Dumping sqlite3 database: ${dbhost}"
sqlite3 "${dbhost}" ".backup '${tmpdir}/backup.sqlite3'"
exit_code=$?
cat "${tmpdir}/backup.sqlite3" | $dumpoutput > "${tmpdir}/${target}"
generate_md5
move_backup
}
check_availability() {
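# Block until the target is reachable: plain TCP probes via nc for most engines, a client-level check for mysql and pgsql, and a file check for sqlite3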
### Set the Database Type
case "$dbtype" in
"couch" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "CouchDB Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"influx" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "InfluxDB Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"mongo" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "Mongo Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"mysql" )
COUNTER=0
export MYSQL_PWD=${dbpass}
while true; do
mysqlcmd='mysql -u'${dbuser}' -P '${dbport}' -h '${dbhost}
out="$($mysqlcmd -e "SELECT COUNT(*) FROM information_schema.FILES;" 2>&1)"
echo "$out" | grep -E "COUNT|Enter" 2>&1 > /dev/null
if [ $? -eq 0 ]; then
:
break
fi
print_warn "MySQL/MariaDB Server '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
sleep 5
(( COUNTER+=5 ))
done
;;
"mssql" )
COUNTER=0
while ! (nc -z ${dbhost} ${dbport}) ; do
sleep 5
(( COUNTER+=5 ))
print_warn "MSSQL Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"pgsql" )
COUNTER=0
export PGPASSWORD=${dbpass}
until pg_isready --dbname=${dbname} --host=${dbhost} --port=${dbport} --username=${dbuser} -q
do
sleep 5
(( COUNTER+=5 ))
print_warn "Postgres Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"redis" )
COUNTER=0
while ! (nc -z "${dbhost}" "${dbport}") ; do
sleep 5
(( COUNTER+=5 ))
print_warn "Redis Host '${dbhost}' is not accessible, retrying.. ($COUNTER seconds so far)"
done
;;
"sqlite3" )
if [[ ! -e "${dbhost}" ]]; then
print_error "File '${dbhost}' does not exist."
exit_code=2
exit $exit_code
elif [[ ! -f "${dbhost}" ]]; then
print_error "File '${dbhost}' is not a file."
exit_code=2
exit $exit_code
elif [[ ! -r "${dbhost}" ]]; then
print_error "File '${dbhost}' is not readable."
exit_code=2
exit $exit_code
fi
;;
esac
}
compression() {
case "$COMPRESSION" in
"GZ" | "gz" | "gzip" | "GZIP")
print_notice "Compressing backup with gzip"
target=${target}.gz
dumpoutput="$gzip "
;;
"BZ" | "bz" | "bzip2" | "BZIP2" | "bzip" | "BZIP" | "bz2" | "BZ2")
print_notice "Compressing backup with bzip2"
target=${target}.bz2
dumpoutput="$bzip "
;;
"XZ" | "xz" | "XZIP" | "xzip" )
print_notice "Compressing backup with xzip"
target=${target}.xz
dumpoutput="$xzip "
;;
"ZSTD" | "zstd" | "ZST" | "zst" )
print_notice "Compressing backup with zstd"
target=${target}.zst
dumpoutput="$zstd "
;;
"NONE" | "none" | "FALSE" | "false")
dumpoutput="cat "
;;
esac
}
generate_md5() {
if var_true "$MD5" ; then
print_notice "Generating MD5 for ${target}"
cd $tmpdir
md5sum "${target}" > "${target}".md5
MD5VALUE=$(md5sum "${target}" | awk '{ print $1}')
fi
}
move_backup() {
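# SIZE_VALUE only affects reporting: raw bytes via stat -c%s, or human-readable units via du -h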
case "$SIZE_VALUE" in
"b" | "bytes" )
SIZE_VALUE=1
;;
"[kK]" | "[kK][bB]" | "kilobytes" | "[mM]" | "[mM][bB]" | "megabytes" )
SIZE_VALUE="-h"
;;
*)
SIZE_VALUE=1
;;
esac
if [ "$SIZE_VALUE" = "1" ] ; then
FILESIZE=$(stat -c%s "${tmpdir}/${target}")
print_notice "Backup of ${target} created with the size of ${FILESIZE} bytes"
else
FILESIZE=$(du -h "${tmpdir}/${target}" | awk '{ print $1}')
print_notice "Backup of ${target} created with the size of ${FILESIZE}"
fi
case "${BACKUP_LOCATION}" in
"FILE" | "file" | "filesystem" | "FILESYSTEM" )
mkdir -p "${DB_DUMP_TARGET}"
mv ${tmpdir}/*.md5 "${DB_DUMP_TARGET}"/
mv ${tmpdir}/"${target}" "${DB_DUMP_TARGET}"/"${target}"
;;
"S3" | "s3" | "MINIO" | "minio" )
export AWS_ACCESS_KEY_ID=${S3_KEY_ID}
export AWS_SECRET_ACCESS_KEY=${S3_KEY_SECRET}
export AWS_DEFAULT_REGION=${S3_REGION}
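# Setting S3_HOST points the AWS CLI at a custom endpoint, which is what enables MinIO and other S3-compatible stores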
[[ ( -n "${S3_HOST}" ) ]] && PARAM_AWS_ENDPOINT_URL=" --endpoint-url ${S3_PROTOCOL}://${S3_HOST}"
aws ${PARAM_AWS_ENDPOINT_URL} s3 cp ${tmpdir}/${target} s3://${S3_BUCKET}/${S3_PATH}/${target}
rm -rf ${tmpdir}/*.md5
rm -rf ${tmpdir}/"${target}"
;;
esac
}
### Container Startup
print_debug "Backup routines Initialized on $(date)"
### Wait for Next time to start backup
if [ "$1" != "NOW" ]; then
current_time=$(date +"%s")
today=$(date +"%Y%m%d")
if [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then
waittime=$(( ${BASH_REMATCH[1]} * 60 ))
target_time=$(($current_time + $waittime))
else
target_time=$(date --date="${today}${DB_DUMP_BEGIN}" +"%s")
if [[ "$target_time" < "$current_time" ]]; then
target_time=$(($target_time + 24*60*60))
fi
waittime=$(($target_time - $current_time))
fi
print_notice "Next Backup at $(date -d @${target_time} +"%Y-%m-%d %T %Z")"
sleep $waittime
fi
### Commence Backup
while true; do
# make sure the directory exists
mkdir -p $tmpdir
### Define Target name
now=$(date +"%Y%m%d-%H%M%S")
now_time=$(date +"%H:%M:%S")
now_date=$(date +"%Y-%m-%d")
target=${dbtype}_${dbname}_${dbhost}_${now}.sql
### Take a Dump
case "$dbtype" in
"couch" )
check_availability
backup_couch
;;
"influx" )
check_availability
backup_influx
;;
"mssql" )
check_availability
backup_mssql
;;
"mysql" )
check_availability
backup_mysql
;;
"mongo" )
check_availability
backup_mongo
;;
"pgsql" )
check_availability
backup_pgsql
;;
"redis" )
check_availability
backup_redis
;;
"sqlite3" )
check_availability
backup_sqlite3
;;
esac
### Zabbix
if var_true "${CONTAINER_ENABLE_MONITORING}" ; then
print_notice "Sending Backup Statistics to Zabbix"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.size -o "$(stat -c%s "${DB_DUMP_TARGET}"/"${target}")"
silent zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -k dbbackup.datetime -o "$(date -r "${DB_DUMP_TARGET}"/"${target}" +'%s')"
fi
### Automatic Cleanup
if [[ -n "$DB_CLEANUP_TIME" ]]; then
print_notice "Cleaning up old backups"
find "${DB_DUMP_TARGET}"/ -mmin +"${DB_CLEANUP_TIME}" -iname "*" -exec rm {} \;
fi
if [ -n "$POST_SCRIPT" ] ; then
print_notice "Found POST_SCRIPT environment variable. Executing"
eval "${POST_SCRIPT}"
fi
### Post Backup Custom Script Support
if [ -d /assets/custom-scripts/ ] ; then
print_notice "Found Custom Filesystem Scripts to Execute"
for f in $(find /assets/custom-scripts/ -name \*.sh -type f); do
print_notice "Running Script ${f}"
## script EXIT_CODE DB_TYPE DB_HOST DB_NAME DATE BACKUP_FILENAME FILESIZE MD5_VALUE
chmod +x "${f}"
${f} "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${now_date}" "${now_time}" "${target}" "${FILESIZE}" "${MD5VALUE}"
done
fi
### Go back to Sleep until next Backup time
if var_true "$MANUAL" ; then
exit 0
else
sleep $(($DB_DUMP_FREQ*60))
fi
done
fi


@@ -1,4 +1,4 @@
 #!/usr/bin/with-contenv bash
 echo '** Performing Manual Backup'
-/etc/s6/services/10-db-backup/run NOW
+/etc/services.available/10-db-backup/run NOW


@@ -1,515 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<zabbix_export>
<version>3.4</version>
<date>2018-02-02T19:04:27Z</date>
<groups>
<group>
<name>Discovered Containers</name>
</group>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>Service - ICMP</template>
<name>Service - ICMP (Ping)</name>
<description/>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<items>
<item>
<name>ICMP ping</name>
<type>3</type>
<snmp_community/>
<snmp_oid/>
<key>icmpping</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<valuemap>
<name>Service state</name>
</valuemap>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>ICMP loss</name>
<type>3</type>
<snmp_community/>
<snmp_oid/>
<key>icmppingloss</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>0</value_type>
<allowed_hosts/>
<units>%</units>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>ICMP response time</name>
<type>3</type>
<snmp_community/>
<snmp_oid/>
<key>icmppingsec</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>0</value_type>
<allowed_hosts/>
<units>s</units>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>ICMP</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
</items>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
</template>
<template>
<template>Zabbix - Container Agent</template>
<name>Zabbix - Container Agent</name>
<description/>
<groups>
<group>
<name>Discovered Containers</name>
</group>
<group>
<name>Templates</name>
</group>
</groups>
<applications>
<application>
<name>Packages</name>
</application>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<items>
<item>
<name>Hostname of Container</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.hostname</key>
<delay>1h</delay>
<history>1w</history>
<trends>0</trends>
<status>0</status>
<value_type>1</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>3</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Contaner OS</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.os</key>
<delay>6h</delay>
<history>30d</history>
<trends>0</trends>
<status>0</status>
<value_type>1</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>5</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Zabbix Agent ping</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.ping</key>
<delay>1m</delay>
<history>1w</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description>The agent always returns 1 for this item. It could be used in combination with nodata() for availability check.</description>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap>
<name>Zabbix agent ping status</name>
</valuemap>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Zabbix Agent Version</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>agent.version</key>
<delay>1h</delay>
<history>1w</history>
<trends>0</trends>
<status>0</status>
<value_type>1</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>Zabbix agent</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
<item>
<name>Upgradable Packages</name>
<type>0</type>
<snmp_community/>
<snmp_oid/>
<key>packages.upgradable</key>
<delay>6h</delay>
<history>90d</history>
<trends>365d</trends>
<status>0</status>
<value_type>3</value_type>
<allowed_hosts/>
<units/>
<snmpv3_contextname/>
<snmpv3_securityname/>
<snmpv3_securitylevel>0</snmpv3_securitylevel>
<snmpv3_authprotocol>0</snmpv3_authprotocol>
<snmpv3_authpassphrase/>
<snmpv3_privprotocol>0</snmpv3_privprotocol>
<snmpv3_privpassphrase/>
<params/>
<ipmi_sensor/>
<authtype>0</authtype>
<username/>
<password/>
<publickey/>
<privatekey/>
<port/>
<description/>
<inventory_link>0</inventory_link>
<applications>
<application>
<name>Packages</name>
</application>
</applications>
<valuemap/>
<logtimefmt/>
<preprocessing/>
<jmx_endpoint/>
<master_item/>
</item>
</items>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
</template>
</templates>
<triggers>
<trigger>
<expression>{Service - ICMP:icmpping.max(3m)}=3</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Cannot be pinged</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>5</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Service - ICMP:icmppingloss.min(10m)}&gt;50</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Ping loss is too high</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>4</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies>
<dependency>
<name>Cannot be pinged</name>
<expression>{Service - ICMP:icmpping.max(3m)}=3</expression>
<recovery_expression/>
</dependency>
</dependencies>
<tags/>
</trigger>
<trigger>
<expression>{Service - ICMP:icmppingsec.avg(2m)}&gt;100</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Ping Response time is too high</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>4</priority>
<description/>
<type>1</type>
<manual_close>0</manual_close>
<dependencies>
<dependency>
<name>Cannot be pinged</name>
<expression>{Service - ICMP:icmpping.max(3m)}=3</expression>
<recovery_expression/>
</dependency>
</dependencies>
<tags/>
</trigger>
<trigger>
<expression>{Zabbix - Container Agent:packages.upgradable.last()}&gt;0</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Upgraded Packages in Container Available</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>1</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
<trigger>
<expression>{Zabbix - Container Agent:agent.ping.nodata(3m)}=1</expression>
<recovery_mode>0</recovery_mode>
<recovery_expression/>
<name>Zabbix agent is unreachable</name>
<correlation_mode>0</correlation_mode>
<correlation_tag/>
<url/>
<status>0</status>
<priority>5</priority>
<description/>
<type>0</type>
<manual_close>0</manual_close>
<dependencies/>
<tags/>
</trigger>
</triggers>
<value_maps>
<value_map>
<name>Service state</name>
<mappings>
<mapping>
<value>0</value>
<newvalue>Down</newvalue>
</mapping>
<mapping>
<value>1</value>
<newvalue>Up</newvalue>
</mapping>
</mappings>
</value_map>
<value_map>
<name>Zabbix agent ping status</name>
<mappings>
<mapping>
<value>1</value>
<newvalue>Up</newvalue>
</mapping>
</mappings>
</value_map>
</value_maps>
</zabbix_export>