Duplicity Backup

Instructions

Mount any directories you'd like to back up as volumes and run the container.
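
For example, a minimal sketch assuming you've built the image locally as duplicity_backup (the image tag, host paths, and passphrase are placeholders):

docker build -t duplicity_backup .
docker run -d \
  --name my_backup_container \
  -v /home/me/important:/data:ro \
  -v /mnt/backups:/backups \
  -e PASSPHRASE="something-more-secret-than-the-default" \
  duplicity_backup

This relies on the defaults of PATH_TO_BACKUP=/data and BACKUP_DEST=file:///backups described below.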

Env Variables

Variable | Default | Description
AWS_ACCESS_KEY_ID | | Required for writing to S3
AWS_DEFAULT_REGION | | Required for writing to S3
AWS_SECRET_ACCESS_KEY | | Required for writing to S3
BACKUP_DEST | file:///backups | Destination to store backups (see the duplicity documentation)
BACKUP_NAME | backup | Name for the backup. If using a single store for multiple backups, make sure this is unique
CLEANUP_COMMAND | | An optional duplicity command to execute after backups to clean out older ones (eg. "remove-all-but-n-full 2")
CRON_SCHEDULE | | If you want periodic incremental backups on a schedule, provide it here. By default we just back up once and exit
FLOCK_WAIT | 60 | Seconds to wait for a lock before skipping a backup
FTP_PASSWORD | | Used to provide passwords for some backends. May not work without an attached TTY
FULL_CRON_SCHEDULE | | If you want periodic full backups on a schedule, provide it here. This requires an incremental cron schedule too
GPG_KEY_ID | | The ID of the key you wish to use. See the Encryption section below
OPT_ARGUMENTS | | Any additional arguments to provide to the duplicity backup command
PASSPHRASE | Correct.Horse.Battery.Staple | Passphrase to use for GPG
PATH_TO_BACKUP | /data | The path to the directory you wish to back up. To back up multiple directories, see the tip below
RESTORE_ON_EMPTY_START | | If set to "true" and $PATH_TO_BACKUP is empty, the latest backup will be restored on start. This can be used for auto recovery from lost data
SKIP_ON_START | | Skips backup on start if set to "true"
VERIFY_CRON_SCHEDULE | | If you want to verify your backups on a schedule, provide it here
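
As an illustration only, a nightly incremental backup to S3 with weekly fulls might combine the variables like this (bucket, region, schedules, and credentials are placeholders, and the exact BACKUP_DEST URL format depends on your duplicity version and backend; see the duplicity documentation). Pass them with -e flags or via your compose file:

BACKUP_DEST="s3://s3.amazonaws.com/my-bucket/my-host"
BACKUP_NAME="my-host"
CRON_SCHEDULE="0 2 * * *"
FULL_CRON_SCHEDULE="0 4 * * 0"
AWS_ACCESS_KEY_ID="AKIA..."
AWS_SECRET_ACCESS_KEY="..."
AWS_DEFAULT_REGION="us-east-1"
PASSPHRASE="pick-something-long-and-unique"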

Encryption

By default, Duplicity uses symmetric encryption with just your passphrase. If you wish to use a GPG key, add a read-only mount of your ~/.gnupg directory and provide the GPG_KEY_ID as an environment variable. The key will be used to sign and encrypt your files before they are sent to the backup destination.
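
For example, a sketch of mounting your keyring read-only (the key ID, paths, and image tag are placeholders, and /root/.gnupg assumes duplicity runs as root inside the container; adjust the mount point if your setup differs):

docker run -d \
  -v ~/.gnupg:/root/.gnupg:ro \
  -v /home/me/important:/data:ro \
  -v /mnt/backups:/backups \
  -e GPG_KEY_ID="0xDEADBEEF" \
  -e PASSPHRASE="passphrase-protecting-that-key" \
  duplicity_backup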

Need to generate a key? Install gnupg and run gpg --gen-key

Tips

Missing dependencies?

Please file a ticket! Duplicity supports a ton of backends and I haven't had a chance to validate that all dependencies are present in the image. If something is missing, let me know and I'll add it

Getting complaints about no terminal for askpass?

Instead of using FTP_PASSWORD, add the password to the endpoint URL.
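
For example, embed the credentials directly in BACKUP_DEST (host, user, and password are illustrative):

BACKUP_DEST="ftp://backup-user:s3cret@ftp.example.com/backups/my-host"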

Backing up more than one source directory

Duplicity only accepts one target, however you can refine that selection with --exclude and --include arguments. The example below shows how this can be used to select multiple backup sources:

OPT_ARGUMENTS="--include /home --include /etc --exclude '**'"
PATH_TO_BACKUP="/"

Backing up from another container

Mount all volumes from your existing container with --volumes-from and then back up by providing the paths to those volumes. If there is more than one volume, you'll want to use the tip above for multiple backup sources.
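
As a sketch, assuming an existing container named my_app with volumes at /var/lib/app and /etc/app (all names and paths here are placeholders):

docker run -d \
  --volumes-from my_app:ro \
  -v /mnt/backups:/backups \
  -e PATH_TO_BACKUP="/" \
  -e OPT_ARGUMENTS="--include /var/lib/app --include /etc/app --exclude '**'" \
  -e PASSPHRASE="something-secret" \
  duplicity_backup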

Restoring a backup

On your running container, execute /restore.sh. That should be it! Eg. docker exec my_backup_container /restore.sh

To Do

  • Automatic restoration if there is no source data