librenotes/scripts/backup.sh
Michael Czechowski bcccba92f7 Add deploy workflow and backup tooling
CI deployment (.gitea/workflows/deploy.yml):
- Two jobs (build, deploy), both gated on the repo variable
  DEPLOY_ENABLED=true, so the workflow can be merged now but does
  nothing until secrets and host are configured.
- Build pushes two image tags per run: the rolling :main tag plus the
  short commit SHA on pushes to main, or :vX.Y.Z plus :latest on tag
  pushes. Immutable per-commit/per-tag tags make rollback trivial.
- Deploy SSHes to DEPLOY_HOST, runs docker compose pull && up -d
  in DEPLOY_PATH, then polls HEALTH_URL for up to a minute. A
  failed health check fails the workflow, which is the alert.
- Required secrets and the rollback procedure are documented in
  docs/operations.md.
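The deploy step can be sketched roughly as below. Only the compose
commands, the ~60s health poll, and fail-on-timeout come from the
description above; the `deploy@` user, poll interval, and overall
layout are assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

# wait_healthy URL TIMEOUT_SECS -- poll URL until it answers or the
# timeout elapses. Returns non-zero on timeout; under set -e that
# fails the workflow, which is the alert.
wait_healthy() {
  local url=$1 deadline=$((SECONDS + $2))
  until curl -fsS "$url" >/dev/null 2>&1; do
    (( SECONDS < deadline )) || return 1
    sleep 2
  done
}

# Guarded so the sketch is inert unless the deploy secrets are set.
if [ -n "${DEPLOY_HOST:-}" ]; then
  # "deploy" as the SSH user is an assumption, not from the workflow.
  ssh "deploy@$DEPLOY_HOST" "cd '$DEPLOY_PATH' && docker compose pull && docker compose up -d"
  wait_healthy "$HEALTH_URL" 60
fi
```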

Backup tooling (scripts/):
- backup.sh: SQLite online .backup snapshot + tarball of the
  per-tenant data dir + info.txt header, all wrapped into a
  single librenotes-YYYYMMDD-HHMMSS.tar.gz. Optional BACKUP_REMOTE
  triggers an rclone copy for off-site storage.
- backup-prune.sh: enforces the "30 daily + 12 monthly" retention
  rule. Sorts archives by filename (the date is in the name, so
  lexical order matches chronological order) and keeps the newest 30
  plus the newest archive for each of the 12 most recent months.
- backup-restore-test.sh: extracts the most recent archive into
  a tmpdir, runs sqlite3 .schema (proves DB readability), and
  asserts the notes tar has at least one entry. Failure is the
  alert. Wired into a separate weekly timer.
- librenotes-backup.{service,timer}: systemd units for the daily
  03:17 UTC run with 5min jitter; ProtectSystem=strict, only
  /var/backups/librenotes is writable.
- librenotes-backup-verify.{service,timer}: weekly Monday
  04:00 UTC restore test.
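The backup units might look roughly like this (illustrative only: the
schedule, jitter, ProtectSystem=strict, and writable path come from
the description above, while EnvironmentFile=, PrivateTmp=, and the
exact paths are assumptions):

```ini
# librenotes-backup.timer -- illustrative content only
[Unit]
Description=Daily librenotes backup

[Timer]
OnCalendar=*-*-* 03:17:00 UTC
RandomizedDelaySec=5min

[Install]
WantedBy=timers.target

# librenotes-backup.service -- illustrative content only
[Unit]
Description=librenotes backup

[Service]
Type=oneshot
# Path and env file are placeholders for illustration.
EnvironmentFile=/etc/librenotes/backup.env
ExecStart=/usr/local/bin/backup.sh
ProtectSystem=strict
ReadWritePaths=/var/backups/librenotes
# backup.sh uses mktemp; PrivateTmp gives it a writable /tmp under
# ProtectSystem=strict.
PrivateTmp=true
```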

Closes #26 and #27.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:49:40 +02:00


#!/usr/bin/env bash
# backup.sh — daily backup of librenotes state.
#
# Produces $BACKUP_DIR/librenotes-YYYYMMDD-HHMMSS.tar.gz containing:
# - librenotes.db (consistent SQLite .backup snapshot)
# - notes.tar.gz (per-tenant note files)
# - info.txt (timestamp, hostname, version)
#
# Required env:
# LIBRENOTES_DB path to the SQLite database
# LIBRENOTES_DATA_DIR path to the per-tenant note directory
#
# Optional env:
# BACKUP_DIR where to write archives (default /var/backups/librenotes)
# BACKUP_REMOTE rclone target for off-site copy (e.g. s3:bucket/path)
# BACKUP_VERSION version string written into info.txt
#
# The script is intentionally a single self-contained file so it
# can run on a minimal host with only sqlite3, tar, gzip, and
# (optionally) rclone.
set -euo pipefail
: "${LIBRENOTES_DB:?LIBRENOTES_DB is required}"
: "${LIBRENOTES_DATA_DIR:?LIBRENOTES_DATA_DIR is required}"
BACKUP_DIR="${BACKUP_DIR:-/var/backups/librenotes}"
BACKUP_VERSION="${BACKUP_VERSION:-unknown}"
mkdir -p "$BACKUP_DIR"
ts="$(date -u +%Y%m%d-%H%M%S)"
work="$(mktemp -d)"
trap 'rm -rf "$work"' EXIT
# Online SQLite snapshot. .backup produces a transactionally
# consistent copy even while the application is writing.
sqlite3 "$LIBRENOTES_DB" ".backup '$work/librenotes.db'"
# Notes archive. Suppress the "file changed as we read it" warning
# because per-user files may be touched concurrently; a file modified
# mid-read can be captured torn, which we accept for note files. GNU
# tar still exits 1 in that case, so tolerate exit status 1 here.
tar --warning=no-file-changed -C "$LIBRENOTES_DATA_DIR" -czf "$work/notes.tar.gz" . || [ "$?" -eq 1 ]
cat > "$work/info.txt" <<EOF
backup_at: $ts UTC
host: $(hostname)
version: $BACKUP_VERSION
db_size: $(stat -c%s "$work/librenotes.db" 2>/dev/null || stat -f%z "$work/librenotes.db")
notes_size: $(stat -c%s "$work/notes.tar.gz" 2>/dev/null || stat -f%z "$work/notes.tar.gz")
EOF
archive="$BACKUP_DIR/librenotes-$ts.tar.gz"
tar -C "$work" -czf "$archive" librenotes.db notes.tar.gz info.txt
echo "wrote $archive ($(stat -c%s "$archive" 2>/dev/null || stat -f%z "$archive") bytes)"
if [ -n "${BACKUP_REMOTE:-}" ]; then
  rclone copy "$archive" "$BACKUP_REMOTE" --quiet
  echo "uploaded to $BACKUP_REMOTE"
fi