Automate backup and restore for Ollama models with systemd timers and shell scripts

Intermediate · 45 min · Apr 18, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up automated backup and restore procedures for Ollama AI models using systemd timers, shell scripts, and compression. Includes disaster recovery strategies and monitoring integration for production environments.

Prerequisites

  • Ollama installed and configured
  • Root or sudo access
  • Basic shell scripting knowledge
  • At least 10GB free disk space for backups

What this solves

Ollama models are large, valuable assets that require regular backup protection and disaster recovery planning. This tutorial creates automated backup systems using systemd timers and shell scripts to protect your AI models from hardware failures, corruption, or accidental deletion. You'll implement compressed backups, automated restoration procedures, and monitoring integration for production reliability.

Step-by-step installation

Update system packages

Start by updating your package manager to ensure you have the latest security patches and tools.

# Debian/Ubuntu
sudo apt update && sudo apt upgrade -y

# AlmaLinux/Rocky (RHEL family)
sudo dnf update -y

Install backup prerequisites

Install compression tools, rsync for efficient transfers, and monitoring utilities for backup verification.

# Debian/Ubuntu
sudo apt install -y rsync gzip tar curl jq pigz pv

# AlmaLinux/Rocky (if dnf can't find pv or pigz, enable EPEL first: sudo dnf install -y epel-release)
sudo dnf install -y rsync gzip tar curl jq pigz pv
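Before moving on, it's worth a quick sanity check that every tool the backup scripts rely on actually resolves on PATH. A minimal sketch; extend the list if you customize the scripts later:

```shell
# Verify that each required tool resolves on PATH; report anything missing.
for tool in rsync gzip tar curl jq pigz pv; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "ok: $tool"
    else
        echo "MISSING: $tool"
    fi
done
```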

Create backup directory structure

Set up organized directories for storing backups with proper permissions for the backup system.

sudo mkdir -p /opt/ollama-backup/{scripts,backups,logs,restore}
sudo mkdir -p /opt/ollama-backup/backups/{daily,weekly,monthly}
sudo useradd --system --home /opt/ollama-backup --shell /bin/bash ollama-backup
sudo chown -R ollama-backup:ollama-backup /opt/ollama-backup
sudo chmod 750 /opt/ollama-backup
sudo chmod 755 /opt/ollama-backup/backups

Identify Ollama data locations

Find where Ollama stores models and configuration data on your system.

sudo systemctl status ollama
sudo find /home /root /usr/share -name "*.gguf" -o -name "modelfile" 2>/dev/null | head -10
ls -la ~/.ollama/models/ 2>/dev/null || echo "Default location not found"
sudo ls -la /usr/share/ollama/.ollama/models/ 2>/dev/null || echo "System location not found"
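If you'd rather script this discovery step than eyeball it, a small helper can return the first candidate directory that exists. This is a sketch; the candidate list mirrors the locations checked above and may need extending if you've set a custom OLLAMA_MODELS path:

```shell
# Return the first existing Ollama data directory from a list of known locations.
find_ollama_dir() {
    for d in "$HOME/.ollama" /usr/share/ollama/.ollama /var/lib/ollama /opt/ollama; do
        if [ -d "$d" ]; then
            printf '%s\n' "$d"
            return 0
        fi
    done
    return 1
}

find_ollama_dir || echo "No Ollama data directory found"
```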

Create the main backup script

Build a comprehensive backup script that handles model discovery, compression, and verification. Save it as /opt/ollama-backup/scripts/ollama-backup.sh.

#!/bin/bash
# Ollama Model Backup Script
# Automatically discovers and backs up Ollama models with compression

set -euo pipefail

# Configuration
BACKUP_BASE="/opt/ollama-backup/backups"
LOG_FILE="/opt/ollama-backup/logs/backup-$(date +%Y%m%d-%H%M%S).log"
RETENTION_DAYS=30
COMPRESSION_LEVEL=6
MAX_PARALLEL_JOBS=2

# Ollama data locations to check ($HOME expands here, so no eval is needed later)
OLLAMA_LOCATIONS=(
    "$HOME/.ollama"
    "/usr/share/ollama/.ollama"
    "/var/lib/ollama"
    "/opt/ollama"
)

# Logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

# Error handling
error_exit() {
    log "ERROR: $1"
    exit 1
}

# Refuse to run as root
if [[ $(id -u) -eq 0 ]]; then
    error_exit "Do not run this script as root. Use the ollama-backup user."
fi

# Create backup type directory
BACKUP_TYPE=${1:-daily}
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_DIR="$BACKUP_BASE/$BACKUP_TYPE/$TIMESTAMP"
mkdir -p "$BACKUP_DIR"

log "Starting Ollama backup: $BACKUP_TYPE"
log "Backup directory: $BACKUP_DIR"

# Back up a single location as a compressed, verified archive
backup_location() {
    local source_dir="$1"
    local backup_name="$2"

    if [[ ! -d "$source_dir" ]]; then
        log "Skipping $source_dir - directory not found"
        return 0
    fi

    local size
    size=$(du -sh "$source_dir" | cut -f1)
    log "Backing up $source_dir ($size) as $backup_name"

    # Create compressed archive with progress
    tar -C "$(dirname "$source_dir")" -cf - "$(basename "$source_dir")" | \
        pv -s "$(du -sb "$source_dir" | cut -f1)" | \
        pigz -$COMPRESSION_LEVEL > "$BACKUP_DIR/$backup_name.tar.gz"

    # Verify backup integrity
    if pigz -t "$BACKUP_DIR/$backup_name.tar.gz" >/dev/null 2>&1; then
        log "Successfully backed up $source_dir"
        # Store metadata
        {
            echo "source=$source_dir"
            echo "size=$size"
            echo "timestamp=$TIMESTAMP"
            echo "checksum=$(sha256sum "$BACKUP_DIR/$backup_name.tar.gz" | cut -d' ' -f1)"
        } > "$BACKUP_DIR/$backup_name.meta"
    else
        error_exit "Backup verification failed for $source_dir"
    fi
}

# Backup Ollama service configuration
if systemctl is-enabled ollama >/dev/null 2>&1; then
    log "Backing up Ollama systemd configuration"
    mkdir -p "$BACKUP_DIR/systemd"
    sudo cp /etc/systemd/system/ollama.service "$BACKUP_DIR/systemd/" 2>/dev/null || true
    systemctl show ollama --no-pager > "$BACKUP_DIR/systemd/ollama-service-status.txt"
fi

# Backup Ollama models and data
for location in "${OLLAMA_LOCATIONS[@]}"; do
    if [[ -d "$location" ]]; then
        backup_name=$(echo "$location" | sed 's|/|_|g' | sed 's/^_//')
        backup_location "$location" "ollama-$backup_name"
    fi
done

# Save list of installed models
if command -v ollama >/dev/null 2>&1; then
    log "Saving list of installed models"
    ollama list > "$BACKUP_DIR/model-list.txt" 2>/dev/null || \
        echo "No models found" > "$BACKUP_DIR/model-list.txt"
fi

# Cleanup old backups (only timestamped subdirectories, never the base directory)
log "Cleaning up backups older than $RETENTION_DAYS days"
find "$BACKUP_BASE/$BACKUP_TYPE" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} + 2>/dev/null || true

# Generate backup summary
BACKUP_SIZE=$(du -sh "$BACKUP_DIR" | cut -f1)
log "Backup completed successfully"
log "Total backup size: $BACKUP_SIZE"
log "Backup location: $BACKUP_DIR"

# Export metrics if the Prometheus node_exporter textfile collector is available
if [[ -d "/var/lib/node_exporter/textfile_collector" ]]; then
    cat > "/var/lib/node_exporter/textfile_collector/ollama_backup.prom" << EOF
# HELP ollama_backup_last_success_timestamp_seconds Last successful backup timestamp
# TYPE ollama_backup_last_success_timestamp_seconds gauge
ollama_backup_last_success_timestamp_seconds{type="$BACKUP_TYPE"} $(date +%s)
# HELP ollama_backup_size_bytes Size of the backup in bytes
# TYPE ollama_backup_size_bytes gauge
ollama_backup_size_bytes{type="$BACKUP_TYPE"} $(du -sb "$BACKUP_DIR" | cut -f1)
EOF
fi

log "Ollama backup completed: $BACKUP_DIR"
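The tar-pipe-verify pattern at the heart of the script can be exercised on a throwaway directory before you trust it with real models. A minimal sketch, using gzip as a stand-in for pigz (the output format is identical):

```shell
# Build a small archive the same way the backup script does, then verify it.
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "dummy model blob" > "$workdir/data/model.bin"

# Compress (gzip stands in for pigz here; both write the same format)
tar -C "$workdir" -cf - data | gzip -6 > "$workdir/data.tar.gz"

# Verify integrity and record a checksum, mirroring the script's .meta file
if gzip -t "$workdir/data.tar.gz" 2>/dev/null; then
    echo "checksum=$(sha256sum "$workdir/data.tar.gz" | cut -d' ' -f1)" > "$workdir/data.meta"
    echo "archive OK"
else
    echo "archive corrupt"
fi
```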

Create the restore script

Build a restore script that can recover models from backups with verification and rollback capabilities. Save it as /opt/ollama-backup/scripts/ollama-restore.sh.

#!/bin/bash
# Ollama Model Restore Script
# Restores Ollama models from compressed backups

set -euo pipefail

# Configuration
BACKUP_BASE="/opt/ollama-backup/backups"
RESTORE_DIR="/opt/ollama-backup/restore"
LOG_FILE="/opt/ollama-backup/logs/restore-$(date +%Y%m%d-%H%M%S).log"

# Logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

# Error handling
error_exit() {
    log "ERROR: $1"
    exit 1
}

# Usage information
usage() {
    echo "Usage: $0 <backup-path> [destination]"
    echo "Example: $0 /opt/ollama-backup/backups/daily/20241201-120000"
    echo "Available backups:"
    find "$BACKUP_BASE" -name "*.tar.gz" -type f | sort | tail -10
    exit 1
}

# Validate input
if [[ $# -lt 1 ]]; then
    usage
fi

BACKUP_PATH="$1"
DESTINATION="${2:-auto}"

if [[ ! -d "$BACKUP_PATH" ]]; then
    error_exit "Backup path does not exist: $BACKUP_PATH"
fi

log "Starting Ollama restore from: $BACKUP_PATH"

# Stop Ollama service if running
if systemctl is-active ollama >/dev/null 2>&1; then
    log "Stopping Ollama service"
    sudo systemctl stop ollama
    RESTART_OLLAMA=true
else
    RESTART_OLLAMA=false
fi

# Create temporary restore workspace
WORKSPACE="$RESTORE_DIR/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$WORKSPACE"
log "Using workspace: $WORKSPACE"

# Restore a single backup archive
restore_backup() {
    local backup_file="$1"
    local meta_file="${backup_file%.tar.gz}.meta"

    if [[ ! -f "$meta_file" ]]; then
        log "Warning: No metadata file found for $backup_file"
        return 1
    fi

    # Read metadata (source, size, timestamp, checksum)
    source "$meta_file"
    log "Restoring: $backup_file"
    log "Original source: $source"
    log "Original size: $size"

    # Verify backup integrity before restore
    if ! pigz -t "$backup_file" >/dev/null 2>&1; then
        error_exit "Backup file is corrupted: $backup_file"
    fi

    # Verify checksum if available
    if [[ -n "${checksum:-}" ]]; then
        local current_checksum
        current_checksum=$(sha256sum "$backup_file" | cut -d' ' -f1)
        if [[ "$current_checksum" != "$checksum" ]]; then
            error_exit "Checksum verification failed for $backup_file"
        fi
        log "Checksum verified: $checksum"
    fi

    # Extract to workspace first
    log "Extracting $backup_file to workspace"
    pigz -dc "$backup_file" | tar -C "$WORKSPACE" -xf -

    # Determine restoration target
    if [[ "$DESTINATION" == "auto" ]]; then
        target_dir="$source"
    else
        target_dir="$DESTINATION/$(basename "$source")"
    fi

    # Preserve existing data before overwriting
    if [[ -d "$target_dir" ]]; then
        backup_existing="${target_dir}.backup-$(date +%Y%m%d-%H%M%S)"
        log "Backing up existing data to: $backup_existing"
        sudo mv "$target_dir" "$backup_existing"
    fi

    # Move the extracted data into place
    extracted_dir="$WORKSPACE/$(basename "$source")"
    if [[ -d "$extracted_dir" ]]; then
        log "Restoring to: $target_dir"
        sudo mkdir -p "$(dirname "$target_dir")"
        sudo mv "$extracted_dir" "$target_dir"

        # Fix ownership and permissions
        if [[ "$target_dir" =~ \.ollama ]]; then
            if [[ "$target_dir" =~ ^/home ]]; then
                # User installation
                local user_dir username
                user_dir=$(echo "$target_dir" | cut -d'/' -f1-3)
                username=$(basename "$user_dir")
                sudo chown -R "$username:$username" "$target_dir"
            else
                # System installation
                sudo chown -R ollama:ollama "$target_dir" 2>/dev/null || \
                    sudo chown -R root:root "$target_dir"
            fi
            sudo chmod -R 755 "$target_dir"
        fi
        log "Successfully restored: $target_dir"
    else
        error_exit "Extraction failed - directory not found: $extracted_dir"
    fi
}

# Process all backup files in the backup path
for backup_file in "$BACKUP_PATH"/*.tar.gz; do
    if [[ -f "$backup_file" ]]; then
        restore_backup "$backup_file"
    fi
done

# Restore systemd configuration if available
if [[ -d "$BACKUP_PATH/systemd" ]] && [[ -f "$BACKUP_PATH/systemd/ollama.service" ]]; then
    log "Restoring Ollama systemd configuration"
    sudo cp "$BACKUP_PATH/systemd/ollama.service" /etc/systemd/system/
    sudo systemctl daemon-reload
fi

# Restart Ollama if it was running
if [[ "$RESTART_OLLAMA" == "true" ]]; then
    log "Starting Ollama service"
    sudo systemctl start ollama
    sleep 5
    if systemctl is-active ollama >/dev/null 2>&1; then
        log "Ollama service started successfully"
    else
        log "Warning: Ollama service failed to start - check logs"
    fi
fi

# Cleanup workspace
rm -rf "$WORKSPACE"
log "Restore completed successfully"
log "You may need to run 'ollama list' to verify model availability"
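The restore script trusts the checksum stored in each .meta file. That round trip is easy to sanity-check in isolation; here's a sketch with a throwaway file:

```shell
# Write a checksum into a .meta file, then verify it the way the restore script does.
f=$(mktemp)
echo "payload" > "$f"
echo "checksum=$(sha256sum "$f" | cut -d' ' -f1)" > "$f.meta"

# Load the metadata and re-verify
. "$f.meta"
current=$(sha256sum "$f" | cut -d' ' -f1)
if [ "$current" = "$checksum" ]; then
    echo "checksum verified"
else
    echo "checksum mismatch"
fi
```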

Make scripts executable and set permissions

Set proper permissions on the backup scripts to ensure they can execute but remain secure.

sudo chmod 755 /opt/ollama-backup/scripts/ollama-backup.sh
sudo chmod 755 /opt/ollama-backup/scripts/ollama-restore.sh
sudo chown ollama-backup:ollama-backup /opt/ollama-backup/scripts/*.sh

Create systemd service for backup

Create a templated systemd service that handles the backup execution with proper environment and error handling. Save it as /etc/systemd/system/ollama-backup@.service; the instance name (%i) selects the backup type.

[Unit]
Description=Ollama Model Backup (%i)
After=network.target
Wants=network.target

[Service]
Type=oneshot
User=ollama-backup
Group=ollama-backup
WorkingDirectory=/opt/ollama-backup
Environment=HOME=/opt/ollama-backup
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ExecStart=/opt/ollama-backup/scripts/ollama-backup.sh %i
StandardOutput=journal
StandardError=journal
TimeoutStartSec=3600

# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectHome=read-only
ProtectSystem=strict
ReadWritePaths=/opt/ollama-backup /var/lib/node_exporter
SupplementaryGroups=ollama

[Install]
WantedBy=multi-user.target

Create systemd timers for scheduled backups

Set up three timer schedules for daily, weekly, and monthly backups, each saved as its own unit file under /etc/systemd/system/ and staggered with a randomized delay.

# /etc/systemd/system/ollama-backup-daily.timer
[Unit]
Description=Daily Ollama Model Backup

[Timer]
OnCalendar=daily
Unit=ollama-backup@daily.service
RandomizedDelaySec=1800
Persistent=true
AccuracySec=1m

[Install]
WantedBy=timers.target

# /etc/systemd/system/ollama-backup-weekly.timer
[Unit]
Description=Weekly Ollama Model Backup

[Timer]
OnCalendar=weekly
Unit=ollama-backup@weekly.service
RandomizedDelaySec=3600
Persistent=true
AccuracySec=1m

[Install]
WantedBy=timers.target

# /etc/systemd/system/ollama-backup-monthly.timer
[Unit]
Description=Monthly Ollama Model Backup

[Timer]
OnCalendar=monthly
Unit=ollama-backup@monthly.service
RandomizedDelaySec=7200
Persistent=true
AccuracySec=1m

[Install]
WantedBy=timers.target
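OnCalendar=daily fires at midnight; on a systemd host, systemd-analyze calendar daily gives the authoritative next-trigger time. For a quick sanity check of when the next run lands, plain GNU date arithmetic (as shipped on all four distros covered here) is enough:

```shell
# Seconds until the next daily trigger (midnight), computed with GNU date.
next_daily=$(date -d "tomorrow 00:00" +%s)
now=$(date +%s)
echo "next daily backup window opens in $(( (next_daily - now) / 60 )) minutes"
echo "randomized delay adds up to 30 more minutes (RandomizedDelaySec=1800)"
```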

Create backup monitoring script

Build a monitoring script that checks backup health and sends alerts when backups fail or are missing. Save it as /opt/ollama-backup/scripts/backup-monitor.sh.

#!/bin/bash
# Ollama Backup Monitoring Script
# Checks backup health and sends alerts

set -euo pipefail

# Configuration
BACKUP_BASE="/opt/ollama-backup/backups"
LOG_FILE="/opt/ollama-backup/logs/monitor-$(date +%Y%m%d).log"
MAX_AGE_HOURS=25       # Daily backups should not be older than 25 hours
MIN_BACKUP_SIZE="100M" # Minimum expected backup size

# Logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

# Alert function (customize for your notification system)
alert() {
    local severity="$1"
    local message="$2"
    log "ALERT [$severity]: $message"
    # Send to syslog
    logger -t "ollama-backup" -p "daemon.$severity" "$message"
    # Optional: send email (requires mailutils)
    # echo "$message" | mail -s "Ollama Backup Alert" admin@example.com
    # Optional: send to a Slack/Discord webhook
    # curl -X POST -H 'Content-type: application/json' --data "{\"text\":\"$message\"}" YOUR_WEBHOOK_URL
}

# Check that the latest backup of a given type is recent enough
check_backup_freshness() {
    local backup_type="$1"
    local backup_dir="$BACKUP_BASE/$backup_type"

    if [[ ! -d "$backup_dir" ]]; then
        alert "error" "Backup directory missing: $backup_dir"
        return 1
    fi

    # Find most recent backup
    local latest_backup
    latest_backup=$(find "$backup_dir" -type d -name "[0-9]*" | sort | tail -1)
    if [[ -z "$latest_backup" ]]; then
        alert "error" "No backups found in $backup_dir"
        return 1
    fi

    # Check backup age
    local backup_age=$(( ($(date +%s) - $(stat -c %Y "$latest_backup")) / 3600 ))
    if [[ $backup_age -gt $MAX_AGE_HOURS ]]; then
        alert "warning" "Latest $backup_type backup is $backup_age hours old: $latest_backup"
        return 1
    fi

    log "$backup_type backup is current ($backup_age hours old)"
    return 0
}

# Check archive integrity and minimum size
check_backup_integrity() {
    local backup_dir="$1"
    local failed=false

    for backup_file in "$backup_dir"/*.tar.gz; do
        if [[ -f "$backup_file" ]]; then
            if ! pigz -t "$backup_file" >/dev/null 2>&1; then
                alert "error" "Corrupted backup file: $backup_file"
                failed=true
            fi
            # Check minimum size
            local size min_size
            size=$(stat -c%s "$backup_file")
            min_size=$(numfmt --from=iec "$MIN_BACKUP_SIZE")
            if [[ $size -lt $min_size ]]; then
                alert "warning" "Backup file smaller than expected: $backup_file ($(numfmt --to=iec $size))"
            fi
        fi
    done

    if [[ "$failed" == "false" ]]; then
        log "All backup files in $backup_dir passed integrity checks"
        return 0
    else
        return 1
    fi
}

# Check systemd timer status
check_service_status() {
    for service in ollama-backup-daily.timer ollama-backup-weekly.timer ollama-backup-monthly.timer; do
        if ! systemctl is-active "$service" >/dev/null 2>&1; then
            alert "warning" "Backup timer not active: $service"
        elif ! systemctl is-enabled "$service" >/dev/null 2>&1; then
            alert "warning" "Backup timer not enabled: $service"
        else
            log "Backup timer $service is active and enabled"
        fi
    done
}

# Check disk space on the backup partition
check_disk_space() {
    local backup_partition usage_percent
    backup_partition=$(df "$BACKUP_BASE" | tail -1)
    usage_percent=$(echo "$backup_partition" | awk '{print $5}' | sed 's/%//')
    if [[ $usage_percent -gt 90 ]]; then
        alert "error" "Backup disk usage critical: ${usage_percent}%"
    elif [[ $usage_percent -gt 80 ]]; then
        alert "warning" "Backup disk usage high: ${usage_percent}%"
    else
        log "Backup disk usage normal: ${usage_percent}%"
    fi
}

log "Starting backup monitoring check"

# Run all checks
overall_status=0
check_service_status
check_disk_space

# Check daily backups
if ! check_backup_freshness "daily"; then
    overall_status=1
else
    latest_daily=$(find "$BACKUP_BASE/daily" -type d -name "[0-9]*" | sort | tail -1)
    if [[ -n "$latest_daily" ]]; then
        check_backup_integrity "$latest_daily" || overall_status=1
    fi
fi

# Export summary metrics for node_exporter if available
if [[ -d "/var/lib/node_exporter/textfile_collector" ]]; then
    cat > "/var/lib/node_exporter/textfile_collector/ollama_backup_health.prom" << EOF
# HELP ollama_backup_health_status Overall backup system health (1=healthy, 0=issues)
# TYPE ollama_backup_health_status gauge
ollama_backup_health_status $((overall_status == 0 ? 1 : 0))
# HELP ollama_backup_disk_usage_percent Backup storage disk usage percentage
# TYPE ollama_backup_disk_usage_percent gauge
ollama_backup_disk_usage_percent $(df "$BACKUP_BASE" | tail -1 | awk '{print $5}' | sed 's/%//')
# HELP ollama_backup_monitoring_last_run_timestamp_seconds Last monitoring run timestamp
# TYPE ollama_backup_monitoring_last_run_timestamp_seconds gauge
ollama_backup_monitoring_last_run_timestamp_seconds $(date +%s)
EOF
fi

if [[ $overall_status -eq 0 ]]; then
    log "Backup monitoring check completed - all systems healthy"
else
    log "Backup monitoring check completed - issues detected"
fi

exit $overall_status

Enable and start the backup system

Reload systemd, enable the timers, and start the backup monitoring system.

sudo chmod 755 /opt/ollama-backup/scripts/backup-monitor.sh
sudo chown ollama-backup:ollama-backup /opt/ollama-backup/scripts/backup-monitor.sh
sudo systemctl daemon-reload
sudo systemctl enable ollama-backup-daily.timer
sudo systemctl enable ollama-backup-weekly.timer
sudo systemctl enable ollama-backup-monthly.timer
sudo systemctl start ollama-backup-daily.timer
sudo systemctl start ollama-backup-weekly.timer
sudo systemctl start ollama-backup-monthly.timer

Set up log rotation

Configure logrotate to manage backup logs and prevent disk space issues. Save the following as a drop-in, e.g. /etc/logrotate.d/ollama-backup.

# The scripts open a fresh, timestamped log file per run, so no postrotate signal is needed.
/opt/ollama-backup/logs/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    create 644 ollama-backup ollama-backup
}

Configure automated disaster recovery

Create disaster recovery documentation

Generate automated documentation for disaster recovery procedures and model restoration. Save the generator as /opt/ollama-backup/scripts/generate-recovery-docs.sh.

#!/bin/bash
# Generate disaster recovery documentation

DOCS_FILE="/opt/ollama-backup/DISASTER_RECOVERY.md"

# Header with values expanded at generation time (unquoted heredoc)
cat > "$DOCS_FILE" << EOF
# Ollama Disaster Recovery Guide

Generated: $(date)
EOF

# Body kept literal (quoted heredoc) so the embedded commands are not expanded here
cat >> "$DOCS_FILE" << 'EOF'

## Quick Recovery Commands

### List Available Backups

    # Show recent backups
    find /opt/ollama-backup/backups -name "*.tar.gz" -mtime -7 | sort

    # Show backup sizes and dates
    du -sh /opt/ollama-backup/backups/*/* | sort -k2

### Emergency Model Restoration

    # Stop Ollama
    sudo systemctl stop ollama

    # Restore from most recent daily backup
    latest_backup=$(find /opt/ollama-backup/backups/daily -type d -name "[0-9]*" | sort | tail -1)
    sudo -u ollama-backup /opt/ollama-backup/scripts/ollama-restore.sh "$latest_backup"

    # Start Ollama
    sudo systemctl start ollama

    # Verify models
    ollama list

### Backup System Health Check

    # Check backup timers
    sudo systemctl status ollama-backup-daily.timer
    sudo systemctl list-timers 'ollama-backup*'

    # Run health check
    sudo -u ollama-backup /opt/ollama-backup/scripts/backup-monitor.sh

    # View recent backup logs
    tail -f /opt/ollama-backup/logs/backup-*.log

### Manual Backup Creation

    # Create immediate backup
    sudo systemctl start ollama-backup@emergency.service

    # Check backup status
    sudo systemctl status ollama-backup@emergency.service

## Recovery Scenarios

### Scenario 1: Individual Model Corruption

1. Identify corrupted model: ollama list
2. Remove corrupted model: ollama rm model_name
3. Restore from backup or re-download: ollama pull model_name

### Scenario 2: Complete Ollama Data Loss

1. Stop Ollama service: sudo systemctl stop ollama
2. Restore from latest backup using the restore script
3. Verify permissions on restored files
4. Start Ollama service: sudo systemctl start ollama
5. Test model availability: ollama list

### Scenario 3: Backup System Failure

1. Check systemd timer status
2. Review backup logs for errors
3. Test manual backup execution
4. Verify disk space and permissions
5. Restart backup timers if necessary
EOF

# Footer with expanded values
cat >> "$DOCS_FILE" << EOF

## Contact Information

System: $(hostname)
Backup Location: /opt/ollama-backup
Last Updated: $(date)
EOF

echo "Disaster recovery documentation generated: $DOCS_FILE"
chown ollama-backup:ollama-backup "$DOCS_FILE"

Test the backup and restore system

Perform a complete test of the backup and restore functionality to ensure everything works correctly.

sudo chmod 755 /opt/ollama-backup/scripts/generate-recovery-docs.sh
sudo -u ollama-backup /opt/ollama-backup/scripts/generate-recovery-docs.sh

Test manual backup

sudo systemctl start ollama-backup@test.service
sudo systemctl status ollama-backup@test.service

Verify backup was created

ls -la /opt/ollama-backup/backups/test/

Test monitoring script

sudo -u ollama-backup /opt/ollama-backup/scripts/backup-monitor.sh

Verify your setup

# Check timer status
sudo systemctl list-timers ollama-backup*

Check backup service status

sudo systemctl status ollama-backup@daily.service

List recent backups

find /opt/ollama-backup/backups -name "*.tar.gz" -mtime -1 | head -5

Check backup logs

tail -10 /opt/ollama-backup/logs/backup-*.log

Verify backup monitoring

sudo -u ollama-backup /opt/ollama-backup/scripts/backup-monitor.sh

Check Prometheus metrics (if available)

cat /var/lib/node_exporter/textfile_collector/ollama_backup*.prom 2>/dev/null || echo "Metrics not available"

Test restore functionality (dry run)

latest_backup=$(find /opt/ollama-backup/backups -type d -name "[0-9]*" | sort | tail -1)
echo "Latest backup available: $latest_backup"

Integration with monitoring systems

The backup system integrates with Prometheus and Grafana for automated infrastructure oversight through exported metrics. You can also integrate with Docker Compose monitoring stacks if you're running Ollama in containers.

Note: The backup scripts automatically export metrics to Prometheus node_exporter textfile collector if available. This enables monitoring backup health, timing, and storage usage through your existing monitoring infrastructure.
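As a sketch of what an alerting rule would key on, here is the same staleness check done directly against a textfile-collector .prom file in shell. A sample file is generated under a temp path here; on a live host, point PROM_FILE at the real /var/lib/node_exporter/textfile_collector/ollama_backup.prom:

```shell
# Generate a sample metrics file, then compute the age of the last backup from it.
PROM_FILE=$(mktemp)
printf 'ollama_backup_last_success_timestamp_seconds{type="daily"} %s\n' "$(date +%s)" > "$PROM_FILE"

last=$(awk '$1 ~ /^ollama_backup_last_success_timestamp_seconds/ {print $2}' "$PROM_FILE")
age=$(( $(date +%s) - last ))

# 90000 seconds = 25 hours, matching MAX_AGE_HOURS in the monitoring script
if [ "$age" -gt 90000 ]; then
    echo "backup stale (${age}s old)"
else
    echo "backup fresh (${age}s old)"
fi
```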

Common issues

Symptom | Cause | Fix
--- | --- | ---
Permission denied during backup | ollama-backup user can't read Ollama data | sudo usermod -a -G ollama ollama-backup
Backup files are corrupted | Disk space issues or interrupted backup | Check disk space and re-run the backup manually
Timers not running | systemd timers not enabled | sudo systemctl enable --now ollama-backup-daily.timer
Large backup sizes | No compression or multiple copies | Check compression settings and clean up old backups
Restore fails with ownership errors | Incorrect file permissions after restore | Run the restore as the backup user and fix ownership manually
Models missing after restore | Wrong Ollama data directory restored | Verify the Ollama installation location and restore the correct path
Backup monitoring alerts | Failed or stale backups | Check systemd service logs and disk space

Next steps

Running this in production? Setting up automated backups once is straightforward; keeping them monitored, tested, verified, and integrated with your disaster recovery procedures across environments is the harder part. Our managed platform covers monitoring, backups, and 24/7 response by default.

