The thing about backing up an AI agent is that its data isn't just a database and isn't just a folder of files; it's a mix of both, plus things like the persona file that took you weeks to refine and that you really, really don't want to redo. Hermes spreads its state across half a dozen places under ~/.hermes, each with slightly different rules about what's safe to copy and when. Get the rules right and your backups are small and correct. Get them wrong and you'll have a tarball that looks fine but won't restore cleanly.
This guide covers the boring-but-correct way to back up a Hermes install: what to copy and why, when to use SQLite-aware tooling for the state DB instead of cp, how to encrypt the archive so credentials don't leak, and how to set up a systemd-timer schedule that runs without you babysitting it.
What's in ~/.hermes and what each thing is
Quick tour of the files that matter, in approximately decreasing order of "would it ruin my day to lose this".
SOUL.md at the root is your persona file. Hand-written, hard to recreate. Backup-priority high.
memories/MEMORY.md and memories/USER.md hold the persistent facts the agent loads into every session. Append-only over months of use, contains a lot of context you've shared with the agent. Backup-priority high.
skills/ contains every skill, both the ones you wrote and the ones Hermes generated via the learning loop. The auto-generated ones are recoverable in theory but you'd lose all the situational knowledge baked in. Backup-priority high.
state.db is the SQLite database with your conversation history. Contains every session, every message, the full-text search index. Big and grows over time. Backup-priority medium-high; you can survive losing it (the agent will just have shorter memory of recent sessions) but you'd miss it.
.env contains API keys and bot tokens. Sensitive. Recoverable from your password manager if you lost it, but having the live file backed up is convenient. Backup-priority high but encrypt the archive.
config.yaml is the non-secret configuration. Backup-priority medium.
migration/ is snapshots from past hermes claw migrate runs (if you ran one). You don't need these in your day-to-day backups; if disk space matters, exclude them.
hermes-agent/ is the code clone. Don't back it up. The installer recreates it on restore.
hermes-agent/.venv/ is the Python virtualenv. Don't back this up either; same reason.
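Before deciding what to exclude, it's worth a quick look at what's actually on disk; a one-liner like this (assuming the layout above) shows where the bytes are:

```shell
# Sanity-check what lives under ~/.hermes and how big each piece is.
# Helps confirm the excludes below cover the bulky, recreatable bits.
du -sh ~/.hermes/* 2>/dev/null | sort -rh
```

The code clone and its virtualenv usually dominate; both are recreated by the installer, which is why they're safe to skip.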
The minimum-effort tarball
For a personal install, this one-liner produces a recoverable backup:
tar -czf hermes-backup-$(date +%Y%m%d).tar.gz \
--exclude='hermes-agent' \
--exclude='migration' \
-C ~ .hermes
Excluding the code clone and the migration archives keeps the tarball small. A typical personal install with six months of history compresses to between 30 and 200 MB depending on session volume.
Two problems with this. First, if Hermes is running while you tar the directory, state.db and its WAL sidecar can be captured mid-write, leaving the copy inconsistent. The tarball will probably restore, but it might lose the last few seconds of conversation, and in the worst case the copied database won't open at all. Second, the tarball is unencrypted; if it leaks, your bot tokens leak with it.
Both fixes are below.
SQLite-safe state.db backup
SQLite supports an online backup API that produces a consistent copy even while the database is being written to. The standard sqlite3 CLI exposes this:
sqlite3 ~/.hermes/state.db ".backup '/tmp/state-snapshot.db'"
This produces a snapshot file in /tmp that's a fully consistent copy as of the moment the backup finished. Use that snapshot in your tarball instead of copying state.db directly:
SNAP=$(mktemp /tmp/hermes-state-XXXXXX.db)
sqlite3 ~/.hermes/state.db ".backup '$SNAP'"
tar -czf hermes-backup-$(date +%Y%m%d).tar.gz \
--exclude='hermes-agent' \
--exclude='migration' \
--exclude='state.db' \
--exclude='state.db-wal' \
--exclude='state.db-shm' \
--transform "s|${SNAP#/}|.hermes/state.db|" \
-C ~ .hermes "$SNAP"
rm -f "$SNAP"
Three things to note. First, the snapshot goes into the tarball under a renamed path (--transform) so it lands at .hermes/state.db on restore; the transform matches ${SNAP#/}, the snapshot path without its leading slash, because tar strips that slash from member names before applying the transform. Second, we explicitly exclude the live state.db and its WAL/SHM sidecars from the tarball; we want the consistent snapshot, not the live files. Third, we clean up the snapshot afterwards.
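Before the snapshot goes anywhere, it's cheap to confirm it's a sound database; this check slots between the .backup step and the final rm -f in the sequence above:

```shell
# SQLite's built-in consistency check scans every page of the snapshot
# and prints "ok" for a healthy database.
sqlite3 "$SNAP" "PRAGMA integrity_check;"
```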
Encrypting the archive
For credentials, an unencrypted backup is a leak waiting to happen. Two patterns work.
Symmetric encryption with GPG, simple and self-contained:
tar -czf - \
--exclude='hermes-agent' \
--exclude='migration' \
-C ~ .hermes \
| gpg --symmetric --cipher-algo AES256 \
--output hermes-backup-$(date +%Y%m%d).tar.gz.gpg
GPG prompts for a passphrase. Use a strong one; you'll need it to restore. Store the passphrase somewhere recoverable (password manager, sealed envelope, you know the drill).
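To confirm an archive is actually readable without restoring it, decrypt to stdout and have tar list the members; nothing touches the disk. The dated filename here is an example following the pattern above:

```shell
# Decrypt to stdout and list member names only; gpg prompts for the
# passphrase, tar -t lists without extracting.
gpg --decrypt hermes-backup-20260505.tar.gz.gpg | tar -tzf - | head
```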
For automation (scheduled backups where you can't type a passphrase), the better pattern is an asymmetric key. Generate an encryption-capable keypair once on a separate trusted machine; the default key type includes an encryption subkey, whereas a sign-only key won't work here:
gpg --quick-generate-key '[email protected]' default default 0
gpg --armor --export '[email protected]' > backup-pubkey.asc
Copy backup-pubkey.asc to your VPS and import it into the keyring of whichever user will run the backups (root, if you use the script below):
sudo gpg --import backup-pubkey.asc
Now the VPS can encrypt to that key without needing the private key (which lives on your trusted machine):
tar -czf - \
--exclude='hermes-agent' \
--exclude='migration' \
-C ~ .hermes \
| gpg --encrypt --recipient '[email protected]' --trust-model always \
--output hermes-backup-$(date +%Y%m%d).tar.gz.gpg
To restore, copy the encrypted backup to the trusted machine, decrypt with the private key, then untar. The advantage: even if your VPS is compromised, the attacker can encrypt fake backups to your key but they can't decrypt existing ones.
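Once the public key is imported on the VPS, a one-line check proves it's encryption-capable before the script depends on it; no private key is required, and the address matches the example above:

```shell
# Encrypt a throwaway string to the imported key; success means the
# key has a usable encryption (sub)key.
echo test | gpg --encrypt --recipient '[email protected]' \
  --trust-model always --output /dev/null && echo "key usable"
```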
The full backup script
Putting it together, save as /usr/local/bin/hermes-backup.sh:
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/var/backups/hermes"
RECIPIENT="[email protected]"
HERMES_USER="youruser"
HERMES_HOME="/home/$HERMES_USER/.hermes"
DATE=$(date +%Y%m%d-%H%M%S)
SNAP=$(sudo -u "$HERMES_USER" mktemp /tmp/hermes-state-XXXXXX.db)
# Consistent state.db snapshot; mktemp runs as the Hermes user so
# sqlite3 (also running as that user) can write to the file
sudo -u "$HERMES_USER" sqlite3 "$HERMES_HOME/state.db" ".backup '$SNAP'"
sudo chown root:root "$SNAP"
# Tar everything we want, encrypt with GPG
tar -czf - \
--exclude='hermes-agent' \
--exclude='migration' \
--exclude='state.db' \
--exclude='state.db-wal' \
--exclude='state.db-shm' \
--transform "s|${SNAP#/}|.hermes/state.db|" \
-C "$(dirname "$HERMES_HOME")" "$(basename "$HERMES_HOME")" "$SNAP" \
| gpg --encrypt --recipient "$RECIPIENT" --trust-model always \
--output "$BACKUP_DIR/hermes-$DATE.tar.gz.gpg"
# Clean up
rm -f "$SNAP"
# Retain the last 14 daily backups
ls -t "$BACKUP_DIR"/hermes-*.tar.gz.gpg | tail -n +15 | xargs -r rm -f
echo "Backup complete: $BACKUP_DIR/hermes-$DATE.tar.gz.gpg"
Make it executable, create the backup dir:
sudo chmod +x /usr/local/bin/hermes-backup.sh
sudo mkdir -p /var/backups/hermes
sudo chown root:root /var/backups/hermes
sudo chmod 700 /var/backups/hermes
Schedule it nightly with a systemd timer, which beats crontab here on two counts: failures land in the journal, and Persistent=true runs a missed job at the next boot. Save the service as /etc/systemd/system/hermes-backup.service:
[Unit]
Description=Hermes Agent backup
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/hermes-backup.sh
And the timer at /etc/systemd/system/hermes-backup.timer:
[Unit]
Description=Run Hermes Agent backup nightly
[Timer]
OnCalendar=*-*-* 03:30:00
Persistent=true
[Install]
WantedBy=timers.target
Enable:
sudo systemctl daemon-reload
sudo systemctl enable --now hermes-backup.timer
systemctl list-timers | grep hermes
You should see the next scheduled run. Test the service immediately with sudo systemctl start hermes-backup.service, then check the backup directory; if a fresh encrypted file is there, the schedule is wired up correctly. If not, journalctl -u hermes-backup.service shows what went wrong.
Off-server retention
Backups on the same VPS as the data are fine for "I deleted a file by mistake" scenarios. They don't help if the whole VPS dies. Push backups off-host.
Three patterns I see used:
rsync to a separate server, gated by SSH keys with a restricted command. The classic pattern. rsync -avz /var/backups/hermes/ [email protected]:/srv/hermes-backups/ tacked onto the end of the backup script.
S3-compatible object storage. AWS S3, Cloudflare R2, Backblaze B2 all work; aws s3 cp, rclone copy or mc cp handle the upload. Encrypt before upload (you already do this with GPG) and pick a bucket with versioning enabled so you can recover from a malicious delete.
A LumaDock secondary VPS or a separate dedicated server. If you already have your primary on us, a cheap second box in a different zone gives you geographic redundancy with low latency between the two and no egress charges on the transfer.
Restore drill
Backups you've never restored are wishes, not backups. Once a quarter, restore from your latest backup to a scratch directory and verify the contents. Don't skip this.
mkdir -p /tmp/hermes-restore
cd /tmp/hermes-restore
gpg --decrypt /var/backups/hermes/hermes-20260505-033000.tar.gz.gpg | tar -xzf -
ls -la .hermes/
# Open SOUL.md to make sure it's there and readable
cat .hermes/SOUL.md
# Check state.db opens cleanly
sqlite3 .hermes/state.db ".tables"
If the tarball decrypts, untars and the state.db opens with all the expected tables, your backup is good. If anything fails (decryption fails, the tarball is corrupt, sqlite3 throws "database disk image is malformed"), fix it now while you have time, not at 3 AM during a real recovery.
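When the drill runs on the box that hosts the live install, a spot-check of the static files against the live copies catches silent truncation; state.db is skipped because the snapshot legitimately trails the live database:

```shell
# diff -q prints nothing when the files match byte-for-byte.
diff -q ~/.hermes/SOUL.md /tmp/hermes-restore/.hermes/SOUL.md
diff -q ~/.hermes/memories/MEMORY.md \
  /tmp/hermes-restore/.hermes/memories/MEMORY.md
```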
Restoring to a fresh server
The full restore flow on a new VPS, assuming you've already followed the install guide:
# Stop hermes if it's running
sudo systemctl stop hermes 2>/dev/null || true
# Move existing fresh-install state out of the way
mv ~/.hermes ~/.hermes.fresh
# Decrypt and extract the backup
gpg --decrypt hermes-20260505.tar.gz.gpg | tar -xzf - -C ~
# Confirm the data is there
ls -la ~/.hermes/
# Start hermes
sudo systemctl start hermes
journalctl -u hermes -f
The agent should come up with all your old memory and skills intact. The first conversation should feel like the agent remembers you. If it doesn't, check the logs for skill-load failures or memory-loading errors; the most common post-restore issue is permission mismatches if the backup was made under one user and restored under a different one.
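If the logs point at permission errors, the usual fix is to re-own the restored tree to the account Hermes runs as; youruser here is a placeholder for that account:

```shell
# Re-own the restored state to the service user (placeholder name),
# and keep credentials readable by that user only.
sudo chown -R youruser:youruser /home/youruser/.hermes
sudo chmod 600 /home/youruser/.hermes/.env
```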
What backups don't replace
One thing worth being clear about: backups protect against data loss, not against bad agent decisions. If you accidentally let your agent rewrite half your codebase or send weird messages from your bot, restoring last night's backup gets you the persona and memory back, but it won't undo the messages it sent or the code it changed. For protection against that class of mistake, the production hardening guide covers approval prompts and command allowlists, which prevent the bad action from happening in the first place.
The shortcut on LumaDock
The LumaDock Hermes Agent VPS includes nightly snapshot backups of the entire VPS image as part of the standard plan, kept for seven days. That's not a substitute for the GPG-encrypted application-level backups described above (you don't want to restore a whole VPS just to get a single skill back), but it's a useful additional layer for the "the disk died and I need a clean copy of last night" scenario.

