When I first ran n8n in production, the database looked healthy for a few weeks. Then Postgres started eating disk space like crazy. The culprit? Execution logs. Every workflow run, every failed webhook, every retry was saved: thousands, then millions of rows. n8n is reliable, but it never forgets unless you tell it to.
Let’s go through how executions are stored, what you risk by deleting them blindly, and the best ways to prune while still keeping enough history for debugging and audits.
How n8n stores executions
By default, executions are written into the execution_entity table in your database. Each entry contains:
- Workflow ID and version
- Start and stop timestamps
- Status (success, error, waiting, and so on)
- Execution JSON with node inputs and outputs
- Error details when something broke
For small setups this is fine. But once you process thousands of tasks a day, execution tables grow fast. It’s not unusual for Postgres to hit tens of gigabytes in a matter of weeks.
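To see where your own instance stands, a quick check in psql (Postgres; the table name matches the default n8n schema) shows both the row count and the on-disk size:

SELECT count(*) AS execution_rows,
       pg_size_pretty(pg_total_relation_size('execution_entity')) AS table_size
FROM execution_entity;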
If you’re on SQLite (not recommended for production, see the PostgreSQL vs SQLite trade-offs), this growth can even corrupt or lock the database.
Why pruning matters
Large execution tables cause problems:
- Slow queries: The editor’s “Executions” view starts crawling.
- Disk pressure: NVMe fills, snapshots balloon in size.
- Backup headaches: Dumps take longer and restores become painful.
- Retention risk: Sensitive data from years ago may stick around when it shouldn’t.
At the same time, you don’t want to nuke everything. Old execution data is invaluable for debugging workflows, investigating customer issues, and proving that a job ran when it should.
Pruning strategies
I usually frame pruning around three questions:
- How long do I really need full execution data?
  - For debugging: maybe 7–30 days.
  - For compliance: sometimes 90+ days.
- Do I need full JSON or just a summary?
  - Full payloads are huge.
  - A lightweight log (success/failure, timestamp, workflow ID) might be enough long-term.
- How much disk budget do I have? (There's a quick sizing query after this list.)
  - If you're on a small VPS, you may need to prune aggressively.
  - If you run on a larger node with terabytes of storage, keep more history.
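To put real numbers behind that last question, it helps to see how fast executions are actually piling up. A quick Postgres sketch (column names match the default n8n schema; adjust if yours differs):

-- Executions per day over the last two weeks
SELECT date_trunc('day', "startedAt") AS day, count(*) AS executions
FROM execution_entity
WHERE "startedAt" > NOW() - INTERVAL '14 days'
GROUP BY 1
ORDER BY 1;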
Built-in pruning in n8n
n8n actually has a simple retention config baked in. You can set environment variables like:
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EXECUTIONS_DATA_PRUNE_MAX_COUNT=1000
- EXECUTIONS_DATA_PRUNE=true: turns on pruning
- EXECUTIONS_DATA_MAX_AGE=168: keep executions for 168 hours (7 days)
- EXECUTIONS_DATA_PRUNE_MAX_COUNT=1000: cap total stored executions
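How you pass these depends on how you run n8n. For a plain Docker install, one option is to set them on the container; a sketch, with the image, port, and volume adjusted to your own setup:

docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=168 \
  -e EXECUTIONS_DATA_PRUNE_MAX_COUNT=1000 \
  docker.n8n.io/n8nio/n8n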
This works fine for small installs. But it’s blunt: once the age or count threshold is reached, executions are gone forever.
Smarter pruning with Postgres policies
On a serious deployment, I prefer to handle pruning in Postgres itself. That way I can:
- Keep a rolling window of recent runs
- Archive older summaries to a lightweight table
- Enforce retention that matches business rules
Example: Move summaries to a separate table
-- Create summary table
CREATE TABLE execution_summary AS
SELECT id, "workflowId", "status", "startedAt", "stoppedAt"
FROM execution_entity
WHERE false;
-- Copy old data into the summary table, then delete it from the main table.
-- Wrapping both in one transaction means NOW() resolves to the same cutoff
-- for the INSERT and the DELETE.
BEGIN;

INSERT INTO execution_summary
SELECT id, "workflowId", "status", "startedAt", "stoppedAt"
FROM execution_entity
WHERE "startedAt" < NOW() - INTERVAL '30 days';

DELETE FROM execution_entity
WHERE "startedAt" < NOW() - INTERVAL '30 days';

COMMIT;
This way you still have an audit trail without carrying gigabytes of payload JSON.
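One practical note: on a table that has grown to tens of gigabytes, the date-based DELETE above has to scan everything if "startedAt" isn't indexed. Adding an index first usually pays off; check whether your n8n version already ships one before creating it (the index name here is just a suggestion):

-- Speeds up date-based pruning and archiving queries
CREATE INDEX IF NOT EXISTS idx_execution_entity_started_at
ON execution_entity ("startedAt");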
Example: Scheduled cleanup with cron
#!/bin/bash
# Nightly cleanup: delete n8n executions older than 14 days
set -euo pipefail

psql -U n8n -d n8n -c "
  DELETE FROM execution_entity
  WHERE \"startedAt\" < NOW() - INTERVAL '14 days';
"
Run it nightly with cron.
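Assuming the script is saved somewhere like /usr/local/bin/prune-n8n-executions.sh (the path is only an example) and marked executable, the crontab entry could look like this:

# Run every night at 03:30 and append psql's output to a log
30 3 * * * /usr/local/bin/prune-n8n-executions.sh >> /var/log/n8n-prune.log 2>&1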
Reducing execution volume before pruning
Sometimes pruning is just hiding the symptom. You can often reduce the flood of executions:
- Turn off save data: In workflow settings, uncheck “Save successful executions” unless you really need them.
- Prune error detail: Keep only failed runs; drop successful ones.
- Use lightweight logs: Send execution metrics to Prometheus instead of saving every JSON blob.
This cuts database load dramatically and makes pruning less of a chore.
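If you'd rather enforce this instance-wide instead of per workflow, n8n also exposes environment variables for execution saving. The names below come from the n8n docs, but double-check them against the docs for your version:

EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false

Individual workflows can still override these defaults in their own settings when you need full history for a critical flow.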
Monitoring database health
Keep an eye on:
- Size of the execution_entity table (\dt+ execution_entity in psql).
- Autovacuum stats: Postgres needs vacuuming to reclaim space.
- Disk usage at the VPS level with df -h.
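For the first two points, Postgres's statistics views give a quick picture in a single query:

SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'execution_entity';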
If you’re on a multi-node setup, also watch Redis queue depth. Stale jobs there can masquerade as DB bloat. The monitoring and troubleshooting guide covers more on spotting these issues.
Testing your pruning policy
Never assume it works — run a test:
- Backup the database (always first step).
- Run your prune script manually.
- Open the n8n UI and confirm you see recent runs.
- Check that older runs are gone but summaries remain (if you use the summary table pattern).
Do this on staging before touching production.
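If you follow the summary-table pattern, steps 3 and 4 are easy to verify from psql; the 30-day cutoff below is just the example retention used earlier:

-- Should return 0 once pruning has run
SELECT count(*) FROM execution_entity
WHERE "startedAt" < NOW() - INTERVAL '30 days';

-- Summaries should still cover the pruned period
SELECT count(*) FROM execution_summary
WHERE "startedAt" < NOW() - INTERVAL '30 days';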
FAQ
How often should I prune executions?
For small workloads, weekly pruning is enough. For busy production, I prune daily.
Can I keep failed runs but drop successful ones?
Yes. Use workflow settings or custom SQL to only retain failures.
Will pruning improve n8n performance?
Yes. Smaller tables mean faster queries, backups and restores.
Is there a way to archive executions instead of deleting?
Yes. Copy key fields into a summary table or export JSON to S3 before deletion.
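As a rough sketch of the export route, psql's \copy can dump the key fields to a file that you then ship to S3 with whatever tooling you already use (the output path is just an example, and where the full payload JSON lives depends on your n8n version):

\copy (SELECT id, "workflowId", "status", "startedAt", "stoppedAt" FROM execution_entity WHERE "startedAt" < NOW() - INTERVAL '30 days') TO '/backups/n8n-executions-archive.csv' CSV HEADER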
What happens if I don’t prune at all?
Postgres will bloat, your VPS disk will fill, and n8n will slow down or crash.