
How to migrate from Heroku to Coolify

Heroku ran the table on developer experience for a decade and then quietly stopped trying. Free dynos went away, eco dynos got more expensive, the add-on marketplace shrank and Postgres moved into a higher pricing bracket. None of that is a scandal on its own, but if you've been on Heroku since 2018, your monthly bill has probably tripled while the product gained few new features.

Coolify is the closest like-for-like swap. Git-based deploys, automatic SSL through Let's Encrypt, environment variable management, app and database lifecycle, all running on a flat-rate VPS instead of metered dyno-hours. The mental model is almost identical, just with you in charge of the underlying machine.

This guide walks through a real Heroku-to-Coolify migration, from "I have a working Heroku app" to "DNS is cut over and the old app can be deleted." It covers the architecture mapping, Postgres dump and restore, environment variable transfer, deploy verification and DNS cutover. Written against Coolify v4.0.0 stable.

What stays the same and what changes

Most of the day-to-day Heroku workflow ports over directly. You still push to a Git branch and watch a deploy happen. You still manage env vars from a dashboard. You still get HTTPS without thinking about it. You still get separate ephemeral containers for builds and runtime.

What changes is the abstraction layer. On Heroku, you didn't have to know that a dyno is just a containerized process. On Coolify, the container is more visible. You'll occasionally look at docker ps output, you'll restart containers when something gets stuck and you'll think about server resources because the server is yours.

You also lose three things that Heroku does for free. Automatic horizontal scaling across multiple dynos (Coolify can do it but it's manual). Heroku Connect for Salesforce sync (no equivalent, you'd build it yourself). Heroku Postgres dataclips and managed follower databases (Coolify gives you Postgres but the management surface is thinner). For most apps these aren't dealbreakers. For a few specific use cases they matter, so check your dependency on them before committing.

Map your Heroku architecture to Coolify

Before you start clicking around, sketch out what you currently have on Heroku and what it'll become in Coolify. The mapping is mostly straightforward.

Dynos become containers

A web dyno is a container that listens on the port Coolify tells it to. A worker dyno is a container that runs a long-lived process (Sidekiq, Celery, BullMQ, whatever you use). A release phase dyno is a one-shot pre-deployment command. Coolify supports all three patterns.

The web dyno is your default Coolify application. The worker dynos become additional applications in the same project, each pointing at the same Git repo but with a different start command. The release phase becomes the Pre-deployment Command in your application settings.
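As a concrete sketch, here is a hypothetical three-process Procfile and where each line ends up in Coolify (the commands are illustrative, not from any specific app):

```
# Hypothetical Heroku Procfile
web: bundle exec puma -C config/puma.rb     # -> the main Coolify application (start command)
worker: bundle exec sidekiq                 # -> a second application, same repo, different start command
release: bundle exec rails db:migrate       # -> the Pre-deployment Command on the web application
```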

Add-ons become Coolify resources or external services

Heroku Postgres becomes a Coolify Postgres database resource. Heroku Redis becomes a Coolify Redis resource. Heroku Scheduler becomes Coolify's Scheduled Tasks feature inside the application. Papertrail and Logtail become whatever you wire up yourself, often a self-hosted Dozzle or Loki or you keep paying Papertrail and just point it at your Coolify app.

For email-sending add-ons (SendGrid, Mailgun, Postmark) the simplest path is to keep the third-party service and just move your API keys into Coolify's environment variables. There's no benefit to self-hosting email sending and a significant cost in deliverability.

Config Vars become environment variables

This is the smoothest transition. Heroku's heroku config -s command outputs your config vars in KEY=value format which is exactly what Coolify accepts as a bulk env import. We'll use this in a moment.

Procfile becomes a start command or a Dockerfile entrypoint

If your Heroku Procfile says web: npm start, the Coolify Nixpacks build pack does the equivalent automatically. If it says web: bundle exec rails server -p $PORT, you'll set that as the start command in Coolify's General tab and replace $PORT with the port Coolify exposes (usually you don't need to set anything, the build pack handles it).

For multi-process Procfiles (web + worker + scheduler), each line becomes a separate Coolify resource. We'll cover this when we get to workers.

Step 1, prepare on Heroku before you touch Coolify

Don't shut down anything yet. While the Heroku app is still running and serving traffic, capture the things you'll need on the other side.

Take a Postgres backup

From your local machine with the Heroku CLI installed:

heroku pg:backups:capture --app your-heroku-app
heroku pg:backups:download --app your-heroku-app

This creates a fresh backup and downloads it as latest.dump. The file format is Postgres custom format (compressed, restorable with pg_restore). Keep this file, you'll restore from it once Coolify Postgres is ready. If your database is multi-gigabyte, the download takes a while, do it on a fast connection.

Export config vars

Same machine:

heroku config -s --app your-heroku-app > heroku-env.txt

Open heroku-env.txt and review. You'll have things like DATABASE_URL, REDIS_URL, possibly add-on credentials. Strip any that point at Heroku-specific resources (you'll regenerate DATABASE_URL for the Coolify Postgres in a moment) and keep everything else. The file is now your source of truth for the env vars you'll paste into Coolify.
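If the env file is long, the review is easy to script. A minimal sketch that drops the Heroku-managed keys before you paste the rest into Coolify (the set of keys to strip is an assumption, adjust it for your own add-ons):

```python
# Sketch: filter heroku-env.txt before pasting into Coolify's Bulk Edit.
# HEROKU_MANAGED is an assumption -- add any other add-on-provisioned keys you find.
HEROKU_MANAGED = {"DATABASE_URL", "REDIS_URL", "HEROKU_APP_NAME", "HEROKU_SLUG_COMMIT"}

def filter_env(text: str) -> str:
    kept = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blanks and comments
        key = line.split("=", 1)[0].strip()
        if key not in HEROKU_MANAGED:
            kept.append(line)
    return "\n".join(kept)

raw = "DATABASE_URL='postgres://...'\nSECRET_KEY_BASE='abc123'\nREDIS_URL='redis://...'"
print(filter_env(raw))  # -> SECRET_KEY_BASE='abc123'
```

Everything the function keeps goes into Coolify as-is; the stripped keys get regenerated in Step 4.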

Identify your buildpack and runtime

Run heroku buildpacks --app your-heroku-app to see what's installed. Most apps have one buildpack (heroku/nodejs, heroku/ruby, heroku/python, etc.) and Coolify's Nixpacks build pack handles all of these out of the box.

If you have multiple buildpacks (a Node.js app that also needs Python for some scripts, for example), you'll likely want to switch to a Dockerfile in Coolify so you have full control over what's installed in the build environment. The Heroku buildpack composition doesn't translate directly.

Note your runtime version too. Check package.json's engines field, .python-version, .ruby-version or runtime.txt. Coolify Nixpacks reads these the same way Heroku does, so you usually don't need to change anything.
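For a Node app, for example, the pin both platforms read is the engines field in package.json (the version shown is just an example):

```json
{
  "engines": {
    "node": "20.x"
  }
}
```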

Step 2, set up Coolify and connect your repo

If you don't already have Coolify running, the LumaDock Coolify VPS ships with v4.0.0 pre-installed on Debian 12, so the dashboard is reachable on port 8000 immediately. The getting started guide covers initial setup, securing the dashboard and connecting the localhost server.

Once Coolify is reachable, set up a Git source. Click Sources in the left sidebar, then Add, then pick GitHub App (recommended) or GitLab or whichever provider hosts your repo. Authorize the app on the org or account that owns the repo.

Now create a project (call it the same thing you called the Heroku app, for sanity) and inside it click Add a new resource, then Private Repository (with GitHub App). Pick the repo, pick the branch and Coolify will scan and detect the framework. Don't deploy yet, we still need to set up the database and env vars first.

Step 3, recreate the database in Coolify and restore

Inside the same project, click Add a new resource again, pick Database, then PostgreSQL. Set a name (something like postgres-primary), pick the version that matches your Heroku Postgres version (run heroku pg:info --app your-heroku-app to check), accept the auto-generated credentials and click Start.

The database container starts, gets a name on the Docker network and is reachable internally as postgres-primary (or whatever you named it) on port 5432. Internal connections from your future application container will use this hostname.

To restore your dump, you have two options. The convenient option is to expose the database temporarily on a public port from Coolify's database settings, restore over the public port from your local machine, then close the port again. The cleaner option is to copy the dump file onto the VPS and restore from there.

For the cleaner approach, SCP your dump file onto the server:

scp latest.dump root@your-vps-ip:/tmp/

Then SSH in and run pg_restore from inside a temporary container connected to the same Docker network as your Coolify Postgres:

docker run --rm -it \
  --network coolify \
  -v /tmp/latest.dump:/tmp/latest.dump \
  postgres:16 \
  pg_restore -h postgres-primary -U your-db-user -d your-db-name --no-owner --no-acl /tmp/latest.dump

You'll be prompted for the password (the one Coolify generated when you created the database). The --no-owner and --no-acl flags strip Heroku-specific role information that won't apply on your new Postgres, which prevents a wall of harmless-looking errors during restore.

For a sanity check, count rows on a known table after restore. If the numbers match what was on Heroku, the dump went through cleanly. If you have a Redis add-on too, you usually don't need to migrate it. Redis is typically a cache, not a source of truth, so spin up a fresh Coolify Redis resource the same way and let your app populate it as it runs.
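The row-count check can be run with the same temporary-container trick as the restore. Table, user and database names below are placeholders from this guide's examples:

```
docker run --rm -it --network coolify postgres:16 \
  psql -h postgres-primary -U your-db-user -d your-db-name \
  -c "SELECT count(*) FROM users;"
```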

Step 4, recreate environment variables in Coolify

Open your application in Coolify, go to the Environment Variables tab and click Bulk Edit. Paste the contents of your heroku-env.txt file. Before saving, do three things.

Replace DATABASE_URL with the new internal URL pointing at your Coolify Postgres. The format is postgresql://user:password@postgres-primary:5432/dbname. Use the database resource name as the hostname, not localhost.
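One subtlety: auto-generated passwords can contain characters (@, /, :) that break a connection URL unless percent-encoded. A small sketch for assembling the internal URL safely, using the example names from this guide:

```python
from urllib.parse import quote

def build_database_url(user: str, password: str, host: str, db: str, port: int = 5432) -> str:
    # Percent-encode credentials so symbols like @ or / don't corrupt the URL.
    return f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}/{db}"

print(build_database_url("appuser", "p@ss/word", "postgres-primary", "appdb"))
# -> postgresql://appuser:p%40ss%2Fword@postgres-primary:5432/appdb
```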

Replace REDIS_URL the same way, with the Coolify Redis resource hostname.

Mark any multi-line values (private keys, certificates) as runtime-only. Uncheck the Is build variable checkbox on those lines. Multi-line values can break the Dockerfile parser if Coolify tries to inject them at build time and the symptom is a confusing build failure that has nothing to do with your code.

Save. Coolify stores the env vars and will inject them into the next build and runtime container.

Step 5, deploy the app and run migrations

If your Heroku app had a release phase running migrations (release: rails db:migrate or similar), recreate that as a Pre-deployment Command in the application's General tab. This runs inside the same image as your app, before the new container takes over from the old one, so migrations always run before traffic hits the new code.

For a Rails app the command is bundle exec rails db:migrate. For Django it's python manage.py migrate. For a Node app using Prisma it's npx prisma migrate deploy. The command runs once per deploy, so it's fine for migrations and seed data, not for long-running tasks.

Now click Deploy. Coolify pulls the latest commit, runs Nixpacks (or your Dockerfile), runs the pre-deployment command, starts the new container, waits for the health check to pass, then swaps Traefik's routing over. The build logs stream live in the Deployments tab.

If the build fails, the error is almost always one of three things. Missing env vars (your app crashes on startup looking for a config that wasn't carried over). Missing system dependencies (a native gem or pip package that needs libpq-dev or similar, which Heroku's buildpack handled silently and Nixpacks may not). Build memory (your app's bundle size grew over the years and the build now needs more RAM than your VPS has, see the heap memory guide for the fix).

Step 6, smoke test before DNS cutover

Don't change DNS yet. Your Coolify app has a default subdomain Coolify generated for it (something like your-app.your-coolify-instance.com or just an IP-based URL if you haven't set up a wildcard yet). Hit that URL and verify the app responds. Log in with a real user. Click around. Make sure the database queries return real data, the session cookies work and any third-party integrations are reachable.

Pay special attention to anything that depended on Heroku's environment. $PORT and $DYNO are Heroku-specific, your app should already be using process.env.PORT or similar (which Coolify sets correctly), but if you hardcoded anything to a Heroku assumption it'll show up here. Background workers, scheduled jobs and webhooks from external services are the usual missed pieces, test them explicitly.
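The portable pattern, shown here in Python (the Node equivalent is process.env.PORT), is to read the port from the environment with a local-dev fallback and never hardcode a Heroku-era value:

```python
import os

def get_port(default: int = 3000) -> int:
    # Both Heroku and Coolify inject PORT; the fallback only matters for local runs.
    return int(os.environ.get("PORT", default))

print(get_port())
```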

If something's off, fix it with the Heroku app still running so you have a comparison point. The whole reason for testing before DNS cutover is to keep the old app as a working fallback while you fiddle with the new one.

Step 7, cut over DNS and watch for issues

Once you're happy, go to the application's Domains tab in Coolify and add your real domain (app.yourdomain.com). Save. Now go to your DNS provider and update the A record for that subdomain to point at your VPS IP instead of your-app.herokuapp.com.

If your domain provider supports a low TTL setting, set it to 60 or 300 seconds a day before the cutover so the DNS change propagates quickly. After the change, keep the Heroku app running for a couple of days while DNS fully resolves to the new IP everywhere. Some clients cache DNS aggressively (corporate networks, mobile carriers, occasionally just bad router configurations) and traffic that still hits Heroku will succeed during the overlap window.

Coolify handles the SSL certificate automatically once DNS resolves. Within a minute or two of the A record propagating, Traefik fires a Let's Encrypt HTTP-01 challenge, gets the cert and serves your domain over HTTPS. Watch the Logs tab on the application for any ACME errors. The most common cause is Cloudflare proxying being on (set it to DNS only first, get the cert, then turn proxy back on).

Special case, multiple processes (web plus worker plus scheduler)

If your Heroku Procfile had multiple lines, each one becomes a separate resource in Coolify, all pointing at the same Git repo and image. The simplest pattern is to use a Dockerfile that builds once, then create multiple Coolify applications that share the build but run different commands.

You can also use the Docker Compose build pack to define web, worker and scheduler in one docker-compose.yml file at the repo root. Coolify reads it, brings up all the services together and manages them as a unit. This works well for tightly coupled multi-process apps where you want one deploy to update everything atomically.
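A minimal sketch of that layout, where service names and commands are illustrative rather than a drop-in file:

```yaml
services:
  web:
    build: .
    command: bundle exec puma -C config/puma.rb
  worker:
    build: .
    command: bundle exec sidekiq
```

Both services build from the same repo root, so one deploy rebuilds the image once and restarts both containers together.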

For Heroku Scheduler, Coolify's Scheduled Tasks feature in the application settings is the direct replacement. You define cron expressions and the command to run, Coolify wakes up the container at the right time and runs it. The crucial difference is that Heroku Scheduler always runs in a fresh dyno, while Coolify's Scheduled Tasks run inside your already-running app container by default. For most jobs this is fine. If you specifically need a fresh isolated environment, you can configure the task to run in a new container.

Rollback plan

Before you delete the Heroku app, give yourself a clean rollback. Keep the Heroku app for at least a week post-cutover, with all dynos running but no DNS traffic pointing at it. If something explodes on the Coolify side that you can't fix in 30 minutes, point DNS back at your-app.herokuapp.com and you're back in business while you debug.

You also want a clean point-in-time backup of the Coolify Postgres before any production traffic hits. Coolify's Backups feature on the database resource lets you take a manual backup before cutover. The backups guide walks through wiring up automated backups to S3 so you have ongoing recoverability after the migration.

What you gained, what you gave up

What you gain is predictable monthly billing (a flat VPS price instead of metered dyno hours plus add-on costs), full root on the underlying machine, the ability to run as many apps as the VPS can hold without per-app charges and a deploy pipeline that's structurally identical to what you had on Heroku.

What you give up is one-click horizontal scaling across multiple regions, the polish around managed Postgres (point-in-time recovery, automated follower databases, dataclips) and the implicit assumption that someone else is responsible when the server breaks at 3 AM. The 3 AM thing is real, plan for it. Set up monitoring (the Prometheus and Grafana guide covers a self-hosted stack), automated backups and ideally a runbook for the failures you can predict.

For most apps this trade is good. The marginal cost of a 4 GB or 8 GB VPS is dramatically lower than the equivalent Heroku monthly bill once you factor in Postgres, Redis and a few worker dynos. The marginal hassle is real but bounded.


Frequently asked questions

How do I migrate Heroku Postgres to Coolify without downtime?

The cleanest no-downtime path is to set up logical replication from your Heroku Postgres to your Coolify Postgres before cutover, let it catch up, then flip DNS. For most apps this is overkill, a brief read-only window during the dump and restore is fine. If your app can tolerate a few minutes of read-only mode, just put the Heroku app in maintenance mode (heroku maintenance:on), take the dump, restore on Coolify, deploy and switch DNS, then turn off maintenance mode. The total downtime is usually 5 to 30 minutes depending on database size.
