
How to deploy Next.js on Coolify in 2026

If you've ever tried to build a Next.js app on a 2 GB VPS and watched it crash halfway through with the dreaded "JavaScript heap out of memory" error, you already know the awkward part of running Next.js outside Vercel. The framework is greedy with build-time memory, the App Router is heavier than the old Pages Router, and your VPS doesn't care about your feelings.

Coolify takes most of the operational pain out of self-hosting Next.js. You get Git-based deploys, automatic SSL through Let's Encrypt, environment variable management and zero-config reverse proxying, all from a dashboard. What it doesn't do is magically give your VPS more RAM, so part of this guide is about working around that.

Below is the full setup, written against Coolify v4.0.0 (the stable release that landed in April 2026), covering Next.js 13 through 16. I'll point out the parts where the framework version actually matters.

What you need before you start

You need a server with Coolify already running. If you're on a LumaDock Coolify VPS, the dashboard is reachable on port 8000 the moment your server finishes provisioning, so this part is done for you. If you installed Coolify yourself, the getting started guide walks through dashboard access, securing port 8000 and connecting the localhost server (the machine Coolify itself runs on).

You also need a Git repository with your Next.js app. GitHub, GitLab, Gitea and Bitbucket all work. For this guide I'll use GitHub because that's what most people are on and because the GitHub App integration unlocks a few features the others don't have yet.

One thing worth being honest about up front. Next.js production builds peak at roughly 1.5 to 3 GB of memory depending on your bundle size, the App Router vs Pages Router split and whether you use output: 'standalone'. A 2 GB VPS is the absolute floor for a small static-leaning site. 4 GB is the realistic minimum for anything with a database. 8 GB is comfortable if you also want Postgres or Redis on the same machine. The 6 GB plan in the LumaDock starter range is a sweet spot for most solo SaaS builds.

Connect GitHub to Coolify

Coolify v4 introduced the concept of Sources, which is where you connect a Git provider once and then reference it across multiple apps. Before you can deploy from a private repo, you set up the source.

From the left sidebar in your Coolify dashboard, click Sources, then Add, then pick GitHub App. Coolify will prompt you to create a new GitHub App on your GitHub account or organization. Give it a name like coolify-prod (the name only matters to you) and Coolify pre-fills the permissions and webhook URL for you. Click through and authorize.

The permissions Coolify requests are read access to code, metadata, pull requests and issues, plus write access to checks and deployments. The write access is what lets it post commit statuses back to GitHub when a build succeeds or fails. If you're paranoid about scope, you can use a personal access token instead, but you lose PR preview deployments and inline commit statuses.

Once the app is authorized, you'll need to install it on the GitHub account or organization that owns the repo you're deploying. GitHub redirects you back to Coolify with the source connected. From this point on, every project on this Coolify instance can pick from the repos this app has access to.

Create the Next.js application

Sources are connected, time to add the app. Click Projects in the left sidebar, create a new project (call it whatever, projects are just organizational folders), then inside the project click Add a new resource.

You'll see a list of resource types. For a normal Next.js app from a private repo, pick Private Repository (with GitHub App). If your repo is public you can pick the simpler Public Repository path, which doesn't need the GitHub App at all and just clones over HTTPS.

Coolify will ask you to pick the repo from the list, then the branch (usually main or master), then it scans the repo to detect the framework. For Next.js it should pick up next in package.json and offer the Nixpacks build pack by default.

Nixpacks vs Dockerfile, which to pick

Coolify v4 supports four build packs out of the box: Nixpacks, Dockerfile, Docker Compose and Static. For Next.js the choice is really between Nixpacks and Dockerfile.

Nixpacks is the path of least resistance. Coolify auto-detects your Node.js version from package.json or .nvmrc, runs npm install (or yarn install or pnpm install depending on your lockfile), then npm run build, then starts the app with npm start. For 90% of Next.js apps this just works.

Dockerfile is the path you take when Nixpacks doesn't fit. Reasons to switch: you need a specific base image (Alpine vs Debian-slim for native dependencies like sharp), you want to pin Node.js to a version other than the one Nixpacks picks, you want a multi-stage build that copies only the standalone output into a smaller final image, or you already have a working Dockerfile and don't want to fight with another build system.

If you go the Dockerfile route, the official Next.js examples repo has a reference Dockerfile that uses the standalone output mode. Drop it in your repo, set output: 'standalone' in next.config.js and Coolify will use it automatically when it detects a Dockerfile at the repo root.
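If you want a starting point, here's a condensed sketch of that multi-stage pattern, assuming npm as the package manager and output: 'standalone' in next.config.js; the official example also adds a non-root user, which is worth keeping:

    # Dockerfile (sketch) - multi-stage build using Next.js standalone output
    FROM node:20-alpine AS base

    FROM base AS deps
    WORKDIR /app
    # libc6-compat helps native modules like sharp on Alpine
    RUN apk add --no-cache libc6-compat
    COPY package.json package-lock.json ./
    RUN npm ci

    FROM base AS builder
    WORKDIR /app
    COPY --from=deps /app/node_modules ./node_modules
    COPY . .
    RUN npm run build

    FROM base AS runner
    WORKDIR /app
    ENV NODE_ENV=production
    ENV HOSTNAME=0.0.0.0
    ENV PORT=3000
    # standalone output bundles a minimal server.js plus only the node_modules it needs
    COPY --from=builder /app/public ./public
    COPY --from=builder /app/.next/standalone ./
    COPY --from=builder /app/.next/static ./.next/static
    EXPOSE 3000
    CMD ["node", "server.js"]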

Set environment variables (the part that trips people up)

Next.js has a sneaky environment variable model. Some vars need to be available at build time so they get inlined into the client bundle (anything starting with NEXT_PUBLIC_). Others only matter at runtime in the Node.js server (database URLs, secrets, third-party API keys without the NEXT_PUBLIC_ prefix).

Coolify v4 distinguishes these. When you add an environment variable to your application, you'll see two checkboxes: Is build variable and Is runtime variable. Both default to true, which is fine for most cases but causes problems with multi-line secrets (private keys, certificates) that confuse the Dockerfile parser. If you have a multi-line secret, uncheck Is build variable for that one variable so it only gets injected at runtime.

The pattern that works for almost every Next.js app:

  • NEXT_PUBLIC_SITE_URL: build + runtime (it's inlined into the client bundle)
  • NEXT_PUBLIC_POSTHOG_KEY or similar analytics keys: build + runtime
  • DATABASE_URL: runtime only (you don't want this in your client bundle)
  • NEXTAUTH_SECRET, NEXTAUTH_URL: runtime only
  • STRIPE_SECRET_KEY: runtime only
  • Any private API key: runtime only

You add these in the application's Environment Variables tab. Coolify also lets you import a .env file if you'd rather paste a block of vars in one go. Just make sure to review the build/runtime checkboxes on each line before you save.
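If you go the import route, a sketch of the block you'd paste looks like this (every value is a placeholder):

    # placeholders throughout - swap in your real values
    # inlined into the client bundle: keep as build + runtime
    NEXT_PUBLIC_SITE_URL=https://app.example.com
    NEXT_PUBLIC_POSTHOG_KEY=phc_placeholder
    # server-only secrets: uncheck "Is build variable" after import
    DATABASE_URL=postgresql://user:pass@postgresql-database-abc123:5432/dbname
    NEXTAUTH_URL=https://app.example.com
    NEXTAUTH_SECRET=generate-something-long-and-random
    STRIPE_SECRET_KEY=sk_live_placeholder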

Configure the start command and port

For a stock Nixpacks build of a Next.js app, the default start command is npm start and the default port is 3000. Coolify reads this from the build pack and you don't usually need to touch anything.

If you used a custom Dockerfile or you're running with output: 'standalone', your start command will be different. With standalone mode it's typically node server.js with the HOSTNAME environment variable set to 0.0.0.0 so the server binds to all interfaces inside the container. Don't bind to 127.0.0.1, your container's loopback isn't reachable from Traefik.
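Concretely, the start command for a standalone build boils down to something like this (3000 matches Coolify's default port expectation):

    # bind to all interfaces so Traefik can reach the container
    HOSTNAME=0.0.0.0 PORT=3000 node server.js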

The port your app listens on inside the container is set in the application's General tab under Ports Exposes (3000 for a stock Next.js build). Traefik handles the public-facing 80 and 443 separately, so you never expose the app's port directly to the internet.

Add a custom domain and SSL

Coolify uses Traefik as its default reverse proxy and pulls Let's Encrypt certificates automatically. To wire up a domain, open your application, go to the Domains tab and add your domain (like app.yourdomain.com). You can put multiple domains here separated by commas if you want www and apex versions both served.

Now point your DNS. The A record for your domain should point at your VPS public IP. If you have an AAAA record, set it to the IPv6 address. Don't add a CNAME pointing at anything else, Coolify needs DNS to resolve directly to the host running Traefik or Let's Encrypt won't be able to verify the HTTP-01 challenge.

Within a minute or two of DNS propagating, Traefik issues a certificate and your site is live over HTTPS. If the cert doesn't show up, check the Logs tab on the application and look for ACME errors. The two usual culprits are DNS hasn't propagated yet (give it longer) or you have Cloudflare proxying enabled in front of Coolify with the orange cloud on, which makes Let's Encrypt's HTTP-01 challenge fail. Set Cloudflare to DNS only first, get the cert, then turn on the proxy.

Trigger your first deploy and read the logs

Click Deploy. Coolify pulls the latest commit, runs the build and streams logs into the Deployments tab in real time. The first build is usually the slowest because Nixpacks has to download the Node.js base image and install all your dependencies fresh. Subsequent builds use the Docker layer cache and run much faster, often in 60 to 90 seconds for a typical Next.js app.

While the build runs, watch for two things. First, the npm install step finishing without errors (missing peer dependencies are common and usually safe to ignore, but actual install failures will stop the build). Second, the next build step completing. If your build dies during next build with a heap memory error, you've hit the OOM problem we'll address next.

Once the build succeeds, Coolify starts the container, Traefik picks it up and your domain serves the new version. If something goes sideways at runtime (like a missing env var causing the server to crash on startup), you'll see the container restart loop in the Logs tab. Catch it early, fix the var, redeploy.

Fix the build OOM problem

This is where most Next.js deployments on small VPS instances fall over. The Node.js process running next build defaults to a heap size of about 4 GB on a 64-bit system, but if your VPS only has 2 GB of physical RAM the process will start swapping aggressively and either grind to a halt or get killed by the OOM killer before the build finishes.

You have three workable fixes, in order of effort.

Fix 1, raise the Node heap and add swap

The cheapest fix is to give Node a higher heap budget and let your VPS swap to disk for the duration of the build. In your environment variables, add NODE_OPTIONS with the value --max-old-space-size=3072 (the value is in MB, so 3 GB here). Set Is build variable to true; runtime can be either way. This lets the build process grow its heap to 3 GB before V8 gives up and throws the heap out of memory error.

Then on the VPS itself, make sure you have at least 2 GB of swap space configured. The Node.js heap memory fix guide walks through both the heap flag and adding swap on Ubuntu/Debian. The combination makes builds on a 2 GB VPS a minute or two slower, but they complete, which is enough to ship.
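If your server doesn't have swap yet, this is the usual recipe on Ubuntu/Debian (run as root; 2 GB is a reasonable size for builds, adjust to taste):

    # create and enable a 2 GB swap file
    fallocate -l 2G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # make it survive reboots
    echo '/swapfile none swap sw 0 0' >> /etc/fstab
    # optional: only swap under real memory pressure
    sysctl vm.swappiness=10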

Fix 2, switch to a bigger plan during builds

If your traffic at runtime is genuinely small (say a few thousand visitors a day) but your build is choking, you can run a temporary upgrade pattern. Resize your VPS to a 4 GB plan for a few hours while you ship a build, confirm it deploys, then resize back down. LumaDock supports plan resizing without rebuilding the server, so this is a real option, not a theoretical one. The deployment cost ends up being something like an extra dollar a month if you build once a week.

Fix 3, run a dedicated build server

Coolify v4 supports a Build Server feature where you offload builds to a separate machine, push the resulting image to a registry and pull it onto your runtime server. This is the right fix for agencies with many client sites or anyone running multiple Next.js apps on a shared box. It lets you keep a smaller runtime VPS and spin up a beefier build VPS only when you need it.

To set it up, add a second server in Servers, enable the Build Server toggle on its detail page and configure a Docker registry (Docker Hub, GHCR or a private registry). Coolify will then route builds for any app on this instance through the build server and pull the image down to the runtime server when it's ready. The downside is more moving parts. The upside is your production server stops swapping itself silly during deploys.

Add a Postgres database and connect it

Most Next.js apps that aren't static marketing sites need a database. Coolify makes Postgres, MySQL, MariaDB, MongoDB, Redis, KeyDB, DragonFly and ClickHouse available as one-click resources.

Inside your project, click Add a new resource again, pick Database, then PostgreSQL. Set a name, accept the auto-generated credentials (or set your own) and click Start. The database spins up in a separate container on the same Docker network as your application, which is what makes the next part work.

For your Next.js app to connect, set DATABASE_URL in the app's environment variables. The internal hostname is the database resource's name in Coolify (something like postgresql-database-abc123), the port is 5432 by default and the credentials are the ones you saw when you created the database. The full URL looks like postgresql://user:pass@postgresql-database-abc123:5432/dbname.

Critically, do not use the public IP or the external port that Coolify might expose. Use the internal Docker network hostname. This keeps database traffic inside the host, never hits the public internet and survives IP changes if you ever migrate.

If you're using Prisma or Drizzle, run your migrations as part of the deploy. With Nixpacks, you can add a Pre-deployment Command in the application settings (something like npx prisma migrate deploy) that runs before the new container takes over. With a Dockerfile, put the migration command in your entrypoint script.
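For the Dockerfile route, a minimal entrypoint sketch, assuming Prisma and assuming the Prisma CLI is actually present in the runtime image (standalone output prunes dependencies, so you may need to copy it in explicitly):

    #!/bin/sh
    # entrypoint.sh (sketch): apply migrations, then hand off to the Next.js server
    set -e
    npx prisma migrate deploy
    exec node server.js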

Auto-deploy on git push

The whole reason most people use Coolify is the Git push to deploy workflow. With the GitHub App source connected, this happens automatically. Push a commit to the branch you connected (usually main), GitHub fires a webhook to Coolify and Coolify rebuilds and redeploys the app.

If you're using a personal access token instead of the GitHub App, you'll need to add the webhook manually. Coolify shows you the webhook URL in the application's Webhooks section. Copy it, head to your GitHub repo's settings, add a new webhook with that URL, content type application/json and pick the Just the push event option. Coolify will start receiving deploy triggers from then on.

If you want CI to gate deploys (run tests before deploying, deploy only on green), wire up GitHub Actions to call Coolify's deploy webhook only when tests pass. The GitHub Actions CI/CD guide covers the test-then-deploy pattern in detail and the same approach works for Coolify, you just swap the SSH-based deploy step for a curl call to the Coolify webhook URL.
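A sketch of what that workflow looks like; COOLIFY_WEBHOOK_URL and COOLIFY_TOKEN are repository secrets you create yourself, and the exact method and auth header the webhook expects depend on your Coolify version, so check the Webhooks page for your app:

    # .github/workflows/deploy.yml (sketch)
    name: test-then-deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
          - run: npm ci
          - run: npm test
          - name: Trigger Coolify deploy
            # secrets below are ones you define yourself in the repo settings
            run: |
              curl -fsS -X POST "${{ secrets.COOLIFY_WEBHOOK_URL }}" \
                -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

If you wire this up, you'll probably want to turn off the automatic deploy-on-push for the app in Coolify so a commit doesn't deploy twice.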

Optional, put Cloudflare in front

You don't need Cloudflare to run a Next.js app on Coolify. Traefik plus Let's Encrypt gives you HTTPS, HTTP/2 and decent performance out of the box. Cloudflare adds three things on top: a global CDN cache, free DDoS protection at L7 and the ability to hide your origin IP behind their edge.

If you want it, set up your domain on Cloudflare (change nameservers at your registrar). Add the A record pointing to your VPS IP, set the proxy status to DNS only (gray cloud) initially. Wait for Coolify to issue the Let's Encrypt cert. Once HTTPS is working on the apex, switch the proxy status to Proxied (orange cloud). Set Cloudflare's SSL/TLS mode to Full (strict). You're done.

Two pitfalls to avoid. First, never put Cloudflare in proxy mode before the Let's Encrypt cert exists; the HTTP-01 challenge will fail because Cloudflare intercepts the validation request. Second, watch your Cloudflare cache rules carefully for Next.js. The default cache settings are too aggressive for a server-rendered app and will start serving stale HTML to logged-in users. Set cache rules to bypass the cache for any path that returns dynamic HTML, or apply "Cache Everything" only to static assets under /_next/static/.

Image optimization on a self-hosted Next.js

This is the gotcha that bites people moving from Vercel. Next.js has a built-in image optimization endpoint at /_next/image that resizes and serves images on demand. On Vercel, this runs on their edge for free. On a self-hosted Next.js instance, this runs in your Node.js process, which means it eats CPU and memory every time someone hits an image-heavy page.

You have three reasonable options.

Option one is to leave it alone. For low-traffic sites this is fine, your VPS handles it. Just make sure you have sharp installed (npm install sharp) so the image processing uses the native bindings instead of the slower JS fallback.

Option two is to use a remote image loader. In next.config.js, set images.loader to 'custom', point images.loaderFile at a small loader module, and have that loader build URLs for a CDN like Cloudflare Images, Imgix or Cloudinary. Your Next.js process never touches images, the CDN handles all of it.

Option three is to disable optimization entirely. Set images.unoptimized: true in next.config.js. This is the fastest path if you're already serving pre-optimized images from a CDN or you don't care about the optimization layer. Your Next.js process serves the raw image URL and the browser handles the rest.
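Here's roughly what options two and three look like in next.config.js; the loader file name and CDN URL pattern are placeholders:

    // next.config.js -- pick one approach, not both
    /** @type {import('next').NextConfig} */
    const nextConfig = {
      images: {
        // option three: serve images as-is, skip the optimizer
        unoptimized: true,
        // option two instead: delegate resizing to a CDN via a custom loader
        // loader: 'custom',
        // loaderFile: './image-loader.js',
      },
    }
    module.exports = nextConfig

    // image-loader.js -- only needed for option two; cdn.example.com is a placeholder
    export default function cdnLoader({ src, width, quality }) {
      return `https://cdn.example.com/${src}?w=${width}&q=${quality || 75}`
    }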

Each of these keeps the Vercel-style "free image optimization" from turning into a CPU-spike problem on your VPS at exactly the wrong moment.

Common errors and how to fix them

The build succeeds but the site shows a 502 Bad Gateway

Almost always means the container started but isn't listening on the port Coolify expects. Check the Ports Exposes setting in the application's General tab matches the port Next.js is actually binding to (3000 by default). Also check the container logs in the Logs tab for startup errors, sometimes the app crashes within a second of starting and Traefik just sees a connection refused.

The site loads but assets 404 with the wrong base path

Usually means you have basePath or assetPrefix set in next.config.js and they don't match how you're serving the app. If your domain is app.example.com with no path prefix, both should be undefined or empty strings. If you're serving from example.com/app, set basePath: '/app' and the matching assetPrefix.
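For the sub-path case, the relevant next.config.js lines are just these ('/app' is the example prefix):

    // next.config.js -- only when serving from example.com/app
    module.exports = {
      basePath: '/app',
      assetPrefix: '/app',
    }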

Database connection refused or hostname not found

Your DATABASE_URL is using the wrong hostname. Inside Coolify's Docker network, the hostname is the database resource name (you can copy it from the database's detail page), not localhost or 127.0.0.1. Also confirm the database container is actually running, sometimes it crashes on first start due to a credentials mismatch and the app can't connect because there's nothing to connect to.

Build fails with sharp not installed or sharp binary errors

The sharp package has native bindings that depend on the Linux distribution. If you're using a Dockerfile, make sure you're not using Alpine without the right build tools. Either switch the base image to node:20-slim (Debian-based) or install sharp with the Alpine-specific build flags. Nixpacks handles this for you on its default base image, so the issue mostly hits people who switched to a custom Dockerfile.

Out-of-memory during build, even with NODE_OPTIONS set

If you've already raised --max-old-space-size and added swap and you're still OOMing, your build is genuinely too big for the VPS. Either move to a build server (the Build Server feature described above), upgrade the VPS plan during deploys, or shrink the build's memory footprint by code-splitting more aggressively and moving heavy server components out of the pages that bundle them. The Node.js heap memory guide has detail on the heap settings and the zero-downtime deploys guide covers patterns for deploying without taking the runtime container down.

Auto-deploy stops working after a while

Usually the GitHub App webhook secret got out of sync. Go to your Sources entry in Coolify, regenerate the webhook secret and re-install the GitHub App on the repo. Push a test commit to confirm. If you're on a personal access token instead, check the token expiration date, GitHub now expires tokens by default and a stale token silently breaks deploys.

Where to go from here

Once your Next.js app is shipping, the obvious next steps are monitoring (so you know when things break), backups (so you can recover from your own mistakes) and harder security defaults. The Node.js monitoring guide covers Prometheus and Grafana on a VPS, the Coolify database backups guide handles automated Postgres dumps to S3 and the Node.js security hardening guide walks through the production server defaults that aren't on by default.

If you outgrow the Nixpacks build pack and want full control over your build environment, you can also run Next.js the old-fashioned way with PM2 and nginx outside of Coolify, which the PM2 and nginx guide covers. Most teams stick with Coolify because the Git push workflow is hard to give up, but it's good to know the manual route exists when you need finer control.

Frequently asked questions

How do I fix Next.js build out-of-memory errors on Coolify?

Set the environment variable NODE_OPTIONS to --max-old-space-size=3072 as a build variable and make sure your VPS has at least 2 GB of swap configured. If your build still OOMs after that, your VPS is genuinely too small for the build, so you have to either upgrade the plan, use Coolify's Build Server feature to offload builds to a separate machine or temporarily resize the VPS during deploys.
