
Zero-downtime Node.js deploys with PM2 cluster mode


Understanding fork mode vs cluster mode

PM2 offers two execution modes, each with different trade-offs. Choosing correctly depends on your application type and deployment constraints.

Fork mode runs a single instance of your application in its own process. This mode is simple and useful for background jobs, cron workers, or scripts that don't handle HTTP traffic. When you run pm2 reload in fork mode, PM2 stops the entire process, then starts it again. Any in-flight requests fail with connection errors. Your application experiences downtime proportional to startup time.

Cluster mode leverages the Node.js cluster module to run multiple instances of your application, typically one per CPU core. These instances share the same listening port through kernel-level load balancing. When you call pm2 reload in cluster mode, PM2 restarts one worker at a time while the others continue handling requests. Clients connecting during deployment hit healthy workers. This rolling restart approach eliminates downtime.

The trade-off is memory usage. Each instance consumes additional RAM for heap, code, and libraries. On a 4-core server with a 200MB per-instance footprint, you're committing 800MB to your application alone. Cluster mode also requires stateless design since requests may route to different workers.

When to use fork mode

Fork mode works well for background tasks that run on a schedule or queue-based workers. If your app is a cron job that runs every 5 minutes, a worker processing jobs from Redis, or a data sync script, fork mode is appropriate. These don't serve HTTP traffic and don't need load balancing. Fork mode also suits development environments where startup time is less critical.

Single-instance apps with extreme memory constraints may also use fork mode, though this is rare on modern VPS hardware.
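For reference, a minimal fork-mode entry in ecosystem.config.js might look like this fragment (the name and script path are illustrative):

```javascript
module.exports = {
  apps: [
    {
      name: 'email-worker',       // illustrative name
      script: './dist/worker.js',
      exec_mode: 'fork',          // the default; stated here for clarity
      instances: 1,
      autorestart: true
    }
  ]
};
```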

When to use cluster mode

Use cluster mode for any HTTP or TCP server handling user traffic. Web APIs, real-time servers, websocket gateways, and REST backends all benefit from parallel request handling and rolling restarts. Cluster mode also provides automatic restart on crash across multiple workers, improving reliability.

Cluster mode shines during deployments because zero-downtime restarts become automatic without special tooling.

How the Node.js cluster module works internally

Understanding the internal architecture helps you debug issues and configure PM2 correctly.

When you run PM2 in cluster mode, PM2 uses the Node.js cluster module, which consists of a master process and one or more worker processes. The master process creates worker processes using the child_process.fork() API. Each worker is a separate V8 instance with its own memory space and event loop.

The master process never handles application requests directly. Instead, it manages worker lifecycle and load balancing. When a client connects, the kernel distributes the connection using one of two strategies.
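This architecture can be sketched by hand with the cluster module. The following is a minimal illustration, not PM2's actual code: the master forks one worker per core and replaces any that die, while each worker runs the HTTP server.

```javascript
import cluster from 'node:cluster';
import http from 'node:http';
import os from 'node:os';

// One worker per CPU core is the usual starting point
function workerCount() {
  return os.cpus().length;
}

function startCluster() {
  if (cluster.isPrimary) {
    // Master: fork workers and replace any that die
    for (let i = 0; i < workerCount(); i++) {
      cluster.fork();
    }
    cluster.on('exit', (worker) => {
      console.log(`worker ${worker.process.pid} died, forking a replacement`);
      cluster.fork();
    });
  } else {
    // Worker: every worker listens on the same port; the master
    // coordinates the underlying socket
    http.createServer((req, res) => {
      res.end(`handled by pid ${process.pid}\n`);
    }).listen(3000);
  }
}

// Call startCluster() from your entry point
```

Everything PM2 adds on top (rolling reloads, monitoring, log management) is built over this primitive.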

Load balancing strategy

On Linux and macOS, Node.js uses round-robin load balancing. The master process creates a listening socket on port 3000 (or your app port). Client connections arrive at this socket. The master accepts each connection and distributes it to worker processes in a circular sequence: worker 1, worker 2, worker 3, worker 1, etc. This ensures even load distribution.

On Windows, the kernel handles load balancing directly. The master socket passes a handle to each worker, and each worker listens independently. The kernel spreads connections across them.

Inter-process communication (IPC)

The master and each worker maintain an IPC channel for communication. This is a bidirectional pipe created by fork(). Messages flow through this channel to coordinate restarts, signal workers, and relay status information.

Workers cannot communicate directly with each other via IPC. If two workers need to share data, they must use external storage like Redis or a database. This constraint ensures stateless design and horizontal scalability.

You can manually send messages between master and workers using process.send() and process.on('message'), but this is rarely needed in production code.

Server handle sharing

A clever detail in cluster mode is how listening sockets are shared. When a worker calls server.listen(3000), it doesn't create a new socket. Instead, it sends an IPC message to the master describing the listening request. Under the default round-robin scheduling, the master owns the socket, accepts each connection, and hands it to a worker over IPC. Under shared-handle scheduling (the default on Windows), the master passes the socket handle back to the worker, and all workers accept from the same underlying kernel socket.

Either way, every worker serves the same port without conflicts, and incoming connections are spread across workers.

Complete ecosystem.config.js reference

The ecosystem file is your PM2 configuration manifest. Every production deployment needs one.

module.exports = {
  apps: [
    {
      name: 'my-api',
      script: './dist/server.js',
      cwd: '/var/www/my-api',
      instances: 'max',
      exec_mode: 'cluster',
      autorestart: true,
      max_restarts: 10,
      min_uptime: '10s',
      max_memory_restart: '500M',
      node_args: '--max-old-space-size=1024',
      env: {
        NODE_ENV: 'production',
        PORT: 3000
      },
      env_development: {
        NODE_ENV: 'development',
        DEBUG: 'app:*'
      },
      error_file: '/var/log/pm2/err.log',
      out_file: '/var/log/pm2/out.log',
      log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
      merge_logs: true,
      watch: false,
      ignore_watch: ['node_modules', 'logs', 'dist', '.next'],
      kill_timeout: 10000,
      listen_timeout: 10000,
      wait_ready: false
    }
  ],
  deploy: {
    production: {
      user: 'deploy',
      host: 'prod.example.com',
      ref: 'origin/main',
      repo: '[email protected]:user/app.git',
      path: '/var/www/my-api'
    }
  }
};

Instance count and CPU binding

The instances field controls worker count. Set it to a number like 4, or use 'max' for automatic detection based on CPU cores. The exec_mode field must be 'cluster' for load balancing.

Check your VPS CPU count with nproc or getconf _NPROCESSORS_ONLN. VPS plans expose a fixed number of vCPUs, so matching instances to cores is straightforward. A 4-core server should run 4 instances.

If your application is CPU-bound (heavy computation), matching cores is optimal. If it's I/O-bound (database queries, API calls), you might run slightly more instances because workers spend much of their time waiting on I/O rather than executing JavaScript. Start with core count and adjust based on monitoring.

Memory limits and auto-restart

The max_memory_restart setting protects against memory leaks. Set it to a threshold with a unit suffix, like '500M' or '1G'. If a worker exceeds this, PM2 gracefully restarts it.

The max_restarts and min_uptime combination prevents restart loops. A worker that exits before reaching min_uptime counts as an unstable restart; after max_restarts consecutive unstable restarts, PM2 gives up and marks the app as errored. This prevents CPU spinning if your code has a fatal bug.

The autorestart field defaults to true. Set it to false only for one-off scripts.

Logging configuration

The error_file and out_file paths specify where logs go. Use absolute paths to avoid confusion. merge_logs: true combines output from all workers into one file, making tailing easier.

The log_date_format field customizes timestamp format. Use ISO 8601 for UTC-aware parsing by log aggregation tools. Add timezone offset handling if your server isn't in UTC.

Consider rotation with tools like pm2-logrotate to prevent disk filling. Install it with pm2 install pm2-logrotate.

Environment variables per environment

Use env for common variables and env_production, env_development, etc. for environment-specific overrides. PM2 merges these at startup.

Start your app with pm2 start ecosystem.config.js --env development to use development variables.

Never store secrets in ecosystem.config.js. Instead, load them from .env files using dotenv, or mount secrets from your orchestration platform.
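Whatever loads your secrets, it helps to fail fast at startup when required variables are missing, rather than crash mid-request later. A small sketch (the variable names are illustrative):

```javascript
// Fail fast if required configuration is missing, instead of
// failing later on the first database call.
function assertEnv(env, required) {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
}

// At startup, before app.listen():
// assertEnv(process.env, ['DATABASE_URL', 'SESSION_SECRET']);
```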

Watch mode and ignore patterns

Set watch: true only in development. PM2 will restart your app whenever files change. In production, always use watch: false to prevent accidental restarts.

The ignore_watch array lists paths to ignore. Include node_modules, log directories, and temporary files. Use glob patterns like dist/**/*.map to ignore source maps.

Implementing graceful shutdown

Cluster mode only provides zero downtime if your application cooperates with restarts. Graceful shutdown means accepting no new requests, finishing existing work, and exiting cleanly within a timeout.

When PM2 stops a worker, it sends a shutdown signal (SIGINT by default, configurable via kill_signal). Your app should stop accepting new connections, drain in-flight requests, close resources, and exit. If that takes too long, PM2 sends SIGKILL after kill_timeout milliseconds. Handling both SIGINT and SIGTERM covers PM2 and most other process supervisors.

Express graceful shutdown

import express from 'express';

const app = express();
let server;

app.get('/api/users', (req, res) => {
  res.json({ users: [] });
});

// Start listening
server = app.listen(3000, () => {
  console.log('Server listening on port 3000');
});

// Track active connections
const activeConnections = new Set();

server.on('connection', (conn) => {
  activeConnections.add(conn);
  conn.on('close', () => {
    activeConnections.delete(conn);
  });
});

// Graceful shutdown handler (PM2 sends SIGINT by default; handle both)
function shutdown(signal) {
  console.log(`${signal} received, starting graceful shutdown`);

  // Stop accepting new connections; the callback fires once all
  // existing connections have closed
  server.close(() => {
    console.log('HTTP server closed');
    process.exit(0);
  });

  // After a grace period, destroy lingering keep-alive connections
  // so server.close() can complete
  setTimeout(() => {
    for (const conn of activeConnections) {
      conn.destroy();
    }
  }, 5000).unref();

  // Force exit after timeout if shutdown still hangs
  setTimeout(() => {
    console.error('Forceful shutdown after timeout');
    process.exit(1);
  }, 10000).unref();
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

The key steps are: call server.close() to stop accepting connections, wait for in-flight requests to complete, close database connections, and exit. The timeout ensures you don't hang indefinitely.
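Closing resources is often the step that hangs. A helper that tolerates individual failures keeps shutdown moving; close() here stands in for whatever your clients expose (a pg pool's end(), a Redis client's quit(), and so on):

```javascript
// Close each resource in turn; log failures but keep going so one
// stuck client doesn't block the rest of shutdown.
async function closeResources(resources) {
  for (const resource of resources) {
    try {
      await resource.close();
    } catch (err) {
      console.error('failed to close resource:', err);
    }
  }
}

// In the shutdown handler, after server.close():
// await closeResources([dbPool, redisClient]);
```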

Fastify graceful shutdown

import Fastify from 'fastify';

const fastify = Fastify({ logger: true });

fastify.get('/api/health', async (request, reply) => {
  return { status: 'ok' };
});

fastify.listen({ port: 3000, host: '0.0.0.0' }, (err, address) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
  console.log(`Server listening at ${address}`);
});

process.on('SIGTERM', async () => {
  console.log('SIGTERM received, closing server');

  // Timeout fallback: armed only once shutdown has started
  setTimeout(() => {
    console.error('Forced shutdown after 15 second timeout');
    process.exit(1);
  }, 15000).unref();

  try {
    // Fastify handles graceful shutdown with close()
    await fastify.close();
    console.log('Fastify server closed');
    process.exit(0);
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

Fastify's close() method is cleaner than Express because it's promise-based and handles more shutdown logic automatically. Still set a timeout in case databases or external services hang.

NestJS graceful shutdown

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  await app.listen(3000);

  // Handle graceful shutdown
  process.on('SIGTERM', async () => {
    console.log('SIGTERM received, closing NestJS app');

    // Timeout safety net: armed only once shutdown has started
    setTimeout(() => {
      console.error('Forced shutdown after 20 second timeout');
      process.exit(1);
    }, 20000).unref();

    try {
      // NestJS close() shuts down all modules and providers
      await app.close();
      console.log('NestJS app closed');
      process.exit(0);
    } catch (err) {
      console.error('Error during shutdown:', err);
      process.exit(1);
    }
  });
}

bootstrap();

NestJS's close() method closes all providers and stops listening. Alternatively, call app.enableShutdownHooks() during bootstrap and implement OnApplicationShutdown in your providers; NestJS then wires up the signal handling for you.

PM2 reload vs restart

The difference is crucial. pm2 reload performs a graceful, rolling restart. pm2 restart force-kills the process.

In cluster mode, pm2 reload restarts workers one at a time. It sends the shutdown signal (SIGINT by default) to worker 1, waits for it to exit gracefully, starts a replacement, then repeats for worker 2, and so on. Throughout this sequence the remaining workers stay online handling traffic.

pm2 restart takes down every worker at once: each receives the shutdown signal, then SIGKILL after kill_timeout, and only then does PM2 start new workers. If startup takes 5 seconds, your API is unavailable for roughly 5 seconds.

Use pm2 reload for deployments. Use pm2 restart only if reload hangs, indicating broken graceful shutdown code.

# Graceful rolling restart
pm2 reload my-api

# Force restart (causes brief downtime)
pm2 restart my-api

# Reload with wait-ready flag
pm2 reload ecosystem.config.js --wait-ready

The --wait-ready flag

By default, PM2 considers a worker started as soon as the process spawns. If your app takes time to establish database connections or warm up caches, PM2 might start the next worker before the previous one is ready.

The --wait-ready flag tells PM2 to wait for your app to explicitly signal readiness via process.send('ready'). Only after receiving this signal does PM2 move to the next worker.

Add this to your app startup:

const server = app.listen(3000, () => {
  console.log('HTTP server listening');
  
  // Signal readiness to PM2
  if (process.send) {
    process.send('ready');
  }
});

This is especially valuable for apps with warm-up phases. Set wait_ready: true in ecosystem.config.js or pass --wait-ready on the CLI.
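Paired in ecosystem.config.js, the two settings might look like this fragment; listen_timeout caps how long PM2 waits for the 'ready' message before treating the worker as online anyway:

```javascript
// Fragment of an app entry in ecosystem.config.js
{
  name: 'my-api',
  script: './dist/server.js',
  exec_mode: 'cluster',
  instances: 'max',
  wait_ready: true,       // wait for process.send('ready')
  listen_timeout: 10000   // give up waiting after 10 seconds
}
```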

Memory management with max_memory_restart

Memory leaks are insidious in production. Your app gradually consumes more RAM until the server runs out of swap, the kernel kills processes, and your service goes offline.

The max_memory_restart setting provides a safety net. PM2 monitors each worker's resident memory. If a worker exceeds the threshold, PM2 gracefully restarts it, releasing memory.

max_memory_restart: '500M'

This configuration restarts any worker exceeding 500MB. Set the threshold comfortably above your expected peak memory per worker, so that normal peak usage sits around 80% of the limit. If your app normally peaks at 350MB, set 450MB.

To find the right threshold, run your app for a few days in production and monitor memory growth.

pm2 monit

Watch the memory column. Note the typical peak for one worker. If it's 300MB on a normal day and 400MB during traffic spikes, set max_memory_restart to 500MB to give headroom before restart.

For a quick JSON snapshot of process metrics:

pm2 web  # JSON metrics API on http://localhost:9615

The pm2 web command exposes current process metrics as JSON rather than a graphical dashboard. For memory graphs over time, connect PM2 Plus or ship metrics to an external monitoring stack.

Instance scaling strategy

The number of instances affects throughput and resource consumption. Too few instances mean CPU underutilization. Too many waste RAM with diminishing returns.

Start with instances matching CPU cores. A 4-core server runs 4 instances.

nproc  # Shows CPU core count

For I/O-bound applications (APIs calling databases, external services), you might run more instances than cores because each worker spends much of its time waiting on I/O. Experimentation works best.

Monitor CPU and memory under typical load. If CPU sits at 40% and memory is healthy, your instances are underprovisioned. If CPU hits 95%, add instances if you have spare memory, or optimize code.

Never run more instances than your memory can support. Each instance consumes heap, code cache, and libraries. On a 4GB VPS at 300MB per instance, 12 instances would consume nearly all available RAM, which only works if nothing else on the box needs memory; in practice, leave headroom for the OS and traffic spikes.

WebSocket considerations in cluster mode

WebSockets introduce a challenge: connections are long-lived and stateful. When a client connects, it might establish state (like a session ID) on worker 1. If the client sends a subsequent message and the kernel routes it to worker 2, worker 2 has no knowledge of the connection.

Three approaches handle this:

Sticky sessions approach

Sticky sessions ensure all traffic from a given client reaches the same backend process. With Nginx this typically means pinning clients by IP hash or a session cookie. The downside is uneven load distribution if a few clients dominate traffic.
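Note that behind a single clustered port, Nginx cannot pick a specific worker; in practice, sticky sessions usually mean running several fork-mode processes on distinct ports and letting Nginx pin clients by IP. A sketch (ports and paths are illustrative):

```nginx
# Sketch: three fork-mode PM2 processes on ports 3001-3003,
# pinned per client IP.
upstream ws_backend {
    ip_hash;                    # same client IP -> same backend
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;                  # required for WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```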

Redis adapter approach

Use a message broker like Redis to share WebSocket state across workers. Libraries like socket.io with a Redis adapter allow workers to broadcast events to all clients regardless of which worker they connect through.

import { Server } from 'socket.io';
import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';

const pubClient = createClient();
const subClient = pubClient.duplicate();

await Promise.all([pubClient.connect(), subClient.connect()]);

const ioServer = new Server(3000);
ioServer.adapter(createAdapter(pubClient, subClient));

ioServer.on('connection', (socket) => {
  socket.on('message', (msg) => {
    // Broadcast to all clients, all workers
    ioServer.emit('broadcast', msg);
  });
});

Separate WebSocket server approach

Run WebSockets on a separate PM2 instance listening on a different port (e.g., 4000), not in cluster mode. This avoids the distributed state problem altogether. Nginx routes WebSocket traffic to this separate instance and API traffic to the clustered instances.

Choose sticky sessions for simplicity and low overhead, Redis for complex state, or a separate server for maximum isolation.

Monitoring cluster mode

Visibility into worker health and performance is critical.

Using pm2 monit

pm2 monit

This opens an interactive dashboard showing CPU, memory, and restart count for each worker in real-time. Useful during deployments to watch rolling restarts or after incidents to spot memory leaks.

Tailing logs

pm2 logs my-api
pm2 logs my-api --err  # Only error logs
pm2 logs my-api --lines 200  # Last 200 lines

With merge_logs: true in ecosystem.config.js, all workers' logs appear in one stream with timestamps. Without merging, you get separate files per worker.

Listing processes

pm2 list
pm2 info my-api  # Detailed info for one process

The list shows PID, memory, CPU, restart count, uptime, and status for each worker. Watch the restart count spike if workers are crashing frequently.

When NOT to use cluster mode

Cluster mode isn't a panacea. Some apps suffer performance losses.

Background workers processing jobs from a queue don't need clustering. They're not serving HTTP traffic, so rolling restarts don't apply. Use fork mode.

Apps with in-process caching shared between requests have issues in cluster mode. Each worker maintains separate caches. If worker 1 caches a value, worker 2 requests the same data and misses the cache. Solution: move caching to Redis, or accept the overhead.
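The effect is easy to demonstrate: any in-memory structure is duplicated per process, so a value cached in worker 1 gets recomputed in worker 2.

```javascript
// Each cluster worker has its own copy of this Map, so hit rates
// drop as worker count grows.
const cache = new Map();
let computations = 0;

function getCached(key, compute) {
  if (!cache.has(key)) {
    computations++;             // happens once per key, per worker
    cache.set(key, compute(key));
  }
  return cache.get(key);
}
```

With 4 workers, the same key may be computed up to 4 times across the cluster; a shared Redis cache would compute it once.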

Cron jobs that run on a schedule should run once globally, not in every worker. PM2 cluster mode would execute the job 4 times on a 4-core server. Use fork mode and trigger it from one worker only, or use a distributed scheduler.
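If the job has to live inside the clustered app, PM2 exposes each worker's index as the NODE_APP_INSTANCE environment variable, so a common workaround is to schedule only in instance 0. A sketch:

```javascript
// PM2 sets NODE_APP_INSTANCE to '0', '1', ... for cluster workers.
function isFirstInstance(env) {
  // Plain `node` runs (no PM2) also count as "first"
  return env.NODE_APP_INSTANCE === undefined || env.NODE_APP_INSTANCE === '0';
}

if (isFirstInstance(process.env)) {
  // Schedule here (setInterval, node-cron, ...); runs in one worker only
  setInterval(() => {
    console.log('cleanup job, once per cluster');
  }, 5 * 60 * 1000).unref();   // unref: don't keep the process alive for this
}
```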

Apps that rely on in-process shared memory (SharedArrayBuffer with worker_threads, or native addons holding global state) cannot share it across cluster workers, because each worker is a separate process. Test carefully.

PM2 deploy workflow

PM2 includes a deploy feature for pushing code and running deployment commands. This is an alternative to CI/CD platforms.

deploy: {
  production: {
    user: 'deploy',
    host: 'prod.example.com',
    ref: 'origin/main',
    repo: '[email protected]:user/app.git',
    path: '/var/www/my-api',
    'post-deploy': 'npm install && npm run build && pm2 reload ecosystem.config.js'
  }
}

Deploy from your local machine:

pm2 deploy ecosystem.config.js production setup  # First time setup
pm2 deploy ecosystem.config.js production  # Deploy

PM2 clones the repo, runs post-deploy commands, and reloads your app. This is simple but not as flexible as GitHub Actions. For complex deployments, use a CI/CD platform instead.

Troubleshooting cluster mode issues

Workers crashing in a loop

If you see max restarts exceeded, PM2 detected a worker crashing repeatedly and gave up restarting. Check the logs:

pm2 logs my-api --err --lines 500

Look for the error causing the crash. Common causes are missing environment variables, database connection failures, or module import errors. Fix the issue locally, rebuild, and redeploy.

Temporarily increase max_restarts and min_uptime if you're in the middle of debugging, but reset them once fixed.

Port conflicts (EADDRINUSE)

If you see Error: listen EADDRINUSE: address already in use :::3000, another process is using the port. Check what's listening:

sudo lsof -i :3000
sudo netstat -tulpn | grep 3000

Kill the conflicting process or change your port. In ecosystem.config.js, use a different port for development vs production via environment variables.

Graceful shutdown not working

If pm2 reload takes a long time and eventually force-kills workers, your app isn't handling SIGTERM. Test locally:

pm2 start ecosystem.config.js
kill -SIGTERM $(pgrep -f "dist/server.js" | head -1)

Your app should log the SIGTERM message and exit within 2 seconds. If it doesn't, add SIGTERM handling as shown in the graceful shutdown section.

Check that server.close() is actually being called and that you're not keeping long-lived connections open indefinitely.


FAQ

Can I use cluster mode with Express middleware that stores state in locals?

Yes, but be careful. res.locals is scoped to a single request-response cycle, so it behaves the same in cluster mode. app.locals, however, is per-worker: each worker keeps its own copy, so cross-request shared state diverges between workers. Use Redis for data that must persist across workers or requests.
