
n8n Queue Mode Setup Guide for VPS Scalability


A single n8n instance worked great for about six months. Then we hit 200+ workflow executions per day and everything slowed to a crawl – webhooks timing out, the UI lagging when you tried to open the editor, new workflows waiting 30 seconds just to start because some CSV processing task was hogging the only available thread. The n8n scalability just wasn’t there.

n8n queue mode fixes this by splitting responsibilities in your n8n Docker setup. One n8n process handles the web UI and listens for incoming triggers like webhooks, schedules, and manual executions. Separate worker processes – could be three, could be ten – sit there pulling jobs from a Redis queue and actually executing the workflows. So now ten workflows can run at the same time instead of waiting in line. That’s real workflow automation.

You’ll configure this with Docker Compose, Redis for the job queue, and PostgreSQL because SQLite doesn’t handle multiple processes well (actually, it handles them terribly). Environment variables have to match exactly across all containers or nothing works. You’ll need VPS access and Docker knowledge. Expect to troubleshoot. But you’ll end up with a setup that can handle hundreds of simultaneous workflow executions without choking.

Ready to get started?

How n8n Queue Mode Works

Standard n8n crams everything into one process. The web interface runs there. Webhook listeners run there. Workflow executions run there. They all fight for CPU and memory. Start a workflow that processes a 50MB file and everything else waits. Your colleague tries to open a workflow in the editor – spinning loader for 15 seconds. A webhook from Stripe hits your endpoint and you get a timeout because the process is busy.

Queue mode separates concerns in the n8n queue mode architecture. The main process serves the web UI and listens for triggers. When a workflow needs to run, the main process doesn’t touch it. It just pushes a job description into Redis with details like workflow ID, trigger data, and which credentials to use. Then it moves on to the next thing.

Workers watch the Redis message broker constantly. They’re running a tight loop: check queue, grab a job if one exists, execute it, write results to PostgreSQL, check queue again. Redis (using Bull internally) maintains these queues where jobs sit until a worker’s available. If ten workflows trigger simultaneously and you’ve got ten workers running, all ten execute at once.

Everything connects to the same PostgreSQL database. Credentials live there, along with workflow definitions and execution history. This shared state means any worker can execute any workflow – there’s no pinning specific jobs to specific machines, which would be a nightmare for workflow orchestration. The database tracks execution status in real-time, so the main process shows you progress updates in the UI even though execution’s happening on a completely different container somewhere.

Parallel execution is why you’d bother with all this complexity. One workflow crunching through 10,000 spreadsheet rows doesn’t block another workflow that just needs to send a Slack message. It’s usually best to run heavy file operations on dedicated workers with more RAM, while API-only workflows run on smaller workers, but this depends on your workflow.

Prerequisites

You need an n8n VPS or Dedicated Server with minimum 2 CPU cores and 4GB RAM. You’ll want 4+ cores and 8GB+ for production stuff handling real workflow automation volume – a Cloud VPS 10 from Contabo is a great place to start. Redis and PostgreSQL together eat about 1GB. Each worker process consumes somewhere between 200MB and 500MB depending on what your workflows do. Complex workflows with lots of data transformations need more.
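The numbers above can be turned into a rough capacity estimate for a given VPS. This is an illustrative sketch only – the figures are this guide's ballpark numbers, and the `max_workers` helper is a made-up name, not anything n8n provides:

```python
# Rough capacity estimate for an n8n queue-mode host.
# All figures are ballpark assumptions from this guide, not measured values.

def max_workers(total_ram_mb: int,
                base_services_mb: int = 1000,   # Redis + PostgreSQL combined
                main_process_mb: int = 500,     # main n8n process (UI + triggers)
                per_worker_mb: int = 500,       # worst-case worker footprint
                headroom_mb: int = 512) -> int: # OS and usage spikes
    """Return how many workers fit in RAM under these assumptions."""
    available = total_ram_mb - base_services_mb - main_process_mb - headroom_mb
    return max(0, available // per_worker_mb)

print(max_workers(4096))  # 4GB VPS -> 4
print(max_workers(8192))  # 8GB VPS -> 12
```

If your workflows are API-only, the per-worker figure drops toward 200MB and the counts roughly double; heavy file-processing workflows push it the other way.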

Docker and Docker Compose have to be installed and working. This guide assumes Docker 24+ and Compose v2. If you’re still typing docker-compose (with the hyphen) instead of docker compose (with a space), you’re on the old standalone version and should upgrade – the scale commands used later in this guide assume the Compose v2 plugin syntax.

You should already know n8n self hosted basics. This isn’t “your first n8n installation” – you need to understand workflows, how credentials work, and the general n8n concepts around environment variables. Queue mode adds layers of complexity on top of that foundation. Start simple if you’re new – search “n8n” on our blog to see plenty of useful articles to help you get going.

PostgreSQL knowledge helps but honestly isn’t required. You’ll get a PostgreSQL docker setup running, but the actual database operations happen automatically once you’ve got the connection strings right. SQLite doesn’t work with queue mode at all. Multiple processes trying to access the same SQLite file creates corruption. PostgreSQL is mandatory, not a suggestion.

Command-line comfort is essential. You’ll edit docker-compose.yml files, set environment variables (lots of them), and run Docker commands to scale workers up and down. You’ll also need to check logs when containers fail to start. If navigating directories in a terminal and reading error messages makes you nervous, get comfortable with that first.

Configuring n8n Queue Mode

The n8n configuration for queue mode follows a specific order. You’ll work through the n8n Docker Compose setup systematically: Redis first because workers immediately try to connect when they start, then the n8n environment variables that need to match across all containers, then the main n8n process and PostgreSQL, and finally workers. Skip around and you’ll get dependency failures where containers crash because they’re trying to connect to services that don’t exist yet. Later, when you need to add capacity, you’ll use Docker Compose scale commands to spin up additional workers, but the initial setup needs to happen in this sequence.

Prepare a Redis Container

Redis is the foundation for n8n’s job distribution. Workers pull jobs from the Redis queue. The main process pushes new executions into those queues. Without Redis running, queue mode literally doesn’t function.

Make a directory for your setup and add Redis to your Docker Compose Redis config. Here’s the basic service definition:

redis:
  image: redis:7-alpine
  container_name: n8n_redis
  ports:
    - "6379:6379"
  volumes:
    - redis_data:/data
  command: redis-server --appendonly yes
  restart: unless-stopped

That --appendonly yes flag turns on Redis persistence. Without it, restarting the Redis container means you lose all queued jobs. Gone. The volume mount stores this persistent data on disk.

For the n8n Redis connection, workers and the main process will connect using redis as the hostname since they’re all in the same Docker network. Port 6379 is Redis default. You can add password authentication with --requirepass yourpassword in the command, though honestly for containers on a private Docker network it’s optional.
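If you do enable authentication, the password has to be supplied on both sides. A sketch, assuming the QUEUE_BULL_REDIS_PASSWORD variable n8n reads for its Redis connection and a REDIS_PASSWORD entry you’d add to your .env file:

```yaml
redis:
  command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}

# ...and in the environment section of the main process AND every worker:
#   - QUEUE_BULL_REDIS_PASSWORD=${REDIS_PASSWORD}
```

Leave it out entirely and Redis accepts unauthenticated connections, which is acceptable only while the port isn’t reachable from outside the Docker network.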

Start just Redis: docker compose up -d redis. Check the logs immediately: docker logs n8n_redis. You should see “Ready to accept connections” without errors. Test it with docker exec n8n_redis redis-cli ping – you should get back PONG – before moving forward. Better to catch connection issues now than after you’ve configured six other containers.

Configure Environment Variables

Environment variables control how n8n operates in queue mode. Get the n8n configuration wrong and processes won’t communicate, workers won’t start, or worse, workers will partially work but corrupt data because the encryption key doesn’t match.

Here are the critical n8n environment variables for the n8n setup:

EXECUTIONS_MODE=queue switches n8n from default to queue mode. Set this on both the main process and all workers. Without it n8n just runs in standard single-process mode and ignores Redis entirely.

N8N_ENCRYPTION_KEY must be identical across the main process and every single worker. This encrypts credentials in the database. Workers with a different key can’t decrypt credentials, so workflows run but fail when they try to access APIs or databases. Silent failures are the worst kind. Generate a long random string once, reuse it everywhere, and never change it after deployment unless you want to manually migrate all encrypted data.

QUEUE_BULL_REDIS_HOST and QUEUE_BULL_REDIS_PORT point to Redis. Use the Docker service name redis as the host since containers talk via Docker’s internal network. Port is 6379 unless you changed it.

DB_TYPE=postgresdb switches from SQLite to PostgreSQL. Then add DB_POSTGRESDB_HOST, DB_POSTGRESDB_DATABASE, DB_POSTGRESDB_USER, and DB_POSTGRESDB_PASSWORD. All processes connect to the same database with identical credentials.

Put these in a .env file or directly in docker-compose.yml under environment sections. The configuration stays consistent across main and worker processes except a few worker-specific settings.
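A minimal .env to go with the compose snippets in this guide might be generated like this – the variable names mirror the ${...} placeholders used throughout, and openssl produces the long random key the encryption setting needs:

```shell
# Generate a 64-character hex key once; reuse it everywhere, never rotate
# it casually -- existing credentials become undecryptable if it changes.
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -hex 16)

# Write the .env file that docker-compose.yml reads via ${...} substitution.
cat > .env <<EOF
N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EOF
```

Back this file up somewhere safe – losing the encryption key means losing access to every stored credential.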

Deploy the Main n8n Process

The main n8n process handles the web UI, serves the API, and queues workflow executions – but doesn’t execute them. Users interact with this at port 5678. This is the core of your n8n deployment.

Here’s the n8n docker service definition in docker-compose.yml for the n8n installation:

n8n:
  image: n8nio/n8n:latest
  container_name: n8n_main
  ports:
    - "5678:5678"
  environment:
    - EXECUTIONS_MODE=queue
    - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    - QUEUE_BULL_REDIS_HOST=redis
    - QUEUE_BULL_REDIS_PORT=6379
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_DATABASE=n8ndb
    - DB_POSTGRESDB_USER=n8n
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
  volumes:
    - n8n_data:/home/node/.n8n
  depends_on:
    - postgres
    - redis
  restart: unless-stopped

The depends_on makes sure PostgreSQL and Redis start before n8n tries connecting. Volume mounts persist workflow files and custom nodes outside the container.

Start it: docker compose up -d n8n. Check the logs right away: docker logs -f n8n_main. You should see successful database connection messages and “Editor is now accessible via” without errors. Redis connection failures or database errors mean you need to stop and fix those before touching workers. If the main process can’t connect to its dependencies, nothing downstream will work correctly.

Access the web UI at http://your-vps-ip:5678. Create a test workflow but don’t execute yet – workers aren’t running, so executions will queue forever.

Deploy PostgreSQL Database

PostgreSQL stores workflows, credentials, execution history, and queue state. Queue mode needs PostgreSQL because multiple processes are hitting the n8n database simultaneously. SQLite can’t handle that safely – actually it can’t handle it at all without corrupting data.

Add the n8n PostgreSQL service to docker-compose.yml:

postgres:
  image: postgres:15-alpine
  container_name: n8n_postgres
  environment:
    - POSTGRES_USER=n8n
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRES_DB=n8ndb
  volumes:
    - postgres_data:/var/lib/postgresql/data
  restart: unless-stopped

The database credentials here match what you put in the main n8n config. The PostgreSQL container persists data in a Docker volume so contents survive restarts.

The database initializes itself automatically the first time the container starts. You don’t need to run SQL scripts manually – n8n handles schema creation when it first connects. Start PostgreSQL: docker compose up -d postgres. Check logs: docker logs n8n_postgres. Look for “database system is ready to accept connections.”
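Once n8n has connected at least once, you can confirm the schema actually exists from inside the container. A quick sanity check, assuming the container and database names used in this guide:

```shell
# List the tables n8n created in its database; once n8n has connected
# you should see entries like workflow_entity and execution_entity.
docker exec n8n_postgres psql -U n8n -d n8ndb -c '\dt'
```

An empty result just means n8n hasn’t run its migrations yet, not that PostgreSQL is broken.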

If you already started the main n8n process, it’ll connect to PostgreSQL and initialize tables automatically. Check n8n’s logs again for successful database migration messages.

Launch n8n Worker Processes

Workers are n8n instances running in worker mode. No web interface. No exposed ports. They pull jobs from Redis, execute workflows, and write results to PostgreSQL. That’s it.

Here’s the n8n worker service definition – similar to main but with key differences:

n8n-worker:
  image: n8nio/n8n:latest
  command: worker
  environment:
    - EXECUTIONS_MODE=queue
    - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    - QUEUE_BULL_REDIS_HOST=redis
    - QUEUE_BULL_REDIS_PORT=6379
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_DATABASE=n8ndb
    - DB_POSTGRESDB_USER=n8n
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
  depends_on:
    - postgres
    - redis
    - n8n
  restart: unless-stopped

The command: worker tells n8n to run in worker mode instead of serving the web UI. The n8n encryption key has to match the main process exactly. Workers decrypt credentials from the database using this key. A mismatch means workflows run but can’t access credentials, which creates silent failures that are hell to debug.
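One detail the service snippets in this guide reference but never declare: the named volumes. Docker Compose requires them in a top-level volumes block, so the complete docker-compose.yml also needs something like:

```yaml
# Top-level volume declarations (sibling of the services: key).
volumes:
  redis_data:
  postgres_data:
  n8n_data:
```

Without this block, docker compose up fails with an “undefined volume” error before any container starts.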

Start one worker initially: docker compose up -d n8n-worker. Check logs: docker compose logs n8n-worker. You should see “n8n worker is now ready” and connection messages for Redis and PostgreSQL. Workers log when they pick up jobs, so initially you’ll just see initialization messages.

Scale workers using docker compose scale: docker compose up -d --scale n8n-worker=3. This creates three worker containers from the same service definition. Each connects to the same Redis queue and PostgreSQL database, pulling jobs independently.

How many workers you need depends on your actual workload, not theory. Start with one or two. Monitor queue depth and execution times under real load. Scale up if jobs back up. Each worker eats 200-500MB RAM. Running ten workers on a 4GB VPS will cause OOM kills.

Test the n8n Queue Setup

Create a workflow with a Wait node set to 10 seconds. Add simple HTTP Request nodes before and after so you can watch execution flow. Save and activate.

Trigger the workflow manually three times in quick succession from the web interface. Standard mode would execute these sequentially – 30+ seconds total. With workers running, they execute in parallel.
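If your test workflow uses a Webhook trigger instead of manual runs, the same parallelism test can be scripted. The URL below is a placeholder – substitute the path your own webhook node generates:

```shell
# Fire three requests back-to-back; with workers running they should
# overlap in the execution list instead of running one after another.
for i in 1 2 3; do
  curl -s "http://your-vps-ip:5678/webhook/your-webhook-path" &
done
wait
```

The & backgrounds each curl so the requests arrive nearly simultaneously, which is what forces the parallel path.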

Check the execution list in n8n’s web UI. All three should show “Running” status simultaneously if workers are processing in parallel. Watch worker logs: docker compose logs n8n-worker shows which worker grabbed which execution. You’ll see messages like “Job picked up” with execution IDs.

The executions view in the web UI shows execution times. Compare them to standard mode. Queue mode often has slightly longer per-execution times due to queuing overhead, but total throughput increases massively because workflows run concurrently.

Monitor and Scale n8n Workers

Watch execution times and queue depth over real usage. If the Redis queue consistently grows during peak hours, workflows are arriving faster than workers can process them. Scale up.
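You can also peek at queue depth directly in Redis. n8n’s Bull queue is named jobs by default, so the waiting and active lists usually live under keys like the ones below – the exact key layout can vary between n8n and Bull versions, so treat this as a diagnostic sketch rather than a stable interface:

```shell
# Jobs waiting for a worker (consistently above 0 means workers lag behind).
docker exec n8n_redis redis-cli llen bull:jobs:wait

# Jobs currently being executed across all workers.
docker exec n8n_redis redis-cli llen bull:jobs:active
```

If the wait count keeps climbing during peak hours while active stays flat, that’s the clearest signal to add workers.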

Add workers on the fly: docker compose up -d --scale n8n-worker=5. New workers start immediately and begin pulling from the Redis queue. No workflow interruption. No config changes. Just more capacity. This is where n8n scalability really shows.

Scale down during quiet periods: docker compose up -d --scale n8n-worker=2. Docker stops excess workers gracefully – they finish current jobs before shutting down, so there are no mid-process kills.

Monitor per-worker resource usage with docker stats. This shows CPU and RAM for each worker container. If workers are consistently maxing out CPU or hitting memory limits, either optimize workflows or get a bigger VPS. Understanding n8n performance means tracking these metrics over time.

Enable metrics for deeper monitoring. Set N8N_METRICS=true and N8N_METRICS_INCLUDE_API_ENDPOINTS=true in environment variables. This exposes Prometheus-compatible metrics at the /metrics endpoint where you can track queue depth, execution counts, and worker performance over time.
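A quick way to confirm the endpoint is live – assuming the port mapping from this guide, and noting that the exact metric names differ between n8n versions:

```shell
# Prometheus text format; filter for queue-related series.
curl -s http://localhost:5678/metrics | grep -i queue
```

Point a Prometheus scrape job at the same endpoint to get historical queue-depth graphs instead of point-in-time snapshots.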

Conclusion

Queue mode turns n8n from a single-user tool into something that can run n8n production workloads. Splitting the web UI, job queuing, and distributed execution creates real n8n scalability that grows with your needs instead of hitting hard limits at the worst possible moment.

The setup isn’t simple. You orchestrated Docker Compose across multiple services, synchronized environment variables that have to match perfectly, deployed PostgreSQL and Redis infrastructure, and launched worker processes. But now you’ve got a workflow automation system handling hundreds of simultaneous executions without breaking.

What’s next matters as much as getting it running. Set up automated PostgreSQL backups – workflows, credentials, and execution history all live there. Monitor worker resource usage and scale proactively based on traffic patterns. Get log aggregation working so you can troubleshoot issues across distributed workers. Secure Redis if your VPS faces networks you don’t control.

VPS hosting means you control scaling decisions. Cloud services with managed n8n charge per execution or cap workflows. Running queue mode on your infrastructure means scaling based on actual resource costs, not subscription tiers. The trade-off is you handle complexity. But you own the capability.

n8n Queue Mode FAQ

What is n8n queue mode?

n8n queue mode splits the web interface from workflow execution using a Redis-based job queue and separate worker processes. The main n8n process handles the UI and incoming triggers, pushing workflow jobs into Redis queues. Worker processes watch those queues constantly, pulling jobs and executing workflows in parallel.

This enables horizontal scaling where you add more workers to handle increased loads without touching workflow code. It requires PostgreSQL instead of SQLite because multiple processes need concurrent database access. Redis coordinates job distribution across workers. All processes share the same encryption key to decrypt credentials from the database.

The result is dozens or hundreds of workflows running simultaneously instead of waiting in line behind a single execution thread.

When do you use n8n queue?

In short: when you’re hitting limits on a single instance. The signs include slow workflow executions, webhook timeouts, UI lag when opening workflows, and execution queues backing up during peak traffic. If you’re processing 100+ executions per hour or running long-duration workflows that block other tasks, queue mode makes sense.

It’s overkill for personal use or light automation with a few daily runs. Standard mode works fine when executions are infrequent and workflows complete quickly. Queue mode adds complexity – Redis, PostgreSQL, multiple containers, synchronized environment variables – that only pays off under real load.

Production environments running business-critical automation need queue mode for reliability. Even if current load doesn’t demand it, the architecture provides redundancy. One worker crashes and others keep processing. Database maintenance happens and workers briefly lose connection then resume without data loss because queue state persists in Redis and PostgreSQL.

What’s the best server configuration for n8n?

The best server configuration for n8n running queue mode starts at 2 CPU cores and 4GB RAM for small deployments. This handles the main process, PostgreSQL, Redis, plus one or two workers processing light workflows. You’ll use about 1GB for Redis and PostgreSQL combined, 500MB for the main n8n process, and 200-500MB per worker depending on workflow complexity.

Production setups handling serious loads want 4-8 CPU cores minimum and 8-16GB RAM. This supports multiple workers running concurrently without resource fights. PostgreSQL performance jumps with dedicated CPU cores and enough RAM for caching. Redis is lightweight but benefits from fast storage for queue persistence.

Monitor first, then scale based on actual usage rather than guessing. Run real workflows under expected load and watch CPU, RAM, and disk I/O using docker stats and monitoring tools. If workers are maxing out CPU, add cores or optimize workflows. If RAM fills up, reduce worker count or add more memory. Storage matters because PostgreSQL execution history grows over time, so plan for database growth in disk capacity estimates.
