Introduction
n8n is an open-source workflow automation platform that connects APIs, services, and applications through a visual node-based editor. It supports over 400 integrations and lets you build automations ranging from simple data syncing to complex multi-step AI agent workflows — all without writing code for most tasks.
Unlike cloud-only automation services like Zapier or Make, self-hosting n8n on your own server gives you unlimited workflow executions at no per-task cost, full control over your data, and the ability to run custom JavaScript and Python code within your workflows. A single Raff VM running n8n can replace hundreds of dollars per month in cloud automation subscriptions.
In this tutorial, you will deploy n8n on your Raff Ubuntu 24.04 server using Docker Compose with PostgreSQL as the production database, configure Nginx as a reverse proxy with WebSocket support, secure it with HTTPS via Let's Encrypt, and verify the installation by creating your first workflow.
Step 1 — Installing Docker and Docker Compose
n8n's recommended deployment method is Docker Compose. If you have already installed Docker by following our Docker installation tutorial, you can skip to Step 2.
Update your system and install prerequisites:
```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y ca-certificates curl gnupg lsb-release
```
Add the official Docker GPG key and repository:
```bash
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
Install Docker Engine and the Compose plugin:
```bash
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
Add your user to the docker group so you can run commands without sudo:
```bash
sudo usermod -aG docker $USER
newgrp docker
```
Verify Docker and Compose are working:
```bash
docker --version
docker compose version
```
Both commands should return version numbers. Docker Compose comes bundled with Docker Engine now, so you do not need to install it separately.
Step 2 — Creating the Project Directory and Environment File
Create a dedicated directory for your n8n deployment, a data directory for persistent storage, and a directory for sharing files with workflows:
```bash
mkdir -p ~/n8n && cd ~/n8n
mkdir -p n8n_data local-files
```
n8n runs as user ID 1000 inside the container. Set the correct ownership on the data directory to avoid permission errors:
```bash
sudo chown -R 1000:1000 n8n_data
```
Warning
If you skip this step, n8n will fail to start with EACCES: permission denied errors when it tries to write to its data directory.
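You can confirm the ownership before starting the stack. This quick check uses `stat` (available on Ubuntu by default) and should report `1000:1000`:

```shell
# Print the numeric owner and group of the data directory.
# Expected output after the chown above: 1000:1000
stat -c '%u:%g' ~/n8n/n8n_data
```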
Now create an .env file to store your configuration. This keeps sensitive values like passwords and keys out of the Docker Compose file:
```bash
vi ~/n8n/.env
```
Add the following content, replacing the placeholder values with your own:
```bash
# Domain and SSL
DOMAIN_NAME=n8n.example.com
SSL_EMAIL=you@example.com

# PostgreSQL
POSTGRES_USER=n8n
POSTGRES_PASSWORD=your-strong-database-password-here
POSTGRES_DB=n8n

# n8n configuration
N8N_ENCRYPTION_KEY=your-random-encryption-key-here
N8N_HOST=n8n.example.com
N8N_PORT=5678
N8N_PROTOCOL=https
N8N_SECURE_COOKIE=true
WEBHOOK_URL=https://n8n.example.com/
N8N_EDITOR_BASE_URL=https://n8n.example.com/
GENERIC_TIMEZONE=UTC

# Performance and storage
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
```
Generate a strong random encryption key for the N8N_ENCRYPTION_KEY value:
```bash
openssl rand -hex 32
```
Copy the output and paste it as your encryption key value. This key encrypts all stored credentials (API keys, OAuth tokens, passwords) in the database. If you lose this key, all saved credentials become unrecoverable.
Warning
Save a copy of your N8N_ENCRYPTION_KEY in a secure location outside this server. There is no recovery mechanism — if you lose it, you must re-enter every credential in every workflow.
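If you want to sanity-check the key before saving it: 32 random bytes always encode to exactly 64 hexadecimal characters.

```shell
# Generate a key and verify it is 64 hex characters (32 bytes)
KEY=$(openssl rand -hex 32)
echo "${#KEY} characters"   # prints: 64 characters
```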
The WEBHOOK_URL must match your actual domain exactly. If it is wrong, every webhook trigger in n8n will generate broken callback URLs, which is a frustrating bug to track down.
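A small sketch like the following can catch that mismatch early. It assumes the `~/n8n/.env` layout from this tutorial and simply checks that `WEBHOOK_URL` is built from `N8N_HOST`:

```shell
# Load the variables and compare (assumes ~/n8n/.env as created above)
. ~/n8n/.env 2>/dev/null
if [ "$WEBHOOK_URL" = "https://$N8N_HOST/" ]; then
  echo "WEBHOOK_URL matches N8N_HOST"
else
  echo "Mismatch: WEBHOOK_URL=$WEBHOOK_URL, N8N_HOST=$N8N_HOST"
fi
```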
Set restrictive permissions on the .env file since it contains passwords:
```bash
chmod 600 .env
```
Step 3 — Creating the Docker Compose Configuration
Create the Docker Compose file that defines the n8n application and its PostgreSQL database:
```bash
vi ~/n8n/compose.yaml
```
Paste the following configuration:
```yaml
services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    shm_size: 512m
    command: >
      postgres
      -c shared_buffers=2GB
      -c effective_cache_size=6GB
      -c work_mem=64MB
      -c maintenance_work_mem=512MB
      -c max_connections=100
      -c checkpoint_completion_target=0.9
      -c wal_buffers=16MB
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "127.0.0.1:5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
      DB_POSTGRESDB_USER: ${POSTGRES_USER}
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      N8N_HOST: ${N8N_HOST}
      N8N_PORT: ${N8N_PORT}
      N8N_PROTOCOL: ${N8N_PROTOCOL}
      N8N_SECURE_COOKIE: "true"
      N8N_PROXY_HOPS: 1
      WEBHOOK_URL: ${WEBHOOK_URL}
      N8N_EDITOR_BASE_URL: ${N8N_EDITOR_BASE_URL}
      GENERIC_TIMEZONE: ${GENERIC_TIMEZONE}
      N8N_DEFAULT_BINARY_DATA_MODE: filesystem
      N8N_BINARY_DATA_STORAGE_PATH: /files
      EXECUTIONS_DATA_PRUNE: "true"
      EXECUTIONS_DATA_MAX_AGE: 168
    volumes:
      - ./n8n_data:/home/node/.n8n
      - ./local-files:/files

volumes:
  postgres_data:
```
There are several important details in this configuration:

- PostgreSQL over SQLite: n8n defaults to SQLite, which is fine for quick testing. But for production, PostgreSQL handles concurrent workflows much better and does not corrupt if your server crashes mid-write.
- PostgreSQL performance tuning: The `command` block passes optimized settings directly to PostgreSQL, tuned for an 8 GB RAM server. `shared_buffers=2GB` (25% of RAM) is the primary memory cache for database pages. `effective_cache_size=6GB` (75% of RAM) tells the query planner how much memory is available for OS-level caching, which improves query plan selection. `work_mem=64MB` gives each sort and hash operation room to work in memory instead of spilling to disk. `maintenance_work_mem=512MB` speeds up vacuum and index operations. `checkpoint_completion_target=0.9` spreads write I/O more evenly, reducing latency spikes.
- `shm_size: 512m`: PostgreSQL uses POSIX shared memory for its buffer pool. Docker's default of 64 MB is far too low for a 2 GB `shared_buffers` setting and would cause PostgreSQL to crash. Setting this to 512 MB provides enough headroom.
- Healthcheck: The `condition: service_healthy` ensures n8n waits for PostgreSQL to be fully ready before starting. Without it, you will get "connection refused" errors on first boot.
- Localhost binding: The `127.0.0.1:5678:5678` line means n8n is NOT exposed to the internet directly. All external traffic must go through the Nginx reverse proxy. This is a critical security measure.
- `N8N_PROXY_HOPS`: Since n8n sits behind Nginx, it needs to know there is one reverse proxy in front so it can correctly read the client's real IP address from forwarded headers. Set this to `1`.
- `N8N_SECURE_COOKIE`: Marks session cookies as Secure so they are only transmitted over HTTPS. Without this, cookies could theoretically be intercepted during the brief HTTP-to-HTTPS redirect window.
- `N8N_DEFAULT_BINARY_DATA_MODE: filesystem`: By default, n8n keeps binary data (file uploads, attachments processed in workflows) in memory. Even on an 8 GB VM, concurrent workflows processing large files can cause memory pressure. Writing binaries to disk via the `/files` volume is the recommended approach for single-node deployments.
- `EXECUTIONS_DATA_PRUNE` and `EXECUTIONS_DATA_MAX_AGE`: n8n stores a record of every workflow execution. Without pruning, the database grows unbounded and will eventually fill your disk. Setting `EXECUTIONS_DATA_MAX_AGE` to `168` (hours) automatically deletes execution logs older than 7 days. Adjust this value based on how long you need to review past executions.
- Volumes: Two persistence layers: `postgres_data` for the database and `./n8n_data` for n8n's configuration, encryption keys, and execution data. The `./local-files` mount serves double duty: it lets n8n's Read/Write Files from Disk node access files on the host, and it stores binary data when filesystem mode is enabled.
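If your VM has a different amount of RAM, the same 25%/75% rules of thumb scale the two main PostgreSQL settings. This sketch derives them from total memory; the variable names are illustrative, not part of the n8n or PostgreSQL configuration:

```shell
# Derive shared_buffers (25% of RAM) and effective_cache_size (75% of RAM)
TOTAL_MB=$(free -m | awk '/^Mem:/ {print $2}')
SHARED_BUFFERS_MB=$((TOTAL_MB / 4))
EFFECTIVE_CACHE_MB=$((TOTAL_MB * 3 / 4))
echo "shared_buffers=${SHARED_BUFFERS_MB}MB effective_cache_size=${EFFECTIVE_CACHE_MB}MB"
```

On an 8 GB (8192 MB) server this reproduces the 2 GB / 6 GB values used in the Compose file above.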
Step 4 — Starting n8n and Verifying the Containers
Start the containers in detached mode:
```bash
cd ~/n8n
docker compose up -d
```
Docker will pull the n8n and PostgreSQL images on the first run. This takes one to two minutes depending on your connection speed.
Check that both containers are running:
```bash
docker compose ps
```
You should see both n8n and postgres with status Up. If the n8n container shows Restarting, check the logs:
```bash
docker compose logs -f n8n
```
Look for the line `n8n ready on 0.0.0.0, port 5678`. This confirms n8n has connected to PostgreSQL and is accepting requests. Press `Ctrl+C` to exit the log view.
Common startup issues include incorrect database credentials in the `.env` file or incorrect permissions on the `n8n_data` directory. If you see permission errors, re-run `sudo chown -R 1000:1000 ~/n8n/n8n_data`.
Test that n8n is responding on localhost:
```bash
curl -s http://localhost:5678/healthz
```
You should see `{"status":"ok"}`. This confirms n8n is running and connected to the database.
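In provisioning or monitoring scripts you may want a non-interactive version of this check. A plain `grep` on the response body is enough; this sketch assumes the same localhost port used throughout this tutorial:

```shell
# Report whether n8n answers its health endpoint with a healthy status
if curl -s http://localhost:5678/healthz | grep -q '"status":"ok"'; then
  echo "n8n healthy"
else
  echo "n8n not responding"
fi
```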
Note
At this point n8n is only accessible from the server itself (localhost). The next step configures Nginx to make it available through your domain over HTTPS.
Step 5 — Configuring Nginx as a Reverse Proxy
Install Nginx if it is not already installed:
```bash
sudo apt install -y nginx
```
Create an Nginx server block configuration for your n8n domain:
```bash
sudo vi /etc/nginx/sites-available/n8n
```
Paste the following configuration, replacing n8n.example.com with your actual domain:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name n8n.example.com;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;

        client_max_body_size 50M;
        proxy_read_timeout 300s;
    }
}
```
Several settings in this configuration are critical for n8n to work correctly:

- WebSocket headers (`Upgrade` and `Connection "upgrade"`): n8n uses WebSocket connections for its live workflow editor. Without these, you will connect to n8n and see "Connection lost" errors repeatedly. This is the most common issue people run into.
- `client_max_body_size 50M`: Nginx defaults to a 1 MB body limit, which is too small if your workflows handle file uploads. Raising it to 50 MB prevents `413 Request Entity Too Large` errors.
- `proxy_read_timeout 300s`: Some workflows take several minutes to execute. The default 60-second timeout would cause Nginx to cut off long-running executions.
Enable the server block and test the configuration:
```bash
sudo ln -s /etc/nginx/sites-available/n8n /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```
If you have UFW configured, allow web traffic and explicitly block direct access to n8n's port:
```bash
sudo ufw allow OpenSSH
sudo ufw allow 80
sudo ufw allow 443
sudo ufw deny 5678
sudo ufw enable
```
The deny 5678 rule is a defense-in-depth measure. Even though n8n binds to localhost only, explicitly blocking the port in the firewall adds an extra layer of protection.
Step 6 — Securing n8n with HTTPS
n8n handles sensitive credentials — API keys, OAuth tokens, database passwords. Serving it over plain HTTP is not acceptable. Use Certbot to obtain a free TLS certificate from Let's Encrypt.
For a detailed walkthrough, see our Let's Encrypt tutorial.
Install Certbot with the Nginx plugin:
```bash
sudo apt install -y certbot python3-certbot-nginx
```
Obtain and install the certificate:
```bash
sudo certbot --nginx -d n8n.example.com
```
Certbot will ask for your email address (for renewal reminders), ask you to accept the Terms of Service, verify your domain ownership, and automatically update your Nginx configuration to use HTTPS with a redirect from HTTP.
After Certbot finishes, verify HTTPS is working:
```bash
curl -I https://n8n.example.com
```
You should see a 200 OK response. Certbot automatically sets up a renewal timer, so your certificate renews before it expires without any manual intervention.
Tip
Certbot auto-renews your certificates. Verify the timer is active with sudo systemctl status certbot.timer. You never have to think about certificate expiry again.
Step 7 — Creating Your Admin Account and First Workflow
Open https://n8n.example.com in your browser. You should see the n8n setup wizard with a padlock icon in the address bar, confirming HTTPS is active.
Create your owner account by entering your name, email, and a strong password. This account has full administrative access to your n8n instance.
After creating your account, you land on the n8n canvas — the visual workflow editor. Create a quick test workflow to verify everything works end to end:
- Click the + button to add a new node
- Search for Schedule Trigger and select it. This node runs workflows on a cron schedule, exactly the kind of trigger Zapier bills per execution; here it costs nothing.
- Set the trigger to run every minute (for testing)
- Add a second node: search for Set (Edit Fields) and select it
- Add a field called `message` with the value `Hello from n8n on Raff!`
- Connect the Schedule Trigger to the Set node
- Click Test Workflow to execute it manually
If the Set node shows your message in the output panel with green checkmarks on both nodes, your n8n instance is fully operational — the application is running, the database is connected, and workflows execute correctly.
After testing, either delete the workflow or set the Schedule Trigger to a less frequent interval. Leaving it at every minute generates unnecessary execution data.
Step 8 — Updating and Backing Up n8n
n8n releases new versions frequently. To update to the latest version:
```bash
cd ~/n8n
docker compose pull n8n
docker compose up -d --no-deps n8n
```
The --no-deps flag restarts only the n8n container without touching PostgreSQL. n8n automatically runs any required database migrations on startup.
Warning
The number one day-two issue with self-hosted n8n is database growth. If you did not set EXECUTIONS_DATA_PRUNE=true in your .env file, n8n stores every execution record forever. A workflow running every minute generates over 43,000 records per month. Check your database size periodically with docker compose exec postgres psql -U n8n -c "SELECT pg_size_pretty(pg_database_size('n8n'));".
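The arithmetic behind that number is worth internalizing when you pick a schedule interval:

```shell
# Executions per month for a one-minute schedule
PER_DAY=$((60 * 24))          # 1440 executions per day
PER_MONTH=$((PER_DAY * 30))   # 43200 executions in a 30-day month
echo "$PER_MONTH executions per month"
```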
Before updating, back up your data. Export all workflows:
```bash
docker compose exec n8n n8n export:workflow --all --output=/home/node/.n8n/backup-workflows.json
```
Back up the PostgreSQL database. Do not pass `-t` to `docker exec` here: allocating a pseudo-TTY rewrites line endings and can corrupt the SQL dump:

```bash
docker exec $(docker compose ps -q postgres) pg_dump -U n8n n8n > ~/n8n-db-backup-$(date +%Y%m%d).sql
```
To restore from a database backup:
```bash
cat ~/n8n-db-backup-20260320.sql | docker exec -i $(docker compose ps -q postgres) psql -U n8n n8n
```
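The two backup commands above can be combined into a small cron-friendly script. This is a sketch under the assumptions of this tutorial (the `~/n8n` layout and the `n8n` database user); the backup directory and the 14-dump retention count are arbitrary choices:

```shell
#!/usr/bin/env bash
# Nightly n8n backup sketch: workflow export + database dump, keep the last 14 dumps.
set -euo pipefail

BACKUP_DIR="$HOME/n8n-backups"
STAMP=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
cd "$HOME/n8n" 2>/dev/null || { echo "~/n8n not found, nothing to back up"; exit 0; }

# Export all workflows inside the container (the file lands in ./n8n_data)
docker compose exec -T n8n n8n export:workflow --all \
  --output=/home/node/.n8n/backup-workflows.json

# Dump the database to a dated file on the host (no -t: a TTY would mangle line endings)
docker exec "$(docker compose ps -q postgres)" pg_dump -U n8n n8n \
  > "$BACKUP_DIR/n8n-db-$STAMP.sql"

# Retention: keep only the newest 14 dumps
ls -1t "$BACKUP_DIR"/n8n-db-*.sql | tail -n +15 | xargs -r rm --
```

Save it as an executable file and schedule it with cron or a systemd timer.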
For additional protection, enable automated backups on your Raff VM. This creates point-in-time snapshots of your entire server, including Docker volumes and configuration files.
Tip
To pin n8n to a specific version instead of always pulling the latest, change the image line in compose.yaml to include a version tag like docker.n8n.io/n8nio/n8n:1.80.0. Check the n8n releases page for available versions.
Conclusion
You have deployed n8n on your Raff Ubuntu 24.04 VM with a production-ready stack: Docker Compose for orchestration, PostgreSQL with performance tuning for 8 GB RAM, Nginx as a reverse proxy with WebSocket support, HTTPS via Let's Encrypt for secure access, and production hardening including secure cookies, filesystem-based binary storage, and automatic execution log pruning. Your workflows, credentials, and execution data are persisted across restarts and updates.
From here, you can:
- Connect n8n to external services like Gmail, Slack, GitHub, and databases using its 400+ built-in integrations
- Build AI-powered workflows using n8n's LangChain nodes with OpenAI, Anthropic, or local LLM providers
- Set up webhook-triggered automations that respond to events from your applications in real time
- Schedule automated VM backups to protect your n8n data and configuration
Raff's n8n VM product is specifically optimized for this workload. With NVMe SSD storage for fast workflow execution, AMD EPYC processors for consistent performance, and unmetered bandwidth for webhook-heavy workloads, your automation platform has the infrastructure it needs to scale.