Introduction
Deploy a multi-container application on a single Raff VM using Docker Compose to run a Node.js web app backed by PostgreSQL and Redis — all defined in one YAML file. Raff Technologies provides NVMe SSD storage and dedicated vCPU VMs that handle containerized workloads with consistent I/O performance.
Docker Compose lets you define services, networks, and volumes in a declarative compose.yml file. Instead of running multiple docker run commands with long flag lists, you describe your entire stack once and manage it with docker compose up and docker compose down. This is the standard approach for single-server container deployments.
In this tutorial, you will build a simple task-tracking API with Node.js, connect it to PostgreSQL for persistent data and Redis for caching, configure health checks so containers start in the correct order, and use named volumes so your data survives container restarts.
Step 1 — Verify Docker and Compose Are Installed
Before building the stack, confirm that Docker Engine and the Compose plugin are available on your Raff VM. If you have not installed Docker yet, follow our Docker installation tutorial first.
```bash
docker --version && docker compose version
```
You should see output similar to:
```
Docker version 27.3.1, build ce12230
Docker Compose version v2.29.7
```
If docker compose returns "command not found," your Docker installation may be outdated. Reinstall Docker using the official repository method described in the prerequisite tutorial.
Step 2 — Create the Project Directory Structure
Set up the project layout. Keeping application code, configuration, and Compose files in one directory makes the project portable and easy to manage.
```bash
mkdir -p ~/task-api/{src,nginx}
cd ~/task-api
```
This creates:
- `src/` — Node.js application code
- `nginx/` — reverse proxy configuration (kept empty for now; used if you later add an Nginx reverse proxy)
Step 3 — Write the Node.js Application
Create a minimal Express.js API that connects to PostgreSQL for task storage and Redis for response caching. This is intentionally simple — the focus is on Compose, not the application itself.
```bash
cat > src/package.json << 'EOF'
{
  "name": "task-api",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "express": "^4.21.0",
    "pg": "^8.13.0",
    "redis": "^4.7.0"
  }
}
EOF
```
```bash
cat > src/index.js << 'APPEOF'
const express = require('express');
const { Pool } = require('pg');
const { createClient } = require('redis');

const app = express();
app.use(express.json());

const pool = new Pool({
  host: process.env.POSTGRES_HOST || 'db',
  port: 5432,
  user: process.env.POSTGRES_USER || 'taskuser',
  password: process.env.POSTGRES_PASSWORD || 'changeme',
  database: process.env.POSTGRES_DB || 'taskdb',
});

let redis;
async function initRedis() {
  redis = createClient({ url: `redis://${process.env.REDIS_HOST || 'cache'}:6379` });
  redis.on('error', (err) => console.error('Redis error:', err));
  await redis.connect();
}

async function initDB() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS tasks (
      id SERIAL PRIMARY KEY,
      title VARCHAR(255) NOT NULL,
      done BOOLEAN DEFAULT false,
      created_at TIMESTAMP DEFAULT NOW()
    )
  `);
}

app.get('/health', (req, res) => res.json({ status: 'ok' }));

app.get('/tasks', async (req, res) => {
  const cached = await redis.get('tasks');
  if (cached) return res.json(JSON.parse(cached));
  const { rows } = await pool.query('SELECT * FROM tasks ORDER BY created_at DESC');
  await redis.setEx('tasks', 30, JSON.stringify(rows));
  res.json(rows);
});

app.post('/tasks', async (req, res) => {
  const { title } = req.body;
  if (!title) return res.status(400).json({ error: 'Title is required' });
  const { rows } = await pool.query('INSERT INTO tasks (title) VALUES ($1) RETURNING *', [title]);
  await redis.del('tasks');
  res.status(201).json(rows[0]);
});

async function start() {
  await initRedis();
  await initDB();
  app.listen(3000, '0.0.0.0', () => console.log('Task API running on port 3000'));
}

start().catch(console.error);
APPEOF
```
Step 4 — Write the Dockerfile
Create a Dockerfile for the Node.js application. Using a multi-stage approach is overkill for this size, but using a slim base image keeps the container small.
```bash
cat > src/Dockerfile << 'EOF'
FROM node:22-slim
WORKDIR /app
COPY package.json ./
RUN npm install --omit=dev
COPY index.js ./
EXPOSE 3000
USER node
CMD ["node", "index.js"]
EOF
```
Key decisions:
- `node:22-slim` reduces the image size from roughly 1 GB to about 200 MB.
- Running as the `node` user (not root) is a basic security practice.
- Installing only production dependencies (`--omit=dev`) keeps the image lean.
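One optional addition, not part of the tutorial's files: if you ever run `npm install` inside `src/` on the host, the resulting `node_modules` directory inflates the Docker build context even though the Dockerfile never copies it. A `.dockerignore` in `src/` keeps it out:

```
node_modules
npm-debug.log
```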
Step 5 — Write the Docker Compose File
This is the core of the tutorial. The compose.yml file defines all three services, their dependencies, health checks, networks, and volumes.
```bash
cat > compose.yml << 'EOF'
services:
  app:
    build: ./src
    container_name: task-api
    restart: unless-stopped
    environment:
      POSTGRES_HOST: db
      POSTGRES_USER: taskuser
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: taskdb
      REDIS_HOST: cache
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_healthy
    networks:
      - backend

  db:
    image: postgres:16-alpine
    container_name: task-db
    restart: unless-stopped
    environment:
      POSTGRES_USER: taskuser
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: taskdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U taskuser -d taskdb"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend

  cache:
    image: redis:7-alpine
    container_name: task-cache
    restart: unless-stopped
    command: redis-server --maxmemory 64mb --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend

volumes:
  pgdata:
  redisdata:

networks:
  backend:
    driver: bridge
EOF
```
What each section does:
- `depends_on` with `condition: service_healthy` — the app container waits until PostgreSQL and Redis pass their health checks before starting. Without this, the app would crash on first boot because the database is not ready yet.
- Named volumes (`pgdata`, `redisdata`) — data persists across `docker compose down` and `docker compose up`. Without named volumes, you lose your database every time you recreate containers.
- Custom bridge network (`backend`) — containers resolve each other by service name (`db`, `cache`). This network is isolated from the host network.
- `restart: unless-stopped` — containers restart automatically after a crash or VM reboot, unless you explicitly stop them.
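The `service_healthy` gating handles first boot, but the database can still become briefly unavailable while the stack is running. As an optional hardening step (not part of the tutorial's application code), a small retry helper in the Node.js app keeps transient failures from crashing it:

```javascript
// Retry helper sketch: wraps an async operation and retries it with a
// fixed delay. Hypothetical addition to index.js, e.g. around the first
// pool.query() in start(); names and defaults are assumptions.
async function withRetry(fn, { attempts = 5, delayMs = 1000 } = {}) {
  for (let i = 1; i <= attempts; i += 1) {
    try {
      return await fn(); // success: return the operation's result
    } catch (err) {
      if (i === attempts) throw err; // out of attempts: surface the error
      console.error(`Attempt ${i} failed (${err.message}), retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Possible usage inside start(), before initDB():
//   await withRetry(() => pool.query('SELECT 1'));
```

This complements, rather than replaces, the Compose health checks: the health checks order startup, while the retry loop covers restarts and brief outages afterwards.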
Note
In production, never hardcode passwords in compose.yml. Use a .env file or Docker secrets. This tutorial uses inline values for clarity.
Step 6 — Build and Start the Stack
Build the application image and start all three containers in detached mode.
```bash
cd ~/task-api
docker compose up -d --build
```
Compose pulls postgres:16-alpine and redis:7-alpine, builds the app image from src/Dockerfile, creates the network and volumes, then starts containers in dependency order. First boot takes 1-2 minutes depending on your network speed.
Watch the startup logs to confirm everything connects:
```bash
docker compose logs -f
```
You should see PostgreSQL finishing initialization, Redis reporting "Ready to accept connections," and the app printing "Task API running on port 3000." Press Ctrl+C to stop following logs.
Step 7 — Test the Application
Verify the full stack works by creating and retrieving tasks through the API.
Check the health endpoint:
```bash
curl http://localhost:3000/health
```
Expected output:
```json
{"status":"ok"}
```
Create a task:
```bash
curl -X POST http://localhost:3000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Deploy with Docker Compose"}'
```
Expected output:
```json
{"id":1,"title":"Deploy with Docker Compose","done":false,"created_at":"2026-04-11T..."}
```
Retrieve all tasks:
```bash
curl http://localhost:3000/tasks
```
Run it twice — the second request is served from Redis cache (you will notice it in the app logs if you are tailing them).
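The `/tasks` handler implements the cache-aside pattern. A self-contained sketch, with a `Map` standing in for Redis and a stub function standing in for PostgreSQL (both assumptions for illustration), shows why only the first request touches the database:

```javascript
// Cache-aside sketch of the /tasks handler: check the cache first,
// fall back to the "database" on a miss, then populate the cache.
const cache = new Map();
let dbQueries = 0;

function queryDb() {
  dbQueries += 1; // each call represents one SELECT against PostgreSQL
  return [{ id: 1, title: 'Deploy with Docker Compose', done: false }];
}

function getTasks() {
  const cached = cache.get('tasks');
  if (cached) return JSON.parse(cached);    // cache hit: no DB round trip
  const rows = queryDb();                   // cache miss: query the DB
  cache.set('tasks', JSON.stringify(rows)); // real code: redis.setEx('tasks', 30, ...)
  return rows;
}

function invalidate() {
  cache.delete('tasks'); // mirrors redis.del('tasks') in the POST handler
}

getTasks(); // miss: queries the DB
getTasks(); // hit: served from the cache
console.log('DB queries:', dbQueries); // prints: DB queries: 1
```

The real handler also sets a 30-second TTL via `setEx`, so the cache expires on its own even if no write ever invalidates it.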
Step 8 — Manage the Running Stack
Learn the essential Compose commands for day-to-day management.
Check the status of all containers:
```bash
docker compose ps
```
You should see all three containers with status Up, with the db and cache containers reporting healthy (the app service defines no health check of its own).
View resource usage:
```bash
docker compose stats --no-stream
```
On a Raff Tier 3 VM (2 vCPU, 4 GB RAM), this stack typically uses about 250 MB of RAM at idle — PostgreSQL takes roughly 150 MB, Redis about 10 MB, and the Node.js app about 80 MB.
Stop the stack without deleting data:
```bash
docker compose stop
```
Start it again:
```bash
docker compose start
```
Tear down containers and networks but keep volumes (your data survives):
```bash
docker compose down
```
Tear down everything including volumes (destroys all data):
```bash
docker compose down -v
```
Warning
The -v flag deletes named volumes permanently. Only use it when you intentionally want to wipe all database and cache data.
Step 9 — Use Environment Variables Securely
Move sensitive configuration out of compose.yml and into a .env file that Compose reads automatically.
```bash
cat > .env << 'EOF'
POSTGRES_USER=taskuser
POSTGRES_PASSWORD=a-strong-random-password-here
POSTGRES_DB=taskdb
EOF
```
Update compose.yml to reference these variables. In the app service, replace the hardcoded environment block:

```yaml
    environment:
      POSTGRES_HOST: db
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      REDIS_HOST: cache
```

Apply the same substitution to the three `POSTGRES_*` values in the db service.
Add .env to .gitignore so credentials are never committed to version control:
```bash
echo ".env" > .gitignore
```
Rebuild and restart with the new configuration:
```bash
docker compose up -d --build
```
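One caveat: the app's `index.js` still carries fallbacks like `'changeme'`, so a typo in `.env` fails silently rather than loudly. An optional guard (a sketch, not part of the tutorial's code; the function name is an assumption) that fails fast when a required variable is missing:

```javascript
// Fail fast when required configuration is absent, instead of silently
// falling back to insecure defaults. Could be called at the top of
// index.js before creating the PostgreSQL pool.
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return names.map((name) => env[name]);
}

// Possible usage in index.js:
//   const [user, password, db] =
//     requireEnv(['POSTGRES_USER', 'POSTGRES_PASSWORD', 'POSTGRES_DB']);
```

With this in place, a missing `.env` entry crashes the container immediately, and `restart: unless-stopped` plus the logs make the misconfiguration obvious.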
Step 10 — Verify Data Persistence
Confirm that your data survives a full container teardown and rebuild — this validates that named volumes are working correctly.
Create a test task:
```bash
curl -X POST http://localhost:3000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Persistence test"}'
```
Tear down and recreate all containers (without the -v flag):
```bash
docker compose down
docker compose up -d
```
Wait a few seconds for health checks to pass, then retrieve tasks:
```bash
curl http://localhost:3000/tasks
```
Your "Persistence test" task should still be there. This confirms PostgreSQL data is stored on the named volume, not inside the container filesystem.
Conclusion
You deployed a multi-container application with Docker Compose on a Raff Ubuntu 24.04 VM. The stack includes a Node.js API, PostgreSQL database, and Redis cache — all defined in a single compose.yml file with health checks, named volumes, and a private bridge network.
From here, you can:
- Add an Nginx reverse proxy in front of the app with Let's Encrypt SSL
- Monitor the stack with Portainer for a visual container management UI
- Scale to multiple app containers with `docker compose up -d --scale app=3` behind a load balancer (first remove the app service's `container_name` and fixed `3000:3000` port mapping, since replicas cannot share either)
This stack runs comfortably on a Raff CPU-Optimized Tier 3 VM ($19.99/month) with room to grow. For production deployments needing more headroom, Tier 4 (4 vCPU, 8 GB RAM) at $36.00/month gives you capacity for additional services.
This tutorial was tested on a Raff 2 vCPU / 4 GB RAM VM with Docker Engine 27.x and Compose v2.x. The idle memory footprint measured approximately 250 MB across all three containers.