Introduction
To install Coolify on Ubuntu 24.04, run the official one-line installer as root, wait for it to pull and start the Docker containers, then open the Coolify UI on port 8000 to complete the setup wizard. From there you can connect a Git repository, configure a domain, and deploy your first application — all through a browser. On a Raff Tier 3 VM (2 vCPU / 4 GB RAM), the installer completes in under 10 minutes and the first app deployment takes another 5.
We see this pattern constantly with Raff customers: teams start by manually deploying each service — FastAPI here, a Node.js app there, a Postgres instance somewhere else — and after three months they're spending more time managing deployment processes than building product. Coolify solves that. It is an open-source PaaS that runs on your own VM and gives you a Heroku-style deployment experience without the per-service billing. Connect a GitHub repository, set your environment variables, point a domain at it, and Coolify handles the Docker build, container lifecycle, SSL certificates, and reverse proxy configuration automatically.
The honest tradeoff: you own the infrastructure. If your VM goes down, your deployments go down with it. For teams that want the deployment simplicity of a managed PaaS but have predictable workloads and want control over their stack and costs, self-hosting Coolify on Raff is a rational decision. We've run Coolify internally on a Tier 3 VM to host several internal tools simultaneously — at peak, it used about 1.8 GB of RAM across six running containers, leaving comfortable headroom.
In this tutorial, you will prepare a Raff Ubuntu 24.04 VM for Coolify, run the installer, secure the initial setup, configure a domain and SSL, and deploy a sample application to verify the full pipeline works.
Note
Coolify manages its own Docker networking and reverse proxy (Traefik) internally. Do not install Nginx or another web server on the same VM before installing Coolify — port conflicts will break the setup. If you have an existing Nginx installation, stop it first with `sudo systemctl stop nginx && sudo systemctl disable nginx`.
Step 1 — Provision and Prepare the VM
Coolify's installer requires a minimum of 2 vCPU and 2 GB RAM. For comfortable operation with multiple deployments, use a Raff Tier 3 VM (2 vCPU / 4 GB RAM / 80 GB NVMe). The extra RAM headroom matters: each deployed application container consumes memory alongside Coolify's own services.
Update the system first:
```bash
sudo apt update && sudo apt upgrade -y
```
Install curl if it is not already present — the installer uses it:
```bash
sudo apt install -y curl
```
Confirm you have enough disk space. Coolify pulls Docker images for each deployment; 80 GB fills up faster than expected on busy instances:
```bash
df -h /
```
A fresh Raff Tier 3 VM shows roughly 70 GB free. If you are on a smaller tier, extend your storage with a Raff block storage volume before proceeding.
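If you want to gate the rest of the setup on that disk check, a small pre-flight script works well. This is a sketch, not part of the official installer; the 3 GB threshold reflects the free space the Docker image pulls need, as noted later in this guide.

```bash
#!/usr/bin/env bash
# Pre-flight sketch (not part of the official installer): warn if the
# root filesystem has less free space than the installer needs.
MIN_GB=3  # Coolify's Docker image pulls need roughly this much free
avail_kb=$(df --output=avail -k / | tail -n 1 | tr -d ' ')
avail_gb=$(( avail_kb / 1024 / 1024 ))
if [ "$avail_gb" -lt "$MIN_GB" ]; then
  echo "WARNING: only ${avail_gb} GB free on /; installer needs ~${MIN_GB} GB"
else
  echo "OK: ${avail_gb} GB free on /"
fi
```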
Step 2 — Open Required Ports in the Firewall
Coolify needs three ports reachable from the internet:
| Port | Protocol | Purpose |
|---|---|---|
| 80 | TCP | HTTP (Let's Encrypt challenges + HTTP→HTTPS redirect) |
| 443 | TCP | HTTPS (all deployed app traffic) |
| 8000 | TCP | Coolify dashboard (can be restricted after setup) |
Open them in UFW:
```bash
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 8000/tcp
sudo ufw status
```
Also open these ports in your Raff cloud firewall via the control panel. The cloud-level firewall operates independently of UFW — traffic blocked there never reaches the VM regardless of what UFW allows.
Tip
Port 8000 (the Coolify dashboard) should not remain open to the world in production. After completing setup, restrict it to your IP address or route it through your VPN. The dashboard has its own authentication, but limiting network exposure is still the right posture.
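At the UFW level, that tightening step can look like the sketch below. The IP `203.0.113.10` is a placeholder for your own static IP or VPN address; apply the same restriction in the Raff cloud firewall panel as well, since the two operate independently.

```bash
# Replace the world-open dashboard rule with a source-restricted one.
# 203.0.113.10 is a placeholder -- substitute your own IP or VPN address.
sudo ufw delete allow 8000/tcp
sudo ufw allow from 203.0.113.10 to any port 8000 proto tcp
sudo ufw status numbered
```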
Step 3 — Run the Coolify Installer
The Coolify project provides an official install script that handles Docker installation, image pulling, and initial container configuration. Run it as root:
```bash
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
```
The script will:
- Detect the OS and confirm Ubuntu 24.04 is supported
- Install Docker and Docker Compose if not already present
- Pull the Coolify Docker images (this is the longest step — 3–5 minutes depending on connection speed)
- Start the Coolify stack as Docker containers
- Print the dashboard URL when complete
Expected output at completion:
```
Coolify is ready to use!
Please visit http://<your-vm-ip>:8000 to get started.
```
Verify the Coolify containers are running:
```bash
sudo docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```
Expected output:
```
NAMES               STATUS         PORTS
coolify             Up 2 minutes   0.0.0.0:8000->8000/tcp
coolify-db          Up 2 minutes
coolify-redis       Up 2 minutes
coolify-realtime    Up 2 minutes
coolify-soketi      Up 2 minutes
```
All five containers should be in Up state. If any show Restarting, give them 30 seconds and check again — the database container sometimes takes a moment longer to initialize on the first boot.
Note
If the installer fails partway through, the most common causes are insufficient disk space (Docker image pulls require ~3 GB free) or a previous Docker installation with conflicting configuration. Run `sudo docker system prune -af` to clear stale images, then re-run the installer.
Step 4 — Complete the Setup Wizard
Open a browser and navigate to:
```
http://<your-vm-public-ip>:8000
```
You will land on the Coolify registration screen. Create your admin account — this is the master account for your Coolify instance. Use a strong password and save it somewhere secure. There is no "forgot password" flow unless you have email configured.
After registration, Coolify walks you through a short wizard:
1. Instance settings — Set your instance name and the domain you will use to access the Coolify dashboard itself (e.g., coolify.your-domain.com). This is separate from the domains your apps will use.
2. Server configuration — Coolify will auto-detect localhost as the server where it is running. Click Validate Server to confirm Docker connectivity. You should see a green checkmark within a few seconds.
3. Create a team — Coolify uses a team model even for solo use. Name it whatever you like — you can add team members later.
Once the wizard completes, you land on the main Coolify dashboard.
Step 5 — Configure Your Domain and SSL for the Dashboard
Accessing Coolify over HTTP on port 8000 is fine for initial setup, but you want HTTPS and a proper domain for production use. Coolify handles its own SSL via Let's Encrypt — you just point the domain.
In DNS, create an A record pointing coolify.your-domain.com to your Raff VM's public IP. Allow a few minutes for propagation.
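You can confirm propagation from the VM itself before continuing. This sketch assumes `dig` (from the `dnsutils` package on Ubuntu) and uses the third-party service ifconfig.me to show the VM's public IP for comparison; both are conveniences, not requirements.

```bash
# Install dig if it is not already present.
sudo apt install -y dnsutils
# The A record should print this VM's public IPv4 address.
dig +short A coolify.your-domain.com
# Print the VM's public IPv4 for comparison (uses a third-party service).
curl -4 -s https://ifconfig.me && echo
```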
In the Coolify dashboard, go to Settings → Instance Settings and set:
- App URL: `https://coolify.your-domain.com`
- Wildcard Domain: `your-domain.com` (Coolify uses this to auto-assign subdomains to deployed apps)
Click Save. Coolify will obtain a Let's Encrypt certificate for coolify.your-domain.com automatically through its internal Traefik reverse proxy. After a minute, navigate to:
```
https://coolify.your-domain.com
```
You should land on the Coolify dashboard over HTTPS with a valid certificate.
Once HTTPS is confirmed, go back to your Raff cloud firewall and restrict port 8000 to your IP address only. All dashboard traffic now flows through port 443 via Traefik — port 8000 no longer needs to be publicly accessible.
Step 6 — Connect a Git Repository and Deploy an Application
This step walks through deploying a real application to confirm the full pipeline works. We will deploy a simple Node.js app from a public GitHub repository — replace this with your own repository when you are ready.
In the Coolify dashboard:
1. **Create a new project.** Go to **Projects → New Project**. Name it whatever makes sense for your work.
2. **Add a new resource.** Inside the project, click **+ New Resource → Application**.
3. **Select source.** Choose **Public Repository** for this test. Paste the following URL:
```
https://github.com/coollabsio/coolify-examples
```
Select the branch main and the subdirectory nodejs-fastify (a minimal Node.js HTTP server included in Coolify's example repository).
4. **Configure the application.** Set:
   - Build Pack: `Nixpacks` (Coolify auto-detects Node.js and builds the container)
   - Port: `3000`
   - Domain: `app.your-domain.com`
Coolify will handle the DNS-to-container routing via Traefik automatically. Create the A record for app.your-domain.com pointing to your VM's IP before deploying.
5. **Deploy.** Click **Deploy**. Coolify opens a real-time build log. You will see:
```
Building image with Nixpacks...
Pushing image to local registry...
Starting container...
Health check passed.
Deployment successful.
```
Navigate to https://app.your-domain.com. The app is live over HTTPS — certificate obtained and configured automatically.
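If you prefer to verify from the VM instead of a browser, a quick `curl` check confirms both the HTTP status and the certificate issuer:

```bash
# Expect a 200-series status line from the deployed app.
curl -sI https://app.your-domain.com | head -n 1
# Show the certificate issuer -- it should be Let's Encrypt.
curl -sIv https://app.your-domain.com 2>&1 | grep -i 'issuer'
```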
Tip
For private repositories, go to Settings → Source → GitHub and connect your GitHub account via OAuth before adding the resource. Coolify stores the token securely and uses it for all subsequent deployments from that account.
Step 7 — Configure Automatic Deployments via Webhook
Coolify supports Git webhooks so every push to your main branch triggers a new deployment automatically — the closest thing to a managed PaaS experience on self-hosted infrastructure.
In your application's Coolify settings, click Webhooks. Copy the webhook URL shown — it looks like:
```
https://coolify.your-domain.com/webhooks/source/github/events/manual?token=<token>
```
In your GitHub repository, go to Settings → Webhooks → Add webhook:
- Payload URL: paste the Coolify webhook URL
- Content type: `application/json`
- Trigger: Just the push event
Click Add webhook. From this point, every git push to your configured branch triggers a Coolify build and deployment automatically. No CI/CD pipeline configuration, no YAML files, no Actions workflows required.
Test it: make a small change to your repository, push it, and watch the Coolify deployment log update within seconds.
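One low-risk way to trigger that test without touching any files is an empty commit (this assumes your configured branch is `main`; adjust if yours differs):

```bash
# Trigger the webhook without changing any code.
git commit --allow-empty -m "test: trigger Coolify webhook"
git push origin main
```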
Step 8 — Verify the Full Stack and Check Resource Usage
With at least one application deployed, confirm the full stack is healthy:
```bash
sudo docker ps --format "table {{.Names}}\t{{.Status}}"
```
All Coolify containers plus your application container should show Up.
Check current resource usage across all containers:
```bash
sudo docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```
On a Raff Tier 3 VM with the Coolify stack running and one Node.js app deployed, expected output is approximately:
```
NAME               CPU %   MEM USAGE / LIMIT
coolify            0.4%    180MiB / 3.8GiB
coolify-db         0.2%    95MiB / 3.8GiB
coolify-redis      0.1%    12MiB / 3.8GiB
coolify-realtime   0.1%    45MiB / 3.8GiB
coolify-soketi     0.1%    38MiB / 3.8GiB
your-app           0.2%    62MiB / 3.8GiB
```
Total: roughly 430 MB RAM at idle across the full stack, leaving over 3 GB available for additional application containers. This matches what we see in practice — a Tier 3 VM comfortably handles 8–10 small application containers alongside the Coolify overhead.
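If you want that total as a single number rather than a table, you can sum the memory column with a short awk pipeline. The sketch below runs against the sample values from the table above; on the VM, pipe the live output of `sudo docker stats --no-stream --format "{{.Name}} {{.MemUsage}}"` into the same awk script.

```bash
# Sum the per-container MiB values from captured `docker stats` output
# (sample figures from this tutorial; pipe the live command on the VM).
stats='coolify 180MiB
coolify-db 95MiB
coolify-redis 12MiB
coolify-realtime 45MiB
coolify-soketi 38MiB
your-app 62MiB'
total=$(echo "$stats" | awk '{ gsub(/MiB/, "", $2); sum += $2 } END { print sum }')
echo "Total container memory: ${total} MiB"
```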
Verify Coolify starts automatically after a reboot:
```bash
sudo reboot
```
After reconnecting via SSH (give it ~60 seconds), check that the containers came back up:
```bash
sudo docker ps --format "table {{.Names}}\t{{.Status}}"
```
All containers should show Up X seconds. The Coolify installer configures Docker to start on boot and the containers to restart automatically — no manual intervention required after a reboot or unexpected VM stop.
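If you would rather verify this without actually rebooting, inspecting each container's restart policy is a lighter check. This loop uses the container names listed earlier; each should print a restart-on-failure or always/unless-stopped policy rather than `no`.

```bash
# Confirm each Coolify container is configured to come back automatically.
for c in coolify coolify-db coolify-redis coolify-realtime coolify-soketi; do
  echo -n "$c: "
  sudo docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' "$c"
done
```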
Conclusion
You now have a self-hosted PaaS running on your Raff VM. Coolify handles Docker builds, SSL certificates, reverse proxy configuration, and Git-triggered deployments — the operational surface that would otherwise require separate tools or a managed platform subscription. Every new application is a few clicks and a domain name away.
A few things worth planning before you go further:
- Backups: Coolify stores its database in a Docker volume at `/data/coolify/database`. Set up automated Raff VM backups or a scheduled `pg_dump` export to protect your deployment configuration. Losing the Coolify database does not affect running containers, but you would need to reconfigure all your applications from scratch.
- Upgrade path: Coolify updates itself through the dashboard under Settings → Update. New versions pull updated Docker images and restart the stack with minimal downtime. Test updates on a staging instance before applying them to production if your uptime requirements are strict.
- Scaling limits: A single VM is a single point of failure. For workloads that need higher availability, Coolify's server management feature lets you add additional Raff VMs as deployment targets — distributing applications across multiple nodes while keeping the same UI.
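For the backup point above, a minimal cron-able sketch follows. It assumes the Postgres container is named `coolify-db` as shown earlier; the database user and name (`coolify` in both positions here) are assumptions — verify them in your instance before relying on this.

```bash
#!/usr/bin/env bash
# Nightly dump of the Coolify Postgres database (sketch, not official).
# The user/database name "coolify" is an assumption -- verify with:
#   sudo docker exec coolify-db env | grep POSTGRES
set -euo pipefail
BACKUP_DIR=/var/backups/coolify
mkdir -p "$BACKUP_DIR"
sudo docker exec coolify-db pg_dump -U coolify coolify \
  | gzip > "$BACKUP_DIR/coolify-$(date +%F).sql.gz"
# Keep the last 14 days of dumps.
find "$BACKUP_DIR" -name 'coolify-*.sql.gz' -mtime +14 -delete
```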
We chose to run Coolify on Raff's own internal infrastructure for tooling and staging environments specifically because the economics make sense: one Tier 3 VM at $19.99/month replaces what would otherwise be five or six individual managed service subscriptions. The tradeoff is ownership — but if you are already comfortable managing a Linux VM, the operational overhead is minimal.
For related tutorials in this cluster, the FastAPI deployment guide covers deploying a Python API manually without Coolify — useful context for understanding what Coolify is abstracting. The cloud firewall rules guide is worth reviewing after this tutorial to tighten your VM's network exposure now that multiple applications are running.
