Introduction
To self-host Supabase on Ubuntu 24.04, clone the official Supabase repository, copy and configure the environment file with secure secrets, bring the Docker Compose stack up, and place Nginx in front of it for HTTPS termination. The result is a fully private Supabase instance — PostgreSQL database, Auth, Storage, Realtime, and Studio — running entirely on your own infrastructure with no row limits, no egress fees, and no third-party access to your data. On a Raff Tier 3 VM (2 vCPU / 4 GB RAM), the stack is running and serving the Studio dashboard in under 45 minutes.
Supabase is an open-source backend-as-a-service built on PostgreSQL. It gives you a full relational database with row-level security, JWT-based authentication with OAuth providers, S3-compatible file storage, and a REST API generated automatically from your schema — all through a single Docker Compose stack. The cloud version is excellent, but at $25/month per project with egress limits and row caps, self-hosting becomes the right call once you have predictable workloads or need data residency control. We run a self-hosted Supabase instance on Raff infrastructure for our internal tooling — at idle with no active users, the full stack consumes approximately 1.1 GB of RAM across all containers on a Tier 3 VM, leaving substantial headroom for application traffic.
The main reason teams hesitate to self-host Supabase is the configuration surface. The Docker Compose file pulls in ten containers and the .env file has over 30 variables, most of which say things like CHANGE_ME_BEFORE_GOING_TO_PRODUCTION. This tutorial works through every critical variable, explains what it controls, and shows you exactly which ones require action versus which ones can stay at their defaults. Nothing is skipped.
In this tutorial, you will install Docker and Docker Compose on Ubuntu 24.04, clone and configure the Supabase self-hosted stack, generate secure secrets for all authentication keys, bring the stack up and verify all services are healthy, configure Nginx with SSL to serve the Studio and API over HTTPS, and run a connection test to confirm the full stack is operational.
Warning
Never skip the secrets configuration step. Running Supabase with the default placeholder values from the repository means your JWT signing key, service role key, and database password are public knowledge — identical to every other unconfigured Supabase instance. This is the single most dangerous misconfiguration in self-hosted Supabase deployments.
Step 1 — Install Docker and Docker Compose
If Docker is not already installed on your VM, install it using the official Docker APT repository. Do not use the docker.io package from Ubuntu's default repositories — it ships an outdated version that is incompatible with current Compose files.
```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y ca-certificates curl gnupg lsb-release
```
Add the Docker GPG key and repository:
```bash
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
Install Docker Engine and the Compose plugin:
```bash
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
Add your user to the docker group so you can run Docker commands without sudo:
```bash
sudo usermod -aG docker $USER
newgrp docker
```
Verify both tools are working:
```bash
docker --version
docker compose version
```
Expected output:
```
Docker version 27.x.x, build xxxxxxx
Docker Compose version v2.x.x
```
Any Docker 24+ release with a Compose v2 build works for this tutorial.
Step 2 — Clone the Supabase Repository
Supabase ships its self-hosted configuration in a docker subdirectory of the main repository. Clone only what you need:
```bash
git clone --depth 1 https://github.com/supabase/supabase
cd supabase/docker
```
The --depth 1 flag fetches only the latest commit, skipping the full git history. The repository is large — this keeps the clone fast.
Copy the example environment file to create your working configuration:
```bash
cp .env.example .env
```
Do not edit the Docker Compose file directly. All configuration happens through .env. This separation means you can pull upstream updates to docker-compose.yml without conflicts in your custom configuration.
List the key files you will be working with:
```bash
ls -la
```
Expected output includes:
```
docker-compose.yml   # The stack definition — do not edit
.env                 # Your configuration — edit this
volumes/             # Persistent data directories
```
Step 3 — Generate Secure Secrets
This is the step most guides rush past. Every secret in the .env file must be unique to your instance. Using the defaults means your instance shares JWT signing keys with every other misconfigured Supabase deployment on the internet.
You need to generate three values:
1. POSTGRES_PASSWORD — the database superuser password
```bash
openssl rand -base64 32
```
Copy the output. This becomes your POSTGRES_PASSWORD.
2. JWT_SECRET — the signing key for all JWTs issued by Supabase Auth
```bash
openssl rand -base64 64
```
This must be at least 32 characters. The 64-byte output gives you an 88-character base64 string — well above the minimum.
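The length arithmetic is easy to verify: base64 encodes every 3 input bytes as 4 output characters, padding the final group. A quick Python check (illustration only, not part of the setup):

```python
import base64
import math
import os

# base64 maps every 3 input bytes to 4 output characters, so 64 random
# bytes encode to ceil(64/3) * 4 = 88 characters.
secret = base64.b64encode(os.urandom(64)).decode()
print(len(secret))  # 88, well above the 32-character minimum for JWT_SECRET
assert len(secret) == math.ceil(64 / 3) * 4 == 88
```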
3. ANON_KEY and SERVICE_ROLE_KEY — JWTs pre-signed with your JWT_SECRET
These two tokens are not random strings — they are actual JWTs signed with your JWT_SECRET. Supabase provides a generator at https://supabase.com/docs/guides/self-hosting/docker#generate-api-keys, but you can also sign them locally with any JWT library that supports HS256. While you are here, the Supabase CLI is worth installing:
```bash
# Install the Supabase CLI (optional but useful for key generation)
curl -fsSL https://github.com/supabase/cli/releases/latest/download/supabase_linux_amd64.deb -o supabase.deb
sudo dpkg -i supabase.deb
```
Use the Supabase docs generator with your JWT_SECRET to produce the ANON_KEY and SERVICE_ROLE_KEY values. The generator takes your secret and outputs two signed JWTs — one for the anon role (safe for client-side use) and one for the service_role role (server-side only, bypasses row-level security).
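If you prefer not to paste your JWT_SECRET into a web page, the same two keys can be produced locally. The sketch below uses only the Python standard library and assumes the claim set the official generator emits (role, iss, iat, exp); verify one generated key against the docs generator before relying on this:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_key(jwt_secret: str, role: str, years: int = 10) -> str:
    """Sign a Supabase API key (an HS256 JWT) for the given role."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {"role": role, "iss": "supabase",
               "iat": now, "exp": now + years * 365 * 24 * 3600}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "."
                     + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    sig = hmac.new(jwt_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

jwt_secret = "replace-with-your-generated-JWT_SECRET"
print("ANON_KEY=" + make_key(jwt_secret, "anon"))
print("SERVICE_ROLE_KEY=" + make_key(jwt_secret, "service_role"))
```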
Warning
The SERVICE_ROLE_KEY bypasses all row-level security policies. Never expose it in client-side code, environment variables readable by users, or public repositories. Treat it with the same care as a database root password.
Once you have all five values, you are ready to edit the configuration.
Step 4 — Configure the Environment File
Open .env in your editor:
```bash
nano .env
```
Work through these variables in order. Every variable listed here requires your attention — skip none of them.
Database credentials:
```bash
POSTGRES_PASSWORD=<your-generated-password>
```
JWT configuration:
```bash
JWT_SECRET=<your-generated-jwt-secret>
ANON_KEY=<your-generated-anon-key>
SERVICE_ROLE_KEY=<your-generated-service-role-key>
```
Dashboard credentials — the Studio login:
```bash
DASHBOARD_USERNAME=admin
DASHBOARD_PASSWORD=<choose-a-strong-password>
```
Do not leave this as the default. The Studio dashboard exposes your full database and is the most sensitive surface in the stack.
Site URL — the public URL your Supabase instance will be reached at:
```bash
SITE_URL=https://supabase.your-domain.com
API_EXTERNAL_URL=https://supabase.your-domain.com
```
Replace supabase.your-domain.com with your actual domain. This value is embedded in auth confirmation emails and OAuth redirect URLs — it must be the final production domain, not a temporary IP address.
SMTP configuration — required for email-based authentication:
```bash
SMTP_ADMIN_EMAIL=admin@your-domain.com
SMTP_HOST=smtp.your-email-provider.com
SMTP_PORT=587
SMTP_USER=your-smtp-username
SMTP_PASS=your-smtp-password
SMTP_SENDER_NAME=Supabase
```
If you do not have an SMTP provider yet, leave these as placeholders for now — Auth will still work for OAuth providers and magic links will queue but not send. Come back to this before going to production.
Ports — by default Supabase Studio runs on port 3000 and the Kong API gateway on port 8000. Leave these at their defaults for now:
```bash
STUDIO_PORT=3000
KONG_HTTP_PORT=8000
KONG_HTTPS_PORT=8443
```
Nginx will proxy to these ports from the outside — external users will never see them directly.
Save and exit: Ctrl+O, Enter to confirm the filename, then Ctrl+X.
Double-check that none of your secrets are still set to the placeholder values from the example file:
```bash
grep -E "CHANGE_ME|your-super-secret|example" .env
```
This command should return no output. If it returns any lines, those variables still have placeholder values — do not proceed until they are replaced.
Step 5 — Start the Supabase Stack
Pull all Docker images before starting (this takes 3–5 minutes on a fresh VM):
```bash
docker compose pull
```
Bring the stack up in detached mode:
```bash
docker compose up -d
```
Monitor the startup process:
```bash
docker compose logs -f --tail=50
```
Watch for each service to reach a healthy state. The initialization sequence is: db → vector → kong → auth → rest → realtime → storage → imgproxy → meta → studio. The database (db) takes the longest on first boot as it initializes the PostgreSQL cluster and runs Supabase's schema migrations.
Press Ctrl+C to stop following logs once you see Studio start up. Then verify all containers are running:
```bash
docker compose ps
```
Expected output:
```
NAME                 STATUS    PORTS
supabase-db          healthy   0.0.0.0:5432->5432/tcp
supabase-kong        healthy   0.0.0.0:8000->8000/tcp
supabase-auth        healthy
supabase-rest        healthy
supabase-realtime    healthy
supabase-storage     healthy
supabase-imgproxy    healthy
supabase-meta        healthy
supabase-studio      healthy   0.0.0.0:3000->3000/tcp
supabase-vector      healthy
```
All containers should show healthy. If any show starting after two minutes, check their individual logs:
```bash
docker compose logs <container-name> --tail=30
```
The most common first-boot issue is the auth service failing because SITE_URL is not set to a reachable URL. If you see invalid SITE_URL in the auth logs, correct the value in .env and run docker compose up -d again.
Check resource usage with all containers running:
```bash
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```
On a Raff Tier 3 VM at idle, expect approximately 1.1 GB total RAM usage across the full stack — well within the 4 GB available.
Step 6 — Configure Nginx as a Reverse Proxy with SSL
The Supabase stack is running locally on ports 3000 and 8000. Nginx will terminate TLS and proxy external traffic to both services under a single domain.
Install Nginx and Certbot:
```bash
sudo apt install -y nginx certbot python3-certbot-nginx
```
Create a new Nginx server block:
```bash
sudo nano /etc/nginx/sites-available/supabase
```
Paste the following, replacing supabase.your-domain.com with your actual domain:
```nginx
server {
    listen 80;
    server_name supabase.your-domain.com;

    # Supabase Studio (dashboard)
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Required for Studio's WebSocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400s;
    }

    # Supabase API (Kong gateway)
    location /api/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Enable the site and test the configuration:
```bash
sudo ln -s /etc/nginx/sites-available/supabase /etc/nginx/sites-enabled/
sudo nginx -t
```
Expected output:
```
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
Reload Nginx:
```bash
sudo systemctl reload nginx
```
Obtain an SSL certificate with Certbot:
```bash
sudo certbot --nginx -d supabase.your-domain.com
```
Certbot will modify the Nginx config automatically to add TLS and redirect HTTP to HTTPS. After completion, verify HTTPS is working:
```bash
curl -I https://supabase.your-domain.com
```
Expected output:
```
HTTP/2 200
server: nginx
```
Navigate to https://supabase.your-domain.com in a browser. You should land on the Supabase Studio login screen. Enter the DASHBOARD_USERNAME and DASHBOARD_PASSWORD you set in Step 4.
Tip
After confirming HTTPS access through Nginx, close ports 3000 and 8000 in your Raff cloud firewall. These ports should never be directly accessible from the internet — all traffic should flow through Nginx on 443. Leave port 5432 (PostgreSQL) closed to the internet as well; connect to the database via SSH tunnel or your VPN.
Step 7 — Verify the Full Stack Is Operational
With Studio accessible, run a quick verification of each major Supabase service.
Check the API health endpoint:
```bash
curl https://supabase.your-domain.com/api/rest/v1/ \
  -H "apikey: <your-anon-key>" \
  -H "Authorization: Bearer <your-anon-key>"
```
Expected output:
```json
{"hint":null,"details":null,"code":"42P01","message":"no results"}
```
This response confirms the Kong API gateway is routing correctly and your ANON_KEY is valid. The "no results" message is expected — you have not created any tables yet.
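You can also confirm what a key asserts by decoding its payload segment, since a JWT is just two chunks of base64url-encoded JSON plus a signature. A small stdlib-only sketch (decoding only, no signature verification):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Paste your own key here; the role claim should be "anon" for the ANON_KEY
# and "service_role" for the SERVICE_ROLE_KEY.
# print(jwt_claims("<your-anon-key>")["role"])
```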
Verify PostgreSQL is accessible internally:
```bash
docker exec -it supabase-db psql -U postgres -c "\l"
```
Expected output lists the default databases including postgres and Supabase's internal databases (_supabase, supabase_auth).
Create a test table through the Studio:
Log into Studio at https://supabase.your-domain.com, navigate to Table Editor, and create a simple table:
- Table name: test_items
- Column: name (text, not null)
Insert a row through the Studio UI, then verify it via the API:
```bash
curl https://supabase.your-domain.com/api/rest/v1/test_items \
  -H "apikey: <your-anon-key>" \
  -H "Authorization: Bearer <your-anon-key>"
```
Expected output:
```json
[{"id":1,"name":"hello from Raff"}]
```
The full pipeline is confirmed: Studio → PostgreSQL → Kong REST API → HTTPS.
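The same REST call from application code is a thin wrapper over HTTP. A minimal stdlib sketch of that contract, where the /api prefix matches the Nginx location above and the key and table values are placeholders:

```python
import json
import urllib.request

def build_request(base_url: str, anon_key: str, table: str,
                  query: str = "select=*") -> urllib.request.Request:
    """Build a PostgREST GET request as Kong expects it: the apikey header
    for the gateway, plus the same JWT as a Bearer token for PostgREST."""
    return urllib.request.Request(
        f"{base_url}/rest/v1/{table}?{query}",
        headers={"apikey": anon_key, "Authorization": f"Bearer {anon_key}"},
    )

def rest_get(base_url: str, anon_key: str, table: str):
    """Fetch all rows the anon role is allowed to see (RLS still applies)."""
    with urllib.request.urlopen(build_request(base_url, anon_key, table)) as resp:
        return json.loads(resp.read())

# rows = rest_get("https://supabase.your-domain.com/api", "<your-anon-key>", "test_items")
```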
Verify Auth is functional:
```bash
curl -X POST https://supabase.your-domain.com/api/auth/v1/signup \
  -H "apikey: <your-anon-key>" \
  -H "Content-Type: application/json" \
  -d '{"email":"test@example.com","password":"testpassword123"}'
```
Expected output includes a user object with the created user's ID — confirming Auth is operational.
Step 8 — Configure Persistent Storage and Backups
By default, Supabase stores all PostgreSQL data in a bind-mounted host directory at ./volumes/db/data. This works, but understanding where your data lives matters before you go to production.
Confirm the volume mount locations:
```bash
docker inspect supabase-db | grep -A 10 '"Mounts"'
```
The db volume maps to ./volumes/db/data relative to the supabase/docker directory. On your Raff VM, this is a directory on the root NVMe SSD.
For production deployments, move the database volume to a dedicated Raff block storage volume to separate application code from persistent data and enable independent resizing:
```bash
# Attach a Raff block storage volume via the control panel, then mount it
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/supabase-data
sudo mount /dev/sdb /mnt/supabase-data
echo '/dev/sdb /mnt/supabase-data ext4 defaults 0 2' | sudo tee -a /etc/fstab
```
Update the volumes path in docker-compose.yml to point to /mnt/supabase-data for the db service, then restart the stack.
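One way to make that change without touching the upstream file is a Compose override, which docker compose merges automatically. This fragment is a sketch that assumes the db service mounts its data at /var/lib/postgresql/data inside the container; confirm the target path in your docker-compose.yml before using it:

```yaml
# docker-compose.override.yml — picked up automatically by `docker compose up`
services:
  db:
    volumes:
      # Replaces the upstream bind mount that targets the same container path
      - /mnt/supabase-data/db:/var/lib/postgresql/data
```

Keeping the change in an override file preserves the tutorial's rule from Step 2: the upstream docker-compose.yml stays unmodified and pulls cleanly.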
Set up automated database backups with a cron job:
```bash
sudo crontab -e
```
Add the following line to run a database dump daily at 2 AM:
```bash
0 2 * * * docker exec supabase-db pg_dumpall -U postgres | gzip > /mnt/supabase-data/backups/$(date +\%Y-\%m-\%d).sql.gz
```
Create the backups directory:
```bash
sudo mkdir -p /mnt/supabase-data/backups
```
Also enable automated Raff VM snapshots from the control panel as a full-VM safety net. A database dump protects your data at the application layer; a VM snapshot protects the entire configuration if something goes wrong at the infrastructure level.
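The cron line above accumulates dumps forever. A small rotation sketch in Python, where the 14-day retention and the script path in the comment are arbitrary choices rather than Supabase requirements:

```python
import time
from pathlib import Path

def prune_backups(backup_dir: str, keep_days: int = 14) -> list:
    """Delete .sql.gz dumps older than keep_days; return the filenames removed."""
    cutoff = time.time() - keep_days * 86400
    removed = []
    for dump in sorted(Path(backup_dir).glob("*.sql.gz")):
        if dump.stat().st_mtime < cutoff:
            dump.unlink()
            removed.append(dump.name)
    return removed

# Hypothetical cron entry, run after the nightly dump completes:
# 30 2 * * * python3 /opt/scripts/prune_backups.py
```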
Conclusion
You now have a fully operational self-hosted Supabase instance on your Raff VM: PostgreSQL with row-level security, Auth, Storage, Realtime, REST API via Kong, and Studio — all behind HTTPS, with persistent data storage and automated backups. The configuration surface looked intimidating before you started, but working through it variable by variable, the stack is actually straightforward. Most of the complexity is one-time setup.
A few things to handle before calling this production-ready:
- SMTP configuration: Go back to Step 4 and configure your SMTP credentials. Email-based auth flows will silently queue and never deliver until this is set. Resend, Postmark, and AWS SES all work reliably with the Supabase SMTP config.
- Row-level security policies: Supabase creates tables with RLS enabled but no policies by default, which means no rows are accessible through the API until you write policies. This is the correct secure default — but it surprises developers migrating from less secure databases. Write your first policy in Studio under Authentication → Policies before testing your application.
- Monitoring: Add uptime monitoring for https://supabase.your-domain.com/api/rest/v1/ — a health check on the Kong gateway confirms the full stack is reachable. If that URL stops responding, something significant has failed.
Self-hosting Supabase made sense for us on Raff because we needed a backend that we fully controlled, with no external dependencies for data at rest. The $19.99/month Tier 3 VM replaces what would be a $25/month Supabase Cloud Pro plan — and we are not constrained by row limits or egress pricing as usage grows. The tradeoff is maintenance ownership, but for teams already comfortable managing Linux VMs, the overhead is one upgrade command every few weeks.
For related reading in this cluster, the Coolify tutorial covers deploying your frontend or API layer on the same VM or a sibling VM — a natural pairing once your Supabase backend is running.
