Introduction
Docker is the industry-standard platform for building, shipping, and running applications inside containers. By installing Docker on your Raff VM, you can isolate applications, simplify deployments, and scale services independently without worrying about dependency conflicts between projects.
Containers are lightweight alternatives to virtual machines. They share the host OS kernel while keeping applications isolated, which means they start in seconds and use a fraction of the resources a full VM would require. Docker containers are portable across environments — what runs on your local machine runs identically on your cloud server.
In this tutorial, you will add the official Docker APT repository to your Ubuntu 24.04 server, install Docker Engine along with the Docker Compose plugin, configure your user account to run Docker without sudo, and verify the installation by running a test container.
Step 1 — Update the Package Index and Install Prerequisites
Before adding external repositories, update your existing package index and install the packages that allow APT to use repositories over HTTPS.
```bash
sudo apt update
sudo apt install -y ca-certificates curl gnupg
```
The ca-certificates package provides the certificate authorities your system uses to validate TLS connections, curl downloads files from the web, and gnupg handles the GPG key verification that authenticates packages.
Step 2 — Add the Official Docker GPG Key and Repository
Docker packages are signed with a GPG key. Add this key to your system so APT can verify package integrity.
Create the keyring directory and download the key:
```bash
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
```
Now add the Docker repository to your APT sources:
```bash
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
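The two command substitutions in that line resolve to your CPU architecture and your Ubuntu release codename. If you want to preview what they will expand to before adding the repository, you can run this short sketch (the `uname -m` fallback is only for systems without `dpkg`):

```shell
# Preview the values substituted into the repository definition
arch=$(dpkg --print-architecture 2>/dev/null || uname -m)
if [ -r /etc/os-release ]; then . /etc/os-release; fi
echo "arch=$arch codename=${VERSION_CODENAME:-unknown}"
```

On an x86_64 Ubuntu 24.04 host this should print `arch=amd64 codename=noble`.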
Update the package index again to include Docker packages:
```bash
sudo apt update
```
You should see the Docker repository listed in the output, confirming it was added successfully.
Note
This installs Docker from Docker's official repository, not the docker.io package in Ubuntu's default repository. The official repository provides the latest stable version with more frequent updates and the full plugin ecosystem.
Step 3 — Install Docker Engine
Install Docker Engine, the CLI tool, containerd (the container runtime), and the Buildx and Compose plugins:
```bash
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
This installs the latest stable version of Docker Engine along with Docker Compose v2, which you will need for deploying multi-container applications.
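Compose v2 is invoked as the `docker compose` subcommand rather than the older standalone `docker-compose` binary. As a preview of what you will use it for, here is a minimal sketch of a Compose file for a single web service, written out via a heredoc — the service name and host port are illustrative placeholders, not part of this tutorial:

```shell
# Write a minimal example compose.yml (hypothetical single-service setup)
cat > compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF

# Sanity-check that the file was written as expected
grep -q "nginx:alpine" compose.yml && echo "compose.yml created"
```

Once Docker is installed, you would bring this stack up with `docker compose up -d` and tear it down with `docker compose down`.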
Verify the installation by checking the Docker version:
```bash
docker --version
```
Expected output:
```
Docker version 27.x.x, build xxxxxxx
```
Verify that Docker Compose is also installed:
```bash
docker compose version
```
Expected output:
```
Docker Compose version v2.x.x
```
Step 4 — Start and Enable the Docker Service
Docker should start automatically after installation. Confirm it is running:
```bash
sudo systemctl status docker
```
You should see active (running) in the output.
If Docker is not running, start it manually and enable it to start at boot:
```bash
sudo systemctl start docker
sudo systemctl enable docker
```
Here are the essential systemd commands for managing Docker:
```bash
sudo systemctl stop docker      # Stop Docker
sudo systemctl start docker     # Start Docker
sudo systemctl restart docker   # Restart Docker
sudo systemctl status docker    # Check status
```
Step 5 — Configure Non-Root Docker Access
By default, Docker commands require sudo. Add your user to the docker group to run commands without elevated privileges.
```bash
sudo usermod -aG docker $USER
```
Apply the group change in your current shell without logging out (newgrp opens a subshell with the updated membership; other sessions pick up the change after you log out and back in):

```bash
newgrp docker
```
Warning
Adding a user to the docker group grants root-equivalent privileges on the host system. Only add trusted users to this group. On shared servers, consider keeping the sudo requirement for tighter access control.
Verify that you can run Docker without sudo:
```bash
docker ps
```
This should return an empty container list without any permission errors.
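If `docker ps` still reports a permission error, your session has not picked up the new group yet. A quick sketch for checking whether the current shell session actually carries the docker group:

```shell
# Show whether this shell session has picked up the docker group yet
if id -nG | grep -qw docker; then
  echo "docker group: active in this session"
else
  echo "docker group: not active yet (log out and back in, or run 'newgrp docker')"
fi
```

Note that `id -nG` reflects the running session, so a freshly added group only appears here after you start a new login session or a `newgrp docker` subshell.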
Step 6 — Verify the Installation with a Test Container
Run the Docker hello-world container to confirm the entire pipeline works — image pull, container creation, and execution:
```bash
docker run hello-world
```
You should see output that includes:
```
Hello from Docker!
This message shows that your installation appears to be working correctly.
```
This confirms that Docker can pull images from Docker Hub, create containers, and execute them on your server.
To see the container that was created (it exits immediately after printing the message):
```bash
docker ps -a
```
Expected output:
```
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS     NAMES
a1b2c3d4e5f6   hello-world   "/hello"   30 seconds ago   Exited (0) 29 seconds ago             friendly_name
```
Clean up the stopped test container:
```bash
docker rm $(docker ps -aq --filter "ancestor=hello-world")
```
Step 7 — Run a Practical Test with Nginx
To confirm Docker works with a real-world application, run an Nginx web server container:
```bash
docker run -d --name test-nginx -p 8080:80 nginx
```
This command does the following:
- `-d` — Runs the container in detached (background) mode
- `--name test-nginx` — Assigns the name "test-nginx" to the container
- `-p 8080:80` — Maps port 8080 on your server to port 80 inside the container
- `nginx` — Uses the official Nginx image from Docker Hub
Verify the container is running:
```bash
docker ps
```
You should see the test-nginx container with status Up.
Test that Nginx is serving content:
```bash
curl http://localhost:8080
```
You should see HTML output containing "Welcome to nginx!", confirming that the containerized web server is working.
When you are done testing, stop and remove the container:
```bash
docker stop test-nginx
docker rm test-nginx
```
Tip
To allow external access to the Nginx container, open port 8080 in UFW: `sudo ufw allow 8080/tcp`. Then visit `http://your_server_ip:8080` in your browser.
Conclusion
You have installed Docker Engine on your Raff Ubuntu 24.04 VM from the official Docker repository, configured non-root access, installed the Docker Compose plugin, and verified the installation with both a test container and a practical Nginx deployment. Your server is now ready to run containerized applications.
From here, you can:
- Deploy multi-container applications using Docker Compose
- Explore Docker Hub for pre-built images of databases, web servers, and application frameworks
- Set up self-hosted tools like Uptime Kuma, Nextcloud, or n8n using Docker
- Configure automated backups of your Docker volumes using Raff's snapshot and backup features
Raff VMs provide NVMe SSD storage and AMD EPYC processors, which give Docker containers fast I/O and consistent compute performance for production workloads. With unmetered bandwidth on all tiers, you can pull images and serve containerized applications without worrying about transfer costs.