Introduction
To connect two Raff VMs over a private network, create a VPC (Virtual Private Cloud) from the Raff control panel, attach both VMs to it, and verify connectivity by pinging each VM's private IP address from the other. Traffic between VMs on the same VPC never leaves Raff Technologies' internal infrastructure — it is completely isolated from the public internet, never metered, and has significantly lower latency than traffic routed through a public IP.
A private network (VPC) is a logically isolated network segment within a cloud provider's infrastructure. VMs inside the same VPC can communicate with each other using private IP addresses, which are unreachable from the public internet. This matters the moment your architecture involves more than one server — a web server that should talk to a database, two application nodes behind a load balancer, or a staging environment that needs access to a shared cache. Sending that traffic over public IPs means every packet travels out to the internet and back in, adding latency, consuming bandwidth, and exposing service ports to the open internet unnecessarily.
In this tutorial, you will: create a private VPC network in the Raff control panel, attach two VMs to that network and confirm they receive private IPs, verify basic connectivity with ping, configure UFW on each VM to allow traffic from the private subnet, test a real inter-VM connection using iperf3, and learn the networking pattern used for web-server-to-database setups. By the end, you will have a working private channel between two Raff VMs that you can expand to as many nodes as your stack requires.
Note
This tutorial uses the Raff control panel for network creation and VM attachment. All OS-level commands run on Ubuntu 24.04. If your VMs run a different supported distro, the ping and iperf3 steps are identical — only the package manager differs.
Step 1 — Create a Private Network in the Raff Control Panel
To connect two VMs privately, you create a VPC network first, then attach VMs to it. Log in to the Raff control panel at rafftechnologies.com.
Navigate to Networking → Private Networks in the left sidebar and click Create Network.
Fill in the network details:
- **Name**: Give the network a meaningful name — for example, `app-private-net`. This is for your reference only.
- **IPv4 Subnet**: Raff assigns a private subnet automatically. The default range is typically within `10.0.0.0/8`. Accept the default or specify your own CIDR block — for example, `10.10.0.0/24` — if you plan to connect multiple VPCs and need predictable addressing.
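As a sanity check when choosing a CIDR block, the usable host count follows directly from the prefix length. A quick shell sketch (pure bash arithmetic, no tooling assumed):

```bash
# For IPv4, usable hosts = 2^(32 - prefix) minus the
# network and broadcast addresses.
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "/$prefix subnet: $hosts usable host addresses"
```

A `/24` gives 254 usable addresses, which is plenty for most application stacks; choose a wider prefix only if you expect hundreds of nodes.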
Click Create Network. The network appears in the list with status Active within a few seconds. Note the subnet CIDR — you will need it when writing UFW rules in Step 4.
Tip
You can create a private network before provisioning VMs and attach VMs during creation. If your VMs already exist, follow Step 2 to attach them afterward — both workflows produce the same result.
Step 2 — Attach Both VMs to the Private Network
With the private network created, attach your two VMs to it. You can do this from each VM's settings page without rebooting.
In the Raff control panel, navigate to Compute → Virtual Machines and click the name of your first VM (call it VM-A for this tutorial — for example, your web server).
Go to the Networking tab and click Attach Network. Select the private network you created in Step 1 from the dropdown. Click Attach.
Repeat this for your second VM (VM-B — for example, your database server): open its Networking tab, click Attach Network, select the same private network, and click Attach.
After attaching, both VM detail pages will show two network interfaces:
| Interface | Type | IP |
|---|---|---|
| `eth0` | Public | Your VM's public IPv4 address |
| `eth1` | Private | A private IP from your VPC subnet |
Note the private IP of each VM. You will use these throughout the rest of this tutorial. In the examples below:
- VM-A private IP: `10.10.0.2`
- VM-B private IP: `10.10.0.3`
Replace these with your actual assigned private IPs.
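If you intend to script the remaining steps, it can be convenient to record both addresses once. A small sketch using this tutorial's example values (substitute your own):

```bash
# The tutorial's example addresses; replace with your assigned private IPs.
export VMA_PRIVATE_IP=10.10.0.2
export VMB_PRIVATE_IP=10.10.0.3
echo "VM-A: $VMA_PRIVATE_IP  VM-B: $VMB_PRIVATE_IP"
```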
Tip
Private IPs are assigned from your VPC subnet range by DHCP. They persist for the life of the attachment — detaching and re-attaching may assign a different IP, so record them before proceeding.
Step 3 — Confirm the Private Interface is Active on Each VM
SSH into VM-A using its public IP and verify the new private interface appeared:
```bash
ip addr show
```
You should see two interfaces — eth0 (public) and eth1 (private):
```
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 203.0.113.10/24 brd 203.0.113.255 scope global eth0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 10.10.0.2/24 brd 10.10.0.255 scope global eth1
```
If eth1 is present but shows no IP, the interface may need to be brought up. Ubuntu 24.04 uses Netplan for network configuration. Check whether Netplan has picked up the new interface:
```bash
networkctl status
```
If eth1 appears as unmanaged or configuring, apply Netplan:
```bash
sudo netplan apply
```
After applying, re-run ip addr show and confirm eth1 has a private IP from your VPC subnet.
Run the same verification on VM-B:
```bash
ip addr show eth1
```
Expected output on VM-B:
```
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 10.10.0.3/24 brd 10.10.0.255 scope global eth1
```
Both VMs are now connected to the private network. The private interface is live and ready for traffic.
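For scripts or health checks, you can pull just the address out of `ip`'s output with awk. The sketch below parses a sample line copied from the output above; on a live VM you would pipe `ip -4 addr show eth1` through the same awk expression:

```bash
# Extract the bare IPv4 address from an `ip addr` inet line.
# On a VM: ip -4 addr show eth1 | awk '/inet / {sub(/\/.*/, "", $2); print $2}'
sample="    inet 10.10.0.2/24 brd 10.10.0.255 scope global eth1"
private_ip=$(echo "$sample" | awk '/inet / {sub(/\/.*/, "", $2); print $2}')
echo "eth1 private IP: $private_ip"
```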
Step 4 — Configure UFW to Allow Traffic from the Private Subnet
By default, UFW on Ubuntu 24.04 blocks all incoming traffic not explicitly allowed. Traffic between VMs on the private network arrives on eth1 from your VPC subnet and will be dropped unless you create a rule for it.
On VM-A, allow all incoming traffic from the VPC subnet:
```bash
sudo ufw allow from 10.10.0.0/24
```
Confirm the rule was added:
```bash
sudo ufw status
```
Expected output includes:
```
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
Anywhere                   ALLOW       10.10.0.0/24
```
Repeat the same command on VM-B:
```bash
sudo ufw allow from 10.10.0.0/24
```
Tip
If you prefer tighter rules, restrict by port rather than allowing all traffic from the subnet. For a database server that only needs to accept PostgreSQL connections from the web VM: `sudo ufw allow from 10.10.0.2 to any port 5432 proto tcp`. This allows only the web server's exact private IP to reach the database port. For monitoring tools that use Node Exporter on port 9100, add a separate rule from the monitoring VM's private IP.
Warning
If UFW is not yet enabled on your VMs, enable it carefully — always add the SSH allow rule first before enabling. See our UFW setup tutorial for the full initial configuration workflow. Enabling UFW without an SSH rule will lock you out.
Step 5 — Verify Connectivity with ping
With UFW configured, test basic connectivity between the two VMs using their private IPs. From VM-A, ping VM-B's private IP:
```bash
ping -c 4 10.10.0.3
```
Expected output:
```
PING 10.10.0.3 (10.10.0.3) 56(84) bytes of data.
64 bytes from 10.10.0.3: icmp_seq=1 ttl=64 time=0.38 ms
64 bytes from 10.10.0.3: icmp_seq=2 ttl=64 time=0.31 ms
64 bytes from 10.10.0.3: icmp_seq=3 ttl=64 time=0.29 ms
64 bytes from 10.10.0.3: icmp_seq=4 ttl=64 time=0.32 ms

--- 10.10.0.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.290/0.325/0.380/0.033 ms
```
Sub-millisecond round-trip time confirms the traffic is flowing entirely through Raff's internal infrastructure — not out to the internet and back. We consistently see under 0.5ms RTT between VMs on the same Raff private network within the Virginia data center, compared to 5–15ms when routing the same inter-VM traffic over public IPs.
From VM-B, ping VM-A to verify the return path:
```bash
ping -c 4 10.10.0.2
```
Both directions should succeed with consistent sub-millisecond latency. If ping fails, jump to the troubleshooting checklist at the end of this tutorial.
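If you want to feed that latency into a script or alert, the average RTT can be pulled out of ping's summary line. A sketch parsing the sample summary from above (on a VM, pipe the real `ping -c 4 10.10.0.3 | tail -1` through the same awk):

```bash
# ping's last line is "rtt min/avg/max/mdev = A/B/C/D ms";
# splitting on "/" puts the average in field 5.
summary="rtt min/avg/max/mdev = 0.290/0.325/0.380/0.033 ms"
avg_rtt=$(echo "$summary" | awk -F'/' '{print $5}')
echo "average RTT: ${avg_rtt} ms"
```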
Step 6 — Measure Actual Throughput with iperf3
ping confirms reachability but does not tell you bandwidth. Use iperf3 to measure the real throughput of your private connection — useful for sizing your VM tier before deploying bandwidth-intensive services.
Install iperf3 on both VMs:
```bash
sudo apt update && sudo apt install -y iperf3
```
On VM-B (the receiver), start the iperf3 server. By default it listens on TCP port 5201:
```bash
iperf3 -s
```
Expected output:
```
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
```
Open a second SSH session to VM-A (the sender) and run the iperf3 client against VM-B's private IP:
```bash
iperf3 -c 10.10.0.3
```
Expected output (results will vary by VM tier):
```
Connecting to host 10.10.0.3, port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.12 GBytes  9.63 Gbits/sec
[  5]   1.00-2.00   sec  1.13 GBytes  9.70 Gbits/sec
[  5]   2.00-3.00   sec  1.11 GBytes  9.57 Gbits/sec
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  11.2 GBytes  9.64 Gbits/sec  sender
[  5]   0.00-10.00  sec  11.2 GBytes  9.61 Gbits/sec  receiver
```
On Raff Tier 3 VMs (2 vCPU / 4 GB RAM) we measured approximately 9.6 Gbits/sec between two VMs on the same private network — fast enough for database replication, log streaming, or any internal service communication. Higher tiers will push closer to line rate.
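To turn that bitrate into a planning number, a quick back-of-envelope calculation helps. The sketch below estimates how long a bulk transfer would take at the measured rate; the 100 GB dataset size is an assumed example:

```bash
# seconds = (gigabytes * 8 bits per byte) / gigabits per second
data_gb=100      # assumed dataset size
link_gbit=9.6    # throughput measured with iperf3 above
seconds=$(awk -v d="$data_gb" -v r="$link_gbit" 'BEGIN {printf "%.0f", d * 8 / r}')
echo "~${seconds}s to move ${data_gb} GB at ${link_gbit} Gbit/s"
```

At roughly 83 seconds per 100 GB, even a full database reseed over the private network is a minutes-scale operation rather than an hours-scale one.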
After the test completes, stop the iperf3 server on VM-B with Ctrl+C.
Tip
To allow iperf3 server traffic through UFW without opening port 5201 globally, use a subnet-restricted rule: `sudo ufw allow from 10.10.0.0/24 to any port 5201 proto tcp`. This is already covered by the broad subnet rule from Step 4 — add a dedicated rule only if you later tighten the Step 4 rule.
Step 7 — Apply the Pattern: Web Server to Database
The private network is now working. Here is how you apply this same pattern to a real-world web-server-to-database setup — the most common reason teams set up a private network.
On VM-B (the database server), instead of the broad subnet allow rule from Step 4, use a tighter rule that allows only VM-A (the web server) to connect to the database port:
```bash
# Remove the broad allow if you added it
sudo ufw delete allow from 10.10.0.0/24

# Allow only VM-A's private IP to reach PostgreSQL
sudo ufw allow from 10.10.0.2 to any port 5432 proto tcp

# Allow only VM-A's private IP to reach MySQL (if applicable)
sudo ufw allow from 10.10.0.2 to any port 3306 proto tcp
```
On VM-A (the web server), update your application's database connection string to use VM-B's private IP instead of localhost or a public IP. For a typical .env file:
```bash
DB_HOST=10.10.0.3
DB_PORT=5432
DB_NAME=myapp
DB_USER=appuser
DB_PASS=your-database-password
```
The database port (5432 or 3306) is now completely invisible to the public internet. The only path to it is from VM-A's private IP over the VPC. This is the architecture behind any multi-tier Raff deployment: web servers and databases communicating on private IPs, with public-facing ports open only on the web tier.
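To confirm the path works before wiring up the application, you can probe the port with bash's built-in `/dev/tcp` pseudo-device, so no database client is required. The host and port below are placeholders chosen to demonstrate the closed case; on VM-A you would call it with `10.10.0.3` and `5432`:

```bash
# Probe a TCP port using bash's /dev/tcp; times out after 2 seconds.
check_port() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 on $1: open"
  else
    echo "port $2 on $1: closed or filtered"
  fi
}
check_port 127.0.0.1 65000   # placeholder: a port nothing listens on
```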
Troubleshooting
ping fails between VMs
Check that both VMs are attached to the same private network:
```bash
ip addr show eth1
```
Both should show IPs in the same subnet (e.g., 10.10.0.x/24). If one VM shows no eth1, the network was not attached or Netplan has not applied. Re-run sudo netplan apply and check again.
Check UFW rules on the receiving VM:
```bash
sudo ufw status
```
If no rule allows traffic from the private subnet, add it:
```bash
sudo ufw allow from 10.10.0.0/24
```
Check whether UFW is actually enabled:
```bash
sudo ufw status | head -1
```
If it shows Status: inactive, UFW is off, so the OS firewall is not what is blocking traffic; in that case check the network attachment itself or any Raff-level firewall rules instead.
Private IP not assigned after attachment
If ip addr show shows eth1 without an IP, Ubuntu's Netplan may not have a configuration for the new interface yet. Create a minimal Netplan snippet:
```bash
sudo nano /etc/netplan/60-private.yaml
```
Add:
```yaml
network:
  version: 2
  ethernets:
    eth1:
      dhcp4: true
```
Apply it:
```bash
sudo netplan apply
```
Then verify the IP appeared:
```bash
ip addr show eth1
```
iperf3 connection refused
Confirm the iperf3 server is running on VM-B:
```bash
sudo ss -tlnp | grep 5201
```
If port 5201 is not listening, the server is not running. Start it again: iperf3 -s. Also confirm the UFW rule on VM-B allows port 5201 from the private subnet.
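For automated checks, the same listener test can be wrapped so it prints a status line either way, which makes it suitable for a health-check script. A small sketch (run on VM-B):

```bash
# Report whether anything is listening on TCP 5201 without failing the script.
if ss -tln 2>/dev/null | grep -q ':5201 '; then
  echo "iperf3 server is listening on 5201"
else
  echo "nothing is listening on 5201; start it with: iperf3 -s"
fi
```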
Conclusion
You have connected two Raff VMs over a private network and verified the connection with ping and iperf3. Traffic between VMs on the same Raff VPC stays entirely within Raff Technologies' internal infrastructure, giving you sub-millisecond latency and full network bandwidth without any of the traffic being visible on the public internet.
The web-server-to-database pattern from Step 7 is the most common use of private networking — but the same setup works for any internal service: cache servers, message queues, inter-service APIs, log aggregation, and monitoring exporters. Add VMs to the private network, restrict UFW to allow only the specific ports and source IPs required, and no additional configuration is needed at the Raff infrastructure level.
As next steps, consider:
- Block storage: Attach a Raff block storage volume to your database VM for persistent, expandable storage that survives VM resizes
- Automated backups: Enable Raff automated backups on your database VM to protect data without installing any backup agent
- Private networking with more VMs: Add a third VM — a Redis cache or a monitoring server — to the same private network by repeating Step 2. All VMs in the same VPC can reach each other without any additional network configuration
This tutorial was tested on two Raff Tier 3 VMs (2 vCPU / 4 GB RAM each) running Ubuntu 24.04 LTS, connected to a /24 VPC subnet.

