Introduction
This tutorial shows you how to sync files to Raff Object Storage with rclone on a Raff Technologies Ubuntu 24.04 VM. You will configure Raff’s S3-compatible endpoint, create a secure remote, copy test files, run a safe dry-run sync, and verify that files can be restored, giving you a repeatable way to move application files, backups, logs, exports, and static assets into object storage.
rclone is a command-line file transfer tool that copies, syncs, lists, checks, and manages files across cloud storage systems. Raff Object Storage is S3-compatible object storage, which means rclone can connect to it using the S3 backend, the endpoint s3.raffusercloud.com, and your Raff S3 access credentials. This makes rclone useful when you want repeatable file synchronization instead of one-off uploads.
In this tutorial, you will install rclone, configure a Raff Object Storage remote, create a bucket, upload files, compare copy and sync, use --dry-run safely, restore files into a test directory, add exclusions, and schedule a daily sync job with systemd.
Warning
rclone sync can delete files from the destination if they no longer exist in the source. Always run rclone sync --dry-run first when working with production data.
Step 1 — Install rclone on Ubuntu 24.04
Start by installing rclone from Ubuntu’s package repository. This is the simplest installation method and works well for S3-compatible object storage workflows.
```bash
sudo apt update
sudo apt install -y rclone
```
Verify the installed version:
```bash
rclone version
```
Expected output will look similar to this:
```
rclone v1.x.x
- os/version: ubuntu 24.04
- os/kernel: 6.x.x
- os/type: linux
- os/arch: amd64
```
The exact version can differ depending on package updates. What matters is that the rclone command runs successfully.
If you need the newest upstream rclone release later, you can install from the official rclone install script. For this tutorial, the Ubuntu package is enough.
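If you do go the upstream route, the rclone project publishes an install script that fetches the latest stable binary. A minimal example, assuming you are comfortable piping a downloaded script into a root shell (download and review it first if not):

```bash
# Install the latest stable rclone release using the official install script.
# Review the script before running it if piping to a root shell concerns you.
curl https://rclone.org/install.sh | sudo bash
```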
Step 2 — Prepare Your Raff Object Storage Credentials
Before configuring rclone, collect your Raff Object Storage details from the Raff dashboard.
You need:
- S3 endpoint: https://s3.raffusercloud.com
- S3 access key
- S3 secret key
- Bucket name, or permission to create a bucket
- Region value: us-east-1
If you have not created object storage credentials yet:
- Log in to the Raff dashboard.
- Open Object Storage.
- Create or select a bucket.
- Generate or copy your S3 access key and secret key.
- Keep the secret key private.
Warning
Treat S3 keys like passwords. Do not commit them to Git, paste them into tickets, or store them in application code.
For this tutorial, replace the example bucket name with your own:
```
my-project-files
```
Use lowercase letters, numbers, and hyphens for bucket names. Avoid spaces, underscores, and uppercase characters.
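If you script bucket creation, a quick shell check can catch obviously invalid names before rclone ever contacts the API. This is an illustrative sketch using a simplified pattern, not the full S3 naming specification:

```bash
# Accept 3-63 characters of lowercase letters, digits, and hyphens that
# start and end with a letter or digit. (Simplified; not the full S3 rules.)
BUCKET="my-project-files"
if [[ "$BUCKET" =~ ^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$ ]]; then
  echo "Bucket name looks valid: $BUCKET"
else
  echo "Invalid bucket name: $BUCKET" >&2
fi
```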
Step 3 — Create a Root-Owned rclone Config File
Because the tutorial later schedules rclone with systemd, store the rclone configuration in /etc/rclone/raff-rclone.conf. This makes the config path explicit and avoids confusion between your user’s config and root’s config.
Create the config directory and an empty, locked-down config file:
```bash
sudo install -d -m 700 /etc/rclone
sudo touch /etc/rclone/raff-rclone.conf
sudo chmod 600 /etc/rclone/raff-rclone.conf
```
Read your credentials into temporary shell variables:
```bash
read -rp "Raff S3 access key: " RAFF_ACCESS_KEY_ID
read -rsp "Raff S3 secret key: " RAFF_SECRET_ACCESS_KEY
echo
```
Now create an rclone remote named raffs3:
```bash
sudo rclone config create raffs3 s3 \
  provider Other \
  env_auth false \
  access_key_id "$RAFF_ACCESS_KEY_ID" \
  secret_access_key "$RAFF_SECRET_ACCESS_KEY" \
  region us-east-1 \
  endpoint https://s3.raffusercloud.com \
  acl private \
  --config /etc/rclone/raff-rclone.conf
```
Clear the temporary shell variables:
```bash
unset RAFF_ACCESS_KEY_ID
unset RAFF_SECRET_ACCESS_KEY
```
Verify that the remote exists:
```bash
sudo rclone listremotes --config /etc/rclone/raff-rclone.conf
```
Expected output:
```
raffs3:
```
The remote name raffs3 is how you will reference Raff Object Storage in rclone commands.
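To double-check the stored settings such as the endpoint and region, you can print the parsed remote definition. Be aware that rclone config show includes the secret key in its output, so avoid running it in a shared or logged terminal:

```bash
# Print the parsed remote definition. WARNING: output includes the secret key.
sudo rclone config show raffs3 --config /etc/rclone/raff-rclone.conf
```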
Step 4 — List Existing Buckets and Create a Test Bucket
List buckets available to your credentials:
```bash
sudo rclone lsd raffs3: --config /etc/rclone/raff-rclone.conf
```
If your account already has buckets, you will see output similar to this:
```
          -1 2026-05-04 10:15:00        -1 my-project-files
```
If the bucket does not exist yet, create it:
```bash
sudo rclone mkdir raffs3:my-project-files --config /etc/rclone/raff-rclone.conf
```
List buckets again:
```bash
sudo rclone lsd raffs3: --config /etc/rclone/raff-rclone.conf
```
Expected output:
```
          -1 2026-05-04 10:16:00        -1 my-project-files
```
You now have an rclone remote and a Raff Object Storage bucket ready for uploads.
Step 5 — Create Sample Files to Sync
Create a local test directory with a few files. This keeps the first upload safe and easy to inspect.
```bash
mkdir -p ~/raff-rclone-demo/assets
mkdir -p ~/raff-rclone-demo/logs
mkdir -p ~/raff-rclone-demo/cache
echo "Raff Object Storage rclone demo" > ~/raff-rclone-demo/README.txt
echo "body { color: #db4a2b; }" > ~/raff-rclone-demo/assets/app.css
echo "application started" > ~/raff-rclone-demo/logs/app.log
echo "temporary cache file" > ~/raff-rclone-demo/cache/temp.txt
```
Inspect the directory:
```bash
find ~/raff-rclone-demo -type f -print
```
Expected output:
```
/home/ubuntu/raff-rclone-demo/assets/app.css
/home/ubuntu/raff-rclone-demo/logs/app.log
/home/ubuntu/raff-rclone-demo/cache/temp.txt
/home/ubuntu/raff-rclone-demo/README.txt
```
Your home directory may differ from /home/ubuntu. That is fine.
Step 6 — Upload Files with rclone copy
Use rclone copy for the first upload. The copy command uploads new and changed files but does not delete extra files from the destination.
```bash
sudo rclone copy ~/raff-rclone-demo raffs3:my-project-files/demo \
  --progress \
  --config /etc/rclone/raff-rclone.conf
```
Expected output will show transfer progress:
```
Transferred:             98 B / 98 B, 100%
Checks:                  0 / 0, -
Transferred:             4 / 4, 100%
Elapsed time:          2.1s
```
List the uploaded files:
```bash
sudo rclone ls raffs3:my-project-files/demo \
  --config /etc/rclone/raff-rclone.conf
```
Expected output:
```
       32 README.txt
       25 assets/app.css
       20 logs/app.log
       21 cache/temp.txt
```
Use copy when you want to upload or refresh files without deleting anything from object storage.
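Before trusting the upload, you can also ask rclone to compare both sides. The check command compares sizes and hashes (where the backend supports them) and exits non-zero if anything differs:

```bash
# Compare local files against the uploaded objects; reports any mismatches.
sudo rclone check ~/raff-rclone-demo raffs3:my-project-files/demo \
  --config /etc/rclone/raff-rclone.conf
```

If everything matches, the output ends with a notice similar to `0 differences found`.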
Step 7 — Test rclone sync with a Dry Run
Now test rclone sync. Unlike copy, sync makes the destination match the source. That means files removed locally can also be removed from object storage.
First, change the local folder:
bashecho "new release artifact" > ~/raff-rclone-demo/assets/release.txt
rm -f ~/raff-rclone-demo/cache/temp.txt
Run a dry run:
```bash
sudo rclone sync ~/raff-rclone-demo raffs3:my-project-files/demo \
  --dry-run \
  --progress \
  --config /etc/rclone/raff-rclone.conf
```
Expected output will show what rclone would do without actually doing it:
```
NOTICE: assets/release.txt: Skipped copy as --dry-run is set
NOTICE: cache/temp.txt: Skipped delete as --dry-run is set
```
This is the safest way to check a sync operation. The dry run tells you that rclone would upload assets/release.txt and delete cache/temp.txt from the destination.
If the dry-run output looks correct, run the real sync:
```bash
sudo rclone sync ~/raff-rclone-demo raffs3:my-project-files/demo \
  --progress \
  --config /etc/rclone/raff-rclone.conf
```
List the destination again:
```bash
sudo rclone ls raffs3:my-project-files/demo \
  --config /etc/rclone/raff-rclone.conf
```
Expected output:
```
       32 README.txt
       25 assets/app.css
       21 assets/release.txt
       20 logs/app.log
```
The cache/temp.txt file is gone from object storage because the source folder no longer contains it.
Warning
This delete behavior is why rclone sync --dry-run should become a habit. For one-way uploads that should never delete destination files, use rclone copy instead.
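If you want sync semantics with a safety net, one common pattern is rclone's --backup-dir flag: instead of deleting destination files, rclone moves them into a separate path that you can inspect and clean up later. A sketch, using a hypothetical demo-archive path in the same bucket:

```bash
# Files that sync would delete or overwrite are moved into demo-archive
# instead of being removed. The backup dir must not overlap the destination.
sudo rclone sync ~/raff-rclone-demo raffs3:my-project-files/demo \
  --backup-dir raffs3:my-project-files/demo-archive \
  --progress \
  --config /etc/rclone/raff-rclone.conf
```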
Step 8 — Restore Files from Raff Object Storage
A sync workflow is only useful if you can restore files. Create a clean restore directory:
```bash
rm -rf ~/raff-rclone-restore-test
mkdir -p ~/raff-rclone-restore-test
```
Copy files from Raff Object Storage back to the VM:
```bash
sudo rclone copy raffs3:my-project-files/demo ~/raff-rclone-restore-test \
  --progress \
  --config /etc/rclone/raff-rclone.conf
```
Inspect the restored files:
```bash
find ~/raff-rclone-restore-test -type f -print
```
Expected output:
```
/home/ubuntu/raff-rclone-restore-test/assets/app.css
/home/ubuntu/raff-rclone-restore-test/assets/release.txt
/home/ubuntu/raff-rclone-restore-test/logs/app.log
/home/ubuntu/raff-rclone-restore-test/README.txt
```
Compare the restored files with the current source:
```bash
diff -r ~/raff-rclone-demo ~/raff-rclone-restore-test
```
If the command returns no output, the restored files match the source.
This is the most important verification step. Uploading files is not enough. You need to prove you can retrieve them into a clean directory.
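You do not have to restore a whole prefix, either. For a single object, rclone's copyto command fetches one file to an exact local path; the README-single.txt name below is just for illustration:

```bash
# Restore a single object to an explicit local filename.
sudo rclone copyto raffs3:my-project-files/demo/README.txt \
  ~/raff-rclone-restore-test/README-single.txt \
  --config /etc/rclone/raff-rclone.conf
```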
Step 9 — Add Exclusions for Cache and Temporary Files
Most real sync jobs should exclude files that do not belong in object storage. Examples include cache directories, temporary files, dependency folders, local Git history, and runtime sockets.
Create a sample filter file:
```bash
cat > ~/raff-rclone-filter.txt <<'EOF'
- cache/**
- tmp/**
- node_modules/**
- .git/**
- *.tmp
- *.sock
+ **
EOF
```
rclone evaluates filter rules in order and stops at the first match: lines starting with `-` exclude matching paths, and the final `+ **` explicitly includes everything else.
Run a dry-run sync with the filter:
```bash
sudo rclone sync ~/raff-rclone-demo raffs3:my-project-files/demo-filtered \
  --filter-from ~/raff-rclone-filter.txt \
  --dry-run \
  --progress \
  --config /etc/rclone/raff-rclone.conf
```
If the dry run looks correct, run the real filtered sync:
```bash
sudo rclone sync ~/raff-rclone-demo raffs3:my-project-files/demo-filtered \
  --filter-from ~/raff-rclone-filter.txt \
  --progress \
  --config /etc/rclone/raff-rclone.conf
```
List the filtered destination:
```bash
sudo rclone ls raffs3:my-project-files/demo-filtered \
  --config /etc/rclone/raff-rclone.conf
```
Expected output:
```
       32 README.txt
       25 assets/app.css
       21 assets/release.txt
       20 logs/app.log
```
The cache directory should not appear. Filtering helps prevent object storage from filling with files that are easy to recreate and not useful during restore.
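Filter files also work with read-only commands, so you can preview what a rule set matches without touching object storage at all. Listing the local source through the same filter shows exactly which files a filtered sync would consider:

```bash
# List only the local files that pass the filter rules; cache/ should not appear.
rclone ls ~/raff-rclone-demo --filter-from ~/raff-rclone-filter.txt
```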
Step 10 — Create a Reusable Sync Script
Now turn the working command into a reusable script. This example syncs /srv/app/uploads to Raff Object Storage, which is a common pattern for application uploads, generated files, or exported reports.
Create a source directory for the example:
```bash
sudo mkdir -p /srv/app/uploads
echo "sample upload" | sudo tee /srv/app/uploads/example.txt > /dev/null
```
Create the script:
```bash
sudo nano /usr/local/sbin/rclone-raff-object-sync.sh
```
Add the following content:
```bash
#!/usr/bin/env bash
set -euo pipefail

CONFIG_FILE="/etc/rclone/raff-rclone.conf"
SOURCE_DIR="/srv/app/uploads"
DESTINATION="raffs3:my-project-files/uploads"
LOG_FILE="/var/log/rclone-raff-object-sync.log"

{
  echo "[$(date -u +'%Y-%m-%dT%H:%M:%SZ')] Starting rclone sync"
  echo "Source: ${SOURCE_DIR}"
  echo "Destination: ${DESTINATION}"

  rclone sync "$SOURCE_DIR" "$DESTINATION" \
    --config "$CONFIG_FILE" \
    --filter-from /etc/rclone/raff-sync-filter.txt \
    --transfers 4 \
    --checkers 8 \
    --fast-list \
    --stats 30s \
    --stats-one-line

  echo "Running destination listing check..."
  # head exits after 20 lines; the trailing "|| true" stops its SIGPIPE from
  # aborting the script under "set -o pipefail" when the listing is long.
  rclone lsf "$DESTINATION" \
    --config "$CONFIG_FILE" \
    --max-depth 2 | head -n 20 || true

  echo "[$(date -u +'%Y-%m-%dT%H:%M:%SZ')] rclone sync completed successfully"
} 2>&1 | tee -a "$LOG_FILE"
```
Create the filter file used by the script:
```bash
sudo nano /etc/rclone/raff-sync-filter.txt
```
Add this content:
```
- cache/**
- tmp/**
- node_modules/**
- .git/**
- *.tmp
- *.sock
+ **
```
Lock down the script and filter permissions:
```bash
sudo chown root:root /usr/local/sbin/rclone-raff-object-sync.sh
sudo chmod 750 /usr/local/sbin/rclone-raff-object-sync.sh
sudo chown root:root /etc/rclone/raff-sync-filter.txt
sudo chmod 600 /etc/rclone/raff-sync-filter.txt
```
Run the script manually:
```bash
sudo /usr/local/sbin/rclone-raff-object-sync.sh
```
Expected output:
```
[2026-05-04T12:00:00Z] Starting rclone sync
Source: /srv/app/uploads
Destination: raffs3:my-project-files/uploads
Transferred: ...
Running destination listing check...
example.txt
[2026-05-04T12:00:02Z] rclone sync completed successfully
```
If this works manually, you can safely schedule it.
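One refinement to consider before scheduling: if a manual run overlaps a timer-driven run, two syncs could operate on the same tree at once. A lightweight guard is to wrap the script in flock so a second invocation fails fast instead of overlapping. A sketch, assuming the lock path shown suits your system:

```bash
# Run the sync under an exclusive lock; -n makes a concurrent second
# invocation exit immediately instead of queueing behind the first.
sudo flock -n /run/rclone-raff-object-sync.lock \
  /usr/local/sbin/rclone-raff-object-sync.sh
```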
Step 11 — Schedule Daily Sync with systemd
Create a systemd service:
```bash
sudo nano /etc/systemd/system/rclone-raff-object-sync.service
```
Add this content:
```ini
[Unit]
Description=Sync files to Raff Object Storage with rclone
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/rclone-raff-object-sync.sh
```
Create a systemd timer:
```bash
sudo nano /etc/systemd/system/rclone-raff-object-sync.timer
```
Add this content:
```ini
[Unit]
Description=Run rclone Raff Object Storage sync daily

[Timer]
OnCalendar=*-*-* 03:15:00
Persistent=true
RandomizedDelaySec=15m
Unit=rclone-raff-object-sync.service

[Install]
WantedBy=timers.target
```
Reload systemd and enable the timer:
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now rclone-raff-object-sync.timer
```
Verify the timer:
```bash
systemctl list-timers rclone-raff-object-sync.timer
```
Expected output:
```
NEXT                         LEFT  LAST  PASSED  UNIT                           ACTIVATES
Tue 2026-05-05 03:15:00 UTC  15h   -     -       rclone-raff-object-sync.timer  rclone-raff-object-sync.service
```
Run the service once through systemd:
```bash
sudo systemctl start rclone-raff-object-sync.service
```
Check recent logs:
```bash
sudo journalctl -u rclone-raff-object-sync.service -n 50 --no-pager
```
You should see a successful sync log and a small destination listing.
Step 12 — Verify the Scheduled Sync End-to-End
The final step is to verify the whole workflow: local file, scheduled command, object storage destination, and restore path.
Add a new test file:
bashecho "scheduled sync verification" | sudo tee /srv/app/uploads/scheduled-test.txt > /dev/null
Run the service:
```bash
sudo systemctl start rclone-raff-object-sync.service
```
Confirm the file exists in Raff Object Storage:
```bash
sudo rclone ls raffs3:my-project-files/uploads \
  --config /etc/rclone/raff-rclone.conf
```
Expected output:
```
       14 example.txt
       28 scheduled-test.txt
```
Restore the synced files into a clean directory:
```bash
rm -rf ~/rclone-scheduled-restore-test
mkdir -p ~/rclone-scheduled-restore-test
sudo rclone copy raffs3:my-project-files/uploads ~/rclone-scheduled-restore-test \
  --config /etc/rclone/raff-rclone.conf \
  --progress
```
Inspect the restored files:
```bash
find ~/rclone-scheduled-restore-test -type f -print
```
Expected output:
```
/home/ubuntu/rclone-scheduled-restore-test/example.txt
/home/ubuntu/rclone-scheduled-restore-test/scheduled-test.txt
```
Check the restored file content:
```bash
cat ~/rclone-scheduled-restore-test/scheduled-test.txt
```
Expected output:
```
scheduled sync verification
```
You now have a working rclone sync workflow from a Raff VM to Raff Object Storage, plus a verified restore path.
Step 13 — Roll Back or Remove the Sync Job
If you need to disable the scheduled sync, stop and disable the timer:
```bash
sudo systemctl disable --now rclone-raff-object-sync.timer
```
To remove the systemd units:
```bash
sudo rm -f /etc/systemd/system/rclone-raff-object-sync.service
sudo rm -f /etc/systemd/system/rclone-raff-object-sync.timer
sudo systemctl daemon-reload
```
To remove the script and logs:
```bash
sudo rm -f /usr/local/sbin/rclone-raff-object-sync.sh
sudo rm -f /var/log/rclone-raff-object-sync.log
```
To remove the rclone config and filter:
```bash
sudo rm -f /etc/rclone/raff-rclone.conf
sudo rm -f /etc/rclone/raff-sync-filter.txt
```
This does not delete files already stored in Raff Object Storage. To remove test data, delete the demo paths carefully:
```bash
sudo rclone purge raffs3:my-project-files/demo \
  --config /etc/rclone/raff-rclone.conf
sudo rclone purge raffs3:my-project-files/demo-filtered \
  --config /etc/rclone/raff-rclone.conf
```
Warning
rclone purge deletes the target path and everything inside it. Double-check the destination before running purge against production buckets.
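If you want to preview exactly which objects a cleanup would remove before committing, rclone delete with --dry-run prints the files it would delete without changing anything:

```bash
# Preview the objects that would be removed; --dry-run makes no changes.
sudo rclone delete raffs3:my-project-files/demo \
  --dry-run \
  --config /etc/rclone/raff-rclone.conf
```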
Conclusion
You have configured rclone to sync files from an Ubuntu 24.04 Raff VM to Raff Object Storage using the S3-compatible endpoint s3.raffusercloud.com. You created a secure rclone config, uploaded files with copy, tested sync with --dry-run, restored files into a clean directory, added exclusions, and scheduled a daily sync with systemd.
This workflow is useful for application uploads, generated reports, static assets, logs, backup folders, and migration staging. Use rclone copy when you want to upload without deleting destination files. Use rclone sync when you want the destination to exactly match the source, and always test with --dry-run first.
For next steps, read How to Use Raff S3 Object Storage with AWS CLI if you want standard S3 commands, S3-Compatible Object Storage Use Cases for Developers if you want architecture guidance, and Automate Backups with Cron and Rsync on Ubuntu 24.04 if you want a traditional Linux file-backup workflow.
This tutorial was tested on a Raff Ubuntu 24.04 VM using Raff Object Storage as the rclone S3 destination.

