Introduction
Store and retrieve files from Raff S3-compatible object storage using the AWS CLI — the same tool you would use with Amazon S3. Raff Technologies built its object storage on Ceph, providing an S3-compatible API endpoint at s3.raffusercloud.com that works with any existing S3 tool, library, or script without code changes.
Object storage is fundamentally different from the NVMe SSD block storage attached to your Raff VM. Block storage behaves like a hard drive — you mount it, create a file system, and access files through directory paths. Object storage is accessed over HTTP: you upload objects (files) to buckets (containers), and retrieve them with URLs or API calls. This makes it ideal for backups, media assets, application logs, static website files, and any data that needs to exist independently of a specific server.
In this tutorial, you will configure the AWS CLI to connect to Raff Object Storage, create buckets, upload and download files, generate pre-signed URLs for temporary access, and set up a sync workflow for automated backups.
Step 1 — Install the AWS CLI
The AWS CLI v2 works with any S3-compatible service, not just AWS. Install it on your Raff VM or local machine.
```bash
sudo apt update && sudo apt install -y unzip curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
rm -rf aws awscliv2.zip
```
Verify the installation:
```bash
aws --version
```
You should see output like `aws-cli/2.x.x`.
Step 2 — Get Your S3 Credentials from the Raff Dashboard
Before configuring the CLI, you need your S3 access key and secret key from the Raff dashboard.
- Log in to app.rafftechnologies.com
- Navigate to the Object Storage section
- Copy your Access Key and Secret Key
Warning
Treat your secret key like a password. Never commit it to version control, share it in chat, or include it in scripts that are publicly accessible.
Step 3 — Configure the AWS CLI for Raff
Create a named profile for Raff Object Storage. This lets you keep the Raff configuration separate from any AWS credentials you might also have.
```bash
aws configure --profile raff
```
Enter the following when prompted:
```
AWS Access Key ID: <your-access-key>
AWS Secret Access Key: <your-secret-key>
Default region name: us-east-1
Default output format: json
```
The region us-east-1 is used for S3 signature compatibility. Raff's storage is physically located in Virginia regardless of the region string.
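Behind the scenes, `aws configure` writes the profile to two files in `~/.aws/`. After answering the prompts, they should look roughly like this (placeholder values shown):

```ini
# ~/.aws/credentials
[raff]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-key>

# ~/.aws/config
[profile raff]
region = us-east-1
output = json
```

Editing these files directly is equivalent to re-running `aws configure --profile raff`.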
Now test the connection by listing your buckets:
```bash
aws s3 ls --endpoint-url https://s3.raffusercloud.com --profile raff
```
If you have no buckets yet, the command returns an empty result with no errors. If you see an error about invalid credentials, double-check your access key and secret key.
Tip
To avoid typing --endpoint-url and --profile on every command, create a shell alias:
```bash
echo 'alias raffs3="aws s3 --endpoint-url https://s3.raffusercloud.com --profile raff"' >> ~/.bashrc
echo 'alias raffs3api="aws s3api --endpoint-url https://s3.raffusercloud.com --profile raff"' >> ~/.bashrc
source ~/.bashrc
```
Now you can use `raffs3 ls` instead of the full command. The rest of this tutorial uses the full syntax for clarity.
Step 4 — Create a Bucket
Buckets are top-level containers for your objects. Bucket names must be globally unique across the Raff Object Storage namespace.
```bash
aws s3 mb s3://my-project-backups \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Expected output:
```
make_bucket: my-project-backups
```
Naming conventions for buckets:
- 3–63 characters, lowercase letters, numbers, and hyphens only
- Must start and end with a letter or number
- No periods (they break SSL with virtual-hosted URLs)
- Use a descriptive prefix: `myapp-assets`, `myapp-backups`, `myapp-logs`
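You can check a candidate name against these rules locally before calling `mb`. This is a small hypothetical helper, not part of the AWS CLI:

```bash
#!/usr/bin/env bash
# Return 0 if the name satisfies the bucket-naming rules above, 1 otherwise.
valid_bucket_name() {
  local name="$1"
  # 3-63 characters
  [[ ${#name} -ge 3 && ${#name} -le 63 ]] || return 1
  # lowercase letters, numbers, and hyphens; must start and end alphanumeric
  [[ "$name" =~ ^[a-z0-9][a-z0-9-]*[a-z0-9]$ ]] || return 1
  return 0
}
```

For example, `valid_bucket_name my-project-backups` succeeds, while `valid_bucket_name My-Bucket` and `valid_bucket_name has.dots` fail.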
Verify the bucket exists:
```bash
aws s3 ls --endpoint-url https://s3.raffusercloud.com --profile raff
```
Step 5 — Upload and Download Files
Upload a single file:
```bash
echo "Hello from Raff Object Storage" > test.txt
aws s3 cp test.txt s3://my-project-backups/test.txt \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Upload an entire directory:
```bash
aws s3 cp /var/log/nginx/ s3://my-project-backups/logs/nginx/ \
  --recursive \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
List objects in a bucket:
```bash
aws s3 ls s3://my-project-backups/ \
  --recursive \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Download a file:
```bash
aws s3 cp s3://my-project-backups/test.txt ./downloaded-test.txt \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Download everything from a bucket:
```bash
aws s3 cp s3://my-project-backups/ ./restore/ \
  --recursive \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Step 6 — Sync a Directory for Backups
The s3 sync command is the most practical feature for backups. It only uploads files that are new or changed, saving bandwidth and time.
```bash
aws s3 sync /home/myuser/myapp/ s3://my-project-backups/myapp/ \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff \
  --exclude "node_modules/*" \
  --exclude ".git/*" \
  --exclude "*.log"
```
This uploads only new or changed files, and the `--exclude` flags skip `node_modules`, `.git`, and log files. On subsequent runs, only files modified since the last sync are transferred.
To make this a scheduled backup, add it to cron. See our backup automation tutorial for the full cron setup, and add this S3 sync step as an additional off-server backup destination.
```bash
crontab -e
```
Add a daily backup at 3 AM. Use the full path to the binary (`/usr/local/bin/aws` for the installer above), because cron runs with a minimal PATH:

```
0 3 * * * /usr/local/bin/aws s3 sync /home/myuser/myapp/ s3://my-project-backups/myapp/ --endpoint-url https://s3.raffusercloud.com --profile raff --exclude "node_modules/*" --exclude ".git/*" >> /home/myuser/s3-backup.log 2>&1
```
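As the cron entry grows, it can help to build the sync command once in a wrapper script and log exactly what runs. A minimal sketch, using the same example paths and bucket as above (`build_sync_cmd` is a hypothetical helper, not an AWS CLI feature):

```bash
#!/usr/bin/env bash
# Assemble the sync command as an array so a cron wrapper can both log it
# and execute it without quoting surprises.
build_sync_cmd() {
  local src="$1" dest="$2"
  SYNC_CMD=(aws s3 sync "$src" "$dest"
    --endpoint-url https://s3.raffusercloud.com
    --profile raff
    --exclude "node_modules/*"
    --exclude ".git/*"
    --exclude "*.log")
}
```

A wrapper would then call `build_sync_cmd /home/myuser/myapp/ s3://my-project-backups/myapp/`, log `"${SYNC_CMD[*]}"`, and run `"${SYNC_CMD[@]}"`.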
Step 7 — Generate Pre-Signed URLs
Pre-signed URLs let you share a private object temporarily without making the bucket public.
```bash
aws s3 presign s3://my-project-backups/test.txt \
  --expires-in 3600 \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
This generates a URL valid for 1 hour (3600 seconds). Anyone with the URL can download the file without credentials. Common use cases: sharing database dumps with a team member, providing a download link in an email, serving temporary media files in an application.
Note
Pre-signed URLs include your access key (but not your secret key) as a query parameter. The URL is safe to share, but treat it like a temporary password — do not post it publicly unless the file is intended to be public.
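Since `--expires-in` always takes plain seconds, a small hypothetical convenience function (not part of the AWS CLI) can translate friendlier durations:

```bash
#!/usr/bin/env bash
# Convert a duration like 45s, 30m, 2h, or 1d into the seconds value
# that `aws s3 presign --expires-in` expects.
expires_in() {
  local n="${1%[smhd]}"   # strip the trailing unit letter, if any
  case "$1" in
    *s) echo "$n" ;;
    *m) echo $(( n * 60 )) ;;
    *h) echo $(( n * 3600 )) ;;
    *d) echo $(( n * 86400 )) ;;
    *)  echo "$1" ;;      # already a bare number of seconds
  esac
}
```

For example, `--expires-in "$(expires_in 2h)"` passes 7200 to the CLI.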
Step 8 — Delete Objects and Buckets
Delete a single object:
```bash
aws s3 rm s3://my-project-backups/test.txt \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Delete all objects in a prefix (like a directory):
```bash
aws s3 rm s3://my-project-backups/logs/ \
  --recursive \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Delete a bucket (must be empty first):
```bash
aws s3 rb s3://my-project-backups \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Force-delete a bucket and all its contents:
```bash
aws s3 rb s3://my-project-backups --force \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
Warning
The --force flag permanently deletes all objects in the bucket and then removes the bucket itself. There is no undo. Double-check the bucket name before running this command.
Step 9 — Set a Bucket Policy for Public Read Access
If you are serving static assets (images, CSS, JavaScript), you can make a bucket publicly readable.
```bash
cat > /tmp/public-read-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-static-assets/*"
    }
  ]
}
EOF

aws s3api put-bucket-policy \
  --bucket my-static-assets \
  --policy file:///tmp/public-read-policy.json \
  --endpoint-url https://s3.raffusercloud.com \
  --profile raff
```
After setting this policy, objects are accessible via https://s3.raffusercloud.com/my-static-assets/filename. Only use public read for content intentionally shared with the world — never for backups, logs, or sensitive data.
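If you manage several public buckets, you can generate the policy instead of editing the JSON by hand. This is a hypothetical helper built around the same policy document shown above:

```bash
#!/usr/bin/env bash
# Emit a public-read bucket policy for the given bucket name on stdout.
make_public_read_policy() {
  local bucket="$1"
  cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${bucket}/*"
    }
  ]
}
EOF
}
```

Usage: `make_public_read_policy my-static-assets > /tmp/public-read-policy.json`, then run the `put-bucket-policy` command as before.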
Step 10 — Verify Your Setup
Run through a final check to confirm everything works end-to-end.
List buckets:
```bash
aws s3 ls --endpoint-url https://s3.raffusercloud.com --profile raff
```
Upload, list, and download a test file:
```bash
echo "Final verification" > /tmp/verify.txt
aws s3 cp /tmp/verify.txt s3://my-project-backups/verify.txt \
  --endpoint-url https://s3.raffusercloud.com --profile raff
aws s3 ls s3://my-project-backups/ \
  --endpoint-url https://s3.raffusercloud.com --profile raff
aws s3 cp s3://my-project-backups/verify.txt /tmp/verify-downloaded.txt \
  --endpoint-url https://s3.raffusercloud.com --profile raff
cat /tmp/verify-downloaded.txt
```
You should see "Final verification" printed. Clean up:
```bash
aws s3 rm s3://my-project-backups/verify.txt \
  --endpoint-url https://s3.raffusercloud.com --profile raff
```
Conclusion
You connected the AWS CLI to Raff S3-compatible object storage and learned how to create buckets, upload and download files, sync directories for backups, generate temporary pre-signed URLs, and set bucket policies. Every S3 tool and library works the same way — just point the endpoint to s3.raffusercloud.com.
From here, you can:
- Automate database backups to S3 using our cron and rsync tutorial
- Use Raff Object Storage as a Docker registry backend or artifact store
- Integrate with application code using boto3 (Python), aws-sdk (Node.js), or any S3 SDK — see our guide on S3 use cases for developers
Raff Object Storage is built on Ceph and runs on the same NVMe infrastructure as our VMs. We designed it to handle high-throughput workloads — our internal benchmarks show sustained write speeds exceeding 500 MB/s for large objects when uploading from a VM in the same Virginia data center.