Scope: UCR Researchers, External Collaborators, Project Polaris Migrants
Audience: All Users
Last Updated: Feb 23, 2026
Ceph Research Data Service (CephRDS) is UCR’s premier on-premises object storage cluster, funded by the NSF CC* program. It provides 2 PB of highly resilient (6+3 Erasure Coded), S3-compatible storage designed specifically for active research data and large-scale computational workflows.
Benefits:
- Works with standard S3 tooling: rclone, Python (boto3), and visual clients (Cyberduck, Mountain Duck).

We operate a Split Allocation model to provide the best experience and ensure security; the onboarding path depends on your affiliation.
If you are a UCR PI needing storage for your lab, or migrating from an older system: email research-computing@ucr.edu with your requested storage quota (in TB) and your grant Chart of Accounts (COA) for billing (if exceeding the initial allocation).

If you are an external collaborator (e.g., from UCSD or another institution) working with a UCR lab on data hosted in a UCR bucket, we do not use an external federation portal. Instead, your UCR host must email research-computing@ucr.edu or submit a BearHelp ticket explicitly requesting access for you.

Unlike standard campus services, CephRDS uses the S3 API protocol. This means you do not use your NetID and password to log in interactively. Instead, you use an Access Key and a Secret Key.
Security Warning: Treat your Secret Key like a highly sensitive password. Never commit it to GitHub or share it in plaintext.
We recommend different tools based on your operating system and technical comfort level.
For a drag-and-drop experience similar to Google Drive or Dropbox, we recommend Cyberduck (Free) or Mountain Duck (Paid, mounts as a local drive).
Note: We do not recommend or support FileZilla for CephRDS as it does not natively support the modern S3 protocols required for our deployment.
Setup Cyberduck:
- Server: s3.ucr.edu (Note: exact endpoint pending final DNS configuration)
- Port: 443

For moving massive datasets (terabytes) or running scheduled jobs on the HPCC, rclone is the standard. It can saturate 100 Gbps network links through parallel transfers.
Setup rclone:
1. Run rclone config.
2. Enter n for New remote.
3. Name the remote cephrds.
4. Choose storage type s3 (Amazon S3 Compliant Storage Providers).
5. Choose Ceph as the provider.
6. Enter false to enter credentials manually, then supply your Access Key and Secret Key.
7. Set the endpoint to https://s3.ucr.edu.

Example Transfer Command: Copy a local folder to a bucket on CephRDS:
rclone copy /local/data/my_dataset cephrds:my_project_bucket/ --progress --transfers 8
Python (boto3):
If you are writing data analysis pipelines in Python:
import boto3

# Initialize the S3 client pointed at the CephRDS endpoint
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.ucr.edu',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# List all buckets you own
response = s3.list_buckets()
print(response['Buckets'])
Q: I got a “403 Forbidden” error when trying to upload.
A: You have likely hit your assigned storage or object quota. Please contact support to request a quota increase.
Q: I lost my Secret Key.
A: Secret Keys cannot be recovered, only regenerated. Contact support to generate a new key pair. You will need to update all your scripts and clients.