Access to UO Information Services storage (UOCloud)
RACS has provided access to UO Information Services (IS) storage (UOCloud) from Talapas. This makes a lower-cost, lower-performance storage tier available for research data management needs. UOCloud is not intended for use as a data target or destination for jobs on the cluster and is therefore not available on compute nodes.
Request UOCloud storage
Visit the IS service portal, https://service.uoregon.edu/TDClient/2030/Portal/Requests/ServiceDet?ID=18714&SIDs=3235
Click the 'Request Help' button to open a ticket
In the ticket, enter the following:
What Type of Storage Do You Need? Select 'Server Attached Storage (SMB, NFS, iSCSI)'
What do you need help with? Select 'New storage allocation'
Storage name? Enter your PIRG
Storage access method? Select NFS
Storage size? Enter the size in TB
Export client IP netmask? Enter 10.223.40.6/27
Storage snapshots? Select Yes
Index? Enter your billing index
Confirm Requirements: Click the check box
Description: Enter 'Talapas'
Access /uocloud
UOCloud mount points are available on the data transfer node dtn01 at /uocloud/<pirg>. Login to dtn01 is available from the Talapas login nodes through ssh. On dtn01, standard Linux tools such as cp, mv, and rsync are available to move data between Talapas storage (/gpfs) and UOCloud storage (/uocloud).
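For example, from a Talapas login node (assuming your PIRG is named pirg),
$ ssh dtn01
$ ls /uocloud/pirg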
Data transfer
Copy or move data from the CLI or a script. Working with a large amount of data takes time. When running data transfer commands interactively through the CLI, avoid network disconnects interrupting your file transfers by wrapping your shell session in a persistent session with screen or a similar utility.
Simple screen commands,
#create a new session and attach to it
$ screen -S <name>
#detach from the screen session (press ctrl-a, then d)
$ <ctrl-a> <d>
#show your screen sessions
$ screen -ls
#attach to a session
$ screen -r <name>
#delete a session, from inside the session
$ exit
See man screen for more details.
For example, create a screen session, run an rsync command, then detach from the session while the rsync runs,
$ screen -S myscreen
$ rsync ...
$ <ctrl-a> <d>
rsync script example
Here is an example script, which we'll call rsyncjob.sh,
#!/bin/bash
#
# example script to rsync files to /uocloud
#
#check the host and only run on dtn01
whathost=$(hostname -s)
if [ "$whathost" != "dtn01" ]; then
    echo "Usage: this script runs from dtn01 only, exiting..."
    exit 1
fi
#capture the date for the log file name
thedate=$(date +%y%m%d%H%M)
logfile="/projects/pirg/cron/rsyncjob.sh.out.$thedate"
#variables for the rsync source and destination
sourcepath="/projects/pirg/5GB-in-small-files/"
destpath="/uocloud/pirg/testdir/5GB-in-small-files/"
#check that the destination is mounted
if [ ! -d "$destpath" ]; then
    echo "$destpath is not mounted, exiting..."
    exit 1
fi
#rsync the files, sending output to the log file
/usr/bin/rsync -axv "$sourcepath" "$destpath" > "$logfile" 2>&1
echo "All done."
exit 0
See man rsync for an explanation of the options as well as more examples.
rsync: include the trailing slash "/" on both source and destination directories. This makes the syntax easier to remember and gives the most commonly desired behavior: the contents of the source directory are copied into the destination directory. For a deeper explanation, see man rsync
Best practice: redirect output to a log file so you can validate output and review any errors (2>&1 captures stderr and sends it to the logfile as well)
Login to dtn01 from a login node, then run the rsync script on dtn01 for the initial copy of the data to UOCloud.
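For example, assuming the script is saved as /projects/pirg/cron/rsyncjob.sh (substitute your own path),
$ ssh dtn01
$ bash /projects/pirg/cron/rsyncjob.sh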
cron example
On dtn01 you can create a crontab entry to trigger your rsync script.
Simple cron commands,
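A sketch of the basic crontab commands (see man crontab),
#edit your crontab
$ crontab -e
#list your crontab entries
$ crontab -l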
In a crontab, each line is made up of two parts: 1) when to run and 2) what to run.
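For example, a generic entry (the schedule fields and script path are placeholders) that runs a script at 2:30am every Sunday,
#minute hour day-of-month month day-of-week command
30 2 * * 0 /path/to/script.sh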
See the Wikipedia page on cron, https://en.wikipedia.org/wiki/Cron
For example, on dtn01,
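a crontab entry like the following (the schedule and log path are illustrative, not prescribed) runs rsyncjob.sh every Sunday at 2:30am,
#run the rsync script weekly, appending stdout and stderr to a log file
30 2 * * 0 /projects/pirg/cron/rsyncjob.sh >> /projects/pirg/cron/cron.log 2>&1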
Best practice: redirect output from your crontab to a log file so you can validate output and review any errors.