...
    scp chr1.fasta myusername@talapas-ln1.uoregon.edu:.

will copy the named file to your Talapas home directory.
...
Note: You are responsible for backing up all of your data on Talapas. This data is not backed up by RACS. Some snapshotting is performed for all directories except the host temporary directory, and in some cases lost files can be recovered from a snapshot. This cannot be relied on as a backup, however.
...
Talapas uses the SLURM job scheduler and resource manager, which provides a way to submit large computational tasks to the cluster in batch fashion.
...
SLURM uses a number of queues, which it calls partitions, to run jobs with different properties. Normal jobs use the short partition, but there are other partitions for jobs that need to run longer than a day, need more memory, or need to use GPUs, for example. You can use the sinfo command to see detailed information about the partitions and their states.
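For instance, sinfo can be run as-is or narrowed to a single partition; the output-format specifiers below are standard sinfo options, shown here as a sketch:

```bash
# List all partitions, their time limits, and node states
sinfo

# Show only the short partition mentioned above
sinfo --partition=short

# Custom columns: partition name, time limit, node count, node state
sinfo --format="%P %l %D %t"
```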
...
For every SLURM job you should specify the amount of memory the job needs (with --mem and related flags) and the amount of time it needs (with --time). If you don't, default values are used, but these are often less than ideal.
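A minimal batch script might set these limits with #SBATCH directives. The partition, resource values, and command below are illustrative placeholders, not recommendations:

```bash
#!/bin/bash
#SBATCH --partition=short        # queue to submit to
#SBATCH --job-name=example       # illustrative job name
#SBATCH --mem=4G                 # total memory for the job
#SBATCH --time=0-02:00:00        # wall-clock limit: days-hours:minutes:seconds
#SBATCH --output=%x-%j.out       # log file named after job name and job ID

./my_analysis chr1.fasta         # placeholder for your actual command
```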
As a general rule, the fewer resources your job requires, the sooner it will run, because smaller/shorter jobs are easier for SLURM to schedule. (Requesting fewer resources also benefits other users by allowing SLURM to schedule cluster resources more efficiently.) That said, if you specify less memory or time than your job needs, it will be killed before it can complete, since these limits are enforced. So you want to err somewhat on the high side. For any given application, you might have to experiment a little to get this right.
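One way to calibrate your requests (assuming SLURM accounting is enabled, as it is on most clusters) is to compare what a finished job actually consumed against what you asked for:

```bash
# After a job finishes, report its peak memory (MaxRSS) and elapsed time;
# replace 12345 with your real job ID.
sacct -j 12345 --format=JobID,JobName,MaxRSS,Elapsed,State
```

If MaxRSS and Elapsed are far below what you requested, you can safely lower --mem and --time for future runs of the same workload.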
...
If your job needs more than the default, you must explicitly specify a larger value. Conversely, if your job needs less, you might want to specify less, which will increase the odds that it will be scheduled sooner.
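These values can also be overridden at submission time: flags given on the sbatch command line take precedence over any #SBATCH directives in the script. The script name here is a hypothetical placeholder:

```bash
# Request 8 GB of RAM and a 4-hour limit for this submission only
sbatch --mem=8G --time=0-04:00:00 myjob.sbatch
```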
...