Logging on to Talapas

Your user name and password on Talapas are your duckID and its associated password (i.e. the same credentials you use for your UO email). To manage your password, use the official UO password reset page.

Hostnames

Talapas has two login nodes, which can be reached directly at hpc-ln1.uoregon.edu and hpc-ln2.uoregon.edu respectively.
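To connect, use SSH with your duckID credentials (a minimal example; replace <duckID> with your own account name):

```shell
# Log in to the first login node using your duckID and UO password
ssh <duckID>@hpc-ln1.uoregon.edu
```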

Job Submission

Users are not allowed to run applications or simulations on the login nodes of Talapas; doing so will result in loss of privileges. Jobs must be submitted to the scheduler, SLURM, or run via interactive sessions on the compute nodes.

SLURM

Talapas uses SLURM as its job scheduler and resource manager. To run a job on Talapas you must first create a SLURM job script describing the resources your job requires and the executables to be run. You then submit your job script to the scheduler using the sbatch command. If the necessary resources are currently available, your job will run immediately. If not, your job will be placed in the job queue and will be run when the necessary resources become available. To check on the status of your job, use the squeue command. To cancel a job you've submitted, use the scancel command. To list the partitions on the cluster and see their status use the sinfo command. 
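A minimal job script might look like the following sketch. The partition name, resource requests, and executable are illustrative assumptions; run sinfo to see the partitions actually available to you:

```shell
#!/bin/bash
#SBATCH --partition=short       # partition to submit to (name is an assumption)
#SBATCH --job-name=myjob        # job name shown in squeue output
#SBATCH --output=myjob-%j.out   # stdout/stderr file; %j expands to the job ID
#SBATCH --ntasks=1              # number of tasks (processes)
#SBATCH --cpus-per-task=1       # CPU cores per task
#SBATCH --mem=1G                # memory per node
#SBATCH --time=00:05:00         # walltime limit (hh:mm:ss)

# The executable below is a placeholder for your own program
./my_program
```

Submit the script with `sbatch myjob.sh`, monitor it with `squeue -u $USER`, cancel it with `scancel <jobid>`, and inspect partitions with `sinfo`.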

Partitions

Talapas is run on a dual club/condo model. Members of the compute club have access to all University-owned compute resources, while condo users have access to the condo partition corresponding to the resources they have purchased (note that users may be members of both the club and a condo). For a list of partitions and which Principal Investigator Research Groups (PIRGs) have access to them, see the Partition List.

Storage

Storage space on Talapas is made available via the Talapas storage club and can be purchased by a PIRG. Storage is accounted for according to the group ownership of each file, so it is important that ownership is correctly attributed.

Home Space

Each user on Talapas is assigned a private home directory located at /home/<duckID>. By default, permissions are set to drwx------, i.e. user only.

Project Space

Each PIRG on the system has a shared project space located at /projects/<PIRG>. By default, permissions are set to drwxrws---, i.e. group read/write (the setgid bit ensures that new files inherit the PIRG group).
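Because storage accounting follows group ownership, files in a project space should be owned by the PIRG group. A hedged example of checking and correcting ownership, where <PIRG> and the mydata path are placeholders:

```shell
# Inspect group ownership of files in the project space
ls -l /projects/<PIRG>/mydata

# If files were created with the wrong group, reassign them to the PIRG
chgrp -R <PIRG> /projects/<PIRG>/mydata

# g+rwX grants group read/write, and execute only on directories
# and on files that are already executable
chmod -R g+rwX /projects/<PIRG>/mydata
```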

Local Scratch

Each compute node has a local scratch disk. The size of the local disk depends on the type of compute node and can be found on the Machine Specifications page.
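I/O-heavy jobs can stage data through the node-local scratch disk to avoid hammering shared storage. A sketch, assuming a hypothetical mount point of /scratch (verify the actual path for Talapas before use):

```shell
#!/bin/bash
#SBATCH --time=01:00:00

# Per-job directory on the node-local scratch disk
# (the /scratch path is an assumption; check the real mount point)
SCRATCH=/scratch/$SLURM_JOB_ID
mkdir -p "$SCRATCH"

# Stage input in, compute locally, copy results back, clean up
cp ~/input.dat "$SCRATCH/"
cd "$SCRATCH"
./my_program input.dat > output.dat
cp output.dat ~/results/
rm -rf "$SCRATCH"
```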

Software

Talapas uses the LMOD environment module software to control the linux environment variables and provide multiple software versions. Users can run the module spider command to search for particular software packages on the system. The module avail command will show a list of packages whose dependencies are currently loaded. Use module load to add a software package to your environment and module unload to remove it.
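A typical module session might look like the following (the package name python3 is illustrative; use module spider to discover what is actually installed):

```shell
module spider python    # search the full software tree for matching packages
module avail            # list packages loadable with current dependencies
module load python3     # add a package to your environment
module list             # show currently loaded modules
module unload python3   # remove the package again
```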
