The new HPC cluster has the same name, Talapas (pronounced ta-la-pas), but with newer hardware and a newer operating system. Although some things have changed, most changes are for the better, and most software should continue to “just work”.
Notable updates
Red Hat Enterprise Linux 8 (RHEL8) operating system
Intel (Ice Lake) and AMD (Milan) 3rd generation processors
Optane and DDR4 3200MT/s memory
Nvidia 80GB A100 PCIe4 GPUs
More storage in user home directories and job I/O scratch space
Login
DuckID
A UO DuckID is required to access the cluster; there are no “local” accounts.
Talapas uses UO identity and access management (Microsoft Active Directory) and therefore requires users to have a valid UO DuckID. Links are provided below for external collaborators or graduating researchers to continue their access to the cluster.
External collaborators have two options
Graduating researchers
Talapas VPN
Talapas VPN is required to access the new cluster. The Talapas VPN should provide all the same capabilities as UO VPN as well as access to Talapas.
Instructions here: Article - Getting Started with UO VPN (uoregon.edu)
Use “uovpn.uoregon.edu/talapas” as the connection URL. The username and password are your standard DuckID and its password.
[Some advanced users might want to use the OpenConnect VPN client instead, which supports connecting with a command like:
sudo openconnect --protocol=anyconnect uovpn.uoregon.edu/talapas
If you’re an ordinary user, you can ignore this option.]
An important detail is that access to the Talapas VPN will be removed if your access to Talapas is removed. For example, if you’re a student using Talapas only for a course, your access will be removed at some point after the course ends. When that happens, attempts to connect to the Talapas VPN will fail with error messages like “login failed”. The fix is to switch back to UO VPN, if desired, or to simply stop using a VPN.
Most crucially, do not repeatedly attempt to log in when you’re getting error messages. As with other uses of your DuckID at UO, if you generate a large number of failures, all DuckID access (including things like e-mail) will be locked University-wide, and you will have to talk to IT about getting it unlocked again. Similarly, be aware of automated processes like cron jobs that might trigger this situation without your notice.
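A quick way to audit for scheduled tasks that might retry a connection without your notice (this assumes the standard cron; other schedulers would need their own checks):
crontab -l    # list your cron jobs; look for anything that connects to Talapas automatically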
Blocked ports
Note that inbound access to Talapas is only allowed for SSH and (eventually) Open OnDemand. All other ports are blocked.
Talapas now uses a load balancer
The preferred method of accessing the new Talapas is via “login.talapas.uoregon.edu”.
The new Talapas uses a load balancer, which redirects your SSH connection to a particular login node in a somewhat arbitrary way: connections from a given IP address go to a login node chosen on the basis of being up and having a light load. The choice of login node is “sticky”; that is, further connections from your IP address will go to the same login node, as long as there has been some activity within the last 24 hours.
This has some implications for workflow. First, tools like ‘tmux’ and ‘screen’ will no longer work reliably in some cases. In particular, if you have a ‘tmux’ session that you started at the University and you try to connect to it from home (which will have a different IP address), it probably won’t work. As a distinct case, if you have no activity for 24 hours, even on campus, the “sticky” effect will expire, and connecting to your ‘tmux’ session probably won’t work either. Note that your ‘tmux’ server won’t be killed; it will just hang around in an orphaned state. If this happens, you can send a ticket to RACS, and we’ll kill it for you.
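If a session seems to have disappeared, a useful first check is to see which login node you actually landed on (a sketch; hostnames follow the long form described under Technical Differences below):
hostname    # e.g., login1.talapas.uoregon.edu
tmux ls     # lists only the sessions running on this particular login node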
Not yet available but coming soon
Open OnDemand
The new Intel compilers (the existing Intel compilers are unavailable due to licensing issues)
More A100s
cron jobs
Notable issues
CUDA MIG slicing (on A100s)
Due to limitations with CUDA MIG slicing, it appears that a job can use only one slice (GPU) per host. That means one per job, unless MPI is being used to orchestrate GPU usage across multiple hosts. See the NVIDIA Multi-Instance GPU User Guide (NVIDIA Tesla Documentation).
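As a concrete sketch, a single-GPU batch script under this limitation might look like the following (the partition name “gpu” is an assumption; check ‘sinfo’ and local documentation for the actual partition and GRES names):
#!/bin/bash
#SBATCH --partition=gpu      # hypothetical partition name; verify with sinfo
#SBATCH --gres=gpu:1         # one MIG slice per host, so at most one GPU per (non-MPI) job
#SBATCH --mem=8G             # the default is 4GB; request more explicitly if needed
#SBATCH --time=01:00:00
nvidia-smi                   # confirm which GPU/MIG slice the job was assigned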
Technical Differences
These probably won’t affect you, but they are visible differences that you might notice.
Hostnames now use the long form. (e.g., “login1.talapas.uoregon.edu”)
You may need to use the long form of hostnames to access other campus hosts. That is, using “somehost” may not work, but “somehost.uoregon.edu” will.
Linux group names have changed and are now longer, e.g., “is.racs.pirg.bgmp” instead of “bgmp”. Since this information now comes from the campus Active Directory server, a number of other mysterious AD groups are included as well. You can just ignore these. (See the example after this list.)
Currently, lookup of group names can be quite slow, taking 30 seconds or longer. We’ll work on speeding this up.
Generally, RACS discourages the use of POSIX ACLs on the new cluster. You can still use them yourself, but there are now alternatives. If you’re tempted to use ACLs to solve a problem, consider asking about the alternatives first.
In RHEL 8, the distribution executables appear to be fully stripped, with all debug symbols removed. There is probably a way to install the debug symbols separately, and we’ll look into it eventually.
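As mentioned above, listing your groups will show the new, longer naming (a sketch; “is.racs.pirg.bgmp” is just the example group from above, and the lookup may be slow):
groups    # or: id -Gn
# expect long names like is.racs.pirg.bgmp, plus assorted AD groups you can ignore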
The least you need to know
Access is now allowed only via the Talapas VPN. See the connection instructions above.
Talapas login nodes are now behind a load balancer. This means that ‘tmux’, ‘screen’, and other long-running server processes will no longer work as before. See above.
The partitions have changed. You can see them with the ‘sinfo’ command, and the naming is intuitive. The time limits currently match those on the existing Talapas. (See the example after this list.)
Default memory for all jobs is now 4GB. If your job needs more, you will need to request it explicitly (see the example after this list).
Depending on how existing GPU software was compiled, it may need to be recompiled or upgraded to work with the newer GPUs.
In some cases, RHEL shared library changes or other things may break existing software. File a ticket, and we’ll get it fixed ASAP.
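For example, to inspect the new partitions and to request more than the 4GB default for an interactive shell (a sketch using standard Slurm options; the 8G figure is only an illustration):
sinfo                       # list partitions, their time limits, and node states
srun --mem=8G --pty bash    # start an interactive shell with 8GB of memory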