...

  • /projects/<pirg>/<user>: store your PIRG-specific work files here.

  • /projects/<pirg>/shared: store PIRG collaboration data here.

  • /projects/<pirg>/scratch: store job-specific I/O data here. Each PIRG has a 10 TB quota, and this directory is purged every 30 days (any data older than 30 days is deleted).
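
As a sketch of how the scratch area might be used (the paths and variables are illustrative, and this assumes a Slurm batch environment; substitute your own PIRG name for <pirg>):

Code Block
# Stage job-specific I/O in the PIRG scratch area, one directory per job.
SCRATCH_DIR=/projects/<pirg>/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR"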

Software

Most existing software will run fine on the new cluster. In some cases, though, you may run into problems that require workarounds, recompiling, or upgrades. If you encounter this and aren’t sure what to do, let us know.

Issues are typically due to shared-library differences between RHEL 8 and RHEL 7, or to CPU architecture differences. For the latter, it’s important to note that the new login nodes have a newer CPU architecture than some of the compute nodes. If you compile software on a login node in a way that assumes that newer architecture, it might not run on all of the compute nodes. (Typically, you’ll see an “Illegal instruction” error in that case.)
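
For example, a minimal sketch of the workaround when compiling your own code (the compiler and flags are illustrative, and Broadwell is used here only because it is the conservative baseline suggested for Spack below):

Code Block
# Target a baseline architecture explicitly instead of the login node's own CPU
# (e.g. -march=native), so the binary runs on both older and newer compute nodes.
gcc -O2 -march=broadwell -o myprog myprog.c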

Conda

In addition to the original ‘miniconda’ instance, we now have a ‘miniconda-t2’ instance. To avoid compatibility issues, we will create and update Conda environments only in the latter instance on the new cluster. (Likewise, we won’t make updates to the original instance on the new cluster.) If you have personal Conda environments, you might wish to follow a similar policy. Note that using existing Conda environments on either cluster should work fine; it’s making changes that might cause problems.
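
As a sketch of that policy (this assumes the two instances are provided as environment modules named ‘miniconda’ and ‘miniconda-t2’, and the environment name is just an example):

Code Block
# On the new cluster, create and update environments only under miniconda-t2.
module load miniconda-t2
conda create -n myproject-rhel8 python=3.11
conda activate myproject-rhel8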

Spack

Similarly, in addition to our original ‘racs-spack’ Spack instance, there is now a new ‘spack-rhel8’ instance. An additional factor is that most Spack software is compiled locally, whereas Conda software is generally compiled upstream. Also, by default, Spack will compile software to assume the CPU architecture of the host it’s compiling on. So, as above, if you compile software on a new login node, it won’t necessarily run on all compute nodes.
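
To see what Spack will target by default on the host you’re building on, you can check its detected architecture (the target in the output simply reflects the current host’s CPU):

Code Block
# Prints the platform-os-target triple Spack will assume, e.g. linux-rhel8-broadwell.
spack arch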

One solution is to specify a CPU architecture that’s compatible with all of our existing hosts. We think something like this will work:

Code Block
spack install your-package arch=linux-rhel8-broadwell

If you’re using your own Spack instance, you might want to take similar measures.
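
One way to do that (a sketch, assuming a personal Spack instance with user-level configuration in ~/.spack) is to make a compatible target the default rather than passing arch= on every install:

Code Block
# In ~/.spack/packages.yaml, prefer a conservative target for all packages:
#   packages:
#     all:
#       target: [broadwell]
#
# Then check what a package would concretize to before installing:
spack spec zlib | grep arch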

Technical Differences

These probably won’t affect your work, but they are differences you might notice.

...