...

The primary advantage of this approach is that the job will probably be scheduled sooner, since SLURM is free to use any available cores rather than having to wait for nodes with sufficient free cores to become available.  Depending on the I/O properties of the job, it might run more slowly in this configuration, and runtime will vary somewhat depending on exactly how the slots are spread across nodes.  If it works for your job, though, this can be a big win in terms of getting your job started sooner.
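
As a rough illustration, a batch script requesting cores this way might specify only a task count, with no node constraints (a sketch; the core count, time limit, and program name are placeholders):

    #!/bin/bash
    #SBATCH --ntasks=64            # 64 tasks (cores), with no --nodes or --ntasks-per-node constraint
    #SBATCH --time=04:00:00        # placeholder time limit

    # SLURM may place these 64 cores on however many nodes happen to have them free.
    srun ./my_program              # hypothetical program; srun starts one copy per task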

Whichever method you use, also consider the effect of job size on your wait time.  In particular, the more CPU cores you ask for, the longer you are likely to wait for your job to start.  Some jobs have a minimum CPU core count dictated by the software's requirements; for others, the core count is relatively arbitrary.  Adding more cores will usually make the job run more quickly, but requesting fewer cores can still lead to earlier job completion if it means your job starts significantly sooner.

Specifying Memory

For single-node jobs, it's common to use the SLURM --mem flag to specify the total amount of memory to be allocated to the job.  For multi-node jobs, though, you will probably find it more intuitive and predictable to specify the amount of memory available to each individual task, like so:
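As a rough sketch, one common way to express a per-task memory request is the --mem-per-cpu flag, which (assuming one CPU core per task) amounts to a per-task limit; the figures and program name below are placeholders:

    #!/bin/bash
    #SBATCH --ntasks=32            # 32 tasks, spread across nodes as needed
    #SBATCH --cpus-per-task=1      # one CPU core per task
    #SBATCH --mem-per-cpu=4G       # 4 GB for each allocated core (here, each task)
    #SBATCH --time=02:00:00        # placeholder time limit

    srun ./my_program              # hypothetical program

Because the memory request is tied to each allocated CPU, the total scales automatically with the number of tasks, so the same figure remains valid however the tasks end up being distributed across nodes.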

...