Memory requirements when submitting a job

Dear All

To prevent user jobs from bringing down compute nodes, memory limits have been introduced on the zeus HPC.

If your jobs are not that memory-hungry, you probably will not notice this at all. By “memory-hungry” we mean exceeding 4 GB per CPU core (the default value).
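
If you want to check the configured default yourself, and assuming you can run the Slurm client commands on a login node, the value is reported as DefMemPerCPU in the cluster configuration:

scontrol show config | grep -i DefMemPerCPU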

If your job requires more than that, you can request more memory using the --mem=... or --mem-per-cpu=... parameter (in MB) with sbatch.

E.g.

1) To ask for the “full” 48 GB of memory to be available to your job, e.g. on the Nehalem (8-CPU) nodes:

sbatch -n8 -N1 --mem=48000 -t 8:00:00 myslurmscript.slurm

In this case one node and 8 tasks (CPUs) are requested, and the total job memory is 48 GB (48000 MB).

If memory is not specified, the maximum would be the default 4 GB x 8 CPUs = 32 GB.
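
The same request can also be written inside myslurmscript.slurm itself using #SBATCH directives, so the sbatch command line stays short (a minimal sketch; the srun line is only a placeholder for your own program):

#!/bin/bash
#SBATCH -n 8
#SBATCH -N 1
#SBATCH --mem=48000
#SBATCH -t 8:00:00
srun ./my_program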

2) If you are using the 32-CPU Broadwell nodes, which have 128 GB of RAM, the default of 4 GB/CPU is already the maximum possible. If you want more RAM per CPU, e.g. you use only 2 CPU tasks but need all of the node’s memory allocated to them (2 x 64000 MB = 128000 MB, i.e. the whole 128 GB), you can do:

sbatch -n2 -N1 --mem-per-cpu=64000 ...

3) If you need more than 128 GB per node, you can use the SMP node (zeus15, max 512 GB/node) by requesting --constraint=smp when submitting the job.
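
For example, a single-node job asking for 256 GB on the SMP node could look like this (the 256000 MB figure is only an illustration; anything up to the 512 GB limit of zeus15 should be possible):

sbatch -n1 -N1 --constraint=smp --mem=256000 myslurmscript.slurm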

If you request more memory than is physically available, the job will fail to submit with an error message such as “error: Memory specification can not be satisfied” or “error: Unable to allocate resources: Requested node configuration is not available”.

If the memory you requested for your job (or the default 4 GB/CPU) is exceeded during the run, Slurm will terminate the job.
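
If your job was terminated this way, you can compare the memory you requested with what it actually used, assuming job accounting is enabled on zeus (replace <jobid> with your job’s ID):

sacct -j <jobid> --format=JobID,ReqMem,MaxRSS,State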

Before this measure, it was possible to “oversubscribe” memory (consume more than the available RAM by using disk swap space) and make a node unresponsive or slow, which resulted in job termination anyway and in certain cases led to node failure.

Regards

Alex