We have R 3.3.2 installed on zeus (login nodes and compute nodes). You can also access R on one login node via the RStudio web interface at http://zeus.coventry.ac.uk/R
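For non-interactive work, R can also be run on the compute nodes through slurm. A minimal batch-script sketch, assuming a script called `myscript.R` in the submission directory (the job name and script name are illustrative, not taken from the cluster configuration):

```shell
#!/bin/bash
#SBATCH --job-name=r-test      # illustrative job name
#SBATCH --ntasks=1             # a single R process
#SBATCH --time=00:30:00        # requested wall time

# Rscript is the non-interactive front end shipped with R
Rscript myscript.R
```

Saved as e.g. `run_r.sh` and submitted with `sbatch run_r.sh`; by default the output lands in `slurm-<jobid>.out`.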
Alex Pedcenko
The total number of nodes (or CPUs) you can use depends on how long your job has to run (i.e. in which queue/partition it was submitted):
[queues are listed from highest to lowest priority, i.e. shorter queues have higher priority in the waiting list!]
For short jobs of up to 4 hours:
For jobs of up to 12 hours:
For jobs of up to 24 hours:
For jobs of up to 36 hours:
(the specialised queues SMP, GPU and NGPU have higher priority in the waiting list, i.e. if you need to use the GPUs on these nodes, your jobs have a higher “weight”)
For jobs of up to 48 hours:
For jobs longer than 48 hours:
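The queue is chosen at submission time with slurm's partition and time-limit options. A sketch of the relevant header lines, assuming a partition called `short` for the up-to-4-hour queue (the partition name and node count here are assumptions; run `sinfo` on zeus to see the real partition names and their time limits):

```shell
#!/bin/bash
#SBATCH --partition=short    # hypothetical name for the up-to-4-hour queue
#SBATCH --time=03:59:00      # must fit within the partition's limit
#SBATCH --nodes=2            # how many nodes you may request depends on the queue

# Launch the job step on the allocated nodes
srun ./my_program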
There is a standing reservation of all nodes in the “all” queue and of the “Broadwell+NGPU+Phi” nodes for this Sunday, 13/11/16, from 0:00 to 12:00, which is needed for further performance tests before the HPC is commissioned. So if your submitted job spans this time period, you will see this reservation given as the reason for “queueing”.
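Whether your pending job is being held by this reservation can be checked with standard slurm commands (shown as a sketch; the output depends on the cluster's configuration):

```shell
# List active reservations with their start/end times and node lists
scontrol show reservation

# List your pending jobs; the last column shows the reason they are waiting
squeue -u $USER --state=PENDING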
Alex Pedcenko
Just a few final tests are left after the hardware/software upgrades, which took place over a couple of weeks at the end of October 2016. The major “full blast heat generation” test will be performed on Monday 7/11/2016 to see if the server room survives. If you want to test some of your codes/jobs before then, please ask Alex Pedcenko (aa3025@coventry.ac.uk) to enable your slurm account.
The HPC specs we will have after the latest upgrade:
Compute nodes of the HPC are interconnected with high-speed QDR InfiniBand (40 Gbps)
2 x 15 TB file servers (/home and /share)
Outside of zeus HPC
Compute nodes of the new part of the HPC are interconnected with a high-speed FDR InfiniBand fabric (54 Gbps)
Alex Pedcenko