Access to the ALICE cluster
All University of Leicester staff can access and submit jobs on the HPC cluster. However, users who are members of an HPC project can queue and run more jobs and can access the specialist partitions (long, lmem, gpu and parallel).
Request creation of a new HPC project, or access to an existing project for a new user, via:
Service Catalogue > Make a request > High Performance Computing
Default job restrictions
Jobs which are not submitted to an HPC project receive the default Quality of Service, which permits:
- Maximum running jobs per user: 4
- Maximum queued jobs per user: 16
- Maximum CPUs usable per user: 128
- Maximum GPUs usable per user: 0
Default jobs have access to the following partitions:
- devel
- short
- medium
Default jobs can therefore request a maximum job size of two nodes (128 cores) and a maximum run time of 7 days, and cannot access GPUs, large memory nodes or the specialised parallel queues.
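As an illustration, a job running under the default Quality of Service might use a batch script like the sketch below. The partition name comes from the list above; the job name, memory request, cores-per-node figure and application command are placeholders, so adjust them to your own workload:

```shell
#!/bin/bash
# Sketch of a default (non-project) job script, staying within the default limits.
#SBATCH --job-name=my_default_job   # placeholder job name
#SBATCH --partition=short           # one of: devel, short, medium
#SBATCH --nodes=2                   # at most two nodes under the default QoS
#SBATCH --ntasks-per-node=64        # assumes 64 cores per node (128 cores total)
#SBATCH --time=7-00:00:00           # at most 7 days
#SBATCH --mem=8G                    # placeholder memory request

./my_application                    # placeholder executable
```

Note there is no `#SBATCH -A` line here: with no project specified, the job falls under the default restrictions listed above.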
High priority jobs
Jobs which are submitted to an HPC project receive a higher Quality of Service, which permits:
- Maximum running jobs per user: 800
- Maximum queued jobs per user: 6000
- Maximum CPUs usable per user: 1280
- Maximum GPUs usable per user: 4
- Access to all partitions on the system, including all specialised nodes
If you are a member of one or more HPC projects, simply specify the project to which you are submitting the job in your job file or on the interactive job command line.
For job files, just add:
#SBATCH -A MY_HPC_PROJECT
where 'MY_HPC_PROJECT' is the name of an HPC project of which you are a member.
Alternatively, you can add -A MY_HPC_PROJECT on the command line to sbatch, salloc or srun.
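For example, assuming a job script called job.sh and a project named MY_HPC_PROJECT (both placeholders), the -A option can be used as follows:

```shell
# Submit a batch job charged to the project
sbatch -A MY_HPC_PROJECT job.sh

# Request an interactive allocation under the project
salloc -A MY_HPC_PROJECT --time=01:00:00

# Run a single command under the project
srun -A MY_HPC_PROJECT --time=00:10:00 hostname
```

Either form (the `#SBATCH -A` line in the job file or `-A` on the command line) has the same effect; if both are given, the command-line option takes precedence.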