Compute Nodes

The ALICE compute nodes provide the processing power required to run large simulations and applications. The compute nodes are accessed by first logging in to the ALICE login nodes and then submitting an interactive or batch job to the SLURM scheduler. The job is scheduled and runs when the resources requested for it become available.
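
For illustration, a minimal serial batch script might look like the following; the job name, resource requests and program name are placeholders to adapt to your own workload:

    #!/bin/bash
    #
    # Example SLURM batch script (illustrative values only)
    #SBATCH --job-name=example      # name shown in the queue
    #SBATCH --ntasks=1              # a single task (serial job)
    #SBATCH --cpus-per-task=1       # one core for that task
    #SBATCH --mem=4G                # memory requested for the job
    #SBATCH --time=01:00:00         # wall-time limit (hh:mm:ss)

    # Commands below run on the allocated compute node
    ./my_program

Submit the script with sbatch (for example "sbatch example.sub") and check its state with squeue. An interactive session on a compute node can be requested with, for example, "srun --ntasks=1 --time=00:30:00 --pty bash".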

Tasks run on a login node share that node's resources with all other users. Tasks submitted to the compute nodes via the SLURM scheduler have access to dedicated resources.

The compute nodes available on the system consist of the following; a sketch of how to request a specific node type is given after the list:

  • 56 standard compute nodes - each with 64 AMD EPYC 7532 processor cores and 256 GB memory
  • 4 large memory nodes - each with 64 AMD EPYC 7532 processor cores and 2 TB memory
  • 4 GPU nodes - each with 64 Intel Ice Lake 6338 processor cores, 512 GB memory and 2x NVIDIA A100-80 Ampere GPUs
  • 4 GPU nodes - each with 28 Intel Broadwell E5-2680 processor cores, 128 GB memory and 2x P100-32 Pascal GPUs
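
As a sketch of how a particular node type might be requested, the directives below ask for a GPU node. The partition name "gpu" and the resource values are assumptions; the actual partition and generic resource names are site-specific, so check the scheduler documentation before using them:

    #!/bin/bash
    #SBATCH --partition=gpu         # assumed name of the GPU partition
    #SBATCH --gres=gpu:1            # request one GPU on the node
    #SBATCH --cpus-per-task=8       # CPU cores to accompany the GPU
    #SBATCH --mem=64G               # host memory for the job
    #SBATCH --time=02:00:00         # wall-time limit (hh:mm:ss)

    # Confirm which GPU the job has been allocated
    nvidia-smi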

All compute nodes have access to the /home, /data and /scratch shared filesystems and are connected by an HDR-100 InfiniBand high-performance interconnect.

Note that the Research File System is not available on compute nodes.