Hardware - compute

ALICE provides the following compute resources to users:

  1. Login nodes: 4 login nodes providing remote desktop (via NoMachine) and SSH access to the system.
  2. Standard compute: 54 compute nodes, each providing 64 AMD Rome cores and 256GB of memory.
  3. Large memory compute: 4 large memory nodes, each with 64 AMD Rome cores and 2TB of memory.
  4. GPU accelerated compute: 4 GPU nodes, each with 2 NVIDIA A100-80 GPUs and 512GB of memory, and 4 GPU nodes, each with 2 NVIDIA P100-16 GPUs and 128GB of memory.

Login and standard compute nodes are available to all users. Users who are members of an HPC project can also make use of large memory and GPU compute nodes, and receive an increased job priority.

Hardware - network and storage

  1. Shared home directories: up to 80GB of home directory space per user.
  2. Shared parallel storage: 2.8PB of shared scratch and data storage space, available to HPC project users.
  3. Interconnect: InfiniBand HDR100 network, allowing 100Gb/s low-latency communication between compute nodes and storage.
  4. Access to the RFS: the research file system is available on the login nodes.

Software

  1. An HPC scheduling system (SLURM) to co-ordinate access to the available resources (see the job submission sketch after this list).
  2. A variety of compilers, interpreters and optimised numeric libraries.
  3. Scientific applications.
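
As a rough illustration of how the resources above are requested through SLURM, the sketch below submits a small batch job from Python. It is a minimal sketch, not an official ALICE example: the time limit, output file name job.sh and application name my_application are illustrative assumptions, and the resource requests are simply sized to one standard compute node as described above.

    import subprocess
    import textwrap

    # Minimal batch script sized to one standard compute node
    # (64 AMD Rome cores, 256GB of memory). The time limit and the
    # application name "my_application" are placeholder assumptions.
    job_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=example
        #SBATCH --ntasks=1
        #SBATCH --cpus-per-task=64
        #SBATCH --mem=256G
        #SBATCH --time=01:00:00
        ##SBATCH --gres=gpu:1    # uncomment to request a GPU (HPC project members only)
        srun ./my_application
        """)

    # Write the script to disk and submit it with sbatch, SLURM's batch submission command.
    with open("job.sh", "w") as handle:
        handle.write(job_script)

    result = subprocess.run(["sbatch", "job.sh"], capture_output=True, text=True)
    print(result.stdout.strip())  # e.g. "Submitted batch job 12345"

On submission, sbatch prints the assigned job identifier, which can then be tracked with standard SLURM tools such as squeue.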