Hardware - compute
ALICE provides the following compute resources to users:
- Login nodes: 4 login nodes providing remote desktop (via NoMachine) and SSH access to the system.
- Standard compute: 54 compute nodes, each providing 64 AMD Rome cores and 256GB of memory.
- Large Memory compute: 4 large memory nodes, each with 64 AMD Rome cores and 2TB of memory.
- GPU accelerated compute: 4 GPU nodes, each with 2 NVIDIA A100-80 GPUs and 512GB of memory, and 4 GPU nodes, each with 2 NVIDIA P100-16 GPUs and 128GB of memory.
Login and standard compute nodes are available to all users. Users who are members of an HPC project can also make use of the Large Memory and GPU compute nodes, and receive an increased job priority.
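For project members, GPU and large memory nodes are requested through the SLURM scheduler described under Software. The sketch below shows what a minimal GPU batch job might look like; the partition name, GPU label and resource values are assumptions for illustration, not the actual ALICE configuration (check the site documentation or `sinfo` for the real partition names).

```bash
#!/bin/bash
# Hedged sketch of a GPU job request. Partition and GPU type names are
# assumptions, not the actual ALICE configuration.
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu          # assumed name of the GPU partition
#SBATCH --gres=gpu:a100:1        # request one A100 GPU (label is an assumption)
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=01:00:00

# Run the GPU application (placeholder command)
srun ./my_gpu_application
```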
Hardware - network and storage
- Shared home directories: up to 80GB of home directory space per user.
- Shared parallel storage: 2.8PB of shared scratch and data storage space, available to HPC project users.
- Interconnect: InfiniBand HDR100 network providing 100Gb/s low-latency communication between compute nodes and storage.
- Access to the RFS: the research file system is available on the login nodes.
Software
- An HPC scheduling system (SLURM) to coordinate access to the available resources; an example batch script is sketched after this list.
- A variety of compilers, interpreters and optimised numeric libraries.
- Scientific applications.
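As a rough illustration of how these pieces fit together, below is a minimal sketch of a standard CPU batch job sized for one 64-core Rome node. The module names and application command are placeholders, not the actual ALICE module tree or recommended job parameters.

```bash
#!/bin/bash
# Hedged sketch of a standard CPU job on one 64-core Rome compute node.
# Module names and the application command are placeholders.
#SBATCH --job-name=cpu-example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64     # one MPI rank per Rome core
#SBATCH --mem=200G               # stays within the 256GB per-node memory
#SBATCH --time=04:00:00

module load gcc openmpi          # placeholder compiler/MPI modules

srun ./my_mpi_application
```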