Apptainer (previously called Singularity) allows HPC users to run containers (including most containers from Docker Hub) on the HPC cluster. These can be used to run single-node, MPI, or GPU software distributed as a container.

Quickstart instructions

If the usage instructions for the Docker or Singularity image you want to use advise running:

docker pull dockeruser/dockerimage

or

singularity pull docker://dockeruser/dockerimage

then to use the Docker image with Apptainer, enter:

module load apptainer
apptainer pull docker://dockeruser/dockerimage
apptainer run dockerimage_latest.sif
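
For instance, with the small, publicly available hello-world image from Docker Hub (used here purely as an illustration):

module load apptainer
apptainer pull docker://hello-world
apptainer run hello-world_latest.sif

Note that apptainer pull names the resulting .sif file after the image and its tag, which is where dockerimage_latest.sif above comes from.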

Please read below for more detailed instructions.

Advanced usage information

Using pre-existing containers

Prebuilt images can be downloaded from a number of places, such as Docker Hub and the Sylabs Container Library.

You can also distribute your own containers using Sylabs Cloud.

Images can be downloaded using the pull or build commands:

  • NB: Images can be quite large, and therefore downloading one can take quite a long time.
module load apptainer
apptainer build helloworld.sif library://sylabsed/examples/hellompi
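
The pull command works similarly; for example, to fetch an image from Docker Hub and give the resulting file a name of your choosing (lolcow is just a small public demonstration image):

module load apptainer
apptainer pull lolcow.sif docker://sylabsio/lolcow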

You do not need to do this every time: typically you download the image once and then use it many times.

To run the application:

module load apptainer
apptainer run helloworld.sif
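
Besides run, which executes the container's default runscript, you can run an arbitrary command inside the container with exec, or open an interactive shell:

apptainer exec helloworld.sif cat /etc/os-release
apptainer shell helloworld.sif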

Running Apptainer on ALICE

Long-running, intensive applications should be submitted via the job scheduler:

#!/bin/bash
#SBATCH --time=01:20:00
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=12g

module load apptainer
apptainer run mycontainer.sif
  • You will need to request enough memory for the shell, the image, and the application data.
  • If you need to download a very large container, you can do this as part of the job too (see the sketch below).
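
Putting this together, a complete job script that downloads a container and then runs it might look like the following sketch (the job name, image, and resource values are placeholders to adapt):

#!/bin/bash
#SBATCH --job-name=apptainer-demo
#SBATCH --time=01:20:00
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=12g

module load apptainer

# Download the image once; skip this step if the .sif file already exists
apptainer pull dockerimage_latest.sif docker://dockeruser/dockerimage

# Run the container
apptainer run dockerimage_latest.sif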

MPI and Apptainer

To run Apptainer with MPI you need:

  1. To know the path to the executable within the container, in this case /sc19-tutorial/hello
  2. To use a compatible version of MPI

Therefore, you need to know which MPI implementation was used to compile the application:

  • If the application was built with OpenMPI -> use OpenMPI on ALICE
  • If the application was built with MPICH/MVAPICH/IntelMPI -> use IntelMPI on ALICE

Example:

module load intel-oneapi-compilers/2023.1.0
module load intel-oneapi-mpi/2021.9.0
mpirun apptainer exec ./helloworld.sif /sc19-tutorial/hello
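
In a batch job, the same approach might look like the following sketch (node and task counts are illustrative; the executable path is the one from the list above):

#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4

module load intel-oneapi-compilers/2023.1.0
module load intel-oneapi-mpi/2021.9.0
module load apptainer

# mpirun starts one process per task on the host; each process then
# executes the MPI program inside the container
mpirun apptainer exec ./helloworld.sif /sc19-tutorial/hello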

NVIDIA / GPU containers

Log on to a GPU node by starting an interactive job via the scheduler:

srun --partition=gpu-devel --account=MY_HPC_PROJECT --cpus-per-task=4 \
     --time=01:00:00 --mem=128G --gres=gpu:ampere:1 --pty /bin/bash

Load modules:

module load gcc/12.3.0
module load cuda12.1/toolkit
module load cudnn8.9-cuda12.1
module load apptainer

For example, download the PyTorch container (this will take quite a while, but it only needs to be done once):

apptainer pull docker://nvcr.io/nvidia/pytorch:20.03-py3
apptainer run --nv pytorch_20.03-py3.sif
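
To verify that the container can actually see the GPU, you can run a quick check with the Python bundled in the image (assuming the .sif filename produced by the pull above):

apptainer exec --nv pytorch_20.03-py3.sif python -c "import torch; print(torch.cuda.is_available())"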

Making your own containers

There is more than one way to do this:

  • From a definition file (see the sketch after this list)
  • Cloning and modifying an existing image
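
As a sketch, a minimal definition file (the filename mycontainer.def is just an illustration) could look like:

Bootstrap: docker
From: ubuntu:22.04

%post
    # Commands run inside the container at build time
    apt-get update && apt-get install -y python3

%runscript
    # Command executed by 'apptainer run'
    python3 --version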

Ordinarily, creating a container requires administrative privileges, but you can use the Singularity Remote Builder to build images in the cloud:

cloud.sylabs.io/builder

Building remotely and transferring the image back to ALICE can be slow, but it can be done as a regular (unprivileged) user. You will need an account on Sylabs Cloud.
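
Assuming you have created an access token on Sylabs Cloud, a remote build might look like this (the exact endpoint configuration can vary with the Apptainer version installed on the cluster):

module load apptainer
apptainer remote login   # paste your Sylabs access token when prompted
apptainer build --remote mycontainer.sif mycontainer.def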