29 November 2023 - ALICE2 / SPECTRE2 are no longer in service.
A note on naming!
In the following documentation we will refer to the old system as ALICE2 / SPECTRE2. The replacement system, though it is technically ALICE3, will simply be referred to as ALICE.
Parallel running.
- 14th September: ALICE available to all users.
- 6th October: SPECTRE2 NoMachine service shut down - SPECTRE2 and ALICE2 login nodes will remain accessible via ssh from ALICE login nodes.
- 29th November: ALICE2 / SPECTRE2 service shut down.
- 2024: Existing ALICE storage will be replaced.
Filesystems.
The shared ALICE filesystems (/home, /data and /scratch) are accessible on both systems.
RFS is available on the new ALICE login nodes via the same paths as the ALICE2 / SPECTRE2 login nodes.
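Because these filesystems are shared, a file written on one system is immediately visible on the other. A quick check during parallel running (the filename is arbitrary):
touch ~/migration-check        # run on an ALICE2 / SPECTRE2 login node
ls -l ~/migration-check        # run on a new ALICE login node: the same file appears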
Accessing ALICE2 / SPECTRE2 and ALICE login nodes.
Connection instructions for ALICE are provided here.
Please note that ALICE2 / SPECTRE2 are no longer available.
Portability of code
It is likely that software compiled on one system will not run on the other, due to differences in key system library versions. We recommend rebuilding any code that you are moving to the new ALICE system.
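For example, running ldd on an old binary will report any system libraries it can no longer find, and rebuilding with the new system's compilers resolves this (the program name and flags below are placeholders):
ldd ~/bin/my_program                     # "not found" entries mean a rebuild is needed
gcc -O2 -o ~/bin/my_program my_program.c # rebuild from source on the new system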
Compilers and interpreters
The ALICE system has a much more recent set of compilers than ALICE2 / SPECTRE2; please see the compiler toolchains section. Please note that the old Intel Studio compilers have been retired - they are replaced by Intel oneAPI.
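For instance, assuming the usual environment-modules setup (the exact module names are given in the compiler toolchains section, so those below are illustrative):
module avail                          # browse the toolchains installed on ALICE
module load intel-oneapi-compilers    # illustrative module name
icx --version                         # oneAPI's icx / icpx / ifx replace icc / icpc / ifort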
Converting Torque/PBS scripts to SLURM
The scheduler on ALICE2 / SPECTRE2 is Torque, a variant of the Portable Batch System (PBS) scheduler. The new ALICE system uses SLURM, so job scripts will need to be rewritten before they are submitted to the new system. In most cases a simple line-by-line conversion of the #PBS directives to #SBATCH directives is all that is needed.
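As an illustration, here is a minimal Torque/PBS script and a SLURM equivalent (the job name, resource requests and program are placeholders):

Torque/PBS (old):
#!/bin/bash
#PBS -N myjob
#PBS -l nodes=1:ppn=4
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
./my_program

SLURM (new):
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00
./my_program

The cd $PBS_O_WORKDIR line has no SLURM counterpart because SLURM jobs start in the directory from which they were submitted.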
Please see the SLURM documentation on this site for more information about SLURM and the next section for details on the SLURM equivalents for Torque / PBS commands such as 'qsub', 'qdel', 'qstat' and 'showq'.
Conda environments
Existing Conda environments may still function as expected, but we advise reinstalling Conda and recreating any environments you have.
Before doing this, record what software is installed in each environment.
First, on ALICE2 / SPECTRE2, list the environments:
conda env list
# conda environments:
#
base                      /home/a/abc1/miniconda3
project1                  /home/a/abc1/miniconda3/envs/project1
HAL                       /home/a/abc1/miniconda3/envs/HAL
Then, for each Conda environment in turn, activate it (e.g. for 'project1'):
conda activate project1
and, from the Conda environment prompt, save a list of the packages installed within it:
(project1) [abc1@spectre10]$ conda list -e > ~/project1.list
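Alternatively, conda can list an environment's contents without activating it, so the exports can be scripted; a short loop covers them all (the environment names below are from the example listing above):
for env in project1 HAL; do
    conda list -n "$env" -e > ~/"$env".list
done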
Now, log in to the new ALICE and move your old Conda installation out of the way, e.g.:
mv ~/miniconda3 $SCRATCHDIR/
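A fresh Miniconda can then be installed along these lines (a sketch: the URL is Anaconda's generic latest-release installer, and you may prefer to run the installer interactively rather than with -b):
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
~/miniconda3/bin/conda init bash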
Then recreate each of your environments and reinstall the packages within (e.g. for the Conda environment 'project1'):
conda create -n project1 --file ~/project1.list
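Note that a list produced with conda list -e does not let conda create restore pip-installed packages; if your environments contain any, the YAML export route records them in a form conda env create can handle:
conda env export -n project1 > ~/project1.yml    # run on ALICE2 / SPECTRE2
conda env create -f ~/project1.yml               # run on the new ALICE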