Converting Torque / PBS to SLURM
The following tables provide the SLURM equivalents of Torque/PBS commands, environment variables and scheduler parameters.
Basic User Commands
Description | Torque / PBS | SLURM |
---|---|---|
Submit a job | qsub [job_script] | sbatch [job_script] |
Delete a job | qdel [job_id] | scancel [job_id] |
Job status (just your jobs) | qstat | squeue --me |
Job status (all jobs) | showq | squeue |
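For example, a typical sequence of these commands on the SLURM side looks like this (the script name and job ID below are placeholders):

sbatch my_job.sh      # submit the job script; prints the assigned job ID
squeue --me           # list only your own jobs and their states
scancel 123456        # cancel the job with the given job ID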
Job Environment
The following are the environment variables made available when your job runs.
Description | Torque / PBS | SLURM |
---|---|---|
Job ID | PBS_JOBID | SLURM_JOBID |
Job name | PBS_JOBNAME | SLURM_JOB_NAME |
Submission directory | PBS_O_WORKDIR | SLURM_SUBMIT_DIR |
Nodelist (for MPI) | PBS_NODEFILE | SLURM_JOB_NODELIST |
Array ID (for array jobs) | PBS_ARRAYID | SLURM_ARRAY_TASK_ID |
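Note that unlike PBS_NODEFILE, which is a file containing the node names, SLURM_JOB_NODELIST holds the node list itself in a compact form. As a minimal sketch (the echo lines are just for illustration), these variables are used inside a SLURM job script in the same places the PBS ones were:

#!/bin/bash
# change to the directory the job was submitted from
cd $SLURM_SUBMIT_DIR
# report basic job information to the job's output file
echo "Job $SLURM_JOBID ($SLURM_JOB_NAME) running on: $SLURM_JOB_NODELIST"
# SLURM_ARRAY_TASK_ID is only set for array jobs
echo "Array task ID: ${SLURM_ARRAY_TASK_ID:-not an array job}"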
Job Parameters
Parameters passed either via command line (for batch / interactive jobs) or via the batch job script.
Jobs on the new system may need the following additional directives. If you are a member of one (or more) HPC projects, you should specify the project under which the job is being submitted:
--account=[HPC_PROJECT]
By default, SLURM exports the environment of your submission session into the job's environment when it runs. In most cases you will want to prevent this by adding:
--export=NONE
to your submission arguments.
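For example, both options can be supplied either on the sbatch command line or as #SBATCH directives in the job script (MY_HPC_PROJECT and my_job.sh are placeholders):

sbatch --account=MY_HPC_PROJECT --export=NONE my_job.sh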
Most other job submission arguments / directives have a one-to-one mapping between them (directives start with #PBS in Torque and #SBATCH in SLURM).
Description | PBS/Torque | SLURM |
---|---|---|
Directive | #PBS | #SBATCH |
Job name | -N [name] | --job-name=[name] |
Queue/partition | -q [queue name] | --partition=[partition name] |
Number of nodes | -l nodes=[N] | --nodes=[N] |
Tasks per node (for MPI) | -l ppn=[count] | --ntasks-per-node=[count] |
Processors per node (for shared-memory / hybrid jobs) | -l ppn=[count] | --cpus-per-task=[count] |
Wall clock time | -l walltime=[hh:mm:ss] | --time=[hh:mm:ss] or --time=[days-hours] |
Redirect standard out | -o [file_name] | --output=[file_name] |
Redirect standard error | -e [file_name] | --error=[file_name] |
Combine standard error/output | -j oe -o [file_name] | --output=[file_name] (do not also set --error) |
Event notifications | -m bae | --mail-type=BEGIN,END,FAIL |
Email address | -M [email_address] | --mail-user=[email_address] |
Memory (single-node jobs) | -l mem=[memory] | --mem=[memory] |
Memory (multi-node jobs) | -l vmem=[memory] | --mem-per-cpu=[memory] |
Array job | -t [array_spec] | --array=[array_spec] |
Request GPUs | -q gpu -l nodes=1:gpus=[N] | --gres=gpu:ampere:[N] (no need to request the gpu partition) |
If your job is multiprocessor (either shared or distributed memory), uses GPUs or needs large-memory nodes, please read the relevant sections of this documentation before submitting it to the new ALICE cluster; a brief sketch of the corresponding directives is shown below.
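As a rough sketch only (the node and task counts are placeholders, and the gres string is the ALICE-specific one shown in the table above), the resource requests for an MPI job and a GPU job translate along these lines:

# Torque/PBS: 2 nodes, 28 MPI processes per node
#PBS -l nodes=2:ppn=28

# SLURM equivalent
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28

# SLURM GPU request: one Ampere GPU (no separate gpu partition needed)
#SBATCH --gres=gpu:ampere:1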
Example 1 - Simple serial job:
Torque/PBS (e.g. on the previous version of ALICE)
#!/bin/bash
#
#PBS -N JobName
#PBS -l walltime=06:00:00
#PBS -l vmem=4gb
#PBS -m bea
#PBS -M username@le.ac.uk
# change to the directory the job was submitted from
cd $PBS_O_WORKDIR
./run_a_program
SLURM (current ALICE)
#!/bin/bash
#
#SBATCH --job-name=JobName
#SBATCH --account=MY_HPC_PROJECT
#SBATCH --time=06:00:00
#SBATCH --mem=4G
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH --mail-user=username@le.ac.uk
#SBATCH --export=NONE
# change to the directory the job was submitted from
cd $SLURM_SUBMIT_DIR
./run_a_program
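The SLURM script is then submitted with sbatch rather than qsub (the file name below is just an example); since --output is not set, the job's output goes by default to a file named slurm-[job_id].out in the submission directory:

sbatch serial_job.sh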