Slurm batch example

  1. Purpose and contents of the batch
  2. Batch execution
  3. Explanation of the SBATCH options used
  4. Job Steps explanation
  5. Slurm environment variables
  6. Job execution diagram

The Batch defines the two components of a Job: the requested resources (allocated in blocks called Tasks) and the script that describes the different Job Steps. Job Steps are declared via the "srun" command and run sequentially or in parallel. Each Step creates one or more Tasks (run in parallel) and manages the distribution of the allocated resources. Each Task (command, program, script, ...) uses its allocation as it sees fit (sub-processes, threads, or a combination of both).

1. Object and contents of the batch "" :

Purpose : multithreaded encoding of 3 video files in parallel. The encoding is done with "ffmpeg" and its "-threads" option.

#!/bin/bash

# SBATCH options :
#SBATCH --job-name=TestJob               # Job name
#SBATCH --ntasks=3                       # Number of Tasks : 3
#SBATCH --cpus-per-task=4                # 4 CPUs allocated per Task
#SBATCH --partition=8CPUNodes            # Name of the Slurm partition used

#SBATCH --mail-type=ALL                  # Mail notification of the events concerning the job : start time, end time,…
module purge                             # delete all loaded module environments
module load ffmpeg/0.6.5                 # load ffmpeg module version 0.6.5

# Job Steps :

# Step 1 : Preparation
srun -n1 -N1 [...]

# The following 3 Steps are the 3 ffmpeg encoding processes executed in parallel.
srun -n1 -N1 ffmpeg -i video1.mp4 -threads $SLURM_CPUS_PER_TASK [...] video1.mkv &
srun -n1 -N1 ffmpeg -i video2.mp4 -threads $SLURM_CPUS_PER_TASK [...] video2.mkv &
srun -n1 -N1 ffmpeg -i video3.mp4 -threads $SLURM_CPUS_PER_TASK [...] video3.mkv &

wait      # Wait for the end of the "child" processes (Steps) before finishing the parent process (Job).

2. "" batch execution :

The batch is submitted via the "sbatch" command :

[bob@co2-slurm-client ~]$ sbatch
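On submission, "sbatch" queues the Job and prints its ID, which can then be used to follow the Job. A hypothetical session (the batch file name is not given above; "batch.sh" is a placeholder, and the Job ID shown is an example):

```shell
# Submit the Batch ; Slurm replies with the Job ID ("batch.sh" : placeholder name) :
sbatch batch.sh
# -> Submitted batch job 1234

# Follow the Job in the queue, then inspect its Steps afterwards :
squeue -u "$USER"
sacct -j 1234 --format=JobID,JobName,State,Elapsed
```

These commands require access to a Slurm cluster; "sacct" additionally requires that accounting is enabled on the cluster.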

3. Explanation of the SBATCH options used :

In this example, the SBATCH options define the Job name, the partition used, the number of Tasks (--ntasks) and the number of CPUs per Task (--cpus-per-task). The allocation will then be 12 CPUs (3 Tasks of 4 CPUs in parallel).

Since the number of nodes (--nodes) is not defined, Slurm determines it automatically. Here, the Batch uses the "8CPUNodes" partition, whose nodes each have 8 CPUs. Since 12 CPUs are requested, the number of nodes used will implicitly be 2.
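The implicit node count is simply a ceiling division of the requested CPUs by the CPUs per node. The sketch below redoes the arithmetic of the example in plain shell (the 8-CPUs-per-node figure comes from the "8CPUNodes" partition described above):

```shell
# Redo the example's arithmetic : 3 Tasks x 4 CPUs on 8-CPU nodes.
# On the cluster, sinfo -p 8CPUNodes -o '%n %c' lists each node and its CPU count.
ntasks=3
cpus_per_task=4
cpus_per_node=8

requested=$((ntasks * cpus_per_task))                        # 12 CPUs
nodes=$(( (requested + cpus_per_node - 1) / cpus_per_node )) # ceiling division
echo "$requested CPUs -> $nodes nodes"                       # prints: 12 CPUs -> 2 nodes
```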

Note :

In general, when "--nodes" is not defined, Slurm automatically determines the number of nodes needed (depending on node usage and the values of CPUs-per-Node, Tasks-per-Node, CPUs-per-Task, Tasks, etc.).

If "--ntasks" is not defined, one Task per node will be allocated.

Note that the number of Tasks of a Job can be defined either explicitly with "--ntasks" or implicitly by defining "--nodes" and "--ntasks-per-node".

If the "--ntasks", "--nodes" and "--ntasks-per-node" options are all set, "--ntasks-per-node" will then indicate the maximum number of Tasks per node.

For more information on SBATCH options, see the topic "Current sbatch options".

4. Job Steps explanation :

The Batch defines Job Steps using the "srun" command.

Without an explicit declaration of Job Step(s), only one Task will be created and the "--ntasks", "--nodes", "--nodelist", ... parameters will be ignored.

A Job Step also accepts a number of Tasks and a number of nodes on which to distribute these Tasks, via the same options as "sbatch" : "--ntasks" ("-n") and "--nodes" ("-N") respectively. If these options are not defined for a Job Step, the global values of the Job will be used (all Tasks and all nodes will be allocated to the Step).
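As an illustration (hypothetical Steps, to be run inside a Batch such as the one above), "hostname" makes this inheritance visible:

```shell
# With the Job's global values (--ntasks=3), a Step without -n/-N
# launches one copy of the command per Task :
srun hostname            # executed 3 times (prints the node of each Task)

# Explicitly restricting the Step to one Task on one node :
srun -n1 -N1 hostname    # executed once
```

These commands only work inside a Slurm allocation.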

Note : A Task cannot be run or distributed on multiple nodes; the number of Tasks must always be greater than or equal to the number of nodes (in the Batch as in a Step).

Like any script commands, Steps are executed sequentially unless otherwise specified. To run Steps in parallel, simply execute each Step to be parallelized in the background.

Note that this parallelization is done by the shell ('&' at the end of the line), which executes the "srun" command in a sub-process (sub-shell) of the Job; it is not done by Slurm.

In the example, the 3 encoding Steps are run in parallel, each using a Task, for a total of 12 CPUs.

These 3 Steps are written here one per line for the sake of clarity; in practice, to parallelize a large number of Steps, one can use :

# Sources listed in an array :
vids=('video1' 'video2' 'video3')
for v in "${vids[@]}"; do
    srun -n1 -N1 ffmpeg [...] "$v".mkv &
done

# Sources read line by line from a file :
while read -r v; do
    srun -n1 -N1 ffmpeg [...] "$v".mkv &
done <"/path/to/vids"

Note : When Steps are executed in parallel, the parent script (the Job) must wait for the completion of the child processes with a "wait"; otherwise they will be automatically interrupted (killed) once the end of the Batch is reached.
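A plain "wait" discards the children's exit statuses; if the Job should fail when an encoding Step fails, the PIDs can be collected and waited on individually. A minimal sketch of the pattern, with "true"/"false" standing in for the "srun" Steps:

```shell
pids=()
true  & pids+=($!)       # stands in for : srun -n1 -N1 ffmpeg [...] video1.mkv &
false & pids+=($!)       # stands in for a Step that fails

rc=0
for pid in "${pids[@]}"; do
    wait "$pid" || rc=$?     # "wait PID" returns that child's exit status
done
echo "exit status : $rc"     # prints: exit status : 1
```

Ending the Batch with "exit $rc" then marks the whole Job as failed when any Step failed.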

5. Slurm environment variables:

Slurm exports a number of environment variables related to the Cluster, the general configuration and the actual parameters of the Job. These variables are accessible in the script; their names are very similar to those of the corresponding Slurm or SBATCH options, which makes them easy to identify.

In the case of our Job, the "-threads" parameter passed to ffmpeg uses the environment variable "SLURM_CPUS_PER_TASK" (which contains the value of the "--cpus-per-task" option of the Batch) rather than a hard-coded value. The number of threads therefore always matches the number of CPUs per Task declared in the Batch, whatever that value is, without having to modify the script.
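A small sketch of this usage; the "${VAR:-default}" fallbacks are an addition of this sketch (they let the snippet also run outside a Job), while the variable names themselves are the standard Slurm ones:

```shell
# Fall back to 1 thread when the variable is not set (e.g. outside Slurm) :
threads=${SLURM_CPUS_PER_TASK:-1}

echo "Job ID      : ${SLURM_JOB_ID:-none}"
echo "Tasks       : ${SLURM_NTASKS:-1}"
echo "CPUs / Task : $threads"

# The encoding command of the Batch then becomes :
# ffmpeg -i video1.mp4 -threads "$threads" [...] video1.mkv
```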

For more information, see the topic "Using environment variables".

6. Job execution diagram :

Job : TestJob