Software on PALMA-II can be accessed via environment modules. These are small scripts that set environment variables (such as PATH and LD_LIBRARY_PATH) pointing to the locations where the software is installed. The tool used to manage environment modules is called Lmod. On PALMA-II a so-called hierarchical module naming scheme is employed in combination with the EasyBuild framework to deploy scientific software. This means that the command module avail does not show you all available software at once. Instead, only those modules are visible which do not depend on other modules being loaded first.
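To see for yourself which variables a module manipulates, you can inspect it with Lmod's module show command. The module name used below is only a placeholder; pick any module listed by module avail:

Inspect and load a module
$ module show GCC          # prints the PATH, LD_LIBRARY_PATH, etc. entries the module prepends
$ module load GCC
$ which gcc                # the compiler now resolves to its location inside the software stack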


Software dependencies

Most (scientific) software relies on multiple libraries (e.g. to perform specific linear algebra operations) to be functional. These dependencies can be very specific: for example, application A at version 1.0 depends on library B at version 2.1, while the same application A at version 1.5 also depends on library B, but at version 2.1.2. On top of that, library B can itself depend on another library C, and so on. These very strict dependencies are often referred to as dependency hell and make it challenging to provide consistent software versions, especially in a scientific environment where results have to be reproducible.

In addition to these intrinsic dependencies, applications running on a cluster environment like PALMA-II are compiled from scratch for performance reasons and make use of libraries to help run the code in parallel. This usually means a specific application relies on a specific compiler, an MPI implementation for parallelization and numerical libraries to accelerate mathematical operations.


Examples of available compilers, MPI and numerical libraries:

| Compiler | MPI stack | Numerical libraries |
| --- | --- | --- |
| GCC, Intel, PGI, Clang (LLVM) | OpenMPI, MPICH2, MVAPICH2, Intel MPI | BLAS, (Sca)LAPACK, FFTW, Intel MKL |


Most of these compilers, MPI stacks and numerical libraries can be combined with each other, leading to a wide variety of possible combinations for one specific application. On PALMA-II we therefore deploy software based on so-called toolchains. A toolchain is a specific combination of the above-mentioned compilers and libraries. Currently we provide a foss (free and open source software) and an intel (proprietary) toolchain, which are updated roughly every six months. Older software is kept to guarantee reproducibility of computational results but may not be visible to the user. If you rely on older software which you cannot find in the currently available software stack, please contact us.

In addition, we created palma (meta-)modules to group the different toolchains. These help us keep an overview and have to be loaded before you can access any of the underlying toolchains. The default palma module, which is already loaded when you log in, is palma/2022a.
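Switching to an older software stack therefore amounts to loading the corresponding palma module first. The stack chosen below (palma/2021a) is only an example; Lmod will automatically swap out the currently loaded palma module:

Switch to an older software stack
$ module load palma/2021a      # replaces the default palma/2022a
$ module avail                 # now lists the toolchains and software of the 2021a stack
$ module load foss/2021a       # load the free and open source toolchain of that stack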

Currently available toolchains:

| PALMA Software Stack | Toolchain | Compiler | MPI stack | Numerical libraries |
| --- | --- | --- | --- | --- |
| palma/2019a | foss/2019a | GCC/8.2.0 | OpenMPI/3.1.3 | OpenBLAS/0.3.5, ScaLAPACK/2.0.2, FFTW/3.3.8 |
| palma/2019a | intel/2019a | icc, ifort/2019.1.144 | impi/2018.4.274 | imkl/2019.1.144 |
| palma/2019b | foss/2019b | GCC/8.3.0 | OpenMPI/3.1.4 | OpenBLAS/0.3.7, ScaLAPACK/2.0.2, FFTW/3.3.8 |
| palma/2019b | intel/2019b | iccifort/2019.5.281 | impi/2018.5.288 | imkl/2019.5.281 |
| palma/2020a | foss/2020a | GCC/9.3.0 | OpenMPI/4.0.3 | OpenBLAS/0.3.9, ScaLAPACK/2.1.0, FFTW/3.3.8 |
| palma/2020a | intel/2020a | iccifort/2020.1.217 | impi/2019.7.217 | imkl/2020.1.217 |
| palma/2020b | foss/2020b | GCC/10.2.0 | OpenMPI/4.0.5 | OpenBLAS/0.3.12, ScaLAPACK/2.1.0, FFTW/3.3.8 |
| palma/2020b | intel/2020b | iccifort/2020.4.304 | impi/2019.9.304 | imkl/2020.4.304 |
| palma/2021a | foss/2021a | GCC/10.3.0 | OpenMPI/4.1.1 | ScaLAPACK/2.1.0-fb, FlexiBLAS/3.0.4, OpenBLAS/0.3.15, FFTW/3.3.9 |
| palma/2021a | intel/2021a | intel-compilers/2021.2.0 | impi/2021.2.0 | imkl/2021.2.0 |
| palma/2021b | foss/2021b | GCC/11.2.0 | OpenMPI/4.1.1 | ScaLAPACK/2.1.0-fb, FlexiBLAS/3.0.4, OpenBLAS/0.3.18, FFTW/3.3.10 |
| palma/2021b | intel/2021b | intel-compilers/2021.4.0 | impi/2021.4.0 | imkl/2021.4.0, imkl-FFTW/2021.4.0 |
| palma/2022a | foss/2022a | GCC/11.3.0 | OpenMPI/4.1.4 | ScaLAPACK/2.2.0-fb, FlexiBLAS/3.2.0, OpenBLAS/0.3.20, FFTW.MPI/3.3.10 |
| palma/2022a | intel/2022a | intel-compilers/2022.1.0 | impi/2021.6.0 | imkl/2022.1.0, imkl-FFTW/2022.1.0 |
Also refer to the EasyBuild documentation for further details.
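As a quick cross-check of the table above, loading a toolchain pulls in exactly the listed components. Treat the following as a sketch; the module list output depends on the stack you have loaded:

Inspect the components of a toolchain
$ module load foss/2022a
$ module list                  # should show GCC/11.3.0, OpenMPI/4.1.4, FlexiBLAS, OpenBLAS, ScaLAPACK, FFTW, ...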

CPU architectures and GPUs

On PALMA-II users also have access to the (older) Broadwell CPU architecture nodes as well as to specific nodes equipped with different GPUs. For every architecture and GPU type the ZIV maintains a separate software stack. The correct software stack is loaded automatically depending on where you perform your computations. However, you might find it difficult to search for a specific piece of software, for example when using GPU nodes, as the provided software can differ between stacks. For this reason, please refer to the software section to obtain an overview of all available applications and where they can be found. The GPU toolchains are referred to as fosscuda and intelcuda, respectively.
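To find out which CUDA-enabled toolchains and applications are visible, you can search for them by name. The exact versions you will see depend on the node (and thus the software stack) you are working on:

Search for GPU toolchains
$ module avail fosscuda        # lists fosscuda toolchain versions visible on this node
$ module spider intelcuda      # searches all stacks, including currently hidden modules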

How-To

Loading software from a specific toolchain into your environment is relatively straightforward. Have a look at what software and toolchains are directly available using the module avail command. Load an application (if it does not depend on anything else) or load a toolchain via the module load command. Another module avail will reveal all software available under the currently loaded toolchain. Pick your applications of choice by running the module load command again.

If you are not sure where to find some software or which toolchain it depends on, make use of the module spider command. This will search for the specified software and give you instructions on how to load it into your environment (see the example after the listings below).


Show available modules
$ module avail
--------------------------- /Applic.HPC/modules/all/Core ----------------------------
...
   GCC/6.3.0-2.27                        flex/2.6.0
   GCC/6.4.0-2.28                        flex/2.6.3
   GCC/7.3.0-2.30              (D)       flex/2.6.4                         (D)
   GCCcore/6.3.0                         foss/2018a
   GCCcore/6.4.0                         gettext/0.19.8.1
   GCCcore/7.3.0               (D)       gompi/2018a
   Gaussian/G16                          help2man/1.47.4
   Go/1.11.5                             icc/2018.1.163-GCC-6.4.0-2.28
   Go/1.12                               iccifort/2018.1.163-GCC-6.4.0-2.28
   Go/1.12.7                   (D)       ifort/2018.1.163-GCC-6.4.0-2.28
   IMPUTE2/2.3.2_x86_64_static           iimpi/2018a
   Inspector/2018_update2                intel/2018a
...
Load a module
$ module load intel/2018a
Show newly available modules
$ module avail
---------------- /Applic.HPC/modules/all/MPI/intel/2018.1.163-GCC-6.4.0-2.28/impi/2018.1.163 ----------------
...
   GROMACS/2016.5                       VMD/1.9.3-Python-2.7.14
   HDF5/1.10.1                          Valgrind/3.13.0
   Hypre/2.14.0                         Voro++/0.4.6                   (D)
   Instant/2017.2.0-Python-3.6.4        ZeroMQ/4.2.5
   METIS/5.1.0                   (D)    arpack-ng/3.6.2
   Mako/1.0.7-Python-2.7.14             dijitso/2018.1.0-Python-3.6.4
   Mesa/17.3.6                          imkl/2018.1.163                (L)
   NLopt/2.4.2                          libGLU/9.0.0
   NONMEM/7.3                           matplotlib/2.1.2-Python-2.7.14
   NONMEM/7.4.3                  (D)    matplotlib/2.1.2-Python-3.6.4  (D)
   OpenFOAM/5.0-20180108                netCDF-C++4/4.3.0
   OpenFOAM/6                    (D)    netCDF-Fortran/4.4.4
   PETSc/3.9.3                          netCDF/4.6.0
...
Load a newly available module
$ module load OpenFOAM/6
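A module spider lookup typically proceeds in two steps: first search for the package, then query a specific version to get loading instructions. The package and version below are only examples; the output depends on the installed stacks:

Search for a module and how to load it
$ module spider OpenFOAM       # lists all installed OpenFOAM versions
$ module spider OpenFOAM/6     # tells you which modules (e.g. a toolchain) have to be loaded first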


Job scripts and modules

Please use the same commands to load and unload modules in a job script as you would in an interactive session. It is recommended to use the module purge command at the very beginning to guarantee a clean environment. For further details have a look at the Submitting Jobs section.
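A minimal job script could therefore start as follows. The SLURM directives and the application are placeholders that only illustrate where the module commands belong; adapt them to your own job as described in the Submitting Jobs section:

Example job script skeleton
#!/bin/bash
#SBATCH --nodes=1              # placeholder resource request, adapt to your job
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

module purge                   # start from a clean environment
module load palma/2022a        # select the software stack
module load foss/2022a         # load the toolchain
module load OpenFOAM           # example application, replace with the software you need

srun myapplication             # placeholder, replace with your actual program call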