This page gives you an overview of the most important aspects of the PALMA-II cluster: how to connect to it and what a typical workflow could look like. Specifics for each topic can be found in the corresponding subsections.
PALMA-II consists of several hundred compute nodes. Each node contains two Intel Xeon Gold 6140 CPUs with 18 CPU cores each. Users first have to connect to so-called login nodes to gain access to the compute nodes themselves. Applications or compute jobs can then be started via an interactive session or via job/batch scripts using the Slurm workload manager.
Access
To access the PALMA-II cluster you have to connect to one of the login nodes via an SSH connection using a terminal emulator. If you are on Linux or macOS, you can use the terminal applications that come pre-installed with your OS. If you are on Windows, you can use PuTTY.
SSH address: username@palma.uni-muenster.de
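From a terminal, establishing the connection looks like this (replace `username` with your own university account name):

```bash
# Connect to one of the PALMA-II login nodes
# ("username" is a placeholder for your account name)
ssh username@palma.uni-muenster.de
```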
Filesystem
When you log in to the cluster for the first time, a directory /home/<your-username> is created for you (cf. Linux FHS). Please use this folder only to store your applications; do not store your numerical results here, as storage in /home is limited. For the (numerical) results of your computations, please create the folder /scratch/tmp/<your-username>. This working directory is not intended as an archive, so we kindly ask you to remove your data there as soon as it is no longer needed.
Directory | Purpose |
---|---|
/home/<username> | Personal applications, binaries, libraries etc. |
/scratch/tmp/<username> | Working directory, storage for numerical output (start your applications from here!) |
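A minimal sketch of setting up the working directory described above; the subdirectory name `MySimulation` is only an illustrative placeholder:

```bash
# Create your personal working directory on the scratch filesystem
# ($USER expands to your login name)
mkdir -p /scratch/tmp/$USER

# Optionally keep one subdirectory per project
# ("MySimulation" is a placeholder name)
mkdir -p /scratch/tmp/$USER/MySimulation
```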
Data transfer
To transfer data from and to the cluster, you can either use the scp command from a terminal (Linux & macOS) or a graphical client like WinSCP (Windows) or FileZilla (all platforms).
Some examples using scp are given below:
Task | Command |
---|---|
Copy a local file to your home folder on PALMA-II | scp MyLocalFile username@palma.uni-muenster.de:/home/username/ |
Copy a local folder to your working directory on PALMA-II | scp -r MyLocalDir username@palma.uni-muenster.de:/scratch/tmp/username/ |
Copy a file on PALMA-II to your local computer | scp username@palma.uni-muenster.de:/scratch/tmp/username/MyData/MyResult.dat /PATH/ON/YOUR/LOCAL/COMPUTER/ |
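For instance, to pull an entire results directory from your working directory back to your local machine, run the following on your local computer (`MyResults` is a placeholder name):

```bash
# Recursively copy a results folder from PALMA-II
# into the current local directory
scp -r username@palma.uni-muenster.de:/scratch/tmp/username/MyResults ./
```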
Loading available software
Common applications are provided through a module system and have to be loaded into your environment before use. You can load and unload them depending on your needs and preferences. The most important commands are:
Command | Description |
---|---|
module avail | List currently available modules (software) |
module spider <NAME> | Search for a module with a specific NAME |
module load <NAME> | Load a module into your environment |
module unload <NAME> | Unload a module from your environment |
module list | List all currently loaded modules |
module purge | Unload all modules from your environment |
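A typical sequence might look like the sketch below; `GROMACS` is only an assumed example package name, so check `module avail` or `module spider` for the software actually installed:

```bash
# Search for a package across all module trees
# (GROMACS is an assumed example name)
module spider GROMACS

# Load it into the current environment
module load GROMACS

# Check which modules are now active
module list
```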
Details and specifics can be found in the module system section.
Starting Jobs
Jobs can be started in three different ways using the Slurm workload manager:
- Interactive (giving you direct access to the compute nodes)
- Non-interactive (using Slurm to queue a job directly)
- Batch system (submitting a job script)
Type | Command |
---|---|
Interactive | srun <config options> --pty bash |
Non-interactive | srun <config options> <my-application-to-run> |
Batch | sbatch my_job_script.sh |
All three methods reserve a certain number of CPU cores or nodes for a given amount of time, depending on your settings. Further information about the submission of jobs, configuration options, and example job scripts can be found in the Job scheduling / submission section.
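As an illustration, a minimal batch script could look like the following sketch; the partition name, resource numbers, and the module and application placeholders are assumptions that have to be adapted to your use case:

```bash
#!/bin/bash
#SBATCH --job-name=my_simulation     # name shown in the queue
#SBATCH --nodes=1                    # number of nodes (assumed example)
#SBATCH --ntasks-per-node=36         # tasks per node (assumed example)
#SBATCH --partition=normal           # partition name (assumed example)
#SBATCH --time=12:00:00              # wall-clock time limit

# Load the required software (placeholder module name)
module load <NAME>

# Start from the working directory and run the application
cd /scratch/tmp/$USER/MySimulation
srun <my-application-to-run>
```

Submit the script with `sbatch my_job_script.sh`; `squeue -u $USER` then shows its status in the queue.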
Workflow
An example of a typical workflow on PALMA-II is given below:
Step | Command |
---|---|
1. Connect to a login node of PALMA-II | ssh username@palma.uni-muenster.de
2. Navigate to your working directory | cd /scratch/tmp/username/MySimulation/ |
3. Load the needed software into your environment | module load <NAME>
4. Start your simulation / computation | srun -N 4 -n 36 --partition normal --time 12:00:00 gmx mdrun -v -deffnm NPT
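Put together as a single terminal session, the same workflow reads as follows (the module name and the GROMACS run are placeholders from the table above):

```bash
# 1. Connect to a login node
ssh username@palma.uni-muenster.de

# 2. Change to the working directory
cd /scratch/tmp/username/MySimulation/

# 3. Load the needed software (placeholder module name)
module load <NAME>

# 4. Reserve resources and start the computation
srun -N 4 -n 36 --partition normal --time 12:00:00 gmx mdrun -v -deffnm NPT
```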