
This page gives you an overview of the most important aspects of the PALMA-II cluster, how to connect to it, and what a typical workflow could look like. Specifics for each topic can be found in the corresponding subsections.

PALMA-II consists of several hundred compute nodes. Each node contains 2 Intel Xeon Gold 6140 CPUs with 18 CPU cores each. Users first have to connect to so-called login nodes to gain access to the compute nodes themselves. Applications or compute jobs can then be started via an interactive session or via job/batch scripts using the Slurm workload manager.


Access

To access the PALMA-II cluster you have to connect to one of the login nodes via an SSH connection using a terminal emulator. If you are on Linux or Mac OS, you can use the terminal application that comes pre-installed with your OS. If you are on Windows, you can use PuTTY.


SSH address:    username@palma.uni-muenster.de

If you need a short introduction on how to use a shell (terminal), please refer to this site.

Beginning with Friday, 2020-05-22, you will need an SSH key to access PALMA from the outside. Documentation on how to copy your key can be found here.
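If you do not have an SSH key yet, you can generate one locally with the standard OpenSSH tools. The following is only a sketch (the key file name is an example); the exact procedure for depositing your public key for PALMA is described in the documentation linked above:

# generate a new key pair on your local machine (file name is an example)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_palma

# connect to the login node using that key
ssh -i ~/.ssh/id_ed25519_palma username@palma.uni-muenster.de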

Filesystem

When you log in to the cluster for the first time, a directory /home/[a-z]/<your-username> is created for you. Please use this folder only to store your applications; do not store your numerical results there, as storage in /home is limited. For (numerical) results of your computations, use the folder /scratch/tmp/<your-username>. The environment variable $WORK automatically points to this scratch folder.
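For example, after logging in you can switch to your scratch folder and create a project directory there (the directory name is just an example):

cd $WORK                 # identical to /scratch/tmp/<your-username>
mkdir -p MySimulation    # example project directory
cd MySimulation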

Directory                   Purpose
/home/[a-z]/<username>      Personal applications, binaries, libraries etc.
/scratch/tmp/<username>     Working directory, storage for numerical output (start your applications from here!)
/mnt/beeond                 Temporary working filesystem, provided on a per-job basis

There is NO BACKUP of the home and working directories on PALMA-II, as these are not intended as an archive. We kindly ask you to remove your data there as soon as it is no longer needed.


Further information can be found at: Data & Storage



Data transfer

To transfer data from and to the cluster, you can either use the scp command from a terminal (Linux & Mac OS) or a graphical client such as WinSCP (Windows) or FileZilla (all platforms).

Examples using scp are given below:

Copy a local file to your home folder on PALMA-II:
    scp MyLocalFile username@palma.uni-muenster.de:/home/[a-z]/username/
Copy a local folder to your working directory on PALMA-II:
    scp -r MyLocalDir username@palma.uni-muenster.de:/scratch/tmp/username/
Copy a file on PALMA-II to your local computer:
    scp username@palma.uni-muenster.de:/scratch/tmp/username/MyData/MyResult.dat /PATH/ON/YOUR/LOCAL/COMPUTER/
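If rsync is available on your local machine (it usually is on Linux and Mac OS), it can be used as an alternative to scp, for example to synchronize a larger directory; the paths below are only examples:

rsync -avz MyLocalDir/ username@palma.uni-muenster.de:/scratch/tmp/username/MyLocalDir/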

Loading available software

Common applications are provided through a module system and have to be loaded into your environment before use. You can load and unload them depending on your needs and preferences. The most important commands are:

Command                 Description
module avail            List currently available modules (software)
module spider <NAME>    Search for a module with a specific NAME
module load <NAME>      Load a module into your environment
module unload <NAME>    Unload a module from your environment
module list             List all currently loaded modules
module purge            Unload all modules from your environment


Details and specifics can be found in the module system section.
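As a sketch, finding and loading GROMACS could look like this (module names and versions are only examples; check module avail and module spider for what is actually installed):

module purge                 # start from a clean environment
module spider GROMACS        # search for available GROMACS versions
module load intel/2019a      # load the required toolchain first
module load GROMACS/2019.1   # load the application itself
module list                  # verify which modules are loaded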

Starting Jobs

Jobs can be started in three different ways using the Slurm workload manager:

  1. Interactive (giving you direct access to the compute nodes)
  2. Non-interactive (using slurm to directly queue a job)
  3. Batch system (submitting a job script)


Type              Command
Interactive       srun <config options> --pty bash
Non-interactive   srun <config options> <my-application-to-run>
Batch             sbatch my_job_script.sh


All three methods will reserve a certain number of CPU cores or nodes for a given amount of time, depending on your settings. Further information about the submission of jobs, configuration options, and example job scripts can be found in the Job scheduling / submission section.
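As an illustration, a minimal job script for the batch method could look like the sketch below; the job name, partition, resource numbers, module versions, and file names are only examples and have to be adapted to your case:

#!/bin/bash
#SBATCH --job-name=my_simulation    # example job name
#SBATCH --nodes=1                   # number of compute nodes
#SBATCH --ntasks-per-node=36        # tasks per node (one per core on a 2 x 18-core node)
#SBATCH --partition=normal          # example partition
#SBATCH --time=12:00:00             # wall-clock time limit
#SBATCH --output=output.%j.log      # output file, %j is replaced by the job id

# load the required software (versions are only examples)
module purge
module load intel/2019a
module load GROMACS/2019.1

# start the parallel computation
srun gmx mdrun -v -deffnm NPT

The script is then submitted with sbatch my_job_script.sh, as shown in the table above.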

Workflow

An example of a typical workflow on PALMA-II is given below:

  1. Connect to the login node of PALMA-II:
     ssh username@palma.uni-muenster.de
  2. Navigate to your working directory:
     cd /scratch/tmp/username/MySimulation/
  3. Load the needed software into your environment:
     module load intel/2019a
     module load GROMACS/2019.1
  4. Start your simulation / computation:
     srun -N 4 -n 36 --partition normal --time 12:00:00 gmx mdrun -v -deffnm NPT

Monitoring

The Ganglia monitoring of node usage is available at https://palma.uni-muenster.de/ganglia, and the Open XDMoD user statistics at https://palma2b.uni-muenster.de/
