
This page gives you an overview of the most important aspects of the PALMA-II cluster, how to connect to it, and what a typical workflow could look like. Specifics for each topic can be found in the corresponding subsections.

PALMA-II consists of several hundred compute nodes. Every node contains two Intel Xeon Gold 6140 CPUs with 18 CPU cores each. Users first have to connect to so-called login nodes to gain access to the compute nodes themselves. Applications or compute jobs can then be started via an interactive session or via job/batch scripts using the Slurm workload manager.


Access

To access the PALMA-II cluster, you have to connect to one of the login nodes via an SSH connection using a terminal emulator. If you are on Linux or macOS, you can use the terminal applications that come pre-installed with your OS. If you are on Windows, you can use PuTTY.


SSH address:    username@palma.uni-muenster.de
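
For example, assuming the placeholder username jdoe, the connection is established like this:

    # jdoe is a placeholder; use your own university username
    ssh jdoe@palma.uni-muenster.de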

If you need a short introduction on how to use a shell (terminal), please refer to this site.


Filesystem

When you log in to the cluster for the first time, a directory /home/[a-z]/<your-username> is created for you. Please use this folder only to store your applications; do not store your numerical results here, as storage in /home is limited. For the (numerical) results of your computations, please create the folder /scratch/tmp/[a-z]/<your-username>.

Directory                        Purpose
/home/[a-z]/<username>           Personal applications, binaries, libraries etc.
/scratch/tmp/[a-z]/<username>    Working directory, storage for numerical output (start your applications from here!)

There is NO BACKUP of the home and working directory on PALMA-II, as these are not intended as an archive. We kindly ask you to remove your data there as soon as it is no longer needed.
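
As a sketch, the working directory can be created right after the first login; the j/jdoe part below is a placeholder, so substitute the [a-z] letter and your own username:

    # Create the personal working directory on the scratch filesystem
    # (j/jdoe is a placeholder path; adjust letter and username)
    mkdir -p /scratch/tmp/j/jdoe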

Shell Access

Please find more information on how to use the command line in the HPC Wiki.


Data transfer

To transfer data from and to the cluster, you can either use the scp command from a terminal (Linux & macOS) or a graphical client such as WinSCP (Windows) or FileZilla (all platforms).

Examples using scp are given below:

Copy a local file to your home folder on PALMA-II:

    scp MyLocalFile username@palma.uni-muenster.de:/home/[a-z]/username/

Copy a local folder to your working directory on PALMA-II:

    scp -r MyLocalDir username@palma.uni-muenster.de:/scratch/tmp/[a-z]/username/

Copy a file on PALMA-II to your local computer:

    scp username@palma.uni-muenster.de:/scratch/tmp/[a-z]/username/MyData/MyResult.dat /PATH/ON/YOUR/LOCAL/COMPUTER/

Loading available software

Common applications are provided through a module system and have to be loaded into your environment before use. You can load and unload them depending on your needs and preferences. The most important commands are:

Command                 Description
module avail            List currently available modules (software)
module spider <NAME>    Search for a module with a specific NAME
module load <NAME>      Load a module into your environment
module unload <NAME>    Unload a module from your environment
module list             List all currently loaded modules
module purge            Unload all modules in your environment


Details and specifics can be found in the module system section.
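
A short example session is sketched below: it searches for a module, loads the modules used in the workflow example further down, checks what is loaded, and cleans up again. The module names intel/2019a and GROMACS/2019.1 are taken from that example and may differ on the current system:

    # Search for available GROMACS versions
    module spider GROMACS
    # Load a toolchain and the application (names as in the workflow example)
    module load intel/2019a
    module load GROMACS/2019.1
    # Show everything that is currently loaded, then unload it all
    module list
    module purge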

Starting Jobs

Jobs can be started in three different ways using the Slurm workload manager:

  1. Interactive (giving you direct access to the compute nodes)
  2. Non-interactive (using slurm to directly queue a job)
  3. Batch system (submitting a job script)


Type              Command
Interactive       srun <config options> --pty bash
Non-interactive   srun <config options> <my-application-to-run>
Batch             sbatch my_job_script.sh


All three methods reserve a certain number of CPU cores or nodes for a given amount of time, depending on your settings. Further information about the submission of jobs, configuration options, and example job scripts can be found in the Job scheduling / submission section.
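
For example, an interactive session on a compute node could be requested like this (the resource values are placeholders and should be adapted to your needs):

    # Request 1 node with 8 CPU cores for one hour and open a shell on it
    # (values are placeholders; partition "normal" as in the workflow below)
    srun --nodes=1 --ntasks=8 --partition normal --time 01:00:00 --pty bash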

Workflow

An example of a typical workflow on PALMA-II is given below:

1. Connect to a login node of PALMA-II:

    ssh username@palma.uni-muenster.de

2. Navigate to your working directory:

    cd /scratch/tmp/[a-z]/username/MySimulation/

3. Load the needed software into your environment:

    module load intel/2019a
    module load GROMACS/2019.1

4. Start your simulation / computation:

    srun -N 4 -n 36 --partition normal --time 12:00:00 gmx mdrun -v -deffnm NPT
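
The same computation could also be submitted as a batch job (see Starting Jobs above). Below is a sketch of a corresponding my_job_script.sh, assuming the resources and modules from the workflow example; it would be submitted with sbatch my_job_script.sh:

    #!/bin/bash
    #SBATCH --nodes=4            # same resources as the srun call in step 4
    #SBATCH --ntasks=36
    #SBATCH --partition=normal
    #SBATCH --time=12:00:00

    # Load the toolchain and the application (as in step 3)
    module load intel/2019a
    module load GROMACS/2019.1

    # Launch the parallel run across the allocated tasks
    srun gmx mdrun -v -deffnm NPT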

Visualization

For the visualization of large data sets, it is impractical to copy them to your local machine. We therefore offer two solutions for doing the post-processing on PALMA-II. Since the CPUs are quite fast, the rendering is done in software.

VNC

Prerequisites: You need a local installation of TurboVNC.

  1. Log in to PALMA-II and call

    ml vis/vnc
    vnc.sh
  2. Wait until the session has started and follow the instructions of the script (ssh to the compute node and start your local TurboVNC)

  3. Open a terminal in the VNC window and enter "module add intel Mesa" or "module add foss Mesa"
  4. Start an application with GUI

XPRA

Xpra runs in your web browser.

Xpra currently only works with Google's Chrome browser; other browsers may have problems with the keyboard layout.

  1. Log in to PALMA-II and call

    ml vis/xpra
    xpra.sh
  2. Wait until the session has started and follow the instructions of the script (ssh to the compute node and point your browser to the given address; select the German keyboard layout in the "Advanced Options" tab)

  3. Open a terminal in the XPRA window and enter "module add intel Mesa" or "module add foss Mesa"
  4. Start an application with GUI

