Introduction

The CIT (Center for Information Technology) provides HPC resources to help researchers conduct computationally demanding tasks. For this purpose, we host the PALMA-II HPC cluster. It is meant for running applications that are highly parallelized and can take advantage of hundreds or thousands of cores simultaneously and/or need very large amounts of memory.

The HPC team maintains the cluster and strives to keep it operational 24/7. We also provide a default set of software for use on the cluster and do our best to help with installing or optimizing the programs you want to run. However, our time is limited, and some applications are simply not suited for this use case. Please inform yourself about which resources you require and whether you can actually make use of an HPC cluster.
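
As a hedged sketch of how centrally provided software is commonly accessed: the commands below assume the cluster uses an environment-modules system (such as Lmod), which is typical for HPC installations; the module name GCC/10.2.0 is an illustrative placeholder, not a confirmed module on this cluster.

    # list the software modules available on the cluster
    module avail

    # load a module to make its software available (name is a placeholder)
    module load GCC/10.2.0

    # show the modules currently loaded in this session
    module list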

What HPC is not

The cluster is not a very fast computer replacing your laptop or workstation. Programs on HPC clusters normally do not use a GUI; calculations are typically submitted to a batch system via job scripts. Users should be familiar with the Linux command line and should thoroughly read the information provided in this wiki.
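
A minimal job-script sketch follows, assuming the batch system is SLURM; the resource values and the program name ./my_program are illustrative placeholders, not defaults of this cluster.

    #!/bin/bash
    #SBATCH --job-name=example      # name shown in the queue
    #SBATCH --nodes=1               # number of nodes to allocate
    #SBATCH --ntasks=4              # number of parallel tasks (e.g. MPI ranks)
    #SBATCH --time=01:00:00         # wall-clock time limit (hh:mm:ss)
    #SBATCH --mem=8G                # memory for the whole job

    # srun launches the requested number of parallel tasks
    srun ./my_program

Assuming SLURM, the script would be submitted with sbatch and the queue inspected with squeue; see the Getting started section for the specifics of this cluster.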

Users who are completely new to the Linux command line and HPC are strongly advised to take the Introduction to Linux in HPC tutorial.


Requirements

You have to be a member of a registered research group of the University of Münster. Detailed information about the registration process can be found here.

Publications

We kindly ask you to record any publication created with the help of our HPC systems in the Research portal, with an appropriate note. Instructions can be found here.

Furthermore, please consider acknowledging the use of the PALMA II cluster in your publications:

Acknowledgement

Calculations (or parts of them) for this publication were performed on the HPC cluster PALMA II of the University of Münster, subsidised by the DFG (INST 211/667-1).

News on HPC

Get important updates and news on HPC topics through our mailing list and have a look at the News-Archive page!

Join the HPC-Users channel on Mattermost.

Getting started

A first overview can be found in the Getting started section. More specifics can be found on the individual pages of this wiki.

Lectures

We offer a one-week Introduction to Linux for HPC seminar. Users with and without prior knowledge of Linux and HPC can choose to participate for the whole week or only on individual days covering specific topics. For more information, follow the provided link.

Contact & Support

If you have questions regarding High Performance Computing, you can write to hpc@uni-muenster.de (in English or German).

A few Dos and Don'ts when writing:

  • Please use your uni-muenster.de e-mail address and specify your user account when contacting us.
  • Don't attach log files, submission scripts, or the like. Instead, send the path to these files on the cluster, if possible.

HPC consulting hour

We also offer an HPC consulting hour ("Sprechstunde") via Zoom, where you can talk directly to one of our support staff:

  • Every Tuesday from 11 am to 12 pm
  • The Zoom link is posted in the Mattermost HPC-Users channel (see above).

HPC.NRW

The WWU is part of the North Rhine-Westphalia Competence Network for High Performance Computing (HPC.NRW).

It offers a competent first point of contact and a central advisory hub with a broad knowledge base for HPC users in NRW.

Within the framework of the competence network HPC.NRW, a network of thematic clusters has been created to provide low-threshold training, consulting, and coaching services. The aim is to enable effective and efficient use of high-performance computing and storage facilities, to support scientific researchers at all levels, and to present the resources and services the state has to offer in a transparent way.

General HPC Wiki

In a joint effort of different German universities, many useful tips and general information about HPC can be found at https://hpc-wiki.info.


Cluster specifications

  • Manufacturer: MEGWARE
  • Cores: 16,272
  • Memory: 77,568 GB
  • Nodes: 444
  • Processor: Intel Xeon Gold 6140 18C @ 2.30 GHz (Skylake)
  • Interconnect: 100 Gbit/s Intel Omni-Path
  • GPFS storage: 2.4 PB
  • Linpack performance: Rmax 800 TFlop/s, Rpeak 1,277 TFlop/s
  • OS: CentOS 7
