Storage at PALMA has to be fast enough to handle data transfers from several hundred nodes at the same time and is therefore quite expensive. Consequently, your scratch folder on PALMA is a place to handle and store simulation results only for a limited amount of time. It is not an archive! We therefore ask you to always remove old data from your scratch folder as soon as possible.

PATH | Purpose | Available on | Protection against data loss
/home/[a-z]/<username> | Storage for scripts, binaries, applications etc. | Login and Compute Nodes | Nightly snapshots to recover accidentally deleted files (planned). No backup.
/scratch/tmp/<username> | Temporary storage for simulation input and results | Login and Compute Nodes | No backup, no snapshots.
/mnt/beeond/ | Temporary local storage on compute nodes | Compute Nodes | Data will be deleted after the end of the job.
/cloud/wwu1/<projectname>/<sharename> | Cloud storage solution / archive | Login Node (r/w), Compute Nodes (r) | No backup, no snapshots.



Recommended data storage policy

The recommended way to use the different storage locations is the following:

  • Put your own programs, scripts, etc. in your home directory.
  • We strongly suggest using Git in combination with the university's GitLab for versioning.
  • During all of your simulations, write data only to scratch.
  • If you no longer need your data on scratch, delete it, copy it to local facilities, or create a project in the cloud and archive your data there (see the sketch after this list).
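
As a minimal sketch of the archive-and-clean-up step, run on the login node and assuming that rsync is available there and that myproject stands for one of your result folders:

rsync -av /scratch/tmp/<username>/myproject/ /cloud/wwu1/<projectname>/<sharename>/myproject/
# compare sizes as a quick sanity check before removing the originals from scratch
du -sh /scratch/tmp/<username>/myproject /cloud/wwu1/<projectname>/<sharename>/myproject
rm -rf /scratch/tmp/<username>/myproject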

Cloud Storage

If transferring data to your local machine is not feasible, take a look at our University Cloud Storage solutions. This OpenStack cloud can provide so-called usershares (disk space in the form of a network share), which you can access directly from PALMA as well as from your local machine. To do this, you have to take the following steps:

  1. Apply for a University Cloud Project

  2. Once your project is available, create a usershare (Follow the steps as explained here):

    1. Log in at openstack.wwu.de

    2. Go to Share → Shares and click on Create Share

    3. Fill in the appropriate information (Name, Protocol, Size, ...)

    4. Your usershare will usually be created within one minute and can be accessed from the login node at: /cloud/wwu1/<projectname>/<sharename>

  3. Transfer data between your scratch folder on PALMA and your cloud usershare folder (see the example below)


The cloud usershares are mounted read/write on the login node of PALMA. On the compute nodes, for technical reasons, you only have read access and therefore cannot write to the usershare from there.
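
In the other direction, you would typically stage input data from the usershare onto scratch on the login node before submitting a job. A minimal sketch, assuming input_data stands for a folder inside your usershare:

cp -r /cloud/wwu1/<projectname>/<sharename>/input_data /scratch/tmp/<username>/

After the job has finished, copy results from scratch back to the usershare in the same way, again on the login node.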



Local node storage → BeeGFS on demand (BeeOND)

If you need fast storage and want to bypass the shared GPFS network storage (where your scratch folder resides), you can make use of the local storage attached directly to the compute nodes.

SLURM can provide you with an additional working filesystem for temporary data. This storage only exists for the runtime of the current job and is discarded when the job finishes, so do not store anything there that you still need afterwards, and be sure to move anything important back to scratch before the job ends.

Synchronize a folder

You can use beeond-cp stagein to stage your dataset onto BeeOND.

beeond-cp stagein -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond

where /scratch/tmp/<username>/dataset is the path to your dataset. Only changes in your dataset will be copied. Note that the synchronization is complete: files deleted on BeeOND will also be deleted in your global dataset.

Use beeond-cp stageout to stage your dataset out of BeeOND.

beeond-cp stageout -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond
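
In practice, stagein and stageout are typically combined in a SLURM job script. The following is a minimal sketch, assuming that BeeOND is provisioned for the job and mounted at /mnt/beeond, that the node list file /tmp/slurm_nodelist.$SLURM_JOB_ID exists for the job, and that the partition name and the my_simulation command are placeholders for your own setup:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --time=02:00:00
#SBATCH --partition=normal   # placeholder partition name

# stage the dataset from scratch onto the job-local BeeOND filesystem
beeond-cp stagein -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond

# run your application against the fast local copy (placeholder command)
srun my_simulation --input /mnt/beeond/dataset

# synchronize the (changed) dataset back to scratch before the job ends
beeond-cp stageout -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond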

Parallel copy

You can use beeond-cp copy to copy directories recursively onto BeeOND in parallel.

beeond-cp copy -n /tmp/slurm_nodelist.$SLURM_JOB_ID dir_1 dir_2 /mnt/beeond

where dir_1 and dir_2 are the directories you want to copy.
