Storage at PALMA has to be quite fast to handle data transfers from several hundred nodes at the same time and is therefore quite expensive. Consequently, your scratch folder on PALMA is a place to handle and store simulation results only for a limited amount of time. It is not an archive! We therefore ask you to remove old data from your scratch folder as soon as possible.
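For example, you can check your usage and delete a finished project directly from the login node (old_project is just a placeholder for one of your own directories):

du -sh /scratch/tmp/<username>/*
rm -r /scratch/tmp/<username>/old_project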

PATH | Purpose | Available on
/home/[a-z]/<username> | Storage for scripts, binaries, applications etc. | Login and Compute Nodes
/scratch/tmp/<username> | Temporary storage for simulation input and results | Login and Compute Nodes
/mnt/beeond/ | Temporary local storage on compute nodes | Compute Nodes
/cloud/wwu1/<projectname>/<sharename> | Cloud storage solution / archive | Login Node



Cloud Storage

If transferring data to your local machine is not feasible, take a look at our WWU Cloud storage solutions. This OpenStack cloud can provide so-called usershares (disk space in the form of a network share) which you can access directly from PALMA, but also from your local machine. To do this, you have to take the following steps:

  1. Apply for a WWU Cloud Project (the short application for usershares is enough, if that is all you need)

  2. Once your project is available, create a usershare (Follow the steps as explained here):

    1. Login at openstack.wwu.de

    2. Go to Share → Shares and click on Create Share

    3. Fill in the appropriate information (Name, Protocol, Size, ...)

    4. Your usershare will usually be created within one minute and can be accessed from the login node at: /cloud/wwu1/<projectname>/<sharename>

  3. Transfer data to & from your scratch folder on PALMA to your cloud usershare folder


The cloud usershares are only mounted on the login node of PALMA. You do not have access to them from the compute nodes you are running simulations on. Therefore, you always have to transfer data to your scratch folder first if you need it for your simulations.
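Such a transfer could, for example, be done with rsync on the login node; the directory names results and input below are just placeholders for your own data:

# archive finished results from scratch to the cloud usershare
rsync -av /scratch/tmp/<username>/results /cloud/wwu1/<projectname>/<sharename>/

# fetch archived input data back to scratch before running simulations
rsync -av /cloud/wwu1/<projectname>/<sharename>/input /scratch/tmp/<username>/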



Local node storage → BeeGFS on demand (BeeOND)

If you need fast storage and want to bypass the shared GPFS network storage (where your scratch folder resides), you can make use of the storage attached directly to the compute nodes.

SLURM can provide you with an additional working filesystem for storing temporary data. This storage only exists for the runtime of the current job and is discarded when the job finishes, so do not use it to store anything you need afterwards, or make sure to move everything of importance before the job ends.

Synchronize a folder

You can use beeond-cp stagein to stage your dataset onto BeeOND.

beeond-cp stagein -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond

where /scratch/tmp/<username>/dataset is the path to your dataset. Only changes in your dataset will be copied, but the synchronization is complete: files deleted on BeeOND will also be deleted in your global dataset.

Use beeond-cp stageout to stage your dataset out of BeeOND.

beeond-cp stageout -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond
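Put together, a job script can stage data in at the beginning and stage results out at the end. The following is only a sketch: the resource requests are examples, my_simulation stands for your own program, and how BeeOND is requested for your job is not covered here:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=36
#SBATCH --time=02:00:00

# stage the dataset from scratch onto the job's BeeOND filesystem
beeond-cp stagein -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond

# run your simulation against the fast local copy (placeholder command)
srun ./my_simulation /mnt/beeond/dataset

# stage results back to scratch before the job (and BeeOND) ends
beeond-cp stageout -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond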

Parallel copy

You can use beeond-cp copy to copy directories recursively onto BeeOND in parallel.

beeond-cp copy -n /tmp/slurm_nodelist.$SLURM_JOB_ID dir_1 dir_2 /mnt/beeond

where dir_1 and dir_2 are the directories you want to copy.
