Storage at PALMA has to be fast enough to handle data transfers from several hundred nodes at the same time and is therefore quite expensive. Consequently, your scratch folder at PALMA is a place to handle and store simulation results only for an intermediate amount of time. It is not an archive! We therefore ask you to remove old data from your scratch folder as soon as possible.
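One way to find candidates for cleanup is `find` with an age filter. A minimal sketch using a temporary demo directory (on PALMA you would point it at `/scratch/tmp/<username>` instead; the 30-day threshold is an arbitrary example, not a policy):

```shell
# Demo directory standing in for /scratch/tmp/<username>.
SCRATCH=$(mktemp -d)
touch -d '40 days ago' "$SCRATCH/old.dat"  # simulate a stale result file
touch "$SCRATCH/new.dat"                   # recent file that should be kept

# List files untouched for more than 30 days; append -delete to remove them.
find "$SCRATCH" -type f -mtime +30 -print
```

Review the listed files before adding `-delete`, since deletion is irreversible.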
| Path | Purpose | Available on |
| --- | --- | --- |
| `/home/[a-z]/<username>` | Storage for scripts, binaries, applications etc. | Login and compute nodes |
| `/scratch/tmp/<username>` | Temporary storage for simulation input and results | Login and compute nodes |
| `/mnt/beeond/` | Temporary local storage on compute nodes | Compute nodes |
| `/cloud/wwu1/<projectname>/<sharename>` | Cloud storage solution / archive | Login node (r/w), compute nodes (r) |
If transferring data to your local machine is not feasible, take a look at our WWU Cloud storage solutions. This OpenStack cloud can provide so-called usershares (disk space in the form of a network share), which you can access directly from PALMA as well as from your local machine. To do this, take the following steps:
1. Apply for a WWU Cloud project (the short application for usershares is enough if that is all you need).
2. Once your project is available, create a usershare (follow the steps as explained here):
    - Log in at openstack.wwu.de
    - Go to Share → Shares and click on Create Share
    - Fill in the appropriate information (Name, Protocol, Size, ...)
3. Your usershare will usually be created within a minute and can then be accessed from the login node at `/cloud/wwu1/<projectname>/<sharename>`.
4. Transfer data between your scratch folder on PALMA and your cloud usershare folder.
The cloud usershares are mounted read/write on the login node of PALMA. On the compute nodes, for technical reasons, access is read-only, so you cannot write to the usershare from there.
Local node storage → BeeGFS on demand (BeeOND)
If you need fast storage and want to bypass the shared GPFS network storage (where your scratch folder resides), you can use the local storage attached directly to each compute node.
SLURM can provide you with an additional working filesystem for temporary data. This storage exists only for the runtime of the current job and is discarded when the job finishes, so do not store anything there that you need afterwards; move anything important back to scratch before the job ends.
- Each node is equipped with a 480 GB SATA SSD (6 Gbit/s)
- The mount point is /mnt/beeond
- BeeOND is only available for jobs that allocate full nodes
- Further information about BeeOND: https://www.beegfs.io/wiki/BeeOND
Synchronize a folder
You can use `beeond-cp stagein` to stage your dataset onto BeeOND:

    beeond-cp stagein -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond
where /scratch/tmp/<username>/dataset is the path to your dataset. Only changes in your dataset will be copied, but the synchronization is complete: files deleted on BeeOND will also be deleted in your global dataset.
Use `beeond-cp stageout` to stage your dataset out of BeeOND:

    beeond-cp stageout -n /tmp/slurm_nodelist.$SLURM_JOB_ID -g /scratch/tmp/<username>/dataset -l /mnt/beeond
You can use `beeond-cp copy` to copy directories recursively onto BeeOND in parallel:

    beeond-cp copy -n /tmp/slurm_nodelist.$SLURM_JOB_ID dir_1 dir_2 /mnt/beeond
where dir_1 and dir_2 are the directories you want to copy.
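In practice, stage-in and stage-out bracket the actual computation inside one batch script. A minimal sketch of such a job, assuming BeeOND is active for your full-node allocation (the `#SBATCH` values and the `srun ./my_simulation` line are placeholders for illustration; check the cluster documentation for the exact options PALMA requires):

```bash
#!/bin/bash
#SBATCH --nodes=2            # BeeOND requires full-node allocations
#SBATCH --exclusive
#SBATCH --time=02:00:00

# Stage the dataset from scratch onto the fast node-local BeeOND filesystem.
beeond-cp stagein -n /tmp/slurm_nodelist.$SLURM_JOB_ID \
    -g /scratch/tmp/<username>/dataset -l /mnt/beeond

# Run the simulation against the local copy (placeholder command).
srun ./my_simulation /mnt/beeond/dataset

# Sync results back before the job ends; /mnt/beeond is discarded afterwards.
beeond-cp stageout -n /tmp/slurm_nodelist.$SLURM_JOB_ID \
    -g /scratch/tmp/<username>/dataset -l /mnt/beeond
```

If the job hits its time limit before the stage-out runs, everything on BeeOND is lost, so budget time for the final synchronization.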