Getting Started

To use any VASP installation on PALMA-II, you need a valid VASP license!

Please note that all information on this page is based on VASP users' personal experience and may be limited to specific VASP versions (5.4.4) and applications. If your experience differs or you have additional ideas, please feel free to add your information to this page.

Running VASP

VASP can run very efficiently on PALMA-II (CPU efficiency ~ 99.98 %). However, VASP can also run inefficiently if the user input is set in an unfavorable manner. Unfortunately, the best input parameters may depend on the type of calculation performed, the type of node used to carry out the calculation, and the way the VASP version was compiled; in short, individual testing may become necessary. Useful information can be found in the VASP wiki (https://www.vasp.at/wiki/index.php/The_VASP_Manual, especially the INCAR tags NCORE and NSIM, and the lecture about performance).

You can check the CPU efficiency of a job either in the mail sent after the job has finished (if you have the line #SBATCH --mail-type=END in your jobscript), or by adding the line seff $SLURM_JOBID >>efficiency.txt to the end of your jobscript. Please check your CPU efficiency before starting a large number of long calculations, and contact the support team if your efficiency is low.
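As a sketch, the efficiency file can also be checked automatically. The snippet below assumes the usual seff output format (a line starting with "CPU Efficiency:"); the 90 % threshold and the function name check_efficiency are only illustrative:

```shell
#!/bin/bash
# Sketch: warn when the CPU efficiency recorded by
# `seff $SLURM_JOBID >>efficiency.txt` falls below a threshold.
# Assumes seff prints a line like "CPU Efficiency: 99.98% of ...".
check_efficiency() {
    local file="$1" threshold="${2:-90}"
    local eff
    # extract the last reported efficiency value (in percent)
    eff=$(grep -oP 'CPU Efficiency: \K[0-9.]+' "$file" 2>/dev/null | tail -n 1)
    [ -z "$eff" ] && return 0   # no efficiency line found: nothing to check
    awk -v e="$eff" -v t="$threshold" 'BEGIN { exit !(e < t) }' \
        && echo "Warning: CPU efficiency ${eff}% is below ${threshold}% - check your parallelization settings."
    return 0
}

check_efficiency efficiency.txt 90
```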

An example jobscript for calculations submitted to the normal queue can be found below. This jobscript can be efficiently combined with the following tags in the INCAR file:

NSIM = 18      # efficient parallelization for the normal-queue nodes (module VASP/5.4.4-vtst, large systems with > 250 metal atoms)
NCORE = 18     # efficient parallelization for the normal-queue nodes (module VASP/5.4.4-vtst, large systems with > 250 metal atoms)
MAXMEM = 5139  # tells VASP that each task may use 185000 MB / 36 cores per node = 5139 MB. This applies to the normal-queue nodes with 36 physical cores and 192 GB of memory (the jobscript requests only 185 GB to avoid issues here: #SBATCH --mem=185GB)
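The MAXMEM arithmetic above can be reproduced with a short shell calculation (a sketch; 185 GB and 36 tasks are the normal-queue example values, adapt them to your node type):

```shell
#!/bin/bash
# Compute a MAXMEM value (MB per MPI task) from the per-node memory requested
# in the jobscript and the number of tasks per node (normal-queue example values).
mem_mb=185000        # corresponds to #SBATCH --mem=185GB
tasks_per_node=36    # corresponds to #SBATCH --ntasks-per-node=36
# integer arithmetic, rounded to the nearest MB
maxmem=$(( (mem_mb + tasks_per_node / 2) / tasks_per_node ))
echo "MAXMEM = $maxmem"   # 185000 MB / 36 tasks = 5139 MB per task
```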


#!/bin/bash

#SBATCH --nodes=1                              # Request one node for this job. If you wish to use parallelization over k-points (INCAR tag KPAR > 1), using one node per irreducible k-point (see IBZKPT file) is usually a good choice. 
#SBATCH --exclusive                            # ALWAYS set this flag when running VASP. VASP is not designed for sharing nodes and serious problems may occur if other jobs interfere with your calculation.    
#SBATCH --ntasks-per-node=36                   # Adapt this if you submit jobs to different queues on different nodes (--ntasks-per-node=physical-cores-per-node)
#SBATCH --partition normal
#SBATCH --time=7-00:00:00
#SBATCH --job-name=VASP
#SBATCH --mail-type=FAIL,END
#SBATCH --output output.dat
#SBATCH --mail-user=your_name@uni-muenster.de
#SBATCH --mem=185GB                           # Note that there are also nodes with less than 192GB memory in the normal queue. Adapt this flag if you do not require high-memory nodes. It is recommended to set --mem=(memory per node minus approx. 5 GB) as headroom.
#SBATCH --no-requeue                          # Always set this flag when performing anything other than single-point calculations with VASP. Otherwise, VASP may restart the calculation from the beginning (and not from wherever the job was stopped).


ulimit -s unlimited

ml palma/2019a
ml intel/2019a
ml VASP/5.4.4-vtst  # load the VASP/5.4.4-vtst module   


# run VASP and redirect the console output to a file called "logfile" (error messages go to "error_logfile")
srun vasp_std >logfile 2>error_logfile 


seff $SLURM_JOBID  >>efficiency.txt   # optional: print the job efficiency in a file called "efficiency.txt"


Note that, especially for large systems, the VASP output (WAVECAR and CHGCAR files) can become quite large (several GB or even several hundred GB). Submitting a set of VASP calculations may therefore easily fill your /scratch/tmp/user directory. If you do not need these files (see https://www.vasp.at/wiki/index.php/WAVECAR and https://www.vasp.at/wiki/index.php/CHGCAR; both files can easily be re-generated in a single-point calculation), you can set the flags LWAVE = .FALSE. and LCHARG = .FALSE. in your INCAR file to avoid writing them.
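As a sketch, you can check how much space WAVECAR and CHGCAR files occupy before deleting anything; the function name list_vasp_bulk and the default path are only illustrative:

```shell
#!/bin/bash
# Sketch: list WAVECAR/CHGCAR files below a directory, largest first, to see
# what is filling your scratch space. The default path is only an example.
list_vasp_bulk() {
    local dir="${1:-/scratch/tmp/$USER}"
    find "$dir" -type f \( -name WAVECAR -o -name CHGCAR \) \
        -exec du -h {} + 2>/dev/null | sort -rh
}

list_vasp_bulk .   # example: scan the current directory tree
```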



 


