To reduce redundant installations between different users and to establish a common, hassle-free basis of availability, commonly used software and libraries are installed under the username ag-valenti.
|FPLO||Ubuntu 18.04: 18.00-52; Ubuntu 20.04: 18.00-57, 21.00-61|| |
|VASP||5.4.4, 6.1.1||Only on Ubuntu 18.04 (clusters)|
|Quantum Espresso||6.4, 6.5, 6.8||OK|
|VESTA||3.5.2, 3.5.5, 3.5.7||OK|
|Intel Compiler||2021.2.0, 2019.3.199, 2019.0.117||OK|
|Intel MKL||2021.2.0, 2019.3.199, 2019.0.117||OK|
|Intel MPI||2021.2.0, 2019.3.199, 2019.0.117, 2020.1.217||OK|
|HPCX MPI||Ubuntu 18.04: 2.6; Ubuntu 20.04: 2.10|| |
|Open MPI||4.0.1, 4.1.1||OK|
|GCC||Ubuntu 18.04: 7.5.0; Ubuntu 20.04: 9.3.0|| |
|ALPSCore||2.3.0-rc.1-1||On Ubuntu 20.04|
|ALPSCore CT-HYB||1.0.3||On Ubuntu 20.04|
|Maxent||1.1.1||On Ubuntu 20.04|
Please email requests for the installation of additional software and libraries, as well as questions or contributions to this website, to support.
To access the installation you have to be in the group ag-valenti; otherwise you will get "permission denied" errors. Check this with the terminal command id. The output should look somewhat like this:
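A hedged sketch of what to look for (the uid, gid, and usernames below are made up; the important part is that ag-valenti appears in the groups list):

```shell
id
# Example output (your numbers and names will differ):
# uid=1234(jdoe) gid=100(users) groups=100(users),2001(ag-valenti)
```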
For running jobs on the group-only compute clusters (mallorca, barcelona, dfg-big, dfg-xeon, fplo) you need to be in the group
Please contact the ITP system administrator to be added to these groups.
There is no single path containing all binaries, to avoid confusion and errors. Each user must request the activation of a desired program via a loader script that lies in the user home of ag-valenti. The script should be called like this:

source /home/ag-valenti/activate <option>

This is the preferred method. While setting environment variables manually is possible, it will most likely only lead to errors on the user side. Thus, the only supported and most convenient option is to use the activate script.
Loading different programs can lead to problems in the dynamic linking stage. Please try to avoid unnecessary loading of software and do this in your job script or in separate terminal profiles.
Currently supported options:
For an up-to-date listing of available options please use the --help option of the activate script.
Please do not add this to your .bashrc file as this may have consequences for the stability of other software. Only the default option is considered safe for loading in your .bashrc.
Description of options (activate help menu may be more up to date):
|default||Loads several useful common programs: VESTA, w2k_machines_setup, cif2fplo, clean_fplo, ssubmit, sterminate, asbatch|
|wien||Loads WIEN2k executables init_lapw, lapw0, etc.||19.1, 21.1|
|fplo||Loads FPLO executables fedit, fplo, dirac, xfbp, xfplo||18.00-52, 18.00-57, 21.00-61|
|vasp||Loads VASP executables vasp_std, vasp_ncl, vasp_gam||5.4.4, 6.1.1|
The ag-valenti program loader is designed to host different versions for each program. It is intended to maintain old versions and add new versions rather than performing in-place upgrades. Each user has the freedom to choose from the installed versions.
Usually, the newest installed version is loaded automatically. If a specific version is desired, an argument has to be specified:

source /home/ag-valenti/activate <name>@<version>

Versions typically have the format a.b.c. Available versions for a specific program are listed by

source /home/ag-valenti/activate --help
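As a concrete illustration, pinning one of the FPLO versions listed in the table above (assuming it is installed for your platform) would look like:

```shell
source /home/ag-valenti/activate fplo@18.00-52
```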
Some programs are available as multiple platform-optimized binaries. In this case the correct binaries will be selected automatically by the loader, based on the platform the request is coming from. There is no way to request binaries for a specific platform.
In addition to numerical tools we also host commonly used developer tools and libraries that may be more up-to-date than the system default.
In order to use these, type

source /home/ag-valenti/activate_dev <option>

Available versions for a specific program are listed in the script's help output. For example, to use Eigen do

source /home/ag-valenti/activate_dev eigen
g++ your_code.cpp -o your_executable
Some programs come with documentation embedded in the source tree. In this case we provide a symbolic link to the respective directory in /home/ag-valenti/docs to abstract the installation details from the user. Use

ls /home/ag-valenti/docs

to list all available documentation. Note that these are provided "as is".
In case one uses e.g. WIEN2k and VASP very often in an interactive terminal session, one might be tempted to load both programs in the default .bashrc file. This can lead to problems if the programs use different versions of the same dynamic library. In this case it can be helpful to define custom terminal profiles.

alias loadvasp="source /home/ag-valenti/activate vasp"

#!/bin/bash
gnome-terminal --rcfile=.vaspbashrc

An executable can be created like this:

File: abcterm
#!/bin/bash
gnome-terminal --window-with-profile=<profile_name>

Don't forget chmod u+x abcterm.
Having keyboard shortcuts for often used tasks is very handy. This is how it is done:
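On a GNOME desktop this can be sketched with gsettings (the slot path and key combination are illustrative assumptions, and the abcterm script is the example from above; adjust to your setup):

```shell
# Register one custom keybinding slot (this overwrites an existing list!)
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
# Configure the slot: a name, the command to run, and the key combination
KEY=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding
SLOT=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set $KEY:$SLOT name 'VASP terminal'
gsettings set $KEY:$SLOT command "$HOME/abcterm"
gsettings set $KEY:$SLOT binding '<Ctrl><Alt>v'
```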
A typical job script looks like this:

#!/bin/bash
#SBATCH --partition=PARTITION
#SBATCH --ntasks=NTASKS
#SBATCH --time=dd-hh:mm:ss
#SBATCH --job-name=JOBNAME

# your command to be executed

Description of options:
|partition||name of the partition: itp, itp-big, barcelona, dfg-xeon, dfg-big, fplo|
|ntasks||Number of tasks. This will allocate ntasks processors unless cpus-per-task is also specified.|
|cpus-per-task||Request a number of CPU cores per task. The total number of processor cores allocated is then ntasks*cpus-per-task. Useful for shared memory programs, because a single task is guaranteed to run on one node. Groups of processors belonging to the same task will also sit on the same node.|
|time||Time limit for the allocation in the format dd-hh:mm:ss|
|job-name||Name of your job|
|mem||Requests a specific amount of memory (in MB). This limit can be temporarily exceeded, but will fail the job eventually. Use 5120 or 5G for 5GB.|
|mail-type||Control which status mails will be sent to your email address. NONE, BEGIN, END, FAIL, ALL are common values.|
|nodelist||Request a specific list of nodes. The job will contain all of these nodes and possibly additional hosts as needed to satisfy resource requirements. The list may be specified as a comma-separated list of hosts, a range of hosts (host[1-5,7,...] for example), or a filename. The node list will be assumed to be a filename if it contains a "/" character. If you specify a minimum node or processor count larger than can be satisfied by the supplied node list, additional resources will be allocated on other nodes as needed. Duplicate node names in the list will be ignored. The order of the node names in the list is not important; the node names will be sorted by Slurm. This is very useful to restrict your calculations to specific nodes and to avoid spreading them over the whole cluster. The node names are identical to the ssh names of the individual nodes.|
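A job script combining several of these options might look like this (partition, resource values, and the mail settings are illustrative, not prescribed):

```shell
#!/bin/bash
#SBATCH --partition=itp
#SBATCH --ntasks=8
#SBATCH --time=00-02:00:00
#SBATCH --job-name=example_job
#SBATCH --mem=5G
#SBATCH --mail-type=END,FAIL

# your command to be executed
```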
Use squeue to display the current job queue. A few notable options:
|-u <user>||show only the jobs belonging to the given user|
|-p <partition>||show only the jobs running and queueing on the given partition|
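For example (standard Slurm flags; barcelona is one of the partition names used on this page):

```shell
squeue -u $USER
squeue -p barcelona
```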
Use save_lapw -d <directory> to store all input files and all necessary files for restoring a calculation to <directory>. For more information use the help option of save_lapw. After changing input files or after another calculation, the previous run can be restored with restore_lapw -d <directory>, provided save_lapw has been used before. Once your calculation is completed, clean up the directory with clean_lapw. This keeps only the important files and deletes the large files. Should those be needed later on, they can be recovered quickly; for this, check which subprogram creates which file in the user guide.
Tools that help with common WIEN2k related tasks are collected in this suite. Currently it contains:
|…||Cleans large files (like vector files).|
|w2k_machines_setup||Generates a valid .machines file for parallel runs.|
|fix_wannier90_hopping||Converts Wannier90 hopping file to a format without degeneracies.|
A valid job script for parallel jobs looks like this:
#!/bin/bash
#SBATCH --partition=barcelona
#SBATCH --ntasks=40
#SBATCH --time=00-10:00:00
#SBATCH --job-name=my_job_name

. /home/ag-valenti/activate wien
w2k_machines_setup
run_lapw -p -ec 0.0001 -cc 0.0001
k-parallelization uses process communication via SSH. Be sure to set up an unprotected SSH public-private key pair via ssh-keygen and add the public key to your ~/.ssh/authorized_keys. Otherwise you will face "Permission denied" errors. Unfortunately, due to the way k-point parallelization works, it is necessary to have the activate line in the .bashrc file if you are using the parallel mode (-p).
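The key setup can be sketched with standard OpenSSH commands ("unprotected" means an empty passphrase; the ed25519 key type and file names are common defaults, not prescribed by this page):

```shell
# Create ~/.ssh with safe permissions if it does not exist yet
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate a key pair with an empty passphrase (-N "")
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Authorize the public key for logins to this same account
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```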
FPLO executables ship with a default naming convention that includes the version number a.b-c in the executable name. This allows for the parallel installation of different versions. Since we use a different version management, which allows loading only a specific version, this tedious naming scheme is unnecessary. Therefore, in addition to the default binaries, we provide convenient symbolic links (fplo, fedit, dirac, xfplo, xfbp) which can be used independently of the specific version of FPLO that is used.
A sample job script is provided below:
#!/bin/bash
#SBATCH --partition=dfg-fplo
#SBATCH --ntasks=1
#SBATCH --mem=6G
#SBATCH --time=00-01:00:00
#SBATCH --job-name=my_job_name

. /home/ag-valenti/activate fplo
fplo
Note that the version can be changed in the argument to the activate script.
Since FPLO binaries are statically linked, one can safely load FPLO along with any other program, even other versions of FPLO. In that case the shortened executable names become ill-defined and the longer standard names should be used instead.
The FPLO input file =.in is created by the editor fedit. Provided as part of the default set of programs, cif2fplo (originally written by Milan Tomić) allows one to automatically convert a CIF file to FPLO input, which can then be edited with fedit. Usage is kept rather simple:

cif2fplo filename

where filename is the name or path to an existing CIF file. The =.in file will be created in the current working directory of the shell.
FPLO files follow a very annoying naming convention. Annoying because they start with characters like = and + and therefore have to be typed in quotes. We provide a very simple tool, clean_fplo, to delete all or a selection of these files; use clean_fplo -h for a list of available options. This is especially useful if you've accidentally opened fedit in e.g. your home directory and want to get rid of the files it created.
FPLO ships with a very powerful and handy Python library pyfplo. When loading FPLO you will also have access to pyfplo. With version 21.00-61, one can finally use PyFPLO with Python3!
VASP requires specific input files that are distributed along with the software.
You can find the files at
A sample job script is provided below:
#!/bin/bash
#SBATCH --partition=dfg-xeon
#SBATCH --ntasks=16
#SBATCH --mem=100G
#SBATCH --time=00-01:00:00
#SBATCH --job-name=my_job_name

. /home/ag-valenti/activate vasp
mpirun -np 16 vasp_std
Access the Pearson Crystal Database setup on a virtual machine:
A connection can only be established from within the ITP network. Use a VPN otherwise.
Since this is a Windows virtual machine only one login is allowed at a time. Please log out after you are done!
Additional help regarding the usage of the software can be found on the official website.
There is a shared Owncloud folder where slides from the group seminar can be shared (only) with other members of the group.
Files or folders therein should follow the naming convention:
This way it is easier to keep on top of everything. If you have more than one file to share make a folder, otherwise just upload the file.
To access this folder your ITP account needs to be in the Linux group
This section is still up for discussion! Input is still welcome until these measures are put in place.
Here we provide some guidelines as to how data is supposed to be handled. In the following, data refers to published work only.
In order to assure that data is reproducible and available independent of the current staff all information has to be collected in a general repository.
A data set consists of the following elements:
|Source code||Source code of the program used to produce the data. In case of licensed software the version number is sufficient. Add a README.|
|Input files||All input files corresponding to the particular calculation. This includes a script to run the program. Add metadata (parameters used).|
|Output files||Files produced by the program. Only those important for the discussion in the paper are necessary, e.g. files containing the band structure. If a script produced the final data file from a program output file, add that here. Please try to combine post-processing scripts into one, or add an explanatory README.|
|Plot scripts||Scripts that generate the exact plot found in the paper using data from a specific data set. Make sure that it is clear which data is used. If raw plots have been post processed add a note containing a short explanation of what was changed.|
|LaTeX source||The complete LaTeX source for the reproduction of the manuscript. This includes the figures. You can add additional notes that did not end up in the paper in a directory separated from the paper.|
For different codes, please provide at least the following files.
|General||README, Source code|
|Input files||Configuration files|
|Output files||Program output (if not too big, at least description in README)|
|Input files||case.in*, case.struct, case.klist||case.insp, case.klist_band||case.int||self generated input files|
|Output files||case.dayfile, :log, case.scf (if not too big)||case.band*.agr (if not too big)||case.dosev*||respective output file|
|Input files||=.in||=.in||=.in||=.in, respective input files|
|Output files||version number, out (if not too big)||+band / +bweights (if not too big)||+*dos* (if not too big)||respective output files|
|Input files||INCAR, POSCAR, KPOINTS, filenames of POTCAR||INCAR, POSCAR, KPOINTS, filenames of POTCAR||INCAR, POSCAR, KPOINTS, filenames of POTCAR||INCAR, POSCAR, KPOINTS, filenames of POTCAR||Respective input files|
|Output files||CONTCAR, OUTCAR (if not too big)||Version number, OUTCAR (if not too big)||band data (if not too big)||dos data (if not too big)||Respective output files|
The name of the repository should have the format arxivID-short-name-of-paper.
Below an exemplary repository structure is provided. Click to expand.
After structuring the data, the corresponding folder has to be stored in
Make sure that it has access permissions drwxrwxr-x (chmod -R 775 folder_name).
On the Goethe-HLR things are a bit different. Currently no DFT codes are set up. Users who run their own codes are asked to follow these rules:
Keep your .bashrc as clean as possible and try to avoid modifying it unnecessarily.
Modules are managed via the Modules utility. You only need the command
module. Below we list common usage:
|module avail||Shows all available modules|
|module load <name>||Load the specified module into the environment|
|module unload <name>||Unload the specified module from the environment|
Per default only global modules will be shown. This may be enough. However, it is recommended that you add the following to your .bashrc:
This will add more group-wide installed modules to your list. Typically you will only need to load the Intel compiler (if preferred) and an MPI implementation. We recommend mpi/intel/2019.5 with the Intel compiler and mpi/openmpi/3.1.2-gcc-8.2.0 with the GNU compiler. If in doubt, feel free to ask.
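The typical sequence on a login node, using the recommended module names, would be:

```shell
module avail
module load mpi/intel/2019.5
module list
```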
Please do not modify anything in /home/compmatsc/public yourself. This will affect all users of the group and you may involuntarily break something. Email support instead.