Wien2k

This page is intended to give some instructions on how to run the wien2k package (http://wien2k.at) on our cluster.

Access

The official build is installed in the user account "wien2k". Access to this account is restricted to users who are members of the unix group "wien2k". The latest version is always linked as /home/wien2k/wien2k. The setup in your .bashrc could look like:

export WIENROOT="/home/wien2k/wien2k"
export PATH="$WIENROOT:$PATH"
export SCRATCH="/tmp"
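
A quick way to verify the setup is to check your group membership and that the WIEN2k programs are found on your PATH. This is only a minimal sketch using the standard unix tools groups and which; lapw1 is one of the main WIEN2k executables:

# check that you belong to the wien2k group
groups | grep wien2k
# check that the WIEN2k binaries are found via $PATH
which lapw1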


Parallel runs in the Grid Engine

To access the compute nodes you have to use the batch system. Please make sure that the SUN grid engine is configured correctly for your account. See the wiki page for further instructions.
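
If you want to check that job submission works for your account, a simple test is to submit a trivial job and watch the queue. This is only a sketch using the standard SGE commands qsub and qstat (qsub reads the job script from stdin when no file is given):

# submit a trivial test job to the dwarfs queue
echo hostname | qsub -cwd -q dwarfs
# list your jobs and their state
qstat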

So far, only the k-point parallelization is working. Here is an annotated example script:

#! /bin/bash
# 
# Sample wien2k script for use with sge in ITP
# adapted from the tcsh version in wien2k/qsub-job0-sge
#
#   $NSLOTS
#       the number of tasks to be used
#   $TMPDIR/machines
#       a valid machine file to be passed to mpirun
# 
# Options passed to qsub (denoted by #$) :
#
# 
# Run in current working directory (in most cases a good idea)
#$ -cwd
# Redirect the STDOUT and STDERR streams to friendlier file names
#$ -o job.out
#$ -e job.err

# select a queue
#$ -q dwarfs
# How many resources do I need (per slot)
#$ -l h_data=2G
#
# Selected parallel environment and number of slots/processes
# the mpi parallel environment is needed, although we do not start an MPI job
#$ -pe mpi 6 

# define the environment (may not be needed if already set in your .bashrc)
export WIENROOT="/home/wien2k/wien2k"
export PATH="$WIENROOT:$PATH"
export SCRATCH="/tmp"

# Restrict the internal MKL parallelization to
# one thread per process.
export OMP_NUM_THREADS=1

# some information
echo "Got $NSLOTS slots." >> job.out
echo "Got $NSLOTS slots." >> job.err

# read the MPI machine file (generated by the SGE)
proclist=`cat $TMPDIR/machines`
nproc=$NSLOTS
echo $nproc nodes for this job: $proclist

# remove any old .machines file
rm -f .machines

# Convert proclist to one line per slot/k-point.
# In a single queue all nodes have equal performance.
for a in $proclist; do
    echo 1:$a >> .machines
done

# This line (instead of the loop above) would force the MPI version
#echo 1:$proclist  >> .machines

echo 'granularity:1' >>.machines
echo 'extrafine:1' >>.machines
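
# With the 6 requested slots the resulting .machines file contains one
# line per k-point group plus the two options above, e.g.
# (hypothetical host names):
#   1:dwarf01
#   1:dwarf01
#   1:dwarf02
#   ...
#   granularity:1
#   extrafine:1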

# Run your calculation
x lapw1 -p
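
Assuming the script is saved as job.sh in your WIEN2k case directory (the file name is only an example), you submit it to the grid engine and watch its progress with the usual SGE commands:

qsub job.sh
qstat

The output and error streams end up in job.out and job.err as requested in the script. For a complete SCF cycle you would typically replace the last line with run_lapw -p instead of running lapw1 alone.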