ORCA 2016


Introduction

The program ORCA is a modern electronic structure program package that is able to carry out geometry optimizations and to predict a large number of spectroscopic parameters at different levels of theory. Besides Hartree-Fock theory, density functional theory (DFT) and semiempirical methods, high-level ab initio quantum chemical methods based on configuration interaction and coupled cluster theory are included in ORCA to an increasing degree.

For more details please refer to the official home page of ORCA, where you can also find thorough documentation on using the program. Note that ORCA is free of charge for non-commercial use, and by using ORCA on the cluster you are accepting the ORCA license. In particular, any scientific work using ORCA should at least cite

F. Neese: The ORCA program system (WIREs Comput Mol Sci 2012, 2: 73-78)

as well as other related works as appropriate.

Below, a short introduction to using ORCA on the cluster is given.

Installed version

The currently installed versions of ORCA are 3.0.3 and 4.0.0.

You can always check this yourself with the command "module av ORCA":

$ module av ORCA

------------------------ /cm/shared/uniol/modules/chem -------------------------
   ORCA/3.0.3    ORCA/4.0.0 (D)

  Where:
   D:  Default Module
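
If a specific version is needed, it can be requested explicitly when loading the module; loading the module without a version gives the default (marked with D above):

$ module load ORCA/3.0.3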

Using ORCA on the HPC cluster

Since there are many people working with the HPC cluster, it is important that everyone has an equal chance to do so. Therefore, every job should be processed by SLURM.

For this reason, you have to create a jobscript for your tasks.

Serial run

#!/bin/bash 

#SBATCH --partition=carl.p
#SBATCH --time=1-00:00:00
#SBATCH --mem=2G 
#SBATCH --job-name ORCA-SERIAL-TEST
#SBATCH --output=orca-serial-test-%j.out
#SBATCH --error=orca-serial-test-%j.err 

module load ORCA

# name of the model, must be identical to the basename of the .inp file
MODEL=TiF3

# full path of the ORCA executable (ORCA should be started with its full path,
# in particular for parallel runs)
ORCAEXE=`which orca`

# extensions of the input files copied to $TMPDIR before the run
INPUTEXT="inp xyz"
# extensions of additional output files to save from $TMPDIR after the run
# (empty by default, see the notes below the script)
OUTPUTEXT=""

# preparing $TMPDIR for the run by copying the input files
for ext in $INPUTEXT
do
   if [ -e $MODEL.$ext ]
   then
      echo "Copying $MODEL.$ext to TMPDIR"
      cp $MODEL.$ext $TMPDIR/${MODEL}_${SLURM_JOB_ID}.$ext
   fi
done

# change to $TMPDIR for running ORCA
cd $TMPDIR

# run ORCA, writing the log file directly to $WORK
$ORCAEXE ${MODEL}_${SLURM_JOB_ID}.inp > $WORK/${MODEL}_${SLURM_JOB_ID}.out

# saving additional output files from $TMPDIR
for ext in $OUTPUTEXT
do
   if [ -e ${MODEL}_${SLURM_JOB_ID}.$ext ]
   then
      echo "Copying ${MODEL}_${SLURM_JOB_ID}.$ext to $WORK"
      cp ${MODEL}_${SLURM_JOB_ID}.$ext $WORK
   fi
done

The job script requires additional input files for ORCA, in this case TiF3.inp and TiF3.xyz, and all three files have to be placed in the same directory. Note: downloaded example files have to be unzipped first.
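
For illustration, a minimal ORCA input for such a job could be created as in the following sketch. The method/basis keywords and the placeholder geometry are only examples (not the contents of the downloadable TiF3 files, which keep the geometry in a separate .xyz file); consult the ORCA manual for the input of your actual calculation.

# create a minimal, self-contained ORCA input
# (illustrative keywords and approximate placeholder geometry only)
cat > TiF3.inp << 'EOF'
! BP86 def2-SVP
* xyz 0 2
Ti   0.000   0.000   0.000
F    1.780   0.000   0.000
F   -0.890   1.541   0.000
F   -0.890  -1.541   0.000
*
EOF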

Once the job script and your input files are ready, a job can be submitted as usual with the command:

sbatch orca_serial_test.job
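
After submission, the status of your jobs can be checked with the usual SLURM commands, for example:

$ squeue -u $USER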

The job script works roughly in the following way:

  1. the ORCA module is loaded and the name of the model is set (must be identical to the name of the .inp file)
  2. all input files (identified by the model name and the given extensions) are copied to $TMPDIR; more files can be included by adding their extensions to the variable INPUTEXT
  3. the directory is changed to $TMPDIR and the run is started; the ORCA log file (extension .out) is written directly to $WORK
  4. all other files are created in $TMPDIR, which is automatically deleted after the job; if additional files need to be saved, their extensions have to be added to the variable OUTPUTEXT (empty by default, see the example below)
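
As an example of the last point, additional result files such as the wavefunction file (.gbw) or the final geometry (.xyz) could be kept by adjusting OUTPUTEXT in the job script, for example:

# illustrative extensions; adjust to the files your calculation actually produces
OUTPUTEXT="gbw xyz"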

Parallel run

If you have managed to run a serial ORCA job on the cluster, you are already pretty close to knowing how to run a parallel one. You have to change the following parts of your jobscript:

  1. add "#SBATCH --ntasks=16" to your jobscript file (16 cores are used for the test case here; you can increase or decrease the number to match the needs of your actual job).
  2. change the "MODEL" variable in the jobscript file. For this example, you have to change it to "silole_rad_zora_epr_pbe0".
  3. add the following lines of code to your jobscript (before the input files are copied to $TMPDIR):
# modify the input file to match the number of tasks requested from SLURM
SETNPROCS="%pal nprocs $SLURM_NTASKS"
OPAL=`grep %pal $MODEL.inp`
sed -i "/^%pal/c$SETNPROCS" $MODEL.inp
NPAL=`grep %pal $MODEL.inp`
echo "changed $OPAL to $NPAL in $MODEL.inp"
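
Note that the sed command only replaces an existing line starting with %pal; if the input file contains no such line, nothing is changed. A simple check like the following (a sketch using the variables from the job script above) could be added to catch that case:

# warn if the input file has no %pal line for sed to replace
grep -q '^%pal' $MODEL.inp || echo "WARNING: no %pal line found in $MODEL.inp"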

Troubleshooting

In case of problems the following hints may help you to identify the cause:

  1. check the SLURM log files defined by --output and --error in the job script (e.g. orca-serial-test-<job-id>.out and .err) as well as the ORCA log file (<model>.out) for error messages.
  2. check the exit status of the job by using
sacct -j <job-id>
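
A more detailed view of the accounting data, using standard sacct fields, can be requested like this:

sacct -j <job-id> --format=JobID,JobName,State,ExitCode,MaxRSS,Elapsed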

The output of sacct includes, among other things, the state of the job (e.g. COMPLETED, FAILED, TIMEOUT) and its exit code. An exit code of 137, for example, corresponds to signal 9 (137 = 128 + 9) and usually indicates that a resource (memory, run time, file size) was over-used and the job was killed.

If you need help identifying the problem, you can contact Scientific Computing. Please include the job ID in your request.

Documentation

The full documentation of the most recent version of ORCA (currently 4.0.0) can be found here (PDF viewer required).