
Chemical Software


Gaussian / GaussView

Please note: to use Gaussian, your account has to be a member of the “gaussian” group. Please mail the helpdesk (helpdesk@mpip-mainz.mpg.de) to be added to it.
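
To check whether your account is already a member of the group, you can list the group names of your account in a terminal:

id -nG | grep -w gaussian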

Gaussian 16

Gaussian 16 binaries have been installed in /sw/linux/gaussian/g16. To use it on the thinc cluster, set the following environment variables in your job script:

export g16root=/sw/linux/gaussian/g16
. $g16root/g16/bsd/g16.profile

export GAUSS_SCRDIR=/usr/scratch/$LOGNAME

For interactive use, set g16root in your .profile or .bashrc and run the remaining lines in the terminal (or script) you are running Gaussian in. Remember to log off and on again after changing .profile and/or .bashrc.
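
For example, simply putting all three lines into your ~/.bashrc works; the paths are the same as in the job script setup above:

export g16root=/sw/linux/gaussian/g16
. $g16root/g16/bsd/g16.profile
export GAUSS_SCRDIR=/usr/scratch/$LOGNAME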

The g16 initialization script sets up an alias “gv” which points to GaussView.

Example SLURM job script

This is an example of a job script for Gaussian, partially half-inched from the documentation for the “stallo” cluster of UiT The Arctic University of Norway.

#!/bin/bash -l

#                  (•_•)
#  SLURM Options   <) )>
################### / \

# Define the partition on which the job will run.  Defaults to
# CPU_Std32 if omitted.
# Partitions currently (December 2023) are:
# - CPU_Std20
# - CPU_Std32
# - CPU_IBm32
# - GPU_Std16
#SBATCH --partition=CPU_Std20


# Define how many nodes you need. Here, we ask for 1 node.
# Only the CPU_IBm32 partition can use more than one node.
#SBATCH --nodes=1


# Number of cores (i.e. `rank' in MPI) (defaults to 1, if omitted):
#SBATCH --ntasks=20


# Mail notifications: for which events, and to whom?
#SBATCH --mail-type=END,FAIL
# SBATCH --mail-user=YOUR_ACCOUNT_NAME@mpip-mainz.mpg.de

###########################################################################
# no bash commands above this line
# all sbatch directives need to be placed before the first bash command


# name of the input file without the .com extension
input=example


## get path to submission directory:
submission_directory=$(scontrol show job "$SLURM_JOB_ID" \
                           | grep -E '^[[:blank:]]*Command=' \
                           | cut -d= -f2 | xargs dirname)
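
## Note: SLURM also exports SLURM_SUBMIT_DIR, the directory sbatch was
## invoked from. If you always submit from the directory holding your
## input files, this simpler alternative would work too:
# submission_directory="${SLURM_SUBMIT_DIR}"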


## don't change this:
scratch_directory_base="/usr/scratch/${LOGNAME}"

## use the job id for temporary directory
working_directory="${scratch_directory_base}/${SLURM_JOB_ID}"


## initialize gaussian
export g16root="/sw/linux/gaussian/g16"
. ${g16root}/g16/bsd/g16.profile


printf "Hi, I am job %s on %s in %s\n" "${SLURM_JOB_ID}" "${HOSTNAME}" "${PWD}"


## Create the working directory where the job will run and produce data
## (mkdir -p is a no-op if the directory already exists):
mkdir -p "${working_directory}"
## let Gaussian put temporary data into the working directory
export GAUSS_SCRDIR="${working_directory}"


## copy data to directory -- if you use checkpoint files and don't
## name them differently on purpose, copy the checkpoint file, too.
cp "${submission_directory}/${input}.com" "${working_directory}"
if [ -f "${submission_directory}/${input}.chk" ]; then
    cp "${submission_directory}/${input}.chk" "${working_directory}"
fi


printf "Starting simulation in %s\n" "${PWD}"
cd "${working_directory}"
${g16root}/g16/g16 <${input}.com >${input}.out
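
## Optional: stop here if Gaussian failed, so the scratch data in
## ${working_directory} is kept for debugging instead of being
## cleaned up below. Uncomment if you want that behaviour:
# if [ $? -ne 0 ]; then exit 1; fi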


# copy output and checkpoint file back
cp "${input}.out" "${submission_directory}"
if [ -f "${input}.chk" ]; then
    cp "${input}.chk" "${submission_directory}"
fi

## clean up behind yourself
rm -rf "${working_directory}"
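
To submit the script (saved here under the hypothetical name g16job.sh) from the directory containing example.com, and to check on the job afterwards:

sbatch g16job.sh
squeue -u $LOGNAME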

Gaussian 09

Gaussian 09 binaries have been installed in /sw/linux/gaussian/g09. To use it on the thinc cluster, set the environment variables shown below in your job script.

For interactive use, if you use bash, add the following lines to your ~/.profile:

export g09root=/sw/linux/gaussian
ulimit -s 65536
. $g09root/g09/bsd/g09.profile

export GAUSS_SCRDIR=/usr/scratch/$LOGNAME

If you still use the legacy tcsh interactively, add these lines to your ~/.login:

setenv g09root /sw/linux/gaussian
limit stacksize 65536
source $g09root/g09/bsd/g09.login

setenv GAUSS_SCRDIR /usr/scratch/$LOGNAME

GAUSS_SCRDIR is the scratch directory Gaussian will use.

Please make sure

  • your job script contains the above initialization lines (your job will probably not run in a login shell)
  • that the GAUSS_SCRDIR directory exists before calling Gaussian
  • to delete all the Gaussian scratch files after your job finishes (see the sketch below)
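
A minimal sketch covering these points, assuming a per-job scratch directory as in the Gaussian 16 example above (input.com is a placeholder name):

export g09root=/sw/linux/gaussian
. $g09root/g09/bsd/g09.profile

export GAUSS_SCRDIR=/usr/scratch/$LOGNAME/$SLURM_JOB_ID
mkdir -p "$GAUSS_SCRDIR"      # make sure the scratch directory exists

g09 <input.com >input.out     # g09 is on the PATH after sourcing g09.profile

rm -rf "$GAUSS_SCRDIR"        # delete the scratch files afterwards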

(The Gaussian 09 Manuals are currently in the hands of Denis or members of his group)

Turbomole

Please note: to use Turbomole, your account has to be a member of the “turbomole” group. Please mail the helpdesk (helpdesk@mpip-mainz.mpg.de) to be added to it.

Run the following commands before using Turbomole:

  • Turbomole 7.3 and 6.5 (set TURBOVERSION to “6.5” to use version 6.5)

    TURBOVERSION=7.3
    export TURBODIR=/sw/linux/turbomole/$TURBOVERSION
    . $TURBODIR/Config_turbo_env
    
  • Turbomole 6.3 and 6.2 (set COSMOVERSION to “10” to use version 6.2)

    COSMOVERSION=11
    TURBODIR=/sw/linux/COSMOlogic${COSMOVERSION}/TURBOMOLE
    PATH=$PATH:$TURBODIR/scripts
    PATH=$PATH:$TURBODIR/bin/$(sysname)
    export TURBODIR PATH
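
After sourcing, the Turbomole programs should be on your PATH. As a quick sanity check (jobex is one of the standard Turbomole driver scripts, sysname the helper script used above):

which jobex    # should resolve to a path below $TURBODIR
sysname        # prints the architecture string used for $TURBODIR/bin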