Linux Binary Gaussian 09 Installation Instructions

Taken and modified from the README.BIN for my environment. This deserves highlighting so that administrators can set it up quickly.

  1. Check that you have the correct versions of the OS and libraries for your machine, as listed in the G09 platform list on the Gaussian website.
  2. Select or create a group (e.g. g09) in /etc/group which will own the Gaussian files. Users who will run Gaussian should either already be in this group, or should have it added to their list of groups.
  3. Create a directory to hold g09 and gv (for example, gaussian). You can do this with the command
    mkdir gaussian
  4. Mount the Gaussian CD using a command like this one
    mount /mnt/cdrom 
  5. From the CD, copy the Gaussian binary archive (E64_930N.TGZ) into your newly created gaussian directory.
  6. Untar it using the command
    tar -zxvf E64_930N.TGZ
  7. Change group ownership of the g09 directory created in step 6.
    chgrp -Rv g09 g09
  8. Install
    cd g09
    ./bsd/install
  9. Set up the environment for user logins
    touch .login

    Place the following contents into the .login

    g09root=/usr/local/gaussian/
    GAUSS_SCRDIR=/scratch/$USER
    export g09root GAUSS_SCRDIR
    . $g09root/g09/bsd/g09.profile
  10. Put the following in your .bash_profile
    source .login
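Since the .login above sets GAUSS_SCRDIR to /scratch/$USER, that directory must exist before a user's first run. A minimal sketch (the ensure_scrdir helper name and the demonstration on a temporary base directory are my own; use /scratch in production):

```shell
#!/bin/sh
# Create a private per-user Gaussian scratch directory (GAUSS_SCRDIR points
# at /scratch/$USER, which must exist before the first run).
# ensure_scrdir is a hypothetical helper, not part of the Gaussian distribution.
ensure_scrdir() {
    dir="$1/$2"
    mkdir -p "$dir"      # create <base>/<user> if missing
    chmod 700 "$dir"     # keep scratch files private to the user
    echo "$dir"
}

# Demonstrated on a temporary base directory; use /scratch in production:
base=$(mktemp -d)
ensure_scrdir "$base" demo_user
```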

Manual setup of TCP Linda for Gaussian
To configure TCP Linda so that Gaussian can run in parallel across nodes, all you need is to tweak the ntsnet and LindaLauncher files found in the g09 directory. For TCP Linda to work in Gaussian, just make sure LINDA_PATH is correct.

  1. ntsnet is found at $g09root/ntsnet (where $g09root = /usr/local/gaussian/g09 in my installation)
  2. LindaLauncher is found in $g09root/linda8.2/opteron-linux/bin/LindaLauncher (where $g09root = /usr/local/gaussian/g09 in my installation)
  3. flc is found at $g09root/opteron-linux/bin/flc
  4. pmbuild is found at $g09root/opteron-linux/bin/pmbuild
  5. vntsnet is found at $g09root/opteron-linux/bin/vntsnet
In my installation:
LINDA_PATH=/usr/local/gaussian/g09/linda8.2/opteron-linux/
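A quick way to confirm that LINDA_PATH is consistent across the files listed above is to grep for its assignment (a sketch; check_linda_path is my own helper name, demonstrated on a throwaway file):

```shell
#!/bin/sh
# Print every LINDA_PATH assignment in a Linda wrapper script so it can be
# checked against the real linda8.2 directory. check_linda_path is a
# hypothetical helper, not part of the Gaussian/Linda distribution.
check_linda_path() {
    grep -n 'LINDA_PATH=' "$1" || echo "no LINDA_PATH found in $1"
}

# Demonstrated on a throwaway file; run against $g09root/ntsnet and
# LindaLauncher in practice:
f=$(mktemp)
echo 'LINDA_PATH=/usr/local/gaussian/g09/linda8.2/opteron-linux/' > "$f"
check_linda_path "$f"
```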

Auto-install for Gaussian (this can also be found in the Gaussian Installation Notes):

# cd /usr/local/gaussian/g09
# ./bsd/install

Put the following .tsnet.config in your home directory.

# touch .tsnet.config
Tsnet.Appl.nodelist:
Tsnet.Appl.verbose: True
Tsnet.Appl.veryverbose: True
Tsnet.Node.lindarsharg: ssh
Tsnet.Appl.useglobalconfig: True
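The five settings above can also be written out in one shot with a here-document (a sketch; assumes a Bourne-style shell and that $HOME is where ntsnet looks for the file):

```shell
#!/bin/sh
# Write the per-user .tsnet.config for TCP Linda in one step.
# The nodelist line is left blank, as above; fill in your node names later.
cat > "$HOME/.tsnet.config" <<'EOF'
Tsnet.Appl.nodelist:
Tsnet.Appl.verbose: True
Tsnet.Appl.veryverbose: True
Tsnet.Node.lindarsharg: ssh
Tsnet.Appl.useglobalconfig: True
EOF
```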

Installing NWChem 5 with OpenMPI, Intel Compilers and MKL on CentOS 5.x

With much credit to Vallard Land’s blog on compiling NWChem and the NWChem build notes from the CSE Wiki, I was able to install NWChem on my GE-interconnect cluster with minimal modification. First install the prerequisites, that is, the Intel Compilers, MKL and of course OpenMPI. I’m using CentOS 5.4 x86-64.

  1. If you are eligible for the Intel Compiler Free Download, download the Free Non-Commercial Intel Compiler
  2. Build OpenMPI with Intel Compiler

Finally, the most important part: the installation of NWChem. First go to the NWChem website, read the terms and conditions, and request a login and password. Once you have obtained the tar copy of NWChem (at this point in time, “nwchem-5.1.1.tar.tar”):

# tar -xvf nwchem-5.1.1.tar.tar
# cd nwchem-5.1.1

Create a script so that all these “export” parameters need to be typed only once and are kept. I called my script compile_nwchem.sh. Make sure that SSH keys are exchanged between the nodes; for an idea of SSH key exchange, see the blog entry Auto SSH Login without Password

export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/home/melvin/nwchem-5.1.1/   # installation path
export NWCHEM_TARGET=LINUX64
export USE_MPI=y
export USE_MPIF=y
export MPI_LOC=/usr/local/
export MPI_LIB=$MPI_LOC/lib
export LIBMPI="-L $MPI_LIB -lmpi -lopen-pal -lopen-rte -lmpi_f90 -lmpi_f77"
export MPI_INCLUDE=$MPI_LOC/include
# export ARMCI_NETWORK=OPENIB (if you using IB)
export LARGE_FILES=TRUE
export NWCHEM_MODULES=all
export FC=ifort
export CC=icc

cd $NWCHEM_TOP/src
make CC=icc FC=ifort -j8

It should compile without issue, and you should have an nwchem executable under bin/LINUX64. Do note that NWCHEM is the final binary path for usage, while NWCHEM_TOP is the build tree:

# export NWCHEM=/usr/local/nwchem-5.1.1
# export NWCHEM_TOP=/home/melvin/nwchem-5.1.1/

# mkdir $NWCHEM/bin $NWCHEM/data
# cp /home/melvin/nwchem-5.1.1/bin/LINUX64/nwchem $NWCHEM/bin
# cp /home/melvin/nwchem-5.1.1/bin/LINUX64/depend.x $NWCHEM/bin/
# cd $NWCHEM_TOP/src/basis
# cp -r libraries $NWCHEM/data/
# cd $NWCHEM_TOP/src/
# cp -r data $NWCHEM
# cd $NWCHEM_TOP/src/nwpw/libraryps
# cp -r pspw_default $NWCHEM/data/
# cp -r paw_default/ $NWCHEM/data/
# cp -r TM $NWCHEM/data/
# cp -r HGH_LDA $NWCHEM/data/

This completes the data setup. Make sure the $NWCHEM directory is made available to the rest of the cluster
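One common way to make the $NWCHEM directory available to the rest of the cluster is an NFS export. A sketch, assuming NFS is already running; the subnet 192.168.1.0/24 and the hostname "headnode" are placeholders for your own network:

```shell
# On the head node: export the NWChem tree read-only to the cluster subnet.
# 192.168.1.0/24 and "headnode" are placeholders; substitute your own values.
echo '/usr/local/nwchem-5.1.1 192.168.1.0/24(ro,no_root_squash)' >> /etc/exports
exportfs -ra

# On each compute node: mount it at the same path.
mount headnode:/usr/local/nwchem-5.1.1 /usr/local/nwchem-5.1.1
```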

Finally, copy the src directory into $NWCHEM:

# cp -r /home/melvin/nwchem-5.1.1/src $NWCHEM/src

Another good resource is How to build Nwchem-5.1.1 on Intel Westmere with Infiniband network

Building the GAMESS with Intel® Compilers, Intel® MKL and OpenMPI on Linux

Modified for OpenMPI from the excellent tutorial Building the GAMESS with Intel® Compilers, Intel® MKL and Intel® MPI on Linux.

The prerequisite software

  1. Intel® C++ Compiler for LINUX,
  2. Intel® Fortran Compiler for LINUX,
  3. Intel® MKL,
  4. OpenMPI for Linux.

Platform:

  1. IA64/x86_64.

Installing the Prerequisites

  1. If you are eligible for the Intel Compiler Free Download, download the Free Non-Commercial Intel Compiler
  2. Build OpenMPI with the Intel Compilers. See Building OpenMPI with Intel Compiler. Make sure your paths are properly written and sourced.

Intel Environment setup
I created an intel.sh script inside /etc/profile.d/ and put the following information inside

# cd /etc/profile.d
# touch intel.sh
# vim intel.sh

Add the following

export INTEL_COMPILER_TOPDIR="/opt/intel/Compiler/11.1/069"
. $INTEL_COMPILER_TOPDIR/bin/intel64/ifortvars_intel64.sh
. $INTEL_COMPILER_TOPDIR/bin/intel64/iccvars_intel64.sh

Building the Application

1. Copy/move the tar file gamess-current.tar.gz to the directory /opt

2. Uncompress the tar file

# tar -zxvf gamess-current.tar.gz

3. Go to the gamess directory

# cd gamess

4. Creating the actvte.x file

# cd tools
# cp actvte.code actvte.f
# Replace all "*UNX" with "    " (4 spaces, without the quotes) in the file actvte.f
# ifort -o actvte.x actvte.f
# rm actvte.f
# cd ..
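The "*UNX" replacement in step 4 can be scripted with sed instead of a manual edit (a sketch, demonstrated on a throwaway file; the ^ anchor ensures only markers in column one are touched):

```shell
#!/bin/sh
# Replace a leading "*UNX" marker with four spaces, activating the
# UNIX-specific lines. Demonstrated on a throwaway file; in practice run
#   sed -e 's/^\*UNX/    /' actvte.code > actvte.f
# inside the tools directory.
f=$(mktemp)
printf '*UNX      CALL UNIXBIT\n      PLAIN LINE\n' > "$f"
sed -e 's/^\*UNX/    /' "$f"
```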

5. Building the Distributed Data Interface (DDI) with OpenMPI:

# cd ddi
# vim compddi

5a. Editing the compddi file

## Set machine type (approximately line 18): ##
set TARGET=linux-ia64

## Set MPI communication layer (approximately line 48): ##
set COMM = mpi

## Set include directory for OpenMPI (approximately line 105): ##
## where the mpi header "mpi.h" is located ##
set MPI_INCLUDE_PATH = '-I/usr/mpi/intel/include'

5b. Compile compddi with OpenMPI

## Build DDI with OpenMPI ##
# ./compddi
# cd ..

If the build completed successfully, the library libddi.a will appear. Otherwise check compddi.log for errors.

6. Compiling the GAMESS:

6a. Editing file comp

vim comp
## Set machine type (approximately line 15): ##
set TARGET=linux-ia64

## Set the GAMESS root directory (approximately line 16): ##
chdir /opt/gamess

## Uncomment (approximately line 1461): ##
setenv MKL_SERIAL YES

6b Editing file compall

## Set machine type (approximately line 16): ##
set TARGET=linux-ia64

## Set the GAMESS root directory (approximately line 17): ##
chdir /opt/gamess

## Set to use Intel® C++ Compiler (approximately line 70): ##
if ($TARGET == linux-ia64) set CCOMP='icc'

6c Compiling the GAMESS:

# ./compall
# cd ..

7. Linking the GAMESS with Intel® Software products:

7a Edit the file lked

## Set machine type (approximately line 18): ##
set TARGET=linux-ia64

## Set the GAMESS root directory (approximately line 19): ##
chdir /opt/gamess

## Check that the MKL environment (approximately line 511) is correct for x86_64: ##
setenv MKLPATH `ls -d /opt/intel/mkl/*/lib/em64t`
set mklver=`ls /opt/intel/mkl`

## Set the message passing libraries in a single line (approximately line 710): ##
set MSG_LIBRARIES='../ddi/libddi.a -L/usr/local/lib -lmpi -lpthread'

7b Link the GAMESS

# ./lked

If linking completed successfully, the executable file gamess.00.x will appear.

8. Running the Application

This section describes how to execute GAMESS with Intel and OpenMPI. For further information, check the file ./ddi/readme.ddi.
For testing GAMESS, the script rungms will be used as the base.

8a. Edit the rungms file

## Set the target for execution to mpi (line 59): ##
set TARGET=mpi

## Set a directory SCR where large temporary files can reside (line 60): ##
set SCR=/scratch

## Correct the setting of the environment variables ERICFMT and MCPPATH (lines 127 and 128): ##
setenv ERICFMT /opt/gamess/ericfmt.dat
setenv MCPPATH /opt/gamess/mcpdata

## Replace all “~$USER” by “/opt/gamess/tests”, or by another directory. ##
## NOTE: The directory /scratch should exist. If not, create it. ##
## Replace all “/home/mike/gamess” by “/opt/gamess”. ##

## Correct the environment variables for Intel® MKL and OpenMPI (lines 948 and 953): ##
setenv LD_LIBRARY_PATH /opt/intel/mkl/10.2.4.032/lib/em64t:$LD_LIBRARY_PATH
setenv LD_LIBRARY_PATH /usr/local/lib:$LD_LIBRARY_PATH

## Correct the environment variable for the OpenMPI executable path (line 954): ##
set path=(/usr/local/bin $path)
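The bulk replacements described in section 8a (all “~$USER” to “/opt/gamess/tests”, all “/home/mike/gamess” to “/opt/gamess”) can also be scripted with sed rather than edited by hand (a sketch, demonstrated on a throwaway file; point it at the real rungms and keep the .bak backup):

```shell
#!/bin/sh
# Apply the two global substitutions from section 8a in place, keeping a
# .bak backup. Demonstrated on a throwaway file; use rungms as the target
# in practice.
f=$(mktemp)
printf 'set SCR=~$USER/scr\nchdir /home/mike/gamess\n' > "$f"
sed -i.bak \
    -e 's|~\$USER|/opt/gamess/tests|g' \
    -e 's|/home/mike/gamess|/opt/gamess|g' "$f"
cat "$f"
```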

Now choose a testcase from the directory ./tests and run GAMESS:
$ ./rungms exam08
The output data will be stored in the directory /scratch.

To execute GAMESS on 2 or more processes on 1 node:
$ ./rungms exam08 00 2