Compiling Eigen

Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. The current version is 3.1.3.

To use the Eigen libraries, just untar the archive and enter the source directory

# tar -xvf eigen-3.1.3.tar

Copy the whole Eigen directory found in the “eigen-eigen-2249f9c22fe8” folder to the /usr/local/include directory

# cd eigen-eigen-2249f9c22fe8/
# cp -Rv Eigen /usr/local/include

Compiling SamTools

Taken from Cufflinks – Getting Started

SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments. SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.

SAMTools (0.1.19)

Step 1: Download SAMTools

Step 2: Unpack and compile SAMTools

# tar -jxvf samtools-0.1.19.tar.bz2
# cd samtools-0.1.19
# make

Step 3: Copy the compiled libraries to /usr/local/lib

# cp $SAMTOOLS_HOME/libbam.a /usr/local/lib

Step 4: Copy headers to /usr/local/include

# mkdir /usr/local/include/bam
# cp $SAMTOOLS_HOME/*.h /usr/local/include/bam

Compiling and Installing Boost 1.53

Referenced from Cufflinks – Get started

Step 1 – Download Boost from Boost Website

Step 2 – Unpack Boost and enter the directory

# tar -zxvf boost_1_53_0.tar.gz
# cd boost_1_53_0

Step 3 – Run bootstrap.sh

# ./bootstrap.sh
Building Boost.Build engine with toolset gcc...
.....

Step 4 – Run the binary b2

# ./b2 install --prefix=/usr/local/boost

It will take a while…
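Once the install finishes, dependent builds need to be told where the Boost headers and libraries live. A minimal sketch, assuming the --prefix above and a bash-like shell (CPLUS_INCLUDE_PATH and LIBRARY_PATH are honoured by gcc; adjust to your toolchain):

```shell
# Point compiler, linker and runtime linker searches at the Boost prefix.
export BOOST_PREFIX=/usr/local/boost
export CPLUS_INCLUDE_PATH=$BOOST_PREFIX/include:$CPLUS_INCLUDE_PATH
export LIBRARY_PATH=$BOOST_PREFIX/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=$BOOST_PREFIX/lib:$LD_LIBRARY_PATH
```

Put these in your .bashrc if you want them to persist across logins.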

Step 5: Test that the compilation is working.

Compile the test case.

# cd $BOOST_ROOT/boost_1_53_0/tools/build/v2/example/hello
# $BOOST_ROOT/b2

You should see a binary file “hello” at

$BOOST_ROOT/boost_1_53_0/tools/build/v2/example/hello/bin/gcc-4.4.6/debug

Compiling and Installing GAP System for Computational Discrete Algebra

GAP is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects.

The information is taken from the GAP Compilation page

Step 1: Download the GAP Software

Download the GAP software at http://www.gap-system.org/Releases/index.html. The current version at the point of writing is 4.5.6.

# tar -zxvf gap4r5p6_2012_11_04-18_46.tar.gz
# cd gap4r5p6

Step 2:  Configure and Install (Default Installation)

# ./configure
# make

Step 3: Optional Installation – GMP package.
If you wish to use GAP's internal GMP package, then the version of GMP bundled with GAP will be used. This is the default.

# ./configure --with-gmp=yes|no|system|"path"

Step 4: Optional Installation – Readline support for better command-line editing.
If the argument you supply is yes, then GAP will look in standard locations for a Readline installation on your system. Alternatively, you can specify a path to a Readline installation.

# ./configure --with-readline=yes|no|"path"

For more information, see the INSTALL file in the unpacked GAP directory.

Compiling AMBER 10 with Intel XE Compiler and Intel MKL 10

If you wish to compile AMBER 10 with the latest version of Intel XE Compiler and Intel MKL 10, you will need to do a bit of tweaking of the standard installation of AMBER 10.

A. Setup of AmberTools environment.

1. Include AMBERHOME in your .bashrc

# vim .bashrc
export AMBERHOME=/usr/local/amber10

2. Configure the system for AmberTools. Assuming you are configuring for MPI with the Intel compiler, you will probably use the commands

# cd $AMBERHOME/src
# ./configure_at mpi icc

3. Build using Makefile_at

# make -f Makefile_at

You should see output ending with something like:

Completed installation of AmberTools, version 1.1

B. Setting up the basic AMBER distribution for OpenMPI, Intel

This is where, if you are using the more up-to-date MKL (version 10), things will fail to compile. It took me a while to solve the problem, but basically you have to replace the old dynamic linking flags with the new ones. From version 10.x onwards, Intel has re-architected Intel MKL and physically separated the interface, threading and computational components of the product.

1. Retrieve the latest bug fixes for AMBER 10 from (http://ambermd.org/bugfixes.html)

# cd $AMBERHOME
# chmod 700 apply_bugfix_all.x
# ./apply_bugfix_all.x bugfix.all

2. Edit the configure_amber file to match the Intel MKL 10.x linking libraries. For more information, do look at Linking Applications with Intel MKL version 10.

# vim $AMBERHOME/src/configure_amber

Go to line 464 and replace the EM64T dynamic linking parameters:

# EM64T dynamic linking of double-precision-LAPACK and kernels
# loadlib="$loadlib -L$mkll -lvml -lmkl_lapack -lmkl -lguide -lpthread"
loadlib="$loadlib -L$mkll -lguide -lpthread -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core"

Go to line 617 and do the same replacement of the EM64T dynamic linking parameters:

#loadlib="$loadlib -L$mkll -lvml -lmkl_lapack -lmkl -lguide -lpthread"
loadlib="$loadlib -L$mkll -lguide -lpthread -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core"

3. Ensure the environment variables are correct. You should have at least the following. I'm also assuming you have compiled OpenMPI and have put it in the PATH.

export AMBERHOME=/usr/local/amber10
export MPI_HOME=/usr/local/mpi/intel
export MKL_HOME=/opt/intel/mkl/10.2.6.038

4. Compile AMBER for parallel execution

# ./configure_amber -openmpi ifort
------   Configuring the netCDF libraries:   --------

Configuring netcdf; (may be time-consuming) NETCDF configure succeeded. 
MPI_HOME is set to /usr/local/mpi/intel
The configuration file, config_amber.h, was successfully created.

Building SCALAPACK 2.0.1 with Intel Compiler

SCALAPACK requires BLAS and LAPACK; please read the tutorials

  1. Building BLAS Library using Intel and GNU Compiler and
  2. Building LAPACK 3.4 with Intel and GNU Compiler

To compile SCALAPACK:

# mkdir -p ~/src
# cd ~/src
# wget http://www.netlib.org/scalapack/scalapack-2.0.1.tgz
# tar -zxvf scalapack-2.0.1.tgz
# cd scalapack-2.0.1
# cp SLmake.inc.example SLmake.inc

Edit the SLmake.inc file at lines 58 and 59:

BLASLIB       = -lblas -L/usr/local/blas
LAPACKLIB     = -llapack -L/usr/local/lapack/lib

At the Linux console again:

# make
# cd ..
# mv scalapack-2.0.1 /usr/local/

Update and export your LD_LIBRARY_PATH.
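For example (a sketch; the path assumes the mv destination above, and note that ScaLAPACK's default build produces a static libscalapack.a, so LD_LIBRARY_PATH matters mainly if you built shared libraries):

```shell
# Add the ScaLAPACK install location to the runtime linker search path.
export LD_LIBRARY_PATH=/usr/local/scalapack-2.0.1:$LD_LIBRARY_PATH
```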

Building LAPACK 3.4 with Intel and GNU Compiler

The reference resource can be found from Building LAPACK library from Netlib. The current latest version, lapack-3.4.0.tgz, is dated 11th November 2011.

LAPACK relies on BLAS. See Building BLAS Library using Intel and GNU Compiler

# mkdir -p ~/src
# cd ~/src
# wget http://www.netlib.org/lapack/lapack-3.4.0.tgz
# tar -zxvf lapack-3.4.0.tgz
# cd lapack-3.4.0
# cp INSTALL/make.inc.ifort make.inc
# make lapacklib
# make clean
# mkdir -p /usr/local/lapack
# mv liblapack.a /usr/local/lapack/
# export LAPACK=/usr/local/lapack/liblapack.a

For the 64-bit gfortran compiler

# cp INSTALL/make.inc.gfortran make.inc

Edit make.inc:

PLAT = _LINUX
OPTS = -O2 -m64 -fPIC
NOOPT = -m64 -fPIC

Then build and install as before:

# make lapacklib
# make clean
# mkdir -p /usr/local/lapack
# mv liblapack.a /usr/local/lapack/
# export LAPACK=/usr/local/lapack/liblapack.a

For more information on LAPACK, see LAPACK — Linear Algebra PACKage

Building BLAS Library using Intel and GNU Compiler

The reference resource can be found from Building BLAS library from Netlib. The current latest version of BLAS is dated 14th April 2011.

1. For Intel XE Compiler

# mkdir -p ~/src/
# cd ~/src/
# wget http://www.netlib.org/blas/blas.tgz
# tar -zxvf blas.tgz
# cd BLAS
# ifort -FI -w90 -w95 -cm -O3 -unroll -c *.f
# ar r libfblas.a *.o
# ranlib libfblas.a
# rm -rf *.o
# ln -s libfblas.a libblas.a
# cd ~/src
# mv ~/src/BLAS /usr/local/
# export BLAS=/usr/local/BLAS/libfblas.a

2. For the 64-bit gfortran compiler, replace “ifort -FI -w90 -w95 -cm -O3 -unroll -c *.f” with

.........
# gfortran -O3 -std=legacy -m64 -fno-second-underscore -fPIC -c *.f
.........

The rest remains the same.

3. For the 64-bit g77 compiler, replace “ifort -FI -w90 -w95 -cm -O3 -unroll -c *.f” with

............
# g77 -O3 -m64 -fno-second-underscore -fPIC -c *.f
............

The rest remains the same.

Installing packages for ALPS on CentOS 6

This tutorial is an extension of Installing ALPS 2.0 from source on CentOS 5. The installation applies to CentOS 6 as well. For this tutorial, we will be installing:

  1. python 2.6 and python 2.6-devel (Assumed installed already)
  2. python-setuptools and python-setuptools-devel (Assumed installed already)
  3. blas and lapack
  4. numpy and numpy-f2py and python-matplotlib
  5. h5py
  6. scipy

I’m trying to refrain from installing by source compilation as much as possible and rely on the repositories for this tutorial. As such, the packages will be a little behind the latest versions.

Step 1: Install blas and lapack packages from CentOS Base Repositories

# yum install lapack* blas*
================================================================================
 Package               Arch            Version              Repository     Size
================================================================================
Installing:
 blas                  x86_64          3.2.1-4.el6          base          321 k
 blas-devel            x86_64          3.2.1-4.el6          base          133 k
 lapack                x86_64          3.2.1-4.el6          base          4.3 M
 lapack-devel          x86_64          3.2.1-4.el6          base          4.5 M
Transaction Summary
================================================================================
Install       4 Package(s)

Total download size: 9.2 M
Installed size: 26 M
Is this ok [y/N]: y

Step 2: Install numpy numpy-f2py python-matplotlib

# yum install numpy numpy-f2py python-matplotlib
================================================================================
 Package                  Arch          Version               Repository   Size
================================================================================
Installing:
 numpy                    x86_64        1.3.0-6.2.el6         base        1.6 M
 numpy-f2py               x86_64        1.3.0-6.2.el6         base        430 k
 python-matplotlib        x86_64        0.99.1.2-1.el6        base        3.2 M

Transaction Summary
================================================================================
Install       3 Package(s)

Total download size: 5.3 M
Installed size: 22 M
Is this ok [y/N]: y

Step 3: Install h5py

# yum install h5py
================================================================================
 Package            Arch          Version                     Repository   Size
================================================================================
Installing:
 h5py               x86_64        1.3.1-6.el6                 epel        650 k
Installing for dependencies:
 hdf5-mpich2        x86_64        1.8.5.patch1-7.el6          epel        1.4 M
 liblzf             x86_64        3.6-2.el6                   epel         20 k
 mpich2             x86_64        1.2.1-2.3.el6               base        3.7 M

Transaction Summary
================================================================================
Install       4 Package(s)

Total download size: 5.7 M
Installed size: 17 M
Is this ok [y/N]: y

Step 4: Install scipy

# yum install scipy
================================================================================
 Package               Arch            Version              Repository     Size
================================================================================
Installing:
 scipy                 x86_64          0.7.2-5.el6          epel          5.8 M
Installing for dependencies:
 suitesparse           x86_64          3.4.0-2.el6          epel          782 k

Transaction Summary
================================================================================
Install 2 Package(s)

Total download size: 6.5 M
Installed size: 29 M
Is this ok [y/N]: y

Installing NWChem 6 with OpenMPI, Intel Compilers and Intel MKL on CentOS 5

Here is a write-up of my computing platform and applications:

  1. NWChem 6.1 (Feb 2012)
  2. OpenMPI (version 1.4.3)
  3. Intel Compilers 2011 XE (version 12.0.2)
  4. Intel MKL (10.2.4.032)
  5. InfiniBand Interconnect (OFED 1.5.3)
  6. CentOS 5.4 (x86_64)

First things first: make sure your cluster has the necessary components. Here are some preliminaries you may want to take a look at:

  1. If you are eligible for the Intel Compiler Free Download, download the Free Non-Commercial Intel Compiler
  2. Build OpenMPI with Intel Compiler
  3. Installing Voltaire QDR Infiniband Drivers for CentOS 5.4

Assuming you are done, you may want to download NWChem 6.1 from the NWChem website. You may also want to take a look at the instructions for Compiling NWChem.

# tar -zxvf Nwchem-6.1-2012-Feb-10.tar.gz
# cd nwchem-6.1

Create a script so that all these “export” parameters can be typed once only and kept. I called the script nwchem_script_Feb2012.sh. Make sure that the SSH keys are exchanged between the nodes. To get an idea of SSH key exchange, see the blog entry Auto SSH Login without Password.
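The key exchange itself boils down to generating a passphrase-less key pair and copying the public key to each node. A minimal sketch (the node name is a placeholder, and the key is generated in a scratch directory for illustration; see the linked entry for the full procedure):

```shell
# Generate an RSA key pair with an empty passphrase in a scratch directory.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"                  # id_rsa and id_rsa.pub
# Then push the public key to each compute node (placeholder hostname):
#   ssh-copy-id -i "$keydir/id_rsa.pub" node2
```

In practice the key pair lives in $HOME/.ssh, and ssh-copy-id appends the public key to the remote node's ~/.ssh/authorized_keys.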

Here is my nwchem_script_Feb2012.sh. For more detailed information on some of the parameters, see Compiling NWChem.

export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/root/nwchem-6.1
export NWCHEM_TARGET=LINUX64
export ARMCI_NETWORK=OPENIB
export IB_INCLUDE=/usr/include
export IB_LIB=/usr/lib64
export IB_LIB_NAME="-libumad -libverbs -lpthread"
export MSG_COMMS=MPI
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/usr/local/mpi/intel
export MPI_LIB=$MPI_LOC/lib
export MPI_INCLUDE=$MPI_LOC/include
export LIBMPI="-L/usr/local/mpi/intel/lib -lmpi_f90 -lmpi_f77 -lmpi -lpthread"
export NWCHEM_MODULES=all
export LARGE_FILES=TRUE
export FC=ifort
export CC=icc
cd $NWCHEM_TOP/src
make clean
make FC=ifort CC=icc nwchem_config
make 64_to_32
make USE_64TO32=y HAS_BLAS=yes BLASOPT="-L/opt/intel/mkl/10.2.4.032/lib/em64t -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread" >& make.log

Do note that if you are compiling with proprietary BLAS libraries like MKL, take note of this instruction from Compiling NWChem:

WARNING: In the case of 64-bit platforms, most vendors optimized BLAS libraries cannot be used. This is due to the fact that while NWChem uses 64-bit integers (i.e. integer*8) on 64-bit platforms, most of the vendors optimized BLAS libraries used 32-bit integers. BLAS libraries not supporting 64-bit integers (at least in their default options/installations) include CXML (DECOSF), ESSL (LAPI64), MKL (LINUX64/ia64 and x86_64), ACML(LINUX64/x86_64), and GotoBLAS2(LINUX64). The same holds for the ScaLAPACK libraries, which internally use 32-bit integers.

cd $NWCHEM_TOP/src
make clean
make 64_to_32
make USE_64TO32=y HAS_BLAS=yes BLASOPT="-L/opt/intel/mkl/10.2.4.032/lib/em64t -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread"

General Site Installation

Determine the local storage path for the install files (e.g., /usr/local/nwchem-6.1) and make the directories:

# mkdir /usr/local/nwchem-6.1
# mkdir /usr/local/nwchem-6.1/bin
# mkdir /usr/local/nwchem-6.1/data

Copy binary

# cp $NWCHEM_TOP/bin/${NWCHEM_TARGET}/nwchem /usr/local/nwchem-6.1/bin
# cd /usr/local/nwchem-6.1/bin
# chmod 755 nwchem

Copy libraries

# cd $NWCHEM_TOP/src/basis
# cp -r libraries /usr/local/nwchem-6.1/data

# cd $NWCHEM_TOP/src/
# cp -r data /usr/local/nwchem-6.1

# cd $NWCHEM_TOP/src/nwpw
# cp -r libraryps /usr/local/nwchem-6.1/data

The Final Lap (From Compiling NWChem)

Each user will need a .nwchemrc file to point to these default data files. A global one could be put in /usr/local/nwchem-6.1/data, with a symbolic link made in each user's $HOME directory; this is probably the best plan for new installs. Users would have to issue the following command prior to using NWChem:

# ln -s /usr/local/nwchem-6.1/data/default.nwchemrc $HOME/.nwchemrc

Contents of the default.nwchemrc file based on the above information should be:

nwchem_basis_library /usr/local/nwchem-6.1/data/libraries/
nwchem_nwpw_library /usr/local/nwchem-6.1/data/libraryps/
ffield amber
amber_1 /usr/local/nwchem-6.1/data/amber_s/
amber_2 /usr/local/nwchem-6.1/data/amber_q/
amber_3 /usr/local/nwchem-6.1/data/amber_x/
amber_4 /usr/local/nwchem-6.1/data/amber_u/
spce    /usr/local/nwchem-6.1/data/solvents/spce.rst
charmm_s /usr/local/nwchem-6.1/data/charmm_s/
charmm_x /usr/local/nwchem-6.1/data/charmm_x/