Using Kernel Samepage Merging with KVM

For the original write-up, see Using KSM (Kernel Samepage Merging) with KVM. There is a corresponding PDF article, Increasing Virtual Machine Density with KSM (pdf), by QUMRANET.

In short, from the article:

Kernel SamePage Merging is a recent Linux kernel feature which combines identical memory pages from multiple processes into one copy-on-write memory region. Because KVM guest virtual machines run as processes
under Linux, this feature gives KVM the memory overcommit capability that is so important to hypervisors for more efficient use of memory.
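Before following the pointers below, you can quickly poll whether KSM is active on a host. This is a sketch based on the sysfs interface KSM exposes; the paths assume a kernel built with CONFIG_KSM, and actually starting the merger requires root:

```shell
#!/bin/sh
# Report whether KSM is currently running (0 = stopped, 1 = running).
KSM_RUN=/sys/kernel/mm/ksm/run
if [ -r "$KSM_RUN" ]; then
    echo "ksm run state: $(cat "$KSM_RUN")"
else
    echo "ksm sysfs interface not present"
fi
# To start merging (as root):  echo 1 > /sys/kernel/mm/ksm/run
```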

Pointer 1: Verifying Kernel KSM Support

# grep KSM /boot/config-$(uname -r)

You should see something like this if KSM is enabled:

CONFIG_KSM=y

You should also see a directory for KSM in /sys/kernel/mm/ksm

(Screenshot taken from Linux-KVM)
Pointer 2: By default, KSM is limited to 2000 kernel pages.

To verify, type the following command

# cat /sys/kernel/mm/ksm/max_kernel_pages
You should see the default value of 2000.
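At 4 KiB per page, that default cap lets KSM manage only about 8 MB of memory, which is low for several guests. Here is a sketch of sizing the cap and (as root) raising it; the page size and the 1 GB target are illustrative assumptions:

```shell
#!/bin/sh
# How much memory the default 2000-page cap covers (4096-byte pages assumed).
page_size=4096
max_pages=2000
echo "default cap covers $((max_pages * page_size)) bytes"

# Raise the cap so KSM can manage ~1 GB of mergeable pages (root only):
# echo $((1024 * 1024 * 1024 / page_size)) > /sys/kernel/mm/ksm/max_kernel_pages
```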

Pointer 3: Verifying KVM Support for Samepage Merging

From the article:

In order for your KVM guests to take advantage of KSM, your version of qemu-kvm must explicitly request from the kernel that identical pages be merged, using the new madvise interface. The patch for this feature was added to the kvm development tree just recently, following the kvm-88 release. If you are compiling kvm yourself, you can verify whether your version of kvm will support KSM by inspecting the exec.c source file for the following line of code:

        madvise(new_block->host, size, MADV_MERGEABLE);

If you don't see this line in your exec.c file, your kvm process will still run fine, but it won't take advantage of KSM.

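Rather than eyeballing the file, you can grep for the flag. A small sketch, assuming you run it from the top of your qemu-kvm source tree, where exec.c lives:

```shell
#!/bin/sh
# Check whether this qemu-kvm source will request page merging from the kernel.
if grep -q "MADV_MERGEABLE" exec.c 2>/dev/null; then
    echo "KSM madvise call present"
else
    echo "KSM madvise call missing (guests will run, but pages will not merge)"
fi
```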
Pointer 4: Run multiple similar guests

With multiple virtual machines running, you can verify that KSM is working by inspecting the following file to see how many pages are being shared between your kvm guests. If the value is greater than zero, KSM is in use:

# cat /sys/kernel/mm/ksm/pages_sharing
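pages_sharing is only one of several counters KSM exports. A sketch that dumps them all, with file names taken from the kernel's KSM documentation (the loop simply prints nothing on kernels without KSM):

```shell
#!/bin/sh
# Dump all KSM counters: deduplicated pages, pages mapped to them, scan count, etc.
for name in pages_shared pages_sharing pages_unshared pages_volatile full_scans; do
    f=/sys/kernel/mm/ksm/$name
    [ -r "$f" ] && printf '%s: %s\n' "$name" "$(cat "$f")"
done
exit 0
```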

Installing NWChem 6 with OpenMPI, Intel Compilers and Intel MKL on CentOS 5

Here is a write-up of my computing platform and applications:

  1. NWChem 6.1 (Feb 2012)
  2. OpenMPI (version 1.4.3)
  3. Intel Compilers 2011 XE (version 12.0.2)
  4. Intel MKL
  5. Infiniband Interconnect (OFED 1.5.3)
  6. CentOS 5.4 (x86_64)

First things first: make sure your cluster has the necessary components. Here are some preliminaries you may want to look at:

  1. If you are eligible, get the Free Non-Commercial Intel Compiler Download
  2. Build OpenMPI with Intel Compiler
  3. Installing Voltaire QDR Infiniband Drivers for CentOS 5.4

Assuming you are done, download NWChem 6.1 from the NWChem website. You may also want to take a look at the instructions for Compiling NWChem.

# tar -zxvf Nwchem-6.1-2012-Feb-10.tar.gz
# cd nwchem-6.1

Create a script so that all these "export" parameters can be typed once and kept. Also make sure that SSH keys are exchanged between the nodes. To get an idea of SSH key exchange, see the blog entry Auto SSH Login without Password.
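The key exchange boils down to two commands per node. A sketch, where node names like node2 are placeholders for your own hosts:

```shell
#!/bin/sh
# Generate a key pair once (skip this if ~/.ssh/id_rsa already exists) ...
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# ... then push the public key to every compute node and test the login.
ssh-copy-id root@node2
ssh root@node2 hostname    # should print the hostname with no password prompt
```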

Here is my script. For more detailed information, see Compiling NWChem for details on some of the parameters:

export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=/root/nwchem-6.1
export NWCHEM_TARGET=LINUX64    # build target for x86_64 Linux; referenced again when copying the binary
export IB_INCLUDE=/usr/include
export IB_LIB=/usr/lib64
export IB_LIB_NAME="-libumad -libverbs -lpthread"
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/usr/local/mpi/intel
export MPI_LIB=$MPI_LOC/lib
export MPI_INCLUDE=$MPI_LOC/include
export LIBMPI="-L/usr/local/mpi/intel/lib -lmpi_f90 -lmpi_f77 -lmpi -lpthread"
export FC=ifort
export CC=icc
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make 64_to_32
# 64_to_32 makes NWChem call BLAS with 32-bit integers, so link MKL's lp64 (not ilp64) interface
make USE_64TO32=y HAS_BLAS=yes BLASOPT="-L/opt/intel/mkl/ -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread" >& make.log

Do note that if you are compiling with proprietary BLAS libraries like MKL, heed this instruction from Compiling NWChem:

WARNING: In the case of 64-bit platforms, most vendors optimized BLAS libraries cannot be used. This is due to the fact that while NWChem uses 64-bit integers (i.e. integer*8) on 64-bit platforms, most of the vendors optimized BLAS libraries used 32-bit integers. BLAS libraries not supporting 64-bit integers (at least in their default options/installations) include CXML (DECOSF), ESSL (LAPI64), MKL (LINUX64/ia64 and x86_64), ACML(LINUX64/x86_64), and GotoBLAS2(LINUX64). The same holds for the ScaLAPACK libraries, which internally use 32-bit integers.

cd $NWCHEM_TOP/src
make clean
make 64_to_32
# 64_to_32 makes NWChem call BLAS with 32-bit integers, so link MKL's lp64 (not ilp64) interface
make USE_64TO32=y HAS_BLAS=yes BLASOPT="-L/opt/intel/mkl/ -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread"
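Once the make finishes, it is worth confirming that a binary was produced and seeing which shared libraries it linked. A sketch, assuming NWCHEM_TARGET resolved to LINUX64 on x86_64:

```shell
#!/bin/sh
# Verify the build produced an executable and inspect its shared-library links.
BIN="$NWCHEM_TOP/bin/LINUX64/nwchem"
if [ -x "$BIN" ]; then
    echo "build OK: $BIN"
    ldd "$BIN" | grep -i -e mkl -e mpi    # shows which MKL/MPI libraries were picked up
else
    echo "build failed: check make.log"
fi
```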

General Site Installation

Determine the local storage path for the install files (e.g., /usr/local/nwchem-6.1) and make the directories:

# mkdir /usr/local/nwchem-6.1
# mkdir /usr/local/nwchem-6.1/bin
# mkdir /usr/local/nwchem-6.1/data

Copy binary

# cp $NWCHEM_TOP/bin/${NWCHEM_TARGET}/nwchem /usr/local/nwchem-6.1/bin
# cd /usr/local/nwchem-6.1/bin
# chmod 755 nwchem

Copy libraries

# cd $NWCHEM_TOP/src/basis
# cp -r libraries /usr/local/nwchem-6.1/data

# cd $NWCHEM_TOP/src/
# cp -r data /usr/local/nwchem-6.1

# cd $NWCHEM_TOP/src/nwpw
# cp -r libraryps /usr/local/nwchem-6.1/data

The Final Lap (From Compiling NWChem)

Each user will need a .nwchemrc file pointing to these default data files. A global one could be put in /usr/local/nwchem-6.1/data, with a symbolic link made in each user's $HOME directory; this is probably the best plan for new installs. Users would have to issue the following command prior to using NWChem: ln -s /usr/local/nwchem-6.1/data/default.nwchemrc $HOME/.nwchemrc

Contents of the default.nwchemrc file based on the above information should be:

nwchem_basis_library /usr/local/nwchem-6.1/data/libraries/
nwchem_nwpw_library /usr/local/nwchem-6.1/data/libraryps/
ffield amber
amber_1 /usr/local/nwchem-6.1/data/amber_s/
amber_2 /usr/local/nwchem-6.1/data/amber_q/
amber_3 /usr/local/nwchem-6.1/data/amber_x/
amber_4 /usr/local/nwchem-6.1/data/amber_u/
spce    /usr/local/nwchem-6.1/data/solvents/spce.rst
charmm_s /usr/local/nwchem-6.1/data/charmm_s/
charmm_x /usr/local/nwchem-6.1/data/charmm_x/
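With the binary, data files, and .nwchemrc in place, a first parallel run looks like the sketch below. The input file h2o.nw, the host file, and the process count are placeholders for your own setup:

```shell
#!/bin/sh
# Launch NWChem across the cluster with OpenMPI and capture all output.
mpirun -np 8 --hostfile ~/mpd.hosts \
    /usr/local/nwchem-6.1/bin/nwchem h2o.nw > h2o.out 2>&1
# A run that completed normally ends with timing and citation sections.
tail h2o.out
```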