Using Python 2 on JupyterHub

By default, JupyterHub runs on Python 3 (it requires Python >= 3.3). However, you may want to offer Python 2 on JupyterHub as well. You may want to take a look at Basic Setup and Configuration of JupyterHub with Python-3.4.3 first.

Step 1: Install the latest version of Python 2

You may want to see Installing and Compiling Python 2.7.8 on CentOS 5; the same steps apply to CentOS 6.

Step 2: Remember to install IPython and IPython[notebook] into the Python 2 tree

Do take a look at Installing scipy and other scientific packages using pip3 for Python 3.4.1 for some similar ideas
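No commands were shown for this step. Assuming Python 2 was installed under /usr/local/python-2.7.10 (the path used in Step 3 below), the installs would look something like this sketch:

```shell
# /usr/local/python-2.7.10/bin/pip install ipython
# /usr/local/python-2.7.10/bin/pip install "ipython[notebook]"
```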

Step 3: Install the Python kernelspec for Python 2

# /usr/local/python-2.7.10/bin/python2 -m IPython kernelspec install-self
# /usr/local/python-3.4.3/bin/python3 -m IPython kernelspec install-self
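Each `kernelspec install-self` call writes a kernel.json describing how Jupyter should launch that interpreter. The sketch below builds an illustrative kernel.json in plain Python; the argv follows the IPython 3.x convention and the interpreter path is an assumption:

```python
import json

# Illustrative kernel.json, modeled on what `kernelspec install-self`
# registers for an IPython 3.x Python 2 kernel (path is an assumption).
kernel_json = {
    "argv": ["/usr/local/python-2.7.10/bin/python2",
             "-m", "IPython.kernel", "-f", "{connection_file}"],
    "display_name": "Python 2",
    "language": "python",
}

# Round-trip through JSON, as Jupyter would read it back from disk.
parsed = json.loads(json.dumps(kernel_json, indent=1))
print(parsed["display_name"])
```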

Step 4: Restart JupyterHub

# jupyterhub

Basic Setup and Configuration of JupyterHub with Python-3.4.3

This is a basic setup and configuration of JupyterHub.

Prerequisites:

  • JupyterHub requires IPython >= 3.0 (current master) and Python >= 3.3
  • Need to install nodejs/npm

Step 1: Install Python-3.4.3
You can use the tutorial to learn how to setup Python 3 (Compiling and Configuring Python 3.4.1 on CentOS)

Step 2: Install Nodejs, npm and the JavaScript dependencies. You will need to install and enable the EPEL repository first

# yum install nodejs npm
# npm install -g configurable-http-proxy

Step 3a: Installation of JupyterHub

# pip3 install "ipython[notebook]"
# pip3 install jupyterhub
# git clone https://github.com/jupyter/jupyterhub.git
# cd jupyterhub
# pip3 install -r dev-requirements.txt -e .

Step 3b: Update the JavaScript and CSS

# python3 setup.py js
# python3 setup.py css

Step 4a: Update .bashrc (or /etc/profile.d, if you wish to affect global settings) with the Python 3 path

# export PATH=/usr/local/python-3.4.3/bin:$PATH

Step 4b: Launch the JupyterHub Server

# jupyterhub

and then visit `http://localhost:8000` and sign in with your Unix credentials. If it does not work, don't worry; just read on.

Step 5: Generate a default config file:

# jupyterhub --generate-config
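The generated jupyterhub_config.py is ordinary Python. A few of the traitlets you are most likely to touch look like the excerpt below; the values are placeholders, not recommendations:

```python
c = get_config()

# Interface and port the public-facing proxy listens on
c.JupyterHub.ip = '0.0.0.0'
c.JupyterHub.port = 8000

# Address the Hub itself binds to (change from localhost if using a DNS name)
c.JupyterHub.hub_ip = 'localhost'
```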

Step 6: Create the group shadow and put users into the group

The intention is to allow selected users to read the /etc/shadow file, which is a requirement for JupyterHub's PAM authentication

# groupadd shadow
# chown root.shadow /etc/shadow
# usermod -G shadow user1
# chmod 040 /etc/shadow
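As a quick sanity check on what mode 040 means, you can decode it with Python's stat module: the group gets read access, while owner and world get nothing.

```python
import stat

mode = 0o040  # the permissions applied to /etc/shadow above

# Only the group-read bit is set; owner and others have no access.
print(bool(mode & stat.S_IRGRP), bool(mode & stat.S_IRUSR), bool(mode & stat.S_IROTH))
```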

If you are using DNS name instead of localhost, you would have to modify the jupyterhub_config.py found in /usr/local/juypterhub.
At approximately line 46, 181, modify localhost to your public-facing IP Address for the c.JupyterHub.hub_ip

c.JupyterHub.hub_ip = '10.10.10.10'

Step 7: Using sudo to run JupyterHub without root privilege

Do read the important document from the JupyterHub wiki, Using sudo to run JupyterHub without root privilege.
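In outline, the wiki's recipe grants one unprivileged Hub account the right to spawn single-user servers via sudospawner. A hypothetical /etc/sudoers.d fragment (the account names and sudospawner path are assumptions, not taken from this post) looks like:

```
# users the Hub may spawn notebook servers for (hypothetical names)
Runas_Alias JUPYTER_USERS = user1, user2
# the only command the Hub account may run as those users
Cmnd_Alias JUPYTER_CMD = /usr/local/python-3.4.3/bin/sudospawner
# let the unprivileged "jupyterhub" account run it without a password
jupyterhub ALL=(JUPYTER_USERS) NOPASSWD:JUPYTER_CMD
```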

Launch jupyterhub again.

# jupyterhub

 

Installing scipy and other scientific packages using pip3 for Python 3.4.1

I wanted to install the packages using pip3. Before you can successfully install the Python packages, do make sure the following packages are present on your CentOS 6 system:

# yum install blas blas-devel lapack lapack-devel numpy

After you have installed Python according to Compiling and Configuring Python 3.4.1 on CentOS, the packages I want to install are numpy, scipy, matplotlib, ipython, ipython[notebook], pandas, sympy and nose:

# /usr/local/python-3.4.1/bin/pip install numpy
# /usr/local/python-3.4.1/bin/pip install scipy
# /usr/local/python-3.4.1/bin/pip install matplotlib
# /usr/local/python-3.4.1/bin/pip install ipython
# /usr/local/python-3.4.1/bin/pip install ipython[notebook]
# /usr/local/python-3.4.1/bin/pip install pandas
# /usr/local/python-3.4.1/bin/pip install sympy
# /usr/local/python-3.4.1/bin/pip install nose
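After the installs, a quick way to confirm the packages are importable is a check like this, run with /usr/local/python-3.4.1/bin/python3 (works on Python >= 3.4); it simply reports which of the requested packages the interpreter cannot find:

```python
import importlib.util

# The packages installed above; report any the interpreter cannot see.
packages = ["numpy", "scipy", "matplotlib", "IPython", "pandas", "sympy", "nose"]
missing = [name for name in packages if importlib.util.find_spec(name) is None]
print("missing:", missing)
```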

 

Basic Configuration of Octopus 4.1.2 with OpenMPI on CentOS 6

Octopus

Octopus is a scientific program aimed at the ab initio virtual experimentation on a hopefully ever-increasing range of system types. Electrons are described quantum-mechanically within density-functional theory (DFT), in its time-dependent form (TDDFT) when doing simulations in time. Nuclei are described classically as point particles. Electron-nucleus interaction is described within the pseudopotential approximation.

Requirements: (Taken from Octopus Installation Wiki)

In a nutshell, this is what you need. Do look at Octopus Wiki for more details

  1. make
  2. cpp
  3. LibXC – Octopus 4.1.2 requires version 2.0.x or 2.1.x, and won’t compile with 2.2.x. (Due to bugfixes from libxc version 2.0 to 2.1, there will be small discrepancies in the testsuite for functionals/03-xc.gga_x_pbea.inp and periodic_systems/07-tb09.test). Octopus 4.2.0 will support libxc version 2.2.x also.
  4. FFTW
  5. LAPACK/BLAS
  6. GSL – Version 4.0 of Octopus (and earlier) can only use GSL 1.14 (and earlier). A few tests will fail if you use GSL 1.15 or later.
  7. Perl

Step 1: Compile LibXC. You can download libxc-2.0.0 from the Octopus site.
After untarring, do take a look at the INSTALL file.

# tar -zxvf libxc-2.0.0.tar.gz
# cd libxc-2.0.0
# ./configure --prefix=/usr/local/libxc-2.0.0/ CC=gcc CXX=g++
# make -j 8
# make install

Step 2: Compile gsl-1.14
Do take a look at Compiling GNU Scientific Library (GSL) gsl-1.16 on CentOS 6. Downloads are at ftp://ftp.gnu.org/gnu/gsl/.

Step 3: Update your .bashrc

.......
export FC=mpif90
export CC=mpicc
export FCFLAGS="-O3"
export CFLAGS="-O3"
export PATH=$PATH:/usr/local/openmpi-1.6.4-gnu/bin:...........
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/openmpi-1.6.4-gnu/lib:\
/usr/local/fftw-3.3.3-single/lib:/usr/local/libxc-2.0.0/lib...................

 

Step 4: Configure the Octopus-4.1.2

# ./configure \
--prefix=/usr/local/octopus-4.1.2 \
--with-libxc-prefix=/usr/local/libxc-2.0.0 \
--with-libxc-include=/usr/local/libxc-2.0.0/include \
--with-gsl-prefix=/usr/local/gsl-1.14 \
--with-blas=/usr/lib64/libblas.so.3.2.1 \
--with-arpack=/usr/lib64/libarpack.so.2 \
--with-fft-include=/usr/local/fftw-3.3.3-single/include \
--enable-mpi
# make -j 8
# make install

 

Compiling Gromacs 5.0.4 on CentOS 6

Compiling Gromacs has never been easier using CMake. There are a few assumptions:

  1. Use MKL and Intel Compilers
  2. Use OpenMPI as the MPI-of-choice. The necessary PATH and LD_LIBRARY_PATH have been placed in .bashrc
  3. We will use SINGLE precision for speed, for both mdrun and the MPI build

Here is my configuration file using Intel Compilers

# tar xfz gromacs-5.0.4.tar.gz
# cd gromacs-5.0.4
# mkdir build
# cd build
# /usr/local/cmake-3.1.3/bin/cmake .. \
-DGMX_BUILD_OWN_FFTW=ON \
-DREGRESSIONTEST_DOWNLOAD=OFF \
-DCMAKE_INSTALL_PREFIX=/usr/local/gromacs-5.0.4 \
-DGMX_MPI=on \
-DGMX_FFT_LIBRARY=mkl \
-DMKL_LIBRARIES="/usr/local/intel/mkl/lib/intel64" \
-DMKL_INCLUDE_DIR="/usr/local/intel/mkl/include" \
-DGMX_DOUBLE=off \
-DGMX_BUILD_MDRUN_ONLY=off \
-DCMAKE_C_COMPILER=mpicc \
-DCMAKE_CXX_COMPILER=mpicxx
# make
# make check
# sudo make install
# source /usr/local/gromacs-5.0.4/bin/GMXRC
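Once GMXRC has been sourced, the MPI build is launched through mpirun; a typical invocation (the input file name and process count are placeholders) looks like:

```shell
# mpirun -np 8 gmx_mpi mdrun -s topol.tpr
```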

Installation Flags

–   -DCMAKE_C_COMPILER=xxx equal to the name of the C99 compiler you
    wish to use (or the environment variable CC)
–   -DCMAKE_CXX_COMPILER=xxx equal to the name of the C++98 compiler you
    wish to use (or the environment variable CXX)
–   -DGMX_MPI=on to build using an MPI wrapper compiler
–   -DGMX_GPU=on to build using nvcc to run with an NVIDIA GPU
–   -DGMX_SIMD=xxx to specify the level of SIMD support of the node on
    which mdrun will run
–   -DGMX_BUILD_MDRUN_ONLY=on to build only the mdrun binary, e.g. for
    compute cluster back-end nodes
–   -DGMX_DOUBLE=on to run GROMACS in double precision (slower, and not
    normally useful)
–   -DCMAKE_PREFIX_PATH=xxx to add a non-standard location for CMake to
    search for libraries
–   -DCMAKE_INSTALL_PREFIX=xxx to install GROMACS to a non-standard
    location (default /usr/local/gromacs)
–   -DBUILD_SHARED_LIBS=off to turn off the building of shared libraries
–   -DGMX_FFT_LIBRARY=xxx to select whether to use fftw, mkl or fftpack
    libraries for FFT support
–   -DCMAKE_BUILD_TYPE=Debug to build GROMACS in debug mode

Compiling and Configuring Python 3.4.1 on CentOS

Step 1: Remember to turn on RPMForge and EPEL Repository.

For more information on repository, see Repository of CentOS 6 and Scientific Linux 6 

Step 2: Download Python-3.4.1 from the Python Download Page

Step 3: Install Prerequisite Software

# yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel

Step 4: Configure and Build

# cd /installation_home/Python-3.4.1
# ./configure --prefix=/usr/local/python-3.4.1
# make
# make install

Step 5: Check that scripts query the correct interpreter:

# /usr/local/python-3.4.1/bin/python3

Step 6: Run setup.py from the Installation Directory of Python

# python setup.py install

Step 7: Install Python Modules (whatever you need. Here is an example)
You can use pip install to install packages using pip3. See Using pip to install python packages

# /usr/local/python-3.4.1/bin/pip3 install networkx

Compiling ANTLR 2.7.7 on CentOS 6

What is ANTLR?

ANTLR, ANother Tool for Language Recognition, (formerly PCCTS) is a language tool that provides a framework for constructing recognizers, compilers, and translators from grammatical descriptions containing Java, C#, C++, or Python actions. ANTLR provides excellent support for tree construction, tree walking, and translation.

Step 1: Download ANTLR 2.7.7

Step 2: Untar ANTLR-2.7.7

# tar -zxvf antlr-2.7.7.tar.gz
# cd antlr-2.7.7

Step 3: For RHEL and CentOS, edit the source file /root/antlr-2.7.7/lib/cpp/antlr/CharScanner.hpp

# vim /root/antlr-2.7.7/lib/cpp/antlr/CharScanner.hpp

Add the required headers into the CharScanner.hpp file (the exact snippet did not survive in this post; reference 1 below discusses the fix).
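Based on the nco discussion thread in the references, the usual fix on newer GCC is adding two standard headers near the top of the file; treat this as a sketch to verify against that thread:

```cpp
// Patch sketch for lib/cpp/antlr/CharScanner.hpp (antlr-2.7.7):
// add these alongside the existing includes so EOF/printf and the
// string functions used by CharScanner are declared on newer GCC.
#include <cstdio>
#include <cstring>
```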

Step 4: Compile the antlr-2.7.7

# ./configure --prefix=/usr/local/antlr2.7.7 --disable-examples
# make -j 8
# make install

References:

  1. http://sourceforge.net/p/nco/discussion/9830/thread/08ae0201

Compiling VASP 5.3.5 with OpenMPI 1.6.5 and Intel 12.1.5

Do take a look at Compiling VASP 5.3.3 with OpenMPI 1.6.5 and Intel 12.1.5 for the step-by-step approach to compiling. Here I will simply put up my makefiles for your evaluation.

 

Step 1: Makefile for vasp.5.lib

.SUFFIXES: .inc .f .F
#-----------------------------------------------------------------------
# Makefile for LINUX NAG f90
#-----------------------------------------------------------------------
# fortran compiler
FC=ifort

# C-preprocessor
#CPP = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C $*.F >$*.f
CPP = gcc -E -P -C -DLONGCHAR $*.F >$*.f

CFLAGS = -O
FFLAGS = -Os -FI
FREE = -FR

DOBJ = preclib.o timing_.o derrf_.o dclock_.o diolib.o dlexlib.o drdatab.o

#-----------------------------------------------------------------------
# general rules
#-----------------------------------------------------------------------

libdmy.a: $(DOBJ) linpack_double.o
	-rm libdmy.a
	ar vq libdmy.a $(DOBJ)

linpack_double.o: linpack_double.f
	$(FC) $(FFLAGS) $(NOFREE) -c linpack_double.f

# files which do not require autodouble
lapack_double.o: lapack_double.f
	$(FC) $(FFLAGS) $(NOFREE) -c lapack_double.f
lapack_single.o: lapack_single.f
	$(FC) $(FFLAGS) $(NOFREE) -c lapack_single.f
#lapack_cray.o: lapack_cray.f
#	$(FC) $(FFLAGS) $(NOFREE) -c lapack_cray.f

.c.o:
	$(CC) $(CFLAGS) -c $*.c
.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(FREE) $(INCS) -c $*.f
.F.f:
	$(CPP)
.f.o:
	$(FC) $(FFLAGS) $(FREE) $(INCS) -c $*.f

 

Step 2. Makefile for vasp.5.3.5

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for Pentium/Athlon/Opteron
# based systems
# we recommend this makefile for both Intel as well as AMD systems
# for AMD based systems appropriate BLAS (libgoto) and fftw libraries are
# however mandatory (whereas they are optional for Intel platforms)
# For Athlon we recommend
#  ) to link against libgoto (and mkl as a backup for missing routines)
#  ) odd enough link in libfftw3xf_intel.a (fftw interface for mkl)
# feedback is greatly appreciated
#
# The makefile was tested only under Linux on Intel and AMD platforms
# the following compiler versions have been tested:
#  - ifc.7.1  works stable somewhat slow but reliably
#  - ifc.8.1  fails to compile the code properly
#  - ifc.9.1  recommended (both for 32 and 64 bit)
#  - ifc.10.1 partially recommended (both for 32 and 64 bit)
#             tested build 20080312 Package ID: l_fc_p_10.1.015
#             the gamma only mpi version can not be compiles
#             using ifc.10.1
#  - ifc.11.1 partially recommended (some problems with Gamma only and intel fftw)
#             Build 20090630 Package ID: l_cprof_p_11.1.046
#  - ifort.12.1 strongly recommended (we use this to compile vasp)
#             Version 12.1.5.339 Build 20120612
#
# it might be required to change some of library path ways, since
# LINUX installations vary a lot
#
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#   retrieve the lapackage from ftp.netlib.org
#   and compile the blas routines (BLAS/SRC directory)
#   please use g77 or f77 for the compilation. When I tried to
#   use pgf77 or pgf90 for BLAS, VASP hang up when calling
#   ZHEEV  (however this was with lapack 1.1 now I use lapack 2.0)
# 2) more desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 2a) Intels own optimised BLAS (PIII, P4, PD, PC2, Itanium)
#     http://developer.intel.com/software/products/mkl/
#   this is really excellent, if you use Intel CPU's
#
# 2b) probably fastest SSE2 (4 GFlops on P4, 2.53 GHz, 16 GFlops PD,
#     around 30 GFlops on Quad core)
#   Kazushige Goto's BLAS
#   http://www.cs.utexas.edu/users/kgoto/signup_first.html
#   http://www.tacc.utexas.edu/resources/software/
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=mpif90
# fortran linker
FCL=$(FC) -mkl


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_   =  /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_   =  /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
#  SUSE X.X, maybe some Red Hat distributions:

CPP_ =  ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf             charge density   reduced in X direction
# wNGXhalf            gamma point only reduced in X direction
# avoidalloc          avoid ALLOCATE if possible
# PGF90               work around some for some PGF90 / IFC bugs
# CACHE_SIZE          1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV        use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV        use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn                 MD package of Tomas  Bucko
#-----------------------------------------------------------------------

CPP     = $(CPP_)  -DHOST=\"LinuxIFC\" \
-DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
#          -DRPROMU_DGEMV  -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags  (there must a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------

#FFLAGS =  -FR -names lowercase -assume byterecl -I$(MKLROOT)/include/fftw -xAVX
FFLAGS =  -free -names lowercase -assume byterecl
#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization,  but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization,  but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

# ifc.9.1, ifc.10.1 recommended
OFLAG=-O2 -ip
#OFLAG=-O2 -ip

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG  = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS  and LAPACK
# we recommend to use mkl, that is simple and most likely
# fastest in Intel based machines
#-----------------------------------------------------------------------

# mkl path for ifc 11 compiler
#MKL_PATH=$(MKLROOT)/lib/e

# mkl path for ifc 12 compiler
MKL_PATH=$(MKLROOT)/lib/intel64

MKL_FFTW_PATH=$(MKLROOT)/interfaces/fftw3xf/

# BLAS
# setting -DRPROMU_DGEMV  -DRACCMU_DGEMV in the CPP lines usually speeds up program execution
# BLAS= -Wl,--start-group $(MKL_PATH)/libmkl_intel_lp64.a $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -lguide
# faster linking and available from at least version 11
#BLAS= -lguide  -mkl
BLAS= -mkl=sequential


# LAPACK, use vasp.5.lib/lapack_double

#LAPACK= ../vasp.5.lib/lapack_double.o

# LAPACK from mkl, usually faster and contains scaLAPACK as well
LAPACK =
#LAPACK= $(MKL_PATH)/libmkl_intel_lp64.a

# here a tricky version, link in libgoto and use mkl as a backup
# also needs a special line for LAPACK
# this is the best thing you can do on AMD based systems !!!!!!

#BLAS =  -Wl,--start-group /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group  -liomp5
#LAPACK= /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_lp64.a

#-----------------------------------------------------------------------

LIB  = -L../vasp.5.lib -ldmy \
../vasp.5.lib/linpack_double.o $(LAPACK) \
$(BLAS)

# options for linking, nothing is required (usually)
LINK =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------

FFT3D   = fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slighly faster and should be used if available
#FFT3D   = fftw3d.o fft3dlib.o   /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D   = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#=======================================================================
# MPI section, uncomment the following lines until
#    general  rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------

#FC=mpif90
#FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf             charge density   reduced in Z direction
# wNGZhalf            gamma point only reduced in Z direction
# scaLAPACK           use scaLAPACK (recommended if mkl is available)
# avoidalloc          avoid ALLOCATE if possible
# PGF90               work around some for some PGF90 / IFC bugs
# CACHE_SIZE          1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV        use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV        use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn                 MD package of Tomas  Bucko
#-----------------------------------------------------------------------

#-----------------------------------------------------------------------

CPP    = $(CPP_) -DMPI  -DHOST=\"LinuxIFC\" -DIFC \
-DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
-DMPI_BLOCK=8000 -Duse_collective -DscaLAPACK    \
-DRPROMU_DGEMV  -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave this section commented out
#-----------------------------------------------------------------------

# usually simplest link in mkl scaLAPACK
BLACS= -lmkl_blacs_openmpi_lp64
SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
#SCA= -lmkl_scalapack_lp64.a -lmkl_blacs_openmpi_lp64

#-----------------------------------------------------------------------
# libraries
#-----------------------------------------------------------------------

LIB     = -L../vasp.5.lib -ldmy  \
../vasp.5.lib/linpack_double.o \
$(SCA) $(LAPACK) $(BLAS)

#-----------------------------------------------------------------------
# parallel FFT
#-----------------------------------------------------------------------

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D   = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a

# alternatively: fftw.3.1.X is slighly faster and should be used if available
#FFT3D   = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o  /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D   = fftmpiw.o fftmpi_map.o  fftw3d.o  fft3dlib.o   $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC=   symmetry.o symlib.o   lattlib.o  random.o


SOURCE=  base.o     mpi.o      smart_allocate.o      xml.o  \
constant.o jacobi.o   main_mpi.o  scala.o   \
asa.o      lattice.o  poscar.o   ini.o  mgrid.o  xclib.o  vdw_nl.o  xclib_grad.o \
radial.o   pseudo.o   gridq.o     ebs.o  \
mkpoints.o wave.o     wave_mpi.o  wave_high.o  spinsym.o \
$(BASIC)   nonl.o     nonlr.o    nonl_high.o dfast.o    choleski2.o \
mix.o      hamil.o    xcgrad.o   xcspin.o    potex1.o   potex2.o  \
constrmag.o cl_shift.o relativistic.o LDApU.o \
paw_base.o metagga.o  egrad.o    pawsym.o   pawfock.o  pawlhf.o   rhfatm.o  hyperfine.o paw.o   \
mkpoints_full.o       charge.o   Lebedev-Laikov.o  stockholder.o dipol.o    pot.o \
dos.o      elf.o      tet.o      tetweight.o hamil_rot.o \
chain.o    dyna.o     k-proj.o    sphpro.o    us.o  core_rel.o \
aedens.o   wavpre.o   wavpre_noio.o broyden.o \
dynbr.o    hamil_high.o  rmm-diis.o reader.o   writer.o   tutor.o xml_writer.o \
brent.o    stufak.o   fileio.o   opergrid.o stepver.o  \
chgloc.o   fast_aug.o fock_multipole.o  fock.o  mkpoints_change.o sym_grad.o \
mymath.o   internals.o npt_dynamics.o   dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o \
nmr.o      pead.o     subrot.o   subrot_scf.o \
force.o    pwlhf.o    gw_model.o optreal.o  steep.o    davidson.o  david_inner.o \
electron.o rot.o  electron_all.o shm.o    pardens.o  paircorrection.o \
optics.o   constr_cell_relax.o   stm.o    finite_diff.o elpol.o    \
hamil_lr.o rmm-diis_lr.o  subrot_cluster.o subrot_lr.o \
lr_helper.o hamil_lrf.o   elinear_response.o ilinear_response.o \
linear_optics.o \
setlocalpp.o  wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o \
mlwf.o     ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o \
local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o \
bse_te.o bse.o acfdt.o chi.o sydmat.o dmft.o \
rmm-diis_mlr.o  linear_response_NMR.o wannier_interpol.o linear_response.o

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp main.o $(SOURCE) $(FFT3D) $(LIB) $(LINK)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one strucuture is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules have been tested for ifc.11 and ifc.12 only

fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftw3d.o : fftw3d.F
	$(CPP)
	$(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
fftmpi.o : fftmpi.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftmpiw.o : fftmpiw.F
	$(CPP)
	$(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
wave_high.o : wave_high.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
# the following rules are probably no longer required (-O3 seems to work)
wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
paw.o : paw.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
cl_shift.o : cl_shift.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)

Using pip to install python packages

Pointer 1: To install specific version of packages do

# pip install 'numpy==1.5.1'

Pointer 2: To show what files were installed

# pip show --files numpy
---
Name: numpy
Version: 1.8.1
Location: /usr/local/python-2.7.8/lib/python2.7/site-packages
Requires:
Files:
../numpy/__init__.py
.....
.....

Pointer 3: Uninstall a package

# pip uninstall SomePackage
Uninstalling SomePackage:

Pointer 4: Upgrade a package:

# pip install --upgrade SomePackage
[...]
Found existing installation: SomePackage 1.0
Uninstalling SomePackage:
Successfully uninstalled SomePackage
Running setup.py install for SomePackage
Successfully installed SomePackage

Pointer 5: List what packages are outdated:

# pip list --outdated
SomePackage (Current: 1.0 Latest: 2.0)

References:

  1. pip 1.5.6 – A tool for installing and managing Python packages
  2. pip – installation