Deploying the BIG-IP Edge Gateway and Local Traffic Manager with VMware View

This article from F5 covers the following topics:

  1. Deploying F5 with VMware View
  2. Configuring the BIG-IP Edge Gateway
  3. Configuring the BIG-IP LTM for VMware View

Deploying the BIG-IP Edge Gateway and Local Traffic Manager with VMware View 4.0 and 4.5 (PDF)

Installing check_mk for Nagios on CentOS 5

check_mk bills itself as “a new general purpose Nagios-plugin for retrieving data”. This wonderful plugin is a good replacement for NRPE, NSClient++ and the like. I have successfully used check_mk in place of NSClient++ to monitor my Windows machines.

Installing Nagios is straightforward. You may want to see the blog entry “Using Nagios 2.x/3.x on CentOS”. In a nutshell, run the following in sequence to avoid dependency issues:

# yum install nagios nagios-devel
# yum install nagios-plugins-all
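
If Nagios and Apache are not yet running, you can enable them at boot and start them now. This is a minimal sketch for a stock CentOS 5 box, where the Apache service is named httpd; adjust the service names if your packages differ.

# chkconfig nagios on
# chkconfig httpd on
# service nagios start
# service httpd start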

Downloading and unpacking check_mk

# wget http://mathias-kettner.de/download/check_mk-1.1.8.tar.gz
# tar -zxvf check_mk-1.1.8.tar.gz
# cd check_mk-1.1.8
# ./setup.sh --yes

Restart the services

# service nagios restart
# service httpd restart

Making the agent accessible through xinetd

# cp -p /usr/share/check_mk/agents/check_mk_agent.linux /usr/bin/check_mk_agent
# cp -p /usr/share/check_mk/agents/xinetd.conf /etc/xinetd.d/check_mk
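
The bundled xinetd configuration should end up looking roughly like the listing below; 6556 is the conventional check_mk agent port. The only_from addresses are placeholders for illustration; restrict them to your Nagios server or leave the line commented out.

# cat /etc/xinetd.d/check_mk
service check_mk
{
        type           = UNLISTED
        port           = 6556
        socket_type    = stream
        protocol       = tcp
        wait           = no
        user           = root
        server         = /usr/bin/check_mk_agent
        # only_from    = 127.0.0.1 192.168.1.10
        disable        = no
}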

Restart the xinetd service.

# service xinetd restart
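
To quickly verify that the agent answers, and to let check_mk inventory a host, something along the following lines should work. The /etc/check_mk/main.mk path is the default one, “localhost” is just a placeholder for whichever hosts you monitor, and the exact flags can differ slightly between check_mk versions (check_mk --help lists them).

# telnet localhost 6556

The agent should dump its output and close the connection. Next, list the hosts in /etc/check_mk/main.mk:

all_hosts = [ "localhost" ]

Then run the inventory and activate the new services:

# check_mk -I
# check_mk -O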

For more information on check_mk on Debian derivatives, do look at the excellent writeup “HOWTO: How to install Nagios with check_mk, PNP and NagVis”.

Testing the InfiniBand Interconnect Performance with Intel MPI Benchmark (Part II)

This is a continuation of the article Testing the InfiniBand Interconnect Performance with Intel MPI Benchmark (Part I).

B. Running IMB

After running “make”, the IMB-MPI1 executable will have been built. Run IMB-MPI1 pingpong from the management node or head node, and make sure the IMB-MPI1 binary is present in the directory.

# cd /home/hpc/imb/src
# mpirun -np 16 -host node1,node2 /home/hpc/imb/src/IMB-MPI1 pingpong
# mpirun -np 16 -host node1,node2 /home/hpc/imb/src/IMB-MPI1 sendrecv
# mpirun -np 16 -host node1,node2 /home/hpc/imb/src/IMB-MPI1 exchange
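
If you prefer not to list hosts on the command line, most mpirun implementations also accept a machine file. This is only a sketch: the file name /home/hpc/hosts is an assumption, and the flag may be -machinefile, -hostfile or -f depending on your MPI implementation.

# cat /home/hpc/hosts
node1
node2
# mpirun -np 16 -machinefile /home/hpc/hosts /home/hpc/imb/src/IMB-MPI1 pingpong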

Example output from “pingpong”:

benchmarks to run pingpong
#---------------------------------------------------
#    Intel (R) MPI Benchmark Suite V3.2.2, MPI-1 part
#---------------------------------------------------
# Date                  : Mon Feb  7 10:42:48 2011
# Machine               : x86_64
# System                : Linux
# Release               : 2.6.18-164.el5
# Version               : #1 SMP Thu Sep 3 03:28:30 EDT 2009
# MPI Version           : 2.1
# MPI Thread Environment: MPI_THREAD_SINGLE

# New default behavior from Version 3.2 on:

# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time

# Calling sequence was:

# /home/shared-rpm/imb/src/IMB-MPI1 pingpong

# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE
# MPI_Datatype for reductions    :   MPI_FLOAT
# MPI_Op                         :   MPI_SUM
#
#

# List of Benchmarks to run:

# PingPong

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
# ( 46 additional processes waiting in MPI_Barrier)
#---------------------------------------------------
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         8.74         0.00
            1         1000         8.82         0.11
            2         1000         8.83         0.22
            4         1000         8.89         0.43
            8         1000         8.90         0.86
           16         1000         8.99         1.70
           32         1000         9.00         3.39
           64         1000        10.32         5.91
          128         1000        10.52        11.60
          256         1000        11.24        21.72
          512         1000        12.12        40.30
         1024         1000        13.76        70.98
         2048         1000        15.55       125.59
         4096         1000        17.81       219.35
         8192         1000        22.47       347.67
        16384         1000        45.24       345.41
        32768         1000        59.83       522.29
        65536          640        87.68       712.85
       131072          320       154.80       807.47
       262144          160       312.87       799.05
       524288           80       556.20       898.96
      1048576           40      1078.94       926.84
      2097152           20      2151.90       929.41
      4194304           10      4256.70       939.69

# All processes entering MPI_Finalize

If you wish to use Torque to run IMB, do read the IBM article “Setting up an HPC cluster with Red Hat Enterprise Linux”.
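
As a rough sketch, a Torque job script for the pingpong run could look like the one below. The resource request (2 nodes, 8 cores each) and the file name imb.pbs are assumptions; adjust them to your cluster.

#!/bin/bash
#PBS -N imb-pingpong
#PBS -l nodes=2:ppn=8
#PBS -j oe

cd /home/hpc/imb/src
# $PBS_NODEFILE lists the nodes Torque allocated to this job
mpirun -np 16 -machinefile $PBS_NODEFILE ./IMB-MPI1 pingpong

Submit it with:

# qsub imb.pbs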

Testing the InfiniBand Interconnect Performance with Intel MPI Benchmark (Part I)

This writeup focuses on verifying the performance of InfiniBand interconnects, and applies to RDMA/iWARP interconnects as well. The material is adapted from the IBM article “Setting up an HPC cluster with Red Hat Enterprise Linux”.

A. Building Intel MPI Benchmark (“IMB”)

IMB can be run on a single node or on several nodes. Two or more nodes are required to test message passing between nodes.

Step 1: Download the IMB

1. Go to Intel® MPI Benchmarks 3.2.2 and download the software

2. Untar the package to a shared directory used by the nodes

# tar -zxvf IMB_3.2.2.tar.gz -C /home/hpc

3. Change to the source directory

# cd /home/hpc/imb/src

4. Edit the make_ict makefile to change the assignment of CC from mpiicc to mpicc, as shown

LIB_PATH    =
LIBS        =
CC          = mpicc
ifeq (,$(shell which ${CC}))
$(error ${CC} is not defined through the PATH environment variable setting. Please try sourcing an Intel(r) Cluster Tools script file such as "mpivars.[c]sh" or "ictvars.[c]sh")
endif
OPTFLAGS    =
CLINKER     = ${CC}
LDFLAGS     =
CPPFLAGS    =

export CC LIB_PATH LIBS OPTFLAGS CLINKER LDFLAGS CPPFLAGS
include Makefile.base
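
If you prefer to make the change non-interactively, a sed one-liner such as the following should do it, assuming the stock make_ict assigns CC = mpiicc; double-check the file afterwards.

# sed -i 's/^CC.*= mpiicc/CC          = mpicc/' make_ict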

5. Type “make” at /home/hpc/imb/src

# make

You should see an IMB-MPI1 executable. If you cannot find it, use the “locate” or “find” command to locate the executable.
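
For example, assuming the shared directory from step 2, a find such as this will show where the binary ended up:

# find /home/hpc -name IMB-MPI1
/home/hpc/imb/src/IMB-MPI1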

See Testing the InfiniBand Interconnect Performance with Intel MPI Benchmark (Part II) for the second part of this article.