Using stunnel to create a self-signed certificate for SL 6 and CentOS 6

Much of this material comes from the CentOS 5 guide but is applied here to SL 6 and CentOS 6.

The stunnel program allows administrators to create self-signed certificates using the external OpenSSL libraries included with RHEL and its clones, providing strong cryptography to protect connections.

First, ensure you have your repositories enabled. For more information on SL 6 and CentOS 6, see Repository of CentOS 6 and Scientific Linux 6.

# yum install stunnel

To create a self-signed SSL certificate, first go to the /etc/pki/tls/certs/ directory:

# cd /etc/pki/tls/certs/
# make stunnel.pem

Answer all the questions.
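If you want to double-check what was generated, you can, for example, inspect the certificate's subject and validity dates with openssl (still in the certs directory):

# openssl x509 -in stunnel.pem -noout -subject -dates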

Once the certificate is generated, it is possible to use the stunnel command to start the POP3 mail daemon (pop3d) using the following command:

# /usr/sbin/stunnel -d 995 -l /usr/sbin/pop3d pop3d

Once this command is issued, it is possible to open a POP3 email client and connect to the email server using SSL encryption.
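Before pointing a mail client at the server, a quick sanity check is to test the SSL listener with openssl s_client; this sketch assumes you are testing from the server itself on port 995 as above:

# openssl s_client -connect localhost:995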

Installing PyLith using the PyLith Installer

PyLith is a finite element code for the solution of dynamic and quasi-static tectonic deformation problems. This entry will only focus on the compilation of PyLith from the installer. Most, if not all, of the information comes from the installer's documentation.


OVERVIEW

This installer builds the current PyLith release and its dependencies from source.

PyLith depends on several other libraries, some of which depend on other libraries. As a result, building PyLith from source can be tricky and is fraught with potential pitfalls. This installer attempts to eliminate these obstacles by providing a utility that builds all of the dependencies in the proper order with the required options in addition to PyLith itself.

The installer will download the source code for PyLith and all of the dependencies during the install process, so you do not need to do this yourself. Additionally, the installer provides the option of checking out the PyLith and PETSc source code from the Subversion and Mercurial repositories (requires subversion and mercurial be installed); only use this option if you want the bleeding edge versions and are willing to rebuild frequently.

SYSTEM REQUIREMENTS

PyLith Installer should work on any UN*X system.  It requires the following tools:

* A C compiler
* The tar archiving utility
* wget or curl for downloading files

If you are using a modern UN*X system, there is a good chance that the above tools are already installed.
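A quick way to confirm the tools are present is shown below; this is just a sketch assuming the commands are on your PATH:

$ gcc --version
$ which tar wget curl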

STEP 1 – Download and unpack the installer

Download the installer.

http://www.geodynamics.org/cig/software/pylith/pylith-installer-1.6.1-0.tgz

Untar the source code for the installer:

$ mkdir -p $HOME/src/pylith
$ cd $HOME/src/pylith
$ mv $HOME/Downloads/pylith-installer-1.6.1-0.tgz .
$ tar -zxf pylith-installer-1.6.1-0.tgz

STEP 2 – Run Configure

On multi-core and multi-processor systems (not clusters but systems with more than one core and/or processor), the build process can be sped up by using multiple threads when running “make”. Use the configure argument --with-make-threads=NTHREADS where NTHREADS is the number of threads to use (1, 2, 4, 8, etc). The default is to use only one thread. In the examples below, we set the number of threads to 2.

The examples below are not an exhaustive list of configure settings; rather, they are a list of common combinations. You can enable/disable building each package to select the proper set of dependencies that need to be built.

Run configure with --help to see all of the command line arguments.
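For example, using the installer path from Step 1:

$ $HOME/src/pylith/pylith-installer-1.6.1-0/configure --help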

DEFAULT Installation

The default installation assumes you have
* C, C++, and Fortran compilers
* Python 2.4 or later
* MPI

$ mkdir -p $HOME/build/pylith
$ cd $HOME/build/pylith
$ $HOME/src/pylith/pylith-installer-1.6.1-0/configure \
--with-make-threads=2 \
--prefix=$HOME/pylith

DESKTOP-LINUX-OPENMPI
In this case we assume MPI does not exist on your system and you want to use the OpenMPI implementation.

We assume you have
* C, C++, Fortran compilers
* Python 2.4 or later

$ mkdir -p $HOME/build/pylith
$ cd $HOME/build/pylith
$ $HOME/src/pylith/pylith-installer-1.6.1-0/configure \
--enable-mpi=openmpi \
--with-make-threads=2 \
--prefix=$HOME/pylith

CLUSTER

We assume the cluster has been configured with compilers and MPI appropriate for the hardware, and that Python has either not been installed or was not built with the selected compiler suite. So we assume you have
* C, C++, Fortran compilers
* MPI

$ mkdir -p $HOME/build/pylith
$ cd $HOME/build/pylith
$ $HOME/src/pylith/pylith-installer-1.6.1-0/configure \
--enable-python \
--with-make-threads=2 \
--prefix=$HOME/pylith

STEP 3 – Set up your environment

Set up your environment variables (as indicated in the output of the configure script).

$ cd $HOME/build/pylith
$ source setup.sh

STEP 4 – Build the software

Build all of the required dependencies and then PyLith. You do not need to run “make install”, because the installer includes this step in the make process.

$ make

NOTE

Depending on the speed and memory of your machine and the number of dependencies and which ones need to be built, the build process can take anywhere from about ten minutes to several hours. As discussed above, you can interrupt the build process and continue at a later time from where you left off.

If you get error messages while running make with multiple threads, then try running make again, as not all packages fully support parallel builds. You can also go to the build directory of the package and run “make” before running make in $HOME/build/pylith again to resume the build process. For example,

$ cd netcdf-build
$ make

STEP 5 – Verify the installation

Run your favorite PyLith example or test problem to ensure that PyLith was installed properly.
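A quick sanity check, assuming the environment from setup.sh is active, is to confirm that the pylith executable is on your PATH:

$ which pylith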

Add the line

. $HOME/build/pylith/setup.sh

to your .bashrc (or other appropriate file), or manually add the environment variables from setup.sh, so that the environment is set up automatically every time you open a shell.
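For example, to append that line to your .bashrc (adjust the file name if you use a different shell):

$ echo '. $HOME/build/pylith/setup.sh' >> $HOME/.bashrc

The single quotes keep $HOME unexpanded, so the line is stored exactly as shown above.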

Brief overview of Valgrind usage

This write-up covers some very basic commands, but I will list some other tutorials and reading material to complement it. I’m assuming that you have compiled Valgrind as written in Compiling Valgrind on CentOS 5.

One of the most commonly used Valgrind commands is:

# valgrind --tool=memcheck --leak-check=full ./my_program

Commonly-used Options

  1. --leak-check=<no|summary|yes|full> [default: summary]
     When enabled, search for memory leaks when the client program finishes. If set to summary, it says how many leaks occurred. If set to full or yes, it also gives details of each individual leak.
  2. --show-reachable=<yes|no> [default: no]
     When disabled, the memory leak detector only shows “definitely lost” and “possibly lost” blocks. When enabled, the leak detector also shows “reachable” and “indirectly lost” blocks (in other words, it shows all blocks except suppressed ones).
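For example, combining both options on the same program as before:

# valgrind --tool=memcheck --leak-check=full --show-reachable=yes ./my_program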

For more information on Valgrind options and more detailed usage, see:

  1. Valgrind Manual – 4.3 Memcheck Command Options
  2. Using Valgrind to Find Memory Leaks and Invalid Memory Use
  3. Using Valgrind to debug memory leaks

Compiling Valgrind on CentOS 5

The Valgrind tools automatically detect many memory management and threading bugs, and can profile your programs in detail. Valgrind runs on the following platforms: X86/Linux, AMD64/Linux, ARM/Linux, PPC32/Linux, PPC64/Linux, S390X/Linux, ARM/Android (2.3.x), X86/Darwin and AMD64/Darwin (Mac OS X 10.6 and 10.7).

According to Valgrind, a number of useful tools are supplied as standard; a short example of selecting a tool follows the list.

  1. Memcheck is a memory error detector. It helps you make your programs, particularly those written in C and C++, more correct.
  2. Cachegrind is a cache and branch-prediction profiler. It helps you make your programs run faster.
  3. Callgrind is a call-graph generating cache profiler. It has some overlap with Cachegrind, but also gathers some information that Cachegrind does not.
  4. Helgrind is a thread error detector. It helps you make your multi-threaded programs more correct.
  5. DRD is also a thread error detector. It is similar to Helgrind but uses different analysis techniques and so may find different problems.
  6. Massif is a heap profiler. It helps you make your programs use less memory.
  7. DHAT is a different kind of heap profiler. It helps you understand issues of block lifetimes, block utilisation, and layout inefficiencies.
  8. SGcheck is an experimental tool that can detect overruns of stack and global arrays. Its functionality is complementary to that of Memcheck: SGcheck finds problems that Memcheck can’t, and vice versa.
  9. BBV is an experimental SimPoint basic block vector generator. It is useful to people doing computer architecture research and development.
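Each of these tools is selected with the --tool option. As a minimal sketch (./my_program is just a placeholder program), profiling with Cachegrind and Massif would look like:

# valgrind --tool=cachegrind ./my_program
# valgrind --tool=massif ./my_program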

Compilation of Valgrind

Compilation is very straightforward:

# tar -xvjpf valgrind-3.7.0.tar.bz2
# cd valgrind-3.7.0
# ./configure --prefix=/usr/local/valgrind-3.7.0
# make; make install

Testing Valgrind

# /usr/local/valgrind-3.7.0/bin/valgrind ls -l

Either this works, or it bombs out with some complaint.
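If it works and you would rather not type the full path every time, you can, for example, add the bin directory to your PATH for the current shell:

# export PATH=/usr/local/valgrind-3.7.0/bin:$PATH
# valgrind --version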

Compiling the Adaptive Poisson-Boltzmann Solver (APBS) on CentOS 5

Adaptive Poisson-Boltzmann Solver (APBS) is a software package for modeling biomolecular solvation through solution of the Poisson-Boltzmann equation (PBE), one of the most popular continuum models for describing electrostatic interactions between molecular solutes in salty, aqueous media.

Installation is very simple. There are many binaries available and you can use them directly. Do note that the latest binaries (apbs-1.3) require glibc 2.7 or greater. If you are using CentOS 5, you may want to use the apbs-1.21 binaries or below.
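To check which glibc version your system provides (and hence which binary you can use), you can, for example, run:

# rpm -q glibc
# ldd --version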

I’m assuming you are using the Intel compilers. To download and install them:

  1. If you are eligible for the Intel Compiler free download, download the Free Non-Commercial Intel Compiler.
  2. Build OpenMPI with Intel Compiler

If you are prepared to compile from source, then you should be able to use the latest version even on CentOS 5.

To compile from source, the simplest and most straightforward compilation is:

# tar -zxvf apbs-1.3-source.tar.gz
# cd apbs-1.3-source
# ./configure --prefix=/usr/local/apbs-1.3
# make; make install

To enable OpenMPI:

# ./configure --prefix=/usr/local/apbs-1.3 --with-openmpi=/usr/local/mpi
# make; make install

For more information, do look at the output of ./configure --help (run inside the apbs-1.3-source directory) or the INSTALL file.

1. APBS Project Site (http://sourceforge.net/projects/apbs/)

Using strace as a troubleshooting tool

strace, when run in conjunction with a program, outputs all the system calls made to the kernel by the program.

One quick way to find out what is going on in your program is to do:

$ strace -c ./my_hello_world_program
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
74.80    0.002998        1499         2           wait4
21.91    0.000878           4       221           read
0.95    0.000038           0       237         2 mmap
0.77    0.000031          10         3         1 mkdir
0.67    0.000027           0       566       361 open
0.35    0.000014           0        81           mprotect
0.30    0.000012           0        62        37 stat
0.25    0.000010           0       225           close
0.00    0.000000           0        37         1 write
0.00    0.000000           0       132           fstat
0.00    0.000000           0         8           poll
0.00    0.000000           0         2           lseek
0.00    0.000000           0       120           munmap
0.00    0.000000           0        15           brk
0.00    0.000000           0        16           rt_sigaction
................

................

------ ----------- ----------- --------- --------- ----------------
100.00    0.004008                  1990       411 total

If you wish to do a full trace, just run strace on the program directly; you can easily find the error if there was one:

$ strace ./my_hello_world_program
............

............

open("/tmp/openmpi-sessions-root@starfruit-h00.cluster.spms.ntu.edu.sg_0/25979/1/0",
O_RDONLY|O_NONBLOCK|O_DIRECTORY) = -1 ENOENT (No such file or directory)
munmap(0x2b46e05ef000, 2111200)         = 0
munmap(0x2b46dffe5000, 2102312)         = 0
munmap(0x2b46dfdde000, 2123264)         = 0
munmap(0x2b46e103f000, 2106960)         = 0
munmap(0x2b46e1242000, 2104560)         = 0
munmap(0x2b46e269d000, 2114912)         = 0
munmap(0x2b46e41c9000, 2145008)         = 0
munmap(0x2b46e43d5000, 2162608)         = 0

If you wish to write the output of strace to a file instead, use the -o argument:

$ strace -o strace_output_file ./my_hello_world_program

If you wish to trace file, process, or network system calls, you can use “-e trace=file”, “-e trace=process”, or “-e trace=network”. You can also name individual system calls, for example:

$ strace -e trace=open,close,read,write ./my_hello_w0rld_program
$ strace -e trace=stat,chmod,unlink ./my_hello_world_program
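These trace classes can also be combined with the -o argument shown earlier; for example, to capture only network-related calls to a file (the output file name here is just an example):

$ strace -e trace=network -o network_trace.out ./my_hello_world_program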

Further Information:

  1. Solutions for tracing UNIX applications (IBM DeveloperWorks)
  2. strace – A very powerful troubleshooting tool for all Linux users (linuxhelp.blogspot.com)
  3. Ten commands every linux developer should know (Linux Journal)

Basic Overview and use of NMON on CentOS 5

nmon for Linux (Nigel’s performance Monitor for Linux) is a wonderful Swiss Army knife for performance information. You can display multiple screens in the same window and get information on CPU, memory, NFS, network, disks, resources, kernel, etc.

nmon has a single binary for each operating system, including Red Hat, SUSE, Ubuntu, openSUSE, Fedora, etc. Using the binary is as simple as starting the executable, for example:

$ ./nmon_x86_64_rhel54

Using nmon in basic mode. For more details, do read the nmon for Linux Getting Started page.

  1. To quit, just hit “q”
  2. Most of the rest are toggled commands, i.e. hit c to see the CPU stats and hit c again to remove the CPU stats.
  3. For disk graphs, hit d and you will see a 50-column graph of the read and write busy percentages.
  4. For disk numbers, hit D; if you hit D again you see different information, and eventually hitting D will close this section.

Using nmon for Linux in data capture mode

  1. Capturing a small sample file: nmon -f -s 2 -c 30 (a combined example is shown after this list)
  2. -f means the data will be saved and not displayed on the screen
  3. -s 2 means data capture every 2 seconds
  4. -c 30 means 30 data points or snap shots
  5. Do note that nmon runs like a daemon process in the background and will continue to run until completion whether you stay connected or log off.
  6. You can check whether the nmon is running by “ps -ef | grep nmon”
  7. The resulting file is xxx.nmon
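Putting the data-capture steps together (using the binary name from the earlier example; the actual output file name will include the hostname and a timestamp rather than xxx):

$ ./nmon_x86_64_rhel54 -f -s 2 -c 30
$ ps -ef | grep nmon
$ ls *.nmon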