Compiling GNU Parallel on CentOS-7

GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU parallel can then split the input and pipe it into commands in parallel.
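
As a quick illustration of the idea (the file and URL lists here are just placeholders), GNU Parallel runs one command per input line, several at a time. The first line below compresses every .log file under the current directory, one gzip per file across your CPU cores; the second fetches the URLs listed in urls.txt, four at a time:

% find . -name '*.log' | parallel gzip {}
% cat urls.txt | parallel -j 4 wget {}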

To download, go to the website https://mirror.freedif.org/GNU/parallel/ and fetch the latest tarball.

% wget https://mirror.freedif.org/GNU/parallel/parallel-latest.tar.bz2
% tar -xvf parallel-latest.tar.bz2
% cd parallel-*/
% ./configure --prefix=/usr/local/gnu/parallel
% make
% make install
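
Assuming the prefix above, a quick sanity check that the build is usable (the PATH export is optional):

% /usr/local/gnu/parallel/bin/parallel --version
% export PATH=/usr/local/gnu/parallel/bin:$PATH
% parallel echo ::: a b c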

There are some YouTube resources on GNU Parallel that could be useful if you wish to learn how to use it.

References:

  1. GNU Parallel

Intel® Edge AI Certification

Intel® Edge AI Certification training courses can be started and completed at no charge. To get officially certified and receive a badge, you must complete the assessment and review process, which costs $99 for one year. Follow up with an annual recertification course to update your skills and credentials.

Certification training includes:

  • Hands-on experience with edge AI tools and platforms, including the Intel® Distribution of OpenVINO™ toolkit and Intel® DevCloud for the Edge
  • Use cases that detect safety gear, prevent retail losses, identify manufacturing defects, and solve other real-world problems with the combined application of computer vision and deep-learning inference.
  • Development of your own edge AI solutions portfolio, drawing on libraries and APIs for TensorFlow*, PyTorch*, Open Neural Network Exchange (ONNX*), and other public models, running on your choice of Intel® DevCloud for the Edge hardware clusters.

For more information and to sign up, see the Intel® Edge AI Certification page on the Intel site.

Compiling g2o with Eigen-3.3.9 and GNU-6.5

A wide range of problems in robotics as well as in computer-vision involve the minimization of a non-linear error function that can be represented as a graph. Typical instances are simultaneous localization and mapping (SLAM) or bundle adjustment (BA). The overall goal in these problems is to find the configuration of parameters or state variables that maximally explain a set of measurements affected by Gaussian noise. g2o is an open-source C++ framework for such nonlinear least squares problems. g2o has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA.
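
For reference, the objective that g2o minimizes can be written (in the notation of the g2o paper) as a sum of weighted squared error terms over the edges of the graph:

F(\mathbf{x}) = \sum_{(i,j) \in \mathcal{C}} \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j)^{\top} \, \Omega_{ij} \, \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j), \qquad \mathbf{x}^{*} = \operatorname*{arg\,min}_{\mathbf{x}} F(\mathbf{x})

where e_ij is the error between a measurement z_ij and the prediction made from the state variables x_i and x_j, and Omega_ij is the information matrix of that measurement.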

https://github.com/RainerKuemmerle/g2o

What I used:

  • gnu-6.5
  • m4-1.4.18
  • gmp-6.1.0
  • mpfr-3.1.4
  • mpc-1.0.3
  • isl-0.18
  • gsl-2.1
  • cmake-3.21.3
% git clone https://github.com/RainerKuemmerle/g2o
% cd g2o
% mkdir build
% cd build

Sometimes cmake does not pick up your exported compiler variables. It is best to force the C and C++ compilers directly by specifying their paths.

% cmake .. -DEIGEN3_INCLUDE_DIR=/usr/local/eigen-3.3.9/build/include/eigen3 -DCMAKE_INSTALL_PREFIX=/usr/local/g2o/ -DCHOLMOD_INCLUDE_DIR=/usr/lib64/ -DCMAKE_C_COMPILER=/usr/local/gcc-6.5.0/bin/gcc -DCMAKE_CXX_COMPILER=/usr/local/gcc-6.5.0/bin/g++
% make -j 4
% make install
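
If other software will link against this build, you may also want to make the installed libraries findable at runtime. The prefix below follows the cmake command above; on some systems the libraries land in lib64 instead of lib:

% export LD_LIBRARY_PATH=/usr/local/g2o/lib:$LD_LIBRARY_PATH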

Checking Disk Usage within Subfolders while Avoiding Mount Points

If you need to check disk usage but wish to avoid crossing mount points, you can use the command

[root@hpc-hn /]# du -h -x -d 1
48M     ./etc
552M    ./root
11G     ./var
1.1G    ./tmp
11G     ./usr
0       ./media
0       ./mnt
4.8G    ./opt
0       ./srv
0       ./install
0       ./log
0       ./misc
0       ./net
0       ./server_priv
0       ./ProjectSpace
0       ./media1
0       ./media2
28G     .
  • -h prints sizes in human-readable format
  • -d sets the depth level; by default it is 0, which is the same as summarize
  • -x skips directories on different file systems
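
A handy variant (assuming GNU coreutils sort, which supports -h) is to sort the output by size so the largest subfolders appear last:

% du -h -x -d 1 / | sort -h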

Gromacs-2020.6 and Plumed-2.7.2 with Intel-2019

Install Plumed-2.7.2

Plumed-2.7.2 can be installed in a similar fashion to Compiling plumed-2.6.4 with Intel 2019

Download and unpack Gromacs-2020.6

% wget https://ftp.gromacs.org/gromacs/gromacs-2020.6.tar.gz
% tar -zxvf gromacs-2020.6.tar.gz
% cd gromacs-2020.6

Patch Gromacs-2020.6 with Plumed

% plumed patch -p
PLUMED patching tool

1) gromacs-2019.6   4) gromacs-4.5.7    7) namd-2.14
2) gromacs-2020.6   5) namd-2.12        8) qespresso-5.0.2
3) gromacs-2021     6) namd-2.13        9) qespresso-6.2
Choose the best matching code/version:2

Compile Gromacs according to Compiling Gromacs-2019.3 with Intel 2018 MKL and AVX-512
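
As a rough sketch of what that configure step can look like for this version (the install prefix and job count here are assumptions; the referenced post has the full details), after sourcing the Intel environments and patching the source:

% cd gromacs-2020.6
% mkdir build && cd build
% cmake .. -DCMAKE_C_COMPILER=mpiicc -DCMAKE_CXX_COMPILER=mpiicpc -DGMX_MPI=on -DGMX_FFT_LIBRARY=mkl -DGMX_SIMD=AVX_512 -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs-2020.6_plumed
% make -j 8
% make install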

References:

  1. Compiling plumed-2.6.4 with Intel 2019
  2. Compiling Gromacs-2019.3 with Intel 2018 MKL and AVX-512
  3. Install Gromacs-2016.3 and Plumed-2.3.3

Webinar: High Performance GPU Acceleration – Part 1: Code Design

  • Online Registration Here
  • Date: 13th October 2021, 9am PDT

Heterogeneous computing comes with the challenge of designing code that can work in multi-processor/accelerator environments. Developers need to be equipped with the right set of metrics to make informed design and optimization decisions that take advantage of target hardware.

In Part 1 of this 2-part webinar series, Technical Consulting Engineer Cory Levels focuses on designing software for efficient offload from CPUs to GPUs—even before final hardware is available—using Intel® Advisor. Using a walkthrough of an ISO 3DFD example (3D isotropic Finite Difference), you will learn how to:

  • Optimize your CPU application for memory and compute
  • Identify efficient GPU offload opportunities and quantify the potential performance speed up
  • See performance headroom of your GPU offloaded code against hardware limitations, and get insights for an effective optimization roadmap

For more information, take a look at the Intel site.

Compiling plumed-2.6.4 with Intel 2019

PLUMED is a plugin that works with a large number of molecular dynamics codes (Codes interfaced with PLUMED). It can be used to analyze features of the dynamics on-the-fly or to perform a wide variety of free energy methods. PLUMED can also work as a command line tool to perform analysis on trajectories saved in most of the existing formats.
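
For example, the standalone analysis mode can be driven from the shell with the plumed driver tool (the trajectory and input file names below are placeholders):

% plumed driver --plumed plumed.dat --mf_xtc traj.xtc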

The installation guide can be found at Plumed Installation

Step 1: Source the Intel compiler environments. At least the MKL, compiler, and MPI environments should be sourced.

% source /usr/local/intel/2019u5/mkl/bin/mklvars.sh intel64
% source /usr/local/intel/2019u5/compilers_and_libraries/linux/bin/compilervars.sh intel64
% source /usr/local/intel/2019u5/impi/2019.5.281/intel64/bin/mpivars.sh intel64
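
An optional quick check that the Intel wrappers are now on the PATH:

% which mpiicc
% mpirun --version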

Step 2: Download and untar the Plumed code. For Plumed-2.6.4, you can download from https://github.com/plumed/plumed2/releases/tag/v2.6.4
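
One possible way to do this from the command line (the tarball name below is an assumption based on the usual PLUMED release naming; adjust it to match the asset on the release page):

% wget https://github.com/plumed/plumed2/releases/download/v2.6.4/plumed-2.6.4.tgz
% tar -zxvf plumed-2.6.4.tgz
% cd plumed-2.6.4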

Step 3: Compile the Codes.

% ./configure --prefix=/usr/local/plumed2-2.6.4_i2019 CC=mpiicc CXX=mpiicpc CXXFLAGS=-O3 --enable-mpi --disable-xdrfile LDFLAGS=-L/usr/local/intel/2019u5/mkl/lib/intel64  CPPFLAGS=-I/usr/local/intel/2019u5/mkl/include
% make -j 4
% make install
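
Once installed, it may help to point your environment at the new prefix before patching GROMACS. The paths below follow the --prefix used above; PLUMED_KERNEL is the conventional variable for the runtime kernel library:

% export PATH=/usr/local/plumed2-2.6.4_i2019/bin:$PATH
% export LD_LIBRARY_PATH=/usr/local/plumed2-2.6.4_i2019/lib:$LD_LIBRARY_PATH
% export PLUMED_KERNEL=/usr/local/plumed2-2.6.4_i2019/lib/libplumedKernel.so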

Compiling ANTs with GNU-6.5

What is Advanced Normalization Tools?

ANTS is a tool for computational neuroanatomy based on medical images. ANTS reads any image type that can be read by ITK (www.itk.org), that is, jpg, tiff, hdr, nii, nii.gz, mha/d and more image types as well. For the most part, ANTS will output float images, which you can convert to other types with the ANTS ConvertImagePixelType tool. ImageMath has a bunch of basic utilities such as multiplication, inversion and many more advanced tools such as computation of the Lipschitz norm of a deformation field. ANTS programs may be called from the command line on almost any platform.

The ANTs project site can be found at GitHub – ANTsX/ANTs: Advanced Normalization Tools (ANTs). Compilation information can be found at Compiling ANTs on Linux and Mac OS · ANTsX/ANTs Wiki · GitHub

Prerequisites

  • gnu-6.5
  • m4-1.4.18
  • gmp-6.1.0
  • mpfr-3.1.4
  • mpc-1.0.3
  • isl-0.18
  • gsl-2.1
  • cmake-3.21.3

Compiling ANTs is not too difficult if you use the installANTs.sh script from the antsInstallExample repository shown below

% mkdir /usr/local/ANTs
% cd /usr/local/ANTs
% git clone https://github.com/cookpa/antsInstallExample.git
% cd antsInstallExample
% ./installANTs.sh

Once done, you should see the following in the antsInstallExample directory

ANTs  build  install  installANTs.sh

Inside it, the install directory contains the bin and lib directories.
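
To use the tools, the usual ANTs convention is to point ANTSPATH at the installed bin directory and add it to your PATH. The prefix below assumes the layout created by the script above:

% export ANTSPATH=/usr/local/ANTs/antsInstallExample/install/bin/
% export PATH=${ANTSPATH}:$PATH
% which antsRegistration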

Intel Unveils Second-Generation Neuromorphic Chip

Various processors and pieces of code are often compared to brains, but neuromorphic chips work to much more directly mimic neurological systems through the use of computational “neurons” that communicate with one another. Intel’s first-generation Loihi chip, introduced in 2017, has around 128,000 of those digital neurons. Over the ensuing four years, Loihi has been packed into increasingly large systems, learned to touch, and even been taught to smell.

Now, it’s getting a new family member: Loihi 2. In its press release, Intel said that years of testing with the first-generation Loihi chip helped them to design a second generation with up to ten times the processing speed; up to 15 times greater resource density; and up to a million computational neurons per chip – more than seven times those in the first generation. Intel reports that early tests have shown that Loihi 2 required more than 60 times fewer ops per inference when running deep neural networks as compared to Loihi 1 (without a loss in accuracy).

Intel Unveils Loihi 2, Its Second-Generation Neuromorphic Chip, HPCWire