Compiling ORCA-4.2.1 with OpenMPI-3.1.4

ORCA is a general-purpose quantum chemistry package that is free of charge for academic users. The project and download pages can be found at the ORCA Forum.

You have to register before you can participate in the forum or download ORCA-4.2.1. The latest ORCA release is currently 5.0.3, but the package you want here is ORCA 4.2.1, Linux, x86-64, .tar.xz Archive.

Prerequisites that I use: OpenMPI-3.1.4 built with GCC-6.5.0.

Unpacking ORCA-4.2.1

% tar -xvf orca_4_2_1_linux_x86-64_openmpi314.tar.xz
.....
.....
orca_4_2_1_linux_x86-64_openmpi314/autoci_rhf_poly1_sigma
orca_4_2_1_linux_x86-64_openmpi314/orca_eprnmr_mpi
orca_4_2_1_linux_x86-64_openmpi314/autoci_uhf_poly1_sigma
orca_4_2_1_linux_x86-64_openmpi314/orca_casscf
orca_4_2_1_linux_x86-64_openmpi314/autoci_iprocisd_sigma_alpha_doublet_mpi
orca_4_2_1_linux_x86-64_openmpi314/autoci_rohf_cisd_product
orca_4_2_1_linux_x86-64_openmpi314/orca_gstep
orca_4_2_1_linux_x86-64_openmpi314/contrib/
orca_4_2_1_linux_x86-64_openmpi314/contrib/G2_MP2.cmp
orca_4_2_1_linux_x86-64_openmpi314/contrib/W2_2.cmp
orca_4_2_1_linux_x86-64_openmpi314/contrib/G2_MP2_SV.cmp
orca_4_2_1_linux_x86-64_openmpi314/contrib/G2_MP2_SVP.cmp
orca_4_2_1_linux_x86-64_openmpi314/orca4.2-eula.pdf
orca_4_2_1_linux_x86-64_openmpi314/Third_Party_Licenses_ORCA_4.2.pdf

Running ORCA. If your environment uses Environment Modules, load the OpenMPI module:

% module load openmpi/3.1.4/gcc-6.5.0

If not, you have to set PATH, LD_LIBRARY_PATH and MANPATH yourself:

export PATH=$PATH:$OPENMPI_HOME/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$OPENMPI_HOME/lib:$OPENMPI_HOME/lib64
export MANPATH=$MANPATH:$OPENMPI_HOME/share/man

Typical Input file

Calling ORCA requires the full path to the binary; extra flags for mpirun can be passed as a quoted string:

/usr/local/orca_4_2_1_linux_x86-64_openmpi314/orca $INPUT "--bind-to core --verbose" > $OUTPUT

For input file usage, take a look at the ORCA 4.2.1 manual included in the unpacked archive, or read it online at orca_manual_4_2_1.pdf (enea.it).

For example:

! B3LYP def2-SVP SP
%tddft
tda false
nroots 50
triplets true
end
%pal
nprocs 32
end

* xyz 0 1    # fac-Ir(ppy)3, coordinates from fac_irppy3.xyz
  Ir        0.00000        0.00000        0.03016
   N       -1.05797        1.55546       -1.09121
   N        1.87606        0.13850       -1.09121
.....
.....
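(The coordinate block is truncated here; it ends with a closing *.) Putting the pieces together, here is a minimal sketch of a run script. The file names and the OpenMPI location are illustrative; since the %pal block requests 32 processes, ORCA invokes mpirun itself and only needs OpenMPI on the PATH:

#!/bin/bash
# Minimal ORCA run sketch -- file names and OPENMPI_HOME are illustrative
export OPENMPI_HOME=/usr/local/openmpi-3.1.4
export PATH=$PATH:$OPENMPI_HOME/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$OPENMPI_HOME/lib:$OPENMPI_HOME/lib64

INPUT=fac_irppy3.inp      # the input file shown above
OUTPUT=fac_irppy3.out

# The full path is required so ORCA can locate its own helper binaries
/usr/local/orca_4_2_1_linux_x86-64_openmpi314/orca $INPUT > $OUTPUT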

High-Severity Zero-Day Bug in Google Chrome

This article is taken from a Singapore Computer Emergency Response Team (SingCERT) alert titled High-Severity Zero-Day Bug in Google Chrome.

Google has released Chrome 99.0.4844.84 for Windows, Mac and Linux, and Chrome 99.0.4844.88 for Android, to address a high-severity zero-day bug (CVE-2022-1096). The vulnerability is a type confusion in the V8 JavaScript engine and is reported to be exploited in the wild. V8 is the Chrome component responsible for processing JavaScript code.

Type confusion refers to coding bugs in which an application initialises data execution operations using input of a specific “type” but is tricked into treating the input as a different “type”. This leads to logical errors in the application’s memory and may allow an attacker to run unrestricted malicious code inside the application.

No further technical details about the bug have been published by Google.

Google Chrome users on Windows, Mac and Linux are advised to upgrade to Chrome 99.0.4844.84 immediately by going into Chrome menu > Help > About Google Chrome, while Android users may refer to the Google Play Store for the Chrome 99 (99.0.4844.88) update.

High-Severity Zero-Day Bug in Google Chrome

Compiling pybind11 with GNU-6.5

The Project Website can be found at https://github.com/pybind/pybind11

pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code. Its goals and syntax are similar to the excellent Boost.Python library by David Abrahams: to minimize boilerplate code in traditional extension modules by inferring type information using compile-time introspection.

The Compiling Steps can be found at https://pybind11.readthedocs.io/en/stable/basics.html

mkdir build
cd build
cmake .. -DDOWNLOAD_EIGEN=ON -DDOWNLOAD_CATCH=ON
make check -j 4
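Once the checks pass, a quick way to see pybind11 in action is to build a small module straight from the command line. The sketch below is illustrative (the module name example and the function add are made up for this demo) and assumes pybind11 is importable by python3, e.g. installed via pip:

cat > example.cpp <<'EOF'
#include <pybind11/pybind11.h>

// A plain C++ function to expose to Python
int add(int i, int j) { return i + j; }

// PYBIND11_MODULE defines the Python module entry point
PYBIND11_MODULE(example, m) {
    m.def("add", &add, "Add two integers");
}
EOF

c++ -O3 -Wall -shared -std=c++11 -fPIC \
    $(python3 -m pybind11 --includes) example.cpp \
    -o example$(python3-config --extension-suffix)

python3 -c 'import example; print(example.add(2, 3))'   # prints 5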

Managing of Roaming Users’ Home Directories with Systemd-Homed

This article is taken from Opensource.com, titled “Manage Linux users’ home directories with systemd-homed”.


The systemd-homed service supports user account portability independent of the underlying computer system. A practical example is to carry around your home directory on a USB thumb drive and plug it into any system which would automatically recognize and mount it. According to Lennart Poettering, lead developer of systemd, access to a user’s home directory should not be allowed to anyone unless the user is logged in. The systemd-homed service is designed to enhance security, especially for mobile devices such as laptops. It also seems like a tool that might be useful with containers.

This objective can only be achieved if the home directory contains all user metadata. The ~/.identity file stores user account information, which is only accessible to systemd-homed when the password is entered. This file holds all of the account metadata, including everything Linux needs to know about you, so that the home directory is portable to any Linux host that uses systemd-homed. This approach prevents having an account with a stored password on every system you might need to use.

The home directory can also be encrypted using your password. Under systemd-homed, your home directory stores your password along with all of your user metadata. Your encrypted password is not stored anywhere else and thus cannot be accessed by anyone. Although the methods used to encrypt and store passwords on modern Linux systems are considered unbreakable, the best safeguard is to prevent them from being accessed in the first place. Assumptions about the invulnerability of their security have led many to ruin.
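For the curious, homectl is systemd's front end for managing these accounts. A minimal sketch (the user name jdoe, shell, and size are illustrative):

# Create a portable home directory as a LUKS-encrypted image
% sudo homectl create jdoe --storage=luks --shell=/bin/bash

# Show the account metadata record carried inside the home directory
% sudo homectl inspect jdoe

# Grow the encrypted image later if needed
% sudo homectl resize jdoe 50G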

This service is primarily intended for use with portable devices such as laptops. Poettering states, “Homed is intended primarily for client machines, i.e., laptops and thus machines you typically ssh from a lot more than ssh to, if you follow what I mean.” It is not intended for use on servers or workstations that are tethered to a single location by cables or locked into a server room.

The systemd-homed service is enabled by default on new installations, at least for Fedora, which is the distro that I use. This is by design, and I don't expect that to change. User accounts are not affected or altered in any way on systems with existing filesystems, or on upgrades and reinstallations that keep the existing partitions and logical volumes.

Manage Linux users’ home directories with systemd-homed (OpenSource.com)

For further reading, do take a look at “Manage Linux users’ home directories with systemd-homed”.

Licensing Errors for ANSYS IcePak Solver

If you are encountering an error like the one below:

"If you have this error "Failed to enable features using current license settings. Note that Pro, Premium, Enterprise licenses are available on your server. To use these licenses check the corresponding option. For more information, search "PPE" in the help documentation. Failover feature "ANSYS IcePak Solver" is not available. Request name aice_solv does not exist in the....."

Highlighted Areas of Research by HPCWire

HPCWire highlighted three areas of research in high-performance computing and related domains. The article can be found here.

Research 1: HipBone: A performance-portable GPU-accelerated C++ version of the NekBone benchmark

HipBone “is a fully GPU-accelerated C++ implementation of the original NekBone CPU proxy application with several novel algorithmic and implementation improvements which optimize its performance on modern fine-grain parallel GPU accelerators.”

What’s New in HPC Research: HipBone, GPU-Aware Asynchronous Tasks, Autotuning & More

Research 2: A Case for intra-rack resource disaggregation in HPC

A multi-institution research team utilized Cori, a high performance computing system at the National Energy Research Scientific Computing Center, to analyze “resource disaggregation to enable finer-grain allocation of hardware resources to applications.”

What’s New in HPC Research: HipBone, GPU-Aware Asynchronous Tasks, Autotuning & More

Research 3: Improving Scalability with GPU-Aware Asynchronous Tasks

Computer scientists from the University of Illinois at Urbana-Champaign and Lawrence Livermore National Laboratory demonstrated improved scalability by hiding communication behind computation with GPU-aware asynchronous tasks.

What’s New in HPC Research: HipBone, GPU-Aware Asynchronous Tasks, Autotuning & More

SLAW and Singularity

What is SLAW?

SLAW is a scalable, containerized workflow for untargeted LC-MS data processing. It was developed by Alexis Delabriere in the Zamboni Lab at ETH Zurich. An explanation of SLAW's advantages and the motivation for its development can be found in this blog post.

Getting the Test Data from the Source Code

You may want to download the SLAW source code, which contains some test data you can use to exercise your SLAW container.

% git clone https://github.com/zamboni-lab/SLAW.git

Let's make a new directory, my_SLAW, and copy the test_data out. I'm assuming you have Singularity installed; if not, see Compiling Singularity-CE-3.9.2 on CentOS-7.

Create the output folder and unzip mzML.zip:

% mkdir ~/my_SLAW
% cp -r SLAW/test_data ~/my_SLAW
% cd ~/my_SLAW/test_data
% mkdir output
% unzip mzML.zip
% singularity pull slaw.sif docker://zambonilab/slaw:latest

Test running. Just a few things to note: use absolute paths for PATH_OUTPUT and MZML_FOLDER. Here -C contains the run in its own namespaces, -W sets the working directory, and -B binds the host folders into the container:

% singularity run -C -W . -B PATH_OUTPUT:/output  -B MZML_FOLDER:/input slaw.sif

For example,

% singularity run -C -W . -B /home/user1/my_SLAW/test_data/output:/output -B /home/user1/my_SLAW/test_data/mzML:/input slaw.sif
2022-02-25|00:46:52|INFO: Total memory available: 53026 and 32 cores. The workfl
2022-02-25|00:46:52|INFO: Guessing polarity from file:DDA1.mzML
2022-02-25|00:46:53|INFO: Polarity detected: positive
2022-02-25|00:46:54|INFO: STEP: initialisation TOTAL_TIME:2.41s LAST_STEP:2.41s
2022-02-25|00:46:55|INFO: 0 peakpicking added
2022-02-25|00:46:59|INFO: MS2 extraction finished
2022-02-25|00:46:59|INFO: Starting peaktable filtration
2022-02-25|00:46:59|INFO: Done peaktables filtration
2022-02-25|00:46:59|INFO: STEP: peakpicking TOTAL_TIME:7.57s LAST_STEP:5.16s
2022-02-25|00:46:59|INFO: Alignment finished
2022-02-25|00:46:59|INFO: STEP: alignment TOTAL_TIME:7.60s LAST_STEP:0.03s
2022-02-25|00:47:10|INFO: Gap filling and isotopic pattern extraction finished.
2022-02-25|00:47:10|INFO: STEP: gap-filling TOTAL_TIME:18.01s LAST_STEP:10.41s
2022-02-25|00:47:10|INFO: Annotation finished
2022-02-25|00:47:10|INFO: STEP: annotation TOTAL_TIME:18.04s LAST_STEP:0.03s
2022-02-25|00:47:10|INFO: Processing finished.

Compiling OpenFOAM-9 with ThirdParty-9 and Intel MPI on CentOS 7

Step 1a: Get the Software

If you do not have root access to the machine, the recommended installation directory is $HOME/OpenFOAM. If you have root permissions and the installation is for more than one user, one of the 'standard' locations can be used, e.g. /usr/local/OpenFOAM.

# wget -O - http://dl.openfoam.org/source/9 | tar xvz
# wget -O - http://dl.openfoam.org/third-party/9 | tar xvz

Step 1b: The files unpack to produce directories OpenFOAM-9-version-9 and ThirdParty-9-version-9, which need to be renamed as follows:

# mv OpenFOAM-9-version-9 OpenFOAM-9
# mv ThirdParty-9-version-9 ThirdParty-9

Step 2a: Load the Intel compilers. I loaded Intel Parallel Studio XE 2018 Cluster Edition:

# source /usr/local/intel/2018u3/bin/compilervars.sh intel64
# source /usr/local/intel/2018u3/mkl/bin/mklvars.sh intel64
# source /usr/local/intel/2018u3/impi/2018.3.222/bin64/mpivars.sh intel64
# source /usr/local/intel/2018u3/parallel_studio_xe_2018/bin/psxevars.sh intel64
# export MPI_ROOT=/usr/local/intel/2018u3/impi/2018.3.222/intel64

Step 2b: Create softlinks for include64 and lib64 for Intel MPI (only if required). Check first; if they are already there, it should look something like this:

# ls -l /usr/local/intel/2018u3/impi/2018.3.222/intel64/include64
lrwxrwxrwx 1 root hpccentrifyusers 7 Aug  9  2019 /usr/local/intel/2018u3/impi/2018.3.222/intel64/include64 -> include

If they are not there, create them:

# cd /usr/local/intel/2018u3/impi/2018.3.222/intel64
# ln -s include include64
# ln -s lib lib64

Step 3: Edit the OpenFOAM bashrc

# vim /usr/local/OpenFOAM/OpenFOAM-9/etc/bashrc
.....
export FOAM_INST_DIR=$(cd $(dirname ${BASH_SOURCE:-$0})/../.. && pwd -P) || \
export FOAM_INST_DIR=/usr/local/$WM_PROJECT
.....
#- Compiler location:
#    WM_COMPILER_TYPE= system | ThirdParty (OpenFOAM)
export WM_COMPILER_TYPE=system

#- Compiler:
#    WM_COMPILER = Gcc | Gcc48 ... Gcc62 | Clang | Icc
export WM_COMPILER=Icc
unset WM_COMPILER_ARCH WM_COMPILER_LIB_ARCH

#- Memory addressing:
#    On a 64bit OS this can be 32bit or 64bit
#    On a 32bit OS addressing is 32bit and this option is not used
#    WM_ARCH_OPTION = 32 | 64
export WM_ARCH_OPTION=64

#- Precision:
#    WM_PRECISION_OPTION = SP | DP | LP
export WM_PRECISION_OPTION=DP

#- Label size:
#    WM_LABEL_SIZE = 32 | 64
export WM_LABEL_SIZE=32

#- Optimised, debug, profiling:
#    WM_COMPILE_OPTION = Opt | Debug | Prof
export WM_COMPILE_OPTION=Opt

#- MPI implementation:
#    WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
#               | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI
export WM_MPLIB=INTELMPI

#- Operating System:
#    WM_OSTYPE = POSIX | ???
export WM_OSTYPE=POSIX
.....
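After editing, a quick sanity check (a sketch; the expected output follows from the settings above) is to source the file and echo the key variables:

# source /usr/local/OpenFOAM/OpenFOAM-9/etc/bashrc
# echo $WM_COMPILER $WM_MPLIB $WM_LABEL_SIZE
Icc INTELMPI 32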

Step 4a: Edit the ThirdParty-9 scotch_6.0.9 package

# cd /usr/local/OpenFOAM/ThirdParty-9/scotch_6.0.9/src

Step 4b: Copy the Makefile.inc matching your architecture and MPI into the src directory, renaming it to Makefile.inc so the scotch build picks it up. For example, we want “Makefile.inc.x86-64_pc_linux2.icc.impi”:

# cp /usr/local/OpenFOAM/ThirdParty-9/scotch_6.0.9/src/Make.inc/Makefile.inc.x86-64_pc_linux2.icc.impi /usr/local/OpenFOAM/ThirdParty-9/scotch_6.0.9/src/Makefile.inc

Step 5: Go back to the OpenFOAM source directory, source the environment, and compile:

# source /usr/local/OpenFOAM/OpenFOAM-9/etc/bashrc
# cd /usr/local/OpenFOAM/OpenFOAM-9
# ./Allwmake -j 16 | tee Allwmake.log
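Once the build completes, a quick sanity check (sketched here) is to re-source the environment, run OpenFOAM's own installation checker, and confirm a solver responds:

# source /usr/local/OpenFOAM/OpenFOAM-9/etc/bashrc
# foamInstallationTest
# icoFoam -help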

Webinar – Cloud-Native Supercomputing Powers New Data Centre Architecture

Computing power becomes the service, and the data center becomes the new computing unit, serving virtually unlimited computing resources with high performance, flexibility and security. The network, as the bridge between computing and storage resources, between data centers, and between users and the data center, is becoming the key factor for performance and security. The cloud-native supercomputing architecture is designed to combine the strengths of the supercomputer and the cloud to deliver the best performance in a modern zero-trust environment.

By attending this webinar, you will learn how to:

  • Use supercomputing technologies in the data center
  • Deliver cloud flexibility with supercomputing technologies to drive the most powerful data center
  • Provide cloud-native supercomputing services in a zero-trust environment

Date: February 23, 2022
Time: 15:00 – 16:00 SGT
Duration: 1 hour

To register: Cloud Native Supercomputing Powers New Data Center Architecture (nvidianews.com)