LAMMPS is a classical molecular dynamics code with a focus on materials modeling. It is an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. For more information on the software, do take a look at https://www.lammps.org/
You may want to use the modulefiles that come with the HPC-X installation:
export HPCX_HOME=/usr/local/hpcx-v2.15-gcc-MLNX_OFED_LINUX-5-redhat8-cuda12-gdrcopy2-nccl2.17-x86_64
module use $HPCX_HOME/modulefiles
Next, I used the following parameters, which suit my HPC environment. The default build is already double-precision. I needed MPI, OpenMP, and AVX-512 support.
# ./configure --prefix=/usr/local/fftw-3.3.10 --enable-threads --enable-openmp --enable-mpi --enable-avx512
# make && make install
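Once installed, downstream builds need to be pointed at this FFTW prefix. A minimal sketch (the example compile line and source file name are illustrative, not from a specific application):

```shell
# Point the runtime linker at the FFTW just installed.
export FFTW_HOME=/usr/local/fftw-3.3.10
export LD_LIBRARY_PATH=$FFTW_HOME/lib:$LD_LIBRARY_PATH

# Example compile line for an application using FFTW (illustrative):
#   gcc myfft.c -I$FFTW_HOME/include -L$FFTW_HOME/lib -lfftw3 -lm
```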
ORCA is a general-purpose quantum chemistry package that is free of charge for academic users. The project and download website can be found at the ORCA Forum. The current version is 5.0.4.
The prerequisites I used were OpenMPI-4.1.1 and the system GNU compiler, which is version 8.5.
Unless I have missed something, ORCA-5.0.4 has been split into 3 different packages, which you have to untar and combine together:
orca_5_0_4_linux_x86-64_openmpi411_part1
orca_5_0_4_linux_x86-64_openmpi411_part2
orca_5_0_4_linux_x86-64_openmpi411_part3
How do I untar the packages?
First, untar all the packages separately. Assuming you are untarring at /usr/local/:
$ tar -xf orca_5_0_4_linux_x86-64_openmpi411_part1.tar.xz
$ tar -xf orca_5_0_4_linux_x86-64_openmpi411_part2.tar.xz
$ tar -xf orca_5_0_4_linux_x86-64_openmpi411_part3.tar.xz
What do I do with all the untarred packages?
Copy all the untarred files into /usr/local/orca-5.0.4.
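The three part directories can be merged with plain cp. A minimal sketch, assuming the parts were untarred under /usr/local as above (run with sufficient privileges for /usr/local):

```shell
# Merge the three untarred ORCA part directories into one tree.
cd /usr/local
mkdir -p orca-5.0.4
for part in 1 2 3; do
  cp -a orca_5_0_4_linux_x86-64_openmpi411_part$part/. orca-5.0.4/
done
```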
If you are not using Environment Modules, you can consider installing it. For more information, do take a look at Installing Environment Modules on Rocky Linux 8.5. All you then need to do is load the additional modules, such as OpenMPI, as prerequisites. Alternatively, you can set the PATH and LD_LIBRARY_PATH of OpenMPI, something like this.
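For example (the OpenMPI prefix below is an assumption; substitute your actual install path):

```shell
# Hypothetical OpenMPI prefix; adjust to where OpenMPI is installed on your system.
export MPI_HOME=/usr/local/openmpi-4.1.1
export PATH=$MPI_HOME/bin:$PATH
export LD_LIBRARY_PATH=$MPI_HOME/lib:$LD_LIBRARY_PATH
```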
$ wget https://github.com/openucx/ucx/releases/download/v1.4.0/ucx-1.4.0.tar.gz
$ tar xzf ucx-1.4.0.tar.gz
$ cd ucx-1.4.0
$ ./contrib/configure-release --prefix=/usr/local/ucx-1.4.0
$ make -j8
$ make install
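If the goal is to use this UCX build with OpenMPI, OpenMPI can then be configured against it. A hedged sketch (the OpenMPI prefix is illustrative; `--with-ucx` is the standard OpenMPI configure option):

```shell
# Build OpenMPI against the UCX installed above (prefix is illustrative).
./configure --prefix=/usr/local/openmpi-4.1.1 --with-ucx=/usr/local/ucx-1.4.0
make -j8 && make install
```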
Prerequisites 3
Make sure you have installed the GNU C and GNU C++ compilers. This can be done easily using the yum package manager.
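On Rocky Linux 8.5 that would be something like the following (standard RHEL package names; run as root or via sudo):

```shell
# Install the GNU C and C++ compilers.
yum install -y gcc gcc-c++
```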
The error message indicates that the shared memory has no permission to be used. The permission of /dev/shm was found to be 755, not 777, causing the error. The issue is resolved after the permission is changed to 777. To change and verify the changes:
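A sketch of the fix (requires root):

```shell
# Open up /dev/shm and confirm the new mode.
chmod 777 /dev/shm
ls -ld /dev/shm    # should now show drwxrwxrwx
```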
NVIDIA® HPC-X® is a comprehensive software package that includes Message Passing Interface (MPI), Symmetrical Hierarchical Memory (SHMEM) and Partitioned Global Address Space (PGAS) communications libraries, and various acceleration packages. For more information, do take a look at https://developer.nvidia.com/networking/hpc-x
What is CP2K?
CP2K is a quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modeling methods such as DFT using the mixed Gaussian and plane-wave approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, …), and classical force fields (AMBER, CHARMM, …). CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core-level spectroscopy, energy minimisation, and transition-state optimization using the NEB or dimer method. For more information, do take a look at https://www.cp2k.org/
Unpack hpcx and the optimised OpenMPI libraries. For more information on installation, do take a look at Installing and Loading HPC-X.
Extract hpcx.tbz into your current working directory.
% tar -xvf hpcx.tbz
% cd hpcx
% export HPCX_HOME=$PWD
% module use $HPCX_HOME/modulefiles
% module load hpcx
The easiest way to compile is to use the CP2K toolchain:
% cd /usr/local/software/cp2k/tools/toolchain
% ./install_cp2k_toolchain.sh --no-check-certificate --with-openmpi --with-sirius=no
Compiling CP2K
.....
.....
==================== generating arch files ====================
arch files can be found in the /usr/local/software/cp2k/tools/toolchain/install/arch subdirectory
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local.ssmp
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local_static.ssmp
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local.sdbg
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local_coverage.sdbg
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local.psmp
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local.pdbg
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local_static.psmp
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local_warn.psmp
Wrote /usr/local/software/cp2k/tools/toolchain/install/arch/local_coverage.pdbg
========================== usage =========================
Done!
Now copy:
cp /usr/local/software/cp2k/tools/toolchain/install/arch/* to the cp2k/arch/ directory
To use the installed tools and libraries and cp2k version
compiled with it you will first need to execute at the prompt:
source /usr/local/software/cp2k/tools/toolchain/install/setup
To build CP2K you should change directory:
cd cp2k/
make -j 80 ARCH=local VERSION="ssmp sdbg psmp pdbg"
Follow the ending instructions exactly:
% cp /usr/local/software/cp2k/tools/toolchain/install/arch/* /usr/local/software/cp2k/arch
% source /usr/local/software/cp2k/tools/toolchain/install/setup
% cd /usr/local/software/cp2k
% make -j 32 ARCH=local VERSION="ssmp sdbg psmp pdbg"
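After the build completes, the binaries land in the exe/local subdirectory. A hedged sketch of a test run (the input file name, process count, and thread count are illustrative, not from this build):

```shell
# Run the MPI/OpenMP binary on a sample input (paths and input file are illustrative).
export PATH=/usr/local/software/cp2k/exe/local:$PATH
export OMP_NUM_THREADS=2
mpirun -np 4 cp2k.psmp -i H2O-32.inp -o H2O-32.out
```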
If you encounter an error during make like the one below, just install liblsan.
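On Rocky Linux the missing LeakSanitizer runtime can be installed from the repositories, something like (run as root or via sudo):

```shell
# Install the LeakSanitizer runtime library.
yum install -y liblsan
```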
This article is taken from Intel's “Efficient Heterogeneous Parallel Programming Using OpenMP”. In this article, we will show you how to do CPU+GPU asynchronous calculations using OpenMP.
In some cases, offloading computations to an accelerator like a GPU means that the host CPU sits idle until the offloaded computations are finished. However, using the CPU and GPU resources simultaneously can improve the performance of an application. In OpenMP® programs that take advantage of heterogeneous parallelism, the master construct can be used to exploit simultaneous CPU and GPU execution. In this article, we will show you how to do CPU+GPU asynchronous calculation using OpenMP.
.....
The Intel® oneAPI DPC++/C++ Compiler was used with the following command-line options: -O3 -Ofast -xCORE-AVX512 -mprefer-vector-width=512 -ffast-math -qopt-multiple-gather-scatter-by-shuffles -fimf-precision=low -fiopenmp -fopenmp-targets=spir64="-fp-model=precise"
.....
OpenMP provides true asynchronous, heterogeneous execution on CPU+GPU systems. It is clear from our timing results and VTune profiles that keeping the CPU and GPU busy in the OpenMP parallel region gives the best performance. We encourage you to try this approach.
ORCA is a general-purpose quantum chemistry package that is free of charge for academic users. The project and download website can be found at the ORCA Forum.
You have to register before you can participate in the forum or download ORCA-4.2.1. The current latest version of ORCA is 5.0.3. The package you might want to consider is ORCA 4.2.1, Linux, x86-64, .tar.xz Archive.
For input file usage, you may want to take a look at the ORCA 4.2.1 manual, which is included when you unpack the archive, or online at orca_manual_4_2_1.pdf (enea.it).
For example:
! B3LYP def2-SVP SP
%tddft
tda false
nroots 50
triplets true
end
%pal
nprocs 32
end
* xyz 0 1 fac_irppy3.xyz
Ir 0.00000 0.00000 0.03016
N -1.05797 1.55546 -1.09121
N 1.87606 0.13850 -1.09121
.....
.....
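To run an input like the one above in parallel, ORCA requires being invoked with the full path to the orca binary whenever %pal is used, and OpenMPI must be on PATH and LD_LIBRARY_PATH. A sketch (the input file name is hypothetical, chosen to match the example above):

```shell
# ORCA must be called with its absolute path for %pal parallel runs.
/usr/local/orca-5.0.4/orca fac_irppy3.inp > fac_irppy3.out
```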