Truncate Log Files in Linux

If you need to shrink or empty log files on a Linux system, you can use the “truncate” command:

% truncate -s 0 logfile

where -s sets or adjusts the file size (here, a size of 0 empties the file).
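
The size does not have to be zero. As a small illustration (the file name logfile is just a placeholder), you can cut a file down to its first 10 MB and confirm the new size:

% truncate -s 10M logfile
% ls -lh logfile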

If you need to empty multiple files at once:

% truncate -s 0 /var/log/messages-*

References:

  1. How to empty (truncate) Log files in Linux

The Future of Power-Efficient Datacenters

Submer is not only obsessed with Immersion Cooling but also looks at the entire ecosystem surrounding datacenters for possible points of optimization. In this educational and informative webinar, we were joined by John Laban from the Open Compute Project Foundation (OCP).

John helped us identify all points of electricity waste in the delivery of power to datacenters and HPC installations.

VMware-NVIDIA AI-Ready Enterprise platform

NVIDIA and VMware have formed a strategic partnership to transform the data center to bring AI and modern workloads to every enterprise.

NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software, optimized, certified, and supported by NVIDIA to run on VMware vSphere with NVIDIA-Certified Systems. It includes key enabling technologies from NVIDIA for rapid deployment, management, and scaling of AI workloads in the modern hybrid cloud.

For more information, see NVIDIA AI Enterprise

Paraview and OpenGL

After you enter the command “paraview”, you may encounter errors like “Your OpenGL drivers don’t support required OpenGL features for basic rendering. Applications cannot continue. Please exit and use an older version. CONTINUE AT YOUR OWN RISK! OpenGL Vendor: Information Unavailable, OpenGL Version: Information Unavailable, OpenGL Renderer: Information Unavailable.”

The output messages show:

  • “Unable to find a valid OpenGL 3.2 or later…”
  • “failed to create offscreen window”
  • “GLEW could not be initialized”

To resolve the issue, run the following command to bypass hardware acceleration and fall back to Mesa software rendering:

% paraview --mesa
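
To check which OpenGL version your drivers actually report, one option (my suggestion, assuming glxinfo from the mesa-utils package is available) is:

% glxinfo | grep "OpenGL version"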

/bin/rm : Argument list too long

I was trying to delete the files in a folder containing more than 90,000 of them. When I ran rm, I got the error:

/bin/rm: Argument list too long

The issue is that the shell expands the wildcard into more arguments than the kernel allows on a single command line, so rm never even runs. To work around it, pipe the file list through xargs instead. Make sure you are in the directory whose files you want to remove:

% find . -name '*' | xargs rm
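
If any file names contain spaces or newlines, the plain pipe above can mis-split them. A safer variant (my suggestion, not from the original workaround) is to use null-delimited output, or to let find delete the files itself:

% find . -type f -print0 | xargs -0 rm
% find . -type f -delete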

Compiling USER-GFMD into LAMMPS-10Mar2021 with OpenMPI and GNU

Prerequisites

  • openmpi-3.1.4
  • gnu-6.5
  • m4-1.4.18
  • gmp-6.1.0
  • mpfr-3.1.4
  • mpc-1.0.3
  • isl-0.18
  • gsl-2.1
  • python-3.6.9
  • fftw-3.3.8

Download lammps-10Mar2021.tar.gz from https://download.lammps.org/tars/

Step 1: Untar LAMMPS

% tar -zxvf lammps-10Mar2021.tar.gz

Step 2: Go to $LAMMPS_HOME/src and install the packages. I will only need kspace and USER-GFMD; make pi lists the packages already installed, and make ps shows the install status of all packages.

% make yes-kspace
% make pi
% make ps

Step 3: Prepare the Green’s function molecular dynamics (GFMD) code for LAMMPS. For more information, see https://github.com/Atomistica/user-gfmd

% cd $LAMMPS_HOME
% git clone https://github.com/Atomistica/user-gfmd.git

You should see a directory called user-gfmd in your $LAMMPS_HOME. Rename it to uppercase

% mv user-gfmd USER-GFMD

Copy USER-GFMD into $LAMMPS_HOME/src and make the package:

% cp -R USER-GFMD src/
% make yes-user-gfmd

Step 4: Edit the Makefile.g++_openmpi

% vim $LAMMPS_HOME/src/MAKE/OPTIONS/Makefile.g++_openmpi
FFT_INC =       -DFFT_FFTW3 -I/usr/local/fftw-3.3.8-gcc6/include
FFT_PATH =      -L/usr/local/fftw-3.3.8-gcc6/lib
FFT_LIB =        -lfftw3
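
Before compiling, it is worth confirming that FFTW really is installed at the prefix referenced above; the path below matches my installation and should be adjusted to yours:

% ls /usr/local/fftw-3.3.8-gcc6/lib/libfftw3*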

The -DFFT_FFTW3 flag is important; without it, you will get errors like:

mpicxx -std=c++11 -g -O3 -DLAMMPS_GZIP -DGFMD_FFTW3 -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -I/usr/local/fftw-3.3.8-gcc6/include -c ../nonperiodic_stiffness.cpp
../nonperiodic_stiffness.cpp:199:4: error: #error NonperiodicStiffnessKernel requires FFTW3
#error NonperiodicStiffnessKernel requires FFTW3
^~~~~
make[1]: *** [nonperiodic_stiffness.o] Error 1

Step 5: Compile LAMMPS

% make clean-all
% make g++_openmpi

You should now have a binary called lmp_g++_openmpi. Create a soft link:

% ln -s lmp_g++_openmpi lammps
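
As a quick sanity check of the new binary, you can launch it through MPI; the rank count and the input script in.test below are placeholders, not part of the original build steps:

% mpirun -np 4 ./lammps -in in.test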

Make and Run the USER-GFMD Unit Tests

% cd src/USER-GFMD/
% make unittests
% ./unittests
[==========] Running 23 tests from 5 test cases.
[----------] Global test environment set-up.
[----------] 5 tests from CrystalSurfaceTest
[ RUN      ] CrystalSurfaceTest.fcc100
[       OK ] CrystalSurfaceTest.fcc100 (0 ms)
[ RUN      ] CrystalSurfaceTest.fcc100_supercell
[       OK ] CrystalSurfaceTest.fcc100_supercell (11 ms)
[ RUN      ] CrystalSurfaceTest.reverse_indices
[       OK ] CrystalSurfaceTest.reverse_indices (0 ms)
[ RUN      ] CrystalSurfaceTest.fcc100_3nn
[       OK ] CrystalSurfaceTest.fcc100_3nn (0 ms)
[ RUN      ] CrystalSurfaceTest.fcc100_neighbor_shells
[       OK ] CrystalSurfaceTest.fcc100_neighbor_shells (0 ms)
[----------] 5 tests from CrystalSurfaceTest (12 ms total)
.....
.....
.....

Run Tests

% cd src/USER-GFMD/tests
% sh run_tests.sh ../../lmp_g++_openmpi

I saw some warnings, but they did not look critical; the tests ran without failures:

.....
.....
TEST_Hertz_sc100_128x128_a0_1.3
eval.py:83: RuntimeWarning: invalid value encountered in sqrt
  pa_xy = np.where(r_xy<a, p0*np.sqrt(1-(r_xy/a)**2), np.zeros_like(r_xy))
.ok.
TEST_restart
.ok.
Ran 20 tests; 0 failures, 20 successes.

The SPEChpc 2021 Benchmark suite

The full writeup can be found at REAL-WORLD HPC GETS THE BENCHMARK IT DESERVES

While nothing can beat the notoriety of the long-standing LINPACK benchmark, the metric by which supercomputer performance is gauged, there is ample room for a more practical measure. It might not garner the same mainstream headlines as the Top 500 list of the world’s largest systems, but a new benchmark may fill in the gaps between real-world versus theoretical peak compute performance.

The reason this new high performance computing (HPC) benchmark can come out of the gate with immediate legitimacy is because it is from the Standard Performance Evaluation Corporation (SPEC) organization, which has been delivering system benchmark suites since the late 1980s. And the reason it is big news today is because the time is right for a more functional, real-world measure, especially one that can adequately address the range of architectures and changes in HPC (from various accelerators to new steps toward mixed precision, for example).

…..
…..
…..

The SPEChpc 2021 suite includes a broad swath of science and engineering codes that are representative (and portable) across much of what we see in HPC.

– A tested set of benchmarks with performance measurement and validation built into the test harness.
– Benchmarks include full and mini applications covering a wide range of scientific domains and Fortran/C/C++ programming languages.
– Comprehensive support for multiple programming models, including MPI, MPI+OpenACC, MPI+OpenMP, and MPI+OpenMP with target offload.
– Support for most major compilers, MPI libraries, and different flavors of Linux operating systems.
– Four suites, Tiny, Small, Medium, and Large, with increasing workload sizes, allow for appropriate evaluation of different-sized HPC systems, ranging from a single node to many thousands of nodes (a sample invocation follows this list).
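
To get a feel for how the suite is driven, here is a sketch of an invocation of SPEC's runhpc harness; the config file name and rank count are assumptions of mine, so consult the SPEChpc 2021 documentation for the exact flags:

% runhpc --config=my_gnu_openmpi.cfg --tune=base --ranks=64 tiny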

REAL-WORLD HPC GETS THE BENCHMARK IT DESERVES at The Next Platform

For more information, see https://www.spec.org/hpc2021/