Another way to calculate shared memory swapping

The ipcs utility reports information on shared memory utilisation, which can be useful for analysing system performance. Let’s say you want to measure how much shared memory has been swapped.

% ipcs -mu
------ Shared Memory Status --------
segments allocated 55
pages allocated 6655333
pages resident  5661034
pages swapped   947522
Swap performance: 0 attempts     0 successes

where
-m is “information about active shared memory segments”
-u is “Show status summary”

You also need the system page size:

% getconf PAGESIZE
4096

To express this in MB, multiply the swapped page count by the page size and convert:

% echo "$((947522*4096/1024/1024)) MB"
3701 MB
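The two steps above can be wrapped into a small helper. This is only a sketch, not from the original post: the function name swap_mb is made up, and the commented one-liner assumes a GNU/Linux ipcs whose summary contains a “pages swapped” line as shown above.

```shell
# swap_mb PAGES PAGESIZE: convert a swapped-page count to MB
# using integer arithmetic (PAGES * PAGESIZE / 1024 / 1024).
swap_mb() {
    echo "$(( $1 * $2 / 1024 / 1024 )) MB"
}

# Using the figures from the ipcs -mu output above:
swap_mb 947522 4096    # prints "3701 MB"

# On a live system you would feed it real values, e.g. (hypothetical):
# swap_mb "$(ipcs -mu | awk '/pages swapped/ {print $3}')" "$(getconf PAGESIZE)"
```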

Cultural Data Sculpting

SPEAKER: Sarah Kenderdine | Digital Museology, Digital Humanities Institute |
Lead: Laboratory for Experimental Museology (eM+) | Director: ArtLab | EPFL Lausanne Switzerland

TITLE: Cultural data sculpting

DATE&TIME: Thursday, June 11, 2020 at 4:00 PM CEST

ABSTRACT:
In 1889 the curator G. B. Goode of the Smithsonian Institution delivered an anticipatory lecture entitled ‘The Future of the Museum’, in which he said this future museum would stand ‘side by side with the library and the laboratory.’
Convergence in collecting organisations propelled by the liquidity of digital data now sees them reconciled as information providers in a networked world.
The media theorist Lev Manovich described this world-order as “database logic,” whereby users transform the physical assets of cultural organisations into digital assets to be uploaded, downloaded, visualized and shared, treating institutions not as storehouses of physical objects, but rather as datasets to be manipulated. This presentation explores how such a mechanistic description can be replaced by ways in which computation has become ‘experiential, spatial and materialized; embedded and embodied’. It was at the birth of the Information Age in the 1950s that the prominent designer Gyorgy Kepes of MIT said “information abundance” should be a “landscape of the senses” that organizes both perception and practice. This ‘felt order’, he said, should be “a source of beauty, data transformed from its measured quantities and recreated as sensed forms exhibiting properties of harmony, rhythm and proportion.”

Archives call for the creation of new prosthetic architectures for the production and sharing of archival resources. At the intersection of immersive visualisation technologies, visual analytics, aesthetics and cultural (big) data, this presentation explores digital cultural heritage experiences of diverse archives from scientific, artistic and humanistic perspectives.
Exploiting a series of experimental and embodied platforms, the discussion argues for a reformulation of engagement with digital archives at the intersection of the tangible and intangible, and as a convergence across domains. The performative interfaces and repertoires described demonstrate opportunities to reformulate narrative in a digital context and the ways they support personal affective engagement with cultural memory.

Addressing the Challenges in Higher Ed and Research

Date: Wednesday, June 17, 2020
Time: 11:00am – 12:00pm SGT
Duration: 1 hour

Universities face an unprecedented challenge: enabling staff to work from home, supporting remote teaching and learning, and still providing high-value learning to students and cutting-edge tools and services to faculty and researchers. While remote learning is not a new phenomenon, providing quality service at scale is now a requirement, along with a new set of challenges that span user experience, mobility, and effective management of a distributed deployment.

Solutions that enable remote learning and research, such as NVIDIA virtual GPU (vGPU) technology, enable you to meet these new requirements across various workloads with cost-effective solutions for existing on-premise infrastructure assets and in the cloud.

By attending this webinar, you’ll learn:
How NVIDIA vGPU technology solutions enable remote work and learning
How vGPU solutions are helping universities, across both education and research
How to get started with vGPU and vComputeServer to accelerate VDI and computational workloads in your institution

ISC High Performance 2020 Digital (Free Registration)

Welcome to ISC 2020 Digital, the inaugural online event that focuses on bringing the most critical developments and trends in high performance computing, machine learning and data analytics for the benefit of the global HPC community.

As the largest online HPC event this year, we anticipate registration numbers to match our live Frankfurt event, which drew 3,700 registrations.

The event takes place over four days, from Monday, June 22 – Thursday, June 25, and is free of registration fees. All talks are exclusively available for registered participants for 14 days.

https://www.isc-hpc.com/

Using strace to detect df hanging issues on NFS

strace is a wonderful tool for tracing system calls and signals.

I was experiencing hangs whenever I ran “df”, and I was curious which file system was causing the issue.

% strace df
.....
.....
stat("/run/user/1304561586", {st_mode=S_IFDIR|0700, st_size=40, ...}) = 0
stat("/run/user/17132623", {st_mode=S_IFDIR|0700, st_size=40, ...}) = 0
stat("/run/user/17149581", {st_mode=S_IFDIR|0700, st_size=40, ...}) = 0
stat("/run/user/1304565184", {st_mode=S_IFDIR|0700, st_size=60, ...}) = 0
stat("/scratch",

The trace stops at the stat() call on /scratch, which never returns. It is obvious that the /scratch file system is the one hanging, most likely an unresponsive NFS mount.
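Once you suspect a particular mount, you can probe it without hanging your shell by wrapping the stat in a timeout. This is a sketch: probe_mount is a made-up helper name, and it assumes GNU coreutils timeout and stat are available.

```shell
# probe_mount PATH: succeed only if a filesystem stat on PATH
# completes within 5 seconds; a stale NFS mount will time out.
probe_mount() {
    timeout 5 stat -f "$1" > /dev/null 2>&1
}

probe_mount /tmp && echo "/tmp is responsive"
# probe_mount /scratch || echo "/scratch is not responding"
```

When one mount is known to hang, GNU df can also skip a whole filesystem type with, for example, df -x nfs.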

High Performance Computing: The Power of Language

SPEAKER: Alan Edelman, Professor of Applied Mathematics at Massachusetts Institute of Technology | Member of MIT’s Computer Science & AI Laboratory | the Leader of the JuliaLab and Applied Computing Group at MIT | cofounder of Julia Computing Inc.

DATE&TIME: Thursday, June 4, 2020 at 4:00 PM CEST

 

ABSTRACT:
Julia is now being used for high performance computing for the important problems of today including climate change and Covid-19. We describe how language is making all the difference.

 

SPEAKER’S BIO:
Alan Edelman is a Professor of Applied Mathematics at MIT, is a member of MIT’s Computer Science & AI Laboratory, and is the leader of the JuliaLab and Applied Computing Group at MIT. He is also a cofounder of Julia Computing Inc. He works on numerical linear algebra, random matrix theory and parallel computing. He is a fellow of SIAM, IEEE, and the American Mathematical Society. He has won numerous prizes for his research, most recently in 2019 the Fernbach Prize from IEEE for innovation in high performance computing.

 

 

PLEASE REGISTER AT:
https://supercomputingfrontiers.eu/2020/tickets/neijis7eekieshee/

Release of MVAPICH2 2.3.4 GA and OSU Micro-Benchmarks (OMB) 5.6.3

The MVAPICH team is pleased to announce the release of MVAPICH2 2.3.4 GA and OSU Micro-Benchmarks (OMB) 5.6.3.

Features and enhancements for MVAPICH2 2.3.4 GA are as follows:

* Features and Enhancements (since 2.3.3):

  • Improved performance for small message collective operations
  • Improved performance for data transfers from/to non-contiguous buffers used by user-defined datatypes
  • Add custom API to identify if MVAPICH2 has in-built CUDA support
    • New API ‘MPIX_Query_cuda_support’ defined in mpi-ext.h
    • New macro ‘MPIX_CUDA_AWARE_SUPPORT’ defined in mpi-ext.h
  • Add support for MPI_REAL16 based reduction operations for Fortran programs
    • MPI_SUM, MPI_MAX, MPI_MIN, MPI_LAND, MPI_LOR, MPI_MINLOC, and MPI_MAXLOC
    • Thanks to Greg Lee@LLNL for the report and reproducer
    • Thanks to Hui Zhou@ANL for the initial patch
  • Add support to intercept aligned_alloc in ptmalloc
    • Thanks to Ye Luo @ANL for the report and the reproducer
  • Add support to enable fork safety in MVAPICH2 using environment variable
    • “MV2_SUPPORT_FORK_SAFETY”
  • Add support for user to modify QKEY using environment variable
    • “MV2_DEFAULT_QKEY”
  • Add multiple MPI_T PVARs and CVARs for point-to-point and collective operations
  • Enhanced point-to-point and collective tuning for AMD EPYC Rome, Frontera@TACC, Longhorn@TACC, Mayer@Sandia, Pitzer@OSC, Catalyst@EPCC, Summit@ORNL, Lassen@LLNL, and Sierra@LLNL systems
  • Give preference to CMA if LiMIC2 and CMA are enabled at the same time
  • Move -lmpi, -lmpicxx, and -lmpifort before other LDFLAGS in compiler wrappers like mpicc, mpicxx, mpif77, and mpif90
  • Allow passing flags to nvcc compiler through environment variable NVCCFLAGS
  • Display more meaningful error messages for InfiniBand asynchronous events
  • Add support for AMD Optimizing C/C++ (AOCC) compiler v2.1.0
  • Add support for GCC compiler v10.1.0
    • Requires setting FFLAGS=-fallow-argument-mismatch at configure time
  • Update to hwloc v2.2.0
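The new runtime knobs above are ordinary environment variables. A hedged sketch of how they might be set for a job follows; the QKEY value and the application name ./app are placeholders, and mpirun stands in for whatever launcher your site uses (consult the MVAPICH2 user guide for exact launcher syntax).

```shell
# Enable fork safety and override the default QKEY for this run.
# These variables are read by the MVAPICH2 library at startup.
export MV2_SUPPORT_FORK_SAFETY=1
export MV2_DEFAULT_QKEY=0x1234    # placeholder value

# Launch as usual (placeholder launcher and application):
# mpirun -np 4 ./app
```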

 

* Bug Fixes (since 2.3.3):

  • Fix compilation issue with IBM XLC++ compilers and CUDA 10.2
  • Fix hangs with MPI_Get operations in UD-Hybrid mode
  • Initialize MPI3 data structures correctly to avoid random hangs caused by garbage values
  • Fix corner case with LiMIC2 and MPI3 one-sided operations
  • Add proper fallback and warning message when shared RMA window cannot be created
  • Fix race condition in calling mv2_get_path_rec_sl by introducing mutex
    • Thanks to Alexander Melnikov for reporting the issue and providing the patch
  • Fix mapping generation for the cases where hwloc returns zero on non-numa machines
    • Thanks to Honggang Li @Red Hat for the report and initial patch
  • Fix issues with InfiniBand registration cache and PGI20 compiler
  • Fix warnings raised by Coverity scans
    • Thanks to Honggang Li @Red Hat for the report
  • Fix bad baseptr address returned from MPI_Win_shared_query
    • Thanks to Adam Moody@LLNL for the report and discussion
  • Fix issues with HCA selection logic in heterogeneous multi-rail scenarios
  • Fix spelling mistake in error message
    • Thanks to Bill Long and Krishna Kandalla @Cray/HPE for the report
  • Fix compilation warnings and memory leaks

 

New features, enhancements, and bug fixes for OSU Micro-Benchmarks (OMB) 5.6.3 are listed here:

* New Features & Enhancements (since v5.6.2)

  • Add support for benchmarking applications that use ‘fork’ system call
    • osu_latency_mp
 

* Bug Fixes (since v5.6.2)

  • Fix compilation issue with IBM XLC++ compilers and CUDA 10.2
  • Allow passing flags to nvcc compiler
  • Fix issues in window creation with host-to-device and device-to-host transfers for one-sided tests

For downloading MVAPICH2 2.3.4 GA, OMB 5.6.3, and associated user guides, quick start guide, and accessing the SVN, please visit the following URL:

http://mvapich.cse.ohio-state.edu

Using Find and Tar Together to Backup and Archive

Point 1: If you wish to find files in a single folder and pack them into a gzip-compressed archive, you can use a one-liner to do it.

% find -maxdepth 1 -name '*.sh' | tar czf script.tgz -T -

“-maxdepth 1” restricts find to the current directory, i.e. it does not descend into subdirectories.

“-T -” causes tar to read its list of files from a file rather than from the command line; here the “-” means standard input.

You should end up with a file named script.tgz.

Point 2: If you wish to find files in a single folder and pack them into a bzip2-compressed archive instead, use the j flag (and an extension that reflects bzip2):

% find -maxdepth 1 -name '*.sh' | tar cjf script.tar.bz2 -T -
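Here is a self-contained walk-through of Point 1 in a scratch directory; the file names are invented for illustration.

```shell
# Set up a throwaway directory with two shell scripts and a decoy file.
tmp=$(mktemp -d)
cd "$tmp"
touch a.sh b.sh notes.txt

# Archive only the *.sh files found at depth 1 (no recursion).
find . -maxdepth 1 -name '*.sh' | tar czf script.tgz -T -

# Verify the archive contents: lists ./a.sh and ./b.sh, but not notes.txt.
tar tzf script.tgz
```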

The Progress and the Future of AI

I really enjoyed the presentation by Dr. Goh Eng Lim on the future of AI. Really absorbing.

Synopsis:
Dr. Eng Lim Goh explains how AI’s predictive technology is being leveraged for HPE customers today. HPE is partnering with DZNE to use Memory-Driven Computing in the search for a cure for Alzheimer’s, has used its supercomputing power to win at poker, and has created a commercial off-the-shelf computer system to work in space. Also, Emily Kennedy from Marinus Analytics joins Dr. Goh to discuss fighting human trafficking using AI technologies and becoming a global company with the help of HPE. We are committed to helping accelerate what is next for your enterprise.