Hewlett Packard Enterprise selected to build new supercomputer for the National Supercomputing Centre Singapore

The next generation national supercomputer for Singapore will be a green, warm water-cooled system – one of the first known deployments of such a system in a tropical environment. When operational, the supercomputer is expected to provide an aggregate of up to 10 PFLOPS of raw compute power, making it eight times more powerful than the current ASPIRE1 supercomputer. ASPIRE1, which was commissioned in 2016, has been running at near full capacity in support of local advanced research that requires high-end computing resources. The new system is the first in a series of supercomputers that will be deployed in phases from now till 2025 to expand and upgrade Singapore’s high-performance computing (HPC) capabilities for the research community here.

– National Supercomputing Centre Singapore –

Extremely Low Thermal Conductivity Material to Insulate Spacecraft

This demonstration was filmed in 2011, and yet even now in 2021 I am still fascinated with the science. Picking up a block at 2,200 degrees F with bare hands… Wow… Enjoy

For more information about the thermal tiles used on the Space Shuttle, how they work, and what they are made of, see Thermal Protection Systems. Enjoy

Intel 5G Vision: Unleash Network Modernization

https://www.intel.com/content/www/us/en/wireless-network/5g-vision-dan-rodriguez-unleash-network-modernization-video.html

5G is so much more than just delivering better broadband. It’s also about enabling service providers to deliver all sorts of vertical-market-specific applications to unlock and unleash the potential across a wide variety of industries. And with that, we expect the network to truly be transformed, be very flexible, be very server-like, and utilize all sorts of cloud technologies to unleash the potential of this wide variety of use cases.

Intel

Compiling GAMESS-v2020.2 with Intel MPI

GAMESS Download Site can be found at https://www.msg.chem.iastate.edu/GAMESS/download/dist.source.shtml

Configuring GAMESS

% tar -zxvf gamess-current.tar.gz
% cd gamess
% ./config

You will have to answer the following questions:

  • Machine Type? – I chose “linux64”
  • GAMESS directory? – I chose “/usr/local/gamess”
  • GAMESS Build Directory – I chose “/usr/local/gamess”
  • Version? [00] – I chose the default [00]
  • Choice of Fortran Compilers – I chose “ifort”
  • Version Number of ifort – I chose “18” (you can check by issuing the command ifort -V)
  • Standard Math Library – I chose “mkl”
  • Path of MKL – I chose “/usr/local/intel/2018u3/compilers_and_libraries_2018.3.222/linux/mkl”
  • Type “Proceed” next
  • Communication Library – I chose “mpi” (I’m using InfiniBand)
  • Enter MPI Library – I chose “impi”
  • Enter Location of impi – I chose “/usr/local/intel/2018u3/impi/2018.3.222”
  • Build experimental support of LibXC – I chose “no”
  • Build Beta Version of Active Space CCT3 and CCSD3A – I chose “no”
  • Build LIBCCHEM – I chose “no”
  • Build GAMESS with OpenMP thread support – I chose “yes”

Once done, you should see

Your configuration for GAMESS compilation is now in
     /usr/local/gamess/install.info
Now, please follow the directions in
     /usr/local/gamess/machines/readme.unix
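It can be worth eyeballing install.info before moving on, to confirm your answers were recorded. On my build the file is just a C-shell script of setenv lines, roughly like the excerpt below (the exact GMS_* variable names may differ between GAMESS versions):

% grep setenv /usr/local/gamess/install.info
setenv GMS_TARGET     linux64
setenv GMS_FORTRAN    ifort
setenv GMS_MATHLIB    mkl
setenv GMS_DDI_COMM   mpi
setenv GMS_MPI_LIB    impi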

Compiling ddi

Edit the DDI node sizes by editing /usr/local/gamess/ddi/compddi
Look at lines 90 and 91. You may want to edit MAXCPUS and MAXNODES to match your cluster; a sketch of those two lines is below.
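compddi is a C-shell script, and the two lines in question look roughly like this (the values here are illustrative, not GAMESS defaults; pick numbers that match your cluster):

set MAXCPUS=64       # maximum cores per node on your cluster
set MAXNODES=16      # maximum number of nodes a run may span

Once done, you can compile ddi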

% ./compddi >& compddi.log &

Compiling GAMESS

The compilation will take a while, so relax…

% ./compall >& compall.log &
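compall.log runs long, so once the build finishes, a quick generic scan for trouble (plain grep, nothing GAMESS-specific) saves scrolling:

% grep -i error compall.log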

Link the executable form of GAMESS with the command:

% ./lked gamess 01 >& lked.log &
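If the link succeeds, you should find an executable named after the version number passed to lked (01 in this case):

% ls -l gamess.01.x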

Edit the scratch directory setting in rungms

% vim rungms
set SCR=/scratch/$USER
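With SCR pointing at a writable scratch directory (create it first if needed), you can sanity-check the build with one of the bundled example inputs. exam01 ships with GAMESS; 01 is the version we linked above, and 4 CPUs is just my example count:

% mkdir -p /scratch/$USER
% ./rungms exam01 01 4 >& exam01.log &

A healthy run ends with “EXECUTION OF GAMESS TERMINATED NORMALLY” in exam01.log.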

Finding Top Processes by Highest Memory and CPU Usage in Linux

Read the article Find Top Running Processes by Highest Memory and CPU Usage in Linux. It is a quick way to view the processes consuming the most RAM and CPU.

ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head
   PID   PPID CMD                         %MEM %CPU
414699 414695 /usr/local/ansys_inc/v201/f 20.4 98.8
 30371      1 /usr/local/pbsworks/pbs_acc  0.2  1.0
 32241      1 /usr/local/pbsworks/pbs_acc  0.2  4.0
 30222      1 /usr/local/pbsworks/pbs_acc  0.2  0.6
  7191      1 /usr/local/pbsworks/dm_exec  0.1  0.8
 30595      1 /usr/local/pbsworks/pbs_acc  0.1  3.1
 30013      1 /usr/local/pbsworks/pbs_acc  0.1  0.3
 29602  29599 nginx: worker process        0.1  0.2
 29601  29599 nginx: worker process        0.1  0.3

The -o flag specifies the output format, and -e selects all processes. To sort in descending order, the option should be --sort=-%mem (note the leading minus before %mem).
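The same idea works for CPU hogs: sort on -%cpu instead, and add extra columns such as etime if you also want the process age:

ps -eo pid,ppid,cmd,etime,%mem,%cpu --sort=-%cpu | head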

Interesting.

Storage Performance Basics for Deep Learning

This is an interesting write-up from James Mauro of NVIDIA on Storage Performance Basics for Deep Learning.

“The complexity of the workloads plus the volume of data required to feed deep-learning training creates a challenging performance environment. Deep learning workloads cut across a broad array of data sources (images, binary data, etc.), imposing different disk IO load attributes, depending on the model and a myriad of parameters and variables.”

For further reading, do take a look at https://developer.nvidia.com/blog/storage-performance-basics-for-deep-learning/
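If you want a rough feel for how your own storage behaves under the kind of load the article describes, fio is a common micro-benchmarking tool (my suggestion, not something the article prescribes; point --directory at the filesystem you care about):

fio --name=randread --rw=randread --bs=128k --size=1g --numjobs=4 --directory=/scratch --group_reporting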