Why IBM is suing GlobalFoundries over Chip Roadmap failures

An interesting article on the delicate relationship between chip designs and foundries.

The tight linkage between chip designs and chip manufacturing processes has caused its share of havoc in the IT sector, and it is getting worse as Moore’s Law has slowed and Dennard scaling died a decade ago. Wringing more performance out of devices while trying to keep a lid on power draw is causing loads of trouble as chip makers try to advance the state of the art. When there are failures to meet chip process targets set by the foundries of the world, chips drive off the roadmap page and smash on the floor.

Timothy Prickett Morgan (www.TheNextPlatform.com)

Webinar – Advanced code optimization for 3rd Gen Intel® Xeon® Scalable Processors.

An interesting webinar which you can sign up for free of charge. The registration site can be found at https://techdecoded.intel.io/webinar-registration/upcoming-webinars/

If the data center is part of your development wheelhouse, you’re likely familiar with a little CPU called “Xeon”. This webinar unpacks the latest methodologies of tuning complex AI and HPC workloads for the third generation Xeon platform (formerly code-named Ice Lake).

Delivering up to 40 cores per processor, 3rd Gen Intel® Xeon® Scalable processors are designed for compute-intense, data-centric workloads spanning the cloud to the network and the edge.

In this session, Intel engineer Vladimir Tsymbal will show you how to optimize your AI and HPC applications and solutions to unlock the full spectrum of these processors’ power. You’ll learn:

  • The top-down tuning methodology that uses Xeon hardware-performance metrics to identify issues including critical bottlenecks caused by data locality, CPU interconnect bandwidth, cache limitations, instruction execution stalls, and I/O interfaces
  • How a high-level HPC Characterization Analysis helps you find inefficient parallel tasks (see the command-line sketch after this list)
  • How to optimize software that uses the latest Intel® DL Boost VNNI instructions
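
For context, the HPC Characterization Analysis mentioned above presumably refers to the HPC Performance Characterization analysis in Intel® VTune™ Profiler, which can also be run from the command line. A minimal sketch, assuming vtune is on your PATH and ./my_app stands in for your own binary:

% vtune -collect hpc-performance -result-dir r001hpc -- ./my_app
% vtune -report summary -result-dir r001hpc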

Finding physical CPUs, cores and logical CPUs

Number of Active Physical Processors

% grep physical.id /proc/cpuinfo | sort -u | wc -l
8

Number of cores per CPU

% grep cpu.cores /proc/cpuinfo | sort -u
cpu cores       : 26

Therefore the total number of cores is
8×26 = 208 cores

Number of Logical Processors

This is the number of CPUs seen by Linux. Since the server has Hyper-Threading switched on, it is twice the number of physical cores (2 × 208 = 416).

% grep processor /proc/cpuinfo | wc -l
416
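
As a cross-check (not part of the original write-up), lscpu reports the same topology in one command: Socket(s) is the number of physical processors, Core(s) per socket is the number of cores per CPU, and CPU(s) is the number of logical processors. The second command below counts unique core/socket pairs and should match the 208 physical cores calculated above.

% lscpu | egrep '^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)'
% lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l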

References:

  1. How to find the number of physical cpus, cpu cores, and logical cpus

Understanding Load Average in Linux

Taken from the Red Hat article “What is the relation between I/O wait and load average?” I have learned quite a bit from this article.

Linux, unlike traditional UNIX operating systems, computes its load average as the average number of runnable or running processes (R state), and the number of processes in uninterruptible sleep (D state), over the specified interval. On UNIX systems, only the runnable or running processes are taken into account for the load average calculation.

On Linux the load average is a measurement of the amount of “work” being done by the machine (without being specific as to what that work is). This “work” could reflect a CPU intensive application (compiling a program or encrypting a file), or something I/O intensive (copying a file from disk to disk, or doing a database full table scan), or a combination of the two.

The article explains how you can determine whether a high load average is the result of processes in the running state or in the uninterruptible state.

I like this script that was given in the knowledge base article. It runs uptime every 5 seconds and uses awk to count the processes in the R (running) and D (blocked) states from ps -efl, printing running, blocked and running+blocked.

[user@node1 ~]$ while true; do echo; uptime; ps -efl | awk 'BEGIN {running = 0; blocked = 0} $2 ~ /R/ {running++}; $2 ~ /D/ {blocked++} END {print "Number of running/blocked/running+blocked processes: "running"/"blocked"/"running+blocked}'; sleep 5; done

 23:45:52 up 52 days,  7:06, 22 users,  load average: 1.40, 1.26, 1.02
Number of running/blocked/running+blocked processes: 3/1/4

 23:45:57 up 52 days,  7:06, 22 users,  load average: 1.45, 1.27, 1.02
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:02 up 52 days,  7:06, 22 users,  load average: 1.41, 1.27, 1.02
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:07 up 52 days,  7:07, 22 users,  load average: 1.46, 1.28, 1.03
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:12 up 52 days,  7:07, 22 users,  load average: 1.42, 1.27, 1.03
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:17 up 52 days,  7:07, 22 users,  load average: 1.55, 1.30, 1.04
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:22 up 52 days,  7:07, 22 users,  load average: 1.51, 1.30, 1.04
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:27 up 52 days,  7:07, 22 users,  load average: 1.55, 1.31, 1.05
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:32 up 52 days,  7:07, 22 users,  load average: 1.62, 1.33, 1.06
Number of running/blocked/running+blocked processes: 2/1/3

 23:46:38 up 52 days,  7:07, 22 users,  load average: 1.81, 1.38, 1.07
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:43 up 52 days,  7:07, 22 users,  load average: 1.66, 1.35, 1.07
Number of running/blocked/running+blocked processes: 1/0/1

 23:46:48 up 52 days,  7:07, 22 users,  load average: 1.53, 1.33, 1.06
Number of running/blocked/running+blocked processes: 1/0/1

Another useful check is the typical top output when the load average is high (filter out the idle/sleeping tasks by pressing “i”). In the example below, the high load average is caused by lots of sendmail tasks in the D state; they may be waiting either on I/O or on the network.

top - 13:23:21 up 329 days,  8:35,  0 users,  load average: 50.13, 13.22, 6.27
Tasks: 437 total,   1 running, 435 sleeping,   0 stopped,   1 zombie
Cpu(s):  0.1%us,  1.5%sy,  0.0%ni, 93.6%id,  4.5%wa,  0.1%hi,  0.2%si,  0.0%st
Mem:  34970576k total, 24700568k used, 10270008k free,  1166628k buffers
Swap:  2096440k total,        0k used,  2096440k free, 11233868k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
11975 root      15   0 13036 1356  820 R  0.7  0.0   0:00.66 top                
15915 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15918 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15920 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15921 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15922 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15923 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15924 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15926 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15928 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15929 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15930 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15931 root      18   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
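
A quick complementary check (my own one-liner, not from the Red Hat article) is to count which commands are currently sitting in the D state, using only standard ps options:

[user@node1 ~]$ ps -eo state,comm | awk '$1 == "D"' | sort | uniq -c | sort -rn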

References:

  1. What is the relation between I/O wait and load average?

Singapore using the power of HPC for even better climate projections

Taken from the article “Singapore developing its own ‘crystal ball’ for better climate projections”

Global temperatures and sea levels are rising, certain extreme weather events could intensify, and rainfall patterns could become more erratic. But at a finer resolution, many questions remain about how these changes would manifest in Singapore and South-east Asia. These are questions that scientists at the Centre for Climate Research Singapore (CCRS) – a division under the National Environment Agency’s Meteorological Service Singapore – are looking into. CCRS is working with the National Supercomputing Centre to downscale these models to produce grid cells spanning from about 2km to 8km.

Creating a Virtual Environment with Python using venv

Virtual Environments and Packages

Sometimes there is a need to create a virtual environment: a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages. This is useful when an application needs a specific version of a Python library. Here comes venv.

Creating Virtual Environments

% module load python/3/intel/2020
% python3 -m venv tutorial-env

This will create a tutorial-env directory if it doesn’t exist, and also create directories inside it containing a copy of the Python interpreter, the standard library, and various supporting files.
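
A quick way to see what was created (names follow the standard venv layout on Linux):

% ls tutorial-env        # expect bin/, include/, lib/ and a pyvenv.cfg file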

% source tutorial-env/bin/activate

Activating the virtual environment will change your shell’s prompt to show which virtual environment you’re using. For example:

(tutorial-env) [user1@hpc-node1 bin]$ python
Python 3.7.4 (default, Nov 22 2019, 21:31:39)
[GCC 7.3.0] :: Intel(R) Corporation on linux
Type "help", "copyright", "credits" or "license" for more information.
Intel(R) Distribution for Python is brought to you by Intel Corporation.
Please check out: https://software.intel.com/en-us/python-distibution
>>> import sys
>>> sys.path
['', '/usr/local/intelpython3/lib/python37.zip', ....., '.....', '/myhome/melvin/python_project/tutorial-env/lib/python3.7/site-packages']
>>>
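
Back in the shell (exit() leaves the interpreter), you can also verify that python now resolves to the environment’s own copy rather than the system or module one:

>>> exit()
(tutorial-env) [user1@hpc-node1 bin]$ which python     # should point to .../tutorial-env/bin/python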

Managing Packages with pip

You can install, upgrade, and remove packages using a program called pip. By default, pip will install packages from the Python Package Index (http://pypi.org).

You can install the latest version of a package by specifying a package’s name:

(tutorial-env) [user1@hpc-node1 bin]$ python -m pip install novas
Collecting novas
  Downloading https://files.pythonhosted.org/packages/42/95/a05bc35cb119925e10f9faa8a2bd17020b0a585744a38921a709acdd9a14/novas-3.1.1.5.tar.gz (135kB)
    100% |████████████████████████████████| 143kB 4.6MB/s
Installing collected packages: novas
  Running setup.py install for novas ... done
Successfully installed novas-3.1.1.5

You can also install a specific version of a package by using the command

(tutorial-env) [user1@hpc-node1 bin]$ python -m pip install requests==2.6.0
Collecting requests==2.6.0
  Downloading https://files.pythonhosted.org/packages/73/63/b0729be549494a3e31316437053bc4e0a8bb71a07a6ee6059434b8f1cd5f/requests-2.6.0-py2.py3-none-any.whl (469kB)
    100% |████████████████████████████████| 471kB 4.3MB/s
Installing collected packages: requests
Successfully installed requests-2.6.0

You can also upgrade a package to the latest available version by using the command

(tutorial-env) [user1@hpc-node1 bin]$ python -m pip install --upgrade requests
Collecting requests
  Downloading https://files.pythonhosted.org/packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl (61kB)
    100% |████████████████████████████████| 61kB 3.4MB/s
Collecting chardet<5,>=3.0.2 (from requests)
  Downloading https://files.pythonhosted.org/packages/19/c7/fa589626997dd07bd87d9269342ccb74b1720384a4d739a1872bd84fbe68/chardet-4.0.0-py2.py3-none-any.whl (178kB)
    100% |████████████████████████████████| 184kB 5.3MB/s
Collecting urllib3<1.27,>=1.21.1 (from requests)
  Downloading https://files.pythonhosted.org/packages/0c/cd/1e2ec680ec7b09846dc6e605f5a7709dfb9d7128e51a026e7154e18a234e/urllib3-1.26.5-py2.py3-none-any.whl (138kB)
    100% |████████████████████████████████| 143kB 5.9MB/s
Collecting idna<3,>=2.5 (from requests)
  Downloading https://files.pythonhosted.org/packages/a2/38/928ddce2273eaa564f6f50de919327bf3a00f091b5baba8dfa9460f3a8a8/idna-2.10-py2.py3-none-any.whl (58kB)
    100% |████████████████████████████████| 61kB 4.3MB/s
Collecting certifi>=2017.4.17 (from requests)
  Downloading https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl (145kB)
    100% |████████████████████████████████| 153kB 5.1MB/s
Installing collected packages: chardet, urllib3, idna, certifi, requests
  Found existing installation: requests 2.6.0
    Uninstalling requests-2.6.0:
      Successfully uninstalled requests-2.6.0
Successfully installed certifi-2021.5.30 chardet-4.0.0 idna-2.10 requests-2.25.1 urllib3-1.26.5

You can display all the packages installed in the virtual environment by using pip list

(tutorial-env) [user1@hpc-node1 bin]$ pip list
Package    Version
---------- ---------
certifi    2021.5.30
chardet    4.0.0
idna       2.10
novas      3.1.1.5
pip        19.0.3
requests   2.25.1
setuptools 40.8.0
urllib3    1.26.5

pip freeze can be used to produce a similar list of the installed packages, formatted in an output file that pip install can read from

(tutorial-env) [user1@hpc-node1 bin]$ pip freeze > requirements.txt
(tutorial-env) [user1@hpc-node1 bin]$ cat requirements.txt
certifi==2021.5.30
chardet==4.0.0
idna==2.10
novas==3.1.1.5
requests==2.25.1
urllib3==1.26.5

You can then install all the necessary packages with pip install -r requirements.txt

(tutorial-env) [user1@hpc-node1 bin]$ python -m pip install -r requirements.txt
Requirement already satisfied: certifi==2021.5.30 in /myhome/melvin/python_project/tutorial-env/lib/python3.7/site-packages (from -r requirements.txt (line 1)) (2021.5.30)
Requirement already satisfied: chardet==4.0.0 in /myhome/melvin/python_project/tutorial-env/lib/python3.7/site-packages (from -r requirements.txt (line 2)) (4.0.0)
Requirement already satisfied: idna==2.10 in /myhome/melvin/python_project/tutorial-env/lib/python3.7/site-packages (from -r requirements.txt (line 3)) (2.10)
Requirement already satisfied: novas==3.1.1.5 in /myhome/melvin/python_project/tutorial-env/lib/python3.7/site-packages (from -r requirements.txt (line 4)) (3.1.1.5)
Requirement already satisfied: requests==2.25.1 in /myhome/melvin/python_project/tutorial-env/lib/python3.7/site-packages (from -r requirements.txt (line 5)) (2.25.1)
Requirement already satisfied: urllib3==1.26.5 in /myhome/melvin/python_project/tutorial-env/lib/python3.7/site-packages (from -r requirements.txt (line 6)) (1.26.5)
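
To reproduce the same environment elsewhere (for example, in a fresh directory or on another login node), the usual pattern is to create a new venv and point pip install -r at the requirements file. A minimal sketch, with newenv as a placeholder name:

% deactivate                                  # leave the current virtual environment
% python3 -m venv newenv                      # create a fresh environment
% source newenv/bin/activate
% python -m pip install -r requirements.txt   # reinstall the pinned packages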

References:

  1. Virtual Environments and Packages (The Python Tutorial)