The MLPerf Benchmark Is Good for AI

In just about any situation where you are making capital investments in equipment, you are worried about three things: performance, price/performance, and total cost of ownership. Without some sort of benchmark on which to gauge performance, and without some sense of relative pricing, it is impossible to calculate total cost of ownership, and therefore impossible to figure out what to invest the budget in.

This is why the MLPerf benchmark suite is so important. MLPerf was created only three and a half years ago by researchers and engineers from Baidu, Google, Harvard University, Stanford University, and the University of California Berkeley and it is now administered by the MLCommons consortium, formed in December 2020. Very quickly, it has become a key suite of tests that hardware and software vendors use to demonstrate the performance of their AI systems and that end user customers depend on to help them make architectural choices for their AI systems.

Next Platform “Why the MLPerf Benchmark is good for AI, and good for you.”

The MLPerf site can be found at https://mlcommons.org/en/

NVIDIA Special Address at SIGGRAPH 2021

NVIDIA and SIGGRAPH share a long history of innovation and discovery. Over the last 25 years our community has seen giant leaps forward, driven by brilliant minds and curious explorers. We are now upon the opening moments of an AI-powered revolution in computer graphics with massive advancements in rendering, AI, simulation, and compute technologies across every industry. With open standards and connected ecosystems, we are on the cusp of achieving a new way to interact and exist with graphics in shared virtual worlds.

NVIDIA Special Address | MWC Barcelona 2021

In a special address at MWC Barcelona 2021, NVIDIA announced its partnership with Google Cloud to create the industry’s first AI-on-5G open innovation lab that will speed AI application development for 5G network operators.

Additional announcements included:

● Extending the 5G ecosystem with Arm CPU cores on NVIDIA BlueField-3 DPUs
● Launching NVIDIA CloudXR 3.0 with bidirectional audio for remote collaboration

Address Blockchain’s Biggest Problem with Supercomputing

Producing digital coins is not environmentally friendly, to say the least. Bitcoin mining – one of the best-known implementations of blockchain – consumes around 110 terawatt-hours (TWh) per year, which is more than the annual consumption of countries such as Sweden or Argentina.

The project involves running open-source simulations to study how the speed of transactions on the blockchain could be increased using various techniques, such as sharding.

Sharding implies splitting a blockchain network into smaller partitions called ‘shards’ that work in parallel to increase its transactional throughput. In other words, it’s like spreading out the workload of a network to allow more transactions to be processed, a technique similar to that used in supercomputing.

In the world of high-performance computers, ways to parallelize computation have been developed for decades to increase scalability. This point is where lessons learned from supercomputing come in handy.
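The parallelism idea behind sharding can be sketched in a few lines of shell, assuming nothing beyond POSIX tools: each background job plays the role of a "shard" validating its own slice of transactions, and the results are aggregated once all shards finish. The transaction count, shard count, and file layout are invented purely for illustration.

```shell
# Illustrative only: 4 "shards" each process a slice of 100 fake
# transactions in parallel background jobs, then results are merged.
TOTAL=100
SHARDS=4
PER_SHARD=$((TOTAL / SHARDS))

tmpdir=$(mktemp -d)
s=1
while [ "$s" -le "$SHARDS" ]; do
  (
    # Each shard independently "validates" its own slice of transactions.
    done_count=0
    i=1
    while [ "$i" -le "$PER_SHARD" ]; do
      done_count=$((done_count + 1))
      i=$((i + 1))
    done
    echo "$done_count" > "$tmpdir/shard$s"
  ) &
  s=$((s + 1))
done
wait  # join all shards, like gathering results from parallel cores

processed=0
for f in "$tmpdir"/shard*; do
  processed=$((processed + $(cat "$f")))
done
echo "processed $processed transactions across $SHARDS shards"
rm -rf "$tmpdir"
```

The `wait` at the join point is the crux: throughput scales with the number of shards working concurrently, which is exactly the decades-old HPC pattern of splitting a workload across cores.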

“A blockchain like Ethereum is something like a global state machine, or in less technical words, a global computer. This global computer has been running for over five years on a single core, more specifically a single chain,” Bautista tells ZDNet.

“The efforts of the Ethereum community are focused on making this global computer into a multi-core computer, more specifically a multi-chain computer. The objective is to effectively parallelize computation into multiple computing cores called ‘shards’ – hence the name of this technology.”

ZDNet “Supercomputing can help address blockchain’s biggest problem. Here’s how”

For further reading, see the full ZDNet article: Supercomputing can help address blockchain’s biggest problem. Here’s how

Creating a Self-Signed Certificate on RHEL

You can create your own self-signed certificate. Note that a self-signed certificate does not provide the security guarantees of a CA-signed certificate.

Generating a Key

Taken from RHEL Administration Guide 25.6. GENERATING A KEY and Creating a Self-Signed Certificate

Step 1: Clean up fake key and certificate

Go to the /etc/httpd/conf/ directory and remove the fake key and certificate that were generated during the installation:

# cd /etc/httpd/conf/
# rm ssl.key/server.key
# rm ssl.crt/server.crt

Step 2: Create your own Random Key

Go to the /usr/share/ssl/certs/ directory and generate the key:

# cd /usr/share/ssl/certs/
# make genkey

Your system displays a message similar to the following:

umask 77 ; \
/usr/bin/openssl genrsa -des3 1024 > /etc/httpd/conf/ssl.key/server.key
Generating RSA private key, 1024 bit long modulus
.......++++++
................................................................++++++
e is 65537 (0x10001)
Enter pass phrase:

You now must enter a passphrase. For security reasons, it should contain at least eight characters, include numbers and/or punctuation, and not be a word in a dictionary.

Re-type the passphrase to verify that it is correct. Once you have typed it in correctly, /etc/httpd/conf/ssl.key/server.key, the file containing your key, is created.

Note that if you do not want to type in a passphrase every time you start your secure server, you must use the following two commands instead of make genkey to create the key.

# /usr/bin/openssl genrsa 1024 > /etc/httpd/conf/ssl.key/server.key

Then, use the following command to make sure the permissions are set correctly for the file:

# chmod go-rwx /etc/httpd/conf/ssl.key/server.key

After you use the above commands to create your key, you do not need to use a passphrase to start your secure server.
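As a sanity check (not part of the original guide), the key can be verified and its permissions confirmed with openssl and chmod. The sketch below uses a throwaway temporary file so it can run anywhere; on a real server, substitute /etc/httpd/conf/ssl.key/server.key:

```shell
# Sketch: generate a throwaway passphrase-less key, lock it down,
# and confirm it parses. The temporary file stands in for server.key.
tmpkey=$(mktemp)
openssl genrsa 2048 > "$tmpkey" 2>/dev/null
chmod go-rwx "$tmpkey"
openssl rsa -in "$tmpkey" -check -noout
# Report the resulting octal permissions (stat flags differ on GNU vs BSD)
perms=$(stat -c '%a' "$tmpkey" 2>/dev/null || stat -f '%Lp' "$tmpkey")
echo "permissions: $perms"
rm -f "$tmpkey"
```

Owner-only permissions (600) are what `chmod go-rwx` guarantees: a world-readable private key defeats the purpose of the certificate entirely.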

* The server.key file should be owned by the root user on your system and should not be accessible to any other user. Make a backup copy of this file and keep the backup copy in a safe, secure place. You need the backup copy because if you ever lose the server.key file after using it to create your certificate request, your certificate no longer works and the CA is not able to help you. Your only option is to request (and pay for) a new certificate.

Creating a Self-Signed Certificate

Once you have a key, make sure you are in the /usr/share/ssl/certs/ directory, and type the following command:

# make testcert

The following output is shown and you are prompted for your passphrase (unless you generated a key without a passphrase):

umask 77 ; \
/usr/bin/openssl req -new -key /etc/httpd/conf/ssl.key/server.key \
-x509 -days 365 -out /etc/httpd/conf/ssl.crt/server.crt
Using configuration from /usr/share/ssl/openssl.cnf
Enter pass phrase:

Next, you are asked for more information. The computer’s output and your inputs look like the following (provide the correct information for your organization and host):

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a
DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:SG

After you provide the correct information, a self-signed certificate is created in /etc/httpd/conf/ssl.crt/server.crt. Restart the secure server after generating the certificate with the following command:

# /sbin/service httpd restart
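To confirm what was issued, the certificate’s subject and validity window can be inspected with `openssl x509`. The sketch below creates a throwaway self-signed certificate in a temporary directory (the subject values are made up for the example); on a real server, point `-in` at /etc/httpd/conf/ssl.crt/server.crt instead.

```shell
# Sketch: create and inspect a throwaway self-signed certificate.
tmp=$(mktemp -d)
# One-shot key + certificate, no passphrase (-nodes), valid 365 days.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/C=SG/CN=www.example.com" \
  -keyout "$tmp/server.key" -out "$tmp/server.crt" 2>/dev/null
# Print who the certificate is for and when it expires.
info=$(openssl x509 -in "$tmp/server.crt" -noout -subject -dates)
echo "$info"
rm -rf "$tmp"
```

Checking the `notAfter` date is worth making a habit: a self-signed certificate created with `-days 365` silently stops working a year later.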

The Internet of Workflow

Photo Taken from https://www.azoquantum.com/

The next big thing, Nicolas Dubé, VP and chief technologist for HPE’s HPC business unit, told the virtual audience at SFE21, is something that will connect HPC and (broadly) all of IT – into what Dubé calls The Internet of Workflows (IoW).

Taken from “What’s After Exascale? The Internet of Workflows Says HPE’s Nicolas Dubé”

The IoW, said Dubé, is about “applying those principles to a much broader set of scientific fields because we’re convinced that is where this is going.”

Presented here are six takeaways from Dubé’s talk, briefly touching on recent relevant advances as well as a list of requirements for developing the IoW.

  1. First the Basics. The effort to achieve exascale and the needs of heterogeneous computing generally were catalysts in producing technologies needed for IoW. Dubé also noted the “countless silicon startups doing accelerators” tackling diverse workloads. Still, lots more work is needed.
  2. White Hats & Data Sovereignty. A key issue, currently not fully addressed, is data sovereignty. Dubé agrees it’s a critical challenge now and will be even more so in an IoW world. He didn’t offer specific technology or practice guidelines.
  3. New Runtimes for a Grand Vision. It’s one thing to dream of IoW; it’s another to build it. Effective parallel programming for diverse devices and the availability of reasonably performant runtime systems able to accommodate device diversity are all needed.
  4. Chasing Performance Portability…Still. Tight vertical software integration as promoted by some (pick your favorite target vendor) isn’t a good idea, argued Dubé. This isn’t a new controversy and maybe it’s a hard-stop roadblock for IoW. We’ll see. Dubé argues for openness and says HPE (Cray) is trying to make the Cray Programming Environment a good choice.
  5. “A Combinatorial Explosion of Configurations.” Now there’s an interesting turn of phrase. The avalanche of new chips from old players and newcomers is a blessing and a curse. Creating systems to accommodate the new wealth of choices is likewise exciting but daunting and expensive. Dubé argues we need to find ways to cut the costs of silicon innovation and subsequent systems to help bring the IoW into being.
  6. Worldwide Data Hub? If one is going to set goals, they may as well be big ones. Creating an infrastructure with reasonable governance and practices to support an IoW is a big goal. Data is at the core of nearly everything, Dubé argued.

Intel Accelerates Process and Packaging Innovations

Taken from Youtube – Intel NewRoom

During the “Intel Accelerated” webcast, Intel’s technology leaders revealed one of the most detailed process and packaging technology roadmaps the company has provided. The event on July 26, 2021, showcased a series of foundational innovations that will power products through 2025 and beyond. As part of the presentations, Intel announced RibbonFET, its first new transistor architecture in more than a decade, and PowerVia, an industry-first new backside power delivery method. (Credit: Intel Corporation)

AMD’s Strong Comeback

In the article from The Next Platform, “AMD is finally trusted in the Datacentre again”:

AMD turned in the best quarter that we can remember, and is now firmly in place as the gadfly counterbalance to the former hegemony of Intel. And that is good for everyone who buys a game console, a PC, an edge device, and a server. And the game is only going to get more interesting with Intel getting its chip together and preparing for a long battle with AMD and other XPU usurpers in chip design, as well as with Taiwan Semiconductor Manufacturing Corp in chip etching and packaging.

We do get some hints, however. Lisa Su, AMD’s president and chief executive officer, said that AMD’s datacenter business – at this point meaning Epyc CPUs and Instinct GPU accelerators – comprised more than 20 percent of the company’s overall sales, and the big driver in this quarter was not just second generation “Rome” Epyc 7002 and third generation “Milan” Epyc 7003 server chips – Rome is still outselling Milan, but the crossover is coming in the third quarter of this year – but the Radeon Instinct MI100 GPU accelerators launched last fall. The datacenter GPU business more than doubled from a year ago, according to Su, and AMD expects it to continue to grow in the second half of the year as the 1.5 exaflops “Frontier” supercomputer at Oak Ridge National Laboratory in the United States, the as-yet-unnamed pre-exascale system at Pawsey Supercomputing Centre in Australia, and the Lumi pre-exascale system in Finland all get their Radeon Instinct motors installed.

The Next Platform “AMD is finally trusted in the Datacentre again”