Highlighted Areas of Research by HPCWire

HPCWire highlighted three areas of research in high-performance computing and related domains. The article can be found here

Research 1: HipBone: A performance-portable GPU-accelerated C++ version of the NekBone benchmark

HipBone “is a fully GPU-accelerated C++ implementation of the original NekBone CPU proxy application with several novel algorithmic and implementation improvements which optimize its performance on modern finegrain parallel GPU accelerators.” 

What’s New in HPC Research: HipBone, GPU-Aware Asynchronous Tasks, Autotuning & More

Research 2: A Case for Intra-Rack Resource Disaggregation in HPC

A multi-institution research team utilized Cori, a high performance computing system at the National Energy Research Scientific Computing Center, to analyze “resource disaggregation to enable finer-grain allocation of hardware resources to applications.”

Research 3: Improving Scalability with GPU-Aware Asynchronous Tasks

Computer scientists from the University of Illinois at Urbana-Champaign and Lawrence Livermore National Laboratory demonstrated improved scalability by using GPU-aware asynchronous tasks to hide communication behind computation.
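The core pattern, overlapping an in-flight data transfer with independent local work, can be illustrated with a minimal sketch. This is not the paper's implementation (which targets GPU runtimes); the function names, timings, and the use of Python's `asyncio` are purely illustrative assumptions.

```python
import asyncio

async def send_halo(data, delay=0.2):
    # Illustrative stand-in for an asynchronous GPU-aware message
    # (e.g. a halo exchange); the sleep models transfer latency.
    await asyncio.sleep(delay)
    return data

def local_compute(n):
    # Stand-in for interior computation that needs no remote data.
    return sum(i * i for i in range(n))

async def step_overlapped():
    # Launch the communication first, compute while it is in flight,
    # and await the transfer only when its result is actually needed.
    transfer = asyncio.create_task(send_halo([1, 2, 3]))
    interior = local_compute(200_000)
    halo = await transfer
    return interior, halo

interior, halo = asyncio.run(step_overlapped())
```

The point of the pattern is that the communication's latency is paid concurrently with useful work rather than serialized before it, which is what lets communication be "hidden" as the problem scales out.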

Address Blockchain’s Biggest Problem with Supercomputing

Producing digital coins is not environmentally friendly, to say the least. Bitcoin mining – one of the best-known applications of blockchain – consumes around 110 terawatt-hours (TWh) per year, more than the annual consumption of countries such as Sweden or Argentina.

The project involves running open-source simulations to study how the speed of transactions on the blockchain could be increased using various techniques, such as sharding.

Sharding means splitting a blockchain network into smaller partitions called ‘shards’ that work in parallel to increase its transactional throughput. In other words, it spreads out the workload of a network so that more transactions can be processed at once, a technique similar to those used in supercomputing.

In high-performance computing, techniques for parallelizing computation have been refined for decades to increase scalability. This is where lessons learned from supercomputing come in handy.
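The partitioning idea described above can be sketched in a few lines: transactions are deterministically assigned to shards, and the shards are then processed in parallel. This is a hypothetical illustration only; the shard count, transaction format, and hashing scheme are assumptions, not details of Ethereum's actual sharding design.

```python
from concurrent.futures import ThreadPoolExecutor
import zlib

NUM_SHARDS = 4  # illustrative; real designs use many more shards

def shard_of(account: str) -> int:
    # Deterministic assignment: transactions from the same account
    # always land on the same shard (crc32 is a stable hash).
    return zlib.crc32(account.encode()) % NUM_SHARDS

def process_shard(txs):
    # Stand-in for validating/applying a shard's transactions
    # independently of the other shards.
    return len(txs)

# Partition a batch of toy transactions across the shards.
txs = [{"from": f"acct{i}", "amount": i} for i in range(100)]
shards = [[] for _ in range(NUM_SHARDS)]
for tx in txs:
    shards[shard_of(tx["from"])].append(tx)

# Each shard is processed concurrently, mirroring how parallel
# workers split a workload in HPC.
with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
    counts = list(pool.map(process_shard, shards))
```

Every transaction is handled exactly once, but the work proceeds on several "cores" at the same time, which is the throughput argument behind sharding.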

“A blockchain like Ethereum is something like a global state machine, or in less technical words, a global computer. This global computer has been running for over five years on a single core, more specifically a single chain,” Bautista tells ZDNet.

“The efforts of the Ethereum community are focused on making this global computer into a multi-core computer, more specifically a multi-chain computer. The objective is to effectively parallelize computation into multiple computing cores called ‘shards’ – hence the name of this technology.”

ZDNet “Supercomputing can help address blockchain’s biggest problem. Here’s how”

For further reading, see the full article: Supercomputing can help address blockchain’s biggest problem. Here’s how

How AI Is Reshaping HPC And What This Means For Data Center Architects

In quarterly earnings reports this year, the CEO and founder of NVIDIA (a Liqid partner) noted that the company’s recent advances – delivering a new compute platform designed with AI in mind, and acquiring a leading networking company – all serve one central goal: advancing what is increasingly known as data center-scale computing. For providers of high-performance computing solutions, both those built around NVIDIA’s tech and those competing with the GPU goliath, this need for data center-scale computing has been defined by, and has escalated alongside, the data performance requirements of artificial intelligence and machine learning (AI+ML), something I discuss further in a recent article.