Addressing Blockchain’s Biggest Problem with Supercomputing

Producing digital coins is not environmentally friendly, to say the least. Bitcoin – one of the best-known implementations of blockchain – consumes around 110 terawatt-hours per year through mining, more than the annual electricity consumption of countries such as Sweden or Argentina.

The project described involves running open-source simulations to study how the speed of transactions on the blockchain could be increased using various techniques, such as sharding.

Sharding means splitting a blockchain network into smaller partitions called ‘shards’ that work in parallel to increase its transactional throughput. In other words, it spreads out the workload of the network so that more transactions can be processed at once, a technique similar to that used in supercomputing.
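As a rough illustration only (not Ethereum’s actual protocol), the routing step behind sharding can be sketched in a few lines of Python: transactions are deterministically assigned to a shard by hashing the sender’s address, and each shard then processes its own batch independently of the others. The shard count, the hash-based routing rule, and the `process_shard` stand-in are all illustrative assumptions.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 4  # toy value; real sharded networks use many more

def shard_for(address: str) -> int:
    """Deterministically map an account address to a shard."""
    digest = hashlib.sha256(address.encode()).digest()
    return digest[0] % NUM_SHARDS

def process_shard(shard_id: int, txs: list) -> int:
    """Stand-in for executing a shard's batch; returns the number processed."""
    return len(txs)

def route_and_process(transactions: list) -> int:
    # Partition transactions by sender into per-shard batches.
    batches = {i: [] for i in range(NUM_SHARDS)}
    for tx in transactions:
        batches[shard_for(tx["sender"])].append(tx)
    # Shards work in parallel, each on its own independent batch.
    with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
        results = pool.map(lambda item: process_shard(*item), batches.items())
    return sum(results)

txs = [{"sender": f"0x{i:040x}", "value": i} for i in range(100)]
print(route_and_process(txs))  # all 100 transactions processed across the shards
```

Because the routing function is deterministic, every node agrees on which shard owns a given account, which is what lets the shards proceed in parallel without coordinating on every transaction.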

In the world of high-performance computing, techniques for parallelizing computation have been developed over decades to increase scalability. This is where lessons learned from supercomputing come in handy.

“A blockchain like Ethereum is something like a global state machine, or in less technical words, a global computer. This global computer has been running for over five years on a single core, more specifically a single chain,” Bautista tells ZDNet.

“The efforts of the Ethereum community are focused on making this global computer into a multi-core computer, more specifically a multi-chain computer. The objective is to effectively parallelize computation into multiple computing cores called ‘shards’ – hence the name of this technology.”

Source: ZDNet, “Supercomputing can help address blockchain’s biggest problem. Here’s how”


How AI Is Reshaping HPC And What This Means For Data Center Architects

In this year’s quarterly earnings reports, the CEO and founder of NVIDIA (a Liqid partner) noted that the company’s new compute platform designed with AI in mind and its acquisition of a leading networking company are both aimed at one central goal: advancing what is increasingly known as data center-scale computing. For providers of high-performance computing solutions – both those built around NVIDIA’s technology and those competing with the GPU goliath – this need for data center-scale computing has been defined by, and has escalated alongside, the data performance requirements of artificial intelligence and machine learning (AI+ML), something I discuss further in a recent article.

https://www.forbes.com/sites/forbestechcouncil/2021/01/19/how-ai-is-reshaping-hpc-and-what-this-means-for-data-center-architects/?sh=3dec4e4d7371