Extremely Low Thermal Conductivity Material to Insulate Spacecraft

This video is from 2011, and even now in 2021 I am still fascinated by the science: picking up a block at 2200 degrees F with a bare hand… Wow… Enjoy!

For more information about the thermal tiles used on the Space Shuttle, how they work, and what they are made of, see Thermal Protection Systems. Enjoy!

Researchers Train Fluid Dynamics Neural Networks on Supercomputers

Fluid dynamics simulations are critical for applications ranging from wind turbine design to aircraft optimization. Running them as direct numerical simulations (DNS), however, is computationally costly. Many researchers instead turn to large-eddy simulations (LES), which resolve only the large-scale motions of a fluid and approximate the rest in order to reduce computational costs, but these approximations come at the price of accuracy. Now, researchers are using supercomputers at the High-Performance Computing Center Stuttgart (HLRS) to help make high-accuracy simulations accessible to more researchers.
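The article stays high level, but the core LES idea, resolving the large eddies while filtering out (and later modeling) the small-scale motion, can be sketched with a toy 1-D field and a box filter. The field, filter width, and numpy setup below are my own illustrative assumptions, not anything from the HLRS researchers' code:

```python
import numpy as np

# Synthetic 1-D "velocity" field: one large eddy plus small-scale fluctuations.
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(32.0 * x)

# Box filter whose width matches the small-scale wavelength (16 samples).
width = 16
kernel = np.ones(width) / width

# Periodic padding so the filter wraps cleanly at the domain boundaries.
padded = np.concatenate([u[-width:], u, u[:width]])
u_bar = np.convolve(padded, kernel, mode="same")[width:-width]

# The filtered field keeps the large eddy almost intact; the residual is
# the subgrid-scale motion that an LES closure model must account for.
residual = u - u_bar
print("resolved-scale error:", float(np.std(u_bar - np.sin(x))))
print("subgrid fluctuation:", float(np.std(residual)))
```

The filtered field tracks the large-scale `sin(x)` component closely, while everything the filter removed ends up in `residual`; in a real LES, modeling the effect of that residual on the resolved scales is exactly where the accuracy tradeoff (and the opportunity for learned models) lives.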


For more information, do take a look at Researchers Train Fluid Dynamics Neural Networks on Supercomputers

Intel turns to TSMC: another step towards fabless?

The recent news that Intel will turn to TSMC to mass-produce CPU products signals a new era in the processor IDM/foundry arena. Production is slated to start in the second half of 2021 and will cover some of Intel’s low- and mid-tier CPU products. Yole Développement’s reports “Computing for Datacenter Servers 2021” and “Processor Quarterly Market Monitor” cover the market space where these events are occurring. Meanwhile, speculation over Intel’s motivation is rampant, as are theories about what this means for the firm’s long-term strategy.


For more information, do take a look at Intel turns to TSMC: another step towards fabless?

How AI Is Reshaping HPC And What This Means For Data Center Architects

In quarterly earnings reports this year, the CEO and founder of NVIDIA (a Liqid partner) noted that the company’s recent moves, delivering a new compute platform designed with AI in mind and acquiring a leading networking company, are all aimed at one central goal: advancing what is increasingly known as data center-scale computing. For providers of high-performance computing solutions, both those built around NVIDIA’s tech and those competing with the GPU goliath, this need for data center-scale computing has been defined by, and has escalated alongside, the data performance requirements of artificial intelligence and machine learning (AI+ML), something I discuss further in a recent article.


How to train a robot (using AI and supercomputers)

From Science Daily

Computer scientists have developed a deep learning method to create realistic objects for virtual environments that can be used to train robots. The researchers used TACC’s Maverick2 supercomputer to train the generative adversarial network (GAN), the first that can produce colored point clouds with fine details at multiple resolutions.