Fluid dynamics simulations are critical for applications ranging from wind turbine design to aircraft optimization. Running them as direct numerical simulations (DNS), however, is computationally costly. Many researchers instead turn to large-eddy simulations (LES), which approximate the smallest-scale motions of a fluid rather than resolving them, reducing computational cost at the price of accuracy. Now, researchers are using supercomputers at the High-Performance Computing Center Stuttgart (HLRS) to help make the more accurate simulations accessible to more researchers.
The recent news that Intel will turn to TSMC to mass-produce CPU products signals a new era in the processor IDM/foundry arena. Production is slated to start in the second half of 2021 and will cover some of Intel’s low- and mid-tier CPU products. Yole Développement’s reports “Computing for Datacenter Servers 2021” and “Processor Quarterly Market Monitor” cover the market space where these events are occurring. Meanwhile, speculation over Intel’s motivation is rampant, as are theories about what this means for the firm’s long-term strategy.
In quarterly earnings reports this year, the CEO and founder of NVIDIA (a Liqid partner) noted that the company’s new compute platform, designed with AI in mind, and its acquisition of a leading networking company this year both serve one central goal: advancing what is increasingly known as data center-scale computing. For providers of high-performance computing solutions, both those built around NVIDIA’s tech and those competing with the GPU goliath, the need for data center-scale computing has been defined by, and has escalated alongside, the data performance requirements of artificial intelligence and machine learning (AI+ML), something I discuss further in a recent article.
Computer scientists developed a deep learning method to create realistic objects for virtual environments that can be used to train robots. The researchers used TACC’s Maverick2 supercomputer to train the generative adversarial network. The network is the first that can produce colored point clouds with fine details at multiple resolutions.
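To make the setup concrete, here is a minimal toy sketch of the two ideas the brief mentions: a colored point cloud represented as rows of (x, y, z, r, g, b), and a generator/discriminator pair trained adversarially. This is not the researchers' network; it uses a deliberately tiny linear generator and logistic discriminator with hand-written gradients, and all sizes and distributions are illustrative assumptions.

```python
import numpy as np

# Toy GAN sketch on colored 3D points. Each sample is a 6-vector
# (x, y, z, r, g, b). All names, sizes, and data distributions below
# are illustrative, not taken from the TACC work.

rng = np.random.default_rng(0)
Z_DIM, X_DIM, BATCH = 4, 6, 64  # noise dim, point dim, batch size

def sample_real(n):
    """'Real' colored points: positions near the origin, reddish colors."""
    pos = rng.normal(0.0, 0.5, size=(n, 3))
    col = np.clip(rng.normal([0.8, 0.2, 0.2], 0.1, size=(n, 3)), 0.0, 1.0)
    return np.hstack([pos, col])

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator G(z) = z @ Wg + bg maps noise to a colored point.
# Discriminator D(x) = sigmoid(x @ wd + bd) scores real vs. fake.
Wg = rng.normal(0, 0.1, size=(Z_DIM, X_DIM)); bg = np.zeros(X_DIM)
wd = rng.normal(0, 0.1, size=X_DIM);          bd = 0.0
lr = 0.05

for step in range(200):
    z = rng.normal(size=(BATCH, Z_DIM))
    fake = z @ Wg + bg
    real = sample_real(BATCH)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    ds_real = sigmoid(real @ wd + bd) - 1.0   # dL/d(logit) on real batch
    ds_fake = sigmoid(fake @ wd + bd)         # dL/d(logit) on fake batch
    wd -= lr * (real.T @ ds_real + fake.T @ ds_fake) / BATCH
    bd -= lr * (ds_real.mean() + ds_fake.mean())

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    ds_g = sigmoid(fake @ wd + bd) - 1.0      # dL_G/d(logit), shape (BATCH,)
    grad_fake = ds_g[:, None] * wd[None, :]   # backprop through D into points
    Wg -= lr * (z.T @ grad_fake) / BATCH
    bg -= lr * grad_fake.mean(axis=0)

# Each generated row is one colored point: (x, y, z, r, g, b).
samples = rng.normal(size=(5, Z_DIM)) @ Wg + bg
print(samples.shape)  # (5, 6)
```

A real point-cloud GAN would generate thousands of such points per shape with deep networks and a point-set loss; the sketch only shows the adversarial update pattern and the flat xyz+rgb representation.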