An interesting article on the delicate relationship between chip designs and foundries
The tight linkage between chip designs and chip manufacturing processes has caused its share of havoc in the IT sector, and it is getting worse now that Moore’s Law has slowed and Dennard scaling died a decade ago. Wringing more performance out of devices while keeping a lid on power draw is causing loads of trouble as chip makers try to advance the state of the art. When the world’s foundries miss their process targets, chips drive off the roadmap page and smash on the floor.
If the data center is part of your development wheelhouse, you’re likely familiar with a little CPU called “Xeon”. This webinar unpacks the latest methodologies for tuning complex AI and HPC workloads on the third-generation Xeon platform (formerly code-named Ice Lake).
Delivering up to 40 cores per processor, 3rd Gen Intel® Xeon® Scalable processors are designed for compute-intensive, data-centric workloads spanning the cloud to the network and the edge.
In this session, Intel engineer Vladimir Tsymbal will show you how to optimize your AI and HPC applications and solutions to unlock the full spectrum of these processors’ power. You’ll learn:
The top-down tuning methodology that uses Xeon hardware performance metrics to identify issues, including critical bottlenecks caused by data locality, CPU interconnect bandwidth, cache limitations, instruction-execution stalls, and I/O interfaces
How a high-level HPC Characterization Analysis helps you find inefficient parallel tasks