Myths and Legends in High-Performance Computing

Abstract taken from Myths and Legends in High-Performance Computing

In this humorous and thought-provoking article, we discuss certain myths and legends that are folklore among members of the high-performance computing community. We collected those myths from conversations at conferences and meetings, product advertisements, papers, and other communications such as tweets, blogs, and news articles within (and beyond) our community. We believe they represent the zeitgeist of the current era of massive change, driven by the end of many scaling laws such as Dennard scaling and Moore’s law. While some laws end, new directions open up, such as algorithmic scaling or novel architecture research. However, these myths are rarely based on scientific facts but often on some evidence or argumentation. In fact, we believe that this is the very reason for the existence of many myths and why they cannot be answered clearly. While it feels like there should be clear answers for each, some may remain endless philosophical debates, such as the question of whether Beethoven was better than Mozart. We would like to see our collection of myths as a discussion of possible new directions for research and industry investment.


The article addresses the following myths:

  • Myth 1: Quantum Computing Will Take Over HPC!
  • Myth 2: Everything Will Be Deep Learning!
  • Myth 3: Extreme Specialization as Seen in Smartphones Will Push Supercomputers Beyond Moore’s Law!
  • Myth 4: Everything Will Run on Some Accelerator!
  • Myth 5: Reconfigurable Hardware Will Give You 100X Speedup!
  • Myth 6: We Will Soon Run at Zettascale!
  • Myth 7: Next-Generation Systems Need More Memory per Core!
  • Myth 8: Everything Will Be Disaggregated!
  • Myth 9: Applications Continue to Improve, Even on Stagnating Hardware!
  • Myth 10: Fortran Is Dead, Long Live the DSL!
  • Myth 11: HPC Will Pivot to Low or Mixed Precision!
  • Myth 12: All HPC Will Be Subsumed by the Clouds!

Supporting Science with HPC

Article is taken from Supporting Science with HPC from Scientific-Computing

HPC integrators can help scientists and HPC research centres through the provisioning and management of HPC clusters. As the number of applications and potential user groups for HPC continues to expand, supporting domain-expert scientists in accessing and using HPC resources is increasingly important.

While just ten years ago a cluster would have been used by only a few departments at a university, there is now a huge pool of potential users from non-traditional HPC applications. These include artificial intelligence (AI) and machine learning (ML), as well as big data and advanced analytics applied to data sets from research areas that previously had little interest in using HPC systems.

This results in a growing need to support and facilitate the use of HPC resources in academia and in research and development. Organisations can either employ staff to support this infrastructure themselves or outsource some or all of these processes to companies experienced in the management and support of HPC systems.


High-Performance Computing Is at an Inflection Point

A group of researchers led by Martin Schulz of the Leibniz Supercomputing Center (Munich) presented a paper in which they argue that the architectural landscape of high-performance computing (HPC) is undergoing a seismic shift.

The Full Article is taken from Summer Reading: “High-Performance Computing Is at an Inflection Point”

4 Guiding Principles for the Future of HPC Architecture

  • Energy consumption is no longer merely a cost factor but also a hard feasibility constraint for facilities.
  • Specialization is key to further increase performance despite stagnating frequencies and within limited energy bands.
  • A significant portion of the energy budget is spent moving data, and future architectures must be designed to minimize such data movement.
  • Large-scale computing centers must provide optimal computing resources for increasingly differentiated workloads.

Idea Snippets – Integrated Heterogeneity

Integrated heterogeneous systems are a promising alternative: they integrate multiple specialized architectures on a single node while keeping the overall system a homogeneous collection of mostly identical nodes. This allows applications to switch quickly between accelerator modules at a fine-grained scale while minimizing energy cost and performance overhead, enabling truly heterogeneous applications.

[Figure 1 from the paper “Integrated HPC Systems and How They Will Change HPC System Operations”]

Idea Snippets – Challenges of Integrated Heterogeneity

“A single application is likely not going to use all specialized compute elements at the same time, leading to idle processing elements. Therefore, the choice of the best-suited accelerator mix is an important design criterion during procurement, which can only be achieved via co-design between the computer center and its users on one side and the system vendor on the other. Further, at runtime, it will be important to dynamically schedule and power the respective compute resources. Using power overprovisioning, i.e., planning for a TDP and maximal node power that is reached with a subset of dynamically chosen accelerated processing elements, this can be easily achieved, but requires novel software approaches in system and resource management.”
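
The power-overprovisioning idea above can be made concrete with a small scheduling sketch: given each accelerator's worst-case power draw and an estimate of its benefit for the current application phase, the runtime powers up only the subset that fits under the node power cap. This is a hypothetical illustration, not code from the paper; the accelerator names, numbers, and greedy speedup-per-watt heuristic are all assumptions.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch of power overprovisioning at the node level: the node
// hosts more accelerators than its power budget can drive at once, so the
// runtime picks the subset with the best expected benefit for the current
// application phase while staying under the node power cap.
struct Accelerator {
    std::string name;
    double tdp_watts;        // worst-case power draw if powered up
    double expected_speedup; // estimated benefit for the current phase
};

std::vector<Accelerator> choose_subset(std::vector<Accelerator> accs,
                                       double node_power_cap_watts) {
    // Greedy heuristic: favor accelerators with the best speedup per watt.
    std::sort(accs.begin(), accs.end(),
              [](const Accelerator& a, const Accelerator& b) {
                  return a.expected_speedup / a.tdp_watts >
                         b.expected_speedup / b.tdp_watts;
              });
    std::vector<Accelerator> chosen;
    double used = 0.0;
    for (const auto& acc : accs) {
        if (used + acc.tdp_watts <= node_power_cap_watts) {
            chosen.push_back(acc);
            used += acc.tdp_watts;
        }
    }
    return chosen; // everything not chosen stays powered down for this phase
}

int main() {
    // Illustrative accelerator mix and a 500 W node cap (made-up numbers).
    std::vector<Accelerator> accs = {
        {"gpu0", 300.0, 8.0}, {"gpu1", 300.0, 8.0},
        {"fpga0", 100.0, 3.0}, {"vector0", 150.0, 4.0}};
    for (const auto& a : choose_subset(accs, 500.0))
        std::cout << "power up " << a.name << "\n";
    return 0;
}
```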

They note the need for programming environments and abstractions to exploit the different on-node accelerators. “For widespread use, such support must be readily available and, in the best case, in a unified manner in one programming environment. OpenMP, with its architecture-agnostic target concept, is a good match for this. Domain-specific frameworks, as they are, e.g., common in AI, ML or HPDA (e.g., Tensorflow, Pytorch or Spark), will further help to hide this heterogeneity and help make integrated platforms accessible to a wide range of users.”

HPCWire – Summer Reading: “High-Performance Computing Is at an Inflection Point”
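
The quoted passage names OpenMP’s architecture-agnostic target concept as one unified way to program on-node accelerators. The sketch below shows what such an offload can look like; the AXPY kernel and variable names are illustrative, and which device the loop actually runs on depends on the offload targets the compiler was built with (it falls back to the host if no device is available).

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<double> x(n, 1.0), y(n, 2.0);
    const double a = 0.5;

    double* px = x.data();
    double* py = y.data();

    // Architecture-agnostic offload: the same directive can target a GPU,
    // another accelerator backend, or the host CPU, depending on how the
    // compiler/runtime is configured.
    #pragma omp target teams distribute parallel for \
            map(to: px[0:n]) map(tofrom: py[0:n])
    for (int i = 0; i < n; ++i) {
        py[i] += a * px[i];
    }

    std::printf("y[0] = %f\n", py[0]);
    return 0;
}
```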

Idea Snippets – Coping with Idle Periods among different Devices (Project Regale)

Application Level. Dynamically changing application resources in terms of the number and type of processing elements.

Node Level. Changing node settings, e.g., power/energy consumption via techniques such as DVFS or power capping, as well as node-level partitioning of memory, caches, etc. (see the sketch below).

System Level. Adjusting system operation based on workloads or external inputs, e.g., energy prices or supply levels.

HPCWire – Summer Reading: “High-Performance Computing Is at an Inflection Point”
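
As one concrete handle for the node-level techniques listed above, the sketch below applies a package power cap through the Linux powercap (RAPL) sysfs interface. This is an assumption about the environment rather than anything prescribed by Project Regale: the zone name varies between machines, the file may not exist on every system, and writing it normally requires root privileges; a real resource manager would wrap such knobs behind higher-level interfaces.

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Minimal node-level power-capping sketch. Assumes a Linux system exposing
// the powercap (RAPL) sysfs interface under /sys/class/powercap; the zone
// name and availability are machine-specific, and writing the limit usually
// requires root privileges. The value is given in microwatts.
bool set_package_power_cap_uw(const std::string& zone, long long microwatts) {
    const std::string path =
        "/sys/class/powercap/" + zone + "/constraint_0_power_limit_uw";
    std::ofstream f(path);
    if (!f) {
        std::cerr << "cannot open " << path
                  << " (zone missing or insufficient permissions)\n";
        return false;
    }
    f << microwatts;
    return static_cast<bool>(f);
}

int main() {
    // Example: cap package 0 at 120 W, e.g. in response to a system-level
    // signal such as a high energy price or a reduced facility power budget.
    if (set_package_power_cap_uw("intel-rapl:0", 120LL * 1000 * 1000))
        std::cout << "power cap applied\n";
    return 0;
}
```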