Find CPU and GPU Performance Headroom using Roofline Analysis

Join Technical Consulting Engineer and HPC programming expert Cedric Andreolli for a session covering:

  • How to perform GPU headroom and GPU cache-locality analysis using the Advisor Roofline extensions for oneAPI and OpenMP
  • An introduction to a new memory-level Roofline feature that helps pinpoint which specific memory level (L1, L2, L3, or DRAM) is causing the bottleneck
  • A walkthrough of Intel Advisor’s improved user interface

To watch the video, see https://techdecoded.intel.io/essentials/find-cpu-gpu-performance-headroom-using-roofline-analysis/#gs.fpbz93
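The Roofline model underlying this analysis is simple: attainable performance is capped either by peak compute or by memory bandwidth times arithmetic intensity, whichever is lower. A minimal sketch in Python, using illustrative (not measured) machine numbers:

```python
# Minimal sketch of the Roofline model. The peak compute and bandwidth
# figures below are hypothetical, chosen only to illustrate the shape.
def attainable_gflops(ai, peak_gflops, mem_bw_gbs):
    """Attainable GFLOP/s = min(peak compute, arithmetic intensity * bandwidth)."""
    return min(peak_gflops, ai * mem_bw_gbs)

# Hypothetical machine: 1000 GFLOP/s peak, 100 GB/s DRAM bandwidth.
# The ridge point sits at AI = peak / bandwidth = 10 FLOP/byte:
# kernels below it are memory-bound, kernels above it are compute-bound.
for ai in (0.5, 10.0, 64.0):
    print(f"AI={ai:5.1f} FLOP/byte -> {attainable_gflops(ai, 1000.0, 100.0):.0f} GFLOP/s")
```

A kernel plotted well below its roof has headroom; the memory-level extension repeats this comparison per cache level (L1, L2, L3, DRAM) to show which one imposes the binding slope.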

Intel Launches 11th Gen Intel Core and Intel Evo (code-named “Tiger Lake”)

Intel released 11th Gen Intel® Core™ mobile processors with Iris® Xe graphics (code-named “Tiger Lake”). The new processors break the boundaries of performance with unmatched capabilities in productivity, collaboration, creation, gaming and entertainment on ultra-thin-and-light laptops. They also power the first class of Intel Evo platforms, made possible by the Project Athena innovation program.

  • Intel launches 11th Gen Intel® Core™ processors with Intel® Iris® Xe graphics, the world’s best processors for thin-and-light laptops1, delivering up to 2.7x faster content creation2, more than 20% faster office productivity3 and more than 2x faster gaming plus streaming4 in real-world workflows over competitive products.
  • Intel® Evo™ platform brand introduced for designs based on 11th Gen Intel Core processors with Intel Iris Xe graphics and verified through the Project Athena innovation program’s second-edition specification and key experience indicators (KEIs).
  • More than 150 designs based on 11th Gen Intel Core processors are expected from Acer, Asus, Dell, Dynabook, HP, Lenovo, LG, MSI, Razer, Samsung and others.

Accelerate Insights with AI and HPC Combined by Intel

In this presentation, the presenter gives an overview of Intel technology for Artificial Intelligence, including hardware platforms and software stacks, with a special focus on enabling successful development of AI solutions. He looks at how to do this both on the datacenter technology you know and use today and on technology built specifically for AI workloads, illustrated throughout with practical customer examples.

Compiling Quantum ESPRESSO-6.5.0 with Intel MPI 2018 on CentOS 7

Step 1: Download Quantum ESPRESSO 6.5.0 from the Quantum ESPRESSO download site, or clone the repository:

$ git clone https://gitlab.com/QEF/q-e.git

Step 2: Source the Intel compilers and set MKLROOT in your .bashrc:

export MKLROOT=/usr/local/intel_2018/mkl/lib
source /usr/local/intel/2018u3/parallel_studio_xe_2018/bin/psxevars.sh intel64
source /usr/local/intel/2018u3/compilers_and_libraries/linux/bin/compilervars.sh intel64
source /usr/local/intel/2018u3/impi/2018.3.222/bin64/mpivars.sh intel64

Step 3: Create a file called setup.sh and copy the contents below into it.

export F90=mpiifort
export F77=mpiifort
export MPIF90=mpiifort
export CC=mpiicc
export CPP="icc -E"
export CFLAGS=$FCFLAGS   # note: FCFLAGS is not set above; define it first if you want custom compile flags
export AR=xiar
export BLAS_LIBS=""
export LAPACK_LIBS="-lmkl_blacs_intelmpi_lp64"
export SCALAPACK_LIBS="-lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"
export FFT_LIBS="-L$MKLROOT/intel64"
./configure --enable-parallel --prefix=/usr/local/espresso-6.5.0

Step 4: Run the script so configure picks up the exported variables, then build and install.

$ sh setup.sh
$ make all -j 16
$ make install


Intel® Math Kernel Library Link Line Advisor

The Intel® Math Kernel Library (Intel® MKL) is designed to run on multiple processors and operating systems. It is also compatible with several compilers and third-party libraries, and provides different interfaces to its functionality. To support these different environments, tools, and interfaces, Intel MKL provides multiple libraries from which to choose.
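As an illustration, for a configuration like the Quantum ESPRESSO build above (Intel Fortran, Intel MPI, LP64 interface, threaded MKL with ScaLAPACK), the Advisor produces a dynamic link line along these lines (exact flags depend on the MKL version and the options you select):

```shell
-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 \
  -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_lp64 \
  -liomp5 -lpthread -lm -ldl
```

The key choices encoded here are the integer interface (lp64 vs. ilp64), the threading layer (mkl_intel_thread vs. mkl_sequential), and the BLACS library matching your MPI implementation.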


For more information on generating a link line for your environment, see the Intel® Math Kernel Library Link Line Advisor.