Learn to Accelerate TensorFlow on Intel® Architecture with Minimal Code Changes

The OpenVINO™ integration with TensorFlow enables you to speed up the TensorFlow workflow by adding just two lines of code. Enhance performance on Intel platforms while using the familiar TensorFlow APIs. Download this whitepaper to get started.
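
The two lines in question are an import and a backend selection. A minimal sketch around a stock Keras model (the model and input here are just illustrative; openvino_tensorflow and set_backend are the documented entry points):

import tensorflow as tf
import openvino_tensorflow

# Route supported operators to the OpenVINO backend
# ('GPU', 'MYRIAD', etc. are also possible targets).
openvino_tensorflow.set_backend('CPU')

# Ordinary TensorFlow code from here on.
model = tf.keras.applications.MobileNetV2(weights='imagenet')
preds = model.predict(tf.random.uniform((1, 224, 224, 3)))
print(preds.shape)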

Do sign up to get the white paper “Learn to Accelerate TensorFlow on Intel® Architecture with Minimal Code Changes”.

Efficient Heterogeneous Parallel Programming Using OpenMP

This article is excerpted from Intel’s “Efficient Heterogeneous Parallel Programming Using OpenMP”. In this article, we will show you how to do CPU+GPU asynchronous computation using OpenMP.

In some cases, offloading computations to an accelerator like a GPU means that the host CPU sits idle until the offloaded computations are finished. Using the CPU and GPU resources simultaneously, however, can improve the performance of an application. In OpenMP® programs that take advantage of heterogeneous parallelism, the master construct can be used to exploit simultaneous CPU and GPU execution.
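
As a sketch of the pattern (the kernel and array sizes are placeholders, not the article’s benchmark code): the master thread launches the GPU portion as a deferred target task using the nowait clause, the remaining threads immediately start on the CPU portion, and the barriers at the end of the parallel region join both.

#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) x[i] = (float)i;

    int split = N / 2;  /* first half to the GPU, second half to the CPU */

    #pragma omp parallel
    {
        #pragma omp master
        {
            /* Deferred target task: runs on the GPU while the other threads continue. */
            #pragma omp target teams distribute parallel for nowait \
                map(to: x[0:split]) map(from: y[0:split])
            for (int i = 0; i < split; i++)
                y[i] = 2.0f * x[i];
        }

        /* The master construct has no implied barrier, so the CPU share starts immediately. */
        #pragma omp for schedule(dynamic)
        for (int i = split; i < N; i++)
            y[i] = 2.0f * x[i];
    }   /* the implicit barrier here also waits for the deferred target task */

    printf("y[0] = %g, y[N-1] = %g\n", y[0], y[N - 1]);
    return 0;
}

Compiled with the options listed below (in particular ‑fiopenmp and ‑fopenmp‑targets=spir64), the target region is offloaded to an Intel GPU while the worksharing loop runs on the host threads.
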
…

The Intel® oneAPI DPC++/C++ Compiler was used with the following command-line options:
-O3 -Ofast -xCORE-AVX512 -mprefer-vector-width=512 -ffast-math -qopt-multiple-gather-scatter-by-shuffles -fimf-precision=low
-fiopenmp -fopenmp-targets=spir64="-fp-model=precise"
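
Assembled into a single invocation (icx shown for a C source file; the file name is a placeholder):

% icx -O3 -Ofast -xCORE-AVX512 -mprefer-vector-width=512 -ffast-math \
      -qopt-multiple-gather-scatter-by-shuffles -fimf-precision=low \
      -fiopenmp -fopenmp-targets=spir64="-fp-model=precise" hetero.c -o hetero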

…
OpenMP provides true asynchronous, heterogeneous execution on CPU+GPU systems. It’s clear from our timing results and VTune profiles that keeping the CPU and GPU busy in the OpenMP parallel region gives the best performance. We encourage you to try this approach.

Intel: Efficient Heterogeneous Parallel Programming Using OpenMP (Best Practices to Keep the CPU and GPU Working at the Same Time)

Building a Deployment-Ready TensorFlow Model (Part 1)

This is an interesting three-part article on the OpenVINO™ Deep Learning Workbench.

Pruning deep learning models, combining network layers, developing for multiple hardware targets—getting from a trained deep learning model to a ready-to-deploy inference model seems like a lot of work, and it can be if you hand-code it.

With Intel® tools you can go from trained model to an optimized, packaged inference model entirely online without a single line of code. In this article, we’ll introduce you to the Intel® toolkits for deep learning deployments, including the Intel® Distribution of OpenVINO™ toolkit and Deep Learning Workbench. After that, we’ll get you signed up for a free Intel DevCloud for the Edge account so that you can start optimizing your own inference models.

For more information, see The No-Code Approach to Deploying Deep Learning Models on Intel® Hardware

ModuleNotFoundError: No module named ‘torch’ for OneAPI AI Toolkit

If you are using the oneAPI environment and you are hitting this error:

ModuleNotFoundError: No module named 'torch'

Here are some steps you may wish to use to troubleshoot.

Make sure you have activated the oneAPI environment using the command below:

% source /usr/local/intel/oneapi/2021.3/setvars.sh

:: initializing oneAPI environment ...
   -bash: BASH_VERSION = 4.2.46(2)-release
:: clck -- latest
:: compiler -- latest
:: dal -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: inspector -- latest
:: intelpython -- latest
:: ipp -- latest
:: itac -- latest
:: LPOT -- latest
:: mkl -- latest
:: modelzoo -- latest
:: mpi -- latest
:: pytorch -- latest
:: tbb -- latest
:: tensorflow -- latest
:: oneAPI environment initialized ::

You might want to check the conda environments:

% conda info --envs

# conda environments:
#
myenv                    /myhome/melvin/.conda/envs/myenv
myfsl                    /myhome/melvin/.conda/envs/myfsl
base                  *  /usr/local/intel/oneapi/2021.3/intelpython/latest
2021.3.0                 /usr/local/intel/oneapi/2021.3/intelpython/latest/envs/2021.3.0
myoneapi                 /usr/local/intel/oneapi/2021.3/intelpython/latest/envs/myoneapi
pytorch                  /usr/local/intel/oneapi/2021.3/intelpython/latest/envs/pytorch
pytorch-1.8.0            /usr/local/intel/oneapi/2021.3/intelpython/latest/envs/pytorch-1.8.0
tensorflow               /usr/local/intel/oneapi/2021.3/intelpython/latest/envs/tensorflow
tensorflow-2.5.0         /usr/local/intel/oneapi/2021.3/intelpython/latest/envs/tensorflow-2.5.0
                         /usr/local/intel/oneapi/2021.3/pytorch/1.8.0
                         /usr/local/intel/oneapi/2021.3/tensorflow/2.5.0

Activate the PyTorch environment:

% conda activate pytorch
(pytorch-1.8.0) [user1@node1 ~]$ python
Python 3.7.10 (default, Jun  4 2021, 06:52:02)
[GCC 9.3.0] :: Intel Corporation on linux
Type "help", "copyright", "credits" or "license" for more information.
Intel(R) Distribution for Python is brought to you by Intel Corporation.
Please check out: https://software.intel.com/en-us/python-distribution
>>> import torch

If you are still getting the error “ModuleNotFoundError: No module named 'torch'”, you may want to install PyTorch directly if you have root access:

% conda install pytorch torchvision cpuonly -c pytorch
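
Once the install completes, a quick check that the module now resolves (the version printed below is just an example):

% python -c "import torch; print(torch.__version__)"
1.8.0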

If not, you may want to create a private environment, similar to Creating Virtual Environment with Python using venv.
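
A minimal sketch of that approach (the environment path and package list are examples):

% python3 -m venv $HOME/envs/pytorch
% source $HOME/envs/pytorch/bin/activate
% pip install torch torchvision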

Webinar: High Performance GPU Acceleration – Part 1: Code Design

  • Online Registration Here
  • Date: 13th October 2021, 9am PDT

Heterogeneous computing comes with the challenge of designing code that can work in multi-processor/accelerator environments. Developers need to be equipped with the right set of metrics to make informed design and optimization decisions that take advantage of target hardware.

In Part 1 of this 2-part webinar series, Technical Consulting Engineer Cory Levels focuses on designing software for efficient offload from CPUs to GPUs—even before final hardware is available—using Intel® Advisor. Using a walkthrough of an ISO 3DFD example (3D isotropic Finite Difference), you will learn how to:

  • Optimize your CPU application for memory and compute
  • Identify efficient GPU offload opportunities and quantify the potential performance speedup
  • See the performance headroom of your GPU-offloaded code against hardware limitations, and get insights for an effective optimization roadmap

For more information, do take a look at the Intel site here.

Installing Intel® oneAPI AI Analytics Toolkit

What is included in the Intel oneAPI AI Analytics Toolkit? For more information, do take a look at Intel oneAPI AI Analytics Toolkit.

  • Intel® Distribution for Python*
  • Intel® Distribution of Modin* (via Anaconda distribution of the toolkit using the Conda package manager)
  • Intel® Low Precision Optimization Tool
  • Intel® Optimization for PyTorch*
  • Intel® Optimization for TensorFlow*
  • Model Zoo for Intel® Architecture
  • Download size: 2.18 GB
  • Date: August 2, 2021
  • Version: 2021.3

Command Line Installation

wget https://registrationcenter-download.intel.com/akdlm/irc_nas/18040/l_AIKit_p_2021.3.0.1370_offline.sh

sudo bash l_AIKit_p_2021.3.0.1370_offline.sh

Installation Instructions

Step 1: From the console, locate the downloaded install file.
Step 2: Use $ sudo sh ./<installer>.sh to launch the GUI Installer as root.
Optionally, use $ sh ./<installer>.sh to launch the GUI Installer as the current user.
Step 3: Follow the instructions in the installer.
Step 4: Explore the Get Started Guide.
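
If you are installing on a headless node, the offline installer can also run without the GUI. A sketch based on the silent-mode switches documented for the 2021 oneAPI installers (verify against the installer's --help output):

% sudo bash l_AIKit_p_2021.3.0.1370_offline.sh -a --silent --eula accept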

References:

  1. Intel oneAPI AI Analytics Toolkit

Installing Intel oneAPI HPC Toolkit for Linux

What is included in the oneAPI installer? For more information, do take a look at Get the Intel® oneAPI HPC Toolkit.

  • Intel® oneAPI DPC++/C++ Compiler
  • Intel® oneAPI Fortran Compiler
  • Intel® C++ Compiler Classic
  • Intel® Cluster Checker
  • Intel® Inspector
  • Intel® MPI Library
  • Intel® Trace Analyzer and Collector
  • Download size: 1.25 GB
  • Version: 2021.3
  • Date: June 21, 2021

wget https://registrationcenter-download.intel.com/akdlm/irc_nas/17912/l_HPCKit_p_2021.3.0.3230_offline.sh

sudo bash l_HPCKit_p_2021.3.0.3230_offline.sh

Installation Instructions:

  • Step 1: From the console, locate the downloaded install file.
  • Step 2: Use $ sudo sh ./<installer>.sh to launch the GUI Installer as root.
    Optionally, use $ sh ./<installer>.sh to launch the GUI Installer as current user.
  • Step 3: Follow the instructions in the installer.
  • Step 4: Explore the Get Started Guide.
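
After installation, you can verify the toolkit from a fresh shell by sourcing the environment script and querying one of the compilers. /opt/intel/oneapi is the default system-wide prefix; your site may use a different location, such as the setvars.sh path shown earlier:

% source /opt/intel/oneapi/setvars.sh
% icx --version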

How to Install Intel® oneAPI Toolkits

In this four-minute video, Intel’s David Liu provides installation instructions for an offline/local installation of Intel’s oneAPI Toolkits. David shows you where to find the Intel oneAPI Toolkits, how to download the toolkits, and the various ways to install the toolkits: from the GUI or from the command line (a silent install without the GUI, an interactive install, or a custom install).