Intel OpenVINO 2022.2 is available

Key updates include:

Broader Model & Hardware Support

  • Preview support for upcoming Intel® hardware, including the Intel® Data Center GPU Flex Series and Intel® Arc™ GPUs
  • Support for 4th Gen Intel® Xeon® Scalable processors (code-named Sapphire Rapids)
  • Reduced memory consumption when using dynamic shapes on CPU to improve efficiency of NLP applications

Portability and Performance

A new "cumulative throughput" performance hint in the AUTO device plug-in enables multiple accelerators (e.g., multiple GPUs) to be used at once, maximizing inferencing performance.
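
For illustration, here is a minimal sketch of this hint with the OpenVINO Python API; the model file name model.xml is a placeholder:

```python
# Sketch: compiling a model with the AUTO plug-in and the new
# "cumulative throughput" performance hint (OpenVINO 2022.2 Python API).
# "model.xml" is a placeholder for any IR model.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# CUMULATIVE_THROUGHPUT lets AUTO run inference requests across all
# suitable devices (e.g., several GPUs) at once instead of picking one.
compiled = core.compile_model(
    model, "AUTO", {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"}
)
```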

To download the latest release, see the Intel® Distribution of OpenVINO™ Toolkit page.

Intel® Distribution of OpenVINO™ Toolkit 2022.1 is available!

For more information, see the Intel® Distribution of OpenVINO™ Toolkit page.

Updated, Cleaner API

  • The new OpenVINO API 2.0 was introduced, aligning OpenVINO inputs and outputs with the source frameworks: input and output tensors use native framework layouts and element types (see the sketch after this list).
  • The API parameters in Model Optimizer have been reduced to minimize complexity, and performance of model conversion for Open Neural Network Exchange (ONNX*) models has been significantly improved.
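
As a rough sketch of the API 2.0 style (the model file model.xml and the image-shaped input are placeholder assumptions):

```python
# Sketch: basic inference with the OpenVINO API 2.0 (openvino.runtime).
# Input/output tensors keep native framework layouts and element types,
# so a NumPy array in the model's expected shape can be passed directly.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")
compiled = core.compile_model(model, "CPU")

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
request = compiled.create_infer_request()
results = request.infer({0: data})    # inputs keyed by index or name
output = results[compiled.output(0)]  # NumPy array in the native layout
```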

Broader Model Support

  • With Dynamic Input Shapes capabilities on CPU, OpenVINO can adapt to multiple input dimensions in a single model, providing more complete NLP support (see the sketch after this list). Support for Dynamic Shapes on additional XPUs is expected in a future dot release.
  • New models with a focus on NLP, a new Anomaly Detection category, and support for conversion and inference of select PaddlePaddle* models:
    • Pretrained models for anomaly segmentation focused on industrial inspection, a now-trainable speech-denoising model, plus updates to speech recognition and speech synthesis
    • A combined demonstration that includes noise reduction, speech recognition, question answering, translation, and text-to-speech
    • Public models with a focus on NLP: ContextNet, Speech-Transformer, HiFi-GAN, Glow-TTS, FastSpeech2, and Wav2Vec
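
As mentioned above, dynamic input shapes let one compiled model serve variable-length inputs on CPU. A minimal sketch, assuming a single-input NLP model model.xml and illustrative sequence-length bounds:

```python
# Sketch: marking an input dimension as dynamic so one compiled model
# can serve variable-length NLP sequences on CPU (OpenVINO 2022.1+).
# "model.xml" and the dimension bounds are placeholder assumptions.
import numpy as np
from openvino.runtime import Core, Dimension, PartialShape

core = Core()
model = core.read_model("model.xml")

# Batch stays 1; sequence length may vary between 1 and 512 tokens.
model.reshape(PartialShape([1, Dimension(1, 512)]))

compiled = core.compile_model(model, "CPU")
request = compiled.create_infer_request()

# Requests with different sequence lengths reuse the same compiled model.
for seq_len in (16, 128):
    tokens = np.zeros((1, seq_len), dtype=np.int64)
    request.infer({0: tokens})
```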

Portability and Performance

  • The new AUTO plug-in self-discovers available system inferencing capacity based on model requirements, so applications no longer need to know their compute environment in advance.
  • Automatic batching functionality, enabled via code hints, automatically scales batch size based on the XPU and available memory (see the sketch after this list).
  • Built with 12th Gen Intel® Core™ processors (formerly code-named Alder Lake) in mind, supporting the hybrid architecture needed to deliver enhancements for high-performance inferencing on CPUs and integrated GPUs.
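
A minimal sketch combining the AUTO plug-in with the THROUGHPUT performance hint, which lets OpenVINO apply automatic batching where the device supports it (model.xml is a placeholder):

```python
# Sketch: no device is hard-coded; AUTO inspects the system and the
# model's requirements. With the THROUGHPUT hint, OpenVINO may
# transparently batch concurrent infer requests (automatic batching),
# sizing the batch to the XPU and available memory as described above.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

compiled = core.compile_model(
    model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"}
)

# How many parallel infer requests the chosen configuration favors.
print("Optimal number of infer requests:",
      compiled.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS"))
```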