
Releases: openvinotoolkit/openvino

2023.2.0.dev20230922

27 Sep 12:40
e7c1344
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find the OpenVINO™ toolkit 2023.2.0.dev20230922 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/

What's Changed

  • CPU runtime:
    • Optimized YOLOv8n and YOLOv8s models in BF16/FP32 precision.
    • Optimized Falcon model on 4th Generation Intel® Xeon® Scalable Processors.
  • GPU runtime:
    • INT8 weight compression further improves LLM performance. PR #19548
    • Optimizations for GEMM and fully connected layers on iGPU. PR #19780
  • TensorFlow FE:
    • Added support for Selu operation. PR #19528
    • Added support for XlaConvV2 operation. PR #19466
    • Added support for TensorListLength and TensorListResize operations. PR #19390
  • PyTorch FE:
    • New operations supported (a conversion sketch follows this list):
      • aten::minimum and aten::maximum. PR #19996
      • aten::broadcast_tensors. PR #19994
      • aten::logical_and, aten::logical_or, aten::logical_not, and aten::logical_xor. PR #19981
      • aten::scatter_reduce; also extended aten::scatter. PR #19980
      • prim::TupleIndex. PR #19978
      • Mixed precision in aten::min/max. PR #19936
      • aten::tile. PR #19645
      • aten::one_hot. PR #19779
      • PReLU. PR #19515
      • aten::swapaxes. PR #19483
      • Non-boolean inputs for the or and and operations. PR #19268
  • Torchvision NMS can accept negative scores. PR #19826
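
As a quick illustration of the expanded PyTorch frontend coverage, the hedged sketch below converts a toy module that uses two of the newly listed ops (aten::minimum and aten::one_hot) with openvino.convert_model. TinyModel and the input shapes are illustrative placeholders, not taken from the PRs above.

```python
# Minimal sketch, not from the release notes: a toy module exercising
# two newly supported PyTorch ops, converted via openvino.convert_model.
import torch
import openvino as ov

class TinyModel(torch.nn.Module):  # placeholder model for illustration
    def forward(self, a, b):
        m = torch.minimum(a, b)                      # lowered as aten::minimum
        idx = m.argmax(dim=-1)                       # int64 indices
        return torch.nn.functional.one_hot(idx, 8)   # lowered as aten::one_hot

example = (torch.randn(2, 8), torch.randn(2, 8))
ov_model = ov.convert_model(TinyModel(), example_input=example)

# The converted model can be compiled and run directly.
compiled = ov.compile_model(ov_model, "CPU")
result = compiled([t.numpy() for t in example])
```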

New openvino_notebooks:

  • Visual Question Answering and Image Captioning using BLIP

Fixed GitHub issues

  • Fixed #19784 “[Bug]: Cannot install libprotobuf-dev along with libopenvino-2023.0.2 on Ubuntu 22.04” with PR #19788
  • Fixed #19617 “Add a clear error message when creating an empty Constant” with PR #19674
  • Fixed #19616 “Align openvino.compile_model and openvino.Core.compile_model functions” with PR #19778
  • Fixed #19469 “[Feature Request]: Add SeLu activation in the OpenVino IR (TensorFlow Conversion)” with PR #19528
  • Fixed #19019 “[Bug]: Low performance of the TF quantized model.” with PR #19735
  • Fixed #19018 “[Feature Request]: Support aarch64 python wheel for Linux” with PR #19594
  • Fixed #18831 “Question: openvino support for Nvidia Jetson Xavier ?” with PR #19594
  • Fixed #18786 “OpenVINO Wheel does not install Debug libraries when CMAKE_BUILD_TYPE is Debug” with PR #19197
  • Fixed #18731 “[Bug] Wrong output shapes of MaxPool” with PR #18965
  • Fixed #18091 “[Bug] 2023.0 Version crashes on Jetson Nano - L4T - Ubuntu 18.04” with PR #19717
  • Fixed #7194 “Conan for simplifying dependency management” with PR #17580

Acknowledgements

Thanks to the OpenVINO developer community for their contributions:
@siddhant-0707,
@PRATHAM-SPS,
@okhovan

Full Changelog: 2023.1.0.dev20230811...2023.2.0.dev20230922

2023.1.0

18 Sep 09:20
47b736f

Summary of major features and improvements

  • More Generative AI options with Hugging Face and improved PyTorch model support.
    • NEW: Your PyTorch solutions are now further enhanced with OpenVINO. You have more options, and you no longer need to convert to ONNX for deployment. Developers can now use their API of choice, PyTorch or OpenVINO, for added performance benefits. Additionally, PyTorch models can be automatically imported and converted for quicker deployment. You can continue to use OpenVINO tools for advanced model compression and deployment, ensuring flexibility and a range of options.
    • torch.compile (preview) – OpenVINO is now available as a backend through PyTorch torch.compile, empowering developers to utilize the OpenVINO toolkit through PyTorch APIs (a usage sketch follows this list). This feature has also been integrated into the Automatic1111 Stable Diffusion Web UI, helping developers achieve accelerated performance for Stable Diffusion 1.5 and 2.1 on Intel CPUs and GPUs on both native Linux and Windows platforms.
    • Optimum Intel – Hugging Face and Intel continue to enhance top generative AI models by optimizing execution, making your models run faster and more efficiently on both CPU and GPU. OpenVINO serves as a runtime for inferencing execution. New PyTorch auto import and conversion capabilities have been enabled, along with support for weights compression to achieve further performance gains.
  • Broader LLM model support and more model compression techniques
    • Enhanced performance and accessibility for Generative AI: Runtime performance and memory usage have been significantly optimized, especially for Large Language models (LLMs). Models used for chatbots, instruction following, code generation, and many more, including prominent models like BLOOM, Dolly, Llama 2, GPT-J, GPTNeoX, ChatGLM, and Open-Llama have been enabled.
    • Improved LLMs on GPU – Model coverage for dynamic shapes support has been expanded, further helping the performance of generative AI workloads on both integrated and discrete GPUs. Furthermore, memory reuse and weight memory consumption for dynamic shapes have been improved.
    • Neural Network Compression Framework (NNCF) now includes an 8-bit weights compression method, making it easier to compress and optimize LLM models. SmoothQuant method has been added for more accurate and efficient post-training quantization for Transformer-based models.
  • More portability and performance to run AI at the edge, in the cloud or locally.
    • NEW: Support for Intel® Core™ Ultra (codename Meteor Lake). This new generation of Intel CPUs is tailored to excel in AI workloads with a built-in inference accelerator.
    • Integration with MediaPipe – Developers now have direct access to this framework for building multipurpose AI pipelines. Easily integrate with OpenVINO Runtime and OpenVINO Model Server to enhance performance for faster AI model execution. You also benefit from seamless model management and version control, as well as custom logic integration with additional calculators and graphs for tailored AI solutions. Lastly, you can scale faster by delegating deployment to remote hosts via gRPC/REST interfaces for distributed processing.
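
For the torch.compile preview mentioned above, here is a minimal usage sketch, assuming the "openvino" backend name that the feature registers once the openvino package is installed; the ResNet-18 workload is only a placeholder.

```python
# Minimal sketch of routing torch.compile through OpenVINO.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # placeholder workload

# Select OpenVINO as the torch.compile backend (requires openvino installed).
compiled_model = torch.compile(model, backend="openvino")

with torch.no_grad():
    out = compiled_model(torch.randn(1, 3, 224, 224))  # first call triggers compilation
```

For the Optimum Intel path, Hugging Face models can similarly be exported on the fly, for example with OVModelForCausalLM.from_pretrained(model_id, export=True), per the Optimum Intel project documentation.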

Support Change and Deprecation Notices

  • OpenVINO™ Development Tools package (pip install openvino-dev) is currently being deprecated and will be removed from installation options and distribution channels with 2025.0. For more info, see the documentation for Legacy Features.
  • Tools:
    • Accuracy Checker is deprecated and will be discontinued with 2024.0.
    • Post-Training Optimization Tool (POT) has been deprecated and will be discontinued with 2024.0.
  • Runtime:
    • Intel® Gaussian & Neural Accelerator (Intel® GNA) is being deprecated; the GNA plugin will be discontinued with 2024.0.
    • OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0.
    • Python 3.7 will be discontinued with the 2023.2 LTS release.

You can find the OpenVINO™ toolkit 2023.1 release here:

Release documentation is available here: https://docs.openvino.ai/2023.1
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-1.html

2023.0.2

04 Sep 16:56
e662b1a

This release provides functional bug fixes and capability updates for 2023.0 that enable developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.

Note: This is a standard release intended for developers who prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for 2 years (1 year of bug fixes, and 2 years for security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.

Major changes:

  • OpenVINO GNA Plugin:
    • Fixed an issue where the GNA device would not work on Gemini Lake (GLK) platforms.
    • Fixed a memory leak during HLK testing.
  • OpenVINO CPU Plugin:
    • Fixed issues in Multi-Threading 2.0 when retrieving CPU mapping details on Windows 7 platforms.
  • OpenVINO Core:
    • Fixed an issue that occurred when compiling a PyTorch model containing the unfold op.

You can find the OpenVINO™ toolkit 2023.0.2 release here:

Release documentation is available here: https://docs.openvino.ai/2023.0/home.html

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html

2023.1.0.dev20230811

17 Aug 11:05
e33de35
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find the OpenVINO™ toolkit 2023.1.0.dev20230811 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/

What's Changed

  • CPU runtime:
    • Enabled weights decompression support for Large Language Models (LLMs). The implementation supports AVX2 and AVX-512 HW targets for Intel® Core™ processors, improving performance in latency mode (comparison: FP32 vs. FP32+INT8 weights). For 4th Generation Intel® Xeon® Scalable Processors (formerly Sapphire Rapids), this INT8 decompression feature improves performance compared to pure BF16 inference. PRs: #18915, #19111
    • Reduced memory consumption of the ‘compile model’ stage by moving constant folding of Transpose nodes to the CPU Runtime side. PR: #18877
    • Set FP16 inference precision by default for non-convolution networks on ARM; convolution networks will still be executed in FP32 (a sketch of overriding this default follows this list). PRs: #19069, #19192, #19176
  • GPU runtime: Added paddings for dynamic convolutions to improve performance for models like Stable Diffusion v2.1. PR: #19001
  • Python API:
    • Added the torchvision.transforms object to OpenVINO preprocessing. PR: #17934
    • All Python tools related to OpenVINO are now available via a single namespace, improving API readability. PR: #18157
  • TensorFlow FE:
    • Added support for the TensorFlow 1 Checkpoint format. All native TensorFlow formats are now enabled.
    • Added support for 8 new operations.
  • PyTorch FE:
    • Added support for 7 new operations. See the PyTorch model conversion documentation for details on converting PyTorch models.
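
Related to the ARM FP16 default noted above, here is a minimal sketch, assuming the 2023.x Python property API, of pinning the inference precision explicitly; "model.xml" is a placeholder path.

```python
# Minimal sketch: override the runtime's default inference precision
# (e.g. force FP32 where FP16 is the ARM default).
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path
compiled = core.compile_model(
    model, "CPU", {"INFERENCE_PRECISION_HINT": "f32"}
)
print(compiled.get_property("INFERENCE_PRECISION_HINT"))
```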

New openvino_notebooks

Fixed GitHub issues

  • Fixed #18978 "Webassembly build fails" with PR #19005
  • Fixed #18847 "Debugging OpenVINO Python GIL Error" with PR #18848
  • Fixed #18465 "OpenVINO can't be built in an environment that has an 'ambient' oneDNN installation" with PR #18805

Acknowledgements
Thanks to the OpenVINO developer community for their contributions: @DmitriyValetov, @kai-waang

Full Changelog: 2023.1.0.dev20230728...2023.1.0.dev20230811

2023.1.0.dev20230728

02 Aug 12:10
c7cde6a
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find the OpenVINO™ toolkit 2023.1.0.dev20230728 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/

2023.0.1

03 Jul 17:58
fa1c419

This release provides functional bug fixes and capability updates for 2023.0 that enable developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.

Note: This is a standard release intended for developers who prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for 2 years (1 year of bug fixes, and 2 years for security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.

Major changes:

  • POT:
    • Fixed errors caused by the default use of the MMap allocator (enabled in 2023.0). Only Windows is affected.
  • OpenVINO Core:
    • Fixed an issue with handling directory paths passed to read_model() on Windows (see the sketch below).
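
For context on the read_model() fix, a minimal sketch of the conventional call follows, assuming the 2023.0-era openvino.runtime namespace; the paths are placeholders.

```python
# Minimal sketch: read_model() is conventionally given the IR .xml file
# path (the .bin is located next to it), or both paths explicitly.
from openvino.runtime import Core

core = Core()
model = core.read_model("model_dir/model.xml")  # placeholder path
# Weights can also be passed explicitly:
model = core.read_model("model_dir/model.xml", "model_dir/model.bin")
```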

You can find the OpenVINO™ toolkit 2023.0.1 release here:

Release documentation is available here: https://docs.openvino.ai/2023.0/home.html

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html

2023.1.0.dev20230623

28 Jun 11:18
54e9690
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find the OpenVINO™ toolkit 2023.1.0.dev20230623 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/

2022.3.1

20 Jun 09:33
cf2c7da

Major Features and Improvements Summary

This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years (one year for bug fixes, and two years for security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for more details.

  • This 2022.3.1 LTS release provides functional bug fixes and minor capability changes for the previous 2022.3 Long-Term Support (LTS) release, enabling developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.
  • Intel® Movidius™ VPU-based products are supported in this release.

You can find the OpenVINO™ toolkit 2022.3.1 release here:

Release documentation is available here: https://docs.openvino.ai/2022.3/

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-lts/2022-3.html

2023.0.0

01 Jun 07:19
b4452d5

Summary of major features and improvements

  • More integrations, minimizing code changes
    • Now you can load TensorFlow and TensorFlow Lite models directly in OpenVINO Runtime and OpenVINO Model Server. Models are converted automatically. For maximum performance, it is still recommended to convert to the OpenVINO Intermediate Representation (IR) format before loading the model. Additionally, we’ve introduced similar functionality for PyTorch models as a preview feature, letting you convert PyTorch models directly without needing to convert to ONNX.
    • Support for Python 3.11
    • NEW: C++ developers can now install OpenVINO runtime from Conda Forge
    • NEW: ARM processors are now supported in the CPU plugin, including dynamic shapes, full processor performance, and broad sample code/notebook coverage. Officially validated for Raspberry Pi 4 and Apple® Mac M1/M2
    • Preview: A new Python API has been introduced to allow developers to convert and optimize models directly from Python scripts
  • Broader model support and optimizations
    • Expanded model support for generative AI: CLIP, BLIP, Stable Diffusion 2.0, text processing models, and transformer models (e.g., S-BERT, GPT-J), as well as others of note: Detectron2, Paddle Slim, RNN-T, Segment Anything Model (SAM), Whisper, and YOLOv8.
    • Initial support for dynamic shapes on GPU: you no longer need to switch to static shapes when leveraging the GPU, which is especially important for NLP models.
    • Neural Network Compression Framework (NNCF) is now the main quantization solution. You can use it for both post-training optimization and quantization-aware training. Try it out: pip install nncf (a post-training quantization sketch follows this list)
  • Portability and performance
    • The CPU plugin now offers thread scheduling on 12th Gen Intel® Core™ processors and newer. You can choose to run inference on E-cores, P-cores, or both, depending on your application’s configuration, making it possible to optimize for performance or for power savings as needed.
    • NEW: Default inference precision - no matter which device you use, OpenVINO defaults to the format that enables its optimal performance, for example FP16 for GPU or BF16 for 4th Generation Intel® Xeon®. You no longer need to convert the model to a specific IR precision beforehand, and you still have the option of running in accuracy mode if needed.
    • Model caching on GPU is now improved with more efficient model loading/compiling.
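
As a pointer for the NNCF item above, here is a minimal post-training quantization sketch, assuming the nncf.Dataset/nncf.quantize API; the model path and data loader are placeholders, and transform_fn is a user-supplied adapter from dataset items to model inputs.

```python
# Minimal sketch of NNCF post-training quantization of an OpenVINO IR model.
import nncf
from openvino.runtime import Core, serialize

core = Core()
model = core.read_model("model.xml")  # placeholder path

def transform_fn(data_item):
    # Adapt one dataset item to the model input; (images, labels) assumed.
    images, _labels = data_item
    return images

my_data_loader = [...]  # placeholder: any iterable of (images, labels) pairs
calibration_dataset = nncf.Dataset(my_data_loader, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)
serialize(quantized_model, "model_int8.xml")
```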

You can find the OpenVINO™ toolkit 2023.0 release here:

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html

2023.0.0.dev20230427

04 May 12:07
40bf400
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version are subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find the OpenVINO™ toolkit 2023.0.0.dev20230427 pre-release version here:

Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/