[DOCS] Update of hyperlinks to 2024 + new ov homepage diagram image for master (openvinotoolkit#23091)

* Update of links in docs to 2024 in repo.
* Replaced ov homepage diagram with a new version without Kaldi, MXNet
  and Caffe
msmykx-intel authored Feb 28, 2024
1 parent 050e967 commit 8d49595
Showing 141 changed files with 1,768 additions and 1,787 deletions.
28 changes: 14 additions & 14 deletions README.md
@@ -67,18 +67,18 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
-<td> <a href="https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
+<td> <a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE), Intel® Advanced Matrix Extensions (Intel® AMX)</td>
</tr>
<tr>
-<td> <a href="https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">ARM CPU</a></tb>
+<td> <a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html">ARM CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with Apple silicon
</tr>
<tr>
<td>GPU</td>
-<td><a href="https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
+<td><a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
@@ -96,22 +96,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
</thead>
<tbody>
<tr>
-<td><a href="https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
+<td><a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.html">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
-<td><a href="https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
+<td><a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
-<td><a href="https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
+<td><a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.html">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
-<td><a href="https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
+<td><a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/multi-device.html">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
@@ -160,9 +160,9 @@ You can also check out [Awesome OpenVINO](https://github.com/openvinotoolkit/awe
## System requirements

The system requirements vary depending on platform and are available on dedicated pages:
-- [Linux](https://docs.openvino.ai/2023.3/openvino_docs_install_guides_installing_openvino_linux_header.html)
-- [Windows](https://docs.openvino.ai/2023.3/openvino_docs_install_guides_installing_openvino_windows_header.html)
-- [macOS](https://docs.openvino.ai/2023.3/openvino_docs_install_guides_installing_openvino_macos_header.html)
+- [Linux](https://docs.openvino.ai/2024/get-started/install-openvino-overview/install-openvino-linux-header.html)
+- [Windows](https://docs.openvino.ai/2024/get-started/install-openvino-overview/install-openvino-windows-header.html)
+- [macOS](https://docs.openvino.ai/2024/get-started/install-openvino-overview/install-openvino-macos-header.html)

## How to build

@@ -177,7 +177,7 @@ See [CONTRIBUTING](./CONTRIBUTING.md) for contribution details. Thank you!
Visit [Intel DevHub Discord server](https://discord.gg/7pVRxUwdWG) if you need help or wish to talk to OpenVINO developers. You can go to the channel dedicated to Good First Issue support if you are working on a task.

## Take the issue
If you wish to be assigned to an issue please add a comment with `.take` command.

## Get support

@@ -192,7 +192,7 @@ Report questions, issues and suggestions, using:

* [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki)
* [OpenVINO Storage](https://storage.openvinotoolkit.org/)
* Additional OpenVINO™ toolkit modules:
* [openvino_contrib](https://github.com/openvinotoolkit/openvino_contrib)
* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
@@ -206,6 +206,6 @@ Report questions, issues and suggestions, using:
\* Other names and brands may be claimed as the property of others.

[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
-[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
-[OpenVINO Model Converter (OVC)]:https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html#convert-a-model-in-cli-ovc
+[OpenVINO™ Runtime]:https://docs.openvino.ai/2024/openvino-workflow/running-inference.html
+[OpenVINO Model Converter (OVC)]:https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html#convert-a-model-in-cli-ovc
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
@@ -555,7 +555,7 @@ to OpenVINO IR or ONNX before running inference should be considered the default
OpenVINO versions of 2023 are mostly compatible with the old instructions,
through a deprecated MO tool, installed with the deprecated OpenVINO Developer Tools package.

-`OpenVINO 2023.0 <https://docs.openvino.ai/2023.3/Supported_Model_Formats_MO_DG.html>`__ is the last
+`OpenVINO 2023.0 <https://docs.openvino.ai/archive/2023.0/Supported_Model_Formats.html>`__ is the last
release officially supporting the MO conversion process for the legacy formats.


@@ -5,7 +5,7 @@ Converting a TensorFlow RetinaNet Model


.. meta::
:description: Learn how to convert a RetinaNet model
from TensorFlow to the OpenVINO Intermediate Representation.


@@ -14,11 +14,11 @@ Converting a TensorFlow RetinaNet Model
The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR).

`Public RetinaNet model <https://github.com/fizyr/keras-retinanet>`__ does not contain pretrained TensorFlow weights.
-To convert this model to the TensorFlow format, follow the `Reproduce Keras to TensorFlow Conversion tutorial <https://docs.openvino.ai/2023.3/omz_models_model_retinanet_tf.html>`__.
+To convert this model to the TensorFlow format, follow the `Reproduce Keras to TensorFlow Conversion tutorial <https://docs.openvino.ai/2024/omz_models_model_retinanet_tf.html>`__.

After converting the model to TensorFlow format, run the following command:

@@ -5,8 +5,8 @@ Overview of OpenVINO Plugin Library


.. meta::
:description: Develop and implement independent inference solutions for
different devices with the components of plugin architecture
of OpenVINO.


@@ -28,8 +28,8 @@ Overview of OpenVINO Plugin Library
openvino_docs_ie_plugin_api_references


The plugin architecture of OpenVINO allows to develop and plug independent inference
solutions dedicated to different devices. Physically, a plugin is represented as a dynamic library
exporting the single ``create_plugin_engine`` function that allows to create a new plugin instance.

OpenVINO Plugin Library
@@ -78,7 +78,7 @@ OpenVINO plugin dynamic library consists of several main components:
* Provides the device specific remote tensor API and implementation.


.. note::

This documentation is written based on the ``Template`` plugin, which demonstrates plugin development details. Find the complete code of the ``Template``, which is fully compilable and up-to-date, at ``<openvino source dir>/src/plugins/template``.

@@ -96,6 +96,6 @@ Detailed Guides
API References
##############

-* `OpenVINO Plugin API <https://docs.openvino.ai/2023.3/api/c_cpp_api/group__ov__dev__api.html>`__
-* `OpenVINO Transformation API <https://docs.openvino.ai/2023.3/api/c_cpp_api/group__ie__transformation__api.html>`__
+* `OpenVINO Plugin API <https://docs.openvino.ai/2024/api/c_cpp_api/group__ov__dev__api.html>`__
+* `OpenVINO Transformation API <https://docs.openvino.ai/2024/api/c_cpp_api/group__ie__transformation__api.html>`__

@@ -5,7 +5,7 @@ Plugin API Reference


.. meta::
:description: Learn about extra API references required for the development of
:description: Learn about extra API references required for the development of
plugins in OpenVINO.

.. toctree::
@@ -17,6 +17,6 @@ Plugin API Reference

The guides below provides extra API references needed for OpenVINO plugin development:

-* `OpenVINO Plugin API <https://docs.openvino.ai/2023.3/api/c_cpp_api/group__ov__dev__api.html>`__
-* `OpenVINO Transformation API <https://docs.openvino.ai/2023.3/api/c_cpp_api/group__ie__transformation__api.html>`__
+* `OpenVINO Plugin API <https://docs.openvino.ai/2024/api/c_cpp_api/group__ov__dev__api.html>`__
+* `OpenVINO Transformation API <https://docs.openvino.ai/2024/api/c_cpp_api/group__ie__transformation__api.html>`__

@@ -135,16 +135,16 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
.. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
:width: 400

-Try the `Python Quick Start Example <https://docs.openvino.ai/2023.3/notebooks/201-vision-monodepth-with-output.html>`__ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.
+Try the `Python Quick Start Example <https://docs.openvino.ai/2024/notebooks/201-vision-monodepth-with-output.html>`__ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.

Get started with Python
+++++++++++++++++++++++

Visit the :doc:`Tutorials <tutorials>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:

-* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2023.3/notebooks/002-openvino-api-with-output.html>`__
-* `Basic image classification program with Hello Image Classification <https://docs.openvino.ai/2023.3/notebooks/001-hello-world-with-output.html>`__
-* `Convert a PyTorch model and use it for image background removal <https://docs.openvino.ai/2023.3/notebooks/205-vision-background-removal-with-output.html>`__
+* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2024/notebooks/002-openvino-api-with-output.html>`__
+* `Basic image classification program with Hello Image Classification <https://docs.openvino.ai/2024/notebooks/001-hello-world-with-output.html>`__
+* `Convert a PyTorch model and use it for image background removal <https://docs.openvino.ai/2024/notebooks/205-vision-background-removal-with-output.html>`__



@@ -264,7 +264,7 @@ You need a model that is specific for your inference task. You can get it from o
Convert the Model
--------------------

-If Your model requires conversion, check the `article <https://docs.openvino.ai/2023.3/openvino_docs_get_started_get_started_demos.html>`__ for information how to do it.
+If Your model requires conversion, check the `article <https://docs.openvino.ai/2024/learn-openvino/openvino-samples/get-started-demos.html>`__ for information how to do it.

.. _download-media:

@@ -213,6 +213,6 @@ Additional Resources
- :doc:`Get Started with Samples <openvino_docs_get_started_get_started_demos>`
- :doc:`Using OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>`
- :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
-- `API Reference <https://docs.openvino.ai/2023.2/api/api_reference.html>`__
+- `API Reference <https://docs.openvino.ai/2024/api/api_reference.html>`__
- `Hello NV12 Input Classification C++ Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/hello_nv12_input_classification/README.md>`__
- `Hello NV12 Input Classification C Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/c/hello_nv12_input_classification/README.md>`__
@@ -64,7 +64,7 @@ Model input dimensions can be specified as dynamic using the model.reshape metho

Some models may already have dynamic shapes out of the box and do not require additional configuration. This can either be because it was generated with dynamic shapes from the source framework, or because it was converted with Model Conversion API to use dynamic shapes. For more information, see the Dynamic Dimensions “Out of the Box” section.

-The examples below show how to set dynamic dimensions with a model that has a static ``[1, 3, 224, 224]`` input shape (such as `mobilenet-v2 <https://docs.openvino.ai/2023.3/omz_models_model_mobilenet_v2.html>`__). The first example shows how to change the first dimension (batch size) to be dynamic. In the second example, the third and fourth dimensions (height and width) are set as dynamic.
+The examples below show how to set dynamic dimensions with a model that has a static ``[1, 3, 224, 224]`` input shape. The first example shows how to change the first dimension (batch size) to be dynamic. In the second example, the third and fourth dimensions (height and width) are set as dynamic.

.. tab-set::

@@ -177,7 +177,7 @@ The lower and/or upper bounds of a dynamic dimension can also be specified. They
.. tab-item:: C
:sync: c

-The dimension bounds can be coded as arguments for `ov_dimension <https://docs.openvino.ai/2023.3/api/c_cpp_api/structov__dimension.html>`__, as shown in these examples:
+The dimension bounds can be coded as arguments for `ov_dimension <https://docs.openvino.ai/2024/api/c_cpp_api/structov__dimension.html>`__, as shown in these examples:

.. doxygensnippet:: docs/snippets/ov_dynamic_shapes.c
:language: cpp
@@ -440,7 +440,7 @@ To build your project using CMake with the default build tools currently availab
Additional Resources
####################

-* See the :doc:`OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>` page or the `Open Model Zoo Demos <https://docs.openvino.ai/2023.3/omz_demos.html>`__ page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
+* See the :doc:`OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>` page or the `Open Model Zoo Demos <https://docs.openvino.ai/2024/omz_demos.html>`__ page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
* :doc:`OpenVINO™ Runtime Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`
* :doc:`String Tensors <openvino_docs_OV_UG_string_tensors>`
* :doc:`Using Encrypted Models with OpenVINO <openvino_docs_OV_UG_protecting_model_guide>`