From 04959875fcc0ce261fe68bec8cbef8fd210c3073 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 22 Jan 2025 14:56:44 +0100
Subject: [PATCH] docs: use "OpenVINO GenAI" instead of "GenAI flavor"

---
 .../about-openvino/key-features.rst           |  2 +-
 .../get-started/configurations.rst            |  5 ++--
 .../get-started/install-openvino.rst          | 14 +++++-----
 .../install-openvino-genai.rst                | 28 ++++++++++---------
 .../openvino-workflow-generative.rst          |  2 +-
 .../inference-with-genai-on-npu.rst           |  4 +--
 .../inference-with-genai.rst                  |  2 +-
 .../model-caching-overview.rst                | 14 ++++++----
 8 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/docs/articles_en/about-openvino/key-features.rst b/docs/articles_en/about-openvino/key-features.rst
index c751a5bc65d3cf..7e4ffab3cbb2ec 100644
--- a/docs/articles_en/about-openvino/key-features.rst
+++ b/docs/articles_en/about-openvino/key-features.rst
@@ -14,7 +14,7 @@ Easy Integration
    OpenVINO optimizations to your PyTorch models directly with a single line of code.

 | :doc:`GenAI Out Of The Box <../openvino-workflow-generative/inference-with-genai>`
-| With the genAI flavor of OpenVINO, you can run generative AI with just a couple lines of code.
+| With OpenVINO GenAI, you can run generative models with just a few lines of code.
   Check out the GenAI guide for instructions on how to do it.

 | `Python / C++ / C / NodeJS APIs `__

diff --git a/docs/articles_en/get-started/configurations.rst b/docs/articles_en/get-started/configurations.rst
index 3e471c33445292..c0e885dd956c78 100644
--- a/docs/articles_en/get-started/configurations.rst
+++ b/docs/articles_en/get-started/configurations.rst
@@ -32,8 +32,9 @@ potential of OpenVINO™. Check the following list for components used in your w
 for details.

 | **OpenVINO GenAI Dependencies**
-| OpenVINO GenAI is a flavor of OpenVINO, aiming to simplify running generative
-  AI models. For information on the dependencies required to use OpenVINO GenAI, see the
+| OpenVINO GenAI is a tool based on the OpenVINO Runtime that simplifies the process of
+  running generative AI models. For information on the dependencies required to use
+  OpenVINO GenAI, see the
   :doc:`guide on OpenVINO GenAI Dependencies `.

 | **Open Computer Vision Library**

diff --git a/docs/articles_en/get-started/install-openvino.rst b/docs/articles_en/get-started/install-openvino.rst
index 387a0bf2ab37e3..7616a87d6f3384 100644
--- a/docs/articles_en/get-started/install-openvino.rst
+++ b/docs/articles_en/get-started/install-openvino.rst
@@ -11,11 +11,11 @@ Install OpenVINO™ 2025.0
    :maxdepth: 3
    :hidden:

+   OpenVINO GenAI
    OpenVINO Runtime on Linux
    OpenVINO Runtime on Windows
    OpenVINO Runtime on macOS
    Create an OpenVINO Yocto Image
-   OpenVINO GenAI Flavor

 .. raw:: html

@@ -30,13 +30,13 @@ All currently supported versions are:

 * 2023.3 (LTS)

-.. dropdown:: Effortless GenAI integration with OpenVINO GenAI Flavor
+.. dropdown:: Effortless GenAI integration with OpenVINO GenAI

-   A new OpenVINO GenAI Flavor streamlines application development by providing
-   LLM-specific interfaces for easy integration of language models, handling tokenization and
-   text generation. For installation and usage instructions, proceed to
-   :doc:`Install OpenVINO GenAI Flavor <../openvino-workflow-generative>` and
-   :doc:`Run LLMs with OpenVINO GenAI Flavor <../openvino-workflow-generative/inference-with-genai>`.
+   OpenVINO GenAI streamlines application development by providing LLM-specific interfaces for
+   easy integration of language models, handling tokenization and text generation.
+   For installation and usage instructions, check
+   :doc:`OpenVINO GenAI installation <../openvino-workflow-generative>` and
+   :doc:`inference with OpenVINO GenAI <../openvino-workflow-generative/inference-with-genai>`.

 .. dropdown:: Building OpenVINO from Source

diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst b/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst
index b548353b36977e..026a76f2ee86d7 100644
--- a/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst
+++ b/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst
@@ -1,24 +1,26 @@
 Install OpenVINO™ GenAI
 ====================================

-OpenVINO GenAI is a new flavor of OpenVINO, aiming to simplify running inference of generative AI models.
-It hides the complexity of the generation process and minimizes the amount of code required.
-You can now provide a model and input context directly to OpenVINO, which performs tokenization of the
-input text, executes the generation loop on the selected device, and returns the generated text.
-For a quickstart guide, refer to the :doc:`GenAI API Guide <../../openvino-workflow-generative/inference-with-genai>`.
-
-To see GenAI in action, check the Jupyter notebooks:
-`LLM-powered Chatbot `__ and
+OpenVINO GenAI is a tool that simplifies generative AI model inference. It is based on the
+OpenVINO Runtime and hides the complexity of the generation process, minimizing the amount of
+code required. You provide a model and the input context directly to the tool, and it
+performs tokenization of the input text, executes the generation loop on the selected device,
+and returns the generated content. For a quickstart guide, refer to the
+:doc:`GenAI API Guide <../../openvino-workflow-generative/inference-with-genai>`.
+
+To see OpenVINO GenAI in action, check these Jupyter notebooks:
+`LLM-powered Chatbot `__
+and
 `LLM Instruction-following pipeline `__.

-The OpenVINO GenAI flavor is available for installation via PyPI and Archive distributions.
+OpenVINO GenAI is available for installation via PyPI and Archive distributions.
 A `detailed guide `__ on how to build OpenVINO GenAI is available in the OpenVINO GenAI repository.

 PyPI Installation
 ###############################

-To install the GenAI flavor of OpenVINO via PyPI, follow the standard :doc:`installation steps `,
+To install the OpenVINO GenAI package via PyPI, follow the standard :doc:`installation steps `,
 but use the *openvino-genai* package instead of *openvino*:

 .. code-block:: python

    python -m pip install openvino-genai
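+
+As a quick check after installation, here is a minimal generation sketch (assuming a chat
+model already exported to OpenVINO IR format in a local ``model_dir`` folder; the folder
+name is a placeholder):
+
+.. code-block:: python
+
+   import openvino_genai
+
+   # Load the exported model and select the inference device
+   pipe = openvino_genai.LLMPipeline("model_dir", "CPU")
+
+   # Tokenization, the generation loop, and detokenization all happen inside
+   print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
+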
 Archive Installation
 ###############################

-The OpenVINO GenAI archive package includes the OpenVINO™ Runtime and :doc:`Tokenizers <../../openvino-workflow-generative/ov-tokenizers>`.
-To install the GenAI flavor of OpenVINO from an archive file, follow the standard installation steps for your system
-but instead of using the vanilla package file, download the one with OpenVINO GenAI:
+The OpenVINO GenAI archive package includes the OpenVINO™ Runtime, as well as :doc:`Tokenizers <../../openvino-workflow-generative/ov-tokenizers>`.
+It installs the same way as the standard OpenVINO Runtime, so follow its installation steps,
+but use the OpenVINO GenAI package instead:

 Linux
 ++++++++++++++++++++++++++

diff --git a/docs/articles_en/openvino-workflow-generative.rst b/docs/articles_en/openvino-workflow-generative.rst
index 14521f118f6dfc..e4cc8e81848e30 100644
--- a/docs/articles_en/openvino-workflow-generative.rst
+++ b/docs/articles_en/openvino-workflow-generative.rst
@@ -96,7 +96,7 @@ The advantages of using OpenVINO for generative model deployment:

 Proceed to guides on:

-* :doc:`OpenVINO GenAI Flavor <./openvino-workflow-generative/inference-with-genai>`
+* :doc:`OpenVINO GenAI <./openvino-workflow-generative/inference-with-genai>`
 * :doc:`Hugging Face and Optimum Intel <./openvino-workflow-generative/inference-with-optimum-intel>`
 * `Generative AI with Base OpenVINO `__

diff --git a/docs/articles_en/openvino-workflow-generative/inference-with-genai-on-npu.rst b/docs/articles_en/openvino-workflow-generative/inference-with-genai-on-npu.rst
index 8fb6ad27c4232f..cb806df77b0288 100644
--- a/docs/articles_en/openvino-workflow-generative/inference-with-genai-on-npu.rst
+++ b/docs/articles_en/openvino-workflow-generative/inference-with-genai-on-npu.rst
@@ -2,9 +2,9 @@ Inference with OpenVINO GenAI
 ==========================================

 .. meta::
-   :description: Learn how to use the OpenVINO GenAI flavor to execute LLM models on NPU.
+   :description: Learn how to use OpenVINO GenAI to execute LLM models on NPU.

-This guide will give you extra details on how to utilize NPU with the GenAI flavor.
+This guide gives you extra details on how to utilize the NPU with OpenVINO GenAI.
 :doc:`See the installation guide <../../get-started/install-openvino/install-openvino-genai>`
 for information on how to start.

diff --git a/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst b/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst
index 1f19c3eed7da8f..e54878cf8b95ad 100644
--- a/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst
+++ b/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst
@@ -2,7 +2,7 @@ Inference with OpenVINO GenAI
 ===============================================================================================

 .. meta::
-   :description: Learn how to use the OpenVINO GenAI flavor to execute LLM models.
+   :description: Learn how to use OpenVINO GenAI to execute LLM models.

 .. toctree::
    :maxdepth: 1

diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst
index a27bc55d1884ee..b1b6da190a0192 100644
--- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst
+++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst
@@ -96,7 +96,7 @@ skipping the read step:
       :fragment: [ov:caching:part1]

-With model caching enabled, the total load time is even shorter, if ``read_model`` is optimized as well.
+The total load time is even shorter when model caching is enabled and ``read_model`` is optimized as well.
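+
+For illustration, a minimal sketch of enabling the cache (assuming a ``model.xml`` IR file
+and a writable ``./cache_dir`` folder; both names are placeholders):
+
+.. code-block:: python
+
+   import openvino as ov
+
+   core = ov.Core()
+   core.set_property({"CACHE_DIR": "./cache_dir"})  # enable model caching
+
+   # The first call compiles the model and stores the blob in the cache;
+   # subsequent calls load the compiled blob directly, skipping compilation
+   compiled_model = core.compile_model("model.xml", "GPU")
+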
 .. tab-set::

@@ -143,9 +143,9 @@ model caching, use the following code in your application:

 Enable cache encryption
 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-If model caching is enabled in the CPU Plugin, Set "cache_encryption_callbacks" config option to the model topology can be encrypted
-while it is saved to the cache and decrypted when it is loaded from the cache.
-Currently, this property can be set only in ``compile_model``.
+If model caching is enabled in the CPU Plugin, set the ``cache_encryption_callbacks``
+config option to encrypt the model topology while it is saved to the cache and decrypt it
+when it is loaded from the cache. Currently, this property can be set only in ``compile_model``.

 .. tab-set::

       :language: cpp
       :fragment: [ov:caching:part4]

-If model caching is enabled in the GPU Plugin, the model topology can be encrypted while it is saved to the cache and decrypted when it is loaded from the cache. Full encryption only works when the ``CacheMode`` property is set to ``OPTIMIZE_SIZE``.
+Full encryption only works when the ``CacheMode`` property is set to ``OPTIMIZE_SIZE``.

 .. tab-set::

@@ -183,4 +183,6 @@ If model caching is enabled in the GPU Plugin, the model topology can be encrypt

 .. important::

-   Currently, this property is supported only by the CPU and GPU plugins. For other HW plugins, setting this property will not encrypt/decrypt the model topology in cache and will not affect performance.
+   Currently, encryption is supported only by the CPU and GPU plugins. Enabling this
+   feature for other HW plugins will not encrypt/decrypt the model topology in the
+   cache and will not affect performance.
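+
+For illustration, a minimal sketch of encrypted caching (base64 is only a stand-in for real
+encryption; this assumes the Python bindings accept the two callbacks as a list under the
+``CACHE_ENCRYPTION_CALLBACKS`` config key, as in the snippet fragments referenced above):
+
+.. code-block:: python
+
+   import base64
+   import openvino as ov
+
+   # Reversible transforms applied to the cached blob; use real
+   # cryptography instead of base64 in production
+   def encrypt_base64(src: str):
+       return base64.b64encode(bytes(src, "utf-8"))
+
+   def decrypt_base64(src: str):
+       return base64.b64decode(bytes(src, "utf-8"))
+
+   core = ov.Core()
+   core.set_property({"CACHE_DIR": "./cache_dir"})
+
+   # The callbacks must be passed at compile time: the blob is encrypted
+   # when saved to the cache and decrypted when loaded from it
+   config = {"CACHE_ENCRYPTION_CALLBACKS": [encrypt_base64, decrypt_base64]}
+   compiled_model = core.compile_model("model.xml", "CPU", config)
+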