diff --git a/notebooks/integrations/cohere/inference-cohere.ipynb b/notebooks/integrations/cohere/inference-cohere.ipynb index 3064452d..b8888c23 100644 --- a/notebooks/integrations/cohere/inference-cohere.ipynb +++ b/notebooks/integrations/cohere/inference-cohere.ipynb @@ -184,9 +184,9 @@ "id": "96788aa1" }, "source": [ - "## Create the inference task\n", + "## Create the inference endpoint\n", "\n", - "Let's create the inference task by using the [Create inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html).\n", + "Let's create the inference endpoint by using the [Create inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html).\n", "\n", - "You'll need an Cohere API key for this that you can find in your Cohere account under the [API keys section](https://dashboard.cohere.com/api-keys). A paid membership is required to complete the steps in this notebook as the Cohere free trial API usage is limited." + "You'll need a Cohere API key for this that you can find in your Cohere account under the [API keys section](https://dashboard.cohere.com/api-keys). A paid membership is required to complete the steps in this notebook as the Cohere free trial API usage is limited." ] @@ -204,7 +204,7 @@ "\n", "client.inference.put_model(\n", " task_type=\"text_embedding\",\n", - " model_id=\"cohere_embeddings\",\n", + " inference_id=\"cohere_embeddings\",\n", " body={\n", " \"service\": \"cohere\",\n", " \"service_settings\": {\n", @@ -226,7 +226,7 @@ "source": [ "## Create an ingest pipeline with an inference processor\n", "\n", - "Create an ingest pipeline with an inference processor by using the [`put_pipeline`](https://www.elastic.co/guide/en/elasticsearch/reference/master/put-pipeline-api.html) method. Reference the Cohere model created above to infer against the data that is being ingested in the pipeline." + "Create an ingest pipeline with an inference processor by using the [`put_pipeline`](https://www.elastic.co/guide/en/elasticsearch/reference/master/put-pipeline-api.html) method. Reference the inference endpoint created above as the `model_id` to infer against the data that is being ingested in the pipeline."
] }, { @@ -265,7 +265,7 @@ "Let's note a few important parameters from that API call:\n", "\n", "- `inference`: A processor that performs inference using a machine learning model.\n", - "- `model_id`: Specifies the ID of the machine learning model to be used. In this example, the model ID is set to `cohere_embeddings`.\n", + "- `model_id`: Specifies the ID of the inference endpoint to be used. In this example, the model ID is set to `cohere_embeddings`.\n", "- `input_output`: Specifies input and output fields.\n", "- `input_field`: Field name from which the `dense_vector` representation is created.\n", "- `output_field`: Field name which contains inference results." @@ -406,7 +406,7 @@ " \"field\": \"plot_embedding\",\n", " \"query_vector_builder\": {\n", " \"text_embedding\": {\n", - " \"model_id\": \"cohere_embeddings\",\n", + " \"inference_id\": \"cohere_embeddings\",\n", " \"model_text\": \"Fighting movie\",\n", " }\n", " },\n", diff --git a/notebooks/search/07-inference.ipynb b/notebooks/search/07-inference.ipynb index e0a53a7b..bc443d29 100644 --- a/notebooks/search/07-inference.ipynb +++ b/notebooks/search/07-inference.ipynb @@ -154,9 +154,9 @@ "id": "840d92f0", "metadata": {}, "source": [ - "## Create the inference task\n", + "## Create the inference endpoint\n", "\n", - "Let's create the inference task by using the [Create inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html).\n", + "Let's create the inference endpoint by using the [Create inference API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html).\n", "\n", "You'll need an OpenAI API key for this that you can find in your OpenAI account under the [API keys section](https://platform.openai.com/api-keys). A paid membership is required to complete the steps in this notebook as the OpenAI free trial API usage is limited." 
] @@ -172,7 +172,7 @@ "\n", "client.inference.put_model(\n", " task_type=\"text_embedding\",\n", - " model_id=\"my_openai_embedding_model\",\n", + " inference_id=\"my_openai_embedding_model\",\n", " body={\n", " \"service\": \"openai\",\n", " \"service_settings\": {\"api_key\": API_KEY},\n", @@ -181,6 +181,14 @@ ")" ] }, + { + "cell_type": "markdown", + "id": "1f2e48b7", + "metadata": {}, + "source": [ + "**NOTE:** If you use Elasticsearch 8.12, you must change `inference_id` in the snippet above to `model_id`! " + ] + }, { "cell_type": "markdown", "id": "1024d070", @@ -188,7 +196,7 @@ "source": [ "## Create an ingest pipeline with an inference processor\n", "\n", - "Create an ingest pipeline with an inference processor by using the [`put_pipeline`](https://www.elastic.co/guide/en/elasticsearch/reference/master/put-pipeline-api.html) method. Reference the OpenAI model created above to infer against the data that is being ingested in the pipeline." + "Create an ingest pipeline with an inference processor by using the [`put_pipeline`](https://www.elastic.co/guide/en/elasticsearch/reference/master/put-pipeline-api.html) method. Reference the inference endpoint created above as `model_id` to infer against the data that is being ingested in the pipeline." ] }, { @@ -223,7 +231,7 @@ "Let's note a few important parameters from that API call:\n", "\n", "- `inference`: A processor that performs inference using a machine learning model.\n", - "- `model_id`: Specifies the ID of the machine learning model to be used. In this example, the model ID is set to `my_openai_embedding_model`. Use the model ID you defined when created the inference task.\n", + "- `model_id`: Specifies the ID of the inference endpoint to be used. In this example, the inference ID is set to `my_openai_embedding_model`. 
Use the inference ID you defined when you created the inference endpoint.\n", "- `input_output`: Specifies input and output fields.\n", "- `input_field`: Field name from which the `dense_vector` representation is created.\n", "- `output_field`: Field name which contains inference results. " @@ -348,7 +356,7 @@ " \"field\": \"plot_embedding\",\n", " \"query_vector_builder\": {\n", " \"text_embedding\": {\n", - " \"model_id\": \"my_openai_embedding_model\",\n", + " \"inference_id\": \"my_openai_embedding_model\",\n", " \"model_text\": \"Fighting movie\",\n", " }\n", " },\n", @@ -364,6 +372,14 @@ " plot = hit[\"_source\"][\"plot\"]\n", " print(f\"Score: {score}\\nTitle: {title}\\nPlot: {plot}\\n\")" ] + }, + { + "cell_type": "markdown", + "id": "7e4055ba", + "metadata": {}, + "source": [ + "**NOTE:** If you use Elasticsearch 8.12, you must change `inference_id` in the snippet above to `model_id`." + ] } ], "metadata": {
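The NOTE cells added in this patch describe a version-dependent rename: the client parameter is `model_id` on Elasticsearch 8.12 and `inference_id` from 8.13 onward. A minimal sketch of how a notebook could pick the right keyword argument for both versions, assuming only the rename described in the diff; the helper name `inference_endpoint_kwarg` is hypothetical and not part of either notebook:

```python
def inference_endpoint_kwarg(es_version: str, endpoint_id: str) -> dict:
    """Hypothetical helper: choose the keyword-argument name for
    client.inference.put_model() based on the Elasticsearch version,
    since 8.12 expects `model_id` and 8.13+ expects `inference_id`."""
    major, minor = (int(part) for part in es_version.split(".")[:2])
    key = "model_id" if (major, minor) < (8, 13) else "inference_id"
    return {key: endpoint_id}


# Sketch of use with the notebooks' call (body elided as in the diff):
# client.inference.put_model(
#     task_type="text_embedding",
#     **inference_endpoint_kwarg("8.13.0", "cohere_embeddings"),
#     body={...},
# )
```

The same keyword switch applies to the `query_vector_builder.text_embedding` clause in the knn search snippets.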