From 5ca0e5e8ad65f6faacc31b1565b17676b98fcd20 Mon Sep 17 00:00:00 2001
From: Scott Trinh
Date: Wed, 19 Feb 2025 16:11:58 -0500
Subject: [PATCH 1/4] cqa

- Use :dotgel: directive instead of .esdl file literals
- Remove malformed JSON-like response body examples
- SSE example code block needed an explicit language
---
 docs/ai/fastapi_gelai_searchbot.rst |   4 +-
 docs/ai/quickstart_fastapi_ai.rst   |   2 +-
 docs/ai/reference_http.rst          | 103 ++++++++--------------------
 3 files changed, 33 insertions(+), 76 deletions(-)

diff --git a/docs/ai/fastapi_gelai_searchbot.rst b/docs/ai/fastapi_gelai_searchbot.rst
index 673648a04a7..5d4e439fee1 100644
--- a/docs/ai/fastapi_gelai_searchbot.rst
+++ b/docs/ai/fastapi_gelai_searchbot.rst
@@ -554,7 +554,7 @@ Defining the schema
 
 The database :ref:`schema ` in Gel is defined
 declaratively. The :ref:`gel project init `
-command has created a file called ``dbchema/default.esdl``, which we're going to
+command has created a file called :dotgel:`dbschema/default`, which we're going to
 use to define our types.
 
 .. edb:split-section::
@@ -1432,7 +1432,7 @@ schema.
 .. edb:split-section::
 
   We begin by enabling the ``ai`` extension by adding the following like on top of
-  the ``dbschema/default.esdl``:
+  the :dotgel:`dbschema/default`:
 
   .. code-block:: sdl-diff
      :caption: dbschema/default.esdl

diff --git a/docs/ai/quickstart_fastapi_ai.rst b/docs/ai/quickstart_fastapi_ai.rst
index 9708e1fc9ae..f7be6a8c677 100644
--- a/docs/ai/quickstart_fastapi_ai.rst
+++ b/docs/ai/quickstart_fastapi_ai.rst
@@ -52,7 +52,7 @@ Using the built-in RAG
 
 AI-related features in |Gel| come packaged in the extension called ``ai``.
 Let's enable it by adding the following line on top of the
-``dbschema/default.gel`` and running a migration.
+:dotgel:`dbschema/default` and running a migration.
 
 This does a few things. First, it enables us to use features from the
 extension by prefixing them with ``ext::ai::``.
diff --git a/docs/ai/reference_http.rst b/docs/ai/reference_http.rst
index 44d86a95bd4..677bfca6dcf 100644
--- a/docs/ai/reference_http.rst
+++ b/docs/ai/reference_http.rst
@@ -31,15 +31,6 @@ Request headers
 Request body
 ------------
 
-.. code-block:: json
-
-    {
-      "model": string,       // Required: Name of the embedding model
-      "inputs": string[],    // Required: Array of texts to embed
-      "dimensions": number,  // Optional: Number of dimensions to truncate to
-      "user": string         // Optional: User identifier
-    }
-
 * ``input`` (array of strings or a single string, required): The text to use
   as the basis for embeddings generation.
 
@@ -47,6 +38,10 @@ Request body
   use any of the supported :ref:`embedding models
   `.
 
+* ``dimensions`` (number, optional): The number of dimensions to truncate to.
+
+* ``user`` (string, optional): A user identifier for the request.
+
 Example request
 ---------------
@@ -124,83 +119,45 @@ Request headers
 Request body
 ------------
 
-.. code-block:: json
-
-    {
-      "context": {
-        "query": string,            // Required: EdgeQL query for context retrieval
-        "variables": object,        // Optional: Query variables
-        "globals": object,          // Optional: Query globals
-        "max_object_count": number  // Optional: Max objects to retrieve (default: 5)
-      },
-      "model": string,              // Required: Name of the generation model
-      "query": string,              // Required: User query
-      "stream": boolean,            // Optional: Enable streaming (default: false)
-      "prompt": {
-        "name": string,             // Optional: Name of predefined prompt
-        "id": string,               // Optional: ID of predefined prompt
-        "custom": [                 // Optional: Custom prompt messages
-          {
-            "role": string,         // "system"|"user"|"assistant"|"tool"
-            "content": string|object,
-            "tool_call_id": string,
-            "tool_calls": array
-          }
-        ]
-      },
-      "temperature": number,        // Optional: Sampling temperature
-      "top_p": number,              // Optional: Nucleus sampling parameter
-      "max_tokens": number,         // Optional: Maximum tokens to generate
-      "seed": number,               // Optional: Random seed
-      "safe_prompt": boolean,       // Optional: Enable safety features
-      "top_k": number,              // Optional: Top-k sampling parameter
-      "logit_bias": object,         // Optional: Token biasing
-      "logprobs": number,           // Optional: Return token log probabilities
-      "user": string                // Optional: User identifier
-    }
-
+* ``context`` (object, required): Settings that define the context of the query.
+    * ``query`` (string, required): Specifies an expression to determine the relevant objects and index to serve as context for text generation. You may set this to any expression that produces a set of objects, even if it is not a standalone query.
+    * ``variables`` (object, optional): A dictionary of variables for use in the context query.
+    * ``globals`` (object, optional): A dictionary of globals for use in the context query.
+    * ``max_object_count`` (number, optional): Maximum number of objects to retrieve; default is 5.
 
 * ``model`` (string, required): The name of the text generation model to use.
 
+* ``query`` (string, required): The query string used as the basis for text generation.
+
+* ``stream`` (boolean, optional): Specifies whether the response should be streamed. Defaults to false.
 
-* ``query`` (string, required): The query string use as the basis for text
-  generation.
+* ``prompt`` (object, optional): Settings that define a prompt. Omit to use the default prompt.
+    * ``name`` (string, optional): Name of predefined prompt.
+    * ``id`` (string, optional): ID of predefined prompt.
+    * ``custom`` (array of objects, optional): Custom prompt messages, each containing a ``role`` and ``content``. If no ``name`` or ``id`` was provided, the custom messages provided here become the prompt. If one of those was provided, these messages will be added to that existing prompt.
 
-* ``context`` (object, required): Settings that define the context of the
-  query.
+        * ``role`` (string): "system", "user", "assistant", or "tool".
+        * ``content`` (string|object): Content of the message.
+        * ``tool_call_id`` (string): Identifier for tool call.
+        * ``tool_calls`` (array): Array of tool calls.
 
-  * ``query`` (string, required): Specifies an expression to determine the
-    relevant objects and index to serve as context for text generation. You may
-    set this to any expression that produces a set of objects, even if it is
-    not a standalone query.
+* ``temperature`` (number, optional): Sampling temperature.
 
-  * ``variables`` (object, optional): A dictionary of variables for use in the
-    context query.
+* ``top_p`` (number, optional): Nucleus sampling parameter.
 
-  * ``globals`` (object, optional): A dictionary of globals for use in the
-    context query.
+* ``max_tokens`` (number, optional): Maximum tokens to generate.
 
-  * ``max_object_count`` (int, optional): Maximum number of objects to return;
-    default is 5.
+* ``seed`` (number, optional): Random seed.
 
-* ``stream`` (boolean, optional): Specifies whether the response should be
-  streamed. Defaults to false.
+* ``safe_prompt`` (boolean, optional): Enable safety features.
 
-* ``prompt`` (object, optional): Settings that define a prompt. Omit to use the
-  default prompt.
+* ``top_k`` (number, optional): Top-k sampling parameter.
 
-  You may specify an existing prompt by its ``name`` or ``id``, you may define
-  a custom prompt inline by sending an array of objects, or you may do both to
-  augment an existing prompt with additional custom messages.
+* ``logit_bias`` (object, optional): Token biasing.
 
-  * ``name`` (string, optional) or ``id`` (string, optional): The ``name`` or
-    ``id`` of an existing custom prompt to use. Provide only one of these if
-    you want to use or start from an existing prompt.
+* ``logprobs`` (number, optional): Return token log probabilities.
 
-  * ``custom`` (array of objects, optional): Custom prompt messages, each
-    containing a ``role`` and ``content``. If no ``name`` or ``id`` was
-    provided, the custom messages provided here become the prompt. If one of
-    those was provided, these messages will be added to that existing prompt.
+* ``user`` (string, optional): User identifier.
 
 
 Example request
@@ -340,7 +297,7 @@ stream.
 
 **Example SSE response**
 
-.. code-block::
+.. code-block:: text
    :class: collapsible
 
    event: message_start

From 479729858a3bd73b4efc0b4e18b71edfa79f3479 Mon Sep 17 00:00:00 2001
From: Scott Trinh
Date: Wed, 19 Feb 2025 16:24:29 -0500
Subject: [PATCH 2/4] Fix nested lists

---
 docs/ai/reference_http.rst | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/docs/ai/reference_http.rst b/docs/ai/reference_http.rst
index 677bfca6dcf..e813629483e 100644
--- a/docs/ai/reference_http.rst
+++ b/docs/ai/reference_http.rst
@@ -120,10 +120,10 @@ Request body
 ------------
 
 * ``context`` (object, required): Settings that define the context of the query.
-    * ``query`` (string, required): Specifies an expression to determine the relevant objects and index to serve as context for text generation. You may set this to any expression that produces a set of objects, even if it is not a standalone query.
-    * ``variables`` (object, optional): A dictionary of variables for use in the context query.
-    * ``globals`` (object, optional): A dictionary of globals for use in the context query.
-    * ``max_object_count`` (number, optional): Maximum number of objects to retrieve; default is 5.
+  * ``query`` (string, required): Specifies an expression to determine the relevant objects and index to serve as context for text generation. You may set this to any expression that produces a set of objects, even if it is not a standalone query.
+  * ``variables`` (object, optional): A dictionary of variables for use in the context query.
+  * ``globals`` (object, optional): A dictionary of globals for use in the context query.
+  * ``max_object_count`` (number, optional): Maximum number of objects to retrieve; default is 5.
 
 * ``model`` (string, required): The name of the text generation model to use.
 
@@ -132,14 +132,13 @@ Request body
 
 * ``stream`` (boolean, optional): Specifies whether the response should be streamed. Defaults to false.
 
 * ``prompt`` (object, optional): Settings that define a prompt. Omit to use the default prompt.
-    * ``name`` (string, optional): Name of predefined prompt.
-    * ``id`` (string, optional): ID of predefined prompt.
-    * ``custom`` (array of objects, optional): Custom prompt messages, each containing a ``role`` and ``content``. If no ``name`` or ``id`` was provided, the custom messages provided here become the prompt. If one of those was provided, these messages will be added to that existing prompt.
-
-        * ``role`` (string): "system", "user", "assistant", or "tool".
-        * ``content`` (string|object): Content of the message.
-        * ``tool_call_id`` (string): Identifier for tool call.
-        * ``tool_calls`` (array): Array of tool calls.
+  * ``name`` (string, optional): Name of predefined prompt.
+  * ``id`` (string, optional): ID of predefined prompt.
+  * ``custom`` (array of objects, optional): Custom prompt messages, each containing a ``role`` and ``content``. If no ``name`` or ``id`` was provided, the custom messages provided here become the prompt. If one of those was provided, these messages will be added to that existing prompt.
+    * ``role`` (string): "system", "user", "assistant", or "tool".
+    * ``content`` (string|object): Content of the message.
+    * ``tool_call_id`` (string): Identifier for tool call.
+    * ``tool_calls`` (array): Array of tool calls.
 
 * ``temperature`` (number, optional): Sampling temperature.
From 1e4aae13747fc9cc8f795487b5b745c1241ddf71 Mon Sep 17 00:00:00 2001
From: Scott Trinh
Date: Wed, 19 Feb 2025 16:24:46 -0500
Subject: [PATCH 3/4] Update for missing or moved docs

---
 docs/ai/fastapi_gelai_searchbot.rst | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/docs/ai/fastapi_gelai_searchbot.rst b/docs/ai/fastapi_gelai_searchbot.rst
index 5d4e439fee1..ec6ccd4f7cc 100644
--- a/docs/ai/fastapi_gelai_searchbot.rst
+++ b/docs/ai/fastapi_gelai_searchbot.rst
@@ -553,9 +553,8 @@ Defining the schema
 -------------------
 
 The database :ref:`schema ` in Gel is defined
-declaratively. The :ref:`gel project init `
-command has created a file called :dotgel:`dbschema/default`, which we're going to
-use to define our types.
+declaratively. The :gelcmd:`project init` command has created a file called
+:dotgel:`dbschema/default`, which we're going to use to define our types.
 
 .. edb:split-section::
 
@@ -649,9 +648,8 @@ use to define our types.
 
 .. edb:split-section::
 
-  Let's use the :ref:`gel migration create ` CLI
-  command, followed by :ref:`gel migrate ` in order to
-  migrate to our new schema and proceed to writing some queries.
+  Let's use the :gelcmd:`migration create` CLI command, followed by :gelcmd:`migrate` in
+  order to migrate to our new schema and proceed to writing some queries.
 
   .. code-block:: bash
 
@@ -765,8 +763,9 @@ use to define our types.
 
 .. edb:split-section::
 
-  The :ref:`gel query ` command is one of many ways we can
-  execute a query in Gel. Now that we've done it, there's stuff in the database.
+  The :gelcmd:`query` command is one of many ways we can execute a query in Gel. Now
+  that we've done it, there's stuff in the database.
+  Let's verify it by running:
 
   .. code-block:: bash
 
@@ -784,7 +783,7 @@ With schema in place, it's time to focus on getting the data in and out of the
 database.
 
 In this tutorial we're going to write queries using :ref:`EdgeQL
-` and then use :ref:`codegen ` to
+` and then use :ref:`codegen ` to
 generate typesafe function that we can plug directly into out Python code. If
 you are completely unfamiliar with EdgeQL, now is a good time to check out the
 basics before proceeding.

From 9ec0879999f23a51ee8d03c5d73724fc3c6fe6a1 Mon Sep 17 00:00:00 2001
From: Scott Trinh
Date: Wed, 19 Feb 2025 16:25:06 -0500
Subject: [PATCH 4/4] Add Search Bot tutorial to index

---
 docs/ai/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/ai/index.rst b/docs/ai/index.rst
index a23c768bd26..d1b2af1fcbf 100644
--- a/docs/ai/index.rst
+++ b/docs/ai/index.rst
@@ -15,6 +15,7 @@ Gel AI
     javascript
     guide_edgeql
     guide_python
+    fastapi_gelai_searchbot
 
 :edb-alt-title: Using Gel AI
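A note for reviewers of PATCH 1/4: the RAG request-body field list that replaces the removed JSON block can be sanity-checked with a short sketch. The field names and defaults mirror the documented bullets in ``docs/ai/reference_http.rst``; the helper name and example argument values are illustrative assumptions, not part of the API.

```python
import json


def build_rag_request_body(model, query, context_query,
                           max_object_count=5, stream=False, prompt=None):
    """Assemble a RAG request body with the fields documented in the patch.

    Only ``model``, ``query``, and ``context.query`` are required; this
    helper's name and defaults are illustrative, not from the docs.
    """
    body = {
        "model": model,              # required: text generation model name
        "query": query,              # required: user query string
        "context": {                 # required: context settings
            "query": context_query,  # required: expression producing a set of objects
            "max_object_count": max_object_count,  # optional; docs say default is 5
        },
        "stream": stream,            # optional; docs say defaults to false
    }
    if prompt is not None:
        # optional: "name", "id", and/or "custom" per the documented prompt object
        body["prompt"] = prompt
    return json.dumps(body)


payload = build_rag_request_body(
    model="some-model",
    query="What color is the sky on Mars?",
    context_query="Knowledge",
)
```

Serializing and re-parsing the payload is a quick way to confirm the required/optional split matches the bullet list as rewritten.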