Commit f00a879
Merge branch 'new-gel-docs' into fastapi-quickstart
beerose committed Feb 19, 2025
2 parents c333220 + 9ec0879 commit f00a879
Showing 4 changed files with 40 additions and 84 deletions.
19 changes: 9 additions & 10 deletions docs/ai/fastapi_gelai_searchbot.rst
@@ -553,9 +553,8 @@ Defining the schema
 -------------------
 
 The database :ref:`schema <ref_datamodel_index>` in Gel is defined
-declaratively. The :ref:`gel project init <ref_cli_edgedb_project_init>`
-command has created a file called ``dbchema/default.esdl``, which we're going to
-use to define our types.
+declaratively. The :gelcmd:`project init` command has created a file called
+:dotgel:`dbschema/default`, which we're going to use to define our types.
 
 .. edb:split-section::
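
For readers skimming the diff, a hedged sketch of where that file comes from
(assuming a fresh project directory; the scaffolded schema file is the one the
tutorial edits below):

.. code-block:: bash

    # Scaffolding a project creates the dbschema/ directory along with
    # the default schema file referenced above.
    gel project init
    ls dbschema/
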
@@ -649,9 +648,8 @@ use to define our types.
 
 .. edb:split-section::
 
-Let's use the :ref:`gel migration create <ref_cli_edgedb_migration_create>` CLI
-command, followed by :ref:`gel migrate <ref_cli_edgedb_migrate>` in order to
-migrate to our new schema and proceed to writing some queries.
+Let's use the :gelcmd:`migration create` CLI command, followed by :gelcmd:`migrate` in
+order to migrate to our new schema and proceed to writing some queries.
 
 .. code-block:: bash
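
The collapsed block that follows runs those commands; for reference, a sketch
of the pair as typically invoked from the project root:

.. code-block:: bash

    # Generate a migration script from the schema changes...
    gel migration create
    # ...then apply it to the project's database.
    gel migrate
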
@@ -765,8 +763,9 @@ use to define our types.
 
 .. edb:split-section::
 
-The :ref:`gel query <ref_cli_edgedb_query>` command is one of many ways we can
-execute a query in Gel. Now that we've done it, there's stuff in the database.
+The :gelcmd:`query` command is one of many ways we can execute a query in Gel. Now
+that we've done it, there's stuff in the database.
 Let's verify it by running:
 
 .. code-block:: bash
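
As a hedged illustration of such a verification query (the type name
``Message`` is hypothetical, not taken from the tutorial):

.. code-block:: bash

    # Count objects of one of the schema's types to confirm the insert worked.
    gel query "select count(Message)"
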
@@ -784,7 +783,7 @@
 With schema in place, it's time to focus on getting the data in and out of the
 database.
 
 In this tutorial we're going to write queries using :ref:`EdgeQL
-<ref_intro_edgeql>` and then use :ref:`codegen <edgedb-python-codegen>` to
+<ref_intro_edgeql>` and then use :ref:`codegen <gel-python-codegen>` to
 generate typesafe function that we can plug directly into out Python code. If
 you are completely unfamiliar with EdgeQL, now is a good time to check out the
 basics before proceeding.
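
A sketch of that workflow; the ``gel-py`` entry point, file layout, and query
below are assumptions based on the gel-python bindings rather than this diff:

.. code-block:: bash

    # Keep each query in its own .edgeql file...
    mkdir -p app/queries
    echo 'select count(Message);' > app/queries/count_messages.edgeql
    # ...then generate typesafe Python functions from those files
    # (command name is an assumption from the gel-python bindings).
    gel-py
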
@@ -1432,7 +1431,7 @@ schema.
 
 .. edb:split-section::
 
 We begin by enabling the ``ai`` extension by adding the following like on top of
-the ``dbschema/default.esdl``:
+the :dotgel:`dbschema/default`:
 
 .. code-block:: sdl-diff
    :caption: dbschema/default.esdl
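
The collapsed ``sdl-diff`` block holds the actual schema change. As a hedged
sketch of the intended end state (``using extension ai;`` is the standard Gel
extension declaration; the path is taken from the caption above):

.. code-block:: bash

    # After the edit, the schema file should begin with the extension
    # declaration; printing the first line is a quick way to confirm.
    head -n 1 dbschema/default.esdl
    # expected output: using extension ai;
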
1 change: 1 addition & 0 deletions docs/ai/index.rst
@@ -15,6 +15,7 @@ Gel AI
 
    javascript
    guide_edgeql
    guide_python
+   fastapi_gelai_searchbot
 
 :edb-alt-title: Using Gel AI
 
2 changes: 1 addition & 1 deletion docs/ai/quickstart_fastapi_ai.rst
@@ -52,7 +52,7 @@ Using the built-in RAG
 
 AI-related features in |Gel| come packaged in the extension called ``ai``.
 Let's enable it by adding the following line on top of the
-``dbschema/default.gel`` and running a migration.
+:dotgel:`dbschema/default` and running a migration.
 
 This does a few things. First, it enables us to use features from the extension by prefixing them with ``ext::ai::``.
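
A hedged sketch of the whole step from the shell (the ``sed`` one-liner is
illustrative GNU sed, any editor works; file path per the quickstart):

.. code-block:: bash

    # Prepend the extension declaration to the schema file...
    sed -i '1i using extension ai;' dbschema/default.gel
    # ...then create and apply a migration so the extension is installed.
    gel migration create
    gel migrate
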
102 changes: 29 additions & 73 deletions docs/ai/reference_http.rst
@@ -31,22 +31,17 @@ Request headers
 
 Request body
 ------------
 
-.. code-block:: json
-
-   {
-     "model": string,       // Required: Name of the embedding model
-     "inputs": string[],    // Required: Array of texts to embed
-     "dimensions": number,  // Optional: Number of dimensions to truncate to
-     "user": string         // Optional: User identifier
-   }
+* ``input`` (array of strings or a single string, required): The text to use as
+  the basis for embeddings generation.
+
+* ``model`` (string, required): The name of the embedding model to use. You may
+  use any of the supported :ref:`embedding models
+  <ref_ai_extai_reference_embedding_models>`.
+
+* ``dimensions`` (number, optional): The number of dimensions to truncate to.
+
+* ``user`` (string, optional): A user identifier for the request.
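
Putting the fields together, an illustrative request only; the host, port,
branch, credentials, and model name below are placeholders, and the
authoritative form is in the example request that follows:

.. code-block:: bash

    # Sketch only: endpoint shape and credentials are assumptions.
    curl -s -X POST "http://localhost:10700/branch/main/ai/embeddings" \
      --user "admin:password" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "text-embedding-3-small",
            "input": ["What is a vector index?"],
            "dimensions": 512
          }'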


Example request
---------------
@@ -124,83 +119,44 @@ Request headers
 
 Request body
 ------------
 
-.. code-block:: json
-
-   {
-     "context": {
-       "query": string,            // Required: EdgeQL query for context retrieval
-       "variables": object,        // Optional: Query variables
-       "globals": object,          // Optional: Query globals
-       "max_object_count": number  // Optional: Max objects to retrieve (default: 5)
-     },
-     "model": string,              // Required: Name of the generation model
-     "query": string,              // Required: User query
-     "stream": boolean,            // Optional: Enable streaming (default: false)
-     "prompt": {
-       "name": string,             // Optional: Name of predefined prompt
-       "id": string,               // Optional: ID of predefined prompt
-       "custom": [                 // Optional: Custom prompt messages
-         {
-           "role": string,         // "system"|"user"|"assistant"|"tool"
-           "content": string|object,
-           "tool_call_id": string,
-           "tool_calls": array
-         }
-       ]
-     },
-     "temperature": number,        // Optional: Sampling temperature
-     "top_p": number,              // Optional: Nucleus sampling parameter
-     "max_tokens": number,         // Optional: Maximum tokens to generate
-     "seed": number,               // Optional: Random seed
-     "safe_prompt": boolean,       // Optional: Enable safety features
-     "top_k": number,              // Optional: Top-k sampling parameter
-     "logit_bias": object,         // Optional: Token biasing
-     "logprobs": number,           // Optional: Return token log probabilities
-     "user": string                // Optional: User identifier
-   }
+* ``context`` (object, required): Settings that define the context of the query.
+
+  * ``query`` (string, required): Specifies an expression to determine the
+    relevant objects and index to serve as context for text generation. You
+    may set this to any expression that produces a set of objects, even if it
+    is not a standalone query.
+  * ``variables`` (object, optional): A dictionary of variables for use in the
+    context query.
+  * ``globals`` (object, optional): A dictionary of globals for use in the
+    context query.
+  * ``max_object_count`` (number, optional): Maximum number of objects to
+    retrieve; default is 5.
 
 * ``model`` (string, required): The name of the text generation model to use.
 
-* ``query`` (string, required): The query string use as the basis for text
-  generation.
+* ``query`` (string, required): The query string used as the basis for text
+  generation.
 
-* ``context`` (object, required): Settings that define the context of the
-  query.
-
-  * ``query`` (string, required): Specifies an expression to determine the
-    relevant objects and index to serve as context for text generation. You
-    may set this to any expression that produces a set of objects, even if it
-    is not a standalone query.
-
-  * ``variables`` (object, optional): A dictionary of variables for use in the
-    context query.
-
-  * ``globals`` (object, optional): A dictionary of globals for use in the
-    context query.
-
-  * ``max_object_count`` (int, optional): Maximum number of objects to return;
-    default is 5.
-
-* ``stream`` (boolean, optional): Specifies whether the response should be
-  streamed. Defaults to false.
+* ``stream`` (boolean, optional): Specifies whether the response should be
+  streamed. Defaults to false.
 
-* ``prompt`` (object, optional): Settings that define a prompt. Omit to use the
-  default prompt.
-
-  You may specify an existing prompt by its ``name`` or ``id``, you may define
-  a custom prompt inline by sending an array of objects, or you may do both to
-  augment an existing prompt with additional custom messages.
-
-  * ``name`` (string, optional) or ``id`` (string, optional): The ``name`` or
-    ``id`` of an existing custom prompt to use. Provide only one of these if
-    you want to use or start from an existing prompt.
-
-  * ``custom`` (array of objects, optional): Custom prompt messages, each
-    containing a ``role`` and ``content``. If no ``name`` or ``id`` was
-    provided, the custom messages provided here become the prompt. If one of
-    those was provided, these messages will be added to that existing prompt.
+* ``prompt`` (object, optional): Settings that define a prompt. Omit to use
+  the default prompt.
+
+  * ``name`` (string, optional): Name of predefined prompt.
+  * ``id`` (string, optional): ID of predefined prompt.
+  * ``custom`` (array of objects, optional): Custom prompt messages, each
+    containing a ``role`` and ``content``. If no ``name`` or ``id`` was
+    provided, the custom messages provided here become the prompt. If one of
+    those was provided, these messages will be added to that existing prompt.
+
+    * ``role`` (string): "system", "user", "assistant", or "tool".
+    * ``content`` (string|object): Content of the message.
+    * ``tool_call_id`` (string): Identifier for tool call.
+    * ``tool_calls`` (array): Array of tool calls.
 
+* ``temperature`` (number, optional): Sampling temperature.
+
+* ``top_p`` (number, optional): Nucleus sampling parameter.
+
+* ``max_tokens`` (number, optional): Maximum tokens to generate.
+
+* ``seed`` (number, optional): Random seed.
+
+* ``safe_prompt`` (boolean, optional): Enable safety features.
+
+* ``top_k`` (number, optional): Top-k sampling parameter.
+
+* ``logit_bias`` (object, optional): Token biasing.
+
+* ``logprobs`` (number, optional): Return token log probabilities.
+
+* ``user`` (string, optional): User identifier.
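
To make the shape concrete, an illustrative request; the endpoint, credentials,
model, and context expression below are placeholders rather than values taken
from this page:

.. code-block:: bash

    # Sketch only: endpoint shape and credentials are assumptions.
    curl -s -X POST "http://localhost:10700/branch/main/ai/rag" \
      --user "admin:password" \
      -H "Content-Type: application/json" \
      -d '{
            "context": {"query": "select Document", "max_object_count": 3},
            "model": "gpt-4o-mini",
            "query": "Summarize what the documents say about vector indexes.",
            "stream": false
          }'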


Example request
---------------
@@ -340,7 +296,7 @@ stream.
 
 **Example SSE response**
 
-.. code-block::
+.. code-block:: text
    :class: collapsible
 
    event: message_start
