
Commit 93940fb: add KubeAI integration
samos123 committed Sep 6, 2024 (1 parent: 20b6cd3)
Showing 6 changed files with 702 additions and 0 deletions.
developers/weaviate/model-providers/_includes/provider.vectorizer.py
@@ -549,6 +549,31 @@
# clean up
client.collections.delete("DemoCollection")

# START FullVectorizerKubeAI
from weaviate.classes.config import Configure

client.collections.create(
    "DemoCollection",
    # highlight-start
    vectorizer_config=[
        Configure.NamedVectors.text2vec_openai(
            name="title_vector",
            source_properties=["title"],
            # Further options
            model="nomic-embed-text-cpu",
            dimensions=8192,
            base_url="http://kubeai/openai/v1",
        )
    ],
    # highlight-end
    # Additional parameters not shown
)
# END FullVectorizerKubeAI

# clean up
client.collections.delete("DemoCollection")


# START BasicVectorizerAzureOpenAI
from weaviate.classes.config import Configure

developers/weaviate/model-providers/_includes/provider.vectorizer.ts
@@ -606,6 +606,36 @@ await client.collections.create({
// Clean up
await client.collections.delete('DemoCollection');

// START FullVectorizerKubeAI
await client.collections.create({
  name: 'DemoCollection',
  properties: [
    {
      name: 'title',
      dataType: 'text' as const,
    },
  ],
  // highlight-start
  vectorizers: [
    weaviate.configure.vectorizer.text2VecOpenAI(
      {
        name: 'title_vector',
        sourceProperties: ['title'],
        model: 'nomic-embed-text-cpu',
        dimensions: 8192,
        baseURL: 'http://kubeai/openai/v1',
      },
    ),
  ],
  // highlight-end
  // Additional parameters not shown
});
// END FullVectorizerKubeAI

// Clean up
await client.collections.delete('DemoCollection');


// START BasicVectorizerAzureOpenAI
await client.collections.create({
name: 'DemoCollection',
4 changes: 4 additions & 0 deletions developers/weaviate/model-providers/kubeai/_category_.json
@@ -0,0 +1,4 @@
{
  "label": "KubeAI (Locally hosted)",
  "position": 320
}
292 changes: 292 additions & 0 deletions developers/weaviate/model-providers/kubeai/embeddings.md
@@ -0,0 +1,292 @@
---
title: Text Embeddings
sidebar_position: 20
# image: og/docs/integrations/provider_integrations_openai.jpg
# tags: ['model providers', 'openai', 'embeddings']
---

# KubeAI Embeddings with Weaviate

import BetaPageNote from '../_includes/beta_pages.md';

<BetaPageNote />

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import FilteredTextBlock from '@site/src/components/Documentation/FilteredTextBlock';
import PyConnect from '!!raw-loader!../_includes/provider.connect.py';
import TSConnect from '!!raw-loader!../_includes/provider.connect.ts';
import PyCode from '!!raw-loader!../_includes/provider.vectorizer.py';
import TSCode from '!!raw-loader!../_includes/provider.vectorizer.ts';

Weaviate's OpenAI API integration allows you to access KubeAI models directly from Weaviate.

[KubeAI](https://github.com/substratusai/kubeai) provides a private, OpenAI-compatible API endpoint for serving open-source or custom embedding models.

[Configure a Weaviate vector index](#configure-the-vectorizer) to use a KubeAI embedding model through the OpenAI-compatible integration, and Weaviate will generate embeddings for various operations using the specified model. This feature is called the *vectorizer*.

At [import time](#data-import), Weaviate generates text object embeddings and saves them into the index. For [vector](#vector-near-text-search) and [hybrid](#hybrid-search) search operations, Weaviate converts text queries into embeddings.

![Embedding integration illustration](../_includes/integration_openai_embedding.png)

## Requirements

### Weaviate configuration

Your Weaviate instance must be configured with the OpenAI vectorizer integration (`text2vec-openai`) module.

<details>
<summary>For Weaviate Cloud (WCD) users</summary>

This integration is enabled by default on Weaviate Cloud (WCD) serverless instances.

</details>

<details>
<summary>For self-hosted users</summary>

- Check the [cluster metadata](../../config-refs/meta.md) to verify if the module is enabled.
- Follow the [how-to configure modules](../../configuration/modules.md) guide to enable the module in Weaviate.

</details>

### API credentials

This integration requires an OpenAI API key to be provided to Weaviate. However, KubeAI ignores the key, so any placeholder value works.

Provide the API key to Weaviate using one of the following methods:

- Set the `OPENAI_APIKEY` environment variable that is available to Weaviate.
- Provide the API key at runtime, as shown in the examples below.

<Tabs groupId="languages">

<TabItem value="py" label="Python API v4">
<FilteredTextBlock
text={PyConnect}
startMarker="# START OpenAIInstantiation"
endMarker="# END OpenAIInstantiation"
language="py"
/>
</TabItem>

<TabItem value="js" label="JS/TS API v3">
<FilteredTextBlock
text={TSConnect}
startMarker="// START OpenAIInstantiation"
endMarker="// END OpenAIInstantiation"
language="ts"
/>
</TabItem>

</Tabs>
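For reference, the following is a minimal Python sketch of this pattern (assuming a self-hosted Weaviate instance and the Python client v4); since KubeAI ignores the key, any placeholder value works:

```python
import weaviate

# Any value works here: Weaviate forwards the key to KubeAI, which ignores it.
client = weaviate.connect_to_local(
    headers={"X-OpenAI-Api-Key": "placeholder-key"}
)
```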

## Configure the vectorizer

[Configure a Weaviate index](../../manage-data/collections.mdx#specify-a-vectorizer) to use a KubeAI embedding model by setting the vectorizer as follows:

You can specify any embedding model deployed in KubeAI for the vectorizer to use, as shown in the following configuration examples.

You must specify a model name for the integration to work with KubeAI; no default model is configured.

KubeAI comes with a `nomic-embed-text-cpu` model that can be used for text embeddings.
You can enable the model by setting `enabled: true` in the `helm-values.yaml` file.

Create a file named `helm-values.yaml` with the following content:
```yaml
models:
  catalog:
    nomic-embed-text-cpu:
      enabled: true
      minReplicas: 1
```
Afterwards, apply the new configuration to the KubeAI Helm chart:
```bash
helm repo add kubeai https://www.kubeai.org
helm repo update
helm upgrade --install kubeai kubeai/kubeai \
    -f ./helm-values.yaml --reuse-values
```
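
As an optional sanity check, the sketch below requests an embedding directly from KubeAI's OpenAI-compatible endpoint (it assumes the `requests` library and network access to the `kubeai` service, for example from inside the cluster or through a port-forward):

```python
import requests

# Request a single embedding from KubeAI's OpenAI-compatible API.
resp = requests.post(
    "http://kubeai/openai/v1/embeddings",
    json={"model": "nomic-embed-text-cpu", "input": "hello world"},
    timeout=60,
)
resp.raise_for_status()
print(len(resp.json()["data"][0]["embedding"]))  # embedding dimensionality
```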

Now you should be able to configure the vectorizer with the model name `nomic-embed-text-cpu`.

<Tabs groupId="languages">
<TabItem value="py" label="Python API v4">
<FilteredTextBlock
text={PyCode}
startMarker="# START FullVectorizerKubeAI"
endMarker="# END FullVectorizerKubeAI"
language="py"
/>
</TabItem>

<TabItem value="js" label="JS/TS API v3">
<FilteredTextBlock
text={TSCode}
startMarker="// START FullVectorizerKubeAI"
endMarker="// END FullVectorizerKubeAI"
language="ts"
/>
</TabItem>

</Tabs>

## Data import

After configuring the vectorizer, [import data](../../manage-data/import.mdx) into Weaviate. Weaviate generates embeddings for text objects using the specified model.

<Tabs groupId="languages">

<TabItem value="py" label="Python API v4">
<FilteredTextBlock
text={PyCode}
startMarker="# START BatchImportExample"
endMarker="# END BatchImportExample"
language="py"
/>
</TabItem>

<TabItem value="js" label="JS/TS API v3">
<FilteredTextBlock
text={TSCode}
startMarker="// START BatchImportExample"
endMarker="// END BatchImportExample"
language="ts"
/>
</TabItem>

</Tabs>
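The following is a minimal Python sketch of the batch import pattern above, assuming the `DemoCollection` configured earlier and a hypothetical `source_objects` list of dicts with a `title` key:

```python
collection = client.collections.get("DemoCollection")

with collection.batch.dynamic() as batch:
    for src_obj in source_objects:
        # No vector is provided, so Weaviate asks KubeAI to generate one.
        batch.add_object(properties={"title": src_obj["title"]})
```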

:::tip Re-use existing vectors
If you already have a compatible model vector available, you can provide it directly to Weaviate. This can be useful if you have already generated embeddings using the same model and want to use them in Weaviate, such as when migrating data from another system.
:::
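
For example, here is a sketch of inserting an object with a pre-computed vector, assuming a hypothetical `existing_vector` produced by the same `nomic-embed-text-cpu` model:

```python
collection = client.collections.get("DemoCollection")

collection.data.insert(
    properties={"title": "A holiday film"},
    # Weaviate stores this vector as-is instead of calling KubeAI.
    vector={"title_vector": existing_vector},
)
```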

## Searches

Once the vectorizer is configured, Weaviate will perform vector and hybrid search operations using the specified KubeAI model.

![Embedding integration at search illustration](../_includes/integration_openai_embedding_search.png)

### Vector (near text) search

When you perform a [vector search](../../search/similarity.md#search-with-text), Weaviate converts the text query into an embedding using the specified model and returns the most similar objects from the database.

The query below returns the `n` most similar objects from the database, set by `limit`.

<Tabs groupId="languages">

<TabItem value="py" label="Python API v4">
<FilteredTextBlock
text={PyCode}
startMarker="# START NearTextExample"
endMarker="# END NearTextExample"
language="py"
/>
</TabItem>

<TabItem value="js" label="JS/TS API v3">
<FilteredTextBlock
text={TSCode}
startMarker="// START NearTextExample"
endMarker="// END NearTextExample"
language="ts"
/>
</TabItem>

</Tabs>
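A minimal Python sketch of such a query (the query string and `limit` are placeholders):

```python
collection = client.collections.get("DemoCollection")

response = collection.query.near_text(
    query="A holiday film",  # KubeAI embeds this query string
    limit=2,
)
for obj in response.objects:
    print(obj.properties["title"])
```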

### Hybrid search

:::info What is a hybrid search?
A hybrid search performs a vector search and a keyword (BM25) search, before [combining the results](../../search/hybrid.md#change-the-ranking-method) to return the best matching objects from the database.
:::

When you perform a [hybrid search](../../search/hybrid.md), Weaviate converts the text query into an embedding using the specified model and returns the best scoring objects from the database.

The query below returns the `n` best scoring objects from the database, set by `limit`.

<Tabs groupId="languages">

<TabItem value="py" label="Python API v4">
<FilteredTextBlock
text={PyCode}
startMarker="# START HybridExample"
endMarker="# END HybridExample"
language="py"
/>
</TabItem>

<TabItem value="js" label="JS/TS API v3">
<FilteredTextBlock
text={TSCode}
startMarker="// START HybridExample"
endMarker="// END HybridExample"
language="ts"
/>
</TabItem>

</Tabs>
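A minimal Python sketch of a hybrid query; the optional `alpha` parameter weights the vector score against the keyword score:

```python
collection = client.collections.get("DemoCollection")

response = collection.query.hybrid(
    query="A holiday film",  # embedded by KubeAI for the vector part of the search
    alpha=0.5,               # optional: 0 = pure keyword, 1 = pure vector
    limit=2,
)
for obj in response.objects:
    print(obj.properties["title"])
```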

## References

### Vectorizer parameters

- `model`: The KubeAI model name.
- `dimensions`: The number of dimensions for the model.
- `baseURL`: The OpenAI compatible endpoint provided by KubeAI.

In most cases, the `baseURL` is `http://kubeai/openai/v1`, unless Weaviate is deployed in a different cluster or namespace than KubeAI.

#### Example configuration

The following examples show how to configure KubeAI-specific options.

<Tabs groupId="languages">
<TabItem value="py" label="Python API v4">
<FilteredTextBlock
text={PyCode}
startMarker="# START FullVectorizerKubeAI"
endMarker="# END FullVectorizerKubeAI"
language="py"
/>
</TabItem>

<TabItem value="js" label="JS/TS API v3">
<FilteredTextBlock
text={TSCode}
startMarker="// START FullVectorizerKubeAI"
endMarker="// END FullVectorizerKubeAI"
language="ts"
/>
</TabItem>

</Tabs>

For further details on model parameters, see the [OpenAI API documentation](https://platform.openai.com/docs/api-reference/embeddings).


## Further resources

### Other integrations

- [KubeAI generative models + Weaviate](./generative.md).

### Code examples

Once the integrations are configured at the collection level, data management and search operations in Weaviate work identically to any other collection. See the following model-agnostic examples:

- The [how-to: manage data](../../manage-data/index.md) guides show how to perform data operations (i.e. create, update, delete).
- The [how-to: search](../../search/index.md) guides show how to perform search operations (i.e. vector, keyword, hybrid) as well as retrieval augmented generation.

### External resources

- OpenAI [Embed API documentation](https://platform.openai.com/docs/api-reference/embeddings)

## Questions and feedback

import DocsFeedback from '/_includes/docs-feedback.mdx';

<DocsFeedback/>