add links

micpst committed May 27, 2024
1 parent 6e42ce4 commit 11fdeda
Showing 11 changed files with 28 additions and 28 deletions.
20 changes: 10 additions & 10 deletions docs/how-to/llms/custom.md
@@ -4,7 +4,7 @@ LLM is one of the main components of the db-ally ecosystem. It handles all inter

## Implementing a Custom LLM

-The `LLM` class is an abstract base class that provides a framework for interacting with a Large Language Model (LLM). To create a custom LLM, you need to create a subclass of `LLM` and implement the required methods and properties.
+The [`LLM`](../../reference/llms/index.md#dbally.llms.base.LLM) class is an abstract base class that provides a framework for interacting with a Large Language Model. To create a custom LLM, you need to create a subclass of [`LLM`](../../reference/llms/index.md#dbally.llms.base.LLM) and implement the required methods and properties.

Here's a step-by-step guide:

@@ -20,22 +20,22 @@ class MyLLM(LLM[LiteLLMOptions]):
_options_cls = LiteLLMOptions
```

-In this example we will be using `LiteLLMOptions`, which contain all options supported by most popular LLM APIs. If you need a different interface, see [Customising LLM Options](#customising-llm-options) to learn how to implement it.
+In this example we will be using [`LiteLLMOptions`](../../reference/llms/litellm.md#dbally.llms.clients.litellm.LiteLLMOptions), which contain all options supported by most popular LLM APIs. If you need a different interface, see [Customising LLM Options](#customising-llm-options) to learn how to implement it.
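
Putting step 1 together, the subclass declaration with its imports might look like the sketch below (the import paths are inferred from the reference links above; double-check them against your installed db-ally version):

```python
from dbally.llms.base import LLM
from dbally.llms.clients.litellm import LiteLLMOptions


class MyLLM(LLM[LiteLLMOptions]):
    # Reusing LiteLLMOptions means MyLLM accepts the same option set
    # as the built-in LiteLLM implementation.
    _options_cls = LiteLLMOptions
```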

### Step 2: Create the custom LLM client

-The `_client` property is an abstract method that must be implemented in your subclass. This property should return an instance of `LLMClient` that your LLM will use to interact with the model:
+The [`client`](../../reference/llms/index.md#dbally.llms.base.LLM.client) property is an abstract method that must be implemented in your subclass. This property should return an instance of [`LLMClient`](../../reference/llms/index.md#dbally.llms.clients.base.LLMClient) that your LLM will use to interact with the model:

```python
class MyLLM(LLM[LiteLLMOptions]):
_options_cls = LiteLLMOptions

@cached_property
-def _client(self) -> MyLLMClient:
+def client(self) -> MyLLMClient:
return MyLLMClient()
```

-`MyLLMClient` should be a class that implements the `LLMClient` interface.
+`MyLLMClient` should be a class that implements the [`LLMClient`](../../reference/llms/index.md#dbally.llms.clients.base.LLMClient) interface.

```python
from dbally.llms.clients.base import LLMClient
@@ -52,11 +52,11 @@ class MyLLMClient(LLMClient[LiteLLMOptions]):
# Your LLM API call
```

-The `call` method is an abstract method that must be implemented in your subclass. This method should call the LLM inference API and return the response.
+The [`call`](../../reference/llms/index.md#dbally.llms.clients.base.LLMClient.call) method is an abstract method that must be implemented in your subclass. This method should call the LLM inference API and return the response.
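
For illustration, a minimal client skeleton could look roughly like the sketch below. The constructor, the exact `call` signature, and the `httpx`-based endpoint are assumptions made for this example; mirror the signature defined on the `LLMClient` base class in your own implementation.

```python
from typing import Dict, Optional

import httpx

from dbally.llms.clients.base import LLMClient
from dbally.llms.clients.litellm import LiteLLMOptions


class MyLLMClient(LLMClient[LiteLLMOptions]):
    """Example client for a hypothetical self-hosted inference endpoint."""

    def __init__(self, base_url: str = "http://localhost:8000") -> None:
        # Assumption: the base class only needs a model name.
        super().__init__(model_name="my-model")
        self.base_url = base_url

    async def call(
        self,
        prompt,  # chat-formatted messages
        response_format: Optional[Dict[str, str]],
        options: LiteLLMOptions,
        event,  # audit event populated by db-ally
    ) -> str:
        # Forward the prompt to the (hypothetical) endpoint and return the text.
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{self.base_url}/generate",
                json={"messages": prompt, "response_format": response_format},
            )
        response.raise_for_status()
        return response.json()["text"]
```

Any transport can sit behind `call`; the only contract is that it returns the model's text completion.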

### Step 3: Use tokenizer to count tokens

-The `count_tokens` method is used to count the number of tokens in the messages. You can override this method in your custom class to use the tokenizer and count tokens specifically for your model.
+The [`count_tokens`](../../reference/llms/index.md#dbally.llms.base.LLM.count_tokens) method is used to count the number of tokens in the messages. You can override this method in your custom class to use the tokenizer and count tokens specifically for your model.

```python
class MyLLM(LLM[LiteLLMOptions]):
@@ -69,20 +69,20 @@ class MyLLM(LLM[LiteLLMOptions]):

### Step 4: Define custom prompt formatting

-The `_format_prompt` method is used to apply formatting to the prompt template. You can override this method in your custom class to change how the formatting is performed.
+The [`format_prompt`](../../reference/llms/index.md#dbally.llms.base.LLM.format_prompt) method is used to apply formatting to the prompt template. You can override this method in your custom class to change how the formatting is performed.

```python
class MyLLM(LLM[LiteLLMOptions]):

-def _format_prompt(self, template: PromptTemplate, fmt: Dict[str, str]) -> ChatFormat:
+def format_prompt(self, template: PromptTemplate, fmt: Dict[str, str]) -> ChatFormat:
# Apply custom formatting to the prompt template
```
!!!note
In general, implementation of this method is not required unless the LLM API does not support [OpenAI conversation formatting](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages){:target="_blank"}. If the model API expects a different format, override this method to avoid issues with inference call.
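
As a sketch of such an override, the snippet below folds system messages into the first remaining message, which some chat APIs without a system role expect. It assumes the default `ChatFormat` follows the OpenAI-style `role`/`content` convention mentioned in the note above and builds on the `MyLLM` class from the earlier steps:

```python
class MyLLM(LLM[LiteLLMOptions]):
    _options_cls = LiteLLMOptions

    def format_prompt(self, template: PromptTemplate, fmt: Dict[str, str]) -> ChatFormat:
        # Start from the default OpenAI-style chat formatting...
        chat = super().format_prompt(template, fmt)
        # ...then merge any system messages into the first remaining message.
        system_text = "\n".join(m["content"] for m in chat if m["role"] == "system")
        rest = [m for m in chat if m["role"] != "system"]
        if system_text and rest:
            rest[0] = {**rest[0], "content": f"{system_text}\n\n{rest[0]['content']}"}
        return rest
```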

## Customising LLM Options

-`LLMOptions` is a class that defines the options your LLM will use. To create custom options, you need to create a subclass of `LLMOptions` and define the required properties that will be passed to the `LLMClient`.
+[`LLMOptions`](../../reference/llms/index.md#dbally.llms.clients.base.LLMOptions) is a class that defines the options your LLM will use. To create custom options, you need to create a subclass of [`LLMOptions`](../../reference/llms/index.md#dbally.llms.clients.base.LLMOptions) and define the required properties that will be passed to the [`LLMClient`](../../reference/llms/index.md#dbally.llms.clients.base.LLMClient).

```python
from dbally.llms.base import LLMOptions
4 changes: 2 additions & 2 deletions docs/how-to/llms/litellm.md
@@ -1,6 +1,6 @@
# How-To: Use LiteLLM models

-db-ally comes with a ready-to-use LLM implementation called `LiteLLM` that uses the litellm package under the hood, providing access to all major LLM APIs such as OpenAI, Anthropic, VertexAI, Hugging Face and more.
+db-ally comes with a ready-to-use LLM implementation called [`LiteLLM`](../../reference/llms/litellm.md#dbally.llms.litellm.LiteLLM) that uses the litellm package under the hood, providing access to all major LLM APIs such as OpenAI, Anthropic, VertexAI, Hugging Face and more.

## Basic Usage

@@ -57,7 +57,7 @@ response = await my_collection.ask("Which LLM should I use?")

## Advanced Usage

-For more advanced users, you may also want to parametrize your LLM using `LiteLLMOptions`. Here is the list of available parameters:
+For more advanced users, you may also want to parametrize your LLM using [`LiteLLMOptions`](../../reference/llms/litellm.md#dbally.llms.clients.litellm.LiteLLMOptions). Here is the list of available parameters:

- `frequency_penalty`: *number or null (optional)* - It is used to penalize new tokens based on their frequency in the text so far.

8 changes: 4 additions & 4 deletions src/dbally/llms/base.py
@@ -36,12 +36,12 @@ def __init_subclass__(cls) -> None:

@cached_property
@abstractmethod
-def _client(self) -> LLMClient:
+def client(self) -> LLMClient:
"""
Client for the LLM.
"""

-def _format_prompt(self, template: PromptTemplate, fmt: Dict[str, str]) -> ChatFormat:
+def format_prompt(self, template: PromptTemplate, fmt: Dict[str, str]) -> ChatFormat:
"""
Applies formatting to the prompt template.
@@ -88,12 +88,12 @@ async def generate_text(
Text response from LLM.
"""
options = (self.default_options | options) if options else self.default_options
-prompt = self._format_prompt(template, fmt)
+prompt = self.format_prompt(template, fmt)
event = LLMEvent(prompt=prompt, type=type(template).__name__)
event_tracker = event_tracker or EventTracker()

async with event_tracker.track_event(event) as span:
-event.response = await self._client.call(
+event.response = await self.client.call(
prompt=prompt,
response_format=template.response_format,
options=options,
2 changes: 1 addition & 1 deletion src/dbally/llms/litellm.py
@@ -54,7 +54,7 @@ def __init__(
self.api_version = api_version

@cached_property
-def _client(self) -> LiteLLMClient:
+def client(self) -> LiteLLMClient:
"""
Client for the LLM.
"""
6 changes: 3 additions & 3 deletions tests/integration/test_llm_options.py
@@ -14,7 +14,7 @@ async def test_llm_options_propagation():
expected_options = MockLLMOptions(mock_property1=2, mock_property2="default mock")

llm = MockLLM(default_options=default_options)
llm._client.call = AsyncMock(return_value="MockView1")
llm.client.call = AsyncMock(return_value="MockView1")

collection = create_collection(
name="test_collection",
@@ -30,9 +30,9 @@ async def test_llm_options_propagation():
llm_options=custom_options,
)

-assert llm._client.call.call_count == 3
+assert llm.client.call.call_count == 3

-llm._client.call.assert_has_calls(
+llm.client.call.assert_has_calls(
[
call(
prompt=ANY,
2 changes: 1 addition & 1 deletion tests/unit/mocks.py
@@ -84,5 +84,5 @@ def __init__(self, default_options: Optional[MockLLMOptions] = None) -> None:
super().__init__("mock-llm", default_options)

@cached_property
-def _client(self) -> MockLLMClient:
+def client(self) -> MockLLMClient:
return MockLLMClient(model_name=self.model_name)
2 changes: 1 addition & 1 deletion tests/unit/test_iql_generator.py
@@ -41,7 +41,7 @@ def view() -> MockView:
@pytest.fixture
def llm() -> MockLLM:
llm = MockLLM()
llm._client.call = AsyncMock(return_value="LLM IQL mock answer")
llm.client.call = AsyncMock(return_value="LLM IQL mock answer")
return llm


2 changes: 1 addition & 1 deletion tests/unit/test_nl_responder.py
@@ -11,7 +11,7 @@
@pytest.fixture
def llm() -> MockLLM:
llm = MockLLM()
llm._client.call = AsyncMock(return_value="db-ally is the best")
llm.client.call = AsyncMock(return_value="db-ally is the best")
return llm


6 changes: 3 additions & 3 deletions tests/unit/test_prompt_builder.py
@@ -22,7 +22,7 @@ def llm():


def test_default_llm_format_prompt(llm, simple_template):
-prompt = llm._format_prompt(
+prompt = llm.format_prompt(
template=simple_template,
fmt={"question": "Example user question?"},
)
@@ -34,7 +34,7 @@ def test_default_llm_format_prompt(llm, simple_template):

def test_missing_format_dict(llm, simple_template):
with pytest.raises(KeyError):
-_ = llm._format_prompt(simple_template, fmt={})
+_ = llm.format_prompt(simple_template, fmt={})


@pytest.mark.parametrize(
@@ -66,7 +66,7 @@ def test_chat_order_validation(invalid_chat):
def test_dynamic_few_shot(llm, simple_template):
assert (
len(
-llm._format_prompt(
+llm.format_prompt(
simple_template.add_assistant_message("assistant message").add_user_message("user message"),
fmt={"question": "user question"},
)
2 changes: 1 addition & 1 deletion tests/unit/test_view_selector.py
@@ -17,7 +17,7 @@
def llm() -> LLM:
"""Return a mock LLM client."""
llm = MockLLM()
llm._client.call = AsyncMock(return_value="MockView1")
llm.client.call = AsyncMock(return_value="MockView1")
return llm


2 changes: 1 addition & 1 deletion tests/unit/views/text2sql/test_view.py
@@ -33,7 +33,7 @@ def sample_db() -> Engine:

async def test_text2sql_view(sample_db: Engine):
llm = MockLLM()
llm._client.call = AsyncMock(return_value="SELECT * FROM customers WHERE city = 'New York'")
llm.client.call = AsyncMock(return_value="SELECT * FROM customers WHERE city = 'New York'")

config = Text2SQLConfig(
tables={