
Commit

Merge branch 'main' into montebrown/main
* main:
  feat: Support for Ollama keep_alive API parameter (brainlid#237)
  support for o1 OpenAI model (brainlid#234)
  minor test cleanup
  phi_4 chat template support fix after merge
  feat: apply chat template from callback (brainlid#231)
  Add Bumblebee Phi-4 (brainlid#233)
  updated changelog
  update version and docs outline (brainlid#229)
  fix: enable verbose_deltas (brainlid#197)
  feat: Enable :inet6 for Req.new (brainlid#227)
  Breaking change: consolidate LLM callback functions (brainlid#228)
brainlid committed Jan 22, 2025
2 parents d66302a + 0c10c33 commit 6f67913
Showing 33 changed files with 899 additions and 438 deletions.
13 changes: 8 additions & 5 deletions .github/workflows/elixir.yml
@@ -2,6 +2,8 @@
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
#
# https://github.com/erlef/setup-beam/tree/v1.18.2

name: Elixir CI

@@ -17,6 +19,7 @@ env:
GOOGLE_API_KEY: invalid
AWS_ACCESS_KEY_ID: invalid
AWS_SECRET_ACCESS_KEY: invalid
AWS_REGION: invalid

permissions:
contents: read
@@ -28,12 +31,12 @@ jobs:
runs-on: ubuntu-latest

steps:
-      - uses: actions/checkout@v3
-      - name: Set up Elixir
-        uses: erlef/setup-beam@61e01a43a562a89bfc54c7f9a378ff67b03e4a21 # v1.16.0
+      - uses: actions/checkout@v4
+      - name: Set up Elixir
+        uses: erlef/setup-beam@v1
         with:
-          elixir-version: '1.15.2' # [Required] Define the Elixir version
-          otp-version: '26.0' # [Required] Define the Erlang/OTP version
+          elixir-version: 'v1.18.1-otp-27' # [Required] Define the Elixir version
+          otp-version: 'OTP-27.0' # [Required] Define the Erlang/OTP version
- name: Restore dependencies cache
uses: actions/cache@v3
with:
153 changes: 153 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,158 @@
# Changelog

## v0.3.0-rc.2 (2025-01-08)

### Breaking Changes

How LLM callbacks are registered has changed. The callback function's arguments have also changed.

Specifically, this refers to the callbacks:

- `on_llm_new_delta`
- `on_llm_new_message`
- `on_llm_ratelimit_info`
- `on_llm_token_usage`

The callbacks are still supported, but _how_ they are registered and the arguments passed to the linked functions have changed.

Previously, an LLM callback's first argument was the chat model; it is now the LLMChain that is running it.

A ChatModel still has the `callbacks` struct attribute, but it should be considered private.

#### Why the change
Having some callback functions registered on the chat model and some registered on the chain was confusing. What goes where? Why the difference?

This change moves them all to the same place, removing a source of confusion.

The primary reason for the change is that important information about the **context** of the callback event was not available to the callback function. Information stored in the chain's `custom_context` can be valuable and important, like a user's account ID, but it was not easily accessible in a callback like `on_llm_token_usage` where we might want to record the user's token usage linked to their account.

This important change passes the entire `LLMChain` through to the callback function, giving the function access to the `custom_context`. This makes the LLM (aka chat model) callback functions expect the same arguments as the other chain-focused callback functions.

This both unifies how the callbacks operate and what data they have available, and it groups them all together.

#### Adapting to the change
A before example:

```elixir
llm_events = %{
# 1st argument was the chat model
on_llm_new_delta: fn _chat_model, %MessageDelta{} = delta ->
# ...
end,
on_llm_token_usage: fn _chat_model, usage_data ->
# ...
end
}

chain_events = %{
on_message_processed: fn _chain, tool_msg ->
# ...
end
}

# LLM callback events were registered on the chat model
chat_model = ChatOpenAI.new!(%{stream: true, callbacks: [llm_events]})

{:ok, updated_chain} =
%{
llm: chat_model,
custom_context: %{user_id: 123}
}
|> LLMChain.new!()
|> LLMChain.add_message(Message.new_system!())
|> LLMChain.add_message(Message.new_user!("Say hello!"))
# Chain callback events were registered on the chain
|> LLMChain.add_callback(chain_events)
|> LLMChain.run()
```

This is updated to the following (comments highlight the changes):

```elixir
# Events are all combined together
events = %{
# 1st argument is now the LLMChain
on_llm_new_delta: fn _chain, %MessageDelta{} = delta ->
# ...
end,
on_llm_token_usage: fn %LLMChain{} = chain, usage_data ->
# ... `chain.custom_context` is available
end,
on_message_processed: fn _chain, tool_msg ->
# ...
end
}

# callbacks removed from Chat Model setup
chat_model = ChatOpenAI.new!(%{stream: true})

{:ok, updated_chain} =
%{
llm: chat_model,
custom_context: %{user_id: 123}
}
|> LLMChain.new!()
|> LLMChain.add_message(Message.new_system!())
|> LLMChain.add_message(Message.new_user!("Say hello!"))
# All events are registered through `add_callback`
|> LLMChain.add_callback(events)
|> LLMChain.run()
```

If you still need access to the LLM in the callback functions, it's available in `chain.llm`.
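For example, a single handler can read both the model and the custom context from the chain it receives. This is a standalone sketch, not code from the release: plain maps stand in for the real `%LLMChain{}` and `%TokenUsage{}` structs, and the field names on `usage` are illustrative.

```elixir
# Standalone sketch: a handler in the new style receives the chain as its
# first argument, so both the model (`chain.llm`) and the per-run context
# (`chain.custom_context`) are in reach. Plain maps stand in for the real
# %LLMChain{} and %TokenUsage{} structs here.
handlers = %{
  on_llm_token_usage: fn chain, usage ->
    "user #{chain.custom_context.user_id} used " <>
      "#{usage.input + usage.output} tokens on #{chain.llm.model}"
  end
}

# Simulate the chain firing the callback, passing itself as the 1st argument.
chain = %{llm: %{model: "example-model"}, custom_context: %{user_id: 123}}
IO.puts(handlers.on_llm_token_usage.(chain, %{input: 10, output: 5}))
```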

This is a breaking change, but updating should be fairly easy.

This consolidates how callback events work and makes them more powerful by exposing important information to the callback functions.

If you were using the `LLMChain.add_llm_callback/2`, the change is even easier:

From:
```elixir
%{
llm: chat_model,
custom_context: %{user_id: 123}
}
|> LLMChain.new!()
# ...
# LLM callback events could be added later this way
|> LLMChain.add_llm_callback(llm_events)
|> LLMChain.run()
```

To:
```elixir
%{
llm: chat_model,
custom_context: %{user_id: 123}
}
|> LLMChain.new!()
# ...
# Use the `add_callback` function instead
|> LLMChain.add_callback(llm_events)
|> LLMChain.run()
```

#### Details of the change
- Removal of the `LangChain.ChatModels.LLMCallbacks` module.
- The LLM-specific callbacks were migrated to `LangChain.Chains.ChainCallbacks`.
- Removal of `LangChain.Chains.LLMChain.add_llm_callback/2`
- `LangChain.ChatModels.ChatOpenAI.new/1` and `LangChain.ChatModels.ChatOpenAI.new!/1` no longer accept `:callbacks` on the chat model.
- Removal of `LangChain.ChatModels.ChatModel.add_callback/2`

### What else Changed
* add explicit message support in summarizer by @brainlid in https://github.com/brainlid/langchain/pull/220
* Change abacus to optional dep by @nallwhy in https://github.com/brainlid/langchain/pull/223
* Remove constraint of alternating user, assistant by @GenericJam in https://github.com/brainlid/langchain/pull/222
* Breaking change: consolidate LLM callback functions by @brainlid in https://github.com/brainlid/langchain/pull/228
* feat: Enable :inet6 for Req.new for Ollama by @mpope9 in https://github.com/brainlid/langchain/pull/227
* fix: enable verbose_deltas by @cristineguadelupe in https://github.com/brainlid/langchain/pull/197

### New Contributors
* @nallwhy made their first contribution in https://github.com/brainlid/langchain/pull/223
* @GenericJam made their first contribution in https://github.com/brainlid/langchain/pull/222
* @mpope9 made their first contribution in https://github.com/brainlid/langchain/pull/227

## v0.3.0-rc.1 (2024-12-15)

### Breaking Changes
8 changes: 4 additions & 4 deletions lib/callbacks.ex
@@ -13,15 +13,15 @@ defmodule LangChain.Callbacks do
@spec fire([map()], atom(), [any()]) :: :ok | no_return()
def fire(callbacks, callback_name, arguments)

-  def fire(callbacks, :on_llm_new_message, [model, messages]) when is_list(messages) do
+  def fire(callbacks, :on_llm_new_message, [messages]) when is_list(messages) do
     Enum.each(messages, fn m ->
-      fire(callbacks, :on_llm_new_message, [model, m])
+      fire(callbacks, :on_llm_new_message, [m])
     end)
   end
 
-  def fire(callbacks, :on_llm_new_delta, [model, deltas]) when is_list(deltas) do
+  def fire(callbacks, :on_llm_new_delta, [deltas]) when is_list(deltas) do
     Enum.each(deltas, fn d ->
-      fire(callbacks, :on_llm_new_delta, [model, d])
+      fire(callbacks, :on_llm_new_delta, [d])
     end)
   end

103 changes: 94 additions & 9 deletions lib/chains/chain_callbacks.ex
@@ -1,6 +1,6 @@
defmodule LangChain.Chains.ChainCallbacks do
@moduledoc """
-  Defines the callbacks fired by an LLMChain.
+  Defines the callbacks fired by an LLMChain and LLM module.
A callback handler is a map that defines the specific callback event with a
function to execute for that event.
@@ -13,15 +13,93 @@ defmodule LangChain.Chains.ChainCallbacks do
live_view_pid = self()
my_handlers = %{
-      handle_chain_error_message_created: fn new_message -> send(live_view_pid, {:received_message, new_message})
+      on_llm_new_message: fn _context, new_message -> send(live_view_pid, {:received_message, new_message}),
+      handle_chain_error_message_created: fn _context, new_message -> send(live_view_pid, {:received_message, new_message})
     }
 
-    model = SomeLLM.new!(%{callbacks: [my_handlers]})
-    chain = LLMChain.new!(%{llm: model})
+    model = SomeLLM.new!(%{...})
+    chain =
+      %{llm: model}
+      |> LLMChain.new!()
+      |> LLMChain.add_callback(my_handlers)
"""

alias LangChain.Chains.LLMChain
alias LangChain.Message
alias LangChain.MessageDelta
alias LangChain.TokenUsage

@typedoc """
Executed when an LLM is streaming a response and a new MessageDelta (or token)
was received.
- `:index` is optionally present if the LLM supports sending `n` versions of a
response.
The return value is discarded.
## Example
A function declaration that matches the signature.
def handle_llm_new_delta(chain, delta) do
IO.write(delta)
end
"""
@type llm_new_delta :: (LLMChain.t(), MessageDelta.t() -> any())

@typedoc """
Executed when an LLM is not streaming and a full message was received.
The return value is discarded.
## Example
A function declaration that matches the signature.
def handle_llm_new_message(chain, message) do
IO.inspect(message)
end
"""
@type llm_new_message :: (LLMChain.t(), Message.t() -> any())

@typedoc """
Executed when an LLM (typically a service) responds with rate limiting
information.
The specific rate limit information depends on the LLM. It returns a map with
all the available information included.
The return value is discarded.
## Example
A function declaration that matches the signature.
def handle_llm_ratelimit_info(chain, %{} = info) do
IO.inspect(info)
end
"""
@type llm_ratelimit_info :: (LLMChain.t(), info :: %{String.t() => any()} -> any())

@typedoc """
Executed when an LLM response reports the token usage in a
`LangChain.TokenUsage` struct. The data returned depends on the LLM.
The return value is discarded.
## Example
A function declaration that matches the signature.
def handle_llm_token_usage(chain, %TokenUsage{} = usage) do
IO.inspect(usage)
end
"""
@type llm_token_usage :: (LLMChain.t(), TokenUsage.t() -> any())

@typedoc """
Executed when an LLMChain has completed processing a received assistant
@@ -108,10 +186,17 @@ defmodule LangChain.Chains.ChainCallbacks do
The supported set of callbacks for an LLM module.
"""
@type chain_callback_handler :: %{
-          on_message_processed: chain_message_processed(),
-          on_message_processing_error: chain_message_processing_error(),
-          on_error_message_created: chain_error_message_created(),
-          on_tool_response_created: chain_tool_response_created(),
-          on_retries_exceeded: chain_retries_exceeded()
+          # model-level callbacks
+          optional(:on_llm_new_delta) => llm_new_delta(),
+          optional(:on_llm_new_message) => llm_new_message(),
+          optional(:on_llm_ratelimit_info) => llm_ratelimit_info(),
+          optional(:on_llm_token_usage) => llm_token_usage(),
+
+          # Chain-level callbacks
+          optional(:on_message_processed) => chain_message_processed(),
+          optional(:on_message_processing_error) => chain_message_processing_error(),
+          optional(:on_error_message_created) => chain_error_message_created(),
+          optional(:on_tool_response_created) => chain_tool_response_created(),
+          optional(:on_retries_exceeded) => chain_retries_exceeded()
}
end
17 changes: 5 additions & 12 deletions lib/chains/llm_chain.ex
@@ -122,7 +122,6 @@ defmodule LangChain.Chains.LLMChain do
use Ecto.Schema
import Ecto.Changeset
require Logger
-  alias LangChain.ChatModels.ChatModel
alias LangChain.Callbacks
alias LangChain.Chains.ChainCallbacks
alias LangChain.PromptTemplate
@@ -205,7 +204,7 @@ defmodule LangChain.Chains.LLMChain do
"""
@type message_processor :: (t(), Message.t() -> processor_return())

-  @create_fields [:llm, :tools, :custom_context, :max_retry_count, :callbacks, :verbose]
+  @create_fields [:llm, :tools, :custom_context, :max_retry_count, :callbacks, :verbose, :verbose_deltas]
@required_fields [:llm]

@doc """
@@ -523,8 +522,11 @@ defmodule LangChain.Chains.LLMChain do
# then execute the `.call` function on that module.
%module{} = chain.llm

+    # wrap and link the model's callbacks.
+    use_llm = Utils.rewrap_callbacks_for_model(chain.llm, chain.callbacks, chain)
+
     # handle and output response
-    case module.call(chain.llm, chain.messages, chain.tools) do
+    case module.call(use_llm, chain.messages, chain.tools) do
{:ok, [%Message{} = message]} ->
if chain.verbose, do: IO.inspect(message, label: "SINGLE MESSAGE RESPONSE")
{:ok, process_message(chain, message)}
@@ -1023,15 +1025,6 @@ defmodule LangChain.Chains.LLMChain do
%LLMChain{chain | callbacks: callbacks ++ [additional_callback]}
end

-  @doc """
-  Add a `LangChain.ChatModels.LLMCallbacks` callback map to the chain's `:llm` model if
-  it supports the `:callback` key.
-  """
-  @spec add_llm_callback(t(), map()) :: t()
-  def add_llm_callback(%LLMChain{llm: model} = chain, callback_map) do
-    %LLMChain{chain | llm: ChatModel.add_callback(model, callback_map)}
-  end

# a pipe-friendly execution of callbacks that returns the chain
defp fire_callback_and_return(%LLMChain{} = chain, callback_name, additional_arguments)
when is_list(additional_arguments) do