Merge pull request #105 from brainlid/me-update-api-to-tools
update api to "tools" concept
brainlid authored Apr 27, 2024
2 parents 3ba0fe1 + 6ac8fcc commit 96aa9f7
Showing 36 changed files with 4,891 additions and 1,115 deletions.
11 changes: 11 additions & 0 deletions MODEL_BEHAVIORS.md
@@ -0,0 +1,11 @@
# Model Behaviors

This documents some of the differences between model behaviors, specifically how one LLM responds differently than another. This information is likely to get out of date quickly as models change, but it may still be helpful as a reference or starting point.

## Pre-filling the assistant's response

- ChatGPT 3.5 and 4 do not complete a pre-filled assistant response as we might hope. When instructed to provide an answer in an `<answer>{{ANSWER}}</answer>` template and given an assistant message starting with `<answer>` to complete, they do not include the closing `</answer>` tag. (A sketch of the technique follows this list.)
- 2024-04-17

- Anthropic's Claude 3 does respond well to pre-filled assistant responses, and the technique is officially encouraged.
- 2024-04-17
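
A minimal sketch of the pre-fill technique using this library's `Message` helpers (the template wording and prompts are illustrative, not from the tests above):

```elixir
alias LangChain.Message

# Instruct the model to answer inside an XML-style template, then pre-fill
# the opening tag as the start of the assistant's response.
messages = [
  Message.new_system!("Respond with the answer inside <answer></answer> tags."),
  Message.new_user!("What is the capital of France?"),
  # The model continues from here. Claude 3 completes the template and
  # includes the closing </answer> tag; ChatGPT 3.5/4 tends to omit it.
  Message.new_assistant!(%{content: "<answer>"})
]
```
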
15 changes: 15 additions & 0 deletions README.md
@@ -205,3 +205,18 @@ Executing a specific test, whether it is a `live_call` or not, will execute it c
When doing local development on the `LangChain` library itself, rename the `.envrc_template` to `.envrc` and populate it with your private API values. This is only used when running live tests that are explicitly requested.

Use a tool like [Direnv](https://direnv.net/) or [Dotenv](https://github.com/motdotla/dotenv) to load the API values into the ENV when using the library locally.


TODO: explicitly return only JSON-formatted data for a request (see the sketch below the link). Use it in an example? Use it for the data extraction chain? (assuming only OpenAI?)
- https://platform.openai.com/docs/guides/text-generation/json-mode
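
A rough sketch of what that could look like; the `json_response` option here is an assumption for illustration, standing in for OpenAI's documented `response_format: %{"type" => "json_object"}` request parameter:

```elixir
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Chains.LLMChain
alias LangChain.Message

# Hypothetical `json_response` option; not a confirmed field on ChatOpenAI.
{:ok, _chain, response} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-3.5-turbo", json_response: true})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("List three colors as a JSON object."))
  |> LLMChain.run()

# OpenAI requires the word "JSON" to appear in the prompt when JSON mode is on.
{:ok, data} = Jason.decode(response.content)
```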

TODO: For data extraction with OpenAI, the "tool_choice" parameter can be set to further force the use of that function. A sketch of the parameter shape follows.
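
For reference, this is OpenAI's documented parameter shape when forcing a specific function; whether and how `ChatOpenAI` should expose it is exactly what this TODO leaves open:

```elixir
# OpenAI's "tool_choice" request parameter when forcing a specific function
# rather than letting the model decide.
tool_choice = %{
  "type" => "function",
  "function" => %{"name" => "information_extraction"}
}
```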

TODO: Multiple tool calls must mean that multiple tool result messages can be returned in sequence. A sketch follows.
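
A sketch of that shape, assuming the `ToolCall`/`ToolResult` struct fields from this changeset (the `get_weather` tool, IDs, and results are illustrative):

```elixir
alias LangChain.Message
alias LangChain.Message.ToolCall
alias LangChain.Message.ToolResult

# One assistant message can request several tool calls at once.
assistant =
  Message.new_assistant!(%{
    tool_calls: [
      ToolCall.new!(%{call_id: "call_1", name: "get_weather", arguments: %{"city" => "Paris"}}),
      ToolCall.new!(%{call_id: "call_2", name: "get_weather", arguments: %{"city" => "Tokyo"}})
    ]
  })

# Each call then gets a matching result, returned in sequence.
results =
  Message.new_tool_result!(%{
    tool_results: [
      ToolResult.new!(%{tool_call_id: "call_1", content: "18°C and sunny"}),
      ToolResult.new!(%{tool_call_id: "call_2", content: "22°C and humid"})
    ]
  })
```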

TODO: Pull out the pre-tools OpenAI module for legacy API support? That would allow using it against OTHER APIs that have OpenAI compatibility and may not be updated yet.
- It would still need to be updated to use "Tool"-related messages. Assistant "tool_calls" would be different too.

TODO: add a `version` to the structs. The version is set by the model, so an OpenAI assistant message with tool_calls would carry either a string date or "v2" or something similar. It can be pattern matched on when writing to a newer model.

- Perhaps the ChatOpenAI struct carries the version number. Then a `for_api` call converts a tool to look like a function when targeting an older version. A hypothetical sketch follows.
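
A hypothetical sketch of that idea; none of these fields or clauses exist yet, and a plain map stands in for the chat model struct since `version` is not a real field on it:

```elixir
defmodule MyApp.VersionedSerializer do
  alias LangChain.Message.ToolCall

  # Hypothetical: the chat model carries a `version` that is matched on here.
  # Newer API versions serialize a tool call in the "tools" shape...
  def for_api(%{version: "v2"}, %ToolCall{} = call) do
    %{
      "type" => "function",
      "function" => %{"name" => call.name, "arguments" => Jason.encode!(call.arguments)}
    }
  end

  # ...while a legacy target downgrades the same call to the old
  # "function_call" shape.
  def for_api(%{version: "legacy"}, %ToolCall{} = call) do
    %{"name" => call.name, "arguments" => Jason.encode!(call.arguments)}
  end
end
```
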
17 changes: 14 additions & 3 deletions lib/chains/data_extraction_chain.ex
@@ -72,6 +72,7 @@ defmodule LangChain.Chains.DataExtractionChain do
  require Logger
  alias LangChain.PromptTemplate
  alias LangChain.Message
+ alias LangChain.Message.ToolCall
  alias LangChain.Chains.LLMChain

  @function_name "information_extraction"
@@ -101,15 +102,19 @@ Passage:
    {:ok, chain} = LLMChain.new(%{llm: llm, verbose: verbose})

    chain
-   |> LLMChain.add_functions(build_extract_function(json_schema))
+   |> LLMChain.add_tools(build_extract_function(json_schema))
    |> LLMChain.add_messages(messages)
    |> LLMChain.run()
    |> case do
      {:ok, _updated_chain,
       %Message{
         role: :assistant,
-        function_name: @function_name,
-        arguments: %{"info" => info}
+        tool_calls: [
+          %ToolCall{
+            name: @function_name,
+            arguments: %{"info" => info}
+          }
+        ]
       }}
      when is_list(info) ->
        {:ok, info}
@@ -136,6 +141,12 @@
    LangChain.Function.new!(%{
      name: @function_name,
      description: "Extracts the relevant information from the passage.",
+     function: fn args, _context ->
+       # NOTE: The function is not executed here because we won't be returning
+       # this to the LLM. The LLMChain does not run the function, but stops at
+       # the request for it.
+       {:ok, args}
+     end,
      parameters_schema: %{
        type: "object",
        properties: %{
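
For context, a sketch of a typical call into this chain, assuming the `run(llm, json_schema, text)` entry point from the chain's moduledoc (model, schema, and passage are illustrative):

```elixir
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Chains.DataExtractionChain

# JSON schema describing the fields to extract from the passage.
schema = %{
  type: "object",
  properties: %{
    person_name: %{type: "string"},
    person_age: %{type: "number"}
  },
  required: []
}

llm = ChatOpenAI.new!(%{model: "gpt-3.5-turbo", temperature: 0})

# On success, info is a list of maps, one per entity found in the passage.
{:ok, info} =
  DataExtractionChain.run(llm, schema, "Alex is 5 feet tall. His mom Julia is 30 years old.")
```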