diff --git a/.env.example b/.env.example index f2b668f3..79394a6c 100644 --- a/.env.example +++ b/.env.example @@ -1,5 +1,3 @@ ANTHROPIC_API_KEY=... TAVILY_API_KEY=... -LANGCHAIN_TRACING_V2=true -LANGCHAIN_API_KEY=... OPENAI_API_KEY=... diff --git a/README.md b/README.md index 4e28a94f..caa8d761 100644 --- a/README.md +++ b/README.md @@ -1,246 +1,11 @@ -# `langgraph-api` Example +# LangGraph Cloud Example ![](static/agent_ui.png) -This is an example of how to use `langgraph-api` to stand up a REST API for your custom LangGraph StateGraph. This API can be used to interact with your StateGraph from any programming language that can make HTTP requests. +This is an example agent to deploy with LangGraph Cloud. [LangGraph](https://github.com/langchain-ai/langgraph) is a library for building stateful, multi-actor applications with LLMs. The main use cases for LangGraph are conversational agents; long-running, multi-step LLM applications; and any LLM application that would benefit from built-in support for persistent checkpoints, cycles, and human-in-the-loop interactions (i.e., LLM and human collaboration). -`langgraph-api` shortens the time-to-market for developers using LangGraph, with a one-liner command to start a production-ready HTTP microservice for your LangGraph applications, with built-in persistence. This lets you focus on the logic of your LangGraph graph, and leave the scaling and API design to us. The API is inspired by the OpenAI assistants API, and is designed to fit in alongside your existing services. +LangGraph Cloud shortens the time-to-market for developers using LangGraph, with a one-line command to start a production-ready HTTP microservice for your LangGraph applications, with built-in persistence. This lets you focus on the logic of your LangGraph graph and leave the scaling and API design to us. The API is inspired by the OpenAI Assistants API and is designed to fit in alongside your existing services. 
-## API Features - -It has the following features: - -- saved assistants, tracking config for your graphs -- saved threads, tracking state/conversation history -- human in the loop endpoints (interrupt a run, authorize nodes, get thread state, update thread state, get history of past thread states) -- streaming runs (with multiple stream formats, including token-by-token messages, state values and node updates) -- background runs (powered by a built-in task queue with exactly-once semantics, and FIFO ordering, with api for checking status and events, and support for completion webhooks) -- horizontally scalable, both the HTTP server and task queue are designed to run in many machines in parallel, with all state stored in Postgres -- "double texting" modes, fully configurable support to handle new input arriving while a thread still processing previous input, choose from these modes: reject, enqueue, cancel, rollback -- low latency, all interactions with the database have been optimized into a single roundtrip per endpoint, all database ops during runs are backgrounded and batched, and lots of other optimizations from our experience running high performance python services at scale - -We've designed it as a robust server you can run in production at high scale, and also easily test locally. - -## Quickstart - -This will cover how to get started with the example LangGraph application in this repo. -If you already have a LangGraph application and you want to deploy that (rather than this example LangGraph application) see the next section. - -The LangGraph agent we are deploying is a simple Anthropic agent with a single search tool. -You can see the full graph in `agent.py` - -You will need to have Docker running locally in order to use LangGraph Cloud. Download it [here](https://docs.docker.com/desktop/install/mac-install/), open the app, and ensure the Docker engine is running. 
- -Clone this repo and switch your active directory to the newly created one: - -```bash -git clone https://github.com/langchain-ai/langgraph-example.git -``` - -```bash -cd langgraph-example -``` - -Install the `langgraph-cli` package: - -```bash -pip install langgraph-cli -``` - -Create a `.env` file with the correct environment variables. - -```shell -cp .env.example .env -``` - -Go into `.env` file and add your credentials. -You will need an [Anthropic](https://console.anthropic.com/login?returnTo=%2F%3F), [Tavily](https://docs.tavily.com/), and [LangSmith](https://smith.langchain.com/) API keys. - -Then, run the following command to start the API server: - -```bash -langgraph up -``` - -This will start the API server on `http://localhost:8123`. -You can now interact with your StateGraph using the API or SDK. -For this example we will use the SDK, so let's go into a separate environment and install the SDK. - -### Python - -```shell -pip install langgraph-sdk -``` - -We can now interact with our deployed graph! - -```python -from langgraph_sdk import get_client - -client = get_client() - -# List all assistants -assistants = await client.assistants.search() - -# We auto-create an assistant for each graph you register in config. -agent = assistants[0] - -# Start a new thread -thread = await client.threads.create() - -# Start a streaming run -input = {"messages": [{"role": "human", "content": "whats the weather in la"}]} -async for chunk in client.runs.stream(thread['thread_id'], agent['assistant_id'], input=input): - print(chunk) -``` - -### JS/TS - -```bash -yarn add @langchain/langgraph-sdk -``` - -```js -import { Client } from "@langchain/langgraph-sdk"; - -const client = new Client(); - -// List all assistants -const assistants = await client.assistants.search({ - metadata: null, - offset: 0, - limit: 10, -}); - -// We auto-create an assistant for each graph you register in config. 
-const agent = assistants[0]; - -// Start a new thread -const thread = await client.threads.create(); - -// Start a streaming run -const messages = [{ role: "human", content: "whats the weather in la" }]; - -const streamResponse = client.runs.stream( - thread["thread_id"], - agent["assistant_id"], - { - input: { messages }, - }, -); - -for await (const chunk of streamResponse) { - console.log(chunk); -} -``` - -There we go! Up and running. -There's still a lot left to learn. - -For more examples of how to interact with the API once it is deployed using the SDK, see the example notebooks in [notebooks](./examples/python/notebooks) - -For an explanation of how the deployment works and how to deploy a custom graph, see the section below. - -## Deploy a custom agent - -The quickstart walked through deploying a simple agent. But what if you want to deploy it for your custom agent? - -### Build your agent - -First: build your agent with LangGraph. See LangGraph documentation [here](https://github.com/langchain-ai/langgraph) for references and examples. - -### Define `langgraph.json` - -Now we will define our `langgraph.json` file. This configuration has three parts: - -#### `graphs` - -In the graphs mapping, the key is the graph_id and the value is the path to the agent (a StateGraph). -The graph_id is used in the API when creating an assistant. -You can declare multiple graphs. - -In the example, we had: - -```json - "graphs": { - "agent": "./agent.py:graph" - }, -``` - -This meant that we were defining an agent with graph_id `agent` and the path was in the `agent.py` file with a variable called `graph`. - -#### `dependencies` - -You can declare local and external python dependencies (which will be installed with pip) here. - -In the example, we had: - -```json - "dependencies": ["."], -``` - -This meant we installed this current directory as a dependency. -That includes anything in `requirements.txt` and any helper files here. 
- -You can also specify third party packages here. -For example, you could do something like: - -```json - "dependencies": [".", "wikipedia"], -``` - -This would install the current directory (and any requirements files located inside) as well as the `wikipedia` package. - -#### `env` - -This is a path to any environment variables/files to load. - -In our example we had: - -```json - "env": ".env" -``` - -This meant we loaded the environment variables in the `.env` file - -### Launch the LangGraph agent - -We can now use our CLI to launch the LangGraph agent. - -First, we need to install it. We can do this with: - -``` -pip install langgraph-cli -``` - -Once installed, we can then launch the service with: - -```shell -langgraph up -``` - -There are a few extra commands for additional control. For a full list, run `langgraph up --help` - -### Add custom services - -`langgraph up` spins up the LangGraph agent using Docker Compose. If you want to launch other services as part of the same project, you can use the `-d` flag to pass an additional docker compose file to be merged into the same project. - -For instance, if you create a docker compose file at `compose.yml` you can then run `langgraph up -d compose.yml` to spin up both the LangGraph services as well as your custom services. - -## API Reference - -The API reference is available at `http://localhost:8123/docs` when running locally. You can preview it here: [API Reference](https://langchain-ai.github.io/langgraph-example/). - -## Server configuration - -To configure throughput you can use the env vars N_WORKERS (default 2) and N_JOBS_PER_WORKER (default 5). -Throughput for background runs is the product of the two, so by default at most 10 runs can be running at any one time. - -## UI - -Part of LangGraph API includes a UI for interacting with created agents. -After running `langgraph up` you can access this UI by going to [http://localhost:8124](http://localhost:8124). 
You will be taken to an interactive playground whereby you can visualize and then interact with the agent. - -![](static/agent_ui.png) +To deploy this agent to LangGraph Cloud, first fork this repo. Then follow the instructions [here](https://langchain-ai.github.io/langgraph/cloud/) to deploy it to LangGraph Cloud. diff --git a/examples/js/.gitignore b/examples/js/.gitignore deleted file mode 100644 index 254e3d4b..00000000 --- a/examples/js/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -yarn.lock -node_modules \ No newline at end of file diff --git a/examples/js/backgroundRun.ts b/examples/js/backgroundRun.ts deleted file mode 100644 index 51ae7d09..00000000 --- a/examples/js/backgroundRun.ts +++ /dev/null @@ -1,74 +0,0 @@ -// How to kick off background runs -// This guide covers how to kick off background runs for your agent. This can be useful for long running jobs. -import { Client } from "@langchain/langgraph-sdk"; - -async function main() { - // Initialize the client - const client = new Client(); - - // List available assistants - const assistants = await client.assistants.search(); - console.log("List available assistants", assistants); - - // Get the first assistant, we will use this one - const assistant = assistants[0]; - console.log("Get first assistant", assistant); - - // Create a new thread - const thread = await client.threads.create(); - console.log("Create new thread", thread); - - // If we list runs on this thread, we can see it is empty - const runs = await client.runs.list(thread.thread_id); - console.log("List runs on the thread", runs); - - // Let's kick off a run - const input = { - messages: [{ role: "human", content: "whats the weather in sf" }], - }; - const run = await client.runs.create( - thread.thread_id, - assistant.assistant_id, - { input } - ); - console.log("Create a single run", run); - - // The first time we poll it, we can see `status=pending` - console.log( - "Poll a single run, status=pending", 
- await client.runs.get(thread.thread_id, run.run_id) - ); - - // We can list events for the run - console.log( - "List all events for the run", - await client.runs.listEvents(thread.thread_id, run.run_id) - ); - - // Eventually, it should finish and we should see `status=success` - let finalRunStatus = await client.runs.get(thread.thread_id, run.run_id); - while (finalRunStatus.status !== "success") { - await new Promise((resolve) => setTimeout(resolve, 1000)); // Polling every second - finalRunStatus = await client.runs.get(thread.thread_id, run.run_id); - } - console.log( - "Final run status", - await client.runs.get(thread.thread_id, run.run_id) - ); - - // We can get the final results - const results = await client.runs.listEvents(thread.thread_id, run.run_id); - - // The results are sorted by time, so the most recent (final) step is the 0 index - const finalResult = results[0]; - console.log("Final result", finalResult); - - // We can get the content of the final message - const finalMessages = (finalResult.data as Record)["output"][ - "messages" - ]; - const finalMessageContent = finalMessages[finalMessages.length - 1].content; - console.log("Final message content", finalMessageContent); -} - -main(); diff --git a/examples/js/configuration.ts b/examples/js/configuration.ts deleted file mode 100644 index 946cd4b8..00000000 --- a/examples/js/configuration.ts +++ /dev/null @@ -1,70 +0,0 @@ -// How to create agents with configuration -import { Client } from "@langchain/langgraph-sdk"; - -/* -One of the benefits of LangGraph API is that it lets you create agents with different configurations. 
-This is useful when you want to: - -- Define a cognitive architecture once as a LangGraph -- Let that LangGraph be configurable across some attributes (for example, system message or LLM to use) -- Let users create agents with arbitrary configurations, save them, and then use them in the future - -In this guide we will show how to do that for the default agent we have built in. - -If you look at the agent we defined, you can see that inside the `call_model` node we have created the model based on some configuration. That node looks like: - -```python -def call_model(state, config): - messages = state["messages"] - model_name = config.get('configurable', {}).get("model_name", "anthropic") - model = _get_model(model_name) - response = model.invoke(messages) - # We return a list, because this will get added to the existing list - return {"messages": [response]} -``` - -We are looking inside the config for a `model_name` parameter (which defaults to `anthropic` if none is found). -That means that by default we are using Anthropic as our model provider. -In this example we will see an example of how to create an example agent that is configured to use OpenAI. - -We've also communicated to the graph that it should expect configuration with this key. 
-We've done this by passing `config_schema` when constructing the graph, eg: - -```python -class GraphConfig(TypedDict): - model_name: Literal["anthropic", "openai"] -*/ - -async function main() { - const client = new Client(); - const baseAssistant = await client.assistants.create({ - graphId: "agent", - }); - console.log("Assistant", baseAssistant); - /* We can now call `.get_schemas` to get schemas associated with this graph*/ - const schemas = await client.assistants.getSchemas(baseAssistant["assistant_id"]) - /* There are multiple types of schemas - We can get the `config_schema` to look at the the configurable parameters */ - // @ts-ignore - console.log("Schema with configurable parameters", schemas["config_schema"]["definitions"]["Configurable"]["properties"]) - const assistant = await client.assistants.create({ - graphId: "agent", - config: { configurable: { model_name: "openai" } }, - }); - // We can see that this assistant has saved the config - console.log("Assistant with config", assistant); - const thread = await client.threads.create(); - const input = { messages: [{ role: "user", content: "who made you?" }] }; - - for await (const event of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { input } - )) { - console.log(`Receiving new event of type: ${event.event}...`); - console.log(JSON.stringify(event.data)); - console.log("\n\n"); - } -} - -main(); diff --git a/examples/js/doubleTexting.ts b/examples/js/doubleTexting.ts deleted file mode 100644 index 91aaac8f..00000000 --- a/examples/js/doubleTexting.ts +++ /dev/null @@ -1,153 +0,0 @@ -// How to handle "double-texting" or concurrent runs in your graph - -/* -You might want to start a new run on a thread while the previous run still haven't finished. We call this "double-texting" or multi-tasking. - -There are several strategies for handling this: - -- `reject`: Reject the new run. -- `interrupt`: Interrupt the current run, keeping steps completed until now, and start a new one. 
-- `rollback`: Cancel and delete the existing run, rolling back the thread to the state before it had started, then start the new run. -- `enqueue`: Queue up the new run to start after the current run finishes. -*/ - -import { Client } from "@langchain/langgraph-sdk"; - -const sleep = async (ms: number) => - await new Promise((resolve) => setTimeout(resolve, ms)); - -async function main() { - const client = new Client(); - const assistant = await client.assistants.create({ - graphId: "agent", - }); - - // REJECT - console.log("\nREJECT demo\n"); - let thread = await client.threads.create(); - let run = await client.runs.create( - thread["thread_id"], - assistant["assistant_id"], - { - input: { - messages: [{ role: "human", content: "whats the weather in sf?" }], - }, - }, - ); - - // attempt a new run (will be rejected) - await client.runs.create(thread["thread_id"], assistant["assistant_id"], { - input: { - messages: [{ role: "human", content: "whats the weather in nyc?" }], - }, - multitaskStrategy: "reject", - }); - - await client.runs.join(thread["thread_id"], run["run_id"]); - - // We can verify that the original thread finished executing: - let state = await client.threads.getState(thread["thread_id"]); - console.log("Messages", state["values"]["messages"]); - - // INTERRUPT - console.log("\nINTERRUPT demo\n"); - thread = await client.threads.create(); - const interruptedRun = await client.runs.create( - thread["thread_id"], - assistant["assistant_id"], - { - input: { - messages: [{ role: "human", content: "whats the weather in sf?" }], - }, - }, - ); - await sleep(2000); - run = await client.runs.create( - thread["thread_id"], - assistant["assistant_id"], - { - input: { - messages: [{ role: "human", content: "whats the weather in nyc?" 
}], - }, - multitaskStrategy: "interrupt", - }, - ); - await client.runs.join(thread["thread_id"], run["run_id"]); - - // We can see that the thread has partial data from the first run + data from the second run - state = await client.threads.getState(thread["thread_id"]); - console.log("Messages", state["values"]["messages"]); - - // Verify that the original, canceled run was interrupted - console.log( - "Interrupted run status", - (await client.runs.get(thread["thread_id"], interruptedRun["run_id"]))[ - "status" - ], - ); - - // ROLLBACK - console.log("\nROLLBACK demo\n"); - thread = await client.threads.create(); - const rolledBackRun = await client.runs.create( - thread["thread_id"], - assistant["assistant_id"], - { - input: { - messages: [{ role: "human", content: "whats the weather in sf?" }], - }, - }, - ); - await sleep(2000); - run = await client.runs.create( - thread["thread_id"], - assistant["assistant_id"], - { - input: { - messages: [{ role: "human", content: "whats the weather in nyc?" }], - }, - multitaskStrategy: "rollback", - }, - ); - - await client.runs.join(thread["thread_id"], run["run_id"]); - - // We can see that the thread only has data from the second run - state = await client.threads.getState(thread["thread_id"]); - console.log("Messages", state["values"]["messages"]); - - // Verify that the original, rolled back run was deleted - try { - await client.runs.get(thread["thread_id"], rolledBackRun["run_id"]); - } catch (e) { - console.log("Original run was deleted", e); - } - - // ENQUEUE - console.log("\nENQUEUE demo\n"); - thread = await client.threads.create(); - await client.runs.create(thread["thread_id"], assistant["assistant_id"], { - input: { - messages: [{ role: "human", content: "whats the weather in sf?" }], - sleep: 5, - }, - }); - await sleep(500); - const secondRun = await client.runs.create( - thread["thread_id"], - assistant["assistant_id"], - { - input: { - messages: [{ role: "human", content: "whats the weather in nyc?" 
}], - }, - multitaskStrategy: "enqueue", - }, - ); - await client.runs.join(thread["thread_id"], secondRun["run_id"]); - - // Verify that the thread has data from both runs - state = await client.threads.getState(thread["thread_id"]); - console.log("Combined messages", state["values"]["messages"]); -} - -main(); diff --git a/examples/js/humanInTheLoop.ts b/examples/js/humanInTheLoop.ts deleted file mode 100644 index c1dd9838..00000000 --- a/examples/js/humanInTheLoop.ts +++ /dev/null @@ -1,147 +0,0 @@ -import { Client } from "@langchain/langgraph-sdk"; - -async function main() { - const client = new Client(); - - const assistant = await client.assistants.create({ graphId: "agent" }); - console.log("Assistant", assistant); - - // Approve a tool call - const thread = await client.threads.create(); - console.log("Thread", thread); - - let runs = await client.runs.list(thread.thread_id); - console.log("Runs", runs); - - // We now want to add a human-in-the-loop step before a tool is called. - // We can do this by adding `interruptBefore=["action"]`, which tells us to interrupt before calling the action node. - // We can do this either when compiling the graph or when kicking off a run. - // Here we will do it when kicking of a run. - let input = { - messages: [{ role: "human", content: "whats the weather in sf" }], - }; - for await (const chunk of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { - input, - streamMode: "updates", - interruptBefore: ["action"], - } - )) { - console.log(`Receiving new event of type: ${chunk.event}...`); - console.log(JSON.stringify(chunk.data)); - console.log("\n\n"); - } - - // We can now kick off a new run on the same thread with `None` as the input in order to just continue the existing thread. 
- for await (const chunk of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { - input: null, - streamMode: "updates", - interruptBefore: ["action"], - } - )) { - console.log(`Receiving new event of type: ${chunk.event}...`); - console.log(JSON.stringify(chunk.data)); - console.log("\n\n"); - } - - // Edit a tool call - - // What if we want to edit the tool call? - // We can also do that. - // Let's kick off another run, with the same `interruptBefore=['action']` - input = { - messages: [{ role: "human", content: "whats the weather in la?" }], - }; - for await (const chunk of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { - input: input, - streamMode: "updates", - interruptBefore: ["action"], - } - )) { - console.log(`Receiving new event of type: ${chunk.event}...`); - console.log(JSON.stringify(chunk.data)); - console.log("\n\n"); - } - - // Inspect and modify the state of the thread - let threadState = await client.threads.getState(thread.thread_id); - // Let's get the last message of the thread - this is the one we want to update - let lastMessage = threadState.values["messages"].slice(-1)[0]; - - // Let's now modify the tool call to say Louisiana - lastMessage.tool_calls = [ - { - name: "tavily_search_results_json", - args: { query: "weather in Louisiana" }, - id: lastMessage.tool_calls[0].id, - }, - ]; - - await client.threads.updateState(thread.thread_id, { - values: { messages: [lastMessage] }, - }); - - // Check the updated state - threadState = await client.threads.getState(thread.thread_id); - console.log( - "Updated tool calls", - threadState.values["messages"].slice(-1)[0].tool_calls - ); - - // Great! We changed it. If we now resume execution (by kicking off a new run with null inputs on the same thread) it should use that new tool call. 
- for await (const chunk of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { - input: null, - streamMode: "updates", - interruptBefore: ["action"], - } - )) { - console.log(`Receiving new event of type: ${chunk.event}...`); - console.log(JSON.stringify(chunk.data)); - console.log("\n\n"); - } - - // Edit an old state - - // Let's now imagine we want to go back in time and edit the tool call after we had already made it. - // In order to do this, we can get first get the full history of the thread - const threadHistory = await client.threads.getHistory(thread.thread_id, { - limit: 100, - }); - console.log("History length", threadHistory.length); - - // After that, we can get the correct state we want to be in. The 0th index state is the most recent one, while the -1 index state is the first. - // In this case, we want to go to the state where the last message had the tool calls for `weather in los angeles` - const rewindState = threadHistory[3]; - console.log( - "Rewind state tools calls", - rewindState.values["messages"].slice(-1)[0].tool_calls - ); - - for await (const chunk of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { - input: null, - streamMode: "updates", - interruptBefore: ["action"], - config: rewindState.config, - } - )) { - console.log(`Receiving new event of type: ${chunk.event}...`); - console.log(JSON.stringify(chunk.data)); - console.log("\n\n"); - } -} - -main(); diff --git a/examples/js/package.json b/examples/js/package.json deleted file mode 100644 index cd507532..00000000 --- a/examples/js/package.json +++ /dev/null @@ -1,11 +0,0 @@ -{ - "name": "js", - "description": "Examples of LangGraph SDK", - "private": true, - "version": "1.0.0", - "license": "MIT", - "dependencies": { - "@langchain/langgraph-sdk": "*" - }, - "type": "module" -} diff --git a/examples/js/sameThread.ts b/examples/js/sameThread.ts deleted file mode 100644 index d4554df2..00000000 --- a/examples/js/sameThread.ts +++ /dev/null @@ 
-1,45 +0,0 @@ -// How to run multiple agents on the same thread -import { Client } from "@langchain/langgraph-sdk"; - -/* -In LangGraph API, a thread is not explicitly associated with a particular agent. -This means that you can run multiple agents on the same thread. -In this example, we will create two agents and then call them both on the same thread. -*/ - -async function main() { - const client = new Client(); - // const openaiAssistant = await client.assistants.create({ graphId: "agent", config: { configurable: { model_name: "openai" } }}); - const defaultAssistant = await client.assistants.create({ graphId: "agent" }); - const openaiAssistant = await client.assistants.create({ - graphId: "agent", - config: { configurable: { model_name: "openai" } }, - }); - - console.log("Default assistant", defaultAssistant); - // We can see that this assistant has saved the config - console.log("OpenAI assistant", openaiAssistant); - const thread = await client.threads.create(); - let input = { messages: [{ role: "user", content: "who made you?" }] }; - - for await (const event of client.runs.stream( - thread.thread_id, - openaiAssistant.assistant_id, - { input, streamMode: "updates" } - )) { - console.log(JSON.stringify(event.data)); - console.log("\n\n"); - } - - input = { messages: [{ role: "user", content: "and you?" }] }; - for await (const event of client.runs.stream( - thread.thread_id, - defaultAssistant.assistant_id, - { input, streamMode: "updates" } - )) { - console.log(JSON.stringify(event.data)); - console.log("\n\n"); - } -} - -main(); diff --git a/examples/js/streamMessages.ts b/examples/js/streamMessages.ts deleted file mode 100644 index d9dfcc2f..00000000 --- a/examples/js/streamMessages.ts +++ /dev/null @@ -1,105 +0,0 @@ -// How to stream messages from your graph -import { Client } from "@langchain/langgraph-sdk"; - -/* -There are multiple different streaming modes. - -- values: This streaming mode streams back values of the graph. 
This is the full state of the graph after each node is called. -- updates: This streaming mode streams back updates to the graph. This is the update to the state of the graph after each node is called. -- messages: This streaming mode streams back messages - both complete messages (at the end of a node) as well as tokens for any messages generated inside a node. This mode is primarily meant for powering chat applications. - -This script covers streaming_mode="messages". - -In order to use this mode, the state of the graph you are interacting with MUST have a messages key that is a list of messages. Eg, the state should look something like: - -from typing import TypedDict, Annotated -from langgraph.graph import add_messages -from langchain_core.messages import AnyMessage - -class State(TypedDict): - messages: Annotated[list[AnyMessage], add_messages] - - OR it should be an instance or subclass of from langgraph.graph import MessageState (MessageState is just a helper type hint equivalent to the above). 
- -With stream_mode="messages" two things will be streamed back: - -It outputs messages produced by any chat model called inside (unless tagged in a special way) -It outputs messages returned from nodes (to allow for nodes to return ToolMessages and the like -*/ - -async function streamMessages() { - const client = new Client(); - const assistant = await client.assistants.create({ - graphId: "agent", - config: { configurable: { model_name: "openai" } }, - }); - console.log("Assistant", assistant); - - const thread = await client.threads.create(); - console.log("Thread", thread); - - const runs = await client.runs.list(thread.thread_id); - console.log("Runs", runs); - - // Helper function for formatting messages - function formatToolCalls(toolCalls) { - if (toolCalls && toolCalls.length > 0) { - const formattedCalls = toolCalls.map( - (call) => - `Tool Call ID: ${call.id}, Function: ${ - call.name - }, Arguments: ${JSON.stringify(call.args)}` - ); - return formattedCalls.join("\n"); - } - return "No tool calls"; - } - - const input = { - messages: [{ role: "user", content: "whats the weather in sf" }], - }; - - for await (const event of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { input, streamMode: "messages" } - )) { - if (event.event === "metadata") { - const data = event.data as Record; - console.log(`Metadata: Run ID - ${data["run_id"]}`); - } else if (event.event === "messages/partial") { - for (const dataItem of event.data as Record[]) { - if ("role" in dataItem && dataItem.role === "user") { - console.log(`Human: ${dataItem.content}`); - } else { - const toolCalls = dataItem.tool_calls || []; - const invalidToolCalls = dataItem.invalid_tool_calls || []; - const content = dataItem.content || ""; - const responseMetadata = dataItem.response_metadata || {}; - - if (content) { - console.log(`AI: ${content}`); - } - - if (toolCalls.length > 0) { - console.log("Tool Calls:"); - console.log(formatToolCalls(toolCalls)); - } - - if 
(invalidToolCalls.length > 0) { - console.log("Invalid Tool Calls:"); - console.log(formatToolCalls(invalidToolCalls)); - } - - if (responseMetadata) { - const finishReason = responseMetadata.finish_reason || "N/A"; - console.log(`Response Metadata: Finish Reason - ${finishReason}`); - } - } - } - console.log("-".repeat(50)); - } - } -} - -streamMessages(); diff --git a/examples/js/streamUpdates.ts b/examples/js/streamUpdates.ts deleted file mode 100644 index 57090b2f..00000000 --- a/examples/js/streamUpdates.ts +++ /dev/null @@ -1,43 +0,0 @@ -// How to stream updates from your graph -import { Client } from "@langchain/langgraph-sdk"; - -/* -There are multiple different streaming modes. - -- values: This streaming mode streams back values of the graph. This is the full state of the graph after each node is called. -- updates: This streaming mode streams back updates to the graph. This is the update to the state of the graph after each node is called. -- messages: This streaming mode streams back messages - both complete messages (at the end of a node) as well as tokens for any messages generated inside a node. This mode is primarily meant for powering chat applications. - -This script covers streaming_mode="updates". 
-*/ - -async function streamUpdates() { - const client = new Client(); - const assistant = await client.assistants.create({ - graphId: "agent", - config: { configurable: { model_name: "openai" } }, - }); - console.log("Assistant", assistant); - - const thread = await client.threads.create(); - console.log("Thread", thread); - - const runs = await client.runs.list(thread.thread_id); - console.log("Runs", runs); - - const input = { - messages: [{ role: "user", content: "whats the weather in sf" }], - }; - - for await (const event of client.runs.stream( - thread.thread_id, - assistant.assistant_id, - { input, streamMode: "updates" } - )) { - console.log(`Receiving new event of type: ${event.event}...`); - console.log(JSON.stringify(event.data)); - console.log("\n\n"); - } -} - -streamUpdates(); diff --git a/examples/js/streamValues.ts b/examples/js/streamValues.ts deleted file mode 100644 index c4af4952..00000000 --- a/examples/js/streamValues.ts +++ /dev/null @@ -1,43 +0,0 @@ -// How to stream values from your graph -import { Client } from "@langchain/langgraph-sdk"; - -/* -There are multiple different streaming modes. - -- values: This streaming mode streams back values of the graph. This is the full state of the graph after each node is called. -- updates: This streaming mode streams back updates to the graph. This is the update to the state of the graph after each node is called. -- messages: This streaming mode streams back messages - both complete messages (at the end of a node) as well as tokens for any messages generated inside a node. This mode is primarily meant for powering chat applications. - -This script covers streaming_mode="values". 
-*/
-
-async function streamValues() {
-  const client = new Client();
-  const assistant = await client.assistants.create({
-    graphId: "agent",
-    config: { configurable: { model_name: "openai" } },
-  });
-  console.log("Assistant", assistant);
-
-  const thread = await client.threads.create();
-  console.log("Thread", thread);
-
-  const runs = await client.runs.list(thread.thread_id);
-  console.log("Runs", runs);
-
-  const input = {
-    messages: [{ role: "user", content: "whats the weather in sf" }],
-  };
-
-  for await (const event of client.runs.stream(
-    thread.thread_id,
-    assistant.assistant_id,
-    { input } // "values" is the default stream mode, so none is passed
-  )) {
-    console.log(`Receiving new event of type: ${event.event}...`);
-    console.log(JSON.stringify(event.data));
-    console.log("\n\n");
-  }
-}
-
-streamValues();
diff --git a/examples/python/notebooks/README.md b/examples/python/notebooks/README.md
deleted file mode 100644
index 78d1be78..00000000
--- a/examples/python/notebooks/README.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# How-to guides
-
-These are how-to guides for interacting with the deployed agents.
-
-
-## Streaming
-
-When you deploy a graph with LangGraph API, you can stream results from a run in a few different ways.
-
-- `values`: This streaming mode streams back values of the graph. This is the **full state of the graph** after each node is called.
-- `updates`: This streaming mode streams back updates to the graph. This is the **update to the state of the graph** after each node is called.
-- `messages`: This streaming mode streams back messages - both complete messages (at the end of a node) as well as **tokens** for any messages generated inside a node. This mode is primarily meant for powering chat applications.
-
-See the following guides for how to use the different streaming modes.
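The difference between `values` and `updates` is easy to hold onto with a toy reducer: every node emits a delta, and the two modes differ only in whether each event carries that delta or the accumulated state. The following is a self-contained Python sketch of those semantics, not the SDK itself; the node names and message strings are made up for illustration.

```python
# Toy illustration of "values" vs. "updates" streaming: each node emits a
# delta; "updates" yields just that delta, keyed by node name, while
# "values" yields the full accumulated state after applying it.

def stream_run(node_outputs, mode="values"):
    state = {"messages": []}
    for node, update in node_outputs:
        state["messages"].extend(update["messages"])  # apply the node's delta
        if mode == "updates":
            yield {node: update}                          # just the delta
        else:
            yield {"messages": list(state["messages"])}   # full state snapshot

# A pretend two-node run: the agent issues a tool call, the action node answers.
run = [
    ("agent", {"messages": ["tool call: tavily_search"]}),
    ("action", {"messages": ["tool result: 55F, overcast"]}),
]

print(list(stream_run(run, mode="updates")))
print(list(stream_run(run, mode="values")))
```

Note how the final `values` event contains everything the `updates` events contained, which is why `updates` is the cheaper mode when you only care about what changed.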
-
-
-- [How to stream values](./stream_values.ipynb)
-- [How to stream updates](./stream_updates.ipynb)
-- [How to stream messages](./stream_messages.ipynb)
-
-## Background runs
-
-When you deploy a graph with LangGraph API, you can also kick off background runs and poll for their status.
-This can be useful when a run is particularly long-running.
-
-- [How to kick off a background run](./background_run.ipynb)
-
-## Human-in-the-loop
-
-When you deploy a graph with LangGraph API, it is deployed with a persistence layer.
-This enables easy human-in-the-loop interactions, such as approving or editing a tool call, or rewinding to a previous state, modifying it, and resuming execution from there.
-
-- [How to have a human in the loop](./human-in-the-loop.ipynb)
-
-## Configuration
-
-One of the benefits of LangGraph API is that it lets you create agents with different configurations.
-This is useful when you want to:
-
-1. Define a cognitive architecture once as a LangGraph
-2. Let that LangGraph be configurable across some attributes (for example, system message or LLM to use)
-3. Let users create agents with arbitrary configurations, save them, and then use them in the future
-
-In this guide we will show how to do that for the built-in default agent.
-
-## Multiple Agents, Same Thread
-
-In LangGraph API, a thread is not explicitly associated with a particular agent.
-This means that you can run multiple agents on the same thread.
-In this example, we will create two agents and then call them both on the same thread.
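Since a thread is just persisted state identified by a `thread_id`, "multiple agents, same thread" amounts to two differently-configured assistants appending to one shared message history. A toy sketch of that idea (the class, function, and replies here are hypothetical stand-ins, not the SDK):

```python
# Toy sketch: a thread is shared, persisted state, not owned by any one
# assistant, so two assistants can take turns on the same conversation.

class Thread:
    def __init__(self):
        self.messages = []  # shared history, persisted per thread_id

def run_on_thread(thread, assistant_name, user_input):
    thread.messages.append({"role": "user", "content": user_input})
    # a real assistant would call its configured LLM on the full history
    reply = f"{assistant_name} saw {len(thread.messages)} message(s)"
    thread.messages.append({"role": "assistant", "content": reply})
    return reply

t = Thread()
run_on_thread(t, "openai_assistant", "whats the weather in sf")
run_on_thread(t, "anthropic_assistant", "and in nyc?")
# the second assistant sees the first assistant's turn, because the
# history lives on the thread rather than on either assistant
```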
- -- [How to run multiple agents on the same thread](./same-thread.ipynb) \ No newline at end of file diff --git a/examples/python/notebooks/background_run.ipynb b/examples/python/notebooks/background_run.ipynb deleted file mode 100644 index a43bff2a..00000000 --- a/examples/python/notebooks/background_run.ipynb +++ /dev/null @@ -1,388 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "51466c8d-8ce4-4b3d-be4e-18fdbeda5f53", - "metadata": {}, - "source": [ - "# How to kick off background runs\n", - "\n", - "This guide covers how to kick off background runs for your agent.\n", - "This can be useful for long running jobs." - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "b8e6408a-b37e-428f-9567-077fa55d58e8", - "metadata": {}, - "outputs": [], - "source": [ - "# Initialize the client\n", - "from langgraph_sdk import get_client\n", - "\n", - "client = get_client()" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "4947e9bc-111f-4991-8c41-1041da9bf0ba", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'assistant_id': 'e90fee30-be91-43aa-a33c-d54bd219072e',\n", - " 'graph_id': 'agent',\n", - " 'created_at': '2024-06-18T18:06:55.102231+00:00',\n", - " 'updated_at': '2024-06-18T18:06:55.102231+00:00',\n", - " 'config': {'configurable': {'model_name': 'anthropic'}},\n", - " 'metadata': {}}" - ] - }, - "execution_count": 2, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# List available assistants\n", - "assistants = await client.assistants.search()\n", - "assistants[0]" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "230c0464-a6e5-420f-9e38-ca514e5634ce", - "metadata": {}, - "outputs": [], - "source": [ - "# NOTE: we can use `assistant_id` UUID from the above response, or just pass graph ID instead when creating runs. 
we'll use graph ID here\n", - "assistant_id = \"agent\"" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "56aa5159-5583-4134-9210-709b969bda6f", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'created_at': '2024-06-21T14:58:02.079462+00:00',\n", - " 'updated_at': '2024-06-21T14:58:02.079462+00:00',\n", - " 'metadata': {}}" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Create a new thread\n", - "thread = await client.threads.create()\n", - "thread" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "147c3f98-f889-4f05-a090-6b31f2a0b291", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[]" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# If we list runs on this thread, we can see it is empty\n", - "runs = await client.runs.list(thread['thread_id'])\n", - "runs" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "8c7b44ef-4816-496d-88a1-2f7327cf576d", - "metadata": {}, - "outputs": [], - "source": [ - "# Let's kick off a run\n", - "input = {\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in sf\"}]}\n", - "run = await client.runs.create(thread['thread_id'], assistant_id, input=input)" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "d84b4d80-b0aa-4d9f-a05d-0744b2fe8f72", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'run_id': '1ef2fdea-814c-6165-8b2a-a40e2a028198',\n", - " 'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca',\n", - " 'created_at': '2024-06-21T14:58:02.095911+00:00',\n", - " 'updated_at': '2024-06-21T14:58:02.095911+00:00',\n", - " 'metadata': {},\n", - " 'status': 'pending',\n", - " 'kwargs': {'input': {'messages': [{'role': 'human',\n", 
- " 'content': 'whats the weather in sf'}]},\n", - " 'config': {'metadata': {'created_by': 'system'},\n", - " 'configurable': {'run_id': '1ef2fdea-814c-6165-8b2a-a40e2a028198',\n", - " 'user_id': '',\n", - " 'graph_id': 'agent',\n", - " 'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'thread_ts': None,\n", - " 'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca'}},\n", - " 'webhook': None,\n", - " 'temporary': False,\n", - " 'stream_mode': ['events'],\n", - " 'feedback_keys': None,\n", - " 'interrupt_after': None,\n", - " 'interrupt_before': None},\n", - " 'multitask_strategy': 'reject'}" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# The first time we poll it, we can see `status=pending`\n", - "await client.runs.get(thread['thread_id'], run['run_id'])" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "3639da3c-bfe5-454c-ab1e-8ed7af394dfe", - "metadata": {}, - "outputs": [], - "source": [ - "# Wait until the run finishes\n", - "await client.runs.join(thread['thread_id'], run['run_id'])" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "8fa206ed-515e-4607-9a80-bebafe76cc24", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'run_id': '1ef2fdea-814c-6165-8b2a-a40e2a028198',\n", - " 'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca',\n", - " 'created_at': '2024-06-21T14:58:02.095911+00:00',\n", - " 'updated_at': '2024-06-21T14:58:02.095911+00:00',\n", - " 'metadata': {},\n", - " 'status': 'success',\n", - " 'kwargs': {'input': {'messages': [{'role': 'human',\n", - " 'content': 'whats the weather in sf'}]},\n", - " 'config': {'metadata': {'created_by': 'system'},\n", - " 'configurable': {'run_id': '1ef2fdea-814c-6165-8b2a-a40e2a028198',\n", - " 'user_id': '',\n", - " 'graph_id': 'agent',\n", - " 'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'thread_ts': 
None,\n", - " 'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca'}},\n", - " 'webhook': None,\n", - " 'temporary': False,\n", - " 'stream_mode': ['events'],\n", - " 'feedback_keys': None,\n", - " 'interrupt_after': None,\n", - " 'interrupt_before': None},\n", - " 'multitask_strategy': 'reject'}" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# Eventually, it should finish and we should see `status=success`\n", - "await client.runs.get(thread['thread_id'], run['run_id'])" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "id": "8de4495f-7873-487c-b1a8-ad2a78a1ff35", - "metadata": {}, - "outputs": [], - "source": [ - "# We can get the final results\n", - "final_result = await client.threads.get_state(thread['thread_id'])" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "id": "9da76fce-66e4-4f1b-8c24-09759889e50e", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'values': {'messages': [{'content': 'whats the weather in sf',\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'human',\n", - " 'name': None,\n", - " 'id': 'bfe07fff-cb40-40be-84d5-a061d2c40006',\n", - " 'example': False},\n", - " {'content': [{'id': 'toolu_01QUzhhfDQkpbPSediUrXvQb',\n", - " 'input': {'query': 'weather in san francisco'},\n", - " 'name': 'tavily_search_results_json',\n", - " 'type': 'tool_use'}],\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'ai',\n", - " 'name': None,\n", - " 'id': 'run-6d8665ca-a77d-4b44-9a7b-4e975b155fb1',\n", - " 'example': False,\n", - " 'tool_calls': [{'name': 'tavily_search_results_json',\n", - " 'args': {'query': 'weather in san francisco'},\n", - " 'id': 'toolu_01QUzhhfDQkpbPSediUrXvQb'}],\n", - " 'invalid_tool_calls': [],\n", - " 'usage_metadata': None},\n", - " {'content': '[{\"url\": \"https://www.timeanddate.com/weather/usa/san-francisco/historic\", \"content\": \"San Francisco 
Weather History for the Previous 24 Hours Show weather for: Previous 24 hours June 17, 2024 June 16, 2024 June 15, 2024 June 14, 2024 June 13, 2024 June 12, 2024 June 11, 2024 June 10, 2024 June 9, 2024 June 8, 2024 June 7, 2024 June 6, 2024 June 5, 2024 June 4, 2024 June 3, 2024 June 2, 2024\"}]',\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'tool',\n", - " 'name': 'tavily_search_results_json',\n", - " 'id': '257a1f29-2f66-4f9e-b35d-c8818dbbaa3f',\n", - " 'tool_call_id': 'toolu_01QUzhhfDQkpbPSediUrXvQb'},\n", - " {'content': [{'text': 'The search results provide historic weather data for San Francisco, but do not give the current weather conditions. To get the current weather forecast for San Francisco, I would need to refine my search query. Here is an updated search:',\n", - " 'type': 'text'},\n", - " {'id': 'toolu_01RLJEcWYRvRoBhiHdrhoRZx',\n", - " 'input': {'query': 'san francisco weather forecast today'},\n", - " 'name': 'tavily_search_results_json',\n", - " 'type': 'tool_use'}],\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'ai',\n", - " 'name': None,\n", - " 'id': 'run-ca41dbf8-7e89-4ff2-a245-87098d7928ba',\n", - " 'example': False,\n", - " 'tool_calls': [{'name': 'tavily_search_results_json',\n", - " 'args': {'query': 'san francisco weather forecast today'},\n", - " 'id': 'toolu_01RLJEcWYRvRoBhiHdrhoRZx'}],\n", - " 'invalid_tool_calls': [],\n", - " 'usage_metadata': None},\n", - " {'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'San Francisco\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 37.78, \\'lon\\': -122.42, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1718981382, \\'localtime\\': \\'2024-06-21 7:49\\'}, \\'current\\': {\\'last_updated_epoch\\': 1718981100, \\'last_updated\\': \\'2024-06-21 07:45\\', \\'temp_c\\': 12.8, \\'temp_f\\': 55.0, \\'is_day\\': 1, 
\\'condition\\': {\\'text\\': \\'Overcast\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/122.png\\', \\'code\\': 1009}, \\'wind_mph\\': 6.9, \\'wind_kph\\': 11.2, \\'wind_degree\\': 200, \\'wind_dir\\': \\'SSW\\', \\'pressure_mb\\': 1011.0, \\'pressure_in\\': 29.84, \\'precip_mm\\': 0.01, \\'precip_in\\': 0.0, \\'humidity\\': 86, \\'cloud\\': 100, \\'feelslike_c\\': 12.2, \\'feelslike_f\\': 53.9, \\'windchill_c\\': 11.2, \\'windchill_f\\': 52.1, \\'heatindex_c\\': 12.0, \\'heatindex_f\\': 53.5, \\'dewpoint_c\\': 9.4, \\'dewpoint_f\\': 48.8, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 3.0, \\'gust_mph\\': 7.6, \\'gust_kph\\': 12.2}}\"}]',\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'tool',\n", - " 'name': 'tavily_search_results_json',\n", - " 'id': 'c80a3720-6a9f-4ff0-9ce2-6112e66a6f81',\n", - " 'tool_call_id': 'toolu_01RLJEcWYRvRoBhiHdrhoRZx'},\n", - " {'content': 'The updated search provides the current weather forecast for San Francisco. According to the results, as of 7:49am on June 21, 2024 in San Francisco, the temperature is 55°F (12.8°C), it is overcast with 100% cloud cover, and there are light winds from the south-southwest around 7 mph (11 km/h). The forecast also shows low precipitation of 0.01 mm, high humidity of 86%, and visibility of 9 miles (16 km).\\n\\nIn summary, the current weather in San Francisco is cool, overcast, and breezy based on this weather forecast data. 
Let me know if you need any other details!',\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'ai',\n", - " 'name': None,\n", - " 'id': 'run-4f23b53d-a8ec-4038-b3ed-08b2560bf81c',\n", - " 'example': False,\n", - " 'tool_calls': [],\n", - " 'invalid_tool_calls': [],\n", - " 'usage_metadata': None}]},\n", - " 'next': [],\n", - " 'config': {'configurable': {'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'thread_ts': '1ef2fdea-f879-65a5-8005-443b6a4039aa'}},\n", - " 'metadata': {'step': 5,\n", - " 'run_id': '1ef2fdea-814c-6165-8b2a-a40e2a028198',\n", - " 'source': 'loop',\n", - " 'writes': {'agent': {'messages': [{'id': 'run-4f23b53d-a8ec-4038-b3ed-08b2560bf81c',\n", - " 'name': None,\n", - " 'type': 'ai',\n", - " 'content': 'The updated search provides the current weather forecast for San Francisco. According to the results, as of 7:49am on June 21, 2024 in San Francisco, the temperature is 55°F (12.8°C), it is overcast with 100% cloud cover, and there are light winds from the south-southwest around 7 mph (11 km/h). The forecast also shows low precipitation of 0.01 mm, high humidity of 86%, and visibility of 9 miles (16 km).\\n\\nIn summary, the current weather in San Francisco is cool, overcast, and breezy based on this weather forecast data. 
Let me know if you need any other details!',\n", - " 'example': False,\n", - " 'tool_calls': [],\n", - " 'usage_metadata': None,\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'invalid_tool_calls': []}]}},\n", - " 'user_id': '',\n", - " 'graph_id': 'agent',\n", - " 'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'created_by': 'system',\n", - " 'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca'},\n", - " 'created_at': '2024-06-21T14:58:14.591805+00:00',\n", - " 'parent_config': {'configurable': {'thread_id': '5fc20631-47b7-48cd-8aa2-9f2eace9778d',\n", - " 'thread_ts': '1ef2fdea-d44c-6fc4-8004-d2713436777d'}}}" - ] - }, - "execution_count": 11, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "final_result" - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "id": "ddd6e698-4609-4389-b84a-bb8939fff08b", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "'The updated search provides the current weather forecast for San Francisco. According to the results, as of 7:49am on June 21, 2024 in San Francisco, the temperature is 55°F (12.8°C), it is overcast with 100% cloud cover, and there are light winds from the south-southwest around 7 mph (11 km/h). The forecast also shows low precipitation of 0.01 mm, high humidity of 86%, and visibility of 9 miles (16 km).\\n\\nIn summary, the current weather in San Francisco is cool, overcast, and breezy based on this weather forecast data. 
Let me know if you need any other details!'" - ] - }, - "execution_count": 12, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# We can get the content of the final message\n", - "final_result['values']['messages'][-1]['content']" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "langgraph-example-dev", - "language": "python", - "name": "langgraph-example-dev" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/python/notebooks/configuration.ipynb b/examples/python/notebooks/configuration.ipynb deleted file mode 100644 index 3d427bd0..00000000 --- a/examples/python/notebooks/configuration.ipynb +++ /dev/null @@ -1,200 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "68c0837d-c40a-4209-9f88-5d08c00c31b0", - "metadata": {}, - "source": [ - "# How to create agents with configuration\n", - "\n", - "One of the benefits of LangGraph API is that it lets you create agents with different configurations.\n", - "This is useful when you want to:\n", - "\n", - "- Define a cognitive architecture once as a LangGraph\n", - "- Let that LangGraph be configurable across some attributes (for example, system message or LLM to use)\n", - "- Let users create agents with arbitrary configurations, save them, and then use them in the future\n", - "\n", - "In this guide we will show how to do that for the default agent we have built in.\n", - "\n", - "If you look at the agent we defined, you can see that inside the `call_model` node we have created the model based on some configuration. 
That node looks like:\n",
-    "\n",
-    "```python\n",
-    "def call_model(state, config):\n",
-    "    messages = state[\"messages\"]\n",
-    "    model_name = config.get('configurable', {}).get(\"model_name\", \"anthropic\")\n",
-    "    model = _get_model(model_name)\n",
-    "    response = model.invoke(messages)\n",
-    "    # We return a list, because this will get added to the existing list\n",
-    "    return {\"messages\": [response]}\n",
-    "```\n",
-    "\n",
-    "We are looking inside the config for a `model_name` parameter (which defaults to `anthropic` if none is found).\n",
-    "That means that by default we are using Anthropic as our model provider.\n",
-    "In this guide we will create an agent that is configured to use OpenAI instead.\n",
-    "\n",
-    "We've also communicated to the graph that it should expect configuration with this key. \n",
-    "We've done this by passing `config_schema` when constructing the graph, e.g.:\n",
-    "\n",
-    "```python\n",
-    "class GraphConfig(TypedDict):\n",
-    "    model_name: Literal[\"anthropic\", \"openai\"]\n",
-    "\n",
-    "\n",
-    "# Define a new graph\n",
-    "workflow = StateGraph(AgentState, config_schema=GraphConfig)\n",
-    "```"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 16,
-   "id": "f69c9a4f-2ef9-4998-827b-fe86d12bfd76",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from langgraph_sdk import get_client\n",
-    "\n",
-    "client = get_client()"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 14,
-   "id": "9a37bfb5-7331-4004-8054-508838e54f18",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# First, let's check what a valid configuration can look like\n",
-    "# We can do this by getting the default assistant\n",
-    "# There should always be a default assistant with no configuration\n",
-    "assistants = await client.assistants.search()\n",
-    "assistants = [a for a in assistants if not a['config']]\n",
-    "base_assistant = assistants[0]"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 17,
-   "id":
"70193a08-127c-44b3-a102-10db260d7e3b",
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "{'model_name': {'title': 'Model Name',\n",
-       " 'enum': ['anthropic', 'openai'],\n",
-       " 'type': 'string'}}"
-      ]
-     },
-     "execution_count": 17,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "# We can now call `.get_schemas` to get schemas associated with this graph\n",
-    "schemas = await client.assistants.get_schemas(assistant_id=base_assistant[\"assistant_id\"])\n",
-    "# There are multiple types of schemas\n",
-    "# We can get the `config_schema` to look at the configurable parameters\n",
-    "schemas['config_schema']['definitions']['Configurable']['properties']"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 18,
-   "id": "99be5aee-9a6b-4515-b72f-ba135a893c65",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "assistant = await client.assistants.create(graph_id=\"agent\", config={\"configurable\": {\"model_name\": \"openai\"}})"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "4f10d346-69e6-44f4-8ff0-ef539ba938df",
-   "metadata": {},
-   "source": [
-    "We can see that this assistant has saved the config."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 20,
-   "id": "3898ca35-eb2c-4b12-97ea-e0cc6a7c6a2e",
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "{'assistant_id': '40a3a2bf-5319-4fae-a2ac-05e075615cdc',\n",
-       " 'graph_id': 'agent',\n",
-       " 'config': {'configurable': {'model_name': 'openai'}},\n",
-       " 'created_at': '2024-06-05T23:12:30.519458+00:00',\n",
-       " 'updated_at': '2024-06-05T23:12:30.519458+00:00',\n",
-       " 'metadata': {}}"
-      ]
-     },
-     "execution_count": 20,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "assistant"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 21,
-   "id": "68ed7a1b-74be-4560-8c55-c76d49d3d348",
-   "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "StreamPart(event='metadata',
data={'run_id': '1ef23911-c23b-6d8c-b1dc-94bb982ca7b1'})\n", - "StreamPart(event='values', data={'messages': [{'role': 'user', 'content': 'who made you?'}]})\n", - "StreamPart(event='values', data={'messages': [{'content': 'who made you?', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': 'ed93c1c9-80d6-4f2b-a048-ef859ea533f9', 'example': False}, {'content': 'I was created by OpenAI, a research organization focused on developing and advancing artificial intelligence technology.', 'additional_kwargs': {}, 'response_metadata': {'finish_reason': 'stop'}, 'type': 'ai', 'name': None, 'id': 'run-6560cd65-5c9c-434b-8835-0baadc684760', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None}]})\n", - "StreamPart(event='end', data=None)\n" - ] - } - ], - "source": [ - "thread = await client.threads.create()\n", - "input = {\"messages\": [{\"role\": \"user\", \"content\": \"who made you?\"}]}\n", - "async for event in client.runs.stream(thread['thread_id'], assistant['assistant_id'], input=input):\n", - " print(event)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "666d78f1-019a-433e-839e-52d2ebb3d9c8", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.1" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/python/notebooks/double_texting.ipynb b/examples/python/notebooks/double_texting.ipynb deleted file mode 100644 index a0ad9bcd..00000000 --- a/examples/python/notebooks/double_texting.ipynb +++ /dev/null @@ -1,675 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": 
"51466c8d-8ce4-4b3d-be4e-18fdbeda5f53",
-   "metadata": {},
-   "source": [
-    "# How to handle \"double-texting\" or concurrent runs in your graph\n",
-    "\n",
-    "You might want to start a new run on a thread while the previous run hasn't finished yet. We call this \"double-texting\" or multi-tasking.\n",
-    "\n",
-    "There are several strategies for handling this:\n",
-    " \n",
-    "- `reject`: Reject the new run.\n",
-    "- `interrupt`: Interrupt the current run, keeping any steps completed so far, and start the new one.\n",
-    "- `rollback`: Cancel and delete the existing run, rolling back the thread to the state before it started, then start the new run.\n",
-    "- `enqueue`: Queue up the new run to start after the current run finishes."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "19fd3d4d-bfe3-40fb-bd47-53ae0e8012b5",
-   "metadata": {},
-   "source": [
-    "### Reject"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 1,
-   "id": "676d8d5d-e4be-4f19-b344-7525db8e805b",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from langgraph_sdk import get_client\n",
-    "from langchain_core.messages import convert_to_messages\n",
-    "import httpx"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 2,
-   "id": "8a15b47d-d3ac-4aa8-8bf1-35c20fc5067f",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "client = get_client()"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 3,
-   "id": "2b58be44-a311-487e-a91b-353a3e6a4e13",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "assistant_id = \"agent\""
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 4,
-   "id": "39cb7234-1fd7-4fda-a708-da26e4f00556",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "thread = await client.threads.create()"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 5,
-   "id": "6065166a-337e-4443-9c87-53a865356191",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "run = await client.runs.create(\n",
-    "    thread[\"thread_id\"],\n",
-    "    assistant_id,\n",
-    "    input={\"messages\":
[{\"role\": \"human\", \"content\": \"whats the weather in sf?\"}]}\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "bef2fb51-ece2-4152-8399-d3d902377d95", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Failed to start concurrent run Client error '409 Conflict' for url 'http://localhost:8123/threads/f9e7088b-8028-4e5c-88d2-9cc9a2870e50/runs'\n", - "For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409\n" - ] - } - ], - "source": [ - "try:\n", - " await client.runs.create(\n", - " thread[\"thread_id\"],\n", - " assistant_id,\n", - " input={\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in nyc?\"}]},\n", - " multitask_strategy=\"reject\",\n", - " )\n", - "except httpx.HTTPStatusError as e:\n", - " print(\"Failed to start concurrent run\", e)" - ] - }, - { - "cell_type": "markdown", - "id": "cfc9f025-027f-4617-abad-4722ca6fea87", - "metadata": {}, - "source": [ - "We can verify that the original thread finished executing:" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "094d2ee5-d01e-40ad-875b-3213705ee703", - "metadata": {}, - "outputs": [], - "source": [ - "# wait until the original run completes\n", - "await client.runs.join(thread[\"thread_id\"], run[\"run_id\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "2694e2ac-e02c-443c-b70e-748c779103af", - "metadata": {}, - "outputs": [], - "source": [ - "state = await client.threads.get_state(thread[\"thread_id\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "a6ec9609-ab97-4c6d-89f3-717240f244cc", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "================================\u001b[1m Human Message \u001b[0m=================================\n", - "\n", - "whats the weather in sf?\n", - "==================================\u001b[1m Ai Message 
\u001b[0m==================================\n", - "\n", - "[{'id': 'toolu_01CyewEifV2Kmi7EFKHbMDr1', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]\n", - "Tool Calls:\n", - " tavily_search_results_json (toolu_01CyewEifV2Kmi7EFKHbMDr1)\n", - " Call ID: toolu_01CyewEifV2Kmi7EFKHbMDr1\n", - " Args:\n", - " query: weather in san francisco\n", - "=================================\u001b[1m Tool Message \u001b[0m=================================\n", - "Name: tavily_search_results_json\n", - "\n", - "[{\"url\": \"https://www.accuweather.com/en/us/san-francisco/94103/june-weather/347629\", \"content\": \"Get the monthly weather forecast for San Francisco, CA, including daily high/low, historical averages, to help you plan ahead.\"}]\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "According to the search results from Tavily, the current weather in San Francisco is:\n", - "\n", - "The average high temperature in San Francisco in June is around 65°F (18°C), with average lows around 54°F (12°C). June tends to be one of the cooler and foggier months in San Francisco due to the marine layer of fog that often blankets the city during the summer months.\n", - "\n", - "Some key points about the typical June weather in San Francisco:\n", - "\n", - "- Mild temperatures with highs in the 60s F and lows in the 50s F\n", - "- Foggy mornings that often burn off to sunny afternoons\n", - "- Little to no rainfall, as June falls in the dry season\n", - "- Breezy conditions, with winds off the Pacific Ocean\n", - "- Layers are recommended for changing weather conditions\n", - "\n", - "So in summary, you can expect mild, foggy mornings giving way to sunny but cool afternoons in San Francisco this time of year. 
The marine layer keeps temperatures moderate compared to other parts of California in June.\n" - ] - } - ], - "source": [ - "for m in convert_to_messages(state[\"values\"][\"messages\"]):\n", - " m.pretty_print()" - ] - }, - { - "cell_type": "markdown", - "id": "73d23fc7-cb94-4378-b64b-85419534b913", - "metadata": {}, - "source": [ - "### Interrupt" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "id": "4f181881-e116-43e2-86de-797714184984", - "metadata": {}, - "outputs": [], - "source": [ - "import asyncio" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "id": "0af0b820-2d88-4e5d-b377-1ac5af4df5da", - "metadata": {}, - "outputs": [], - "source": [ - "thread = await client.threads.create()" - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "id": "b582d05f-46de-4cce-bf5c-744fd3253ae9", - "metadata": {}, - "outputs": [], - "source": [ - "# the first run will be interrupted\n", - "interrupted_run = await client.runs.create(\n", - " thread[\"thread_id\"],\n", - " assistant_id,\n", - " input={\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in sf?\"}]},\n", - ")\n", - "await asyncio.sleep(2)\n", - "run = await client.runs.create(\n", - " thread[\"thread_id\"],\n", - " assistant_id,\n", - " input={\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in nyc?\"}]},\n", - " multitask_strategy=\"interrupt\",\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "id": "3b3bee45-cbe6-4903-bed2-067768260677", - "metadata": {}, - "outputs": [], - "source": [ - "# wait until the second run completes\n", - "await client.runs.join(thread[\"thread_id\"], run[\"run_id\"])" - ] - }, - { - "cell_type": "markdown", - "id": "da18ceb0-ed98-49c8-89fb-df4cb0da9c4f", - "metadata": {}, - "source": [ - "We can see that the thread has partial data from the first run + data from the second run" - ] - }, - { - "cell_type": "code", - "execution_count": 14, - "id": 
"71ca7222-6b92-4f8a-ad0f-bf599e5bf9ce", - "metadata": {}, - "outputs": [], - "source": [ - "state = await client.threads.get_state(thread[\"thread_id\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "id": "e1ea9694-e011-4a60-a72e-5f496fc52cd9", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "================================\u001b[1m Human Message \u001b[0m=================================\n", - "\n", - "whats the weather in sf?\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "[{'id': 'toolu_01MjNtVJwEcpujRGrf3x6Pih', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]\n", - "Tool Calls:\n", - " tavily_search_results_json (toolu_01MjNtVJwEcpujRGrf3x6Pih)\n", - " Call ID: toolu_01MjNtVJwEcpujRGrf3x6Pih\n", - " Args:\n", - " query: weather in san francisco\n", - "=================================\u001b[1m Tool Message \u001b[0m=================================\n", - "Name: tavily_search_results_json\n", - "\n", - "[{\"url\": \"https://www.wunderground.com/hourly/us/ca/san-francisco/KCASANFR2002/date/2024-6-18\", \"content\": \"High 64F. Winds W at 10 to 20 mph. A few clouds from time to time. Low 49F. Winds W at 10 to 20 mph. Temp. San Francisco Weather Forecasts. 
Weather Underground provides local & long-range weather ...\"}]\n", - "================================\u001b[1m Human Message \u001b[0m=================================\n", - "\n", - "whats the weather in nyc?\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "[{'id': 'toolu_01KtE1m1ifPLQAx4fQLyZL9Q', 'input': {'query': 'weather in new york city'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]\n", - "Tool Calls:\n", - " tavily_search_results_json (toolu_01KtE1m1ifPLQAx4fQLyZL9Q)\n", - " Call ID: toolu_01KtE1m1ifPLQAx4fQLyZL9Q\n", - " Args:\n", - " query: weather in new york city\n", - "=================================\u001b[1m Tool Message \u001b[0m=================================\n", - "Name: tavily_search_results_json\n", - "\n", - "[{\"url\": \"https://www.accuweather.com/en/us/new-york/10021/june-weather/349727\", \"content\": \"Get the monthly weather forecast for New York, NY, including daily high/low, historical averages, to help you plan ahead.\"}]\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "The search results provide weather forecasts and information for New York City. Based on the top result from AccuWeather, here are some key details about the weather in NYC:\n", - "\n", - "- This is a monthly weather forecast for New York City for the month of June.\n", - "- It includes daily high and low temperatures to help plan ahead.\n", - "- Historical averages for June in NYC are also provided as a reference point.\n", - "- More detailed daily or hourly forecasts with precipitation chances, humidity, wind, etc. can be found by visiting the AccuWeather page.\n", - "\n", - "So in summary, the search provides a convenient overview of the expected weather conditions in New York City over the next month to give you an idea of what to prepare for if traveling or making plans there. 
Let me know if you need any other details!\n" - ] - } - ], - "source": [ - "for m in convert_to_messages(state[\"values\"][\"messages\"]):\n", - " m.pretty_print()" - ] - }, - { - "cell_type": "markdown", - "id": "aeea6251-f6b7-4395-9c47-7a0537a8a76a", - "metadata": {}, - "source": [ - "Verify that the original, interrupted run was interrupted" - ] - }, - { - "cell_type": "code", - "execution_count": 16, - "id": "f9f275ad-41a5-4b6b-a710-9ca1d0d3e903", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "'interrupted'" - ] - }, - "execution_count": 16, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "(await client.runs.get(thread[\"thread_id\"], interrupted_run[\"run_id\"]))[\"status\"]" - ] - }, - { - "cell_type": "markdown", - "id": "a5b91ae9-cb18-454b-a9ca-24f6ef6be8c6", - "metadata": {}, - "source": [ - "### Rollback" - ] - }, - { - "cell_type": "code", - "execution_count": 17, - "id": "9df4d417-f645-4ae5-a74a-f243cf223a4d", - "metadata": {}, - "outputs": [], - "source": [ - "thread = await client.threads.create()" - ] - }, - { - "cell_type": "code", - "execution_count": 18, - "id": "0637dda6-1cf4-4e28-962a-d12410139575", - "metadata": {}, - "outputs": [], - "source": [ - "# the first run will be rolled back\n", - "rolled_back_run = await client.runs.create(\n", - " thread[\"thread_id\"],\n", - " assistant_id,\n", - " input={\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in sf?\"}]},\n", - ")\n", - "await asyncio.sleep(2)\n", - "run = await client.runs.create(\n", - " thread[\"thread_id\"],\n", - " assistant_id,\n", - " input={\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in nyc?\"}]},\n", - " multitask_strategy=\"rollback\",\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 19, - "id": "764390cd-3f76-4d9d-80c5-9d12805f161c", - "metadata": {}, - "outputs": [], - "source": [ - "# wait until the second run completes\n", - "await
client.runs.join(thread[\"thread_id\"], run[\"run_id\"])" - ] - }, - { - "cell_type": "markdown", - "id": "490359c8-7faa-441f-90d5-f1f38e20d567", - "metadata": {}, - "source": [ - "We can see that the thread has data only from the second run" - ] - }, - { - "cell_type": "code", - "execution_count": 20, - "id": "794aa45e-50af-4bfc-8baf-f60c456154d6", - "metadata": {}, - "outputs": [], - "source": [ - "state = await client.threads.get_state(thread[\"thread_id\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 21, - "id": "6e2008a2-befa-433f-b893-725c83f35227", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "================================\u001b[1m Human Message \u001b[0m=================================\n", - "\n", - "whats the weather in nyc?\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "[{'id': 'toolu_01JzPqefao1gxwajHQ3Yh3JD', 'input': {'query': 'weather in nyc'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]\n", - "Tool Calls:\n", - " tavily_search_results_json (toolu_01JzPqefao1gxwajHQ3Yh3JD)\n", - " Call ID: toolu_01JzPqefao1gxwajHQ3Yh3JD\n", - " Args:\n", - " query: weather in nyc\n", - "=================================\u001b[1m Tool Message \u001b[0m=================================\n", - "Name: tavily_search_results_json\n", - "\n", - "[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{'location': {'name': 'New York', 'region': 'New York', 'country': 'United States of America', 'lat': 40.71, 'lon': -74.01, 'tz_id': 'America/New_York', 'localtime_epoch': 1718734479, 'localtime': '2024-06-18 14:14'}, 'current': {'last_updated_epoch': 1718733600, 'last_updated': '2024-06-18 14:00', 'temp_c': 29.4, 'temp_f': 84.9, 'is_day': 1, 'condition': {'text': 'Sunny', 'icon': '//cdn.weatherapi.com/weather/64x64/day/113.png', 'code': 1000}, 'wind_mph': 2.2, 'wind_kph': 3.6, 'wind_degree': 158, 'wind_dir': 'SSE', 
'pressure_mb': 1025.0, 'pressure_in': 30.26, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 63, 'cloud': 0, 'feelslike_c': 31.3, 'feelslike_f': 88.3, 'windchill_c': 28.3, 'windchill_f': 82.9, 'heatindex_c': 29.6, 'heatindex_f': 85.3, 'dewpoint_c': 18.4, 'dewpoint_f': 65.2, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 7.0, 'gust_mph': 16.5, 'gust_kph': 26.5}}\"}]\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "The weather API results show that the current weather in New York City is sunny with a temperature of around 85°F (29°C). The wind is light at around 2-3 mph from the south-southeast. Overall it looks like a nice sunny summer day in NYC.\n" - ] - } - ], - "source": [ - "for m in convert_to_messages(state[\"values\"][\"messages\"]):\n", - " m.pretty_print()" - ] - }, - { - "cell_type": "markdown", - "id": "035e5ec4-7ffc-4233-8755-d77109ab2f2e", - "metadata": {}, - "source": [ - "Verify that the original, rolled back run was deleted" - ] - }, - { - "cell_type": "code", - "execution_count": 22, - "id": "7bd408cb-cd8f-4eee-b898-b883fe6a17c6", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Original run was correctly deleted\n" - ] - } - ], - "source": [ - "try:\n", - " await client.runs.get(thread[\"thread_id\"], rolled_back_run[\"run_id\"])\n", - "except httpx.HTTPStatusError as e:\n", - " print(\"Original run was correctly deleted\")" - ] - }, - { - "cell_type": "markdown", - "id": "69accf95-8480-491a-927a-d7d19d498143", - "metadata": {}, - "source": [ - "### Enqueue" - ] - }, - { - "cell_type": "code", - "execution_count": 23, - "id": "86e7269b-f3f8-4dda-8260-f4522394e36a", - "metadata": {}, - "outputs": [], - "source": [ - "thread = await client.threads.create()" - ] - }, - { - "cell_type": "code", - "execution_count": 24, - "id": "3bb7b6d0-0999-4541-98b5-d058a3f45333", - "metadata": {}, - "outputs": [], - "source": [ - "# this run will be 
completed first\n", - "first_run = await client.runs.create(\n", - " thread[\"thread_id\"],\n", - " assistant_id,\n", - " input={\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in sf?\"}]}\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": 25, - "id": "4e88b197-5e8c-4d35-ad76-5872b333cc7f", - "metadata": {}, - "outputs": [], - "source": [ - "second_run = await client.runs.create(\n", - " thread[\"thread_id\"],\n", - " assistant_id,\n", - " input={\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in nyc?\"}]},\n", - " multitask_strategy=\"enqueue\",\n", - ")" - ] - }, - { - "cell_type": "markdown", - "id": "0f3a629c-3bfb-436d-a435-d9ce5c715dcb", - "metadata": {}, - "source": [ - "Verify that the thread has data from both runs" - ] - }, - { - "cell_type": "code", - "execution_count": 26, - "id": "79abbf3b-b425-4399-8926-c0af052ac127", - "metadata": {}, - "outputs": [], - "source": [ - "# wait until the second run completes\n", - "await client.runs.join(thread[\"thread_id\"], second_run[\"run_id\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 27, - "id": "e45b129b-2592-4f88-ab45-c63585493110", - "metadata": {}, - "outputs": [], - "source": [ - "state = await client.threads.get_state(thread[\"thread_id\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 28, - "id": "c522cefc-76b7-4c93-81cb-44cd5d7bd98c", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "================================\u001b[1m Human Message \u001b[0m=================================\n", - "\n", - "whats the weather in sf?\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "[{'id': 'toolu_01Dez1sJre4oA2Y7NsKJV6VT', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]\n", - "Tool Calls:\n", - " tavily_search_results_json (toolu_01Dez1sJre4oA2Y7NsKJV6VT)\n",
- " Call ID: toolu_01Dez1sJre4oA2Y7NsKJV6VT\n", - " Args:\n", - " query: weather in san francisco\n", - "=================================\u001b[1m Tool Message \u001b[0m=================================\n", - "Name: tavily_search_results_json\n", - "\n", - "[{\"url\": \"https://www.accuweather.com/en/us/san-francisco/94103/weather-forecast/347629\", \"content\": \"Get the current and future weather conditions for San Francisco, CA, including temperature, precipitation, wind, air quality and more. See the hourly and 10-day outlook, radar maps, alerts and allergy information.\"}]\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "According to AccuWeather, the current weather conditions in San Francisco are:\n", - "\n", - "Temperature: 57°F (14°C)\n", - "Conditions: Mostly Sunny\n", - "Wind: WSW 10 mph\n", - "Humidity: 72%\n", - "\n", - "The forecast for the next few days shows partly sunny skies with highs in the upper 50s to mid 60s F (14-18°C) and lows in the upper 40s to low 50s F (9-11°C). Typical mild, dry weather for San Francisco this time of year.\n", - "\n", - "Some key details from the AccuWeather forecast:\n", - "\n", - "Today: Mostly sunny, high of 62°F (17°C)\n", - "Tonight: Partly cloudy, low of 49°F (9°C) \n", - "Tomorrow: Partly sunny, high of 59°F (15°C)\n", - "Saturday: Mostly sunny, high of 64°F (18°C)\n", - "Sunday: Partly sunny, high of 61°F (16°C)\n", - "\n", - "So in summary, expect seasonable spring weather in San Francisco over the next several days, with a mix of sun and clouds and temperatures ranging from the upper 40s at night to the low 60s during the days. 
Typical dry conditions with no rain in the forecast.\n", - "================================\u001b[1m Human Message \u001b[0m=================================\n", - "\n", - "whats the weather in nyc?\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "[{'text': 'Here are the current weather conditions and forecast for New York City:', 'type': 'text'}, {'id': 'toolu_01FFft5Sx9oS6AdVJuRWWcGp', 'input': {'query': 'weather in new york city'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]\n", - "Tool Calls:\n", - " tavily_search_results_json (toolu_01FFft5Sx9oS6AdVJuRWWcGp)\n", - " Call ID: toolu_01FFft5Sx9oS6AdVJuRWWcGp\n", - " Args:\n", - " query: weather in new york city\n", - "=================================\u001b[1m Tool Message \u001b[0m=================================\n", - "Name: tavily_search_results_json\n", - "\n", - "[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{'location': {'name': 'New York', 'region': 'New York', 'country': 'United States of America', 'lat': 40.71, 'lon': -74.01, 'tz_id': 'America/New_York', 'localtime_epoch': 1718734479, 'localtime': '2024-06-18 14:14'}, 'current': {'last_updated_epoch': 1718733600, 'last_updated': '2024-06-18 14:00', 'temp_c': 29.4, 'temp_f': 84.9, 'is_day': 1, 'condition': {'text': 'Sunny', 'icon': '//cdn.weatherapi.com/weather/64x64/day/113.png', 'code': 1000}, 'wind_mph': 2.2, 'wind_kph': 3.6, 'wind_degree': 158, 'wind_dir': 'SSE', 'pressure_mb': 1025.0, 'pressure_in': 30.26, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 63, 'cloud': 0, 'feelslike_c': 31.3, 'feelslike_f': 88.3, 'windchill_c': 28.3, 'windchill_f': 82.9, 'heatindex_c': 29.6, 'heatindex_f': 85.3, 'dewpoint_c': 18.4, 'dewpoint_f': 65.2, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 7.0, 'gust_mph': 16.5, 'gust_kph': 26.5}}\"}]\n", - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "According to 
the weather data from WeatherAPI:\n", - "\n", - "Current Conditions in New York City (as of 2:00 PM local time):\n", - "- Temperature: 85°F (29°C)\n", - "- Conditions: Sunny\n", - "- Wind: 2 mph (4 km/h) from the SSE\n", - "- Humidity: 63%\n", - "- Heat Index: 85°F (30°C)\n", - "\n", - "The forecast shows sunny and warm conditions persisting over the next few days:\n", - "\n", - "Today: Sunny, high of 85°F (29°C)\n", - "Tonight: Clear, low of 68°F (20°C)\n", - "Tomorrow: Sunny, high of 88°F (31°C) \n", - "Thursday: Mostly sunny, high of 90°F (32°C)\n", - "Friday: Partly cloudy, high of 87°F (31°C)\n", - "\n", - "So New York City is experiencing beautiful sunny weather with seasonably warm temperatures in the mid-to-upper 80s Fahrenheit (around 30°C). Humidity is moderate in the 60% range. Overall, ideal late spring/early summer conditions for being outdoors in the city over the next several days.\n" - ] - } - ], - "source": [ - "for m in convert_to_messages(state[\"values\"][\"messages\"]):\n", - " m.pretty_print()" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "langgraph-example-dev", - "language": "python", - "name": "langgraph-example-dev" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/python/notebooks/human-in-the-loop.ipynb b/examples/python/notebooks/human-in-the-loop.ipynb deleted file mode 100644 index 5f200c97..00000000 --- a/examples/python/notebooks/human-in-the-loop.ipynb +++ /dev/null @@ -1,655 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "51466c8d-8ce4-4b3d-be4e-18fdbeda5f53", - "metadata": {}, - "source": [ - "# How to have a human in the loop\n", - "\n", - "With its built-in persistence layer, LangGraph API is perfect for
human-in-the-loop workflows.\n", - "Here we cover a few such examples:\n", - "\n", - "1. Having a human in the loop to approve a tool call\n", - "2. Having a human in the loop to edit a tool call\n", - "3. Having a human in the loop to edit an old state and resume execution from there\n" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "521d975b-e94b-4c37-bfa1-82d969e2a4dc", - "metadata": {}, - "outputs": [], - "source": [ - "from langgraph_sdk import get_client" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "27a1392b-86c3-464e-99a8-90ffc965f3ec", - "metadata": {}, - "outputs": [], - "source": [ - "client = get_client()" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "230c0464-a6e5-420f-9e38-ca514e5634ce", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca',\n", - " 'graph_id': 'agent',\n", - " 'config': {},\n", - " 'created_at': '2024-05-18T00:19:39.688822+00:00',\n", - " 'updated_at': '2024-05-18T00:19:39.688822+00:00',\n", - " 'metadata': {'created_by': 'system'}}" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "assistant_id = \"agent\"" - ] - }, - { - "cell_type": "markdown", - "id": "e0209129-239b-452e-a59a-47be716bbf8c", - "metadata": {}, - "source": [ - "## Approve a tool call" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "56aa5159-5583-4134-9210-709b969bda6f", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'thread_id': '54ed0901-6767-46c9-a5f9-b65c1c5fd89c',\n", - " 'created_at': '2024-05-18T22:46:16.724701+00:00',\n", - " 'updated_at': '2024-05-18T22:46:16.724701+00:00',\n", - " 'metadata': {}}" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "thread = await client.threads.create()\n", - "thread" - ] - }, - { - "cell_type": "code", - "execution_count": 
6, - "id": "147c3f98-f889-4f05-a090-6b31f2a0b291", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[]" - ] - }, - "execution_count": 6, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "runs = await client.runs.list(thread['thread_id'])\n", - "runs" - ] - }, - { - "cell_type": "markdown", - "id": "77dae6ad-bb7b-468d-b7fd-9b8a35f13ccb", - "metadata": {}, - "source": [ - "We now want to add a human-in-the-loop step before a tool is called.\n", - "We can do this by adding `interrupt_before=[\"action\"]`, which tells us to interrupt before calling the action node.\n", - "We can do this either when compiling the graph or when kicking off a run.\n", - "Here we will do it when kicking off a run." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "7da70e20-1a4e-4df2-b996-1927f474c835", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Receiving new event of type: metadata...\n", - "{'run_id': '3b77ef83-687a-4840-8858-0371f91a92c3'}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'agent': {'messages': [{'content': [{'id': 'toolu_01HwZqM1ptX6E15A5LAmyZTB', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-e5d17791-4d37-4ad2-815f-a0c4cba62585', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in san francisco'}, 'id': 'toolu_01HwZqM1ptX6E15A5LAmyZTB'}], 'invalid_tool_calls': []}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: end...\n", - "None\n", - "\n", - "\n", - "\n" - ] - } - ], - "source": [ - "input = {\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in sf\"}]}\n", - "async for chunk in client.runs.stream(\n", - " thread['thread_id'], assistant_id, input=input, stream_mode=\"updates\",
interrupt_before=['action']\n", - "):\n", - " print(f\"Receiving new event of type: {chunk.event}...\")\n", - " print(chunk.data)\n", - " print(\"\\n\\n\")" - ] - }, - { - "cell_type": "markdown", - "id": "a36ac0d6-7843-4fab-909c-0b5b6e725a7f", - "metadata": {}, - "source": [ - "We can now kick off a new run on the same thread with `None` as the input in order to just continue the existing thread." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "bded66c7-b56e-4db5-809f-fa5a31d8a012", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Receiving new event of type: metadata...\n", - "{'run_id': 'a46f733d-cf5b-4ee3-9e07-08612468c8df'}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'action': {'messages': [{'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'San Francisco\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 37.78, \\'lon\\': -122.42, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1716072201, \\'localtime\\': \\'2024-05-18 15:43\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716071400, \\'last_updated\\': \\'2024-05-18 15:30\\', \\'temp_c\\': 18.9, \\'temp_f\\': 66.0, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Partly cloudy\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/116.png\\', \\'code\\': 1003}, \\'wind_mph\\': 18.6, \\'wind_kph\\': 29.9, \\'wind_degree\\': 280, \\'wind_dir\\': \\'W\\', \\'pressure_mb\\': 1015.0, \\'pressure_in\\': 29.96, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 59, \\'cloud\\': 25, \\'feelslike_c\\': 18.9, \\'feelslike_f\\': 66.0, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 5.0, \\'gust_mph\\': 23.0, \\'gust_kph\\': 37.1}}\"}]', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'tool', 'name': 'tavily_search_results_json', 'id': '8be98ff3-6d61-41c5-8384-8db6b7abdbfb', 
'tool_call_id': 'toolu_01HwZqM1ptX6E15A5LAmyZTB'}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'agent': {'messages': [{'content': \"The weather in San Francisco is currently partly cloudy with a temperature of around 66°F (18.9°C). There are westerly winds of 18.6 mph (29.9 km/h) with gusts up to 23 mph (37.1 km/h). The humidity is 59% and visibility is good at 9 miles (16 km). UV levels are moderate at 5.0.\\n\\nIn summary, it's a nice partly cloudy spring day in San Francisco with comfortable temperatures and a moderate breeze. The weather conditions seem ideal for being outdoors and enjoying the city.\", 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-7a8a2ff8-d0d6-4200-b0a5-926f2b6a4798', 'example': False, 'tool_calls': [], 'invalid_tool_calls': []}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: end...\n", - "None\n", - "\n", - "\n", - "\n" - ] - } - ], - "source": [ - "input = None\n", - "async for chunk in client.runs.stream(\n", - " thread['thread_id'], assistant_id, input=input, stream_mode=\"updates\", interrupt_before=['action']\n", - "):\n", - " print(f\"Receiving new event of type: {chunk.event}...\")\n", - " print(chunk.data)\n", - " print(\"\\n\\n\")" - ] - }, - { - "cell_type": "markdown", - "id": "2072ce5a-8771-42f9-b2de-5d3a7a9c817b", - "metadata": {}, - "source": [ - "## Edit a tool call\n", - "\n", - "What if we want to edit the tool call?\n", - "We can also do that.\n", - "Let's kick off another run, with the same `interrupt_before=['action']`" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "b226b687-02da-4eef-9286-46dba92b17ba", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Receiving new event of type: metadata...\n", - "{'run_id': 'c7c8e313-dad9-47d9-bd03-e112c94eff9e'}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'agent': {'messages': [{'content': 
[{'id': 'toolu_01NGhKmeciaT7TfhBSwUT3mi', 'input': {'query': 'weather in los angeles'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-3d417aa5-e9c1-4b76-90f8-597519c28af9', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in los angeles'}, 'id': 'toolu_01NGhKmeciaT7TfhBSwUT3mi'}], 'invalid_tool_calls': []}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: end...\n", - "None\n", - "\n", - "\n", - "\n" - ] - } - ], - "source": [ - "input = {\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in la?\"}]}\n", - "async for chunk in client.runs.stream(\n", - " thread['thread_id'], assistant_id, input=input, stream_mode=\"updates\", interrupt_before=['action']\n", - "):\n", - " print(f\"Receiving new event of type: {chunk.event}...\")\n", - " print(chunk.data)\n", - " print(\"\\n\\n\")" - ] - }, - { - "cell_type": "markdown", - "id": "ab338423-c18d-446c-9aa3-3ad2f16d742a", - "metadata": {}, - "source": [ - "We can now inspect the state of the thread" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "id": "bd9ca1f4-c3b0-4fa3-8c91-233a9129a142", - "metadata": {}, - "outputs": [], - "source": [ - "thread_state = await client.threads.get_state(thread['thread_id'])" - ] - }, - { - "cell_type": "markdown", - "id": "31e82414-afd2-46c4-a605-ce3eb46df485", - "metadata": {}, - "source": [ - "Let's get the last message of the thread - this is the one we want to update" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "id": "fe832ec1-7ae0-4d11-8408-d4da88d4dced", - "metadata": {}, - "outputs": [], - "source": [ - "last_message = thread_state['values']['messages'][-1]" - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "id": "434253fe-7397-45e2-8be8-91d002088a96", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'id': 
'toolu_01NGhKmeciaT7TfhBSwUT3mi',\n", - " 'input': {'query': 'weather in los angeles'},\n", - " 'name': 'tavily_search_results_json',\n", - " 'type': 'tool_use'}]" - ] - }, - "execution_count": 12, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "last_message['content']" - ] - }, - { - "cell_type": "markdown", - "id": "6d007b31-c8a2-465c-bc78-a5909ca7931c", - "metadata": {}, - "source": [ - "Let's now modify the tool call to say Louisiana" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "id": "55fcb316-450b-4b8c-9ae9-e7ee395acc55", - "metadata": {}, - "outputs": [], - "source": [ - "last_message['tool_calls'] = [{\n", - " 'id': last_message['tool_calls'][0]['id'],\n", - " 'name': 'tavily_search_results_json',\n", - " # We change the query to say Louisiana\n", - " 'args': {'query': 'weather in Louisiana'}\n", - "}]\n", - "# last_message['content'] = [{\n", - "# 'id': last_message['content'][0]['id'],\n", - "# 'name': 'tavily_search_results_json',\n", - "# # We change the query to say Louisiana\n", - "# 'input': {'query': 'weather in Louisiana'},\n", - "# 'type': 'tool_use'\n", - "# }]" - ] - }, - { - "cell_type": "markdown", - "id": "d49be54e-5334-47be-8dfb-78b8a8155e98", - "metadata": {}, - "source": [ - "We can now update the state - we only need to pass in the last updated message because our graph will handle the update."
- ] - }, - { - "cell_type": "code", - "execution_count": 14, - "id": "0438f997-bad3-48f6-b532-9ac3a95263c2", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'configurable': {'thread_id': '54ed0901-6767-46c9-a5f9-b65c1c5fd89c',\n", - " 'thread_ts': '1ef15688-1dbd-68f5-8007-75dc0e110124'}}" - ] - }, - "execution_count": 14, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "await client.threads.update_state(thread['thread_id'], values={\"messages\": [last_message]})" - ] - }, - { - "cell_type": "markdown", - "id": "c96668ab-80fa-4ae6-a90b-773a943ba331", - "metadata": {}, - "source": [ - "Let's now check the state of the thread again, and in particular the final message" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "id": "31936711-4af4-4bd1-ac10-9ce52922dd2f", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'name': 'tavily_search_results_json',\n", - " 'args': {'query': 'weather in Louisiana'},\n", - " 'id': 'toolu_01NGhKmeciaT7TfhBSwUT3mi'}]" - ] - }, - "execution_count": 15, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "thread_state = await client.threads.get_state(thread['thread_id'])\n", - "thread_state['values']['messages'][-1]['tool_calls']" - ] - }, - { - "cell_type": "markdown", - "id": "20aa8ff3-7876-4db2-9333-c5396cd637ac", - "metadata": {}, - "source": [ - "Great! We changed it. If we now resume execution (by kicking off a new run with null inputs on the same thread) it should use that new tool call." 
- ] - }, - { - "cell_type": "code", - "execution_count": 16, - "id": "8e2c4eeb-2888-4979-9877-aa4a53dec5ea", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Receiving new event of type: metadata...\n", - "{'run_id': '1a1ebed1-3581-418a-81be-e834b40c5c82'}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'action': {'messages': [{'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'Louisiana\\', \\'region\\': \\'Missouri\\', \\'country\\': \\'USA United States of America\\', \\'lat\\': 39.44, \\'lon\\': -91.06, \\'tz_id\\': \\'America/Chicago\\', \\'localtime_epoch\\': 1716072393, \\'localtime\\': \\'2024-05-18 17:46\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716072300, \\'last_updated\\': \\'2024-05-18 17:45\\', \\'temp_c\\': 29.0, \\'temp_f\\': 84.2, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Partly cloudy\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/116.png\\', \\'code\\': 1003}, \\'wind_mph\\': 6.9, \\'wind_kph\\': 11.2, \\'wind_degree\\': 220, \\'wind_dir\\': \\'SW\\', \\'pressure_mb\\': 1011.0, \\'pressure_in\\': 29.86, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 46, \\'cloud\\': 50, \\'feelslike_c\\': 31.4, \\'feelslike_f\\': 88.6, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 7.0, \\'gust_mph\\': 7.4, \\'gust_kph\\': 11.9}}\"}]', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'tool', 'name': 'tavily_search_results_json', 'id': '728f8ac9-729e-4bf7-b560-b332a73c8f47', 'tool_call_id': 'toolu_01NGhKmeciaT7TfhBSwUT3mi'}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'agent': {'messages': [{'content': [{'text': 'The search results seem to be for the weather in Louisiana, Missouri rather than Los Angeles, California. 
Let me try the search again:', 'type': 'text'}, {'id': 'toolu_019YAXWMK33tG9DaxMzrowc8', 'input': {'query': 'weather in los angeles california'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-c42a3b14-2611-4a1d-8907-95dcdb18f07f', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in los angeles california'}, 'id': 'toolu_019YAXWMK33tG9DaxMzrowc8'}], 'invalid_tool_calls': []}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: end...\n", - "None\n", - "\n", - "\n", - "\n" - ] - } - ], - "source": [ - "input = None\n", - "async for chunk in client.runs.stream(\n", - " thread['thread_id'], assistant_id, input=input, stream_mode=\"updates\", interrupt_before=['action']\n", - "):\n", - " print(f\"Receiving new event of type: {chunk.event}...\")\n", - " print(chunk.data)\n", - " print(\"\\n\\n\")" - ] - }, - { - "cell_type": "markdown", - "id": "065f8165-43d8-4876-86af-0cfffd712fee", - "metadata": {}, - "source": [ - "## Edit an old state\n", - "\n", - "Let's now imagine we want to go back in time and edit the tool call after we had already made it.\n", - "To do this, we can first get the full history of the thread."
- ] - }, - { - "cell_type": "code", - "execution_count": 46, - "id": "de050efd-73a4-441e-91e0-18e08f773a42", - "metadata": {}, - "outputs": [], - "source": [ - "thread_history = await client.threads.get_history(thread['thread_id'], limit=100)" - ] - }, - { - "cell_type": "code", - "execution_count": 47, - "id": "07e15435-4a5f-4c2a-b748-0e0f7ab02a28", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "11" - ] - }, - "execution_count": 47, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "len(thread_history)" - ] - }, - { - "cell_type": "markdown", - "id": "a292e721-36c4-41b8-85e4-378f0770652a", - "metadata": {}, - "source": [ - "After that, we can get the correct state we want to be in. The 0th index state is the most recent one, while the -1 index state is the first.\n", - "In this case, we want to go to the state where the last message had the tool calls for `weather in los angeles`" - ] - }, - { - "cell_type": "code", - "execution_count": 48, - "id": "132d207c-11cb-4efb-a330-88ebdfc612c8", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'name': 'tavily_search_results_json',\n", - " 'args': {'query': 'weather in los angeles'},\n", - " 'id': 'toolu_01FnuDKhUfagwoqhNfiTYTfS'}]" - ] - }, - "execution_count": 48, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "rewind_state = thread_history[3]\n", - "rewind_state['values']['messages'][-1]['tool_calls']" - ] - }, - { - "cell_type": "code", - "execution_count": 49, - "id": "45e01ddf-2ccf-4029-b431-e5fce2235b59", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'configurable': {'thread_id': 'df85453d-cb86-48c8-ae84-12081faa1bdf',\n", - " 'thread_ts': '1ef15582-3442-6db7-8006-9166bbb0e80f'}}" - ] - }, - "execution_count": 49, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "rewind_state['config']" - ] - }, - { - "cell_type": "markdown", - "id": 
"d229468e-2f94-4b29-b56b-1d402554dcfb", - "metadata": {}, - "source": [ - "If we want to, we can now resume execution from that place in time" - ] - }, - { - "cell_type": "code", - "execution_count": 50, - "id": "94ebc63e-f2cf-4da1-bc8d-52c4731ab0c6", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Receiving new event of type: metadata...\n", - "{'run_id': 'a1cc9263-ef0a-4c04-9194-6f01624d0ef0'}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'action': {'messages': [{'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'Los Angeles\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 34.05, \\'lon\\': -118.24, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1716071728, \\'localtime\\': \\'2024-05-18 15:35\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716071400, \\'last_updated\\': \\'2024-05-18 15:30\\', \\'temp_c\\': 20.0, \\'temp_f\\': 68.0, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Partly cloudy\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/116.png\\', \\'code\\': 1003}, \\'wind_mph\\': 2.2, \\'wind_kph\\': 3.6, \\'wind_degree\\': 226, \\'wind_dir\\': \\'SW\\', \\'pressure_mb\\': 1016.0, \\'pressure_in\\': 29.99, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 61, \\'cloud\\': 50, \\'feelslike_c\\': 20.0, \\'feelslike_f\\': 68.0, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 6.0, \\'gust_mph\\': 12.6, \\'gust_kph\\': 20.3}}\"}]', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'tool', 'name': 'tavily_search_results_json', 'id': '7137b2e5-566b-418b-b642-b3c6b64c5224', 'tool_call_id': 'toolu_01FnuDKhUfagwoqhNfiTYTfS'}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'agent': {'messages': [{'content': 'The search results show the current weather conditions in Los Angeles. 
As of 3:30pm on May 18, 2024, the weather in Los Angeles is partly cloudy with a temperature around 68°F (20°C). Winds are light from the southwest around 2-3 mph. The humidity is 61% and visibility is good at 9 miles. Overall, it appears to be a nice spring day in LA with partly sunny skies and comfortable temperatures in the upper 60s.', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-3966b68a-c381-4933-a852-e6a4697c962c', 'example': False, 'tool_calls': [], 'invalid_tool_calls': []}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: end...\n", - "None\n", - "\n", - "\n", - "\n" - ] - } - ], - "source": [ - "input = None\n", - "async for chunk in client.runs.stream(\n", - " thread['thread_id'], \n", - " assistant_id, \n", - " input=input, \n", - " stream_mode=\"updates\", \n", - " interrupt_before=['action'],\n", - " config=rewind_state['config']\n", - "):\n", - " print(f\"Receiving new event of type: {chunk.event}...\")\n", - " print(chunk.data)\n", - " print(\"\\n\\n\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "492f1d37-0979-4210-8dd7-bc70cdc308f3", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/python/notebooks/same-thread.ipynb b/examples/python/notebooks/same-thread.ipynb deleted file mode 100644 index 94d08a04..00000000 --- a/examples/python/notebooks/same-thread.ipynb +++ /dev/null @@ -1,184 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "68c0837d-c40a-4209-9f88-5d08c00c31b0", - "metadata": {}, - "source": 
[ - "# How to run multiple agents on the same thread\n", - "\n", - "In LangGraph API, a thread is not explicitly associated with a particular agent.\n", - "This means that you can run multiple agents on the same thread.\n", - "In this example, we will create two agents and then call them both on the same thread." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "e06be1f6-07a5-4e93-8497-02473fc65d4f", - "metadata": {}, - "outputs": [], - "source": [ - "from langgraph_sdk import get_client\n", - "\n", - "client = get_client()\n", - "\n", - "openai_assistant = await client.assistants.create(graph_id=\"agent\", config={\"configurable\": {\"model_name\": \"openai\"}})\n", - "\n", - "# There should always be a default assistant with no configuration\n", - "assistants = await client.assistants.search()\n", - "default_assistant = [a for a in assistants if not a['config']][0]" - ] - }, - { - "cell_type": "markdown", - "id": "4f10d346-69e6-44f4-8ff0-ef539ba938df", - "metadata": {}, - "source": [ - "We can see that these agents are different" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "3898ca35-eb2c-4b12-97ea-e0cc6a7c6a2e", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'assistant_id': '13ecc353-a9a9-474b-a824-b6a343cd74b1',\n", - " 'graph_id': 'agent',\n", - " 'config': {'configurable': {'model_name': 'openai'}},\n", - " 'created_at': '2024-05-21T16:22:59.258447+00:00',\n", - " 'updated_at': '2024-05-21T16:22:59.258447+00:00',\n", - " 'metadata': {}}" - ] - }, - "execution_count": 8, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "openai_assistant" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "a8fa67b2-cb4f-43d3-a1fc-f8b3936c16b6", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca',\n", - " 'graph_id': 'agent',\n", - " 'config': {},\n", - " 'created_at': 
'2024-05-18T00:19:39.688822+00:00',\n", - " 'updated_at': '2024-05-18T00:19:39.688822+00:00',\n", - " 'metadata': {'created_by': 'system'}}" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "default_assistant" - ] - }, - { - "cell_type": "markdown", - "id": "5e655e61-c2ee-488a-90f6-6189c84841da", - "metadata": {}, - "source": [ - "We can now run it on the OpenAI assistant first." - ] - }, - { - "cell_type": "code", - "execution_count": 14, - "id": "68ed7a1b-74be-4560-8c55-c76d49d3d348", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "StreamPart(event='metadata', data={'run_id': 'f90b3029-8669-4d70-976c-b70368e355d8'})\n", - "StreamPart(event='updates', data={'agent': {'messages': [{'content': 'I was created by OpenAI, a research organization focused on developing and advancing artificial intelligence technology.', 'additional_kwargs': {}, 'response_metadata': {'finish_reason': 'stop'}, 'type': 'ai', 'name': None, 'id': 'run-9801a5ba-2f3c-43de-89cf-c740debf36fc', 'example': False, 'tool_calls': [], 'invalid_tool_calls': []}]}})\n", - "StreamPart(event='end', data=None)\n" - ] - } - ], - "source": [ - "thread = await client.threads.create()\n", - "input = {\"messages\": [{\"role\": \"user\", \"content\": \"who made you?\"}]}\n", - "async for event in client.runs.stream(thread['thread_id'], openai_assistant['assistant_id'], input=input, stream_mode='updates'):\n", - " print(event)" - ] - }, - { - "cell_type": "markdown", - "id": "c53709e9-ddb2-4429-9042-456eb6c91244", - "metadata": {}, - "source": [ - "Now, we can run it on a different Anthropic-based assistant." 
- ] - }, - { - "cell_type": "code", - "execution_count": 15, - "id": "666d78f1-019a-433e-839e-52d2ebb3d9c8", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "StreamPart(event='metadata', data={'run_id': 'c3521302-48ae-4c29-a0f2-5eb865cbc6d7'})\n", - "StreamPart(event='updates', data={'agent': {'messages': [{'content': \"I am an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't actually have a physical form or visual representation - I exist as a language model trained to have natural conversations.\", 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-4d05ffd7-0505-43e1-a068-0207c56b7665', 'example': False, 'tool_calls': [], 'invalid_tool_calls': []}]}})\n", - "StreamPart(event='end', data=None)\n" - ] - } - ], - "source": [ - "input = {\"messages\": [{\"role\": \"user\", \"content\": \"and you?\"}]}\n", - "async for event in client.runs.stream(thread['thread_id'], default_assistant['assistant_id'], input=input, stream_mode='updates'):\n", - " print(event)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "4c26df68-c447-4a88-bc94-59df42b117b5", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.1" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/python/notebooks/stream_messages.ipynb b/examples/python/notebooks/stream_messages.ipynb deleted file mode 100644 index 2aed7967..00000000 --- a/examples/python/notebooks/stream_messages.ipynb +++ /dev/null @@ -1,389 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": 
"51466c8d-8ce4-4b3d-be4e-18fdbeda5f53", - "metadata": {}, - "source": [ - "# How to stream messages from your graph\n", - "\n", - "There are multiple different streaming modes.\n", - "\n", - "- `values`: This streaming mode streams back values of the graph. This is the **full state of the graph** after each node is called.\n", - "- `updates`: This streaming mode streams back updates to the graph. This is the **update to the state of the graph** after each node is called.\n", - "- `messages`: This streaming mode streams back messages - both complete messages (at the end of a node) as well as **tokens** for any messages generated inside a node. This mode is primarily meant for powering chat applications.\n", - "\n", - "\n", - "This notebook covers `stream_mode=\"messages\"`.\n", - "\n", - "In order to use this mode, the state of the graph you are interacting with MUST have a `messages` key that is a list of messages.\n", - "E.g., the state should look something like:\n", - "\n", - "```python\n", - "from typing import TypedDict, Annotated\n", - "from langgraph.graph import add_messages\n", - "from langchain_core.messages import AnyMessage\n", - "\n", - "class State(TypedDict):\n", - " messages: Annotated[list[AnyMessage], add_messages]\n", - "```\n", - "\n", - "Alternatively, the state can be an instance or subclass of `MessagesState` from `langgraph.graph` (`MessagesState` is just a helper type hint equivalent to the above).\n", - "\n", - "With `stream_mode=\"messages\"` two kinds of messages will be streamed back:\n", - "\n", - "- messages produced by any chat model called inside a node (unless tagged in a special way)\n", - "- messages returned from nodes (to allow nodes to return `ToolMessage`s and the like)" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "521d975b-e94b-4c37-bfa1-82d969e2a4dc", - "metadata": {}, - "outputs": [], - "source": [ - "from langgraph_sdk import get_client" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": 
"27a1392b-86c3-464e-99a8-90ffc965f3ec", - "metadata": {}, - "outputs": [], - "source": [ - "client = get_client()" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "714e9f92-86b4-4cd8-9d68-cfc45d56ed2c", - "metadata": {}, - "outputs": [], - "source": [ - "assistant_id = \"agent\"" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "56aa5159-5583-4134-9210-709b969bda6f", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'thread_id': 'e1431c95-e241-4d1d-a252-27eceb1e5c86',\n", - " 'created_at': '2024-06-21T15:48:59.808924+00:00',\n", - " 'updated_at': '2024-06-21T15:48:59.808924+00:00',\n", - " 'metadata': {}}" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "thread = await client.threads.create()\n", - "thread" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "147c3f98-f889-4f05-a090-6b31f2a0b291", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[]" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "runs = await client.runs.list(thread['thread_id'])\n", - "runs" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "040795c6-5d9f-4729-9132-f3b0f94d9e94", - "metadata": {}, - "outputs": [], - "source": [ - "# Helper function for formatting messages\n", - "\n", - "def format_tool_calls(tool_calls):\n", - " if tool_calls:\n", - " formatted_calls = []\n", - " for call in tool_calls:\n", - " formatted_calls.append(f\"Tool Call ID: {call['id']}, Function: {call['name']}, Arguments: {call['args']}\")\n", - " return \"\\n\".join(formatted_calls)\n", - " return \"No tool calls\"" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "7da70e20-1a4e-4df2-b996-1927f474c835", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Metadata: Run ID - 1ef2fe5c-6a1d-6575-bc09-d7832711c17e\n", - 
"--------------------------------------------------\n", - "Invalid Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: \n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': ''}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': 'current'}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': 'current weather'}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': 'current weather in'}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': 'current weather in San'}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': 'current weather in San Francisco'}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': 'current weather in 
San Francisco'}\n", - "--------------------------------------------------\n", - "Tool Calls:\n", - "Tool Call ID: call_cg14F20jMBqWYrNgEkdWHwB3, Function: tavily_search_results_json, Arguments: {'query': 'current weather in San Francisco'}\n", - "Response Metadata: Finish Reason - tool_calls\n", - "--------------------------------------------------\n", - "--------------------------------------------------\n", - "AI: The\n", - "--------------------------------------------------\n", - "AI: The current\n", - "--------------------------------------------------\n", - "AI: The current weather\n", - "--------------------------------------------------\n", - "AI: The current weather in\n", - "--------------------------------------------------\n", - "AI: The current weather in San\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is over\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of \n", - "--------------------------------------------------\n", - "AI: The current weather in San 
Francisco is overcast with a temperature of 13\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F).\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). 
The wind is\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-s\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-south\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at \n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). 
The wind is blowing from the south-southwest at 6\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 k\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). 
The wind is blowing from the south-southwest at 6.9 mph (11.2 kph\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph).\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at \n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). 
The humidity is at 81%,\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is \n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). 
The humidity is at 81%, and the visibility is 16 km\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles).\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles). The\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles). The UV\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). 
The humidity is at 81%, and the visibility is 16 km (9 miles). The UV index\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles). The UV index is\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles). The UV index is \n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles). The UV index is 3\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles). The UV index is 3.\n", - "--------------------------------------------------\n", - "AI: The current weather in San Francisco is overcast with a temperature of 13.9°C (57.0°F). The wind is blowing from the south-southwest at 6.9 mph (11.2 kph). The humidity is at 81%, and the visibility is 16 km (9 miles). 
The UV index is 3.\n", - "Response Metadata: Finish Reason - stop\n", - "--------------------------------------------------\n" - ] - } - ], - "source": [ - "input = {\"messages\": [{\"role\": \"user\", \"content\": \"whats the weather in sf\"}]}\n", - "config = {\"configurable\": {\"model_name\": \"openai\"}}\n", - "\n", - "async for event in client.runs.stream(thread['thread_id'], assistant_id, input=input, config=config, stream_mode='messages'):\n", - " if event.event == 'metadata':\n", - " print(f\"Metadata: Run ID - {event.data['run_id']}\")\n", - " print(\"-\" * 50)\n", - " elif event.event == 'messages/partial':\n", - " for data_item in event.data:\n", - " if 'role' in data_item and data_item['role'] == 'user':\n", - " print(f\"Human: {data_item['content']}\")\n", - " else:\n", - " tool_calls = data_item.get('tool_calls', [])\n", - " invalid_tool_calls = data_item.get('invalid_tool_calls', [])\n", - " content = data_item.get('content', \"\")\n", - " response_metadata = data_item.get('response_metadata', {})\n", - "\n", - " if content:\n", - " print(f\"AI: {content}\")\n", - " \n", - " if tool_calls:\n", - " print(\"Tool Calls:\")\n", - " print(format_tool_calls(tool_calls))\n", - " \n", - " if invalid_tool_calls:\n", - " print(\"Invalid Tool Calls:\")\n", - " print(format_tool_calls(invalid_tool_calls))\n", - "\n", - " if response_metadata:\n", - " finish_reason = response_metadata.get('finish_reason', 'N/A')\n", - " print(f\"Response Metadata: Finish Reason - {finish_reason}\")\n", - " print(\"-\" * 50)\n", - " " - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "langgraph-example-dev", - "language": "python", - "name": "langgraph-example-dev" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 
-} diff --git a/examples/python/notebooks/stream_updates.ipynb b/examples/python/notebooks/stream_updates.ipynb deleted file mode 100644 index 199dd3f7..00000000 --- a/examples/python/notebooks/stream_updates.ipynb +++ /dev/null @@ -1,173 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "51466c8d-8ce4-4b3d-be4e-18fdbeda5f53", - "metadata": {}, - "source": [ - "# How to stream updates from your graph\n", - "\n", - "There are multiple different streaming modes.\n", - "\n", - "- `values`: This streaming mode streams back values of the graph. This is the **full state of the graph** after each node is called.\n", - "- `updates`: This streaming mode streams back updates to the graph. This is the **update to the state of the graph** after each node is called.\n", - "- `messages`: This streaming mode streams back messages - both complete messages (at the end of a node) as well as **tokens** for any messages generated inside a node. This mode is primarily meant for powering chat applications.\n", - "\n", - "\n", - "This notebook covers `stream_mode=\"updates\"`."
- ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "521d975b-e94b-4c37-bfa1-82d969e2a4dc", - "metadata": {}, - "outputs": [], - "source": [ - "from langgraph_sdk import get_client" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "27a1392b-86c3-464e-99a8-90ffc965f3ec", - "metadata": {}, - "outputs": [], - "source": [ - "client = get_client()" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "230c0464-a6e5-420f-9e38-ca514e5634ce", - "metadata": {}, - "outputs": [], - "source": [ - "assistant_id = \"agent\"" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "56aa5159-5583-4134-9210-709b969bda6f", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'thread_id': '979e3c89-a702-4882-87c2-7a59a250ce16',\n", - " 'created_at': '2024-06-21T15:22:07.453100+00:00',\n", - " 'updated_at': '2024-06-21T15:22:07.453100+00:00',\n", - " 'metadata': {}}" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "thread = await client.threads.create()\n", - "thread" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "147c3f98-f889-4f05-a090-6b31f2a0b291", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[]" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "runs = await client.runs.list(thread['thread_id'])\n", - "runs" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "7da70e20-1a4e-4df2-b996-1927f474c835", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Receiving new event of type: metadata...\n", - "{'run_id': 'cfc96c16-ed9a-44bd-b5bb-c30e3c0725f0'}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'agent': {'messages': [{'content': [{'id': 'toolu_0148tMmDK51iLQfG1yaNwRHM', 'input': {'query': 'weather in los angeles'}, 'name': 
'tavily_search_results_json', 'type': 'tool_use'}], 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-1a9d32b0-7007-4a36-abde-8df812a0ed94', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in los angeles'}, 'id': 'toolu_0148tMmDK51iLQfG1yaNwRHM'}], 'invalid_tool_calls': []}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'action': {'messages': [{'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'Los Angeles\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 34.05, \\'lon\\': -118.24, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1716062239, \\'localtime\\': \\'2024-05-18 12:57\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716061500, \\'last_updated\\': \\'2024-05-18 12:45\\', \\'temp_c\\': 18.9, \\'temp_f\\': 66.0, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Overcast\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/122.png\\', \\'code\\': 1009}, \\'wind_mph\\': 2.2, \\'wind_kph\\': 3.6, \\'wind_degree\\': 10, \\'wind_dir\\': \\'N\\', \\'pressure_mb\\': 1017.0, \\'pressure_in\\': 30.02, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 65, \\'cloud\\': 100, \\'feelslike_c\\': 18.9, \\'feelslike_f\\': 66.0, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 6.0, \\'gust_mph\\': 7.5, \\'gust_kph\\': 12.0}}\"}]', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'tool', 'name': 'tavily_search_results_json', 'id': 'a36e8cd1-0e96-4417-9c15-f10a945d2b42', 'tool_call_id': 'toolu_0148tMmDK51iLQfG1yaNwRHM'}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: data...\n", - "{'agent': {'messages': [{'content': 'The weather in Los Angeles is currently overcast with a temperature of around 66°F (18.9°C). There are light winds from the north at around 2-3 mph. 
The humidity is 65% and visibility is good at 9 miles. Overall, mild spring weather conditions in LA.', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-d5c1c2f0-b12d-41ce-990b-f36570e7483d', 'example': False, 'tool_calls': [], 'invalid_tool_calls': []}]}}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: end...\n", - "None\n", - "\n", - "\n", - "\n" - ] - } - ], - "source": [ - "input = {\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in la\"}]}\n", - "async for chunk in client.runs.stream(thread['thread_id'], assistant_id, input=input, stream_mode=\"updates\", ):\n", - " print(f\"Receiving new event of type: {chunk.event}...\")\n", - " print(chunk.data)\n", - " print(\"\\n\\n\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "53800469-354a-4739-8e77-b88044c772d5", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/examples/python/notebooks/stream_values.ipynb b/examples/python/notebooks/stream_values.ipynb deleted file mode 100644 index 1286fae6..00000000 --- a/examples/python/notebooks/stream_values.ipynb +++ /dev/null @@ -1,212 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "51466c8d-8ce4-4b3d-be4e-18fdbeda5f53", - "metadata": {}, - "source": [ - "# How to stream values from your graph\n", - "\n", - "There are multiple different streaming modes.\n", - "\n", - "- `values`: This streaming mode streams back values of the graph. 
This is the **full state of the graph** after each node is called.\n", - "- `updates`: This streaming mode streams back updates to the graph. This is the **update to the state of the graph** after each node is called.\n", - "- `messages`: This streaming mode streams back messages - both complete messages (at the end of a node) as well as **tokens** for any messages generated inside a node. This mode is primarily meant for powering chat applications.\n", - "\n", - "\n", - "This notebook covers `stream_mode=\"values\"`." - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "521d975b-e94b-4c37-bfa1-82d969e2a4dc", - "metadata": {}, - "outputs": [], - "source": [ - "from langgraph_sdk import get_client" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "27a1392b-86c3-464e-99a8-90ffc965f3ec", - "metadata": {}, - "outputs": [], - "source": [ - "client = get_client()" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "230c0464-a6e5-420f-9e38-ca514e5634ce", - "metadata": {}, - "outputs": [], - "source": [ - "assistant_id = \"agent\"" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "7da70e20-1a4e-4df2-b996-1927f474c835", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Receiving new event of type: metadata...\n", - "{'run_id': 'f08791ce-0a3d-44e0-836c-ff62cd2e2786'}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: values...\n", - "{'messages': [{'role': 'human', 'content': 'whats the weather in la'}]}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: values...\n", - "{'messages': [{'content': 'whats the weather in la', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': 'faa15565-8823-4aa1-87af-e21b40526fae', 'example': False}, {'content': [{'id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g', 'input': {'query': 'weather in los angeles'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], 
'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-3fe1db7a-6b8d-4d83-ba07-8657190ad811', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in los angeles'}, 'id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g'}], 'invalid_tool_calls': []}]}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: values...\n", - "{'messages': [{'content': 'whats the weather in la', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': 'faa15565-8823-4aa1-87af-e21b40526fae', 'example': False}, {'content': [{'id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g', 'input': {'query': 'weather in los angeles'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-3fe1db7a-6b8d-4d83-ba07-8657190ad811', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in los angeles'}, 'id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g'}], 'invalid_tool_calls': []}, {'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'Los Angeles\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 34.05, \\'lon\\': -118.24, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1716310320, \\'localtime\\': \\'2024-05-21 9:52\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716309900, \\'last_updated\\': \\'2024-05-21 09:45\\', \\'temp_c\\': 16.7, \\'temp_f\\': 62.1, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Overcast\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/122.png\\', \\'code\\': 1009}, \\'wind_mph\\': 8.1, \\'wind_kph\\': 13.0, \\'wind_degree\\': 250, \\'wind_dir\\': \\'WSW\\', \\'pressure_mb\\': 1015.0, \\'pressure_in\\': 29.97, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 65, \\'cloud\\': 100, \\'feelslike_c\\': 16.7, \\'feelslike_f\\': 62.1, \\'vis_km\\': 
16.0, \\'vis_miles\\': 9.0, \\'uv\\': 5.0, \\'gust_mph\\': 12.5, \\'gust_kph\\': 20.2}}\"}]', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'tool', 'name': 'tavily_search_results_json', 'id': '0d5dab31-5ff8-4ae2-a560-bc4bcba7c9d7', 'tool_call_id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g'}]}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: values...\n", - "{'messages': [{'content': 'whats the weather in la', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': 'faa15565-8823-4aa1-87af-e21b40526fae', 'example': False}, {'content': [{'id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g', 'input': {'query': 'weather in los angeles'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-3fe1db7a-6b8d-4d83-ba07-8657190ad811', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in los angeles'}, 'id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g'}], 'invalid_tool_calls': []}, {'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'Los Angeles\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 34.05, \\'lon\\': -118.24, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1716310320, \\'localtime\\': \\'2024-05-21 9:52\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716309900, \\'last_updated\\': \\'2024-05-21 09:45\\', \\'temp_c\\': 16.7, \\'temp_f\\': 62.1, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Overcast\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/122.png\\', \\'code\\': 1009}, \\'wind_mph\\': 8.1, \\'wind_kph\\': 13.0, \\'wind_degree\\': 250, \\'wind_dir\\': \\'WSW\\', \\'pressure_mb\\': 1015.0, \\'pressure_in\\': 29.97, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 65, \\'cloud\\': 100, \\'feelslike_c\\': 16.7, \\'feelslike_f\\': 62.1, \\'vis_km\\': 16.0, 
\\'vis_miles\\': 9.0, \\'uv\\': 5.0, \\'gust_mph\\': 12.5, \\'gust_kph\\': 20.2}}\"}]', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'tool', 'name': 'tavily_search_results_json', 'id': '0d5dab31-5ff8-4ae2-a560-bc4bcba7c9d7', 'tool_call_id': 'toolu_01E5mSaZWm5rWJnCqmt63v4g'}, {'content': 'Based on the weather API results, the current weather in Los Angeles is overcast with a temperature of around 62°F (17°C). There are light winds from the west-southwest around 8-13 mph. The humidity is 65% and visibility is good at 9 miles. Overall, mild spring weather conditions in LA.', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-4d6d4c23-5aad-4042-b0d9-19407a9e08e3', 'example': False, 'tool_calls': [], 'invalid_tool_calls': []}]}\n", - "\n", - "\n", - "\n", - "Receiving new event of type: end...\n", - "None\n", - "\n", - "\n", - "\n" - ] - } - ], - "source": [ - "input = {\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in la\"}]}\n", - "thread = await client.threads.create()\n", - "async for chunk in client.runs.stream(thread['thread_id'], assistant_id, input=input):\n", - " print(f\"Receiving new event of type: {chunk.event}...\")\n", - " print(chunk.data)\n", - " print(\"\\n\\n\")" - ] - }, - { - "cell_type": "markdown", - "id": "43e4432d-e96c-4ae4-8085-866fb57bbcb3", - "metadata": {}, - "source": [ - "If we want to just get the final result, we can use this endpoint and just keep track of the last value we received" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "d2560481-d161-4d4f-b385-4977696c4aa1", - "metadata": {}, - "outputs": [], - "source": [ - "input = {\"messages\": [{\"role\": \"human\", \"content\": \"whats the weather in la\"}]}\n", - "thread = await client.threads.create()\n", - "final_answer = None\n", - "async for chunk in client.runs.stream(thread['thread_id'], assistant_id, input=input):\n", - " if chunk.event == \"values\":\n", - " final_answer = chunk.data" 
- ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "9c2d60ea-450f-45cd-b867-0cbb162528f6", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "{'messages': [{'content': 'whats the weather in la',\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'human',\n", - " 'name': None,\n", - " 'id': 'e78c2f94-d810-42fc-a399-11f6bb1b1092',\n", - " 'example': False},\n", - " {'content': [{'id': 'toolu_01SBMoAGr4U9x3ibztm2UUom',\n", - " 'input': {'query': 'weather in los angeles'},\n", - " 'name': 'tavily_search_results_json',\n", - " 'type': 'tool_use'}],\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'ai',\n", - " 'name': None,\n", - " 'id': 'run-80767ab8-09fc-40ec-9e45-657ddef5e0b1',\n", - " 'example': False,\n", - " 'tool_calls': [{'name': 'tavily_search_results_json',\n", - " 'args': {'query': 'weather in los angeles'},\n", - " 'id': 'toolu_01SBMoAGr4U9x3ibztm2UUom'}],\n", - " 'invalid_tool_calls': []},\n", - " {'content': '[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'Los Angeles\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 34.05, \\'lon\\': -118.24, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1716310320, \\'localtime\\': \\'2024-05-21 9:52\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716309900, \\'last_updated\\': \\'2024-05-21 09:45\\', \\'temp_c\\': 16.7, \\'temp_f\\': 62.1, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Overcast\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/122.png\\', \\'code\\': 1009}, \\'wind_mph\\': 8.1, \\'wind_kph\\': 13.0, \\'wind_degree\\': 250, \\'wind_dir\\': \\'WSW\\', \\'pressure_mb\\': 1015.0, \\'pressure_in\\': 29.97, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 65, \\'cloud\\': 100, \\'feelslike_c\\': 16.7, \\'feelslike_f\\': 62.1, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 5.0, 
\\'gust_mph\\': 12.5, \\'gust_kph\\': 20.2}}\"}]',\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'tool',\n", - " 'name': 'tavily_search_results_json',\n", - " 'id': 'af25e94a-c119-48c3-bbd3-096e42f472ac',\n", - " 'tool_call_id': 'toolu_01SBMoAGr4U9x3ibztm2UUom'},\n", - " {'content': 'Based on the weather API results, the current weather in Los Angeles is overcast with a temperature of around 62°F (17°C). There are light winds from the west-southwest around 8-13 mph. The humidity is 65% and visibility is good at 9 miles. Overall, mild spring weather conditions in LA.',\n", - " 'additional_kwargs': {},\n", - " 'response_metadata': {},\n", - " 'type': 'ai',\n", - " 'name': None,\n", - " 'id': 'run-b90f0037-e56a-4f3b-ad92-00d10d079a9e',\n", - " 'example': False,\n", - " 'tool_calls': [],\n", - " 'invalid_tool_calls': []}]}" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "final_answer" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "39cedbff-0a7f-4a3e-bfc1-595797358769", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/static/agent_ui.png b/static/agent_ui.png index 0c90e3a5..372ab22b 100644 Binary files a/static/agent_ui.png and b/static/agent_ui.png differ diff --git a/static/assistants.png b/static/assistants.png deleted file mode 100644 index 39fedfe6..00000000 Binary files a/static/assistants.png and /dev/null differ
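
The notebooks deleted above demonstrate the `values`, `updates`, and `messages` stream modes. The relationship between the first two can be sketched offline: each `updates` event carries one node's state delta, and folding those deltas into an accumulator reproduces the snapshots that `values` mode emits. The helper and event shapes below are illustrative assumptions only (no server involved; `fold_update` is not part of `langgraph_sdk`), modeled on the payloads printed in the notebook output above:

```python
# Offline sketch: fold an "updates" stream into the equivalent "values" stream.
# The event payloads mimic the shapes printed in the deleted notebooks;
# fold_update is a hypothetical helper, not a langgraph_sdk API.

def fold_update(state: dict, update: dict) -> dict:
    """Merge one node's update ({node_name: {"messages": [...]}}) into the state."""
    new_state = {key: list(value) for key, value in state.items()}
    for node_output in update.values():
        for key, value in node_output.items():
            # "messages" channels append; start the channel if it is missing.
            new_state[key] = new_state.get(key, []) + list(value)
    return new_state

# Simulated "updates" events, one per node run (agent -> action -> agent).
events = [
    {"agent": {"messages": [{"type": "ai", "tool_calls": [{"name": "tavily_search_results_json"}]}]}},
    {"action": {"messages": [{"type": "tool", "name": "tavily_search_results_json"}]}},
    {"agent": {"messages": [{"type": "ai", "content": "final answer", "tool_calls": []}]}},
]

state = {"messages": []}
for event in events:
    # Each intermediate `state` is what stream_mode="values" would emit here.
    state = fold_update(state, event)

print(len(state["messages"]))         # 3
print(state["messages"][-1]["content"])
```

This also shows why `values` payloads grow with every event while `updates` payloads stay small: the full message list is re-sent each time in `values` mode, whereas `updates` sends only the new messages from the node that just ran.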