feat(langgraph): Adds custom streaming mode (#653)
jacoblee93 authored Nov 2, 2024
1 parent 1a7b082 commit 3e3fa13
Showing 11 changed files with 735 additions and 270 deletions.
29 changes: 16 additions & 13 deletions docs/docs/concepts/streaming.md
@@ -9,18 +9,19 @@ There are several different modes you can specify when calling these methods (e.

- [`"values"`](/langgraphjs/how-tos/stream-values): This streams the full value of the state after each step of the graph.
- [`"updates"`](/langgraphjs/how-tos/stream-updates): This streams the updates to the state after each step of the graph. If multiple updates are made in the same step (e.g. multiple nodes are run) then those updates are streamed separately.
- [`"custom"`](/langgraphjs/how-tos/streaming-content.ipynb): This streams custom data from inside your graph nodes.
- [`"messages"`](/langgraphjs/how-tos/streaming-tokens.ipynb): This streams LLM tokens and metadata for the graph node where the LLM is invoked.
- `"debug"`: This streams as much information as possible throughout the execution of the graph.

The visualization below shows the difference between the `values` and `updates` modes:

![values vs updates](./img/streaming/values_vs_updates.png)
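The distinction can also be sketched with plain objects (a hypothetical simulation of the two modes, not the LangGraph API itself):

```typescript
// Simulated state for a two-node graph; the shape is illustrative only.
type State = { messages: string[]; count: number };

// The partial updates each step would emit in "updates" mode.
const updates: Partial<State>[] = [
  { messages: ["hi"] }, // update from the first node
  { count: 1 },         // update from the second node
];

// "values" mode instead streams the full accumulated state after each step.
function accumulateValues(initial: State, steps: Partial<State>[]): State[] {
  const values: State[] = [];
  let current = initial;
  for (const step of steps) {
    current = { ...current, ...step };
    values.push(current);
  }
  return values;
}

const streamed = accumulateValues({ messages: [], count: 0 }, updates);
// streamed[0] → { messages: ["hi"], count: 0 }
// streamed[1] → { messages: ["hi"], count: 1 }
```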


## Streaming LLM tokens and events (`.streamEvents`)

In addition, you can use the [`streamEvents`](/langgraphjs/how-tos/streaming-events-from-within-tools) method to stream back events that happen _inside_ nodes. This is useful for streaming tokens of LLM calls.

This is a standard method on all [LangChain objects](https://js.langchain.com/docs/concepts/#runnable-interface). This means that as the graph is executed, certain events are emitted along the way and can be seen if you run the graph using `.streamEvents`.

All events have (among other things) `event`, `name`, and `data` fields. What do these mean?
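As a rough sketch, an event can be modeled like this (field names as described above; the exact payload shape varies by event type):

```typescript
// Illustrative shape only -- real events carry additional fields (metadata, run ids, etc.).
type StreamEvent = {
  event: string;                 // e.g. "on_chain_start", "on_chat_model_stream"
  name: string;                  // node name, "LangGraph", or a model/tool name
  data: Record<string, unknown>; // payload, e.g. { chunk: ... } for token streams
};

const sampleEvent: StreamEvent = {
  event: "on_chat_model_stream",
  name: "ChatOpenAI",
  data: { chunk: "Hello" },
};
```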

@@ -30,9 +31,9 @@ All events have (among other things) `event`, `name`, and `data` fields. What do

What types of things cause events to be emitted?

- each node (runnable) emits `on_chain_start` when it starts execution, `on_chain_stream` during node execution, and `on_chain_end` when the node finishes. Node events will have the node name in the event's `name` field
- the graph will emit `on_chain_start` at the beginning of graph execution, `on_chain_stream` after each node execution, and `on_chain_end` when the graph finishes. Graph events will have `LangGraph` in the event's `name` field
- any writes to state channels (i.e. anytime you update the value of one of your state keys) will emit `on_chain_start` and `on_chain_end` events

Additionally, any events that are created inside your nodes (LLM events, tool events, manually emitted events, etc.) will also be visible in the output of `.streamEvents`.

@@ -50,18 +51,19 @@ function callModel(state: typeof MessagesAnnotation.State) {
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("callModel", callModel)
  .addEdge("__start__", "callModel")
  .addEdge("callModel", "__end__");
const app = workflow.compile();

const inputs = [{ role: "user", content: "hi!" }];

for await (const event of app.streamEvents({ messages: inputs }, { version: "v2" })) {
  const kind = event.event;
  console.log(`${kind}: ${event.name}`);
}
```

```shell
on_chain_start: LangGraph
on_chain_start: __start__
...
on_chain_end: LangGraph
```

We start with the overall graph start (`on_chain_start: LangGraph`). We then write to the `__start__` node (this is a special node to handle input).
We then start the `callModel` node (`on_chain_start: callModel`). We then start the chat model invocation (`on_chat_model_start: ChatOpenAI`),
stream back token by token (`on_chat_model_stream: ChatOpenAI`) and then finish the chat model (`on_chat_model_end: ChatOpenAI`). From there,
we write the results back to the channel (`ChannelWrite<callModel,messages>`), finish the `callModel` node, and finally finish the graph as a whole.

This should hopefully give you a good sense of what events are emitted in a simple graph. But what data do these events contain?
@@ -117,6 +119,7 @@ These events look like:
'data': {'chunk': AIMessageChunk({ content: 'Hello', id: 'run-3fdbf494-acce-402e-9b50-4eab46403859' })},
'parent_ids': []}
```

We can see that we have the event type and name (which we knew from before).

We also have a bunch of stuff in metadata. Notably, `'langgraph_node': 'callModel'` is some really helpful information
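For instance, that metadata key lets you filter the event stream down to a single node. Here is a sketch over plain objects (the helper and sample events are hypothetical; real events come from `.streamEvents`):

```typescript
// Minimal event shape for filtering; real events carry more fields.
type NodeEvent = {
  event: string;
  name: string;
  metadata: { langgraph_node?: string };
};

// Keep only events emitted from within a given graph node.
function eventsForNode(events: NodeEvent[], node: string): NodeEvent[] {
  return events.filter((e) => e.metadata.langgraph_node === node);
}

const sample: NodeEvent[] = [
  { event: "on_chain_start", name: "LangGraph", metadata: {} },
  { event: "on_chat_model_stream", name: "ChatOpenAI", metadata: { langgraph_node: "callModel" } },
];

const filtered = eventsForNode(sample, "callModel");
// filtered contains only the ChatOpenAI token event
```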
1 change: 1 addition & 0 deletions docs/docs/how-tos/index.md
@@ -60,6 +60,7 @@ These guides show how to use different streaming modes.
- [How to configure multiple streaming modes](stream-multiple.ipynb)
- [How to stream LLM tokens](stream-tokens.ipynb)
- [How to stream LLM tokens without LangChain models](streaming-tokens-without-langchain.ipynb)
- [How to stream custom data](streaming-content.ipynb)
- [How to stream events from within a tool](streaming-events-from-within-tools.ipynb)
- [How to stream from the final node](streaming-from-final-node.ipynb)

328 changes: 108 additions & 220 deletions examples/how-tos/stream-tokens.ipynb

Large diffs are not rendered by default.

