Callbacks not working as expected #924
ChristianEvc asked this question in Q&A
Hi everyone,
I'm building a chatbot with a Python backend, a React Native front end, and a websockets pipe connecting the two.
Everything has been working like a charm, but I've just added a feature to provide a source from a RAG pipeline, and I cannot get a callback to work for the source-specific chain in LangGraph.
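For context, the websockets pipe is just an async endpoint along these lines (a sketch only; FastAPI is used here purely for illustration):

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/chat")
async def chat_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        question = await websocket.receive_text()
        # The LangGraph pipeline described below runs here, with the
        # callback handlers constructed around this websocket.
        ...
```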
At a high level, I'm using LangGraph. If the decision is made to do RAG, the retrieved documents are stored in a state variable, and then two other nodes are triggered (a simplified wiring sketch follows the list):
- "Generate" - this one simply summarises the documents into a response.
- "Source" - this one reads the retrieved docs, but summarises the metadata and the chapter / subchapter info into a JSON object.
The "Generate" one works perfectly and streams each token to the front end with no issues.
The "Source" one, I don't want to stream token by token as this is a 3 line json object, so its quick to produce. However, there doesn't seemt to be any event I can subscribe to create an appropriate callback.
Below are the node definitions, and the callbacks applied to the LLMs for each:
GENERATE (works fine):
```python
def generate(state, streaming_llm):
    """
    Generate answer

    Args:
        state (dict): The current graph state
    """
    ...
```
The "streaming_llm" in this case is set up with the following callback handler (using OPEN AIs ChatOpenAI() ):
```python
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming LLM responses."""
```
"SOURCE" (Doesn't work)
```python
def sourceHandling(state, source_llm):
    """
    Based on the answer generated, determine the relevant source
    from the retrieved documents.
    """
    ...
```
The "source_llm" in this case is set up with the following callback handler (using OPEN AIs ChatOpenAI() ):
```python
class SourceCallbackHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    def __init__(self, websocket):
        self.websocket = websocket
```
Whilst the "on_llm_end" event is triggering, its not allowing me to do anything useful with the LLMResult response. And no other event seems to even be triggered (for example, the "on_tool_end" event).
Whilst this may seem like a convoluted way to stream the answers back, it's been the lowest-latency approach I've found, and also the only way that allows me to stream directly to the websocket.
However, I'm sure there is a better way, so any help or insights would be hugely appreciated!!