langgraph/tutorials/introduction/ #496
Replies: 36 comments 39 replies
-
Great! Rudi Ranck, PhD
-
Has anyone run into glitches when appending a SystemMessage to the state? The message is appended correctly (verified by printing graph.get_state() at the end), but the agent doesn't seem to be guided by any of the information in the SystemMessage.
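One thing worth trying: instead of storing instructions in state (where they may end up after the conversation history), prepend them inside the chatbot node so the model always sees them first. A minimal sketch, assuming the tutorial's `State` and `llm`; the prompt text is just a placeholder:

```python
from langchain_core.messages import SystemMessage

# Hypothetical system instructions; adjust to your own use case.
SYSTEM_PROMPT = SystemMessage(content="You are a helpful assistant. Keep answers concise.")

def chatbot(state: State):
    # Prepend the instructions on every call instead of appending them to state,
    # so they always reach the model first, where providers weight them most.
    return {"messages": [llm.invoke([SYSTEM_PROMPT] + state["messages"])]}
```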
-
Hi, I am using AWS Bedrock via Boto3 to replicate this workflow. I have been successful up until Part 4: Human-in-the-loop. When I attempt to run the code with the 'interrupt_before' argument passed to graph.compile(), the LLM still executes the tool instead of being interrupted, and there are no error messages. For replication purposes, I am using Claude 3 Haiku and initializing the model with ChatBedrock. My environment is as follows:
Any help or guidance you can provide would be much appreciated. Happy to provide more information. Thanks!
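One thing worth double-checking: interrupts only have somewhere to persist the paused run if the graph is compiled with a checkpointer and invoked with a thread_id; without those, execution tends to run straight through. A minimal sketch of the expected setup, assuming the tutorial's `graph_builder` and a node named "tools":

```python
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,            # interrupts need a checkpointer to persist the paused run
    interrupt_before=["tools"],     # pause before the tool node executes
)

config = {"configurable": {"thread_id": "1"}}
for event in graph.stream({"messages": [("user", "Search the web for LangGraph")]}, config):
    print(event)

# If the interrupt took effect, the next pending node is reported here.
print(graph.get_state(config).next)   # expected: ('tools',)
```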
-
Hi, a question: can we update the state from a tool as well? I am running into an issue. The state I have defined is as follows [state definition omitted]. This is the tool I have defined for updating the cart, decorated with @tool [tool code omitted]. The run completes successfully, but when I call get_state() on the graph, the result does not include the "cart" key. I'm only including part of the message here.
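For what it's worth: a plain @tool's return value only becomes the ToolMessage content; it doesn't write to graph state by itself. One way to get a "cart" key persisted is to have the node that executes the tool return the extra state update. A minimal sketch, assuming the tutorial's `State` has `messages` and `cart` channels and a hypothetical `update_cart` tool that returns the new cart:

```python
import json
from langchain_core.messages import ToolMessage

def cart_tool_node(state: State):
    """Execute the cart tool requested by the last AI message and also write
    the result into the 'cart' channel of the graph state."""
    last_message = state["messages"][-1]
    outputs = []
    new_cart = state.get("cart", [])
    for tool_call in last_message.tool_calls:
        result = update_cart.invoke(tool_call["args"])  # hypothetical cart tool
        new_cart = result                               # assume the tool returns the updated cart
        outputs.append(
            ToolMessage(
                content=json.dumps(result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    # Returning "cart" from the node is what actually persists it into state.
    return {"messages": outputs, "cart": new_cart}
```

(Recent LangGraph releases also appear to support state updates directly from a tool via a Command return value, but the node-level approach above does not depend on that.)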
-
This is the longest Hello World program I've seen so far...
-
"Table of contents" is not complete: there are 7 parts in this page, but there are only 3 parts in the "Table of contents". |
-
Hello, I've noticed that starting from the third part, the ...
-
I'd like to ensure that the number of tokens received by the LLM stays at or below, for example, 10K. For this, I want the MemorySaver to forget old history. Alternatively, if the number of tokens exceeds the limit when the conversation history is passed from the checkpointer to the LLM, I want to delete the old history and then pass on the rest. How can I do this?
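One common approach is to leave the checkpointer alone and trim the history inside the chatbot node right before calling the model, so the full history stays saved but the LLM only sees the most recent messages under the budget. A minimal sketch using langchain_core's `trim_messages`, assuming the tutorial's `State` and `llm`:

```python
from langchain_core.messages import trim_messages

def chatbot(state: State):
    # Keep only the most recent messages that fit inside the token budget.
    # token_counter=llm uses the model's own token counting where available.
    trimmed = trim_messages(
        state["messages"],
        max_tokens=10_000,
        strategy="last",
        token_counter=llm,
        include_system=True,
        start_on="human",
    )
    return {"messages": [llm.invoke(trimmed)]}
```

If you want to actually delete old history from the checkpointed state rather than just hiding it from the model, a node can instead return RemoveMessage entries (from langchain_core.messages) for the message IDs to drop.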
-
When using another search tool such as "GoogleSerperResults" or "GoogleSearchResults", I always receive the error: TypeError: Got an unexpected keyword argument '__arg1'. I can't figure out why.
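As far as I can tell, that error tends to appear when a tool exposes a single unnamed string argument, which gets passed as `__arg1`. One workaround (a sketch, not a confirmed fix) is to wrap the underlying search API in a @tool with an explicit, named parameter; this assumes a SERPER_API_KEY is configured and `llm` is defined as in the tutorial:

```python
from langchain_community.utilities import GoogleSerperAPIWrapper
from langchain_core.tools import tool

search_api = GoogleSerperAPIWrapper()  # reads SERPER_API_KEY from the environment

@tool
def web_search(query: str) -> str:
    """Search the web with Google Serper and return the top results."""
    return search_api.run(query)

tools = [web_search]
llm_with_tools = llm.bind_tools(tools)
```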
-
Hi! P.S.: It is true, as other comments say, that it is maybe a bit long for a quickstart; however, it is didactic and easy to follow.
-
Hi everyone, I want to create a LangGraph application without using tools, just an LLM and stages: in stage 1 a function interacts with the user like a medical bot, in stage 2 the LLM asks the user about their problems, in stage 3 the LLM asks for the patient's age and gender, and in stage 4 it predicts a diagnosis. Could someone give me an idea or code for how to do that?
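A tool-free graph can just chain LLM nodes, one per stage. A minimal sketch of the shape (the stage prompts are hypothetical placeholders, and `llm` is assumed to be defined as in the tutorial):

```python
from typing import Annotated
from typing_extensions import TypedDict

from langchain_core.messages import SystemMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

def make_stage(prompt: str):
    """Build a node that calls the LLM with a stage-specific system prompt."""
    def stage(state: State):
        return {"messages": [llm.invoke([SystemMessage(content=prompt)] + state["messages"])]}
    return stage

builder = StateGraph(State)
builder.add_node("greet", make_stage("Greet the patient and introduce yourself as a medical assistant."))
builder.add_node("symptoms", make_stage("Ask the patient to describe their problems and symptoms."))
builder.add_node("demographics", make_stage("Ask for the patient's age and gender."))
builder.add_node("diagnose", make_stage("Suggest a possible diagnosis and recommend seeing a doctor."))

builder.add_edge(START, "greet")
builder.add_edge("greet", "symptoms")
builder.add_edge("symptoms", "demographics")
builder.add_edge("demographics", "diagnose")
builder.add_edge("diagnose", END)
graph = builder.compile()
```

In practice you would also compile with a checkpointer and interrupt between stages so the graph waits for the patient's reply before moving on.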
-
Great job, LangGraph team!!
-
When I use llama3, the tutorial code gets stuck on tool calling. But if, in def getMistralLLMForToolCall(), I create an instance of ChatMistralAI and use llm = getMistralLLMForToolCall() as the llm in the tutorial code, it works without getting stuck on tool calling. Regards.
-
The following implementation in the "Part 1: Build a Basic Chatbot" section is throwing an error; it probably should not be used in stream mode, since stream mode returns partial results where the assistant response is still empty. With ...
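If the error comes from printing the last message's content inside the stream loop, a small guard around empty or partial chunks avoids it. A sketch of the Part 1 loop with that guard, assuming the compiled `graph` from the tutorial:

```python
user_input = "Hello!"
for event in graph.stream({"messages": [("user", user_input)]}):
    for value in event.values():
        messages = value.get("messages") or []
        # Skip partial events where the assistant reply is not populated yet.
        if messages and getattr(messages[-1], "content", None):
            print("Assistant:", messages[-1].content)
```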
-
The Part 3 chatbot-with-memory example doesn't work with the ChatGroq client; the Groq API complains about types, probably because the tool content is an array.
-
Hi, great introduction so far. In Part 5 it says: [quoted snippet omitted]. However, I got the error "ToolMessage not defined" (which does indeed seem to be the case). What should be used instead?
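ToolMessage itself still exists; the snippet just needs it imported before it runs:

```python
from langchain_core.messages import AIMessage, ToolMessage
```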
-
The code below doesn't seem to call the tool; the call doesn't seem to go through. Is anyone else facing this issue? It doesn't seem to get tool.name. I am using OpenAI.
class BasicToolNode: [class body omitted]
tool_node = BasicToolNode(tools=[tool])
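For comparison, here is a sketch of BasicToolNode in roughly the shape the tutorial uses. If `tool_calls` is always empty, it's also worth double-checking that the chatbot node calls `llm.bind_tools(tools)` rather than the bare `llm`:

```python
import json
from langchain_core.messages import ToolMessage

class BasicToolNode:
    """Run the tools requested in the last AIMessage's tool_calls."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(tool_call["args"])
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}

tool_node = BasicToolNode(tools=[tool])
```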
-
Hello, I wanted to tell you that I am very disappointed, because I spent maybe two hours trying to run just the beginning of the tutorial, but each time my API key was rejected. Eventually I found out that I wasn't using the right key: I was trying with an OpenAI key, while I had to use an Anthropic API key. When I ran my code again I was told: 'Your credit balance is too low to access the Anthropic API. Please go to Plans & Billing to upgrade or purchase credits.' Seriously? First of all, you could have pointed out clearly that we must create an Anthropic API key. Secondly, I'm unhappy because I already paid for OpenAI; now I have to pay for another API to follow the course? At least OpenAI lets you try its API for free a few times. I don't understand, because I have already used LangChain with GPT-4 and it worked. I checked several times whether this information was written at the start of the tutorial, but I didn't see it; if it was there, please forgive me. I'm frustrated because I lost a lot of time, and I have a job interview tomorrow.
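For what it's worth, the tutorial's model can be swapped for an OpenAI one if that is the key you already have, and the rest of the graph stays the same. A sketch, assuming OPENAI_API_KEY is set and `langchain-openai` is installed (the model name is just an example):

```python
from langchain_openai import ChatOpenAI

# Drop-in replacement for the ChatAnthropic initialization in the tutorial.
llm = ChatOpenAI(model="gpt-4o-mini")
```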
-
LangGraph 🤩 is a groundbreaking tool that beautifully bridges the gap between language and data 😎, making complex analysis remarkably intuitive 👍 😻.
-
Getting this issue while running Part 4: Human-in-the-loop. Does anybody know the solution? PS C:\AI_study\LangChain\CatchupAI> & c:/AI_study/LangChain/CatchupAI/.venv/Scripts/python.exe c:/AI_study/LangChain/CatchupAI/LangGraph_Tutorial/LG_QuickStart_04.py
-
I copied and pasted the code to try Part 6, but it doesn't seem to work as expected: it never interrupts before "human". What could be the reason?
-
I've been trying to start with this quickstart, but no matter what I do it doesn't let me get the latest version. Every time I run pip install -U langgraph it gives me version 0.0.8. I already updated my Python version and everything, but I just get messages like this in my terminal: WARNING: The candidate selected for download or install is a yanked version: 'langgraph' candidate (version 0.0.8 at https://files.pythonhosted.org/packages/c5/96/bbb9a8ba05fac954037e2761ebdea4be89eb30da10455ff1c10222b1abe9/langgraph-0.0.8-py3-none-any.whl#sha256=916cb24b211e2b4fc31d7cafcad2f97f49dfab41e833c869e9ea62481eaa310e (from https://pypi.org/simple/langgraph/) (requires-python:>=3.8.1,<4.0)) ERROR: Could not find a version that satisfies the requirement langgraph==0.2.39 (from versions: 0.0.8). What could I have been doing wrong?
-
Great tutorial! I have learned a lot about the major features of LangGraph from this notebook. It is also exciting to build an autonomous agent using LangGraph, and I am looking forward to trying it out and seeing how it performs. However, when I try to run the code, I encounter an annoying issue when getting the current state under streaming mode. When I try to get the state snapshot, I get: StateSnapshot(values={}, next=(), config={'configurable': {'thread_id': '0'}}, metadata=None, created_at=None, parent_config=None, tasks=()). I have made some attempts to see if I can obtain the state snapshot under streaming mode, and found it can be obtained once I iterate over the events of the streaming results first. However, I am not sure if this is the correct way to do it. Does anyone else encounter this issue? Is there a better way to get the state snapshot under streaming mode? Here is my version info:
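Consuming the stream before asking for the snapshot is expected behavior: graph.stream() returns a lazy generator, so nothing runs (and nothing is checkpointed) until it is iterated. A sketch of the order that should work, assuming a graph compiled with a checkpointer as in the tutorial:

```python
config = {"configurable": {"thread_id": "0"}}

# Iterate the generator first so the graph actually executes and checkpoints.
for event in graph.stream({"messages": [("user", "Hi there!")]}, config):
    for value in event.values():
        print(value["messages"][-1].content)

# Only after the stream has been consumed does the snapshot contain values.
snapshot = graph.get_state(config)
print(snapshot.values["messages"][-1].content)
```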
-
Hi, I have been trying your tutorial with my own use case.
When I get to Part 4, I run one question as in the tutorial to inspect the ... Unfortunately, I can't share a lot of code as it is work related.
-
I need help understanding Part 4: Human-in-the-loop. The flow here is that the user asks a question and the LLM invokes a tool call to provide an answer. From the example, I see no value provided by the new ... The following explanation makes things even more confusing: isn't the previous example (Part 2: Enhancing the Chatbot with Tools) also using tools? According to this LangSmith log (https://smith.langchain.com/public/4fbd7636-25af-4638-9587-5a02fdbb0172/r/2b55e643-f5c8-4ca8-a401-8b5a9f282a43),
where is the interruption, please?
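The interruption in Part 4 isn't a new tool; it is the graph pausing before the "tools" node because of interrupt_before=["tools"]. After the first stream, the snapshot's next field shows the pending node, and streaming None resumes it. A sketch, assuming the tutorial's compiled graph:

```python
config = {"configurable": {"thread_id": "1"}}

# 1) Ask something that triggers a tool call; the graph stops before running it.
for event in graph.stream({"messages": [("user", "Look up LangGraph for me")]}, config):
    pass

snapshot = graph.get_state(config)
print(snapshot.next)   # expected: ('tools',)  <- this pause is the interruption

# 2) A human reviews (or edits) the pending tool call, then resumes with None.
for event in graph.stream(None, config):
    pass
```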
-
Taking the exact code from the example in Part 3: Adding Memory to the Chatbot, my pretty-print output looks like this: [output omitted]
Instead of the expected: [output omitted]
It seems I cannot convince my LLM NOT to call the tool on user inputs, even when trying: [omitted]. So I am wondering, was any pre-prompting used in this example? Was Part 3: Adding Memory to the Chatbot meant to be done using: [omitted]?
-
Why use ...?
-
In Part 1, I get an error:
-
As a beginner starting with LangGraph, I received an error while importing START and END from "langgraph.graph": from langgraph.graph import StateGraph, START, END. On checking, I observed that the START and END constants are in langgraph.graph.graph and langgraph.graph.state. Can this be corrected in the tutorial examples?
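For reference, in current langgraph releases START and END are re-exported from langgraph.graph, so both imports below should work; if the first fails, it usually means an older langgraph version is installed:

```python
# Works on current langgraph releases, where both constants are re-exported:
from langgraph.graph import StateGraph, START, END

# Equivalent fallback that imports them from their defining module:
from langgraph.constants import START, END
```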
-
Did anyone try to use an Azure API key? I am able to run the code using an OpenAI API key, but not with an Azure key.
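An Azure key needs the Azure-specific chat class and endpoint/deployment settings rather than ChatOpenAI. A sketch, assuming `langchain-openai` is installed; the endpoint, key, API version, and deployment name are placeholders for your own Azure OpenAI resource:

```python
import os
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_deployment="gpt-4o",  # the deployment name in your Azure resource, not the model id
)
```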
-
langgraph/tutorials/introduction/
Build language agents as graphs
https://langchain-ai.github.io/langgraph/tutorials/introduction/