
Update blogs/graphreader /graphreader_langgraph.ipynb (possibly to NOT use ollama_functions.with_structured_output?) #41

Open
windowshopr opened this issue Sep 24, 2024 · 4 comments


@windowshopr
Trying to run the graph reader agent notebook on Windows 10 with Python 3.11 and the latest versions of Ollama and LangChain/LangGraph, I get the traceback below. (I added some extra printouts to show variable values and which functions are called when, to help with debugging.)

langgraph.invoke({"question":"Did Joan of Arc lose any battles?"})
--------------------
Step: rational_plan
Rational plan: To answer this question, we first need to find information about Joan of Arc's military campaigns and her battles against the English. We will look for details about specific battles where she was defeated or lost control, such as the Siege of Orléans, and compare them to any victories she had during that time.
message: content='' additional_kwargs={} response_metadata={} id='run-71658f84-2a46-456f-9ebe-51c4838994ca-0' tool_calls=[{'name': 'InitialNodes', 'args': {'initial_nodes': [{'key_element': 'Siege of Orléans', 'score': 80}, {'key_element': 'Joan in her only major film role', 'score': 60}, {'key_element': 'capture by Burgundians', 'score': 70}, {'key_element': 'Loire Campaign', 'score': 50}, {'key_element': 'Burgundian troops', 'score': 40}, {'key_element': 'visions from Michael, Margaret, Catherine', 'score': 30}, {'key_element': 'lead the French in battle', 'score': 90}, {'key_element': 'success in routing the English', 'score': 80}, {'key_element': "attempted to restore Dreyer's final cut", 'score': 60}, {'key_element': "Joan's trial", 'score': 70}]}, 'id': 'call_60a160c65f8b46df907ef071ee7acd6d', 'type': 'tool_call'}]
kwargs: {}
tool_calls: [{'name': 'InitialNodes', 'args': {'initial_nodes': [{'key_element': 'Siege of Orléans', 'score': 80}, {'key_element': 'Joan in her only major film role', 'score': 60}, {'key_element': 'capture by Burgundians', 'score': 70}, {'key_element': 'Loire Campaign', 'score': 50}, {'key_element': 'Burgundian troops', 'score': 40}, {'key_element': 'visions from Michael, Margaret, Catherine', 'score': 30}, {'key_element': 'lead the French in battle', 'score': 90}, {'key_element': 'success in routing the English', 'score': 80}, {'key_element': "attempted to restore Dreyer's final cut", 'score': 60}, {'key_element': "Joan's trial", 'score': 70}]}, 'id': 'call_60a160c65f8b46df907ef071ee7acd6d', 'type': 'tool_call'}]
--------------------
Step: atomic_fact_check
Reading atomic facts about: ['lead the French in battle', 'Siege of Orléans', 'success in routing the English', 'capture by Burgundians', "Joan's trial"]
message: content='' additional_kwargs={} response_metadata={} id='run-32a90a8a-772a-4336-a6b9-61a1c6357622-0' tool_calls=[{'name': 'AtomicFactOutput', 'args': {'updated_notebook': '', 'rational_next_action': "read_chunk(List['82f69cf57d252cb552a2076a7217b3a5'])", 'chosen_action': ''}, 'id': 'call_b37fa27ffbac4799980188360fe3c722', 'type': 'tool_call'}]
kwargs: {}
tool_calls: [{'name': 'AtomicFactOutput', 'args': {'updated_notebook': '', 'rational_next_action': "read_chunk(List['82f69cf57d252cb552a2076a7217b3a5'])", 'chosen_action': ''}, 'id': 'call_b37fa27ffbac4799980188360fe3c722', 'type': 'tool_call'}]
Rational for next action after atomic check: read_chunk(List['82f69cf57d252cb552a2076a7217b3a5'])

parse_function method started...

input_str: 

pattern: (\w+)(?:\((.*)\))?

match: None

No match found...

Ending parse_function...

to_return: None
Chosen action: None
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[32], line 1
----> 1 langgraph.invoke({"question":"Did Joan of Arc lose any battles?"})

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\__init__.py:1545, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
   1543 else:
   1544     chunks = []
-> 1545 for chunk in self.stream(
   1546     input,
   1547     config,
   1548     stream_mode=stream_mode,
   1549     output_keys=output_keys,
   1550     interrupt_before=interrupt_before,
   1551     interrupt_after=interrupt_after,
   1552     debug=debug,
   1553     **kwargs,
   1554 ):
   1555     if stream_mode == "values":
   1556         latest = chunk

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\__init__.py:1278, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
   1267     # Similarly to Bulk Synchronous Parallel / Pregel model
   1268     # computation proceeds in steps, while there are channel updates
   1269     # channel updates from step N are only visible in step N+1
   1270     # channels are guaranteed to be immutable for the duration of the step,
   1271     # with channel updates applied only at the transition between steps
   1272     while loop.tick(
   1273         input_keys=self.input_channels,
   1274         interrupt_before=interrupt_before_,
   1275         interrupt_after=interrupt_after_,
   1276         manager=run_manager,
   1277     ):
-> 1278         for _ in runner.tick(
   1279             loop.tasks.values(),
   1280             timeout=self.step_timeout,
   1281             retry_policy=self.retry_policy,
   1282             get_waiter=get_waiter,
   1283         ):
   1284             # emit output
   1285             yield from output()
   1286 # emit output

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\runner.py:52, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
     50 t = tasks[0]
     51 try:
---> 52     run_with_retry(t, retry_policy)
     53     self.commit(t, None)
     54 except Exception as exc:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\retry.py:29, in run_with_retry(task, retry_policy)
     27 task.writes.clear()
     28 # run the task
---> 29 task.proc.invoke(task.input, config)
     30 # if successful, end
     31 break

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\utils\runnable.py:385, in RunnableSeq.invoke(self, input, config, **kwargs)
    383 context.run(_set_config_context, config)
    384 if i == 0:
--> 385     input = context.run(step.invoke, input, config, **kwargs)
    386 else:
    387     input = context.run(step.invoke, input, config)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\utils\runnable.py:167, in RunnableCallable.invoke(self, input, config, **kwargs)
    165 else:
    166     context.run(_set_config_context, config)
--> 167     ret = context.run(self.func, input, **kwargs)
    168 if isinstance(ret, Runnable) and self.recurse:
    169     return ret.invoke(input, config)

Cell In[20], line 45, in atomic_fact_check(state)
     41 chosen_action = parse_function(atomic_facts_results.chosen_action)
     42 print(f"Chosen action: {chosen_action}")
     43 response = {
     44     "notebook": notebook,
---> 45     "chosen_action": chosen_action.get("function_name"),
     46     "check_atomic_facts_queue": [],
     47     "previous_actions": [
     48         f"atomic_fact_check({state.get('check_atomic_facts_queue')})"
     49     ],
     50 }
     51 if chosen_action.get("function_name") == "stop_and_read_neighbor":
     52     neighbors = get_neighbors_by_key_element(
     53         state.get("check_atomic_facts_queue")
     54     )

AttributeError: 'NoneType' object has no attribute 'get'
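The debug printouts above explain the crash: the model returned an empty string for chosen_action, the regex `(\w+)(?:\((.*)\))?` cannot match an empty string, so parse_function returns None and the later chosen_action.get("function_name") raises AttributeError. The sketch below is a hypothetical defensive variant of parse_function (not the notebook's actual code), assuming the same dict shape with a "function_name" key that the traceback shows the caller expecting:

```python
import re

# Same pattern the notebook logs: on an empty string, \w+ has nothing
# to match, so re.match returns None.
PATTERN = r"(\w+)(?:\((.*)\))?"

def parse_function(input_str):
    """Defensive variant (hypothetical): never returns None."""
    match = re.match(PATTERN, input_str or "")
    if match is None:
        # Fall back to a no-op result instead of None, so callers
        # using chosen_action.get(...) don't raise AttributeError.
        return {"function_name": None, "arguments": []}
    name, args = match.groups()
    arguments = [a.strip() for a in args.split(",")] if args else []
    return {"function_name": name, "arguments": arguments}
```

This only papers over the symptom, of course; the root cause is the model leaving chosen_action empty in the structured output.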

It appears that the "next action" is not being set properly. I noticed that the notebook makes use of the with_structured_output method, which has been deprecated in LangChain v0.2; however, the:

from langchain_community.chat_models import ChatOllama
# OR
from langchain_ollama import ChatOllama

imports from those latest versions no longer offer with_structured_output. So the code would need to be updated to avoid using it altogether. Just something I'm running into while trying to run the code right now.
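One way to sidestep with_structured_output entirely (a sketch, not the notebook's code) is to prompt the model to answer in JSON (e.g. Ollama's JSON mode) and validate the result yourself. The extract_structured helper below is hypothetical; the model call is stubbed with a canned response so the validation logic is self-contained:

```python
import json

def extract_structured(raw_response, required_keys):
    """Parse a model's raw text as JSON and check required keys.

    Hypothetical helper: returns the parsed dict, or None when the
    text is not valid JSON or a required key is missing.
    """
    try:
        data = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    if not all(key in data for key in required_keys):
        return None
    return data

# Canned response standing in for the actual LLM call:
raw = '{"updated_notebook": "", "rational_next_action": "read_chunk", "chosen_action": "read_chunk"}'
result = extract_structured(
    raw, ("updated_notebook", "rational_next_action", "chosen_action")
)
```

A caller could then retry the model call whenever extract_structured returns None, rather than crashing downstream on a missing field.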

@tomasonjo
Owner

tomasonjo commented Sep 24, 2024 via email

@windowshopr
Author

Hey! Sorry, yes, I am using Ollama:

model = OllamaFunctions(model="llama3.1:8b")
embeddings = OllamaEmbeddings(model="llama3.1:8b")

@csaiedu

csaiedu commented Oct 14, 2024

Hi,

Has there been any further work to get GraphReader working with Ollama local models? I'm seeing similar issues with with_structured_output.

Thank you

@tomasonjo
Owner

tomasonjo commented Oct 14, 2024 via email
