What's the difference between `context` and `contexts` in the response output? #755
Comments
**dosu (bot):** To get the verbatim context used by the LLM to generate the
**@n400peanuts:** But @dosu: how can I retrieve the literal text from the papers that was used to generate the context?
**dosu (bot):** To retrieve the literal text from the papers that was used to generate the
Hey @n400peanuts, yes, you're right. The actual call stack is something similar to: Therefore, you can use
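The truncated replies above concern pulling the raw paper excerpts out of the response. A minimal sketch of how that access pattern might look, assuming each entry of `response.contexts` exposes a summary string (`.context`) and the underlying raw chunk (`.text.text`) — attribute names are assumptions drawn from paper-qa's response model and should be verified against your installed version:

```python
from types import SimpleNamespace

def verbatim_chunks(response):
    # Attribute names (.contexts, .text.text) are assumptions about the
    # paper-qa response object; check them against your installed version.
    return [ctx.text.text for ctx in response.contexts]

# Stub object mimicking the assumed shape, for illustration only:
stub = SimpleNamespace(
    contexts=[
        SimpleNamespace(
            context="LLM summary of the chunk",
            text=SimpleNamespace(text="verbatim excerpt from the paper"),
        )
    ]
)
print(verbatim_chunks(stub))  # -> ['verbatim excerpt from the paper']
```

In other words, `ctx.context` would give you the LLM-written summary, while `ctx.text.text` would give you the literal excerpt from the paper that the summary was built from.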
The output of `docs.query()` gives me a Python object that contains a `formatted_answer`, which is the answer generated by the LLM given the input docs. I wish to know which context the answer was generated from. I can see that the output contains `context` and `contexts`. What do these variables contain, and how do I get the verbatim context used by the LLM to generate the `formatted_answer` from `docs.query()`?
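For readers with the same question, here is an illustrative mock of the shape the response object appears to have: `contexts` as a list of per-chunk objects (each holding an LLM summary plus the raw chunk), and `context` as the single concatenated string fed into the answer prompt. All field names below (`formatted_answer`, `context`, `contexts`, `Context.context`, `Context.text.text`) are assumptions about paper-qa's model and may differ between library versions:

```python
from dataclasses import dataclass

# Illustrative mock of the assumed shape of paper-qa's query response.
# Every field name here is an assumption, not the library's guaranteed API.

@dataclass
class Text:
    text: str   # raw chunk lifted verbatim from the source paper
    name: str   # citation key identifying the chunk

@dataclass
class Context:
    context: str  # LLM-written summary of one retrieved chunk
    text: Text    # the underlying raw chunk
    score: int    # relevance score assigned during evidence gathering

@dataclass
class Answer:
    formatted_answer: str  # final answer string
    contexts: list         # one Context entry per retrieved chunk
    context: str           # concatenated summaries fed into the answer prompt

summaries = [
    Context("Summary of chunk A", Text("Raw excerpt A", "smith2020 pages 3-4"), 8),
    Context("Summary of chunk B", Text("Raw excerpt B", "jones2021 pages 7-8"), 6),
]
ans = Answer(
    formatted_answer="(answer text)",
    contexts=summaries,
    context="\n\n".join(c.context for c in summaries),
)
print(ans.context)                # single prompt-ready string of summaries
print(ans.contexts[0].text.text)  # verbatim excerpt behind the first summary
```

Under this reading, `context` is what the LLM actually saw when writing `formatted_answer`, and `contexts` is the structured record that lets you trace each summary back to its verbatim source text.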