Context limitation when returning lists at scale #1449
silhouettehustler asked this question in Q&A (unanswered)
- Used gpt-4o
There are many more entities than described below; I've used this example only to describe the problem as simply as possible.
Let's say I have 50 users (this could be thousands of users, or otherwise large amounts of data), and each of them has a large number of tasks assigned to them.
A query like "Which user has the highest number of tasks assigned to them?" actually gives wrong results. If you try to retrieve the tasks for a particular user, some of them get cut off due to the context limit: it might return 45 tasks instead of 51, even though all 51 are properly connected to the user entity (I can see and verify this through the GraphRAG visualizer). I assume answering this question requires iterating through each user and their assigned tasks, and that's potentially bulking up the built context.
Could someone explain in simple terms whether I'm missing something super obvious here? Is this a GraphRAG limitation, an LLM limitation, or something else?
Do I need a workaround, for example deploying the knowledge base to a static graph database, querying it directly, and combining that with the GraphRAG response to get a complete answer?
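For what it's worth, here is a minimal sketch of that hybrid idea: keep exact aggregate queries in a deterministic store and let GraphRAG handle only the open-ended narrative part. The data and function names below are illustrative assumptions, not GraphRAG's actual schema; a real deployment might run a graph-database query (e.g. Cypher along the lines of `MATCH (u)-[:ASSIGNED]->(t) RETURN u, count(t)`) instead of the in-memory dict used here.

```python
# Illustrative stand-in for the indexed user -> task relationships.
# In practice this would live in a graph database, not a dict.
assignments = {
    "alice": ["task-1", "task-2", "task-3"],
    "bob": ["task-4", "task-5"],
}

def user_with_most_tasks(graph):
    """Exact aggregate answer, independent of any LLM context window."""
    return max(graph, key=lambda user: len(graph[user]))

print(user_with_most_tasks(assignments))  # -> alice
```

The point of the split is that counting is deterministic, so no amount of context truncation can make the aggregate wrong; the LLM only phrases the result.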
Do I need to add more metadata around the indexed data, e.g. storing the total number of tasks for every user instead of expecting GraphRAG to work it out, so that the count gets added to the summary? Even then, how would that help me list out the tasks themselves at scale?
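A sketch of what that preprocessing step could look like, assuming you control the documents before indexing. The record shape here is a guess for illustration, not a GraphRAG input format: the idea is just to attach an explicit count so the model never has to count edges itself.

```python
from collections import Counter

# Hypothetical raw (user, task) assignment pairs extracted from the source data.
assignments = [("alice", "task-1"), ("alice", "task-2"), ("bob", "task-3")]

# Precompute the per-user task count once, deterministically.
task_counts = Counter(user for user, _ in assignments)

# Emit one enriched record per user; the explicit task_count would be
# indexed alongside the user entity so summaries can quote it directly.
records = [
    {"user": user, "task_count": count}
    for user, count in task_counts.items()
]
print(records)
```

This fixes aggregate questions but, as you note, it does not by itself solve returning the full task list in one response.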
Can I paginate the list of tasks and return it in chunks? Is that possible without some sort of extra grouping or tagging?
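Mechanically, pagination over a task list is simple if the list comes from a deterministic store rather than from the LLM; the sketch below assumes that setup (GraphRAG itself does not expose a pagination parameter that I know of, so this would sit outside it).

```python
def paginate(items, page, page_size=10):
    """Return one fixed-size page of items, so no single response
    has to fit the whole list into the model's context window."""
    start = page * page_size
    return items[start:start + page_size]

tasks = [f"task-{i}" for i in range(51)]
print(len(paginate(tasks, 0)))  # 10
print(len(paginate(tasks, 5)))  # 1 (the remainder)
```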
Really trying to work out if I'm doing something fundamentally wrong here or just missing something.
Cheers