How to stream results back when using an LLM API directly? #4724
Unanswered
Godrules500 asked this question in Help
Since the Vercel AI SDK does not yet support the other grounding types provided by Vertex AI, I am going to call the Vertex AI API directly. I still want to stream the response back to the front end. How can I do this with the tools the AI SDK provides?
So the workflow would be:
user asks a question --> pre-processing determines we have a vector DB for this question --> call the Vertex AI API directly --> stream the response back to the AI SDK UI.
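
One way to wire this up, as a minimal sketch rather than a confirmed recipe: call Vertex AI through the `@google-cloud/vertexai` Node SDK (you could also hit the REST `:streamGenerateContent` endpoint directly) and re-emit its chunks as a plain text stream from a Next.js route handler. The route path, env var names, and model name below are assumptions you would swap for your own:

```ts
// app/api/chat/route.ts — hypothetical route; adjust path, project, and model.
import { VertexAI } from '@google-cloud/vertexai';

const vertexAI = new VertexAI({
  project: process.env.GCP_PROJECT!, // assumption: project/location come from env
  location: 'us-central1',
});

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const model = vertexAI.getGenerativeModel({ model: 'gemini-1.5-pro' });
  const result = await model.generateContentStream({
    // Attach your grounding / tools config here per the Vertex AI docs.
    contents: [{ role: 'user', parts: [{ text: prompt }] }],
  });

  // Re-emit Vertex AI's streamed chunks as a plain text ReadableStream.
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const chunk of result.stream) {
        const text = chunk.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
        if (text) controller.enqueue(encoder.encode(text));
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}
```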
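On the client, the AI SDK hooks can consume that raw text stream if you tell them not to expect the SDK's own data stream protocol. A minimal sketch, assuming AI SDK 4.x (on 3.x the option was named `streamMode` rather than `streamProtocol`):

```tsx
'use client';
import { useCompletion } from '@ai-sdk/react';

export default function Chat() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/chat',        // must match the route above
    streamProtocol: 'text',  // the route returns raw text, not the SDK data stream
  });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <div>{completion}</div>
    </form>
  );
}
```

If you later need structured stream parts (sources, tool calls) instead of plain text, the data stream helpers in the `ai` package are the place to look.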