-
How can you handle errors in the backend when returning a stream as the response?

```ts
try {
  const result = streamText({
    model: aiSdkOpenAI(GPT_VERSION),
    messages: reqMessages
  })
  return result.toDataStreamResponse()
} catch (err) {
  // TODO: THIS DOES NOT WORK
  console.error(err)
  throw err
}
```

In this example, how can I capture any errors? If I use the example from the documentation, it works and logs the error, but the text is no longer streamed:

```ts
try {
  const result = streamText({
    model: aiSdkOpenAI(GPT_VERSION),
    messages: reqMessages
  })
  for await (const part of result.fullStream) {
    switch (part.type) {
      // ... handle other part types
      case 'error': {
        const error = part.error
        // This works
        console.error(error)
        break
      }
    }
  }
  return result.toDataStreamResponse()
} catch (err) {
  // TODO: THIS DOES NOT WORK
  console.error(err)
  throw err
}
```
Replies: 2 comments 1 reply
-
You need to throw the error in your switch case so that it reaches the catch:

```ts
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'error': {
      const error = part.error;
      throw error;
    }
  }
}
```

and in the catch:

```ts
// String(error) because the thrown value is `unknown` and
// the Response body must be a string (or other BodyInit).
return new Response(String(error), { status: 500 });
```
-
You can change / log error messages using the following: https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot#error-messages
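The linked page covers how stream error messages are masked by default and how to supply an error-to-string mapper so the client sees something useful. As a sketch, such a mapper is plain TypeScript and independent of the SDK:

```typescript
// A mapper from an unknown thrown value to a user-facing message string.
// This mirrors the kind of function the linked docs suggest supplying so
// that stream errors are forwarded instead of being masked.
function errorMessage(error: unknown): string {
  if (error == null) return 'unknown error';
  if (typeof error === 'string') return error;
  if (error instanceof Error) return error.message;
  return JSON.stringify(error);
}
```

It would then be passed to the stream response, e.g. `result.toDataStreamResponse({ getErrorMessage: errorMessage })` — the `getErrorMessage` option name is an assumption based on the linked docs, so verify it against the SDK version in use.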