Fix OpenAI_Request not handling chunked data correctly (AEGHB-971) #467
base: master
Conversation
Yes, this change makes a lot of sense. I suggest modifying this part of the code as follows:
Thank you very much for your contribution! There are still some additional changes needed:
Description
This addresses an issue with OpenAI returning chunked data.
Since there is no content_length when chunked data is returned, the initial check fails and the data is not allocated and read correctly. See this issue: #466
This fix changes the method to dynamically allocate the memory required to store the content, for both chunked and non-chunked data, and removes the check against content_length.
Related
Fixes #466
Testing
Tested locally against an esp32c3 and an esp32s3.
Using the master copy of this component would return this when trying to do a chat completion:
After this patch, we get the correct response:
Checklist
Before submitting a Pull Request, please ensure the following: