
Fix OpenAI_Request not handling chunked data correctly (AEGHB-971) #467

Open
ryanrdetzel wants to merge 1 commit into master
Conversation


@ryanrdetzel ryanrdetzel commented Jan 31, 2025

Description

This addresses an issue with OpenAI returning chunked data.
Since there is no content_length when chunked data is returned, the initial check fails to allocate and read the data correctly. See this issue: #466

This fix changes the method, for both chunked and non-chunked data, to dynamically allocate the memory required to store the content, and removes the check against content_length.

Related

Fixes #466

Testing

Tested locally against an ESP32-C3 and an ESP32-S3.

Using the master copy of this component, a chat completion attempt would return this:

E (8905) OpenAI: ./managed_components/espressif__openai/OpenAI.c:2681 (OpenAI_Request):HTTP client fetch headers failed!
D (8917) event: no handlers have been registered for event ESP_HTTP_CLIENT_EVENT:6 posted to loop 0x3fca053c
D (8929) event: no handlers have been registered for event ESP_HTTP_CLIENT_EVENT:6 posted to loop 0x3fca053c
E (8937) OpenAI: ./managed_components/espressif__openai/OpenAI.c:1308 (OpenAI_ChatCompletionMessage):Empty result!

After this patch, the correct response is returned.

Checklist

Before submitting a Pull Request, please ensure the following:

  • 🚨 This PR does not introduce breaking changes.
  • All CI checks (GH Actions) pass.
  • Documentation is updated as needed.
  • Tests are updated or added as necessary.
  • Code is well-commented, especially in complex areas.
  • Git history is clean — commits are squashed to the minimum necessary.


CLAassistant commented Jan 31, 2025

CLA assistant check
All committers have signed the CLA.


github-actions bot commented Jan 31, 2025

Messages
📖 🎉 Good Job! All checks are passing!

👋 Hello ryanrdetzel, we appreciate your contribution to this project!


This automated output is generated by the PR linter DangerJS, which checks if your Pull Request meets the project's requirements and helps you fix potential issues.

DangerJS is triggered by each push event to a Pull Request and modifies the contents of this comment.

Please consider the following:
- Danger mainly focuses on the PR structure and formatting and can't understand the meaning behind your code or changes.
- Danger is not a substitute for human code reviews; it's still important to request a code review from your colleagues.
- To manually retry these Danger checks, navigate to the Actions tab and re-run the last Danger workflow.

Review and merge process you can expect ...


We do welcome contributions in the form of bug reports, feature requests and pull requests.

1. An internal issue has been created for the PR; we assign it to the relevant engineer.
2. They review the PR and either approve it or ask you for changes or clarifications.
3. Once the GitHub PR is approved, we do the final review, collect approvals from core owners, and make sure all the automated tests are passing.
- At this point we may make some adjustments to the proposed change, or extend it by adding tests or documentation.
4. If the change is approved and passes the tests, it is merged into the default branch.

Generated by 🚫 dangerJS against d8a96c5

@github-actions github-actions bot changed the title Fix OpenAI_Request not handling chunked data correctly Fix OpenAI_Request not handling chunked data correctly (AEGHB-971) Jan 31, 2025
Contributor

lijunru-hub commented Feb 5, 2025

Yes, this change makes a lot of sense. I suggest modifying this part of the code as follows:

    int content_length = esp_http_client_fetch_headers(client);
    OPENAI_ERROR_CHECK_GOTO(content_length >= 0, "HTTP client fetch headers failed!", end);
    ESP_LOGD(TAG, "content_length=%d", content_length);
    // Chunked responses report no Content-Length, so fall back to 4096-byte reads
    content_length = (content_length > 0) ? content_length : 4096;
    int read_len = 0;
    int output_len = 0;
    do {
        char *new_result = (char *)realloc(result, output_len + content_length + 1);
        OPENAI_ERROR_CHECK_GOTO(new_result != NULL, "Chunk Data reallocated Failed", end);
        result = new_result;
        read_len = esp_http_client_read_response(client, result + output_len, content_length);
        if (read_len > 0) {
            output_len += read_len;
        }
        ESP_LOGD(TAG, "HTTP_READ:=%d", read_len);
    } while (read_len > 0);
    result[output_len] = '\0';
    ESP_LOGD(TAG, "output_len: %d\n", output_len);

@lijunru-hub
Contributor

Thank you very much for your contribution! There are still some additional changes needed:

  • Update the version in OpenAI's idf_component.yml to 1.0.2.
  • Add a note in CHANGELOG.md to document the relevant changes.

@leeebo leeebo added the openai label Feb 7, 2025
Development

Successfully merging this pull request may close these issues.

OpenAI chat completion is not working as expected since it's chunked data (IDFGH-14549) (AEGHB-968)
4 participants