Extract basic building blocks from chat_completion/2
to make a more composable interface
#65
Hi 👋 and thank you for the amazing library! This PR proposes a little code reshuffling in order to hopefully make the main interface more flexible.
Problem

I want to use Instructor's flow with OpenAI's discounted Batch API. However, Instructor.chat_completion/2 tightly encapsulates the entire workflow, from preparing a prompt to making an API call, evaluating the result, and potentially issuing retries.

Solution

If we could expose the basic building blocks of Instructor.chat_completion/2, client code would be able to compose them to accommodate whatever specific needs it has. After playing with the code for a while, I came up with the following new public functions:
- Instructor.prepare_prompt/2. This is a near-pure function that accepts the same parameters and optional config as Instructor.chat_completion/2 and returns a prompt in the form of a map, ready to be passed to an HTTP client. The prompt is adapter-specific, which is why we need to introduce a new Instructor.Adapter.prompt/1 callback. And to remove code duplication, it makes sense to change the Instructor.chat_completion/2 callback to Instructor.chat_completion/3, with prompt as the new first argument.
- Instructor.consume_response/2. This is a pure function that accepts a response from Instructor.chat_completion/3 and the same parameters that were supplied to Instructor.prepare_prompt. It validates the response, attempts to cast it to the response_model, and returns either an {:ok, response_model} or an {:error, changeset, params} tuple, with updated params that can be used for the next attempt.

With these two functions the client can manage how they want to call the API, alter the prompt at any stage, add logging or telemetry, etc. – essentially, rebuild Instructor.chat_completion/2 to their liking.

The changes to the adapter behaviour are unfortunately not backwards compatible, but this should be OK at this stage, I think?
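For illustration, here's roughly how client code could rebuild the retry loop itself from these two building blocks. This is only a sketch of the idea: do_request/1 stands in for whatever HTTP client the caller prefers, and the exact call shapes are assumptions based on the signatures described above.

```elixir
defmodule MyApp.Completion do
  # Sketch only: recreates Instructor.chat_completion/2's retry loop
  # from the proposed prepare_prompt/consume_response building blocks.
  def chat_completion(params, retries \\ 2) do
    prompt = Instructor.prepare_prompt(params)
    attempt(prompt, params, retries)
  end

  defp attempt(prompt, params, retries_left) do
    # The client owns the HTTP call, so it can add logging,
    # telemetry, rate limiting, etc. here.
    {:ok, response} = do_request(prompt)

    case Instructor.consume_response(response, params) do
      {:ok, result} ->
        {:ok, result}

      {:error, _changeset, new_params} when retries_left > 0 ->
        # new_params carry the validation errors back to the model
        # for the next attempt.
        new_prompt = Instructor.prepare_prompt(new_params)
        attempt(new_prompt, new_params, retries_left - 1)

      {:error, changeset, _params} ->
        {:error, changeset}
    end
  end

  # Placeholder: plug in Req, Finch, or any other HTTP client.
  defp do_request(_prompt), do: raise("not implemented")
end
```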
Example
Here's a shortened example of how it works with batch API:
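(Sketch only: upload_batch/1 and await_batch/1 below are placeholders for client-side code that talks to OpenAI's Batch API, and UserInfo/texts are hypothetical inputs; none of these are part of this PR.)

```elixir
# 1. Build adapter-specific request bodies, one per input, with no HTTP involved.
requests =
  Enum.map(texts, fn text ->
    params = [
      response_model: UserInfo,
      messages: [%{role: "user", content: text}]
    ]

    {Instructor.prepare_prompt(params), params}
  end)

# 2. Submit all prompts as a single discounted batch and wait for the
#    results, which come back in the same order (client-side plumbing).
{:ok, responses} =
  requests
  |> Enum.map(fn {prompt, _params} -> prompt end)
  |> upload_batch()
  |> await_batch()

# 3. Validate and cast each response against its original params.
results =
  Enum.zip(responses, requests)
  |> Enum.map(fn {response, {_prompt, params}} ->
    Instructor.consume_response(response, params)
  end)
```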
Let me know what you think! If it goes through in any version, I'll add some documentation as well.