Support for gpt-instruct models #93
machinewrapped announced in Announcements
- Added support for the gpt-instruct models, though I'm not currently recommending their usage. The quality of translation is comparable to the chat models: sometimes a little better, sometimes a little worse in my tests. However, the instruct models only support the 4K token window of earlier gpt-3.5 models (approx. 40 lines per batch) and have a higher per-token cost, so I think that for most users gpt-3.5-turbo-16k with a maximum batch size of about 100 lines will be more efficient and just as good.

The main purpose of the exercise was to refactor the code base to support different translation clients, which opens the door to supporting other models and platforms.
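To give a rough idea of what that refactoring enables, here is a minimal sketch of a translation-client abstraction. The names (TranslationClient, ChatGPTClient, InstructGPTClient, request_translation, make_client) and batch sizes are illustrative assumptions, not the project's actual classes:

```python
# Illustrative sketch only: these class and method names are hypothetical
# and are not taken from the project's actual code base.
from abc import ABC, abstractmethod

class TranslationClient(ABC):
    """Common interface so the translator doesn't care which model family is used."""

    def __init__(self, model: str, max_batch_lines: int):
        self.model = model
        # e.g. ~40 lines per batch for 4K-context models, ~100 for 16K
        self.max_batch_lines = max_batch_lines

    @abstractmethod
    def request_translation(self, prompt: str, lines: list[str]) -> str:
        """Send one batch of subtitle lines and return the translated text."""

class ChatGPTClient(TranslationClient):
    def request_translation(self, prompt: str, lines: list[str]) -> str:
        # Would call the chat completions endpoint (messages-based).
        raise NotImplementedError

class InstructGPTClient(TranslationClient):
    def request_translation(self, prompt: str, lines: list[str]) -> str:
        # Would call the completions endpoint with a single flat prompt.
        raise NotImplementedError

def make_client(model: str) -> TranslationClient:
    # Pick a client and a batch size appropriate to the model's context window.
    if "instruct" in model:
        return InstructGPTClient(model, max_batch_lines=40)
    return ChatGPTClient(model, max_batch_lines=100)
```

With an interface like this, adding another model or platform is a matter of writing a new client class rather than threading special cases through the translation logic.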
This release also contains a fix for error handling in the updated OpenAI APIs when a connection error occurs (this may solve some of the crashes people have reported), and it now closes the connection to the server when stopping or quitting, so the program doesn't have to wait for any active requests to complete before it can exit.
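For context, this is roughly what handling connection errors and closing the client looks like with the v1 OpenAI Python SDK. This is a minimal sketch under that assumption, not the project's actual implementation; the function names (translate_batch, shutdown) and retry policy are made up for illustration:

```python
# Minimal sketch, not the project's code. Assumes the OpenAI v1 Python SDK,
# which raises openai.APIConnectionError on network failures and exposes
# client.close() to shut down the underlying HTTP client.
import openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_batch(prompt: str, retries: int = 2) -> str | None:
    for attempt in range(retries + 1):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo-16k",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except openai.APIConnectionError as error:
            # Report the failure instead of crashing; retry a couple of times.
            print(f"Connection error on attempt {attempt + 1}: {error}")
    return None

def shutdown():
    # Closing the client releases the HTTP connection, so the program can
    # exit without waiting for in-flight requests to finish naturally.
    client.close()
```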
What's Changed
Full Changelog: v0.4.7...v0.5.0
This discussion was created from the release Support for gpt-instruct models.