There was previously a discussion that went nowhere about "privacy using local APIs." That issue was closed with TGI suggested as a possible alternative, which did not address the breadth of the need.
Currently this project heavily supports paid third-party hosted models (OpenAI, HF, Google, etc.) but appears to be ignoring the local-model part of the community. Specifically, the previous ticket was closed because it was "possible" to use TGI to host some models locally. In my case, that is not a solution my business requirements will support. I need OpenAI-API-compatible support (and I cannot simply host it from a Python notebook). Portability of all code from OpenAI to a local OpenAI-compatible API is paramount.
Since there are now multiple OpenAI-compatible APIs on the market, including LM Studio and the plugin for Ooba, it seems to me that adding support would be appropriate, and even relatively easy, assuming a modification to the OpenAI object to accept an override URL.
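For illustration, here is a minimal sketch of what such an override looks like with the legacy `openai` Python package (pre-1.0), where the endpoint is a module-level attribute. The `http://localhost:1234/v1` address is LM Studio's default server and is only an example; the model name is a placeholder:

```python
import openai

# Sketch only: redirect the legacy openai SDK (< 1.0) to a local
# OpenAI-compatible server. http://localhost:1234/v1 is LM Studio's
# default; adjust for Ooba or any other compatible backend.
openai.api_base = "http://localhost:1234/v1"
openai.api_key = "sk-local-placeholder"  # most local servers ignore the key

response = openai.ChatCompletion.create(
    model="local-model",  # placeholder; many local servers ignore the name
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response["choices"][0]["message"]["content"])
```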
I admit I'm not a Python expert, but I've dug deeply into the code to figure out where the "api.openai.com" URL is injected (hoping to monkeypatch it with my local server's address), and could not find it.
If anyone has a workaround I can use until this becomes a feature, I'd greatly appreciate it.
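In case it helps: the default URL most likely lives inside the `openai` dependency itself (as `openai.api_base`) rather than in this project's source, which would explain why it isn't greppable here. If so, the legacy SDK also reads `OPENAI_API_BASE` from the environment at import time, so a possible zero-patch workaround (untested against this project) is:

```python
import os

# Untested assumption: the legacy openai SDK (< 1.0) reads OPENAI_API_BASE
# when the module is imported, so setting it before anything imports openai
# should redirect every request made through the SDK.
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"
os.environ["OPENAI_API_KEY"] = "sk-local-placeholder"

# ...then import and run the project as usual below this point.
```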