Support for a custom OpenAI-compatible API, using the default llama_index.llms.openai (no requirements change).
backend:
frontend:
docker:
CUSTOM_HOST and CUSTOM_API_KEY environment variables, with the lm-studio URL set by default (http://localhost:1234/v1).

Environment variables:
.env example for lm-studio server:
The API key is not needed in most cases.
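An illustrative .env for an lm-studio server, based on the variable names and default URL above (the key value is a placeholder, since lm-studio typically ignores it):

```env
CUSTOM_HOST=http://localhost:1234/v1
CUSTOM_API_KEY=not-needed
```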
Why use CUSTOM instead of OPENAI naming? llama_index uses OPENAI_API_BASE, while the openai package uses OPENAI_BASE_URL. Adding a new variable makes sure it does not conflict with cloud OpenAI usage.
Tested with the lm-studio local server using:
preview: