
Fix in load_llm.py #1508

Merged: 4 commits into microsoft:main on Dec 19, 2024
Conversation

theobgbd (Contributor)

Description

Fixed an issue where the "proxy" setting was passed to the PublicOpenAPI constructor instead of the "api_base" parameter, which prevented the use of on-premise OpenAI-compatible LLM servers.

Related Issues

Issue #1481

Proposed Changes

Passed api_base instead of proxy (see the sketch below).
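
A minimal sketch of the one-line change, using the constructor name given in the description above; the actual class and keyword names in load_llm.py may differ:

```python
# Before (bug): the proxy URL was forwarded as the API endpoint,
# so a configured api_base for an on-premise server was ignored.
llm = PublicOpenAPI(
    api_key=config.api_key,
    api_base=config.proxy,
)

# After (fix): forward the configured api_base so local
# OpenAI-compatible servers (ollama, vLLM, etc.) are reachable.
llm = PublicOpenAPI(
    api_key=config.api_key,
    api_base=config.api_base,
)
```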

Checklist

  • I have tested these changes locally.
  • I have reviewed the code changes.
  • I have updated the documentation (if necessary).
  • I have added appropriate unit tests (if applicable).

@HenryZ5734

Helps a lot 👍 With this fix, I can finally run GraphRAG (1.0.0) with a local LLM (Qwen-2.5).

@goodpeter-sun commented Dec 14, 2024

Thanks for the fix. However, I tried it and it didn't work. Am I missing something? @theobgbd

@goodpeter-sun

> Helps a lot 👍 With this fix, I can finally run GraphRAG (1.0.0) with a local LLM (Qwen-2.5).

Are you also using ollama? Could you shed some light? What is your settings.yaml like?
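
For context, a settings.yaml pointing GraphRAG at a local OpenAI-compatible endpoint might look like the sketch below. Key names follow GraphRAG 1.0-era configuration, the URL is ollama's default OpenAI-compatible endpoint, and the model names are illustrative:

```yaml
llm:
  type: openai_chat
  api_key: ${GRAPHRAG_API_KEY}  # a dummy value is typically fine for local servers
  model: qwen2.5
  api_base: http://localhost:11434/v1  # local OpenAI-compatible endpoint
  model_supports_json: true

embeddings:
  llm:
    type: openai_embedding
    api_key: ${GRAPHRAG_API_KEY}
    model: nomic-embed-text
    api_base: http://localhost:11434/v1
```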

@xfalcox commented Dec 16, 2024

Thanks, this fixes using it with an LLM deployed via vLLM.

@AlonsoGuevara merged commit e6de713 into microsoft:main on Dec 19, 2024
1 check passed