Integrates more models #716
Comments
Request to support Qwen-max. Can I modify the code?
@OXOOOOX Yes, you can modify the code and submit a PR. We will merge it into the code base.
### What problem does this PR solve?

feat: Integrates LLM Azure OpenAI #716

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

### Other

It's just the back-end code; the front-end needs to provide the Azure OpenAI model addition form.

#### Required parameters

- base_url
- api_key

---

Co-authored-by: yonghui li <[email protected]>
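As a rough illustration of the two required parameters above, here is a minimal sketch of how a back end might validate and assemble them before constructing an Azure OpenAI client. The function name and return shape are hypothetical, not RAGFlow's actual API.

```python
def build_azure_openai_config(base_url: str, api_key: str) -> dict:
    """Assemble the two required Azure OpenAI parameters.

    Hypothetical helper for illustration only; RAGFlow's real
    integration code may use different names and structure.
    """
    if not base_url or not api_key:
        raise ValueError("both base_url and api_key are required")
    # Normalize the endpoint: a trailing slash is usually harmless
    # but stripping it keeps the stored value canonical.
    return {
        "azure_endpoint": base_url.rstrip("/"),
        "api_key": api_key,
    }
```

The resulting dict could then be passed (for example) as keyword arguments to an Azure OpenAI client constructor in whatever SDK the back end uses.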
Would you please add llama-3.1-70b-versatile and llama-3.1-8b-instant for Groq? For now only Llama 3.0 is available. Thank you.
Would you please add the GLM models, specifically glm-4-plus for ZHIPU AI, DeepSeek V2.5 for DeepSeek, and Qwen/Qwen2.5-72B-Instruct-128K for SiliconFlow?
This issue tracks the LLM, embedding, reranker, and other models that need to be integrated with RAGFlow.