
[Warning] Help: the service cannot find the existing model on startup and automatically selects another model instead; debugging cannot locate the root cause #5145

Open
Yuhan798 opened this issue Dec 19, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@Yuhan798

Problem Description
The desired glm4-9b is already set as the LLM model in the model configuration file, but after starting the service a warning keeps reporting that it cannot be found, and a suffixed 9b model is used by default instead, as shown in the screenshot.
[screenshot omitted]
However, the corresponding endpoint can be called successfully with Apipost.
When debugging, I set a breakpoint in the script where the warning is raised, but the code never reaches that script, yet the warning still appears.
[screenshot omitted]

Has anyone run into a similar problem, and how did you solve it?

Steps to Reproduce
Deployed from source following the deployment markdown guide; the problem appears after running `start -a`.

Expected Result
No warning, and the correct model invocation records visible in the backend.

Actual Result
The expected model is not actually invoked, and it is unclear where the automatically selected suffixed model is being loaded from.

Environment Information

  • Langchain-Chatchat version / commit number: (e.g., 0.3.1 or commit 123456)
  • Deployment method (pypi installation / source deployment / docker deployment): pypi installation
  • Model inference framework used (Xinference / Ollama / OpenAI API, etc.): Xinference
  • LLM model used (GLM-4-9B / Qwen2-7B-Instruct, etc.): GLM-4-9B
  • Embedding model used (bge-large-zh-v1.5 / m3e-base, etc.): bge-large-zh-v1.5
  • Vector library used (faiss / milvus / pg_vector, etc.): faiss
  • Operating system and version: macOS
  • Python version: 3.8
  • Inference hardware (GPU / CPU / MPS / NPU, etc.): GPU
  • Other relevant environment information:

Additional Information
Add any other information related to the issue.

Yuhan798 added the bug label on Dec 19, 2024

dosubot bot commented Dec 19, 2024

@Yuhan798 Hello! I'm Dosu, a bot that can help you fix bugs, answer questions, and become a contributor. I'll help while you wait for a human maintainer. Looking forward to solving this with you! 🤖

The issue you're encountering, where the service defaults to a model with a suffix like -qUQeFaHA instead of the specified glm4-9b, might be related to how the available models are detected and selected in the codebase.

  1. Model Detection and Selection: The get_default_llm() function is responsible for selecting the default model. It checks whether DEFAULT_LLM_MODEL is in the list of available models; if not, it logs a warning and falls back to the first available model [1] (a sketch of this behavior follows this list).

  2. Model Suffix: The suffix in model names like -qUQeFaHA might be generated or assigned during the model detection process, possibly to differentiate between similar models or versions. The exact mechanism for generating this suffix is not detailed in the available information [2] (a snippet for inspecting what the platform actually reports appears at the end of this comment).

  3. Debugging Steps:

    • Ensure that the DEFAULT_LLM_MODEL is correctly specified in your configuration files and matches exactly with the model names listed in your available models.
    • Check the get_config_models function to see how models are being detected and if there are any discrepancies in the model names or configurations [3].
    • Verify if the model detection logic is correctly identifying the models from your configured platforms, such as Xinference.
  4. Potential Solutions:

    • Double-check your configuration files to ensure there are no typos or mismatches in the model names.
    • If the suffix is being added dynamically, investigate the model detection logic to understand why this is happening and adjust the configuration or code as needed.
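
For reference, the fallback behavior described in point 1 can be sketched as follows. This is a minimal, self-contained sketch based only on the description above, not the project's actual code; `DEFAULT_LLM_MODEL` and `get_config_models` are stand-ins named after the setting and helper mentioned in this comment:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

DEFAULT_LLM_MODEL = "glm4-9b"  # the configured default (value taken from this issue)

def get_config_models(model_type):
    # Stand-in for the project's model-detection helper; the real one
    # queries the configured platforms (e.g. Xinference) for their models.
    return ["glm4-9b-qUQeFaHA"]  # what the platform reported in this issue

def get_default_llm():
    # Prefer the configured default; otherwise warn and fall back to the
    # first detected model, which matches the behavior observed here.
    available = get_config_models(model_type="llm")
    if DEFAULT_LLM_MODEL in available:
        return DEFAULT_LLM_MODEL
    logger.warning(
        "default llm model %s is not found in available llms, using %s instead",
        DEFAULT_LLM_MODEL,
        available[0],
    )
    return available[0]

print(get_default_llm())  # prints "glm4-9b-qUQeFaHA" after logging the warning
```

The key point is that the lookup is an exact string match, so glm4-9b will not match glm4-9b-qUQeFaHA. Also, since `start -a` launches several services, if the warning is emitted from a spawned subprocess, a breakpoint set in the parent process would never be hit; that is only an assumption, but it would explain the debugging observation in the issue.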

If these steps do not resolve the issue, further investigation into the model detection and configuration logic might be necessary to pinpoint the source of the problem.
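
One concrete way to do that investigation is to compare the configured name against what the platform actually serves. Below is a hedged sketch that assumes a local Xinference server on its default port (9997) and its OpenAI-compatible /v1/models endpoint; adjust the URL and the configured name to your deployment:

```python
import requests

CONFIGURED = "glm4-9b"                    # your DEFAULT_LLM_MODEL value
XINFERENCE_URL = "http://127.0.0.1:9997"  # assumed default Xinference endpoint

# Xinference exposes an OpenAI-compatible model list; each entry's "id"
# is the identifier the server actually uses, suffix included, if any.
resp = requests.get(f"{XINFERENCE_URL}/v1/models", timeout=10)
resp.raise_for_status()
served = [m["id"] for m in resp.json().get("data", [])]

print("served models:", served)
if CONFIGURED in served:
    print(f"exact match for {CONFIGURED!r}")
else:
    near = [m for m in served if m.startswith(CONFIGURED)]
    print(f"no exact match for {CONFIGURED!r}; near matches: {near}")
```

If the served identifier carries a suffix, making both sides match exactly (either by relaunching the model with an explicit UID on the Xinference side, if your version supports that, or by setting DEFAULT_LLM_MODEL to the suffixed identifier) should let the exact-match lookup succeed.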

