From 3610fb49302867af5b2598b218b3011bc9ed52aa Mon Sep 17 00:00:00 2001
From: youkaichao
Date: Tue, 4 Mar 2025 20:47:06 +0800
Subject: [PATCH] [doc] add "Failed to infer device type" to faq (#14200)

Signed-off-by: youkaichao
---
 docs/source/getting_started/troubleshooting.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/source/getting_started/troubleshooting.md b/docs/source/getting_started/troubleshooting.md
index 92103e65bbbb7..fdfaf9f932698 100644
--- a/docs/source/getting_started/troubleshooting.md
+++ b/docs/source/getting_started/troubleshooting.md
@@ -254,6 +254,10 @@ ValueError: Model architectures [''] are not supported for now. Supported
 
 But you are sure that the model is in the [list of supported models](#supported-models), there may be some issue with vLLM's model resolution. In that case, please follow [these steps](#model-resolution) to explicitly specify the vLLM implementation for the model.
 
+## Failed to infer device type
+
+If you see an error like `RuntimeError: Failed to infer device type`, it means that vLLM failed to infer the device type of the runtime environment. You can check [the code](gh-file:vllm/platforms/__init__.py) to see how vLLM infers the device type and why it is not working as expected. After [this PR](gh-pr:14195), you can also set the environment variable `VLLM_LOGGING_LEVEL=DEBUG` to see more detailed logs to help debug the issue.
+
 ## Known Issues
 
 - In `v0.5.2`, `v0.5.3`, and `v0.5.3.post1`, there is a bug caused by [zmq](https://github.com/zeromq/pyzmq/issues/2000) , which can occasionally cause vLLM to hang depending on the machine configuration. The solution is to upgrade to the latest version of `vllm` to include the [fix](gh-pr:6759).
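
For readers applying the new FAQ entry, a minimal sketch of the suggested debugging step, assuming the `vllm serve` entry point and an arbitrary example model (`facebook/opt-125m`); the exact log output depends on your vLLM version:

```console
# Enable debug logging so vLLM's device-type detection
# (vllm/platforms/__init__.py) emits more detailed logs.
VLLM_LOGGING_LEVEL=DEBUG vllm serve facebook/opt-125m
```

The same environment variable also applies to offline `LLM(...)` usage, since it controls vLLM's logging globally rather than any particular entry point.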