# Questions about lmdeploy version and GPU usage #29
```
  File "/home/user/res/lmdeploy/lmdeploy/serve/async_engine.py", line 521, in _get_prompt_input
    prompt = chat_template.messages2prompt(prompt,
  File "/home/user/res/lmdeploy/lmdeploy/model.py", line 223, in messages2prompt
    if len(messages) and messages[0]['role'] != 'system':
TypeError: string indices must be integers
```

The error suggests that it expects messages with a role/content structure, but our code simply passes a `(query, image)` pair.
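The `TypeError` fires because `messages2prompt` indexes `messages[0]['role']`, which only works when `messages` is a list of dicts rather than a bare string. A minimal sketch of wrapping a `(query, image)` pair in an OpenAI-style message list follows; the helper name `to_messages` and the exact content schema are my assumptions, so check them against the lmdeploy version you are running.

```python
# Sketch (assumption): wrap a bare (query, image) pair in the
# role/content message structure that messages2prompt indexes into.
# The content schema mirrors the OpenAI-style chat format; verify it
# against your lmdeploy version before relying on it.

def to_messages(query, image_url):
    """Convert a (query, image) pair into a role/content message list."""
    return [{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': query},
            {'type': 'image_url', 'image_url': {'url': image_url}},
        ],
    }]

messages = to_messages('Describe this slide.', 'slide_001.png')
# messages[0] is now a dict, so the messages[0]['role'] != 'system'
# check in model.py succeeds instead of raising TypeError.
print(messages[0]['role'])  # user
```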
+1
How to do slide captioning on multiple GPUs?
Use:

```python
from lmdeploy import pipeline, ChatTemplateConfig, TurbomindEngineConfig
...
backend_config = TurbomindEngineConfig(tp=2)
model = pipeline('Lin-Chen/ShareCaptioner',
                 backend_config=backend_config,
                 chat_template_config=ChatTemplateConfig(model_name='internlm-xcomposer2-4khd'),
                 log_level='INFO')
```
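For captioning a whole deck of slides, lmdeploy's pipeline accepts a list of prompts, so one `(text, image)` tuple per slide can be submitted as a batch. Below is a hedged sketch: the prompt-list construction is plain Python, while the pipeline call itself (commented out, since it needs lmdeploy installed plus two GPUs) and the `slides/` paths are illustrative assumptions, not confirmed project code.

```python
# Sketch: batch slide captioning by passing a list of (text, image)
# prompts to the pipeline. The slide paths and prompt text below are
# placeholders; verify the tuple prompt format against your lmdeploy
# version's VLM pipeline docs.

slide_paths = [f'slides/slide_{i:03d}.png' for i in range(4)]
prompts = [('Describe this slide in detail.', path) for path in slide_paths]

# Requires lmdeploy and tp=2 (two GPUs), as in the snippet above:
# from lmdeploy import pipeline, ChatTemplateConfig, TurbomindEngineConfig
# pipe = pipeline('Lin-Chen/ShareCaptioner',
#                 backend_config=TurbomindEngineConfig(tp=2),
#                 chat_template_config=ChatTemplateConfig(
#                     model_name='internlm-xcomposer2-4khd'))
# responses = pipe(prompts)            # one response per slide
# captions = [r.text for r in responses]

print(len(prompts))  # 4
```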
@YoungjaeDev Thank you. Do you have slide captioning batch inference working for the ShareCaptioner-Video model? I'm looking at the code right now, trying to set up inference on a dataset, and it seems there are no docs for that.
No. The current code doesn't run well because of package issues.
Hello, I have a few questions regarding the ShareGPT4Video project:

1. The `lmdeploy` version is not specified. Which version should we use?
2. The `internlm-xcomposer2-4khd` model_name was removed 4 days ago. This seems to require code modifications. How should we address this? (ShareGPT4Video/captioner/slide_captioner_lmdeploy.py, Line 140 in 88426fd)
3. What environment was the `slide_captioner_lmdeploy.py` code tested on? I'm currently using multiple GPUs (24G×2) due to OOM issues. Is it okay to use it this way?