How to do inference with the fine-tuned weights / model #83
Comments
Yes, you can. The newest commit supports directly loading a LoRA model.
Can you share the script for it, please? Do we just have to change the current model path to the LoRA path? I did that, but it didn't work at all.
Can you share the exact script we can use to run inference with the LoRA weights, please?
@thisurawz1 Through the following code, I successfully loaded the LoRA fine-tuned model for inference. Hope this helps you.
Thank you so much!
Dear author, I used your LoRA checkpoint folder structure and the loading example code (#36) in my finetune_qlora inference code on my own experimental video data, but it still raises errors. The old inference code from the README works; I only added your code on top of it. Please help me!

1: My finetune_qlora inference code (truncated as posted):

```python
import torch
import sys
from videollama2.conversation import conv_templates

def inference():
    ...

if __name__ == "__main__":
    ...
```

2: Terminal errors:
I have already fine-tuned VideoLLaMA2 on a custom dataset using QLoRA and got the files above after fine-tuning. Now, how can I run inference with those weights/models? How can I use these fine-tuned weights with the inference script you provided?
Looking forward to a solution as soon as possible. Thank you.
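One common reason that simply swapping the model path for the LoRA path fails is that a QLoRA checkpoint folder usually contains only adapter weights (`adapter_config.json`, `adapter_model.bin`), not a full model, so the base model must be loaded first. A minimal stdlib-only sketch that distinguishes the two cases (note: `resolve_model_paths` is a hypothetical helper for illustration, not part of VideoLLaMA2; the `base_model_name_or_path` field is the standard PEFT adapter-config key):

```python
import json
import os

def resolve_model_paths(checkpoint_dir):
    """Return (base_model_path, adapter_path) for a checkpoint folder.

    If the folder holds only LoRA adapter weights, read the base model
    path from adapter_config.json's 'base_model_name_or_path' field and
    treat the folder itself as the adapter; otherwise assume the folder
    is a full (merged) model and there is no separate adapter.
    Hypothetical helper for illustration only.
    """
    adapter_cfg = os.path.join(checkpoint_dir, "adapter_config.json")
    if os.path.exists(adapter_cfg):
        with open(adapter_cfg) as f:
            base = json.load(f)["base_model_name_or_path"]
        return base, checkpoint_dir
    return checkpoint_dir, None
```

With a result like `("path/to/base", "path/to/lora")`, the base model is loaded normally and the adapter applied on top (e.g. via PEFT); a plain `(checkpoint_dir, None)` means the folder can be passed to the loader directly.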
```python
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init

def inference():
    disable_torch_init()
    # ... (rest of the inference code truncated in the original post)

if __name__ == "__main__":
    inference()
```
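For intuition on what loading (or merging) a LoRA checkpoint actually does: LoRA stores two low-rank matrices, `B` (d×r) and `A` (r×k), and folding them into a base weight is just `W' = W + (alpha / r) * B @ A`. A tiny pure-Python numeric sketch of that merge (toy numbers, no framework; not the VideoLLaMA2 loading code itself):

```python
def merge_lora(W, A, B, alpha, r):
    """Merge a LoRA update into a base weight: W' = W + (alpha / r) * B @ A.

    W is a d x k base weight, B is d x r, A is r x k, all as nested lists.
    """
    scale = alpha / r
    d, k = len(W), len(W[0])
    merged = [[W[i][j] for j in range(k)] for i in range(d)]
    for i in range(d):
        for j in range(k):
            # (B @ A)[i][j], then scaled and added onto the base weight
            delta = sum(B[i][t] * A[t][j] for t in range(r))
            merged[i][j] += scale * delta
    return merged

# Toy example: d = k = 2, rank r = 1, alpha = 2 (so scale = 2)
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # d x r
A = [[0.5, 0.5]]     # r x k
print(merge_lora(W, A, B, alpha=2, r=1))  # [[2.0, 1.0], [2.0, 3.0]]
```

A loader that supports "directly loading the LoRA model" either performs this merge once up front or keeps the adapter separate and applies it during the forward pass.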