Reminder
System Info
I have installed all the requirements for Qwen2-VL.
Reproduction
Hello, I want to train both the vision adapter and the LLM part with LoRA. Should I set `train_mm_proj_only` to `true`, as in the following config?
```yaml
### model
model_name_or_path: Qwen/Qwen2-VL-72B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: mllm_demo,identity  # video: mllm_video_demo
template: qwen2_vl
cutoff_len: 8900
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/qwen2_vl-7b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 16
learning_rate: 1.0e-4
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.05
bf16: true
ddp_timeout: 180000000
visual_inputs: true
deepspeed: examples/deepspeed/ds_z3_config.json
train_mm_proj_only: true

### lora
lora_alpha: 512
lora_dropout: 0.1
lora_rank: 256

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
```
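For context, a quick sketch of two derived quantities implied by the config above: the effective global batch size (per-device batch size × gradient accumulation × GPU count) and the standard LoRA scaling factor (`lora_alpha / lora_rank`). The GPU count here is an assumption for illustration; it is not set in the config itself.

```python
# Illustrative sketch only -- these names are plain variables mirroring the
# YAML keys above, not LLaMA-Factory API calls.

per_device_train_batch_size = 1
gradient_accumulation_steps = 16
num_gpus = 8  # assumption: an 8-GPU node, typical for a 72B model under ZeRO-3

# Effective global batch size per optimizer update step.
effective_batch_size = (per_device_train_batch_size
                        * gradient_accumulation_steps
                        * num_gpus)

lora_alpha, lora_rank = 512, 256
# Standard LoRA scaling applied to the low-rank weight update.
lora_scaling = lora_alpha / lora_rank

print(effective_batch_size)  # 128
print(lora_scaling)          # 2.0
```

With `lora_alpha: 512` and `lora_rank: 256`, the update is scaled by 2.0, which is a common choice (alpha = 2 × rank).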
Expected behavior
No response
Others
No response