ImportError: cannot import name 'Qwen2_5_VLForConditionalGeneration' from 'transformers' #3

Open
tzelalouzeir opened this issue Feb 12, 2025 · 0 comments

Running download_model.py fails with ImportError: cannot import name 'Qwen2_5_VLForConditionalGeneration' from 'transformers'.
You only need to change the import and the class used on line 22 to Qwen2VLForConditionalGeneration:

Importing

From: `from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor`
To: `from transformers.models.qwen2_vl.modeling_qwen2_vl import Qwen2VLForConditionalGeneration`

Line 22

From: `model = Qwen2_5_VLForConditionalGeneration.from_pretrained`
To: `model = Qwen2VLForConditionalGeneration.from_pretrained`
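
Alternatively, a version-tolerant import avoids hard-coding either class name. A minimal sketch, assuming the newer class is simply missing from older transformers releases (the `QwenVLModel` alias is mine, not from the script):

```python
# Sketch: use the Qwen2.5-VL class when the installed transformers exports it,
# otherwise fall back to the older Qwen2-VL class.
try:
    from transformers import Qwen2_5_VLForConditionalGeneration as QwenVLModel
except ImportError:
    # Older transformers releases only export the Qwen2-VL class.
    from transformers import Qwen2VLForConditionalGeneration as QwenVLModel
```

Upgrading transformers (`pip install -U transformers`) should also make the original import work, since `Qwen2_5_VLForConditionalGeneration` only exists in newer releases.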

Modified version of the code:

```python
import os
import shutil

import torch
from transformers import AutoProcessor
from transformers.models.qwen2_vl.modeling_qwen2_vl import Qwen2VLForConditionalGeneration

MODEL_DIR = "models/Qwen2.5-VL-7B-Instruct"

def download_model():
    print(f"Downloading model to {MODEL_DIR}...")
    
    # Create directory if it doesn't exist
    os.makedirs(MODEL_DIR, exist_ok=True)
    
    # Download and save processor first
    print("Downloading and saving processor...")
    processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    processor.save_pretrained(MODEL_DIR)
    
    print("Downloading and saving model...")
    # Initialize model with better memory handling
    model = Qwen2VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        torch_dtype=torch.float16,
        device_map="auto",
        offload_folder="offload",  # Temporary directory for offloading
        offload_state_dict=True,   # Enable state dict offloading
        low_cpu_mem_usage=True     # Enable low CPU memory usage
    )
    
    print("Saving model...")
    # Save with specific shard size to handle memory better
    model.save_pretrained(
        MODEL_DIR,
        safe_serialization=True,
        max_shard_size="2GB"
    )
    
    # Clean up the temporary offload folder if it exists
    if os.path.exists("offload"):
        shutil.rmtree("offload")
    
    print("Model downloaded and saved successfully!")

if __name__ == "__main__":
    download_model()
```
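
To sanity-check the download, reloading from the local directory should work the same way, assuming the workaround above loaded the weights cleanly in the first place (an untested sketch; the dtype and device settings are just examples):

```python
import torch
from transformers import AutoProcessor
from transformers.models.qwen2_vl.modeling_qwen2_vl import Qwen2VLForConditionalGeneration

# Reload the processor and model from the directory written by download_model().
processor = AutoProcessor.from_pretrained("models/Qwen2.5-VL-7B-Instruct")
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "models/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(type(model).__name__)  # expect: Qwen2VLForConditionalGeneration
```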

Got some pointers from huggingface/transformers#35569 (comment).
