GeWu-Lab/Crab


🚀🚀 Welcome to the repo of Crab! If our project helps you, please give us a star ⭐ on GitHub to support us. 🙏🙏

arXiv | HF Checkpoint | HF Data | Webpage | Demo

📰 News

🌟 Unified Audio-Visual Scene Understanding Capabilities

Temporal Localization

AVE Task

-en7GAdXAQk.mp4
Label: Male speech, man speaking, [0,8]

Predict: A man is giving a speech from a podium in a classroom. The man speaks from the beginning of the video until the 8th second. So the audible and visible event in the video is Male speech, man speaking, and the time range is 0,8.

AVVP Task

6aV6c8mY0lg.mp4
Label: Audio: Chainsaw, [0,10] Visual: Chainsaw, [0,1],[3,6],[9,10]

Predict: The video shows a man using a chainsaw to cut a tree. The chainsaw is visible and running from the beginning of the video (0 seconds) to the end (10 seconds). Therefore, visual and audio events are: Chainsaw,(0 10) and Chainsaw,(0 10)

Spatio-temporal Reasoning

MUSIC-AVQA Task

00006835.mp4
Question: What is the left instrument of the first sounding instrument?

Answer: saxophone

Predict: In the video, four men are playing musical instruments. The man on the left is playing the saxophone, the man in the middle is playing the double bass, the man on the right is playing the guitar, and the man in the back is playing the drums. At the beginning of the video, the man in the middle starts playing the double bass first, and then the other instruments start playing. So the instrument on the left of the first sounding instrument is the saxophone. So the answer is saxophone.

Spatial Localization & Pixel-level Understanding

🛠️ Requirements and Installation

Basic Dependencies:

  • Python == 3.9
  • PyTorch == 2.1.0
  • transformers == 4.37.2
  • deepspeed == 0.12.6

Install required packages:

git clone git@github.com:GeWu-Lab/Crab.git
cd Crab
pip install -r requirements.txt
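
If you are building the environment from scratch, it can help to create it explicitly with the versions pinned above. A minimal sketch assuming conda (conda itself is an assumption; any Python 3.9 environment manager works):

# Create an isolated Python 3.9 environment (conda assumed; venv also works)
conda create -n crab python=3.9 -y
conda activate crab
# Pin the core dependency versions listed above, then install the rest
pip install torch==2.1.0 transformers==4.37.2 deepspeed==0.12.6
pip install -r requirements.txt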

Multi-modal Encoder Weights:

LLM Weights:

🚀 Quick Start

  1. Download the finetuned weights into ckpt_dir, and the AVS_finetune_weights and AVSS_finetune_weights into avs_ckpt_dir;
  2. Prepare your test samples in data/example.json like this (a quick sanity check for this file is sketched after this list):
[
    {
        "task": "avqa",
        "audio_path": "assets/example/avqa/00006835.mp3",
        "video_path": "assets/example/avqa/00006835.mp4",
        "question": "What is the left instrument of the first sounding instrument?"
    },
    {
        "task": "ave",
        "audio_path": "assets/example/ave/-67UNKFmRLk.mp3",
        "video_path": "assets/example/ave/-67UNKFmRLk.mp4"
    },
    {
        "task": "avvp",
        "audio_path": "assets/example/avvp/6aV6c8mY0lg.mp3",
        "video_path": "assets/example/avvp/6aV6c8mY0lg.mp4"
    },
    {
        "task": "arig",
        "audio_path": "assets/example/arig/audio.wav",
        "image_path": "assets/example/arig/1.jpg"
    },
    {
        "task": "s4",
        "audio_path": "assets/example/s4/audio.wav",
        "image_path": "assets/example/s4/0.jpg",
        "mask_path": "assets/example/s4/0.png"
    },
    {
        "task": "ms3",
        "audio_path": "assets/example/ms3/audio.wav",
        "image_path": "assets/example/ms3/1.jpg",
        "mask_path": "assets/example/ms3/1.png"
    },
    {
        "task": "ref-avs",
        "audio_path": "assets/example/ref-avs/audio.wav",
        "image_path": "assets/example/ref-avs/7.jpg",
        "mask_path": "assets/example/ref-avs/00007.png",
        "exp": "making the loudest sound"
    },
    {
        "task":"avss",
        "audio_path":"assets/example/avss/audio.wav",
        "image_path":"assets/example/avss/0.jpg",
        "mask_path":"assets/example/avss/0.png"
    }
]
  3. Infer.
  • For the MUSIC-AVQA task, set avqa_task = True and ckpt_dir = <your ckpt_dir> in scripts/quick_start.sh (a hypothetical excerpt of the script follows this list), then run:
bash scripts/quick_start.sh
  • For the S4, MS3, and Ref-AVS tasks, set <your task> = True and avs_ckpt_dir = <your avs_ckpt_dir>.

  • For the AVSS task, set avss_task = True and avs_ckpt_dir = <your avss_ckpt_dir>.
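
For reference, the task switches and checkpoint paths above are plain variables in the launch script. A hypothetical excerpt of scripts/quick_start.sh (only the variable names come from the steps above; the layout, and shell-style assignments without spaces, are assumptions):

# Hypothetical excerpt of scripts/quick_start.sh -- enable exactly one task
avqa_task=True                     # MUSIC-AVQA
ckpt_dir=/path/to/your/ckpt_dir    # finetuned weights from step 1
# For S4 / MS3 / Ref-AVS / AVSS, enable the matching task flag instead and set:
# avs_ckpt_dir=/path/to/your/avs_ckpt_dir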
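
Before launching, it can also help to sanity-check the data/example.json from step 2: that it parses, and that every referenced file exists. A minimal sketch, assuming jq is installed and the paths are relative to the repo root:

# Print each sample's task, audio input, and visual input, then verify
# that the referenced files exist on disk
jq -r '.[] | [.task, .audio_path, (.video_path // .image_path)] | @tsv' data/example.json |
while IFS=$'\t' read -r task audio visual; do
  for f in "$audio" "$visual"; do
    [ -f "$f" ] || echo "missing file for task '$task': $f"
  done
done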

🏗️ Training

  1. Pretrain
  • Use our pretrained checkpoints: download the audio pretrain checkpoint, visual pretrain checkpoint, and segmentation pretrain checkpoint into pretrain_ckpt_dir;

  • Or pretrain based on the LLaMA-7b-Chat-hf model: download the image and video pretrain datasets from Video-LLaVA; download the audio pretrain dataset from AudioCaps; download the segmentation pretrain dataset from LVIS.

    For visual pretrain, run:

    bash scripts/pretrain/pretrain_visual.sh

    For audio pretrain, run:

    bash scripts/pretrain/pretrain_audio.sh

    For segmentation pretrain, run:

    bash scripts/pretrain/pretrain_seg.sh
  2. Finetune. Download the AVUIE dataset annotations and raw data from AVE, AVVP, AVS, Ref-AVS, MUSIC-AVQA, and VALOR, then modify data_root in dataset/unified_dataset.py (a scripted sketch of this edit follows the list);
  3. Jointly train on all tasks:
bash scripts/finetune/finetun_hyper_lora.sh
  4. Jointly train on AVS tasks

Set finetune_ckpt_dir = <your finetune ckpt dir> (the checkpoint directory produced by step 3), then run:

bash scripts/finetune/finetune_hyper_lora_avs.sh
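
The data_root edit from step 2 can be scripted as well; a one-line sketch (the exact format of the assignment inside dataset/unified_dataset.py is an assumption, so check the file before running this):

# Point data_root at your local AVUIE data; verify the pattern matches the
# file first, since the assignment format here is assumed
sed -i 's|data_root = .*|data_root = "/path/to/AVUIE"|' dataset/unified_dataset.py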

🤖 Inference

bash scripts/finetune/inference_hyper_lora.sh

📑 Citation

If you find Crab useful for your research and applications, please cite using this BibTeX:

@article{du2025crab,
  title={Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation},
  author={Du, Henghui and Li, Guangyao and Zhou, Chang and Zhang, Chunjie and Zhao, Alan and Hu, Di},
  journal={arXiv preprint arXiv:2503.13068},
  year={2025}
}

🔒 License

This project is released under the Apache 2.0 license as found in the LICENSE file. Please get in touch with us if you find any potential violations.

About

[CVPR 2025] Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation
