
IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-Modal Object Re-Identification


Yuhao Wang · Yongfeng Lv · Pingping Zhang* · Huchuan Lu

CVPR 2025 Paper


Figure 1: Motivation of IDEA.


Figure 2: Overall Framework of IDEA.

Abstract 📝

IDEA 🚀 is a novel multi-modal object Re-Identification (ReID) framework that leverages inverted text and cooperative deformable aggregation to address the challenges of complex scenarios in multi-modal imaging. By integrating semantic guidance from text annotations and adaptively aggregating discriminative local features, IDEA achieves state-of-the-art performance on multiple benchmarks.


News 📢

  • We released the IDEA codebase!
  • Great news! Our paper has been accepted to CVPR 2025! 🏆

Introduction 🌟

Multi-modal object Re-IDentification (ReID) aims to retrieve specific objects by utilizing complementary information from various modalities. However, existing methods often focus solely on fusing visual features while neglecting the potential benefits of text-based semantic information.

To address this issue, we propose IDEA, a novel feature learning framework comprising:

  1. Inverted Multi-modal Feature Extractor (IMFE): Integrates multi-modal features using Modal Prefixes and an InverseNet.
  2. Cooperative Deformable Aggregation (CDA): Adaptively aggregates discriminative local information by generating sampling positions.
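The core idea behind CDA — predicting sampling positions and aggregating the features found there — can be illustrated with a minimal NumPy sketch. This is illustrative only, not the paper's implementation: the reference points, offsets, and weights here are given by hand, whereas in IDEA they would be produced by the network.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample a (H, W, C) feature map at fractional coords (y, x)."""
    H, W, _ = feat.shape
    y, x = float(np.clip(y, 0, H - 1)), float(np.clip(x, 0, W - 1))
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0] + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0] + wy * wx * feat[y1, x1])

def deformable_aggregate(feat, ref_points, offsets, weights):
    """Sample feat at each reference point shifted by its offset,
    then take the weighted sum of the sampled features."""
    out = np.zeros(feat.shape[-1])
    for (ry, rx), (dy, dx), w in zip(ref_points, offsets, weights):
        out += w * bilinear_sample(feat, ry + dy, rx + dx)
    return out

# toy example: a 4x4 map with 3 channels, aggregated at two shifted points
feat = np.random.rand(4, 4, 3)
agg = deformable_aggregate(feat, [(1.0, 1.0), (2.0, 2.0)],
                           [(0.5, -0.25), (0.0, 0.3)], [0.5, 0.5])
print(agg.shape)  # (3,)
```

Because the sampling positions are fractional, bilinear interpolation keeps the aggregation differentiable with respect to the offsets, which is what lets such offsets be learned end-to-end.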

Additionally, we construct three text-enhanced multi-modal object ReID benchmarks, using a standardized pipeline built on Multi-modal Large Language Models (MLLMs) to generate structured and concise text annotations. 📝
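As a rough illustration of such a caption pipeline, the sketch below collects one structured caption per image into a JSON-serializable dict. Here `caption_image` is a hypothetical stand-in for the actual MLLM call (e.g. Qwen-VL), and the field name is an assumption, not the benchmarks' real annotation schema:

```python
import json

def caption_image(path):
    # hypothetical stand-in for an MLLM call (e.g. Qwen-VL); a real pipeline
    # would send the image plus a fixed prompt template to the model
    return {"description": f"structured caption for {path}"}

def build_annotations(image_paths):
    """Map each image path to its structured caption."""
    return {p: caption_image(p) for p in image_paths}

anns = build_annotations(["0001_RGB.jpg", "0002_RGB.jpg"])
print(json.dumps(anns, indent=2))
```

Using one fixed prompt template across all images is what keeps the resulting captions structured and mutually comparable.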


Contributions

  • Constructed three text-enhanced multi-modal object ReID benchmarks, providing a structured caption generation pipeline across multiple spectral modalities.
  • Introduced IDEA, a novel feature learning framework with two key components:
    • IMFE: Integrates multi-modal features using Modal Prefixes and an InverseNet.
    • CDA: Adaptively aggregates discriminative local information.
  • Validated the effectiveness of our approach through extensive experiments on three benchmark datasets.

Quick View 📊

Dataset Examples

Overview of Annotations


Multi-modal Person ReID Annotations Example


Multi-modal Vehicle ReID Annotations Example


Experimental Results

Multi-Modal Person ReID


Multi-Modal Vehicle ReID


Parameter Analysis



Visualizations 🖼️

Offsets Visualization


Cosine Similarity Visualization


Semantic Guidance Visualization


Rank-list Visualization

Multi-modal Person ReID


Multi-modal Vehicle ReID



Quick Start 🚀

Datasets

Codebase Structure

```
IDEA_Codes
├── PTH                           # Pre-trained models
│   └── ViT-B-16.pt               # CLIP model
├── DATA                          # Dataset root directory
│   ├── RGBNT201                  # RGBNT201 dataset
│   │   ├── train_171             # Training images (171 identities)
│   │   ├── test                  # Testing images
│   │   ├── text                  # Annotations
│   │   │   ├── train_RGB.json    # Training annotations
│   │   │   ├── test_RGB.json     # Testing annotations
│   │   │   └── ...               # Other annotations
│   ├── RGBNT100                  # RGBNT100 dataset
│   └── MSVR310                   # MSVR310 dataset
├── assets                        # GitHub assets
├── config                        # Configuration files
├── QwenVL_Anno                   # Annotation tools (PUT YOUR GENERATED ANNOTATIONS IN THE DATA FOLDER)
└── ...                           # Other project files
```
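A small helper (not part of the codebase; paths taken from the tree above, RGBNT201 shown) can verify that the expected layout is in place before launching training:

```python
import os

# required paths from the layout above; extend for RGBNT100 / MSVR310
REQUIRED = [
    "PTH/ViT-B-16.pt",
    "DATA/RGBNT201/train_171",
    "DATA/RGBNT201/test",
    "DATA/RGBNT201/text/train_RGB.json",
    "DATA/RGBNT201/text/test_RGB.json",
]

def missing_paths(root):
    """Return the required paths that do not exist under `root`."""
    return [p for p in REQUIRED if not os.path.exists(os.path.join(root, p))]

print(missing_paths("."))  # an empty list means the layout is complete
```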

Pretrained Models

Configuration

  • RGBNT201: configs/RGBNT201/IDEA.yml
  • RGBNT100: configs/RGBNT100/IDEA.yml
  • MSVR310: configs/MSVR310/IDEA.yml
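For scripting over the three benchmarks, a small lookup (a convenience sketch, not part of the released code) keeps the config paths from the list above in one place:

```python
# config paths from the list above
CONFIGS = {
    "RGBNT201": "configs/RGBNT201/IDEA.yml",
    "RGBNT100": "configs/RGBNT100/IDEA.yml",
    "MSVR310": "configs/MSVR310/IDEA.yml",
}

def config_for(dataset):
    """Resolve a dataset name to its IDEA config file."""
    if dataset not in CONFIGS:
        raise ValueError(f"unknown dataset {dataset!r}; choose from {sorted(CONFIGS)}")
    return CONFIGS[dataset]

print(config_for("MSVR310"))  # configs/MSVR310/IDEA.yml
```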

Training

```shell
conda create -n IDEA python=3.10.13
conda activate IDEA
pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118
cd ../IDEA_PUBLIC
pip install --upgrade pip
pip install -r requirements.txt
python train.py --config_file ./configs/RGBNT201/IDEA.yml
```

Star History 🌟



Citation 📚

If you find IDEA helpful in your research, please consider citing:

```bibtex
@inproceedings{wang2025idea,
  title={IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-Modal Object Re-Identification},
  author={Wang, Yuhao and Lv, Yongfeng and Zhang, Pingping and Lu, Huchuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}
```
