IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-Modal Object Re-Identification
Yuhao Wang · Yongfeng Lv · Pingping Zhang* · Huchuan Lu
Figure 1: Motivation of IDEA.
Figure 2: Overall Framework of IDEA.
IDEA 🚀 is a novel multi-modal object Re-Identification (ReID) framework that leverages inverted text and cooperative deformable aggregation to address the challenges of complex scenarios in multi-modal imaging. By integrating semantic guidance from text annotations and adaptively aggregating discriminative local features, IDEA achieves state-of-the-art performance on multiple benchmarks.
- We released the IDEA codebase!
- Great news! Our paper has been accepted to CVPR 2025! 🏆
Multi-modal object Re-IDentification (ReID) aims to retrieve specific objects by utilizing complementary information from various modalities. However, existing methods often focus solely on fusing visual features while neglecting the potential benefits of text-based semantic information.
To address this issue, we propose IDEA, a novel feature learning framework comprising:
- Inverted Multi-modal Feature Extractor (IMFE): Integrates multi-modal features using Modal Prefixes and an InverseNet.
- Cooperative Deformable Aggregation (CDA): Adaptively aggregates discriminative local information by generating sampling positions.
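To give a flavor of what CDA-style aggregation can look like, below is a minimal, hypothetical PyTorch sketch that predicts per-location sampling offsets and gathers features with `grid_sample`. The module name, dimensions, number of sampling points, and offset scale are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAggregation(nn.Module):
    """Toy deformable aggregation: predict sampling offsets from the feature
    map, sample features at the shifted positions with grid_sample, and
    average them into one aggregated descriptor per location."""

    def __init__(self, dim, num_points=4):
        super().__init__()
        self.num_points = num_points
        # Predict (dx, dy) for each sampling point from local features.
        self.offset_head = nn.Conv2d(dim, 2 * num_points, kernel_size=3, padding=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                                # x: (B, C, H, W)
        B, C, H, W = x.shape
        offsets = torch.tanh(self.offset_head(x))        # (B, 2*P, H, W), in [-1, 1]
        offsets = offsets.view(B, self.num_points, 2, H, W)

        # Base sampling grid in normalized coordinates [-1, 1], (x, y) order.
        ys = torch.linspace(-1, 1, H, device=x.device)
        xs = torch.linspace(-1, 1, W, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack((gx, gy), dim=-1)              # (H, W, 2)

        sampled = []
        for p in range(self.num_points):
            off = offsets[:, p].permute(0, 2, 3, 1)       # (B, H, W, 2)
            grid = base.unsqueeze(0) + 0.1 * off          # small learned shifts
            sampled.append(F.grid_sample(x, grid, align_corners=True))
        agg = torch.stack(sampled, dim=0).mean(dim=0)     # (B, C, H, W)
        return self.proj(agg)

# Usage: aggregate a dummy (batch, channels, 16, 8) feature map.
feat = torch.randn(2, 256, 16, 8)
print(DeformableAggregation(dim=256)(feat).shape)  # torch.Size([2, 256, 16, 8])
```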
Additionally, we construct three text-enhanced multi-modal object ReID benchmarks using a standardized pipeline for structured and concise text annotations with Multi-modal Large Language Models (MLLMs). 📝
- Constructed three text-enhanced multi-modal object ReID benchmarks, providing a structured caption generation pipeline across multiple spectral modalities.
- Introduced IDEA, a novel feature learning framework with two key components:
- IMFE: Integrates multi-modal features using Modal Prefixes and an InverseNet (a rough sketch of the prefix idea follows this list).
- CDA: Adaptively aggregates discriminative local information.
- Validated the effectiveness of our approach through extensive experiments on three benchmark datasets.
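The Modal Prefix idea referenced above can be pictured as a small set of learnable tokens prepended to each modality's token sequence before a shared encoder fuses all modalities. The sketch below is only one possible realization under that assumption; the actual IMFE and InverseNet designs follow the paper and the released code.

```python
import torch
import torch.nn as nn

class ModalPrefixFusion(nn.Module):
    """Rough sketch: each modality gets learnable prefix tokens that are
    prepended to its sequence before a shared transformer fuses everything."""

    def __init__(self, dim=768, num_modalities=3, prefix_len=4, depth=2):
        super().__init__()
        self.prefixes = nn.Parameter(torch.randn(num_modalities, prefix_len, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens_per_modality):
        # tokens_per_modality: list of (B, N_m, dim) tensors, one per modality.
        B = tokens_per_modality[0].size(0)
        seqs = []
        for m, tok in enumerate(tokens_per_modality):
            prefix = self.prefixes[m].unsqueeze(0).expand(B, -1, -1)  # (B, P, dim)
            seqs.append(torch.cat([prefix, tok], dim=1))
        fused = self.encoder(torch.cat(seqs, dim=1))  # joint attention over all modalities
        return fused.mean(dim=1)                      # single fused descriptor

# Usage with dummy RGB / NIR / TIR token sequences.
rgb, nir, tir = (torch.randn(2, 16, 768) for _ in range(3))
print(ModalPrefixFusion()([rgb, nir, tir]).shape)  # torch.Size([2, 768])
```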
- RGBNT201: Google Drive
- RGBNT100: Baidu Pan (Code: `rjin`)
- MSVR310: Google Drive
- Annotations: QwenVL_Anno
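If you prefer to generate captions yourself rather than use the released QwenVL_Anno files, the snippet below is a generic sketch of prompting Qwen-VL-Chat through Hugging Face `transformers` for a concise description. The image path and prompt are placeholders, and the repository's own annotation scripts may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic Qwen-VL-Chat captioning sketch (not the repository's exact pipeline).
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", trust_remote_code=True
).eval()  # move to GPU for reasonable speed

# Hypothetical image path and prompt; adapt both to your data.
query = tokenizer.from_list_format([
    {"image": "DATA/RGBNT201/train_171/some_identity/some_image.jpg"},
    {"text": "Describe this person's appearance in one concise sentence."},
])
caption, _ = model.chat(tokenizer, query=query, history=None)
print(caption)
```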
```
IDEA_Codes
├── PTH                          # Pre-trained models
│   └── ViT-B-16.pt              # CLIP model
├── DATA                         # Dataset root directory
│   ├── RGBNT201                 # RGBNT201 dataset
│   │   ├── train_171            # Training images (171 classes)
│   │   ├── test                 # Testing images
│   │   └── text                 # Text annotations
│   │       ├── train_RGB.json   # Training annotations
│   │       ├── test_RGB.json    # Testing annotations
│   │       └── ...              # Other annotations
│   ├── RGBNT100                 # RGBNT100 dataset
│   └── MSVR310                  # MSVR310 dataset
├── assets                       # GitHub assets
├── config                       # Configuration files
├── QwenVL_Anno                  # Annotation scripts (put the generated annotations in the DATA folder)
└── ...                          # Other project files
```
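Once the annotation JSONs are in place under `DATA/<dataset>/text`, pairing them with images is straightforward. The exact schema is defined by the released files, so the loader below only illustrates the idea under the assumption of a simple image-name to caption mapping.

```python
import json
from pathlib import Path

# Illustrative only: assumes train_RGB.json maps image file names to captions.
data_root = Path("DATA/RGBNT201")
with open(data_root / "text" / "train_RGB.json", encoding="utf-8") as f:
    captions = json.load(f)

samples = []
for img_path in sorted((data_root / "train_171").rglob("*.jpg")):
    samples.append({
        "image": str(img_path),
        "caption": captions.get(img_path.name),  # None if no annotation exists
    })
print(f"Loaded {len(samples)} image-caption pairs")
```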
- CLIP: Baidu Pan (Code: `52fu`)
- RGBNT201: `configs/RGBNT201/IDEA.yml`
- RGBNT100: `configs/RGBNT100/IDEA.yml`
- MSVR310: `configs/MSVR310/IDEA.yml`
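To quickly browse what a config exposes before launching a run, a generic YAML dump works; note that the project parses these files with its own loader during training, so this is purely for inspection.

```python
import yaml  # pip install pyyaml

# Print the top-level options of one of the provided configs.
with open("configs/RGBNT201/IDEA.yml") as f:
    cfg = yaml.safe_load(f)
for key, value in cfg.items():
    print(f"{key}: {value}")
```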
```bash
# Create and activate the conda environment
conda create -n IDEA python=3.10.13
conda activate IDEA

# Install PyTorch (CUDA 11.8 build) and the remaining dependencies
pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118
cd ../IDEA_PUBLIC
pip install --upgrade pip
pip install -r requirements.txt
```
```bash
# Train on RGBNT201; swap the config file for RGBNT100 or MSVR310
python train.py --config_file ./configs/RGBNT201/IDEA.yml
```
If you find IDEA helpful in your research, please consider citing:
```bibtex
@inproceedings{wang2025idea,
  title={IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-Modal Object Re-Identification},
  author={Wang, Yuhao and Lv, Yongfeng and Zhang, Pingping and Lu, Huchuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}
```