
IT-Prompt

Official PyTorch code for "Prompt-based Continual Learning for Extending Pretrained CLIP Models' Knowledge (ACMMM Asia 2024)".

Abstract

The Contrastive Language-Image Pretraining (CLIP) model has demonstrated remarkable performance and strong zero-shot capabilities through contrastive training on text-image datasets. This has sparked interest in developing continual learning methods that extend CLIP's knowledge to new datasets. However, traditional continual learning approaches often modify the original parameters of the pretrained CLIP model and consequently compromise its zero-shot capabilities. In addition, CLIP's substantial parameter count makes traditional continual learning methods costly due to lengthy training times. To tackle these challenges, we propose Image Text (IT-)Prompt, which leverages the inherent correlation between visual and textual information to train discrete prompts dedicated to individual tasks, serving as repositories for task-specific knowledge. By employing discrete textual prompts as guidance, we ensure that each task's prompt is unique and prevent interference among tasks, thus alleviating catastrophic forgetting during continual learning. While keeping the pretrained parameters of CLIP intact, our approach introduces only a small number of additional trainable parameters, which improves training efficiency and preserves CLIP's original zero-shot capabilities. Comparative experiments show that IT-Prompt achieves a performance improvement of at least 10% over state-of-the-art methods.
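The core idea above (one frozen backbone, one small trainable prompt per task, no cross-task updates) can be illustrated with a toy sketch. This is NOT the paper's implementation; all names and the update rule are illustrative assumptions, and the CLIP backbone is not modeled:

```python
# Toy sketch (illustrative only, not the IT-Prompt implementation):
# each task owns a separate trainable prompt; prompts of earlier
# tasks are never touched, so there is no interference between tasks.
import numpy as np

class TaskPromptPool:
    """Stores one prompt vector per task; only the current task's prompt is trained."""

    def __init__(self, dim):
        self.dim = dim
        self.prompts = {}  # task_id -> prompt vector (the only trainable parameters)

    def add_task(self, task_id):
        # A new task gets its own fresh prompt; existing prompts stay untouched.
        self.prompts[task_id] = np.zeros(self.dim)

    def train_step(self, task_id, grad, lr=0.1):
        # Gradient descent on the current task's prompt only;
        # the pretrained backbone (not modeled here) remains frozen.
        self.prompts[task_id] -= lr * grad

pool = TaskPromptPool(dim=4)
pool.add_task(0)
pool.train_step(0, grad=np.ones(4))
frozen_copy = pool.prompts[0].copy()

pool.add_task(1)                    # a new task arrives during continual learning
pool.train_step(1, grad=np.ones(4))

# The prompt for task 0 is unchanged: training task 1 cannot overwrite it.
assert np.allclose(pool.prompts[0], frozen_copy)
```

Because each task's knowledge lives in its own prompt and the backbone is never updated, forgetting of earlier tasks is avoided by construction in this toy model.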

Requirements

  • python=3.8.18
  • torch=2.0.0+cu118
  • torchvision=0.15.1+cu118
  • timm=0.9.12
  • scikit-learn=1.3.2
  • numpy
  • pyaml
  • pillow
  • opencv-python
  • pandas
  • openpyxl (to write results to an .xlsx file)
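The dependency list above can be captured in a `requirements.txt`. Note this file is an assumption (the repository does not ship one), and the `+cu118` builds must be installed from the PyTorch CUDA 11.8 wheel index:

```
# requirements.txt -- versions taken from the list above;
# torch/torchvision +cu118 wheels require the PyTorch CUDA 11.8 index
torch==2.0.0+cu118
torchvision==0.15.1+cu118
timm==0.9.12
scikit-learn==1.3.2
numpy
pyaml
pillow
opencv-python
pandas
openpyxl
```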

Datasets

  • Create a folder datasets/

Checkpoints

  • Create a folder pretrained/
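The folder layout described above (plus the `output/` folder mentioned under Results) can be created in one step; the commands are a convenience sketch, not part of the repository:

```shell
# Create the folders for datasets, pretrained checkpoints, and results
mkdir -p datasets pretrained output
```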

Training

The complete code will be uploaded as soon as possible.

Results

Results will be saved in a folder named output/.

Reference Codes

[1] HiDe-Prompt

Citation

If you find this repository useful, please cite the following reference:

@inproceedings{jiao2024,
  title={Prompt-based Continual Learning for Extending Pretrained CLIP Models' Knowledge},
  author={Jiao, Li and Cao, Lihong and Wang, Tian},
  booktitle={ACMMM Asia},
  year={2024}
}
