AutoMMLab

Automatically generating deployable models from language instructions for computer vision tasks

📖 Overview

AutoMMLab is the first request-to-model AutoML platform for computer vision tasks: it understands a user's natural-language request and executes the entire workflow to output production-ready models. The AutoMMLab pipeline consists of five main stages, including request understanding, data selection, model selection, model training with hyperparameter optimization (HPO), and model deployment. Based on AutoMMLab, we build a benchmark termed LAMP for evaluating end-to-end prompt-based model production and for studying each component of the production pipeline.
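A schematic sketch of this five-stage flow is shown below; every function name is an illustrative placeholder, not the actual AutoMMLab API.

# Schematic sketch of the request-to-model pipeline; each placeholder function
# stands in for the corresponding AutoMMLab stage.
def understand_request(request):      # 1. request understanding (RU-LLaMA)
    return {'task': 'classification', 'target': 'cats vs. dogs'}

def select_data(req):                 # 2. data selection from the dataset zoo
    return 'ImageNet subset'

def select_model(req):                # 3. model selection from the model zoo
    return 'ResNet-50'

def train_with_hpo(model, data):      # 4. training with hyperparameter optimization (HPO-LLaMA)
    return f'{model} trained on {data}'

def deploy(trained_model):            # 5. model deployment (e.g. export via MMDeploy)
    return f'{trained_model}, exported'

def request_to_model(request):
    req = understand_request(request)
    return deploy(train_with_hpo(select_model(req), select_data(req)))

print(request_to_model('Train a model that distinguishes cats from dogs.'))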

🎉 News

Jan. 29, 2024: AutoMMLab is now open source.

💻️ Get Started

Install the environment

# download the code
git clone git@github.com:yang-ze-kang/AutoMMLab.git

# create python environment
cd AutoMMLab
conda create -n autommlab python=3.9
source activate autommlab
pip install -r requirements.txt

Initialize the dataset zoo

Download the datasets of the dataset zoo:

Task | Dataset    | URL
Cls. | ImageNet   | https://www.image-net.org/challenges/LSVRC/index.php
Det. | COCO       | https://cocodataset.org/#download
Seg. | Cityscapes | https://www.cityscapes-dataset.com/
Kpt. | COCO       | https://cocodataset.org/#download
Kpt. | AP-10K     | https://github.com/AlexTheBad/AP-10K?tab=readme-ov-file#download

Then update the dataset paths in the configuration file (autommlab/configs.py) to point to your downloaded copies.

DATASET_ZOO = {
    'ImageNet':'sh1984:s3://openmmlab/datasets/classification/imagenet',
    'COCO':'sh1984:s3://openmmlab/datasets/detection/coco',
    'object365': 'sh1984:s3://openmmlab/datasets/detection/Objects365',
    'openimage': 'sh1984:s3://openmmlab/datasets/detection/coco',
    'cityscapes':'s3://openmmlab/datasets/segmentation/cityscapes',
    'ap10k':'sh1986:s3://ap10k/ap-10k/'
}
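For example, a local-path version of DATASET_ZOO might look like the following; the /data/... directories are placeholders for wherever you downloaded the datasets.

import os

DATASET_ZOO = {
    'ImageNet':   '/data/imagenet',      # placeholder local paths
    'COCO':       '/data/coco',
    'cityscapes': '/data/cityscapes',
    'ap10k':      '/data/ap-10k',
}

# Quick sanity check that every configured directory exists.
for name, path in DATASET_ZOO.items():
    if not os.path.isdir(path):
        print(f'warning: dataset "{name}" not found at {path}')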

RU-LLaMA and HPO-LLaMA

  1. Download the base model and LoRA weights:
     Base model: https://huggingface.co/meta-llama/Llama-2-7b-hf/tree/main
     LoRA weights: https://drive.google.com/file/d/136jt458c6rMOHwDwwVS6U4iVrX9cmo1p/view?usp=drive_link

  2. Set your paths in the configuration file (autommlab/configs.py):

    PATH_LLAMA2 = 'llama_weights/llama-2-7b-hf'
    PATH_LORAS = {
        'ru-llama2':'weights/llama2_lora_weights/save_dir_reqparse_v2',
        'hpo-llama2-classification':'weights/llama2_lora_weights/hpo_classification',
        'hpo-llama2-detection':'weights/llama2_lora_weights/hpo_detection',
        'hpo-llama2-segmentation':'weights/llama2_lora_weights/hpo_segmentation',
        'hpo-llama2-pose':'weights/llama2_lora_weights/hpo_pose'
    }
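As a quick check that the weights are in place, the base model and a LoRA adapter can be loaded with transformers and peft. This is a minimal sketch assuming the adapters are stored in standard PEFT format, not the exact loading code used by AutoMMLab.

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_path = 'llama_weights/llama-2-7b-hf'                       # PATH_LLAMA2
lora_path = 'weights/llama2_lora_weights/save_dir_reqparse_v2'  # RU-LLaMA adapter

tokenizer = LlamaTokenizer.from_pretrained(base_path)
model = LlamaForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16, device_map='auto')
model = PeftModel.from_pretrained(model, lora_path)  # attach the LoRA weights
model.eval()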

Setting the configuration

Please edit the file 'autommlab/configs.py' to modify the demo configuration.

URL_LLAMA = "http://127.0.0.1:10068/llama2"
TRAIN_GPU_NUM = 1
RU_MODEL = 'ru-llama2'
HPO_MODEL = 'hpo-llama2'
HPO_MAX_TRY = 3
TENSORBOARD_PORT = 10066
IP_ADDRESS = 'localhost'

Start Demo

export PYTHONPATH=$PYTHONPATH:$(pwd)

# step 1
# If you use RU-LLaMA and HPO-LLaMA, please deploy them first.
CUDA_VISIBLE_DEVICES=0 python autommlab/models/deploy_llama.py 

# step 2
CUDA_VISIBLE_DEVICES=1 python autommlab/main.py 

📺 Demo

demo.mp4

🤝 Acknowledgments

  • MMEngine: OpenMMLab foundational library for training deep learning models.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMPreTrain: OpenMMLab pre-training toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMDeploy: OpenMMLab model deployment framework.

⚖️ License

The code and data are freely available for non-commercial use and may be redistributed under these conditions. For commercial inquiries, please contact Mr. Sheng Jin ([email protected]); we will send you the detailed agreement.

📝 Citation

To cite AutoMMLab in publications, please use the following BibTeX entry.

@misc{yang2024autommlabautomaticallygeneratingdeployable,
      title={AutoMMLab: Automatically Generating Deployable Models from Language Instructions for Computer Vision Tasks}, 
      author={Zekang Yang and Wang Zeng and Sheng Jin and Chen Qian and Ping Luo and Wentao Liu},
      year={2024},
      eprint={2402.15351},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2402.15351}, 
}
