FlagEval


Overview

FlagEval is an evaluation toolkit for AI large foundation models. Our goal is to explore and integrate scientific, fair, and open benchmarks, methods, and tools for foundation model evaluation. FlagEval will support multi-dimensional evaluation (such as accuracy, efficiency, and robustness) of foundation models within and across different modalities (such as NLP, audio, CV, and multimodal). We hope that evaluating foundation models deepens our understanding of them and promotes related technological innovation and industrial application.

1. mCLIPEval

mCLIPEval is an evaluation toolkit for vision-language models (such as CLIP, Contrastive Language–Image Pre-training).

  • Includes multilingual (12 languages) and monolingual (English/Chinese) datasets.
  • Supports zero-shot classification, zero-shot retrieval, and zero-shot composition tasks (the classification protocol is sketched after this list).
  • Adapts to FlagAI pretrained models (AltCLIP, EVA-CLIP), OpenCLIP pretrained models, Chinese CLIP models, Multilingual CLIP models, Taiyi Series pretrained models, or customized models.
  • Prepares data from various resources, such as torchvision, huggingface, kaggle, etc.
  • Visualizes evaluation results through leaderboard figures or tables, with detailed comparisons between two specific models.
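For orientation, the core measurement behind zero-shot classification looks roughly like the sketch below. This is not mCLIPEval's own API; it is a minimal illustration of the protocol using the open_clip library, and the model name, label set, and image path are assumptions chosen for the example.

import torch
import open_clip
from PIL import Image

# Load a CLIP model and its preprocessing transforms (illustrative choice,
# not mCLIPEval's entry point).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

labels = ["cat", "dog", "car"]  # hypothetical class names
prompts = [f"a photo of a {name}" for name in labels]

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical image
with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(tokenizer(prompts))
    # Cosine similarity between the image and every class prompt.
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(labels[probs.argmax().item()])  # predicted class for this image

Zero-shot accuracy on a benchmark is then simply the fraction of images whose highest-probability prompt matches the ground-truth label.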

How to use

Environment Preparation:

  • PyTorch version >= 1.8.0
  • Python version >= 3.8
  • For evaluating models on GPUs, you'll also need to install CUDA and NCCL (a quick environment check is sketched below).
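The following minimal check (assuming PyTorch is already installed) verifies these requirements:

import sys
import torch

# Verify the interpreter and PyTorch versions stated above.
assert sys.version_info >= (3, 8), "Python >= 3.8 is required"
major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (1, 8), "PyTorch >= 1.8.0 is required"

# CUDA availability only matters if you plan to evaluate on GPUs.
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())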

Steps:

git clone https://github.com/FlagOpen/FlagEval.git
cd FlagEval/mCLIPEval/
pip install -r requirements.txt

Please refer to mCLIPEval/README.md for more details.

2. ImageEval-prompt

ImageEval-prompt is a set of prompts for evaluating text-to-image (T2I) models at a fine-grained level, covering entities, styles, and details. By conducting comprehensive evaluations at this granularity, researchers can better understand the strengths and limitations of T2I models and further improve their performance.

  • Includes 1,624 English prompts and 339 Chinese prompts.
  • Each prompt is annotated using a "double-blind annotation & third-party arbitration" approach and divided into three dimensions (a hypothetical record layout is sketched after this list):
    • The entity dimension includes five sub-dimensions: object, state, color, quantity, and position;
    • The style dimension includes two sub-dimensions: painting style and cultural style;
    • The detail dimension includes four sub-dimensions: hands, facial features, gender, and illogical knowledge.
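For illustration, one annotated prompt could be represented as the record below. The field names and example values are assumptions made for this sketch, not the project's actual schema; refer to imageEval/README.md for the real format.

# Hypothetical shape of a single annotated ImageEval-prompt record.
annotated_prompt = {
    "prompt": "a red vintage car parked beside two bicycles",
    "language": "en",
    "entity": {  # five sub-dimensions
        "object": ["car", "bicycle"],
        "state": ["vintage", "parked"],
        "color": ["red"],
        "quantity": ["two"],
        "position": ["beside"],
    },
    "style": {  # two sub-dimensions
        "painting_style": None,
        "cultural_style": None,
    },
    "detail": {  # four sub-dimensions
        "hands": None,
        "facial_features": None,
        "gender": None,
        "illogical_knowledge": None,
    },
}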

Please refer to imageEval/README.md for more details.

Contact us

  • For help and issues associated with FlagEval, or to report a bug, please open a GitHub Issue or e-mail [email protected]. Let's build a better & stronger FlagEval together :)
  • We're hiring! If you are interested in working with us on foundation model evaluation, please contact [email protected].
  • Welcome to collaborate with FlagEval! New task or new dataset submissions are encouraged. If you are interested in contributing a new task, dataset, or tool to FlagEval, please contact [email protected].

The majority of FlagEval is licensed under the Apache 2.0 license; however, portions of the project are available under separate license terms.

Misc

↳ Stargazers, thank you for your support!


↳ Forkers, thank you for your support!


If you find our work helpful, please consider starring 🌟 this repo. Thanks for your support!
