
feat(benchmarks) Add LLM evaluation pipeline for general NLP challenge #3767

Merged
merged 50 commits into main from add-llm-nlp-eval on Sep 2, 2024
Commits
50 commits
6256eb4
Init top-level readme & add general nlp repo
yan-gao-GY Jul 10, 2024
e4abc5f
Update readme title
yan-gao-GY Jul 10, 2024
9d17846
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Jul 10, 2024
aac2c69
Update benchmarks/flowertune-llm/evaluation/README.md
yan-gao-GY Jul 11, 2024
1a597e9
Update benchmarks/flowertune-llm/evaluation/README.md
yan-gao-GY Jul 11, 2024
fbaf264
Update benchmarks/flowertune-llm/evaluation/README.md
yan-gao-GY Jul 11, 2024
00da01d
Update benchmarks/flowertune-llm/evaluation/README.md
yan-gao-GY Jul 11, 2024
a7e6978
Update benchmarks/flowertune-llm/evaluation/general-nlp/README.md
yan-gao-GY Jul 11, 2024
7d4da48
Update readme
yan-gao-GY Jul 11, 2024
7389f38
Update readmes
yan-gao-GY Jul 12, 2024
4080a8b
Update code results
yan-gao-GY Jul 15, 2024
da7444d
Update readme
yan-gao-GY Jul 15, 2024
4debbec
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Jul 15, 2024
d4a91ab
Update readme
yan-gao-GY Jul 15, 2024
b2ec10d
Merge branch 'add-llm-nlp-eval' of https://github.com/adap/flower int…
yan-gao-GY Jul 15, 2024
78a25d7
Update readmes
yan-gao-GY Jul 17, 2024
4ec8f67
Merge branch 'main' into add-llm-nlp-eval
jafermarq Jul 26, 2024
f699003
Merge branch 'main' into add-llm-nlp-eval
jafermarq Jul 31, 2024
2cd9d6f
refactor(benchmarks) Modify parts of benchmark readme files (#3950)
jafermarq Aug 1, 2024
cb300fd
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Aug 6, 2024
9767910
Update banner
yan-gao-GY Aug 6, 2024
94938b1
Simplify evaluation code
yan-gao-GY Aug 6, 2024
6987890
Formatting
yan-gao-GY Aug 6, 2024
34243ae
Update evaluation code
yan-gao-GY Aug 7, 2024
4675759
Update evaluation code
yan-gao-GY Aug 7, 2024
421d8a0
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Aug 7, 2024
3e94c3d
Update readme
yan-gao-GY Aug 7, 2024
3b9fe88
Merge branch 'add-llm-nlp-eval' of https://github.com/adap/flower int…
yan-gao-GY Aug 7, 2024
a58ea64
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Aug 7, 2024
b1e18d8
Merge branch 'main' into add-llm-nlp-eval
danieljanes Aug 8, 2024
4f4eb57
Replace pyproject.toml with requirements.txt
yan-gao-GY Aug 8, 2024
5cf9bab
Merge branch 'main' into add-llm-nlp-eval
danieljanes Aug 9, 2024
eace06c
Apply suggestions from code review
danieljanes Aug 9, 2024
f4d6ceb
Update top readme
yan-gao-GY Aug 13, 2024
3130540
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Aug 13, 2024
864ebdd
Update eval readme
yan-gao-GY Aug 13, 2024
5f7e05c
Merge branch 'main' into add-llm-nlp-eval
jafermarq Aug 15, 2024
9aea5f2
Update license
yan-gao-GY Aug 22, 2024
13aeac0
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Aug 22, 2024
23d4e5a
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Aug 23, 2024
762eafc
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Aug 23, 2024
668b825
Update finance results
yan-gao-GY Aug 23, 2024
cfa7fde
Formatting
yan-gao-GY Aug 23, 2024
0b20181
Fix typo
yan-gao-GY Aug 23, 2024
8bb8214
Update data downloading
yan-gao-GY Sep 2, 2024
05f075b
Merge branch 'main' into add-llm-nlp-eval
yan-gao-GY Sep 2, 2024
f7ecd3e
Merge branch 'main' into add-llm-nlp-eval
jafermarq Sep 2, 2024
30d6a3f
Apply suggestions from code review
jafermarq Sep 2, 2024
0d06df5
Update benchmarks/flowertune-llm/evaluation/README.md
jafermarq Sep 2, 2024
b787dab
Merge branch 'main' into add-llm-nlp-eval
jafermarq Sep 2, 2024
39 changes: 39 additions & 0 deletions benchmarks/flowertune-llm/evaluation/README.md
@@ -0,0 +1,39 @@
# FlowerTune LLM Evaluation

This directory provides various evaluation metrics to measure the quality of your trained LLMs.
As the final step to participate in the [LLM Leaderboard](https://flower.ai/benchmarks/llm-leaderboard#how-to-participate),
the evaluation scores generated here will be displayed as the definitive values on the leaderboard.

## How to run
Please check out the directory corresponding to your selected challenge (general NLP, finance, medical, or code) to learn how to run the evaluation.

> [!NOTE]
> If you wish to participate in the LLM Leaderboard, you must not modify the evaluation code and should use the exact command provided in the respective directory to run the evaluation.


## Expected results
The default template generated by `flwr new` for each challenge produces the following results, which serve as the lower bound on the LLM Leaderboard.

### General NLP

| | MT-1 | MT-2 | MT-Avg |
|:--------:|:----:|:----:|:------:|
| MT Score | 5.54 | 5.52 | 5.53 |

### Finance

| | FPB | FIQA | TFNS | Avg |
|:-------:|:-----:|:-----:|:-----:|:-----:|
| Acc (%) | 44.55 | 63.64 | 28.77 | 45.65 |

### Medical

| | PubMedQA | MedMCQA | MedQA | Avg |
|:-------:|:--------:|:-------:|:-----:|:-----:|
| Acc (%) | 59.00 | 23.69 | 27.10 | 36.60 |

### Code

| | MBPP | HumanEval | Multiple (JS) | Multiple (C++) | Avg |
|:----------:|:-----:|:---------:|:-------------:|:--------------:|:-----:|
| Pass@1 (%) | 31.40 | 25.00 | 31.68 | 24.84 | 28.23 |
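
For reference, the `Avg` columns above are plain arithmetic means of the per-task scores. A minimal sketch (not part of the evaluation code) that reproduces the baseline averages:

```python
# Sanity check (not part of the evaluation code): the "Avg" columns above are
# plain arithmetic means of the per-task scores.
baseline_scores = {
    "General NLP": [5.54, 5.52],            # MT-1, MT-2
    "Finance": [44.55, 63.64, 28.77],       # FPB, FIQA, TFNS
    "Medical": [59.00, 23.69, 27.10],       # PubMedQA, MedMCQA, MedQA
    "Code": [31.40, 25.00, 31.68, 24.84],   # MBPP, HumanEval, Multiple (JS), Multiple (C++)
}

for challenge, scores in baseline_scores.items():
    print(f"{challenge}: Avg = {sum(scores) / len(scores):.2f}")
```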
49 changes: 49 additions & 0 deletions benchmarks/flowertune-llm/evaluation/general-nlp/README.md
@@ -0,0 +1,49 @@
## Evaluation for General NLP challenge

We leverage the MT-bench metric provided by [FastChat](https://github.com/lm-sys/FastChat) to evaluate our trained LLMs.
MT-bench represents a comprehensive suite of multi-turn, open-ended questions designed to evaluate chat assistants.
Strong LLMs, such as GPT-4, serve as judges to assess the quality of responses provided by the chat assistants under examination.

### Step 0. Set up environment

```shell
git clone --depth=1 https://github.com/adap/flower.git && mv flower/benchmarks/flowertune-llm/evaluation/general-nlp . && rm -rf flower && cd general-nlp
```

Then, install dependencies with:

```shell
# From a new Python environment, run:
pip install -e .

# Log in to your Hugging Face account
huggingface-cli login
```
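
Optionally, you can confirm that the Hugging Face login succeeded before running the evaluation. A quick check (assuming `huggingface_hub` is available in the environment, which `huggingface-cli` already provides):

```python
# Optional sanity check (not part of the evaluation code): confirm that the
# Hugging Face login above succeeded before running the evaluation scripts.
from huggingface_hub import whoami

try:
    print(f"Logged in as: {whoami()['name']}")
except Exception as err:
    print(f"Not logged in: {err}")
```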


### Step 1. Generate model answers to MT-bench questions

```bash
python gen_model_answer.py --peft-path=/path/to/pre-trained-model-dir/ # e.g., ./peft_1
```
The answers will be saved to `data/mt_bench/model_answer/[base_model_name].jsonl` by default.
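
Under the hood, `gen_model_answer.py` loads the PEFT adapter referenced by `--peft-path` on top of its base model before generating answers. The following is only a rough sketch of that loading step; the exact implementation in `gen_model_answer.py` may differ, and the path and prompt are placeholders.

```python
# Rough sketch of the PEFT loading step inside gen_model_answer.py; the actual
# implementation may differ. The path and prompt below are placeholders.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

peft_path = "./peft_1"  # placeholder for your --peft-path

# Reads the adapter config, loads the base model it was trained from,
# and attaches the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    peft_path, torch_dtype=torch.float16, device_map="auto"
)
base_model_name = model.peft_config["default"].base_model_name_or_path
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

inputs = tokenizer("What is federated learning?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```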


### Step 2. Generate judgments using GPT-4
```bash
export OPENAI_API_KEY=XXXXXX # set the OpenAI API key
python gen_judgement.py --model-list Mistral-7B-v0.3
```

You can specify the base model name via `--model-list`.
The judgments will be saved to `data/mt_bench/model_judgment/gpt-4_single.jsonl` by default.
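
Conceptually, each judgment is a single GPT-4 call: a judge prompt is filled with the MT-bench question and your model's answer, and the numeric score is parsed from the `[[rating]]` pattern in the reply. Below is a simplified sketch of one such call; the real `gen_judgement.py` (from FastChat) also handles reference answers, multi-turn prompts, and retries, and the question and answer used here are placeholders.

```python
# Simplified sketch of a single-answer judgment call; the real gen_judgement.py
# (from FastChat) also handles reference answers, multi-turn prompts, and retries.
import os
import re
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Placeholders: in practice, the question comes from MT-bench and the answer
# from data/mt_bench/model_answer/[base_model_name].jsonl.
question = "Explain the difference between TCP and UDP."
answer = "TCP is connection-oriented and reliable; UDP is connectionless but faster."

# Abbreviated version of the "single-v1" judge prompt shown further below.
user_prompt = (
    "[Instruction]\nPlease act as an impartial judge and evaluate the quality of the "
    "response provided by an AI assistant to the user question displayed below. "
    "After providing your explanation, you must rate the response on a scale of 1 to 10 "
    'by strictly following this format: "[[rating]]".\n\n'
    f"[Question]\n{question}\n\n"
    f"[The Start of Assistant's Answer]\n{answer}\n[The End of Assistant's Answer]"
)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ],
)
judgment = response.choices[0].message.content

# The numeric score is parsed from the "[[rating]]" pattern in the reply.
match = re.search(r"\[\[(\d+(?:\.\d+)?)\]\]", judgment)
print("Score:", float(match.group(1)) if match else None)
```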

### Step 3. Show MT-bench scores

```bash
python show_result.py --model-list Mistral-7B-v0.3
```
GPT-4 gives a score on a scale of 1 to 10 to the first turn (MT-1) and second turn (MT-2) of the conversations, along with their average as the third score (MT-Avg).
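
The aggregation itself is straightforward: MT-1 is the mean rating over first-turn judgments, MT-2 over second-turn judgments, and MT-Avg their average. A minimal sketch follows; the actual computation is done by `show_result.py`, and the record field names here are assumptions based on FastChat's single-judgment format.

```python
# Minimal sketch of the aggregation behind MT-1 / MT-2 / MT-Avg; the actual
# computation is done by show_result.py. The field names ("model", "turn",
# "score") are assumptions based on FastChat's single-judgment format.
import json
from statistics import mean

judgment_file = "data/mt_bench/model_judgment/gpt-4_single.jsonl"
model_name = "Mistral-7B-v0.3"

scores = {1: [], 2: []}
with open(judgment_file) as f:
    for line in f:
        record = json.loads(line)
        if record["model"] == model_name and record["score"] >= 0:  # skip failed judgments
            scores[record["turn"]].append(record["score"])

mt1, mt2 = mean(scores[1]), mean(scores[2])
print(f"MT-1: {mt1:.2f}  MT-2: {mt2:.2f}  MT-Avg: {(mt1 + mt2) / 2:.2f}")
```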

> [!NOTE]
> Please ensure that you provide all **three scores** when submitting to the LLM Leaderboard.
Contributor
Is it OK to host this .jsonl file (and the other two)? Do we need to add some license or acknowledgement? How do other repos do this?

Contributor Author
For OpenFedLLM, they just use it: https://github.com/rui-ye/OpenFedLLM/tree/main/evaluation/open_ended, but they have a Citation section at the bottom.

@@ -0,0 +1,8 @@
{"name": "pair-v2", "type": "pairwise", "system_prompt": "Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. You should choose the assistant that follows the user's instructions and answers the user's question better. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. Begin your evaluation by comparing the two responses and provide a short explanation. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: \"[[A]]\" if assistant A is better, \"[[B]]\" if assistant B is better, and \"[[C]]\" for a tie.", "prompt_template": "[User Question]\n{question}\n\n[The Start of Assistant A's Answer]\n{answer_a}\n[The End of Assistant A's Answer]\n\n[The Start of Assistant B's Answer]\n{answer_b}\n[The End of Assistant B's Answer]", "description": "Prompt for general questions", "category": "general", "output_format": "[[A]]"}
{"name": "pair-v2-multi-turn", "type": "pairwise", "system_prompt": "Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user questions. You should choose the assistant that follows the user's instructions and answers the user's questions better. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. You should focus on who provides a better answer to the second user question. Begin your evaluation by comparing the responses of the two assistants and provide a short explanation. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: \"[[A]]\" if assistant A is better, \"[[B]]\" if assistant B is better, and \"[[C]]\" for a tie.", "prompt_template": "<|The Start of Assistant A's Conversation with User|>\n\n### User:\n{question_1}\n\n### Assistant A:\n{answer_a_1}\n\n### User:\n{question_2}\n\n### Assistant A:\n{answer_a_2}\n\n<|The End of Assistant A's Conversation with User|>\n\n\n<|The Start of Assistant B's Conversation with User|>\n\n### User:\n{question_1}\n\n### Assistant B:\n{answer_b_1}\n\n### User:\n{question_2}\n\n### Assistant B:\n{answer_b_2}\n\n<|The End of Assistant B's Conversation with User|>", "description": "Prompt for multi-turn general questions", "category": "general", "output_format": "[[A]]"}
{"name": "pair-math-v1", "type": "pairwise", "system_prompt": "Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. Your evaluation should consider correctness and helpfulness. You will be given a reference answer, assistant A's answer, and assistant B's answer. Your job is to evaluate which assistant's answer is better. Begin your evaluation by comparing both assistants' answers with the reference answer. Identify and correct any mistakes. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: \"[[A]]\" if assistant A is better, \"[[B]]\" if assistant B is better, and \"[[C]]\" for a tie.", "prompt_template": "[User Question]\n{question}\n\n[The Start of Reference Answer]\n{ref_answer_1}\n[The End of Reference Answer]\n\n[The Start of Assistant A's Answer]\n{answer_a}\n[The End of Assistant A's Answer]\n\n[The Start of Assistant B's Answer]\n{answer_b}\n[The End of Assistant B's Answer]", "description": "Prompt for math questions", "category": "math", "output_format": "[[A]]"}
{"name": "pair-math-v1-multi-turn", "type": "pairwise", "system_prompt": "Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user questions. Your evaluation should consider correctness and helpfulness. You will be given reference answers, the assistant A's answers, the assistant B's answers. Your job is to determine which assistant provides correct and helpful answers to the second user question. Begin your evaluation by comparing both assistants' answers with the reference answers. Identify and correct any mistakes. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: \"[[A]]\" if assistant A is better, \"[[B]]\" if assistant B is better, and \"[[C]]\" for a tie.", "prompt_template": "<|The Start of Reference Answer|>\n\n### User:\n{question_1}\n\n### Reference answer:\n{ref_answer_1}\n\n### User:\n{question_2}\n\n### Reference answer:\n{ref_answer_2}\n\n<|The End of Reference Answer|>\n\n\n<|The Start of Assistant A's Conversation with User|>\n\n### User:\n{question_1}\n\n### Assistant A:\n{answer_a_1}\n\n### User:\n{question_2}\n\n### Assistant A:\n{answer_a_2}\n\n<|The End of Assistant A's Conversation with User|>\n\n\n<|The Start of Assistant B's Conversation with User|>\n\n### User:\n{question_1}\n\n### Assistant B:\n{answer_b_1}\n\n### User:\n{question_2}\n\n### Assistant B:\n{answer_b_2}\n\n<|The End of Assistant B's Conversation with User|>", "description": "Prompt for multi-turn general questions", "category": "general", "output_format": "[[A]]"}
{"name": "single-v1", "type": "single", "system_prompt": "You are a helpful assistant.", "prompt_template": "[Instruction]\nPlease act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of the response. Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \"[[rating]]\", for example: \"Rating: [[5]]\".\n\n[Question]\n{question}\n\n[The Start of Assistant's Answer]\n{answer}\n[The End of Assistant's Answer]", "description": "Prompt for general questions", "category": "general", "output_format": "[[rating]]"}
{"name": "single-math-v1", "type": "single", "system_prompt": "You are a helpful assistant.", "prompt_template": "[Instruction]\nPlease act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. Your evaluation should consider correctness and helpfulness. You will be given a reference answer and the assistant's answer. Begin your evaluation by comparing the assistant's answer with the reference answer. Identify and correct any mistakes. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \"[[rating]]\", for example: \"Rating: [[5]]\".\n\n[Question]\n{question}\n\n[The Start of Reference Answer]\n{ref_answer_1}\n[The End of Reference Answer]\n\n[The Start of Assistant's Answer]\n{answer}\n[The End of Assistant's Answer]", "description": "Prompt for general questions", "category": "math", "output_format": "[[rating]]"}
{"name": "single-v1-multi-turn", "type": "single", "system_prompt": "Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of the response. You evaluation should focus on the assistant's answer to the second user question. Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \"[[rating]]\", for example: \"Rating: [[5]]\".\n\n", "prompt_template": "<|The Start of Assistant A's Conversation with User|>\n\n### User:\n{question_1}\n\n### Assistant A:\n{answer_1}\n\n### User:\n{question_2}\n\n### Assistant A:\n{answer_2}\n\n<|The End of Assistant A's Conversation with User|>", "description": "Prompt for general questions", "category": "general", "output_format": "[[rating]]"}
{"name": "single-math-v1-multi-turn", "type": "single", "system_prompt": "Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question. Your evaluation should consider correctness and helpfulness. You will be given a reference answer and the assistant's answer. You evaluation should focus on the assistant's answer to the second question. Begin your evaluation by comparing the assistant's answer with the reference answer. Identify and correct any mistakes. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \"[[rating]]\", for example: \"Rating: [[5]]\".\n\n", "prompt_template": "<|The Start of Reference Answer|>\n\n### User:\n{question_1}\n\n### Reference answer:\n{ref_answer_1}\n\n### User:\n{question_2}\n\n### Reference answer:\n{ref_answer_2}\n\n<|The End of Reference Answer|>\n\n\n<|The Start of Assistant A's Conversation with User|>\n\n### User:\n{question_1}\n\n### Assistant A:\n{answer_1}\n\n### User:\n{question_2}\n\n### Assistant A:\n{answer_2}\n\n<|The End of Assistant A's Conversation with User|>", "description": "Prompt for general questions", "category": "math", "output_format": "[[rating]]"}