Update benchmarks/flowertune-llm/evaluation/code/README.md
Co-authored-by: Javier <[email protected]>
yan-gao-GY and jafermarq authored Sep 9, 2024
1 parent db1810b commit 9ddd057
Showing 1 changed file with 8 additions and 8 deletions.
benchmarks/flowertune-llm/evaluation/code/README.md

@@ -47,14 +47,14 @@ git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git && c
 ```bash
 python main.py \
---model=mistralai/Mistral-7B-v0.3
---peft_model=/path/to/fine-tuned-peft-model-dir/ # e.g., ./peft_1
---max_length_generation=1024 # change to 2048 when running mbpp
---batch_size=4
---allow_code_execution
---save_generations
---save_references
---tasks=humaneval # chosen from [mbpp, humaneval, multiple-js, multiple-cpp]
+--model=mistralai/Mistral-7B-v0.3 \
+--peft_model=/path/to/fine-tuned-peft-model-dir/ \ # e.g., ./peft_1
+--max_length_generation=1024 \ # change to 2048 when running mbpp
+--batch_size=4 \
+--allow_code_execution \
+--save_generations \
+--save_references \
+--tasks=humaneval \ # chosen from [mbpp, humaneval, multiple-js, multiple-cpp]
 --metric_output_path=./evaluation_results_humaneval.json # change dataset name based on your choice
 ```
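The added trailing backslashes turn the list of flags into a single multi-line shell command. As a usage sketch only (not part of this commit), following the inline comments in the README, a corresponding MBPP run would raise `--max_length_generation` to 2048, switch the task, and rename the output file; the `./peft_1` checkpoint path is the example given in the comment above, and the output filename is an assumption modeled on the humaneval one:

```bash
# Hypothetical MBPP invocation, following the README's inline comments:
# generation length raised to 2048, task and output name changed accordingly.
python main.py \
  --model=mistralai/Mistral-7B-v0.3 \
  --peft_model=./peft_1 \
  --max_length_generation=2048 \
  --batch_size=4 \
  --allow_code_execution \
  --save_generations \
  --save_references \
  --tasks=mbpp \
  --metric_output_path=./evaluation_results_mbpp.json
```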

