fix llama2_70b_lora broken link for Accelerate config file in the readme
Hiwot Kassa committed Sep 19, 2024
1 parent cdd928d commit 10651ea
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion llama2_70b_lora/README.md
````diff
@@ -84,7 +84,7 @@ accelerate launch --config_file configs/default_config.yaml scripts/train.py \
   --seed 1234 \
   --lora_target_modules "qkv_proj,o_proj"
 ```
-where the Accelerate config file is [this one](https://github.com/regisss/lora/blob/main/configs/default_config.yaml).
+where the Accelerate config file is [this one](https://github.com/mlcommons/training/blob/master/llama2_70b_lora/configs/default_config.yaml).
 
 > Using flash attention with `--use_flash_attn` is necessary for training on 8k-token sequences.
````
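For context, an Accelerate config file of this kind typically sets the distributed backend and process count consumed by `accelerate launch`. The snippet below is only an illustrative sketch of the usual shape of such a file; the actual `configs/default_config.yaml` is the one at the corrected link above, and every value here is an assumption, not a copy of it.

```yaml
# Illustrative sketch only -- not the contents of the linked file.
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED        # e.g. DeepSpeed-based multi-GPU training
deepspeed_config:
  gradient_accumulation_steps: 1
  zero_stage: 3                    # ZeRO stage is a common knob for 70B-scale models
mixed_precision: bf16
num_machines: 1
num_processes: 8                   # one process per GPU
```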