
Commit

[Colossal-LLaMA] Fix sft issue for llama2 (hpcaitech#5719)
* fix minor issue

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
TongLi3701 and pre-commit-ci[bot] authored May 15, 2024
1 parent 43995ee commit 913c920
Showing 1 changed file with 2 additions and 1 deletion.
applications/Colossal-LLaMA/prepare_sft_dataset.py (2 additions, 1 deletion)
@@ -10,7 +10,7 @@
 import os
 from multiprocessing import cpu_count

-from colossal_llama.dataset.conversation import default_conversation
+from colossal_llama.dataset.conversation import LLaMA2_Conv
 from colossal_llama.dataset.spliced_and_tokenized_dataset import supervised_tokenize_sft
 from datasets import dataset_dict, load_dataset
 from transformers import AddedToken, AutoTokenizer
@@ -78,6 +78,7 @@ def main():
     # Fix </s> split issue: https://github.com/huggingface/transformers/issues/23833
     if args.llama_version == 2:
         tokenizer.add_tokens(AddedToken("</s>", normalized=False, special=True), special_tokens=True)
+        default_conversation = LLaMA2_Conv

     tokenizer.add_bos_token = False
     tokenizer.add_eos_token = False
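
For context, a minimal sketch of how the patched block in prepare_sft_dataset.py fits together after this commit. The tokenizer path and the inline llama_version value below are placeholders for illustration; the actual script builds the tokenizer and reads the LLaMA version from its command-line arguments.

# Sketch of the patched section, assuming a locally available LLaMA-2 tokenizer.
from colossal_llama.dataset.conversation import LLaMA2_Conv
from transformers import AddedToken, AutoTokenizer

llama_version = 2  # placeholder for args.llama_version
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-2-tokenizer")  # placeholder path

# Fix </s> split issue: https://github.com/huggingface/transformers/issues/23833
if llama_version == 2:
    # Register </s> as a non-normalized special token so the tokenizer keeps it whole.
    tokenizer.add_tokens(AddedToken("</s>", normalized=False, special=True), special_tokens=True)
    # Use the LLaMA-2 conversation template instead of the removed module-level import.
    default_conversation = LLaMA2_Conv

tokenizer.add_bos_token = False
tokenizer.add_eos_token = False

Registering </s> with normalized=False and special=True keeps the end-of-sequence marker as a single token during supervised tokenization, which addresses the splitting behavior described in the linked transformers issue.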
