I followed the training steps to train the Llama-2 model, but ran into the error below. I have searched a lot but still couldn't solve it.
```
  File "/home/hs/anaconda3/envs/onebit/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 678, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/hs/anaconda3/envs/onebit/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/hs/anaconda3/envs/onebit/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/hs/hl/Medusa/medusa/train/train_legacy.py", line 278, in __getitem__
    ret = preprocess([self.raw_data[i]], self.tokenizer)
  File "/home/hs/hl/Medusa/medusa/train/train_legacy.py", line 183, in preprocess
    prompt = tokenizer.apply_chat_template(conversation, tokenize=False)
  File "/home/hs/anaconda3/envs/onebit/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1833, in apply_chat_template
    rendered_chat = compiled_template.render(
  File "/home/hs/anaconda3/envs/onebit/lib/python3.10/site-packages/jinja2/environment.py", line 1304, in render
    self.environment.handle_exception()
  File "/home/hs/anaconda3/envs/onebit/lib/python3.10/site-packages/jinja2/environment.py", line 939, in handle_exception
    raise rewrite_traceback_stack(source=source)
  File "<template>", line 1, in top-level template code
  File "/home/hs/anaconda3/envs/onebit/lib/python3.10/site-packages/jinja2/sandbox.py", line 304, in getitem
    return obj[argument]
jinja2.exceptions.UndefinedError: dict object has no element 0
  0%|          | 0/17156 [00:00<?, ?it/s]
```
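As far as I can tell, the failure happens inside the sandboxed Jinja `getitem` (`return obj[argument]`), so the chat template seems to be indexing `messages[0]` on a dict rather than on a non-empty list of messages. For reference, this is a minimal sketch of the input format I understand `apply_chat_template` to expect; the model name is only a placeholder, not the one from my script:

```python
from transformers import AutoTokenizer

# Placeholder model; any tokenizer that ships a chat_template would do.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# apply_chat_template expects a list of {"role", "content"} messages.
# The Llama-2 template indexes messages[0], so passing a bare dict
# (e.g. a raw ShareGPT record) gives "dict object has no element 0".
messages = [
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "I am a helpful assistant."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```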
I am wondering whether this happens because I need to run the following conversion on the ShareGPT dataset first, although I thought that step was optional:

```
python create_data.py --input-filename ShareGPT_Vicuna_unfiltered/ShareGPT_V4.3_unfiltered_cleaned_split.json --output-filename mistral.json
```
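If that is the problem, the conversion I have in mind would look roughly like the sketch below. The field names (`conversations`, `from`, `value`) and the role mapping are my assumptions about the ShareGPT format, not necessarily what create_data.py actually does:

```python
# Rough sketch only; assumes ShareGPT records store turns under
# "conversations" with "from"/"value" keys. This is my guess at the
# reshaping, not the actual create_data.py logic.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def sharegpt_to_messages(record):
    """Map one ShareGPT record to the role/content list apply_chat_template expects."""
    messages = []
    for turn in record.get("conversations", []):
        role = ROLE_MAP.get(turn.get("from"), "user")
        messages.append({"role": role, "content": turn.get("value", "")})
    return messages

# Example:
# sharegpt_to_messages({"conversations": [{"from": "human", "value": "Hi"}]})
# -> [{"role": "user", "content": "Hi"}]
```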
My training script is as follows:
My pip list: