Facing a problem while running inference with the Mistral LLM example
```
Olive/examples/mistral$ python mistral.py --config mistral_fp16_optimize.json --inference --prompt "Language models are very useful"
/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Traceback (most recent call last):
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/examples/mistral/mistral.py", line 130, in <module>
    main()
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/examples/mistral/mistral.py", line 122, in main
    output = inference(args.model_id, optimized_model_dir, ep, args.prompt, args.max_length)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/examples/mistral/mistral.py", line 74, in inference
    tokenizer = AutoTokenizer.from_pretrained(model_id)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 814, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
    return cls._from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2261, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 124, in __init__
    super().__init__(
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 111, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3
```
**Other information**
- OS: Ubuntu 22.04
- Olive version: 0.7.0
- onnxruntime-genai: 0.3.0
- onnxruntime-gpu: 1.18.1
- Python: 3.11.9
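This `Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper` is raised by the Rust `tokenizers` backend when it cannot deserialize a `tokenizer.json` — commonly because the file was produced by a newer `tokenizers` release than the one installed. As a first diagnostic (an assumption about the cause, not confirmed for this setup), compare the installed versions before digging deeper:

```python
# Hedged diagnostic sketch: print the versions of the packages involved.
# A tokenizer.json written by a newer `tokenizers` than the one installed
# can fail to parse with "data did not match any variant of untagged enum"
# errors, so a version check is a cheap first step.
from importlib.metadata import PackageNotFoundError, version


def installed_version(pkg: str) -> str:
    """Return the installed version of pkg, or 'not installed' if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"


if __name__ == "__main__":
    for pkg in ("transformers", "tokenizers", "huggingface_hub"):
        print(f"{pkg}: {installed_version(pkg)}")
```

If the reported versions are old, upgrading with `pip install -U transformers tokenizers` often resolves this class of deserialization error, though that is a general workaround rather than a confirmed fix for this issue.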