This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

ModuleNotFoundError: No module named 'neural_compressor.conf' #1689

Closed
ErvinXie opened this issue Aug 19, 2024 · 7 comments

Comments

@ErvinXie

I followed the quick start guide, and an error occurred when I ran the Python script. It seems to be a dependency error. I searched the internet and could not find a solution. How can I fix this?

Traceback (most recent call last):
  File "/home/xwy/Project/test-transformers/test-intel.py", line 2, in <module>
    from intel_extension_for_transformers.transformers import AutoModelForCausalLM
  File "/home/xwy/.conda/envs/kvc/lib/python3.10/site-packages/intel_extension_for_transformers/transformers/__init__.py", line 19, in <module>
    from .config import (
  File "/home/xwy/.conda/envs/kvc/lib/python3.10/site-packages/intel_extension_for_transformers/transformers/config.py", line 21, in <module>
    from neural_compressor.conf.config import (
ModuleNotFoundError: No module named 'neural_compressor.conf'
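For anyone debugging a `ModuleNotFoundError` like this, a quick sanity check is to ask the import machinery directly whether the dotted name resolves at all. This is a generic check, not specific to neural_compressor:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if the dotted module name can be resolved on sys.path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # find_spec imports parent packages, so a missing parent raises here.
        return False

# Check the submodule the traceback complains about:
print(module_available("neural_compressor.conf"))
```

If this prints False while `module_available("neural_compressor")` prints True, the package is installed but that submodule no longer ships with the installed version.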
@redhavoc

I had a similar issue and solved it by creating a fresh venv and installing the packages from scratch. It seems the dependencies have not been pinned properly. I wanted to run things on CPU, and these are the exact commands I used:

git clone https://github.com/intel/intel-extension-for-transformers/
cd intel-extension-for-transformers/
python3 -m venv intel
source intel/bin/activate
# install all the requirements
python3 -m pip install -r requirements-cpu.txt
pip install intel-extension-for-transformers accelerate datasets pydantic numba 

@ErvinXie
Author

I tried this but it still does not work.

Traceback (most recent call last):
  File "/home/xwy/Project/intel-extension-for-transformers/test/qw.py", line 2, in <module>
    from intel_extension_for_transformers.transformers import AutoModelForCausalLM, GPTQConfig
  File "/home/xwy/Project/intel-extension-for-transformers/intel/lib/python3.11/site-packages/intel_extension_for_transformers/transformers/__init__.py", line 19, in <module>
    from .config import (
  File "/home/xwy/Project/intel-extension-for-transformers/intel/lib/python3.11/site-packages/intel_extension_for_transformers/transformers/config.py", line 21, in <module>
    from neural_compressor.conf.config import (
ModuleNotFoundError: No module named 'neural_compressor.conf'

@redhavoc

If you downgrade to version 2.6 of neural_compressor, this issue will go away because the 'conf' module still exists there:
pip install --upgrade neural_compressor==2.6
But when I did that I ran into all sorts of other issues. For me, a clean setup fixed everything. Someone more experienced can probably help further with this.
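To see why the 2.6 pin works: the thread shows the import failing with neural_compressor 3.0 installed and succeeding after downgrading to 2.6, which suggests the 'conf' subpackage was dropped somewhere in the 3.x series. A minimal sketch of that version gate (the `< 3` cutoff is inferred from this thread, not from official release notes):

```python
def has_conf_submodule(version_string: str) -> bool:
    """Heuristic from this thread: neural_compressor.conf exists in 2.x
    releases and is gone as of 3.0."""
    major = int(version_string.split(".")[0])
    return major < 3

print(has_conf_submodule("2.6"))  # True: the import works after the downgrade
print(has_conf_submodule("3.0"))  # False: ModuleNotFoundError as reported above
```

Pinning in a requirements file (`neural_compressor==2.6`) rather than upgrading manually would keep a fresh venv from pulling 3.0 again.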

@ErvinXie
Author

@redhavoc Thank you so much. Downgrading neural_compressor works!

@ayttop

ayttop commented Aug 29, 2024

It does not run for me.

@ayttop

ayttop commented Aug 29, 2024

Microsoft Windows [Version 10.0.22631.4112]
(c) Microsoft Corporation. All rights reserved.

C:\Users\ArabTech>cd C:\Users\ArabTech\Desktop\1

C:\Users\ArabTech\Desktop\1>cd intel-extension-for-transformers

C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>python -m venv intel

C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>source intel/bin/activate
'source' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>intel/bin/activate
'intel' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>cd intel/bin/activate
The system cannot find the path specified.

C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>activate.bat
'activate.bat' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>intel\Scripts\activate

(intel) C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>python -m pip install -r requirements-cpu.txt
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cpu
Collecting cmake (from -r requirements-cpu.txt (line 2))
Obtaining dependency information for cmake from https://files.pythonhosted.org/packages/5b/34/a6a1030ec63da17e884bf2916f7ff92ad76f730d5e8edafd948b99c05384/cmake-3.30.2-py3-none-win_amd64.whl.metadata
Downloading cmake-3.30.2-py3-none-win_amd64.whl.metadata (6.1 kB)
Collecting ninja (from -r requirements-cpu.txt (line 3))
Obtaining dependency information for ninja from https://files.pythonhosted.org/packages/b6/2f/a3bc50fa63fc4fe9348e15b53dc8c87febfd4e0c660fcf250c4b19a3aa3b/ninja-1.11.1.1-py2.py3-none-win_amd64.whl.metadata
Downloading ninja-1.11.1.1-py2.py3-none-win_amd64.whl.metadata (5.4 kB)
Collecting torch==2.3.0+cpu (from -r requirements-cpu.txt (line 4))
Downloading https://download.pytorch.org/whl/cpu/torch-2.3.0%2Bcpu-cp311-cp311-win_amd64.whl (161.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 161.8/161.8 MB 3.4 MB/s eta 0:00:00
Collecting filelock (from torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Obtaining dependency information for filelock from https://files.pythonhosted.org/packages/ae/f0/48285f0262fe47103a4a45972ed2f9b93e4c80b8fd609fa98da78b2a5706/filelock-3.15.4-py3-none-any.whl.metadata
Downloading filelock-3.15.4-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.8.0 (from torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Obtaining dependency information for typing-extensions>=4.8.0 from https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl.metadata
Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Obtaining dependency information for sympy from https://files.pythonhosted.org/packages/c1/f9/6845bf8fca0eaf847da21c5d5bc6cd92797364662824a11d3f836423a1a5/sympy-1.13.2-py3-none-any.whl.metadata
Downloading sympy-1.13.2-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Obtaining dependency information for networkx from https://files.pythonhosted.org/packages/38/e9/5f72929373e1a0e8d142a130f3f97e6ff920070f87f91c4e13e40e0fba5a/networkx-3.3-py3-none-any.whl.metadata
Downloading networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Obtaining dependency information for jinja2 from https://files.pythonhosted.org/packages/31/80/3a54838c3fb461f6fec263ebf3a3a41771bd05190238de3486aae8540c36/jinja2-3.1.4-py3-none-any.whl.metadata
Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Obtaining dependency information for fsspec from https://files.pythonhosted.org/packages/5e/44/73bea497ac69bafde2ee4269292fa3b41f1198f4bb7bbaaabde30ad29d4a/fsspec-2024.6.1-py3-none-any.whl.metadata
Downloading fsspec-2024.6.1-py3-none-any.whl.metadata (11 kB)
Collecting mkl<=2021.4.0,>=2021.1.1 (from torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Downloading https://download.pytorch.org/whl/mkl-2021.4.0-py2.py3-none-win_amd64.whl (228.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 228.5/228.5 MB 3.0 MB/s eta 0:00:00
Collecting intel-openmp==2021.* (from mkl<=2021.4.0,>=2021.1.1->torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Downloading https://download.pytorch.org/whl/intel_openmp-2021.4.0-py2.py3-none-win_amd64.whl (3.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5/3.5 MB 3.7 MB/s eta 0:00:00
Collecting tbb==2021.* (from mkl<=2021.4.0,>=2021.1.1->torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Obtaining dependency information for tbb==2021.* from https://files.pythonhosted.org/packages/9b/24/84ce997e8ae6296168a74d0d9c4dde572d90fb23fd7c0b219c30ff71e00e/tbb-2021.13.1-py3-none-win_amd64.whl.metadata
Downloading tbb-2021.13.1-py3-none-win_amd64.whl.metadata (1.1 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Downloading https://download.pytorch.org/whl/MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl (17 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy->torch==2.3.0+cpu->-r requirements-cpu.txt (line 4))
Downloading https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 3.4 MB/s eta 0:00:00
Downloading cmake-3.30.2-py3-none-win_amd64.whl (35.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 35.6/35.6 MB 3.6 MB/s eta 0:00:00
Downloading ninja-1.11.1.1-py2.py3-none-win_amd64.whl (312 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 313.0/313.0 kB 3.9 MB/s eta 0:00:00
Downloading tbb-2021.13.1-py3-none-win_amd64.whl (286 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 286.9/286.9 kB 3.5 MB/s eta 0:00:00
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Downloading filelock-3.15.4-py3-none-any.whl (16 kB)
Downloading fsspec-2024.6.1-py3-none-any.whl (177 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 177.6/177.6 kB 3.6 MB/s eta 0:00:00
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Downloading networkx-3.3-py3-none-any.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 3.6 MB/s eta 0:00:00
Downloading sympy-1.13.2-py3-none-any.whl (6.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 3.7 MB/s eta 0:00:00
Installing collected packages: tbb, ninja, mpmath, intel-openmp, typing-extensions, sympy, networkx, mkl, MarkupSafe, fsspec, filelock, cmake, jinja2, torch
Successfully installed MarkupSafe-2.1.5 cmake-3.30.2 filelock-3.15.4 fsspec-2024.6.1 intel-openmp-2021.4.0 jinja2-3.1.4 mkl-2021.4.0 mpmath-1.3.0 networkx-3.3 ninja-1.11.1.1 sympy-1.13.2 tbb-2021.13.1 torch-2.3.0+cpu typing-extensions-4.12.2

[notice] A new release of pip is available: 23.2.1 -> 24.2
[notice] To update, run: python.exe -m pip install --upgrade pip

(intel) C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>python.exe -m pip install --upgrade pip
Requirement already satisfied: pip in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (23.2.1)
Collecting pip
Obtaining dependency information for pip from https://files.pythonhosted.org/packages/d4/55/90db48d85f7689ec6f81c0db0622d704306c5284850383c090e6c7195a5c/pip-24.2-py3-none-any.whl.metadata
Using cached pip-24.2-py3-none-any.whl.metadata (3.6 kB)
Using cached pip-24.2-py3-none-any.whl (1.8 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 23.2.1
Uninstalling pip-23.2.1:
Successfully uninstalled pip-23.2.1
Successfully installed pip-24.2

(intel) C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>pip install intel-extension-for-transformers accelerate datasets pydantic numba
Collecting intel-extension-for-transformers
Using cached intel_extension_for_transformers-1.4.2-cp311-cp311-win_amd64.whl.metadata (26 kB)
Collecting accelerate
Using cached accelerate-0.33.0-py3-none-any.whl.metadata (18 kB)
Collecting datasets
Using cached datasets-2.21.0-py3-none-any.whl.metadata (21 kB)
Collecting pydantic
Using cached pydantic-2.8.2-py3-none-any.whl.metadata (125 kB)
Collecting numba
Downloading numba-0.60.0-cp311-cp311-win_amd64.whl.metadata (2.8 kB)
Collecting packaging (from intel-extension-for-transformers)
Using cached packaging-24.1-py3-none-any.whl.metadata (3.2 kB)
Collecting numpy (from intel-extension-for-transformers)
Using cached numpy-2.1.0-cp311-cp311-win_amd64.whl.metadata (59 kB)
Collecting schema (from intel-extension-for-transformers)
Using cached schema-0.7.7-py2.py3-none-any.whl.metadata (34 kB)
Collecting pyyaml (from intel-extension-for-transformers)
Using cached PyYAML-6.0.2-cp311-cp311-win_amd64.whl.metadata (2.1 kB)
Collecting neural-compressor (from intel-extension-for-transformers)
Using cached neural_compressor-3.0-py3-none-any.whl.metadata (15 kB)
Collecting transformers (from intel-extension-for-transformers)
Using cached transformers-4.44.2-py3-none-any.whl.metadata (43 kB)
Collecting numpy (from intel-extension-for-transformers)
Using cached numpy-1.26.4-cp311-cp311-win_amd64.whl.metadata (61 kB)
Collecting psutil (from accelerate)
Using cached psutil-6.0.0-cp37-abi3-win_amd64.whl.metadata (22 kB)
Requirement already satisfied: torch>=1.10.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from accelerate) (2.3.0+cpu)
Collecting huggingface-hub>=0.21.0 (from accelerate)
Using cached huggingface_hub-0.24.6-py3-none-any.whl.metadata (13 kB)
Collecting safetensors>=0.3.1 (from accelerate)
Using cached safetensors-0.4.4-cp311-none-win_amd64.whl.metadata (3.9 kB)
Requirement already satisfied: filelock in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from datasets) (3.15.4)
Collecting pyarrow>=15.0.0 (from datasets)
Using cached pyarrow-17.0.0-cp311-cp311-win_amd64.whl.metadata (3.4 kB)
Collecting dill<0.3.9,>=0.3.0 (from datasets)
Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB)
Collecting pandas (from datasets)
Using cached pandas-2.2.2-cp311-cp311-win_amd64.whl.metadata (19 kB)
Collecting requests>=2.32.2 (from datasets)
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting tqdm>=4.66.3 (from datasets)
Using cached tqdm-4.66.5-py3-none-any.whl.metadata (57 kB)
Collecting xxhash (from datasets)
Using cached xxhash-3.5.0-cp311-cp311-win_amd64.whl.metadata (13 kB)
Collecting multiprocess (from datasets)
Using cached multiprocess-0.70.16-py311-none-any.whl.metadata (7.2 kB)
Requirement already satisfied: fsspec<=2024.6.1,>=2023.1.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from fsspec[http]<=2024.6.1,>=2023.1.0->datasets) (2024.6.1)
Collecting aiohttp (from datasets)
Using cached aiohttp-3.10.5-cp311-cp311-win_amd64.whl.metadata (7.8 kB)
Collecting annotated-types>=0.4.0 (from pydantic)
Using cached annotated_types-0.7.0-py3-none-any.whl.metadata (15 kB)
Collecting pydantic-core==2.20.1 (from pydantic)
Using cached pydantic_core-2.20.1-cp311-none-win_amd64.whl.metadata (6.7 kB)
Requirement already satisfied: typing-extensions>=4.6.1 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from pydantic) (4.12.2)
Collecting llvmlite<0.44,>=0.43.0dev0 (from numba)
Downloading llvmlite-0.43.0-cp311-cp311-win_amd64.whl.metadata (4.9 kB)
Collecting aiohappyeyeballs>=2.3.0 (from aiohttp->datasets)
Using cached aiohappyeyeballs-2.4.0-py3-none-any.whl.metadata (5.9 kB)
Collecting aiosignal>=1.1.2 (from aiohttp->datasets)
Using cached aiosignal-1.3.1-py3-none-any.whl.metadata (4.0 kB)
Collecting attrs>=17.3.0 (from aiohttp->datasets)
Using cached attrs-24.2.0-py3-none-any.whl.metadata (11 kB)
Collecting frozenlist>=1.1.1 (from aiohttp->datasets)
Using cached frozenlist-1.4.1-cp311-cp311-win_amd64.whl.metadata (12 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp->datasets)
Using cached multidict-6.0.5-cp311-cp311-win_amd64.whl.metadata (4.3 kB)
Collecting yarl<2.0,>=1.0 (from aiohttp->datasets)
Using cached yarl-1.9.4-cp311-cp311-win_amd64.whl.metadata (32 kB)
Collecting charset-normalizer<4,>=2 (from requests>=2.32.2->datasets)
Using cached charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl.metadata (34 kB)
Collecting idna<4,>=2.5 (from requests>=2.32.2->datasets)
Using cached idna-3.8-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests>=2.32.2->datasets)
Using cached urllib3-2.2.2-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests>=2.32.2->datasets)
Using cached certifi-2024.7.4-py3-none-any.whl.metadata (2.2 kB)
Requirement already satisfied: sympy in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from torch>=1.10.0->accelerate) (1.13.2)
Requirement already satisfied: networkx in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from torch>=1.10.0->accelerate) (3.3)
Requirement already satisfied: jinja2 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from torch>=1.10.0->accelerate) (3.1.4)
Requirement already satisfied: mkl<=2021.4.0,>=2021.1.1 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from torch>=1.10.0->accelerate) (2021.4.0)
Collecting colorama (from tqdm>=4.66.3->datasets)
Using cached colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB)
Collecting deprecated>=1.2.13 (from neural-compressor->intel-extension-for-transformers)
Using cached Deprecated-1.2.14-py2.py3-none-any.whl.metadata (5.4 kB)
Collecting opencv-python-headless (from neural-compressor->intel-extension-for-transformers)
Using cached opencv_python_headless-4.10.0.84-cp37-abi3-win_amd64.whl.metadata (20 kB)
Collecting Pillow (from neural-compressor->intel-extension-for-transformers)
Using cached pillow-10.4.0-cp311-cp311-win_amd64.whl.metadata (9.3 kB)
Collecting prettytable (from neural-compressor->intel-extension-for-transformers)
Using cached prettytable-3.11.0-py3-none-any.whl.metadata (30 kB)
Collecting py-cpuinfo (from neural-compressor->intel-extension-for-transformers)
Using cached py_cpuinfo-9.0.0-py3-none-any.whl.metadata (794 bytes)
Collecting scikit-learn (from neural-compressor->intel-extension-for-transformers)
Using cached scikit_learn-1.5.1-cp311-cp311-win_amd64.whl.metadata (12 kB)
Collecting pycocotools (from neural-compressor->intel-extension-for-transformers)
Using cached pycocotools-2.0.8-cp311-cp311-win_amd64.whl.metadata (1.1 kB)
Collecting python-dateutil>=2.8.2 (from pandas->datasets)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas->datasets)
Using cached pytz-2024.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas->datasets)
Using cached tzdata-2024.1-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting regex!=2019.12.17 (from transformers->intel-extension-for-transformers)
Using cached regex-2024.7.24-cp311-cp311-win_amd64.whl.metadata (41 kB)
Collecting tokenizers<0.20,>=0.19 (from transformers->intel-extension-for-transformers)
Using cached tokenizers-0.19.1-cp311-none-win_amd64.whl.metadata (6.9 kB)
Collecting wrapt<2,>=1.10 (from deprecated>=1.2.13->neural-compressor->intel-extension-for-transformers)
Using cached wrapt-1.16.0-cp311-cp311-win_amd64.whl.metadata (6.8 kB)
Requirement already satisfied: intel-openmp==2021.* in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch>=1.10.0->accelerate) (2021.4.0)
Requirement already satisfied: tbb==2021.* in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch>=1.10.0->accelerate) (2021.13.1)
Collecting six>=1.5 (from python-dateutil>=2.8.2->pandas->datasets)
Using cached six-1.16.0-py2.py3-none-any.whl.metadata (1.8 kB)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from jinja2->torch>=1.10.0->accelerate) (2.1.5)
Collecting wcwidth (from prettytable->neural-compressor->intel-extension-for-transformers)
Using cached wcwidth-0.2.13-py2.py3-none-any.whl.metadata (14 kB)
Collecting matplotlib>=2.1.0 (from pycocotools->neural-compressor->intel-extension-for-transformers)
Using cached matplotlib-3.9.2-cp311-cp311-win_amd64.whl.metadata (11 kB)
Collecting scipy>=1.6.0 (from scikit-learn->neural-compressor->intel-extension-for-transformers)
Using cached scipy-1.14.1-cp311-cp311-win_amd64.whl.metadata (60 kB)
Collecting joblib>=1.2.0 (from scikit-learn->neural-compressor->intel-extension-for-transformers)
Using cached joblib-1.4.2-py3-none-any.whl.metadata (5.4 kB)
Collecting threadpoolctl>=3.1.0 (from scikit-learn->neural-compressor->intel-extension-for-transformers)
Using cached threadpoolctl-3.5.0-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from sympy->torch>=1.10.0->accelerate) (1.3.0)
Collecting contourpy>=1.0.1 (from matplotlib>=2.1.0->pycocotools->neural-compressor->intel-extension-for-transformers)
Using cached contourpy-1.3.0-cp311-cp311-win_amd64.whl.metadata (5.4 kB)
Collecting cycler>=0.10 (from matplotlib>=2.1.0->pycocotools->neural-compressor->intel-extension-for-transformers)
Using cached cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
Collecting fonttools>=4.22.0 (from matplotlib>=2.1.0->pycocotools->neural-compressor->intel-extension-for-transformers)
Using cached fonttools-4.53.1-cp311-cp311-win_amd64.whl.metadata (165 kB)
Collecting kiwisolver>=1.3.1 (from matplotlib>=2.1.0->pycocotools->neural-compressor->intel-extension-for-transformers)
Using cached kiwisolver-1.4.5-cp311-cp311-win_amd64.whl.metadata (6.5 kB)
Collecting pyparsing>=2.3.1 (from matplotlib>=2.1.0->pycocotools->neural-compressor->intel-extension-for-transformers)
Using cached pyparsing-3.1.4-py3-none-any.whl.metadata (5.1 kB)
Using cached intel_extension_for_transformers-1.4.2-cp311-cp311-win_amd64.whl (11.0 MB)
Using cached accelerate-0.33.0-py3-none-any.whl (315 kB)
Using cached datasets-2.21.0-py3-none-any.whl (527 kB)
Using cached pydantic-2.8.2-py3-none-any.whl (423 kB)
Using cached pydantic_core-2.20.1-cp311-none-win_amd64.whl (1.9 MB)
Downloading numba-0.60.0-cp311-cp311-win_amd64.whl (2.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.7/2.7 MB 3.9 MB/s eta 0:00:00
Using cached annotated_types-0.7.0-py3-none-any.whl (13 kB)
Using cached dill-0.3.8-py3-none-any.whl (116 kB)
Using cached aiohttp-3.10.5-cp311-cp311-win_amd64.whl (379 kB)
Using cached huggingface_hub-0.24.6-py3-none-any.whl (417 kB)
Downloading llvmlite-0.43.0-cp311-cp311-win_amd64.whl (28.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 28.1/28.1 MB 3.7 MB/s eta 0:00:00
Using cached numpy-1.26.4-cp311-cp311-win_amd64.whl (15.8 MB)
Using cached packaging-24.1-py3-none-any.whl (53 kB)
Using cached pyarrow-17.0.0-cp311-cp311-win_amd64.whl (25.2 MB)
Using cached PyYAML-6.0.2-cp311-cp311-win_amd64.whl (161 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached safetensors-0.4.4-cp311-none-win_amd64.whl (285 kB)
Using cached tqdm-4.66.5-py3-none-any.whl (78 kB)
Using cached multiprocess-0.70.16-py311-none-any.whl (143 kB)
Using cached neural_compressor-3.0-py3-none-any.whl (1.7 MB)
Using cached pandas-2.2.2-cp311-cp311-win_amd64.whl (11.6 MB)
Using cached psutil-6.0.0-cp37-abi3-win_amd64.whl (257 kB)
Using cached schema-0.7.7-py2.py3-none-any.whl (18 kB)
Using cached transformers-4.44.2-py3-none-any.whl (9.5 MB)
Using cached xxhash-3.5.0-cp311-cp311-win_amd64.whl (30 kB)
Using cached aiohappyeyeballs-2.4.0-py3-none-any.whl (12 kB)
Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Using cached attrs-24.2.0-py3-none-any.whl (63 kB)
Using cached certifi-2024.7.4-py3-none-any.whl (162 kB)
Using cached charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl (99 kB)
Using cached Deprecated-1.2.14-py2.py3-none-any.whl (9.6 kB)
Using cached frozenlist-1.4.1-cp311-cp311-win_amd64.whl (50 kB)
Using cached idna-3.8-py3-none-any.whl (66 kB)
Using cached multidict-6.0.5-cp311-cp311-win_amd64.whl (28 kB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached pytz-2024.1-py2.py3-none-any.whl (505 kB)
Using cached regex-2024.7.24-cp311-cp311-win_amd64.whl (269 kB)
Using cached tokenizers-0.19.1-cp311-none-win_amd64.whl (2.2 MB)
Using cached tzdata-2024.1-py2.py3-none-any.whl (345 kB)
Using cached urllib3-2.2.2-py3-none-any.whl (121 kB)
Using cached yarl-1.9.4-cp311-cp311-win_amd64.whl (76 kB)
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Using cached opencv_python_headless-4.10.0.84-cp37-abi3-win_amd64.whl (38.8 MB)
Using cached pillow-10.4.0-cp311-cp311-win_amd64.whl (2.6 MB)
Using cached prettytable-3.11.0-py3-none-any.whl (28 kB)
Using cached py_cpuinfo-9.0.0-py3-none-any.whl (22 kB)
Using cached pycocotools-2.0.8-cp311-cp311-win_amd64.whl (85 kB)
Using cached scikit_learn-1.5.1-cp311-cp311-win_amd64.whl (11.0 MB)
Using cached joblib-1.4.2-py3-none-any.whl (301 kB)
Using cached matplotlib-3.9.2-cp311-cp311-win_amd64.whl (7.8 MB)
Using cached scipy-1.14.1-cp311-cp311-win_amd64.whl (44.8 MB)
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached threadpoolctl-3.5.0-py3-none-any.whl (18 kB)
Using cached wrapt-1.16.0-cp311-cp311-win_amd64.whl (37 kB)
Using cached wcwidth-0.2.13-py2.py3-none-any.whl (34 kB)
Using cached contourpy-1.3.0-cp311-cp311-win_amd64.whl (217 kB)
Using cached cycler-0.12.1-py3-none-any.whl (8.3 kB)
Using cached fonttools-4.53.1-cp311-cp311-win_amd64.whl (2.2 MB)
Using cached kiwisolver-1.4.5-cp311-cp311-win_amd64.whl (56 kB)
Using cached pyparsing-3.1.4-py3-none-any.whl (104 kB)
Installing collected packages: wcwidth, schema, pytz, py-cpuinfo, xxhash, wrapt, urllib3, tzdata, threadpoolctl, six, safetensors, regex, pyyaml, pyparsing, pydantic-core, psutil, prettytable, Pillow, packaging, numpy, multidict, llvmlite, kiwisolver, joblib, idna, frozenlist, fonttools, dill, cycler, colorama, charset-normalizer, certifi, attrs, annotated-types, aiohappyeyeballs, yarl, tqdm, scipy, requests, python-dateutil, pydantic, pyarrow, opencv-python-headless, numba, multiprocess, deprecated, contourpy, aiosignal, scikit-learn, pandas, matplotlib, huggingface-hub, aiohttp, tokenizers, pycocotools, accelerate, transformers, neural-compressor, datasets, intel-extension-for-transformers
Successfully installed Pillow-10.4.0 accelerate-0.33.0 aiohappyeyeballs-2.4.0 aiohttp-3.10.5 aiosignal-1.3.1 annotated-types-0.7.0 attrs-24.2.0 certifi-2024.7.4 charset-normalizer-3.3.2 colorama-0.4.6 contourpy-1.3.0 cycler-0.12.1 datasets-2.21.0 deprecated-1.2.14 dill-0.3.8 fonttools-4.53.1 frozenlist-1.4.1 huggingface-hub-0.24.6 idna-3.8 intel-extension-for-transformers-1.4.2 joblib-1.4.2 kiwisolver-1.4.5 llvmlite-0.43.0 matplotlib-3.9.2 multidict-6.0.5 multiprocess-0.70.16 neural-compressor-3.0 numba-0.60.0 numpy-1.26.4 opencv-python-headless-4.10.0.84 packaging-24.1 pandas-2.2.2 prettytable-3.11.0 psutil-6.0.0 py-cpuinfo-9.0.0 pyarrow-17.0.0 pycocotools-2.0.8 pydantic-2.8.2 pydantic-core-2.20.1 pyparsing-3.1.4 python-dateutil-2.9.0.post0 pytz-2024.1 pyyaml-6.0.2 regex-2024.7.24 requests-2.32.3 safetensors-0.4.4 schema-0.7.7 scikit-learn-1.5.1 scipy-1.14.1 six-1.16.0 threadpoolctl-3.5.0 tokenizers-0.19.1 tqdm-4.66.5 transformers-4.44.2 tzdata-2024.1 urllib3-2.2.2 wcwidth-0.2.13 wrapt-1.16.0 xxhash-3.5.0 yarl-1.9.4

(intel) C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>pip install --upgrade neural_compressor==2.6
Collecting neural_compressor==2.6
Using cached neural_compressor-2.6-py3-none-any.whl.metadata (15 kB)
Requirement already satisfied: deprecated>=1.2.13 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (1.2.14)
Requirement already satisfied: numpy<2.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (1.26.4)
Requirement already satisfied: opencv-python-headless in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (4.10.0.84)
Requirement already satisfied: pandas in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (2.2.2)
Requirement already satisfied: Pillow in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (10.4.0)
Requirement already satisfied: prettytable in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (3.11.0)
Requirement already satisfied: psutil in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (6.0.0)
Requirement already satisfied: py-cpuinfo in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (9.0.0)
Requirement already satisfied: pyyaml in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (6.0.2)
Requirement already satisfied: requests in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (2.32.3)
Requirement already satisfied: schema in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (0.7.7)
Requirement already satisfied: scikit-learn in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (1.5.1)
Requirement already satisfied: pycocotools in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from neural_compressor==2.6) (2.0.8)
Requirement already satisfied: wrapt<2,>=1.10 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from deprecated>=1.2.13->neural_compressor==2.6) (1.16.0)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from pandas->neural_compressor==2.6) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from pandas->neural_compressor==2.6) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from pandas->neural_compressor==2.6) (2024.1)
Requirement already satisfied: wcwidth in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from prettytable->neural_compressor==2.6) (0.2.13)
Requirement already satisfied: matplotlib>=2.1.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from pycocotools->neural_compressor==2.6) (3.9.2)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from requests->neural_compressor==2.6) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from requests->neural_compressor==2.6) (3.8)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from requests->neural_compressor==2.6) (2.2.2)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from requests->neural_compressor==2.6) (2024.7.4)
Requirement already satisfied: scipy>=1.6.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from scikit-learn->neural_compressor==2.6) (1.14.1)
Requirement already satisfied: joblib>=1.2.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from scikit-learn->neural_compressor==2.6) (1.4.2)
Requirement already satisfied: threadpoolctl>=3.1.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from scikit-learn->neural_compressor==2.6) (3.5.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from matplotlib>=2.1.0->pycocotools->neural_compressor==2.6) (1.3.0)
Requirement already satisfied: cycler>=0.10 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from matplotlib>=2.1.0->pycocotools->neural_compressor==2.6) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from matplotlib>=2.1.0->pycocotools->neural_compressor==2.6) (4.53.1)
Requirement already satisfied: kiwisolver>=1.3.1 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from matplotlib>=2.1.0->pycocotools->neural_compressor==2.6) (1.4.5)
Requirement already satisfied: packaging>=20.0 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from matplotlib>=2.1.0->pycocotools->neural_compressor==2.6) (24.1)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from matplotlib>=2.1.0->pycocotools->neural_compressor==2.6) (3.1.4)
Requirement already satisfied: six>=1.5 in c:\users\arabtech\desktop\1\intel-extension-for-transformers\intel\lib\site-packages (from python-dateutil>=2.8.2->pandas->neural_compressor==2.6) (1.16.0)
Using cached neural_compressor-2.6-py3-none-any.whl (1.5 MB)
Installing collected packages: neural_compressor
Attempting uninstall: neural_compressor
Found existing installation: neural_compressor 3.0
Uninstalling neural_compressor-3.0:
Successfully uninstalled neural_compressor-3.0
Successfully installed neural_compressor-2.6

(intel) C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>
(intel) C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>python run_translation.py --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" --do_predict --source_lang en --target_lang ro --source_prefix "translate English to Romanian: " --input_file input.txt --output_file output.txt --per_device_eval_batch_size 4 --predict_with_generate
C:\Users\ArabTech\AppData\Local\Programs\Python\Python311\python.exe: can't open file 'C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\run_translation.py': [Errno 2] No such file or directory

(intel) C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers>cd..

(intel) C:\Users\ArabTech\Desktop\1>
(intel) C:\Users\ArabTech\Desktop\1>python run_translation.py --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" --do_predict --source_lang en --target_lang ro --source_prefix "translate English to Romanian: " --input_file input.txt --output_file output.txt --per_device_eval_batch_size 4 --predict_with_generate
usage: run_translation.py [-h] --model_name_or_path MODEL_NAME_OR_PATH [--config_name CONFIG_NAME]
[--tokenizer_name TOKENIZER_NAME] [--cache_dir CACHE_DIR]
[--use_fast_tokenizer [USE_FAST_TOKENIZER]] [--no_use_fast_tokenizer]
[--model_revision MODEL_REVISION] [--use_auth_token [USE_AUTH_TOKEN]]
[--source_lang SOURCE_LANG] [--target_lang TARGET_LANG] [--dataset_name DATASET_NAME]
[--dataset_config_name DATASET_CONFIG_NAME] [--train_file TRAIN_FILE]
[--validation_file VALIDATION_FILE] [--test_file TEST_FILE]
[--overwrite_cache [OVERWRITE_CACHE]]
[--preprocessing_num_workers PREPROCESSING_NUM_WORKERS]
[--max_source_length MAX_SOURCE_LENGTH] [--max_target_length MAX_TARGET_LENGTH]
[--val_max_target_length VAL_MAX_TARGET_LENGTH] [--pad_to_max_length [PAD_TO_MAX_LENGTH]]
[--max_train_samples MAX_TRAIN_SAMPLES] [--max_eval_samples MAX_EVAL_SAMPLES]
[--max_predict_samples MAX_PREDICT_SAMPLES] [--num_beams NUM_BEAMS]
[--ignore_pad_token_for_loss [IGNORE_PAD_TOKEN_FOR_LOSS]] [--no_ignore_pad_token_for_loss]
[--source_prefix SOURCE_PREFIX] [--forced_bos_token FORCED_BOS_TOKEN] --output_dir
OUTPUT_DIR [--overwrite_output_dir [OVERWRITE_OUTPUT_DIR]] [--do_train [DO_TRAIN]]
[--do_eval [DO_EVAL]] [--do_predict [DO_PREDICT]] [--eval_strategy {no,steps,epoch}]
[--prediction_loss_only [PREDICTION_LOSS_ONLY]]
[--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE]
[--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE]
[--per_gpu_train_batch_size PER_GPU_TRAIN_BATCH_SIZE]
[--per_gpu_eval_batch_size PER_GPU_EVAL_BATCH_SIZE]
[--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
[--eval_accumulation_steps EVAL_ACCUMULATION_STEPS] [--eval_delay EVAL_DELAY]
[--torch_empty_cache_steps TORCH_EMPTY_CACHE_STEPS] [--learning_rate LEARNING_RATE]
[--weight_decay WEIGHT_DECAY] [--adam_beta1 ADAM_BETA1] [--adam_beta2 ADAM_BETA2]
[--adam_epsilon ADAM_EPSILON] [--max_grad_norm MAX_GRAD_NORM]
[--num_train_epochs NUM_TRAIN_EPOCHS] [--max_steps MAX_STEPS]
[--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup,inverse_sqrt,reduce_lr_on_plateau,cosine_with_min_lr,warmup_stable_decay}]
[--lr_scheduler_kwargs LR_SCHEDULER_KWARGS] [--warmup_ratio WARMUP_RATIO]
[--warmup_steps WARMUP_STEPS]
[--log_level {detail,debug,info,warning,error,critical,passive}]
[--log_level_replica {detail,debug,info,warning,error,critical,passive}]
[--log_on_each_node [LOG_ON_EACH_NODE]] [--no_log_on_each_node] [--logging_dir LOGGING_DIR]
[--logging_strategy {no,steps,epoch}] [--logging_first_step [LOGGING_FIRST_STEP]]
[--logging_steps LOGGING_STEPS] [--logging_nan_inf_filter [LOGGING_NAN_INF_FILTER]]
[--no_logging_nan_inf_filter] [--save_strategy {no,steps,epoch}] [--save_steps SAVE_STEPS]
[--save_total_limit SAVE_TOTAL_LIMIT] [--save_safetensors [SAVE_SAFETENSORS]]
[--no_save_safetensors] [--save_on_each_node [SAVE_ON_EACH_NODE]]
[--save_only_model [SAVE_ONLY_MODEL]]
[--restore_callback_states_from_checkpoint [RESTORE_CALLBACK_STATES_FROM_CHECKPOINT]]
[--no_cuda [NO_CUDA]] [--use_cpu [USE_CPU]] [--use_mps_device [USE_MPS_DEVICE]]
[--seed SEED] [--data_seed DATA_SEED] [--jit_mode_eval [JIT_MODE_EVAL]]
[--use_ipex [USE_IPEX]] [--bf16 [BF16]] [--fp16 [FP16]] [--fp16_opt_level FP16_OPT_LEVEL]
[--half_precision_backend {auto,apex,cpu_amp}] [--bf16_full_eval [BF16_FULL_EVAL]]
[--fp16_full_eval [FP16_FULL_EVAL]] [--tf32 TF32] [--local_rank LOCAL_RANK]
[--ddp_backend {nccl,gloo,mpi,ccl,hccl,cncl}] [--tpu_num_cores TPU_NUM_CORES]
[--tpu_metrics_debug [TPU_METRICS_DEBUG]] [--debug DEBUG [DEBUG ...]]
[--dataloader_drop_last [DATALOADER_DROP_LAST]] [--eval_steps EVAL_STEPS]
[--dataloader_num_workers DATALOADER_NUM_WORKERS]
[--dataloader_prefetch_factor DATALOADER_PREFETCH_FACTOR] [--past_index PAST_INDEX]
[--run_name RUN_NAME] [--disable_tqdm DISABLE_TQDM]
[--remove_unused_columns [REMOVE_UNUSED_COLUMNS]] [--no_remove_unused_columns]
[--label_names LABEL_NAMES [LABEL_NAMES ...]]
[--load_best_model_at_end [LOAD_BEST_MODEL_AT_END]]
[--metric_for_best_model METRIC_FOR_BEST_MODEL] [--greater_is_better GREATER_IS_BETTER]
[--ignore_data_skip [IGNORE_DATA_SKIP]] [--fsdp FSDP]
[--fsdp_min_num_params FSDP_MIN_NUM_PARAMS] [--fsdp_config FSDP_CONFIG]
[--fsdp_transformer_layer_cls_to_wrap FSDP_TRANSFORMER_LAYER_CLS_TO_WRAP]
[--accelerator_config ACCELERATOR_CONFIG] [--deepspeed DEEPSPEED]
[--label_smoothing_factor LABEL_SMOOTHING_FACTOR]
[--optim {adamw_hf,adamw_torch,adamw_torch_fused,adamw_torch_xla,adamw_torch_npu_fused,adamw_apex_fused,adafactor,adamw_anyprecision,sgd,adagrad,adamw_bnb_8bit,adamw_8bit,lion_8bit,lion_32bit,paged_adamw_32bit,paged_adamw_8bit,paged_lion_32bit,paged_lion_8bit,rmsprop,rmsprop_bnb,rmsprop_bnb_8bit,rmsprop_bnb_32bit,galore_adamw,galore_adamw_8bit,galore_adafactor,galore_adamw_layerwise,galore_adamw_8bit_layerwise,galore_adafactor_layerwise,lomo,adalomo}]
[--optim_args OPTIM_ARGS] [--adafactor [ADAFACTOR]] [--group_by_length [GROUP_BY_LENGTH]]
[--length_column_name LENGTH_COLUMN_NAME] [--report_to REPORT_TO]
[--ddp_find_unused_parameters DDP_FIND_UNUSED_PARAMETERS]
[--ddp_bucket_cap_mb DDP_BUCKET_CAP_MB] [--ddp_broadcast_buffers DDP_BROADCAST_BUFFERS]
[--dataloader_pin_memory [DATALOADER_PIN_MEMORY]] [--no_dataloader_pin_memory]
[--dataloader_persistent_workers [DATALOADER_PERSISTENT_WORKERS]]
[--skip_memory_metrics [SKIP_MEMORY_METRICS]] [--no_skip_memory_metrics]
[--use_legacy_prediction_loop [USE_LEGACY_PREDICTION_LOOP]] [--push_to_hub [PUSH_TO_HUB]]
[--resume_from_checkpoint RESUME_FROM_CHECKPOINT] [--hub_model_id HUB_MODEL_ID]
[--hub_strategy {end,every_save,checkpoint,all_checkpoints}] [--hub_token HUB_TOKEN]
[--hub_private_repo [HUB_PRIVATE_REPO]] [--hub_always_push [HUB_ALWAYS_PUSH]]
[--gradient_checkpointing [GRADIENT_CHECKPOINTING]]
[--gradient_checkpointing_kwargs GRADIENT_CHECKPOINTING_KWARGS]
[--include_inputs_for_metrics [INCLUDE_INPUTS_FOR_METRICS]]
[--eval_do_concat_batches [EVAL_DO_CONCAT_BATCHES]] [--no_eval_do_concat_batches]
[--fp16_backend {auto,apex,cpu_amp}] [--evaluation_strategy {no,steps,epoch}]
[--push_to_hub_model_id PUSH_TO_HUB_MODEL_ID]
[--push_to_hub_organization PUSH_TO_HUB_ORGANIZATION]
[--push_to_hub_token PUSH_TO_HUB_TOKEN] [--mp_parameters MP_PARAMETERS]
[--auto_find_batch_size [AUTO_FIND_BATCH_SIZE]] [--full_determinism [FULL_DETERMINISM]]
[--torchdynamo TORCHDYNAMO] [--ray_scope RAY_SCOPE] [--ddp_timeout DDP_TIMEOUT]
[--torch_compile [TORCH_COMPILE]] [--torch_compile_backend TORCH_COMPILE_BACKEND]
[--torch_compile_mode TORCH_COMPILE_MODE] [--dispatch_batches DISPATCH_BATCHES]
[--split_batches SPLIT_BATCHES] [--include_tokens_per_second [INCLUDE_TOKENS_PER_SECOND]]
[--include_num_input_tokens_seen [INCLUDE_NUM_INPUT_TOKENS_SEEN]]
[--neftune_noise_alpha NEFTUNE_NOISE_ALPHA] [--optim_target_modules OPTIM_TARGET_MODULES]
[--batch_eval_metrics [BATCH_EVAL_METRICS]] [--eval_on_start [EVAL_ON_START]]
[--eval_use_gather_object [EVAL_USE_GATHER_OBJECT]] [--sortish_sampler [SORTISH_SAMPLER]]
[--predict_with_generate [PREDICT_WITH_GENERATE]]
[--generation_max_length GENERATION_MAX_LENGTH]
[--generation_num_beams GENERATION_NUM_BEAMS] [--generation_config GENERATION_CONFIG]
[--tune [TUNE]] [--quantization_approach QUANTIZATION_APPROACH] [--metric_name METRIC_NAME]
[--is_relative [IS_RELATIVE]] [--no_is_relative] [--perf_tol PERF_TOL]
[--benchmark [BENCHMARK]] [--benchmark_only [BENCHMARK_ONLY]] [--int8 [INT8]]
[--accuracy_only [ACCURACY_ONLY]] [--cores_per_instance CORES_PER_INSTANCE]
[--num_of_instance NUM_OF_INSTANCE]
run_translation.py: error: the following arguments are required: --output_dir

(intel) C:\Users\ArabTech\Desktop\1>
(intel) C:\Users\ArabTech\Desktop\1>python run_translation.py ^
More? --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" ^
More? --do_predict ^
More? --source_lang en ^
More? --target_lang ro ^
More? --source_prefix "translate English to Romanian: " ^
More? --input_file input.txt ^
More? --output_file output.txt ^
More? --per_device_eval_batch_size 4 ^
More? --predict_with_generate
run_translation.py: error: the following arguments are required: --output_dir

(intel) C:\Users\ArabTech\Desktop\1>python C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py ^
More? --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" ^
More? --do_predict ^
More? --source_lang en ^
More? --target_lang ro ^
More? --source_prefix "translate English to Romanian: " ^
More? --input_file input.txt ^
More? --output_file output.txt ^
More? --per_device_eval_batch_size 4 ^
More? --predict_with_generate
run_translation.py: error: the following arguments are required: --output_dir

(intel) C:\Users\ArabTech\Desktop\1>python C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py ^
More? --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" ^
More? --do_predict ^
More? --source_lang en ^
More? --target_lang ro ^
More? --source_prefix "translate English to Romanian: " ^
More? --input_file input.txt ^
More? --output_file output.txt ^
More? --per_device_eval_batch_size 4 ^
More? --predict_with_generate ^
More? --output_dir "C:\Users\ArabTech\Desktop\1\translation_output"
Traceback (most recent call last):
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py", line 696, in <module>
    main()
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py", line 301, in main
    model_args, data_args, training_args, optim_args = parser.parse_args_into_dataclasses()
                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\intel\Lib\site-packages\transformers\hf_argparser.py", line 339, in parse_args_into_dataclasses
    obj = dtype(**inputs)
          ^^^^^^^^^^^^^^^
  File "<string>", line 23, in __init__
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py", line 223, in __post_init__
    raise ValueError("Need either a dataset name or a training/validation file.")
ValueError: Need either a dataset name or a training/validation file.

(intel) C:\Users\ArabTech\Desktop\1>
(intel) C:\Users\ArabTech\Desktop\1>python C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py ^
More? --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" ^
More? --do_predict ^
More? --source_lang en ^
More? --target_lang ro ^
More? --source_prefix "translate English to Romanian: " ^
More? --input_file input.txt ^
More? --output_file output.txt ^
More? --per_device_eval_batch_size 4 ^
More? --predict_with_generate ^
More? --output_dir "C:\Users\ArabTech\Desktop\1\translation_output" ^
More? --test_file input.txt
Traceback (most recent call last):
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py", line 696, in <module>
    main()
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py", line 301, in main
    model_args, data_args, training_args, optim_args = parser.parse_args_into_dataclasses()
                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\intel\Lib\site-packages\transformers\hf_argparser.py", line 339, in parse_args_into_dataclasses
    obj = dtype(**inputs)
          ^^^^^^^^^^^^^^^
  File "<string>", line 23, in __init__
  File "C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\examples\huggingface\pytorch\translation\quantization\run_translation.py", line 223, in __post_init__
    raise ValueError("Need either a dataset name or a training/validation file.")
ValueError: Need either a dataset name or a training/validation file.

(intel) C:\Users\ArabTech\Desktop\1>python -m transformers.cli.translate ^
More? --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" ^
More? --input_file input.txt ^
More? --output_file output.txt ^
More? --source_lang en ^
More? --target_lang ro
C:\Users\ArabTech\Desktop\1\intel-extension-for-transformers\intel\Scripts\python.exe: Error while finding module specification for 'transformers.cli.translate' (ModuleNotFoundError: No module named 'transformers.cli')

(intel) C:\Users\ArabTech\Desktop\1>

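A note on the root import error in this thread: `neural_compressor.conf` exists in neural_compressor 2.x but not in 3.0, which is why the downgrade to 2.6 above matters. As a minimal sketch (not a fix for the example scripts themselves), the following checks whether the legacy module is importable in a given environment, without crashing when the package is absent:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` can be imported, without actually importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package itself is not installed.
        return False

# On neural_compressor 2.x this should print True; on 3.x it prints False,
# matching the ModuleNotFoundError reported at the top of this issue.
print(has_module("neural_compressor.conf"))
```

Running this before `from intel_extension_for_transformers.transformers import ...` is a quick way to confirm whether the installed neural_compressor version is the source of the error.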
@BlindRusty
Copy link

I have put together a working set of instructions.
Kindly refer to #1695 (comment).
Thanks
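Regarding the "Need either a dataset name or a training/validation file" failures above: the Hugging Face translation example scripts generally expect `--train_file`/`--validation_file`/`--test_file` to point at JSON-lines files containing a `"translation"` field, not a plain-text file like `input.txt`. A hedged sketch of that layout (field names assumed from the HF translation examples, not verified against this exact script):

```python
import json

# Hedged sketch: build one JSON-lines record per source sentence in the
# {"translation": {"en": ..., "ro": ...}} shape the HF translation
# examples generally expect. The empty target string is a placeholder
# for prediction-only runs; the exact requirement may differ per script.
def to_translation_jsonl(sentences, src="en", tgt="ro"):
    return "\n".join(
        json.dumps({"translation": {src: s, tgt: ""}}) for s in sentences
    )

jsonl = to_translation_jsonl(["Hello world"])
```

Writing `jsonl` to e.g. `input.json` and passing that as `--test_file` should at least get past the argument-parsing error, though the exact schema should be checked against the script's README.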
