This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

Error in import: ModuleNotFoundError: No module named 'neural_compressor.conf' #1695

Nicogs43 opened this issue Aug 27, 2024 · 8 comments


@Nicogs43

I installed intel-extension-for-transformers with `pip install intel-extension-for-transformers`, but when I ran a small script to check that it worked I got this error:

```
Traceback (most recent call last):
  File "/home/nico/env/prov.py", line 2, in <module>
    from intel_extension_for_transformers.transformers import AutoModelForCausalLM
  File "/home/nico/env/lib/python3.10/site-packages/intel_extension_for_transformers/transformers/__init__.py", line 19, in <module>
    from .config import (
  File "/home/nico/env/lib/python3.10/site-packages/intel_extension_for_transformers/transformers/config.py", line 21, in <module>
    from neural_compressor.conf.config import (
ModuleNotFoundError: No module named 'neural_compressor.conf'
```

So I tried installing neural-compressor with `pip install neural-compressor[pt]`, but nothing changed.
Then I checked whether the `conf` module exists with `python -c "import neural_compressor; print(dir(neural_compressor))"`, and indeed it does not appear in the list.

Could you please help me? Where am I going wrong? Thank you in advance.
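For anyone hitting the same thing, a quick way to confirm what pip actually installed is to probe for each level of the failing import with `importlib` (a minimal sketch; `has_submodule` is just a hypothetical helper name):

```python
import importlib.util

def has_submodule(dotted: str) -> bool:
    """Return True if the dotted module path resolves to an importable module."""
    try:
        return importlib.util.find_spec(dotted) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is itself missing.
        return False

# The failing import is neural_compressor.conf.config, so check each level.
for mod in ("neural_compressor", "neural_compressor.conf", "neural_compressor.conf.config"):
    print(mod, "->", "found" if has_submodule(mod) else "MISSING")
```

If `neural_compressor` is found but `neural_compressor.conf` is missing, the installed neural-compressor version simply no longer ships that submodule.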

@opticSquid

Yes, I'm facing the same issue.

@anayjain

Same issue as in #1688. If anyone knows how to solve this, please let me know!

@anayjain

I found another closed issue with something similar (#1689), but the solution didn't work for me.

@Nicogs43 (Author)

Nicogs43 commented Aug 28, 2024

> #1689 I found another closed issue with something similar but the solution didn't work for me.

Hi, I also tried installing version 2.6 of neural-compressor, and indeed it no longer gives me the initial error. I don't know whether that is the best way to solve the problem. Anyway, after installing torch, accelerate, datasets, neural-speed, sentencepiece, and peft, I tried to run this script:


```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs)
```

But after downloading the model it gave me this error:

```
Traceback (most recent call last):
  File "/home/nico/prova2/pippo2.py", line 9, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
  File "/home/nico/prova2/lib/python3.10/site-packages/intel_extension_for_transformers/transformers/modeling/modeling_auto.py", line 722, in from_pretrained
    model.init(  # pylint: disable=E1123
  File "/home/nico/prova2/lib/python3.10/site-packages/neural_speed/__init__.py", line 182, in init
    assert os.path.exists(fp32_bin), "Fail to convert pytorch model"
AssertionError: Fail to convert pytorch model
```

So not a great step forward
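That assertion fires when neural-speed cannot find the fp32 `.bin` file it writes during model conversion. One possible (unconfirmed) cause is simply running out of disk space, since the fp32 dump of a 7B model is large. A quick sanity check, assuming the default Hugging Face cache location unless `HF_HOME` is set:

```python
import os
import shutil

# Assumed default Hugging Face cache dir; adjust if HF_HOME points elsewhere.
cache_dir = os.path.expanduser(os.environ.get("HF_HOME", "~/.cache/huggingface"))

# Free space on the filesystem holding the home directory (and usually the cache).
usage = shutil.disk_usage(os.path.expanduser("~"))
print(f"HF cache dir: {cache_dir}")
print(f"free disk:    {usage.free / 1e9:.1f} GB")
```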

@ayttop

ayttop commented Aug 29, 2024

```
(1) C:\Users\ArabTech\Desktop\1>python run_translation.py --model_name_or_path "C:\Users\ArabTech\Desktop\1\google\flan-t5-small" --do_predict --source_lang en --target_lang ro --source_prefix "translate English to Romanian: " --input_file input.txt --output_file output.txt --per_device_eval_batch_size 4 --predict_with_generate
Traceback (most recent call last):
  File "C:\Users\ArabTech\Desktop\1\run_translation.py", line 31, in <module>
    from intel_extension_for_transformers.transformers import OptimizedModel, objectives, metrics
  File "C:\Users\ArabTech\anaconda3\envs\1\Lib\site-packages\intel_extension_for_transformers\transformers\__init__.py", line 19, in <module>
    from .config import (
  File "C:\Users\ArabTech\anaconda3\envs\1\Lib\site-packages\intel_extension_for_transformers\transformers\config.py", line 21, in <module>
    from neural_compressor.conf.config import (
ModuleNotFoundError: No module named 'neural_compressor.conf'
```

@BlindRusty

BlindRusty commented Sep 9, 2024

Working Environment

The following solution/approach works on a specific configuration. Please make sure that you have the right GPU support installed on your system.

Config:

- Machine: Local
- GPU: Intel ARC A770 16 GB
- OS type: Linux
- Linux distro: KDE Neon
- Environment type: pip

pip list

The following packages are installed. Make sure you install the libraries at exactly these versions in your conda or pip env.

accelerate==0.34.2 aiohappyeyeballs==2.4.0 aiohttp==3.10.5 aiosignal==1.3.1 annotated-types==0.7.0 anyio==4.4.0 async-timeout==4.0.3 attrs==24.2.0 certifi==2024.8.30 charset-normalizer==3.3.2 click==8.1.7 contourpy==1.3.0 cycler==0.12.1 datasets==2.21.0 Deprecated==1.2.14 diffusers==0.30.2 dill==0.3.8 dpcpp-cpp-rt==2024.2.1 dpctl==0.17.0 einops==0.8.0 exceptiongroup==1.2.2 fastapi==0.114.0 fastchat==0.1.0 filelock==3.16.0 fonttools==4.53.1 frozenlist==1.4.1 fsspec==2024.6.1 h11==0.14.0 huggingface-hub==0.24.6 idna==3.8 importlib_metadata==8.4.0 intel-cmplr-lib-rt==2024.2.1 intel-cmplr-lib-ur==2024.2.1 intel-cmplr-lic-rt==2024.2.1 intel-extension-for-transformers==1.4.2 intel-opencl-rt==2024.2.1 intel-openmp==2024.2.1 intel-sycl-rt==2024.2.1 intel_extension_for_pytorch==2.1.40+xpu Jinja2==3.1.4 joblib==1.4.2 kiwisolver==1.4.7 MarkupSafe==2.1.5 matplotlib==3.9.2 mpmath==1.3.0 multidict==6.0.5 multiprocess==0.70.16 networkx==3.3 neural-speed==1.0 neural_compressor==2.6 numpy==1.24.1 oneccl-bind-pt==2.1.400+xpu opencv-python-headless==4.10.0.84 packaging==24.1 pandas==2.2.2 peft==0.12.0 pillow==10.4.0 prettytable==3.11.0 protobuf==3.20.0 psutil==6.0.0 py-cpuinfo==9.0.0 pyarrow==17.0.0 pybind11==2.13.5 pycocotools==2.0.8 pydantic==2.9.1 pydantic_core==2.23.3 pyparsing==3.1.4 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.2 regex==2024.7.24 requests==2.32.3 safetensors==0.4.5 schema==0.7.7 scikit-learn==1.5.1 scipy==1.14.1 sentencepiece==0.2.0 six==1.16.0 sniffio==1.3.1 starlette==0.38.5 sympy==1.13.2 tbb==2021.13.1 threadpoolctl==3.5.0 tiktoken==0.7.0 tokenizers==0.19.1 torch==2.1.0.post3+cxx11.abi torchaudio==2.1.0.post3+cxx11.abi torchvision==0.16.0.post3+cxx11.abi tqdm==4.66.5 transformers==4.44.2 transformers-stream-generator==0.0.5 typing_extensions==4.12.2 tzdata==2024.1 urllib3==2.2.2 uvicorn==0.30.6 wcwidth==0.2.13 wrapt==1.16.0 xxhash==3.5.0 yacs==0.1.8 yarl==1.11.0 zipp==3.20.1
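Since the exact versions matter here, the key pins from the list above can be sanity-checked programmatically; a minimal sketch using `importlib.metadata` (`check_pins` is a hypothetical helper, and the package selection is just the handful most relevant to this issue):

```python
from importlib.metadata import version, PackageNotFoundError

# A few of the pins from the list above that matter most for this issue.
PINS = {
    "neural_compressor": "2.6",
    "neural-speed": "1.0",
    "intel-extension-for-transformers": "1.4.2",
    "transformers": "4.44.2",
}

def check_pins(pins):
    """Map each package name to (installed_version_or_None, matches_pin)."""
    report = {}
    for name, wanted in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            installed = None
        report[name] = (installed, installed == wanted)
    return report

for name, (installed, ok) in check_pins(PINS).items():
    print(f"{name}: {installed or 'NOT INSTALLED'} {'OK' if ok else '!= pinned'}")
```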

After you have installed the libraries at their respective versions (it is imperative to match them exactly), I ran this code snippet:

```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"  # Hugging Face model_id or local model
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```

with this output:

```
init: n_vocab    = 32000
init: n_ctx      = 0
init: n_embd     = 4096
init: n_mult     = 256
init: n_head     = 32
init: n_head_kv  = 8
init: n_layer    = 32
init: n_rot      = 128
init: n_ff       = 14336
init: n_parts    = 1
load: ctx size   = 4316.89 MB
load: scratch0   = 4096.00 MB
load: scratch1   = 2048.00 MB
load: scratch2   = 4096.00 MB
load: mem required  = 14556.89 MB (+ memory per state)
...................................................................................................
model_init_from_file: support_bestla_kv = 1
model_init_from_file: kv self size =  136.50 MB
```
<s> Once upon a time, there existed a little girl, who was born with a gift. She could see the world in a different way. She could see the world through the eyes of a child. She could see the world as it was, and as it could be.

She could see the world as it was, and she saw that it was filled with beauty and wonder. She saw that there were people who loved and cared for one another. She saw that there were people who were kind and generous. She saw that there were people who were brave and strong.

She could see the world as it could be, and she saw that it could be filled with even more beauty and wonder. She saw that people could learn to love and care for one another even more deeply. She saw that people could become kinder and more generous. She saw that people could become braver and stronger.

The little girl knew that she had a special gift, and she wanted to share it with the world. She wanted to help people see the world through her eyes, so that they could experience the beauty and wonder that she saw. She wanted to inspire people to become better versions of themselves, to love more, to care more, to be kinder, more generous, braver, and stronger.

So, the little girl began to share her gift with others. She started by telling her stories to her friends and family. She shared her stories with her teachers and her classmates. She even shared her stories with people

Attached the `intel_gpu_top` output showing GPU usage (non-memory):
[screenshot: usage_neural]

@c2rx

c2rx commented Sep 24, 2024

Same problem, please help! @kevinintel

@arieltoledo

Same problem here!


7 participants