
"NameError: name 'torch' is not defined" but torch is installed and imported #36

Closed
giulianoformisano opened this issue Dec 27, 2022 · 9 comments

Comments

@giulianoformisano

After creating a virtual environment, I tried to install and import m3inference:

pip install m3inference
import m3inference

But I get the following error. How can I fix it?

NameError                                 Traceback (most recent call last)
<ipython-input-9-50ee37ff85fa> in <module>
----> 1 import m3inference

3 frames
/usr/local/lib/python3.8/dist-packages/m3inference/full_model.py in M3InferenceModel()
     10 
     11 class M3InferenceModel(nn.Module):
---> 12     def __init__(self, device='cuda' if torch.cuda.is_available() else 'cpu'):
     13         super(M3InferenceModel, self).__init__()
     14 

NameError: name 'torch' is not defined

I tried to install and import torch before doing the same with m3inference.

Thanks!

@computermacgyver
Member

Can you please follow the directions at https://pytorch.org/get-started/locally/ to install PyTorch?

If it still does not work, please provide the specific versions of the packages you have installed (pip freeze).

@zijwang
Member

zijwang commented Dec 27, 2022

@giulianoformisano could you verify whether you installed torch correctly? You could try import torch in a Python console inside your virtualenv.

@giulianoformisano
Author

Thanks a lot. Unfortunately, I couldn't fix the issue by following the instructions (https://pytorch.org/get-started/locally/).

Please find all my installed packages:

appnope==0.1.3
asttokens==2.2.1
backcall==0.2.0
certifi==2022.12.7
charset-normalizer==2.1.1
comm==0.1.2
contourpy==1.0.6
cycler==0.11.0
debugpy==1.6.4
decorator==5.1.1
entrypoints==0.4
executing==1.2.0
fonttools==4.38.0
idna==3.4
ipykernel==6.19.4
ipython==8.7.0
jedi==0.18.2
jupyter_client==7.4.8
jupyter_core==5.1.1
kiwisolver==1.4.4
m3inference==1.1.5
matplotlib==3.6.2
matplotlib-inline==0.1.6
nest-asyncio==1.5.6
numpy==1.24.1
packaging==22.0
pandas==1.5.2
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.3.0
platformdirs==2.6.0
prompt-toolkit==3.0.36
psutil==5.9.4
ptyprocess==0.7.0
pure-eval==0.2.2
pycld2==0.41
Pygments==2.13.0
pyparsing==3.0.9
python-dateutil==2.8.2
pytz==2022.7
pyzmq==24.0.1
rauth==0.7.3
requests==2.28.1
seaborn==0.12.1
six==1.16.0
stack-data==0.6.2
torch==1.13.1
torchvision==0.14.1
tornado==6.2
tqdm==4.64.1
traitlets==5.8.0
typing_extensions==4.4.0
urllib3==1.26.13
wcwidth==0.2.5

@wdwgonzales

@computermacgyver @zijwang

Everything was working fine earlier this year. When I tried running the script this time around, I encountered the same problem as @giulianoformisano. Some context: I work on an M1 Mac, and torch seems to be working just fine for me.

(base) ➜  scripts python m3twitter.py --id=19854920 --auth auth.txt --skip-cache
Traceback (most recent call last):
  File "/Users/wdwg/Desktop/scripts/m3twitter.py", line 4, in <module>
    from m3inference import M3Twitter
  File "/Users/wdwg/opt/anaconda3/lib/python3.9/site-packages/m3inference/__init__.py", line 1, in <module>
    from .m3inference import M3Inference
  File "/Users/wdwg/opt/anaconda3/lib/python3.9/site-packages/m3inference/m3inference.py", line 14, in <module>
    from .full_model import M3InferenceModel
  File "/Users/wdwg/opt/anaconda3/lib/python3.9/site-packages/m3inference/full_model.py", line 11, in <module>
    class M3InferenceModel(nn.Module):
  File "/Users/wdwg/opt/anaconda3/lib/python3.9/site-packages/m3inference/full_model.py", line 12, in M3InferenceModel
    def __init__(self, device='cuda' if torch.cuda.is_available() else 'cpu'):
NameError: name 'torch' is not defined

I tried to fix it by adding import torch, torchvision to the downloaded scripts. I also ran the following command to update PyTorch:

conda install pytorch torchvision torchaudio -c pytorch-nightly

After that, I added map_location=torch.device('mps') as an argument wherever there is torch.load(...) in the scripts, specifically in the m3inference.py file. I selected MPS because of a recent PyTorch update; I tried "cpu" but it didn't work.
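The map_location workaround described above can be sketched as follows. This is a minimal, self-contained illustration using a toy model, not m3inference's actual code, and the device-selection logic is my own assumption:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Pick a device: MPS on Apple silicon if the torch build supports it,
# otherwise CUDA if available, otherwise CPU.
mps_ok = getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available()
device = "mps" if mps_ok else ("cuda" if torch.cuda.is_available() else "cpu")

# Save a toy state dict, then reload it with map_location, mimicking how the
# comment above patches the torch.load(...) calls in m3inference.py so that
# checkpoint tensors saved on one device can be loaded on another.
model = nn.Linear(4, 2)
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "toy.mdl")
    torch.save(model.state_dict(), path)
    state = torch.load(path, map_location=torch.device(device))

model.load_state_dict(state)
```

Without map_location, torch.load tries to restore each tensor onto the device it was saved from, which fails on machines that lack that device.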

Anyway, running the program with the above adjustments partially fixed the problem. Now I am stuck with a segmentation fault.

(base) ➜  scripts python m3twitter.py --id=19854920 --auth auth.txt --skip-cache
12/31/2022 17:46:16 - INFO - m3inference.m3inference -   Version 1.1.5
12/31/2022 17:46:16 - INFO - m3inference.m3inference -   Running on cpu.
12/31/2022 17:46:16 - INFO - m3inference.m3inference -   Will use full M3 model.
12/31/2022 17:46:17 - INFO - m3inference.m3inference -   Model full_model exists at /Users/wdwg/m3/models/full_model.mdl.
12/31/2022 17:46:17 - INFO - m3inference.utils -   Checking MD5 for model full_model at /Users/wdwg/m3/models/full_model.mdl
12/31/2022 17:46:17 - INFO - m3inference.utils -   MD5s match.
12/31/2022 17:46:18 - INFO - m3inference.m3inference -   Loaded pretrained weight at /Users/wdwg/m3/models/full_model.mdl
/Users/wdwg/Desktop/scripts
12/31/2022 17:46:18 - INFO - m3inference.m3twitter -   skip_cache is True. Fetching data from Twitter for id 19854920.
12/31/2022 17:46:18 - INFO - m3inference.m3twitter -   GET /users/show.json?id=19854920
12/31/2022 17:46:18 - INFO - m3inference.dataset -   1 data entries loaded.
Predicting...:   0%|                                                    | 0/1 [00:00<?, ?it/s][1]    
27886 segmentation fault  python m3twitter.py --id=19854920 --auth auth.txt --skip-cache
(base) ➜  scripts /Users/wdwg/opt/anaconda3/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 12 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

I use this package heavily (and cite it whenever I can). It would be a shame for it not to work in future projects. :(

@wdwgonzales

Update:

I switched environments to pytorch-nightly (conda activate torch-nightly), made the same modifications as detailed above, and it worked:

(torch-nightly) ➜  scripts python m3twitter.py --screen-name=barackobama --auth auth.txt
12/31/2022 18:13:52 - INFO - m3inference.m3inference -   Version 1.1.5
12/31/2022 18:13:52 - INFO - m3inference.m3inference -   Running on cpu.
12/31/2022 18:13:52 - INFO - m3inference.m3inference -   Will use full M3 model.
12/31/2022 18:13:53 - INFO - m3inference.m3inference -   Model full_model exists at /Users/wdwg/m3/models/full_model.mdl.
12/31/2022 18:13:53 - INFO - m3inference.utils -   Checking MD5 for model full_model at /Users/wdwg/m3/models/full_model.mdl
12/31/2022 18:13:53 - INFO - m3inference.utils -   MD5s match.
12/31/2022 18:13:54 - INFO - m3inference.m3inference -   Loaded pretrained weight at /Users/wdwg/m3/models/full_model.mdl
/Users/wdwg/Desktop/scripts
12/31/2022 18:13:54 - INFO - m3inference.m3twitter -   Results not in cache. Fetching data from Twitter for barackobama.
12/31/2022 18:13:54 - INFO - m3inference.m3twitter -   GET /users/show.json?screen_name=barackobama
12/31/2022 18:13:55 - INFO - m3inference.dataset -   1 data entries loaded.
Predicting...:   0%|                                                                                                      | 0/1 [00:00<?, ?it/s][W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
Predicting...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:07<00:00,  7.58s/it]
{'input': {'description': 'Dad, husband, President, citizen.',
           'id': '813286',
           'img_path': '/Users/wdwg/m3/cache/813286_224x224.jpg',
           'lang': 'en',
           'name': 'Barack Obama',
           'screen_name': 'BarackObama'},
 'output': {'age': {'19-29': 0.0003,
                    '30-39': 0.0003,
                    '<=18': 0.0004,
                    '>=40': 0.9991},
            'gender': {'female': 0.0004, 'male': 0.9996},
            'org': {'is-org': 0.0046, 'non-org': 0.9954}}}

@computermacgyver
Member

Interesting. Thanks, @wdwgonzales. It sounds like the segmentation fault from #26 is solved by using the latest (nightly) build and adding map_location=torch.device('mps') as an argument wherever there is torch.load(...). That is probably something we can do automatically if we detect that the computer has an M1/arm64 architecture.
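That automatic detection could look something like this. The function name and structure are hypothetical, not existing m3inference code:

```python
import platform

def default_map_location():
    """Pick a map_location for torch.load based on the host machine.

    On an Apple-silicon (arm64) Mac, return "mps"; everywhere else,
    fall back to "cpu". Purely illustrative.
    """
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mps"
    return "cpu"

print(default_map_location())
```

The result could then be passed as map_location to every torch.load call instead of hard-coding a device.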

What computer architecture and OS are you using @giulianoformisano ? Also what version of Python?

@giulianoformisano
Author

Thanks a lot for your input! I followed @wdwgonzales's procedure, but it didn't fix the issue.

I am currently using an M2 Mac with Python 3.9.6 and 3.8.8. I also tried the same procedure on Windows Server 2012 but got the same error.

@MuraliRamRavipati

Downgrading torch to 1.12.1 and torchvision to 0.13.1 worked for me.

@giulianoformisano
Author

@MuraliRamRavipati thank you so much! Downgrading torch to 1.12.1 and torchvision to 0.13.1 fixed the issue.

I created a new environment (Python 3.8.8) and followed @MuraliRamRavipati's suggestion.
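For reference, here is a small stdlib-only check (my own sketch, not part of m3inference) that the installed versions match the pin reported in this thread, without importing the packages themselves:

```python
from importlib.metadata import PackageNotFoundError, version

def check_pins(pins):
    """Return {package: (installed_version_or_None, matches_pin)}."""
    out = {}
    for pkg, wanted in pins.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            installed = None
        out[pkg] = (installed, installed == wanted)
    return out

# Versions that resolved this issue, per the comments above.
result = check_pins({"torch": "1.12.1", "torchvision": "0.13.1"})
print(result)
```

Running this inside the new environment makes it easy to confirm the downgrade actually took effect before retrying m3inference.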
