Torch errors


Import errors

ImportError: dlopen: cannot load any more object with static TLS

ImportError: Failed to import test module: langmodel.decode_text
Traceback (most recent call last):
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
    module = self._get_module_from_name(name)
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
    __import__(name)
  File "/home/travis/build/hlibbabii/log-recommender/tests/langmodel/decode_text.py", line 3, in <module>
    from logrec.langmodel.utils import beautify_text
  File "/home/travis/build/hlibbabii/log-recommender/tests/../logrec/langmodel/utils.py", line 5, in <module>
    from fastai.core import to_np, to_gpu
  File "/home/travis/build/hlibbabii/log-recommender/logrec/../../fastai-fork/fastai/core.py", line 2, in <module>
    from .torch_imports import *
  File "/home/travis/build/hlibbabii/log-recommender/logrec/../../fastai-fork/fastai/torch_imports.py", line 3, in <module>
    import torch, torchvision, torchtext
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/site-packages/torch/__init__.py", line 56, in <module>
    from torch._C import *
ImportError: dlopen: cannot load any more object with static TLS

Solution (not sure)

Try moving import torch earlier or later in the import order; the failure depends on which shared libraries have already been loaded.
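The static TLS pool is exhausted by libraries loaded earlier, so the usual workaround is to load torch before other compiled-extension packages. A minimal sketch, assuming the project's entry script:

# Importing torch first lets its shared objects claim static TLS slots
# before other native libraries (e.g. scipy, cv2) use them up.
import torch  # keep this above all other compiled-extension imports

from logrec.langmodel.utils import beautify_text  # remaining imports follow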

AttributeError: module 'torch' has no attribute 'float32'

ERROR: classifier.dataset_generator (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: classifier.dataset_generator
Traceback (most recent call last):
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
    module = self._get_module_from_name(name)
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
    __import__(name)
  File "/home/travis/build/hlibbabii/log-recommender/tests/classifier/dataset_generator.py", line 3, in <module>
    from logrec.classifier.dataset_generator import create_case
  File "/home/travis/build/hlibbabii/log-recommender/tests/../logrec/classifier/dataset_generator.py", line 7, in <module>
    from logrec.classifier.context_datasets import ContextsDataset, get_dir_and_file, WORDS_IN_CONTEXT_LIMIT
  File "/home/travis/build/hlibbabii/log-recommender/tests/../logrec/classifier/context_datasets.py", line 5, in <module>
    from torchtext import data
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/site-packages/torchtext/__init__.py", line 1, in <module>
    from . import data
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/site-packages/torchtext/data/__init__.py", line 4, in <module>
    from .field import RawField, Field, ReversibleField, SubwordField, NestedField, LabelField
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/site-packages/torchtext/data/field.py", line 61, in <module>
    class Field(RawField):
  File "/home/travis/miniconda/envs/fastai/lib/python3.6/site-packages/torchtext/data/field.py", line 118, in Field
    torch.float32: float,
AttributeError: module 'torch' has no attribute 'float32'

Solution

in fastai/environment.yml:

  • torchtext==0.2.3
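A sketch of the relevant fragment (the surrounding keys in fastai/environment.yml are assumed):

# environment.yml excerpt -- pin torchtext to a release that still works
# with a torch build predating torch.float32
dependencies:
  - pip:
    - torchtext==0.2.3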

RuntimeError: given sequence has an invalid size of dimension 2: 0

File "logrec/classifier/log_position_classifier.py", line 171, in <module>
    run(args.force_rerun)
  File "logrec/classifier/log_position_classifier.py", line 157, in run
    show_tests(fs.classification_test_path, model, text_field)
  File "logrec/classifier/log_position_classifier.py", line 113, in show_tests
    output_predictions(model, text_field, LEVEL_LABEL, context.rstrip("\n"), 2, label.rstrip("\n"))
  File "/home/lv71161/hlibbabii/log-recommender/logrec/langmodel/utils.py", line 21, in output_predictions
    t=to_gpu(input_field.numericalize(words, -1))
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torchtext/data/field.py", line 310, in numericalize
    arr = self.tensor_type(arr)
RuntimeError: given sequence has an invalid size of dimension 2: 0

Solution

The text passed to the field in this example is an empty list. Make sure the context is non-empty before numericalizing.
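A minimal guard, assuming it sits in output_predictions right before the failing call:

# torchtext cannot build a tensor from a zero-length sequence, so skip
# empty contexts before numericalizing (hypothetical check).
words = [context.rstrip("\n").split()]
if not words[0]:
    return  # nothing to predict on
t = to_gpu(input_field.numericalize(words, -1))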

ZeroDivisionError: Weights sum to zero, can't be normalized

Solution

Increase the amount of data or decrease bptt.
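The message itself comes from numpy.average being handed an all-zero weight vector, which happens when every validation batch ends up empty. A quick reproduction:

import numpy as np

# Raises exactly this error: with zero total weight there is nothing
# to normalize the average by.
np.average([0.0], weights=[0.0])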

RuntimeError: matrix and vector expected, got 3D, 2D

logits = torch.mv(valid_pointer_history, rnn_out[idx])

RuntimeError: matrix and vector expected, got 3D, 2D at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/TH/generic/THTensorMath.c:1324:

Solution

The batch size should be 1; torch.mv expects a matrix and a vector, and a larger batch leaves the tensors 3-D and 2-D.
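torch.mv multiplies a 2-D matrix by a 1-D vector, so the call only works once the batch dimension can be squeezed away. A sketch of the shape contract (all sizes are made up for illustration):

import torch

valid_pointer_history = torch.randn(100, 1, 300)  # (seq_len, batch=1, hidden)
rnn_out = torch.randn(10, 1, 300)                 # (steps, batch=1, hidden)

# With batch size 1, squeezing yields the (matrix, vector) pair that
# torch.mv expects; a bigger batch leaves 3-D/2-D tensors and fails.
logits = torch.mv(valid_pointer_history.squeeze(1), rnn_out[0].squeeze(0))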

ValueError: len() should return >= 0

File "/home/hlib/thesis/fastai-fork/fastai/model.py", line 180, in fit validate_skip=validate_skip, text_field=text_field) File "/home/hlib/thesis/fastai-fork/fastai/model.py", line 257, in validate for (*x, y) in iter(dl): File "/home/hlib/thesis/fastai-fork/fastai/nlp.py", line 133, in next if self.i >= self.n-1 or self.iter>=len(self): raise StopIteration ValueError: len() should return >= 0

Solution

Try setting a smaller bptt?
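If this fastai version computes the loader length as roughly n // bptt - 1 (an assumption about its nlp.py), a validation split shorter than one bptt window drives len() negative:

# Hypothetical numbers: a split with fewer tokens than bptt.
n_val_tokens, bptt = 500, 990
print(n_val_tokens // bptt - 1)  # -1, which len() is not allowed to return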

ValueError: Expected more than 1 value per channel when training, got input size [1, 900]

File "logrec/classifier/log_position_classifier.py", line 215, in <module>
    run(args.force_rerun)
  File "logrec/classifier/log_position_classifier.py", line 194, in run
    train(fs, learner, classifier_training_param.classifier_training)
  File "logrec/classifier/log_position_classifier.py", line 99, in train
    file=open(training_log_file, 'w+')
  File "/home/lv71161/hlibbabii/fastai/fastai/learner.py", line 293, in fit
    return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
  File "/home/lv71161/hlibbabii/fastai/fastai/learner.py", line 240, in fit_gen
    swa_eval_freq=swa_eval_freq, text_field=self.text_field, **kwargs)
  File "/home/lv71161/hlibbabii/fastai/fastai/model.py", line 153, in fit
    loss = model_stepper.step(V(x),V(y), epoch)
  File "/home/lv71161/hlibbabii/fastai/fastai/model.py", line 50, in step
    output = self.m(*xs)
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lv71161/hlibbabii/fastai/fastai/lm_rnn.py", line 218, in forward
    l_x = l(x)
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lv71161/hlibbabii/fastai/fastai/lm_rnn.py", line 197, in forward
    def forward(self, x): return self.lin(self.drop(self.bn(x)))
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 37, in forward
    self.training, self.momentum, self.eps)
  File "/home/lv71161/hlibbabii/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/functional.py", line 1011, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size [1, 900]

Solution

BatchNorm cannot compute per-channel statistics over a single example in training mode; the input size [1, 900] means a batch of one reached the classifier's BatchNorm layer. Make sure no training batch has size 1, e.g. by dropping the last incomplete batch or picking a batch size that does not leave a single leftover example.
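A minimal reproduction of the constraint:

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(900)
bn.train()
bn(torch.randn(2, 900))  # fine: statistics computable over 2 samples
bn(torch.randn(1, 900))  # ValueError: Expected more than 1 value per channel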
