I followed the tutorial, but I run into the problem below (my training flow works fine with MaskFormer and Mask2Former):
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[18], line 5
      3 print(f"Epoca:{epoch}")
      4 model.train()
----> 5 for idx, batch in enumerate(tqdm(train_dataloader)):
      6     optimizer.zero_grad()
      8     for mask in batch["mask_labels"]:

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/tqdm/notebook.py:250, in tqdm_notebook.__iter__(self)
    248 try:
    249     it = super().__iter__()
--> 250     for obj in it:
    251         # return super(tqdm...) will not catch exception
    252         yield obj
    253 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/tqdm/std.py:1181, in tqdm.__iter__(self)
   1178 time = self._time
   1180 try:
-> 1181     for obj in iterable:
   1182         yield obj
   1183         # Update and possibly print the progressbar.
   1184         # Note: does not call self.update(1) for speed optimisation.

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/torch/utils/data/dataloader.py:633, in _BaseDataLoaderIter.__next__(self)
    630 if self._sampler_iter is None:
    631     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    632     self._reset()  # type: ignore[call-arg]
--> 633 data = self._next_data()
    634 self._num_yielded += 1
    635 if self._dataset_kind == _DatasetKind.Iterable and \
    636         self._IterableDataset_len_called is not None and \
    637         self._num_yielded > self._IterableDataset_len_called:

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/torch/utils/data/dataloader.py:677, in _SingleProcessDataLoaderIter._next_data(self)
    675 def _next_data(self):
    676     index = self._next_index()  # may raise StopIteration
--> 677     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    678     if self._pin_memory:
    679         data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py:54, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
     52 else:
     53     data = self.dataset[possibly_batched_index]
---> 54 return self.collate_fn(data)

Cell In[11], line 6, in collate_fn(batch)
      3 images = inputs[0]
      4 segmentation_maps = inputs[1]
----> 6 batch = preprocessor(
      7     images,
      8     segmentation_maps=segmentation_maps,
      9     task_inputs=["semantic"],
     10     return_tensors="pt",
     11 )
     13 batch["original_images"] = inputs[2]
     14 batch["original_segmentation_maps"] = inputs[3]

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/transformers/models/oneformer/image_processing_oneformer.py:538, in OneFormerImageProcessor.__call__(self, images, task_inputs, segmentation_maps, **kwargs)
    537 def __call__(self, images, task_inputs=None, segmentation_maps=None, **kwargs) -> BatchFeature:
--> 538     return self.preprocess(images, task_inputs=task_inputs, segmentation_maps=segmentation_maps, **kwargs)

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/transformers/models/oneformer/image_processing_oneformer.py:741, in OneFormerImageProcessor.preprocess(self, images, task_inputs, segmentation_maps, instance_id_to_semantic_id, do_resize, size, resample, do_rescale, rescale_factor, do_normalize, image_mean, image_std, ignore_index, do_reduce_labels, return_tensors, data_format, input_data_format, **kwargs)
    736 if segmentation_maps is not None:
    737     segmentation_maps = [
    738         self._preprocess_mask(segmentation_map, do_resize, size, input_data_format=input_data_format)
    739         for segmentation_map in segmentation_maps
    740     ]
--> 741 encoded_inputs = self.encode_inputs(
    742     images,
    743     task_inputs,
    744     segmentation_maps,
    745     instance_id_to_semantic_id,
    746     ignore_index,
    747     do_reduce_labels,
    748     return_tensors,
    749     input_data_format=input_data_format,
    750 )
    751 return encoded_inputs

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/transformers/models/oneformer/image_processing_oneformer.py:1041, in OneFormerImageProcessor.encode_inputs(self, pixel_values_list, task_inputs, segmentation_maps, instance_id_to_semantic_id, ignore_index, reduce_labels, return_tensors, input_data_format)
   1039 task = task_inputs[i]
   1040 if task == "semantic":
-> 1041     classes, masks, texts = self.get_semantic_annotations(label, num_class_obj)
   1042 elif task == "instance":
   1043     classes, masks, texts = self.get_instance_annotations(label, num_class_obj)

File ~/miniconda3/envs/deep_l_04/lib/python3.9/site-packages/transformers/models/oneformer/image_processing_oneformer.py:841, in OneFormerImageProcessor.get_semantic_annotations(self, label, num_class_obj)
    838 annotation_classes = label["classes"]
    839 annotation_masks = label["masks"]
--> 841 texts = ["a semantic photo"] * self.num_text
    842 classes = []
    843 masks = []

TypeError: can't multiply sequence by non-int of type 'NoneType'
```
I instantiated the preprocessor and model as:
```python
model_id = "shi-labs/oneformer_ade20k_swin_tiny"
preprocessor = AutoImageProcessor.from_pretrained(
    model_id,
    task_inputs=["semantic"],
    ignore_index=0,
    reduce_labels=False,
    do_resize=False,
    do_rescale=False,
    do_normalize=False,
)
model = AutoModelForUniversalSegmentation.from_pretrained(
    model_id,
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,
)
model.config.contrastive_temperature = None
```
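For what it's worth, the traceback bottoms out at `texts = ["a semantic photo"] * self.num_text`, which suggests the processor's `num_text` attribute ended up as `None`. Below is a minimal, self-contained sketch of that failure mode and of the kind of fix that might apply (setting `num_text` to an integer on the processor before batching); the config fields `num_queries` / `text_encoder_n_ctx` mentioned in the comment and the value `134` are assumptions for illustration, not confirmed here:

```python
# Minimal reproduction of the failing line in get_semantic_annotations,
# with num_text unset (None), as the traceback suggests:
num_text = None
try:
    texts = ["a semantic photo"] * num_text
except TypeError as exc:
    print(exc)  # can't multiply sequence by non-int of type 'NoneType'

# With an integer num_text, the same line works. One possible workaround
# would be to set it on the processor before building batches, e.g.
#   preprocessor.num_text = model.config.num_queries - model.config.text_encoder_n_ctx
num_text = 134  # hypothetical value, for illustration only
texts = ["a semantic photo"] * num_text
print(len(texts))  # 134
```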
Additionally, my dataset yields items of the form `(pixel_values, mask)`.