Describe the bug
An error occurs when exporting T5G2PModel to ONNX.
Steps/Code to reproduce bug
import torch
from transformers import PreTrainedTokenizerBase, AutoTokenizer
from nemo.collections.tts.g2p.models.t5 import T5G2PModel

# Restore a pretrained T5-based G2P checkpoint on CPU and attempt the ONNX export.
model_name = "T5G2P.nemo"
model = T5G2PModel.restore_from(model_name, map_location=torch.device("cpu"))
model.eval()
model.export("test.onnx")
[Error]
RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::full but it isn't a special case. Argument types: int[], bool, int, NoneType, Device, bool,
Candidates:
aten::full.names(int[] size, Scalar fill_value, *, str[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
aten::full(SymInt[] size, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
aten::full.names_out(int[] size, Scalar fill_value, *, str[]? names, Tensor(a!) out) -> Tensor(a!)
aten::full.out(SymInt[] size, Scalar fill_value, *, Tensor(a!) out) -> Tensor(a!)
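The argument types in the assert (int[], bool, int, NoneType, Device, bool) suggest that somewhere in the traced graph torch.full is being called with a Python bool as its fill value, and that this PyTorch version fails to match such a call against any aten::full overload during alias analysis. Below is a minimal sketch of the suspected pattern and an equivalent rewrite that avoids the bool fill value; this is an assumption about the root cause, not a confirmed fix, and make_mask/make_mask_safe are hypothetical helpers, not NeMo code.

import torch

def make_mask(batch_size: int, device: torch.device) -> torch.Tensor:
    # Suspected problem pattern: a bool fill value turns the traced call into
    # aten::full(int[], bool, int, NoneType, Device, bool), which the alias
    # analysis above cannot match against any of the listed candidates.
    return torch.full((batch_size,), False, dtype=torch.bool, device=device)

def make_mask_safe(batch_size: int, device: torch.device) -> torch.Tensor:
    # Equivalent result without passing a Python bool to torch.full.
    return torch.zeros((batch_size,), dtype=torch.bool, device=device)

If the failure does come from such a call, rewriting it in the torch.zeros form, or upgrading to a PyTorch release where aten::full accepts a bool fill value under tracing/scripting, may avoid the assert.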
Environment details
If an NVIDIA Docker image is used, you don't need to specify these. Otherwise, please provide:
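A quick way to collect the versions usually requested here (a minimal sketch assuming standard installs; nemo.__version__ and torch.version.cuda are assumed to be available):

import platform

import torch
import nemo

# Print the core environment information the issue template asks for.
print("Python:", platform.python_version())
print("PyTorch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("NeMo:", nemo.__version__)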