J Jiang 42 minutes ago
Because our tensor rank is instantiated via a template constant argument, there's a physical limit to that.
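(For context, a minimal sketch of what a template-constant rank looks like; the names below are illustrative, not nvFuser's actual types.)

```cpp
// Illustrative only: when rank is a template constant, every supported rank
// must be instantiated ahead of time, so the maximum rank is fixed when the
// binary is built.
#include <cstdint>

template <int RANK>
struct TensorArg {
  void* data;
  int64_t size[RANK];
  int64_t stride[RANK];
};

// Each rank the runtime may see needs its own instantiation; anything above
// the largest instantiated RANK cannot be expressed.
template struct TensorArg<1>;
template struct TensorArg<2>;
template struct TensorArg<3>;
// ... up to whatever maximum was chosen at build time.
```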
Naoya Maruyama 40 minutes ago
There should be no fundamental limit. As Jie mentioned, higher-dimensional tensors would require more space, but that should be it.
J Jiang 39 minutes ago
Yeah, we can support higher-rank tensors, but there would still be a limit set at compile time 🙂
Naoya Maruyama 31 minutes ago
Yeah, higher ranks would mean larger size and stride vectors, so the kernel launch parameter size would increase in our current implementation, which could mean launch failures. We could pass that data through constant memory instead.
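(A hedged sketch of the constant-memory idea; the kernel and symbol names are hypothetical. Sizes/strides are copied to `__constant__` memory before launch instead of traveling in the kernel's parameter buffer.)

```cpp
#include <cuda_runtime.h>
#include <cstdint>

constexpr int kMaxRank = 64;  // assumption: a generous upper bound

// Per-launch metadata lives in constant memory, not in kernel parameters.
__constant__ int64_t c_sizes[kMaxRank];
__constant__ int64_t c_strides[kMaxRank];

__global__ void fusionKernel(float* out, const float* in, int rank) {
  // Index math would read c_sizes/c_strides here. (Body elided.)
}

void launch(float* out, const float* in,
            const int64_t* sizes, const int64_t* strides, int rank) {
  cudaMemcpyToSymbol(c_sizes, sizes, rank * sizeof(int64_t));
  cudaMemcpyToSymbol(c_strides, strides, rank * sizeof(int64_t));
  fusionKernel<<<1, 128>>>(out, in, rank);
}
```

One caveat: `__constant__` symbols are per-module state, so overlapping launches on different streams would need their own copies or explicit synchronization.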
J Jiang 29 minutes ago
I vaguely remember that we break fusions into smaller kernels when looking at argument sizes... Don't remember if it's done at the TorchScript level or in the FusionSegmenter... But it should be a workable problem.
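(Roughly the kind of check being recalled, as a hypothetical helper: estimate the parameter payload and segment if it would exceed the launch limit. Classic CUDA caps kernel parameters at 4 KB; newer toolkits raise this.)

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr size_t kMaxKernelParamBytes = 4096;  // classic CUDA parameter limit

// Hypothetical cost model: a tensor argument contributes a pointer plus
// rank-many sizes and strides.
size_t tensorArgBytes(int rank) {
  return sizeof(void*) + 2 * static_cast<size_t>(rank) * sizeof(int64_t);
}

bool needsSegmentation(const std::vector<int>& ranks) {
  size_t total = 0;
  for (int r : ranks) total += tensorArgBytes(r);
  return total > kMaxKernelParamBytes;
}
```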
Christian Sarofeen 25 minutes ago
All of this is workable/reworkable. Will leave it up to @Mike Ruberry to let us know how important/urgent he thinks this is.
Christian Sarofeen 24 minutes ago
Adding one or two more dimensions seems pretty easy; getting to the full size seems a bit trickier.
Christian Sarofeen 23 minutes ago
If we do decide to tackle this, a good target would be to stop passing the tensor parameters directly and instead decompose them into only what's actually used, since many stride/size parameters probably aren't.
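(An illustrative before/after with hypothetical kernels: rather than shipping the whole size/stride arrays, pass only the entries the generated index math actually reads, e.g. when contiguous or broadcast dimensions fold away.)

```cpp
#include <cstdint>

struct TensorArg8 {
  float* data;
  int64_t size[8];
  int64_t stride[8];
};

// Before: the full struct travels in the parameter buffer, even though the
// generated index math may read only a few of its entries.
__global__ void kernelBefore(TensorArg8 t) { /* index math elided */ }

// After: only the sizes/strides the kernel body actually uses are passed,
// shrinking the parameter payload regardless of the tensor's nominal rank.
__global__ void kernelAfter(float* data, int64_t size0,
                            int64_t stride0, int64_t stride2) {
  /* index math elided */
}
```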
Christian Sarofeen 22 minutes ago
The initial design was primarily around convenience.
🚀 The feature, motivation and pitch
PyTorch supports tensors with up to 25 dimensions; ideally the fuser's compile-time rank limit would match that.