
Question: Does it make sense to support 0-dimensional tensors (scalars)? #3

Open
devinamatthews opened this issue Apr 9, 2018 · 6 comments

Comments

@devinamatthews
Collaborator

No description provided.

@devinamatthews
Collaborator Author

Representing scalars as 0-dimensional tensors and allowing their use in tensor operations is natural from a mathematical perspective, but it does raise some design and implementation concerns. Similar concerns arise, however, for tensors whose extents are all 1.
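For illustration, a quick NumPy sketch of the two cases being compared (NumPy is used here only as an example framework, not as part of any proposal):

```python
import numpy as np

# An order-0 tensor and a tensor whose modes all have extent 1 both hold a
# single element, but they have different orders, so both raise the same
# kind of edge-case questions for a tensor API.
s = np.array(3.14)              # order 0, shape ()
t = np.full((1, 1, 1), 3.14)    # order 3, shape (1, 1, 1)
print(s.ndim, s.shape)          # 0 ()
print(t.ndim, t.shape)          # 3 (1, 1, 1)
```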

@emstoudenmire
Collaborator

I think this would be useful, as long as it's not too hard to do, if only to prevent certain edge-case bugs from popping up again and again. Couldn't any function (such as 'contract') just start by checking whether the order is 0 or whether all modes have extent 1? (Hoping I used the agreed-upon terminology correctly here!) Special routines could then be executed for these edge cases, as in the sketch below.
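A minimal sketch of what that up-front check might look like (the name `contract`, the NumPy fallback, and the dispatch are assumptions for illustration only, not an agreed-upon API):

```python
import numpy as np

def contract(a, b, axes=2):
    # Hypothetical wrapper: handle edge cases before the general kernel runs.
    a, b = np.asarray(a), np.asarray(b)
    # Order-0 operands have no modes to contract: the result is a plain product.
    if a.ndim == 0 or b.ndim == 0:
        return a * b
    # Tensors whose modes all have extent 1 hit the same edge cases and could
    # likewise be routed to a specialized (trivial) routine here.
    if all(e == 1 for e in a.shape) and all(e == 1 for e in b.shape):
        return np.tensordot(a, b, axes=axes)  # placeholder for a special-case path
    # General case: defer to the usual contraction kernel.
    return np.tensordot(a, b, axes=axes)
```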

@DmitryLyakh
Collaborator

I would strongly advocate including order-0 tensors (scalars) for consistency. Otherwise our tensor algebra (or arithmetic) is not closed: a contraction of two tensors would sometimes yield a tensor and sometimes a scalar, which is a class unrelated to tensors if order-0 tensors are excluded.
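As a concrete illustration of the closure point (NumPy is just one example framework that already behaves this way):

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(3, 4)

# Contracting over every mode leaves an order-0 tensor (shape ()),
# so the result stays inside the tensor class rather than becoming
# an unrelated scalar type.
s = np.tensordot(a, b, axes=([0, 1], [0, 1]))
print(s.ndim, s.shape)   # 0 ()
```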

@JeanKossaifi

Hi! I develop TensorLy, a high-level API for tensor operations with backends for NumPy, PyTorch, MXNet, etc. This project seems interesting, especially given the need for clean, well-adopted APIs.

My 2 cents: order-0 tensors are crucial not only for consistency but also for practical implementations, for instance when checking for convergence. In TensorLy I have had (and still have) several issues with frameworks that either do not support them or support them in a way that makes order-0 tensors hard to compare with "normal" scalars. For instance, most frameworks allow taking the norm of a tensor along specific modes (axes), so the order of the result depends on which modes are selected; calling a method such as as_scalar can then break desirable properties of the tensor, such as attached gradients.
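A small sketch of the gradient point (PyTorch is used only as an example backend; the behavior described above is what it does today):

```python
import torch

x = torch.rand(3, 4, requires_grad=True)

# Reducing over every mode leaves an order-0 tensor that still carries
# autograd information, so it compares like a scalar and can be backpropagated.
n = torch.linalg.norm(x)
print(n.ndim, n.requires_grad)   # 0 True

# Extracting a plain Python float drops the attached gradient,
# which is the kind of property loss described above.
f = n.item()
print(type(f))                   # <class 'float'>
```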

@dgasmith

@JeanKossaifi From browsing your project, it is probably worth reading over this blog as well for backend swapping.

@JeanKossaifi

Thanks @dgasmith! Indeed, TensorLy is mentioned in the post, and I have been discussing the issue with the author. I look forward to seeing convergence toward a unified NumPy API.
