Add convenient access to underlying memory of a SharedTensor #33

Open
MichaelHirn opened this issue Dec 30, 2015 · 1 comment
@MichaelHirn (Member)

Unwrapping a SharedTensor to get access to one of its memory copies is a tedious process at the moment, involving a lot of unwrapping, temporary variables, and boilerplate. As pointed out in the PR for the NN plugin, this leads to all sorts of inconvenient trouble.
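
For context, here is roughly what reading a tensor's contents back on the native backend looks like today. This is a sketch based on the current `SharedTensor` API (`add_device`, `sync`, `get`, `MemoryType`, `FlatBox`); import paths and exact signatures are from memory and may differ slightly:

```rust
// Sketch only: assumes the collenchyma prelude re-exports these types.
use collenchyma::prelude::*;

fn read_back(x: &mut SharedTensor<f32>, native: &Backend<Native>) -> Vec<f32> {
    // A native copy must exist before syncing; add_device errors if the
    // copy is already there, so the result is deliberately ignored.
    let _ = x.add_device(native.device());
    x.sync(native.device()).unwrap();

    // Unwrap layer by layer: SharedTensor -> MemoryType -> FlatBox -> &[f32].
    let mem = x.get(native.device()).unwrap();
    if let &MemoryType::Native(ref flat) = mem {
        flat.as_slice::<f32>().to_vec()
    } else {
        unreachable!("just synced to the native device")
    }
}
```

Every call site has to repeat this `add_device`/`sync`/`get`/match dance, which is exactly the boilerplate that convenience accessors should hide.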

@alexandermorozov (Contributor)

I think there are three classes of methods that would be used most often:

  1. Initialization at creation of a `SharedTensor`: methods like `from_slice(&dims, &slice)` and `from_iter(&dims, &iter)` would be most useful.

  2. Overwriting after creation: `.copy_from_slice()` and `.copy_from_iter()`.

  3. Generic access through slices: `read_slice()`, `write_only_slice()`, `read_write_slice()`. These would be implemented as thin wrappers on top of `read(&native_device)` / `write_only(&native_device)` / `read_write(&native_device)`. This category subsumes the first two, but having them as dedicated methods would still be very convenient.

It's not entirely clear how 1) and 2) should behave on slices and iterators of incorrect length: should they panic, or return a `Result` with an error? Maybe initialization from longer iterators should still be allowed for convenience.
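
To make this concrete, here is a hypothetical sketch of how the wrappers from 3) might look as methods inside the crate. Everything here is illustrative rather than an agreed design: the `native_device()` helper, the `Error::InvalidLength` variant, and the choice of returning a `Result` on length mismatch instead of panicking are all assumptions; `read` and `write_only` are the accessors mentioned above:

```rust
// Hypothetical sketch; would live inside the crate next to SharedTensor.
// `native_device()` is an assumed helper returning the native DeviceType.
impl<T: Copy> SharedTensor<T> {
    /// Class 3: read-only access to the data as a native slice.
    pub fn read_slice(&self) -> Result<&[T], Error> {
        let device = native_device();
        let mem = self.read(&device)?; // syncs and borrows the native copy
        Ok(mem.as_native().expect("native memory").as_slice::<T>())
    }

    /// Class 2: overwrite an existing tensor from a slice.
    pub fn copy_from_slice(&mut self, data: &[T]) -> Result<(), Error> {
        // Returning an error on length mismatch (rather than panicking)
        // is one possible answer to the open question above.
        if data.len() != self.desc().size() {
            return Err(Error::InvalidLength);
        }
        let device = native_device();
        let mem = self.write_only(&device)?; // no sync: contents are overwritten
        mem.as_mut_native().expect("native memory")
            .as_mut_slice::<T>()
            .copy_from_slice(data);
        Ok(())
    }

    /// Class 1: create and initialize in one step, composed from the above.
    pub fn from_slice(device: &DeviceType, dims: &[usize], data: &[T]) -> Result<Self, Error> {
        let mut tensor = SharedTensor::new(device, &dims.to_vec())?;
        tensor.copy_from_slice(data)?;
        Ok(tensor)
    }
}
```

With something like this in place, the unwrapping chain from the first comment collapses to `x.read_slice()?.to_vec()`.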
