Unwrapping a SharedTensor to get access to one of its memory copies is a tedious process at the moment, involving a lot of unwrapping, temporary variables, and boilerplate. As pointed out in the PR for the NN Plugin, this leads to all sorts of inconvenient trouble.
I think there are three classes of methods that would be used most often:
1. Initialization on creation of a SharedTensor. Methods like `from_slice(&dims, &slice)` and `from_iter(&dims, &iter)` would be most useful.
2. Overwriting after creation: `.copy_from_slice()` and `.copy_from_iter()`.
3. Generic access with slices: `read_slice()`, `write_only_slice()`, `read_write_slice()`. These methods would be implemented as thin wrappers on top of `read(&native_device)` / `write_only(&native_device)` / `read_write(&native_device)`. This category subsumes the first two, but having those would still be very convenient.
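To make the proposal concrete, here is a minimal sketch of how these three classes of methods could look. This is not the real SharedTensor: the stand-in below is a hypothetical, simplified type backed by a single host-memory `Vec<f32>` (the real type manages copies across devices), so only the API surface is illustrated.

```rust
// Hypothetical, simplified stand-in for SharedTensor: a single
// host-memory copy backed by a Vec<f32>. Only the proposed
// convenience API is sketched here.
struct SharedTensor {
    dims: Vec<usize>,
    data: Vec<f32>,
}

impl SharedTensor {
    // 1) Initialization on creation.
    fn from_slice(dims: &[usize], slice: &[f32]) -> SharedTensor {
        assert_eq!(dims.iter().product::<usize>(), slice.len());
        SharedTensor { dims: dims.to_vec(), data: slice.to_vec() }
    }

    // 2) Overwriting after creation.
    fn copy_from_slice(&mut self, slice: &[f32]) {
        assert_eq!(self.data.len(), slice.len());
        self.data.copy_from_slice(slice);
    }

    // 3) Generic access: in the real library these would be thin
    // wrappers over read(&native_device) / read_write(&native_device).
    fn read_slice(&self) -> &[f32] {
        &self.data
    }

    fn read_write_slice(&mut self) -> &mut [f32] {
        &mut self.data
    }
}

fn main() {
    let mut t = SharedTensor::from_slice(&[2, 2], &[1.0, 2.0, 3.0, 4.0]);
    t.copy_from_slice(&[5.0, 6.0, 7.0, 8.0]);
    t.read_write_slice()[0] = 0.0;
    println!("{:?}", t.read_slice()); // [0.0, 6.0, 7.0, 8.0]
}
```

The point of the sketch is how little ceremony each call needs compared to unwrapping a memory copy by hand.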
It's not very clear how 1) and 2) should behave on slices and iterators of incorrect length: panic, or return a Result with an error? Maybe initialization from longer iterators should still be allowed for convenience...
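One possible answer to the panic-vs-Result question is sketched below: a free-standing, hypothetical `copy_from_iter` (the name and error type are illustrative, not part of the library) that returns a Result, rejects iterators that are too short, and silently ignores extra elements from longer ones.

```rust
// Hypothetical error type for length mismatches.
#[derive(Debug, PartialEq)]
enum TensorError {
    TooFewElements { expected: usize, got: usize },
}

// Fills `buf` from `iter`. Shorter iterators are an error; longer
// iterators are allowed, and their extra elements are ignored.
fn copy_from_iter<I>(buf: &mut [f32], iter: I) -> Result<(), TensorError>
where
    I: IntoIterator<Item = f32>,
{
    let mut it = iter.into_iter();
    for (i, slot) in buf.iter_mut().enumerate() {
        match it.next() {
            Some(v) => *slot = v,
            // Note: on error, the prefix of `buf` has already been written.
            None => {
                return Err(TensorError::TooFewElements {
                    expected: buf.len(),
                    got: i,
                })
            }
        }
    }
    Ok(())
}

fn main() {
    let mut buf = [0.0f32; 3];
    // Longer iterator: accepted, trailing element dropped.
    assert!(copy_from_iter(&mut buf, vec![1.0, 2.0, 3.0, 4.0]).is_ok());
    assert_eq!(buf, [1.0, 2.0, 3.0]);
    // Shorter iterator: rejected with an error value instead of a panic.
    assert!(copy_from_iter(&mut buf, vec![1.0]).is_err());
}
```

Returning a Result keeps the length check recoverable for callers, while the "longer is fine" rule covers the convenience case mentioned above; whether partial writes on error are acceptable is a separate design choice worth deciding explicitly.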