Does every layer/loss that calls torch._C need its own implementation in matchbox? That ends up cloning torch's API. Would it be possible to create a default wrapper for all such layers that simply applies the torch._C routine to MaskedBatch.data and returns a new MaskedBatch?
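Roughly, I imagine something like the sketch below — just an illustration, not existing matchbox API. The `_wrap_elementwise` helper is a made-up name, I'm assuming a `MaskedBatch(data, mask, dims)` constructor, and this only covers elementwise ops whose output keeps the input's shape and mask:

```python
import torch
from matchbox import MaskedBatch  # assumes MaskedBatch is exported at package level

def _wrap_elementwise(fn):
    """Hypothetical default wrapper: apply a stock torch routine to
    MaskedBatch.data and rebuild a MaskedBatch with the same mask/dims."""
    def wrapped(batch, *args, **kwargs):
        data = fn(batch.data, *args, **kwargs)
        # re-zero padded positions so they stay inert downstream
        # (assumes the mask broadcasts over data)
        data = data * batch.mask
        return MaskedBatch(data, batch.mask, batch.dims)
    return wrapped

# e.g. a masked tanh built from the stock torch op
masked_tanh = _wrap_elementwise(torch.tanh)
```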
Yes, but we'll have to either list them somewhere or modify the PyTorch source. Eventually I definitely want to overload at the ATen level, but there's enough potential churn there right now with C10, and enough relevant things still implemented in Python, that the current approach is likely to continue to be cleaner until PyTorch 1.0.
Tried to forward a MaskedBatch through a GRUCell on GPU. Got the following error:

Does this mean that matchbox requires a separate GRU implementation for the GPU? Is there some workaround?
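For context, roughly the kind of call that produces this — the shapes are illustrative, and both the `MaskedBatch(data, mask, dims)` construction and the idea that importing matchbox patches `nn.GRUCell` to accept it are assumptions on my part:

```python
import torch
import torch.nn as nn
import matchbox  # assumed to patch nn modules to handle MaskedBatch on import
from matchbox import MaskedBatch

# illustrative sizes: batch of 3, hidden size 8
data = torch.zeros(3, 8).cuda()
mask = torch.ones(3, 1).cuda()               # assumed mask layout for a static dim
h = MaskedBatch(data, mask, dims=(False,))   # constructor signature assumed

cell = nn.GRUCell(8, 8).cuda()
out = cell(h, h)  # this is the step that raises the error on GPU
```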