Why is the input padded at the front (pre-padded)?
For example, "31, Aug 1986".
Is there a specific reason for that? What difference does it make to pad before the input sequence instead of after it? Please explain.
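(For concreteness, this is roughly what the two options look like with Keras' `pad_sequences`; the token ids below are made up for illustration.)

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# hypothetical token ids for "31, Aug 1986" (shorter than maxlen=10)
seq = [[5, 12, 3, 7, 9, 2]]

pre  = pad_sequences(seq, maxlen=10, padding='pre')   # [[ 0  0  0  0  5 12  3  7  9  2]]
post = pad_sequences(seq, maxlen=10, padding='post')  # [[ 5 12  3  7  9  2  0  0  0  0]]
```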
My guess would be that you want as much data as possible when you're sending the encoder state into the decoder. If you padded at the end of the data, the LSTM might forget the stuff at the beginning.
^ Correct, @lofar788. I want all the most important data closer to the final output vector, so this ought to make it easier for the network. Technically speaking, it ought to be able to (learn to) ignore the padding by itself.
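A minimal sketch of that effect (layer sizes and inputs are made up): with post-padding, a plain LSTM keeps updating its state over the pad timesteps, so the final state handed to the decoder is several steps removed from the last real token; with pre-padding, the real tokens are the last thing the encoder sees.

```python
import numpy as np
import tensorflow as tf

lstm = tf.keras.layers.LSTM(8, return_state=True)

real = np.random.rand(1, 6, 4).astype("float32")   # 6 real timesteps
pads = np.zeros((1, 4, 4), dtype="float32")        # 4 pad timesteps

# post-padding: pads are processed *after* the real tokens,
# so the final state drifts away from the last real token
_, h_post, _ = lstm(np.concatenate([real, pads], axis=1))

# pre-padding: the real tokens come last,
# so the final state summarises them most directly
_, h_pre, _ = lstm(np.concatenate([pads, real], axis=1))
```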
@sachinruk @lofar788 We can also pass the true sequence lengths via the `sequence_length` argument so that `dynamic_rnn` only steps through each sequence up to its real length. That way we don't even need to worry about the network learning from the padding.
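A rough TensorFlow 1.x style sketch of that approach (shapes and sizes are illustrative; `dynamic_rnn` is deprecated in TF 2):

```python
import tensorflow as tf  # TF 1.x API

batch_size, max_len, embed_dim, hidden = 32, 10, 16, 64

inputs  = tf.placeholder(tf.float32, [batch_size, max_len, embed_dim])
seq_len = tf.placeholder(tf.int32, [batch_size])   # true (unpadded) lengths

cell = tf.nn.rnn_cell.LSTMCell(hidden)

# With sequence_length set, dynamic_rnn stops stepping each example at its
# true length: outputs past that point are zeroed and `state` is the state
# at the last real timestep, so post-padding never leaks into the encoder state.
outputs, state = tf.nn.dynamic_rnn(cell, inputs, sequence_length=seq_len,
                                   dtype=tf.float32)
```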