I'm looking at `dataset.lua`, where `decoderInputs` is set up:
```lua
decoderInputs = torch.IntTensor(maxTargetOutputSeqLen - 1, size):fill(0)
for samplenb = 1, #targetSeqs do
  trimmedEosToken = targetSeqs[samplenb]:sub(1, -2)
  for word = 1, trimmedEosToken:size(1) do
    if size == 1 then
      decoderInputs[word] = trimmedEosToken[word]
    else
      decoderInputs[word][samplenb] = trimmedEosToken[word]
    end
  end
end
```
This tensor is then used in `model.decoder:forward(decoderInputs)` in `train.lua`. My question is: how (and why) is this tensor different from `encoderInputs`?
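To make sure I'm reading the loop correctly, here is a toy illustration of what I think it produces for a single sequence (the `<go>`/`<eos>` ids below are made up for the example, not taken from the repo):

```lua
-- Toy sketch of my understanding, not code from the repo.
require 'torch'

-- Pretend <go> = 1 and <eos> = 2, and the target sequence is {<go>, 5, 8, 3, <eos>}.
local targetSeq = torch.IntTensor({1, 5, 8, 3, 2})

-- Same trimming as in dataset.lua: drop the trailing <eos>.
local trimmed = targetSeq:sub(1, -2)                      -- {1, 5, 8, 3}

-- With size == 1 the loop simply copies the trimmed sequence, so the decoder
-- is fed the ground-truth target tokens at every time step during training.
local decoderInputs = torch.IntTensor(targetSeq:size(1) - 1):fill(0)
for word = 1, trimmed:size(1) do
  decoderInputs[word] = trimmed[word]
end

print(decoderInputs)  -- contains 1, 5, 8, 3
```

If I read it right, during training the decoder is always fed the ground-truth target tokens, never its own previous predictions.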
I ask this because when we evaluate the model using `eval.lua`, we simply pass the decoder's own output at each time step into `model.decoder:forward` (see `Seq2Seq:eval(input)`, where we have the line `local prediction = self.decoder:forward(torch.Tensor(output))[#output]`).
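For contrast, this is roughly how I picture the eval-time loop (a simplified paraphrase of `Seq2Seq:eval`, not the exact code; `goToken`, `eosToken`, and `maxLen` are placeholder names I made up):

```lua
-- Simplified sketch of greedy decoding at eval time, as I understand it.
-- `decoder` is assumed to return a (#output x vocabSize) tensor of scores.
local function greedyDecode(decoder, goToken, eosToken, maxLen)
  local output = {goToken}
  for step = 1, maxLen do
    -- Same call as in Seq2Seq:eval: feed everything generated so far back in
    -- and keep only the scores for the last position.
    local prediction = decoder:forward(torch.Tensor(output))[#output]
    local _, nextToken = prediction:max(1)   -- greedy argmax over the vocabulary
    nextToken = nextToken[1]
    table.insert(output, nextToken)
    if nextToken == eosToken then break end
  end
  return output
end
```

So at eval time the decoder's input is built from its own previous outputs, whereas at training time it comes from `decoderInputs` as constructed above, and I'd like to understand the reasoning behind that difference.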
Help would be appreciated.