Hi Marvin,

Interesting work! I would like to ask some questions.
It appears that there is a difference between the mean field iterations presented in your paper (Section 3.1) and those in the original paper (also Section 3.1). More specifically, in the original paper the output of each iteration is softmax(-unary - message_passing), while in your paper it is softmax(unary + message_passing) (note the sign inversion).
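For what it's worth, the two forms coincide numerically if the unary and message terms are simply defined with opposite sign (energies vs. negative energies). A minimal NumPy sketch (the softmax helper and the random tensors here are illustrative, not taken from the repository):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
unary = rng.normal(size=(3, 5))    # hypothetical per-pixel class energies
message = rng.normal(size=(3, 5))  # hypothetical message-passing term

q_original = softmax(-unary - message)      # convention of the original paper
q_flipped = softmax((-unary) + (-message))  # "unary + message" form with negated inputs

assert np.allclose(q_original, q_flipped)
```

So the discrepancy may just be a sign convention on the potentials rather than a different algorithm, but it would be good to have that confirmed.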
In your implementation, I could not understand this part:
if not i == num_iter - 1 or self.final_softmax:
    if self.conf['softmax']:
        prediction = exp_and_normalize(prediction, dim=1)
According to the algorithm, prediction should be normalized at every iteration. In the code, however, normalization is skipped on the last iteration whenever self.final_softmax is False, and skipped on every iteration when self.conf['softmax'] is False.
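To make the skipping behaviour concrete, here is a small sketch that replicates the guard above and reports which iterations would actually be normalized (the helper names are mine, and exp_and_normalize is assumed to be a numerically stable per-pixel softmax over the class dimension):

```python
import numpy as np

def exp_and_normalize(x, dim=1):
    # Assumed behaviour: numerically stable softmax over axis `dim`.
    z = x - x.max(axis=dim, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=dim, keepdims=True)

def which_iterations_normalize(num_iter, final_softmax, softmax_flag):
    """Replicate the guard in the quoted snippet: return the iteration
    indices at which exp_and_normalize would actually be applied."""
    out = []
    for i in range(num_iter):
        if not i == num_iter - 1 or final_softmax:
            if softmax_flag:
                out.append(i)
    return out

print(which_iterations_normalize(5, final_softmax=False, softmax_flag=True))
# → [0, 1, 2, 3]      (last iteration is skipped)
print(which_iterations_normalize(5, final_softmax=True, softmax_flag=True))
# → [0, 1, 2, 3, 4]   (every iteration is normalized)
print(which_iterations_normalize(5, final_softmax=True, softmax_flag=False))
# → []                (normalization never runs)
```

So with the default flags the final prediction can be unnormalized logits rather than a distribution, which is what prompted the question.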
Could you please explain these two issues?
Thank you in advance!
Best regards.