
On the difference compared to the original paper #13

Open
netw0rkf10w opened this issue Nov 12, 2018 · 0 comments
netw0rkf10w commented Nov 12, 2018

Hi Marvin,

Interesting work!

I would like to ask some questions.

  1. It appears that there is a difference between the mean field iterations presented in your paper (section 3.1) and those in the original paper (also section 3.1). Specifically, in the original paper the output of each iteration is softmax(-unary - message_passing), while in your paper it is softmax(unary + message_passing) (note the sign inversion; I write both updates out below).

  2. In your implementation, I could not understand this part:

        if not i == num_iter - 1 or self.final_softmax:
            if self.conf['softmax']:
                prediction = exp_and_normalize(prediction, dim=1)

According to the algorithm, prediction should be normalized at every iteration. In the code, however, the normalization is skipped entirely when self.conf['softmax'] is False, and it is also skipped on the last iteration unless self.final_softmax is True (see the sketch below).
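To make both points concrete, here is how I currently read things. For point 1, writing U for the unary term and M for the output of message passing (my notation, not the papers'), the original paper computes Q <- softmax(-U - M) at each iteration, whereas your paper computes Q <- softmax(U + M). For point 2, below is a minimal standalone sketch of how I understand the loop; everything outside the three quoted lines is my own placeholder, not your actual code:

    import torch.nn.functional as F

    def exp_and_normalize(x, dim=1):
        # my stand-in for the repo's exp_and_normalize helper
        return F.softmax(x, dim=dim)

    def mean_field(unary, message_passing, num_iter, final_softmax, softmax_cfg):
        # unary: unary potentials, message_passing: a callable (both placeholders)
        prediction = unary
        for i in range(num_iter):
            message = message_passing(prediction)   # pairwise / message-passing step
            prediction = unary + message            # combine unary and message terms

            # Normalization runs only when BOTH hold:
            #   (not the last iteration) or final_softmax is True,
            #   and softmax_cfg is True.
            if not i == num_iter - 1 or final_softmax:
                if softmax_cfg:
                    prediction = exp_and_normalize(prediction, dim=1)
        return prediction

If this paraphrase is wrong, please correct me; my question is why the normalization is made optional at all.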

Could you please explain these two issues?

Thank you in advance!
Best regards.
