To see memory slot probabilities #16

joecheriross opened this issue Sep 26, 2016 · 6 comments

joecheriross commented Sep 26, 2016

I am trying to see the memory slot probabilities (the probabilities associated with the different sentences) for a particular query. Is there a way to visualize them? Please help.

Thanks,
Joe
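
For what it's worth, here is a minimal plotting sketch, assuming the per-sentence probabilities are already available as a NumPy array; mem_probs and sentences below are made-up example values, not something the repo exposes:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical attention over the 3 sentences of one story for one query.
    mem_probs = np.array([0.72, 0.19, 0.09])
    sentences = ["Mary went to the kitchen.",
                 "John grabbed the apple.",
                 "Mary took the milk there."]

    plt.barh(range(len(sentences)), mem_probs)
    plt.yticks(range(len(sentences)), sentences)
    plt.xlabel("memory slot probability")
    plt.tight_layout()
    plt.show()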

joecheriross changed the title from "To see memory probabilities" to "To see memory slot probabilities" on Sep 26, 2016

domluna commented Sep 26, 2016

I think it would be cool; I've thought about it in the past, but I can't guarantee I'll get around to it. If you come up with something, feel free to send a pull request!


joecheriross commented Sep 27, 2016

Thanks Dominique.

A few questions to make sure my understanding is correct.
During testing, stories and questions are provided.

  • In memn2n.py,
    probs = tf.nn.softmax(dotted)
    gives the probability values corresponding to each memory slot (sentence in the story) for the given query at each hop, right? So if there are 3 sentences in the story, probs will hold 3 probability values (see the small sketch after this list).
  • Finding a way to print probs along with the stories should help show how the sentences are picked to answer a question, right?
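
Here is the small sketch referred to above; it only illustrates the shapes with made-up numbers, using NumPy in place of the TensorFlow op:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    # In the model, `dotted` has shape (batch_size, memory_size): one score per
    # memory slot (sentence). Softmax turns each row into probabilities summing to 1.
    dotted = np.array([[2.0, 0.5, -1.0]])   # batch of 1 query, story with 3 sentences
    probs = softmax(dotted)
    print(probs)                 # e.g. [[0.79 0.18 0.04]]
    print(probs.sum(axis=-1))    # [1.]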


domluna commented Sep 27, 2016

That's correct. Maybe
https://www.tensorflow.org/versions/r0.10/api_docs/python/control_flow_ops.html#Print
could help in this regard.
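
For example, something along these lines (only a sketch against the TF 0.x API linked above; the message text is arbitrary):

    # Wrap `probs` so its values are logged to stderr whenever the graph evaluates it.
    probs = tf.nn.softmax(dotted)
    probs = tf.Print(probs, [probs], message="memory probs: ", summarize=50)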



joecheriross commented Oct 21, 2016

I tried out a modification (in memn2n.py) to get the memory probabilities (attentions). The last probs obtained is assigned to a variable self.mem_probs after the hops loop in _inference(), and this is returned by the predict_proba() function.

But each time I run it, I get different values for the memory probabilities for the same test instances. I was trying to compute accuracy based on the attentions, but this gives a different accuracy each run. Can you please see if I am doing something wrong? Please let me know if anything is unclear.

Modification in memn2n.py:

            probs = None  # added by joe
            for _ in range(self._hops):
                m_emb = tf.nn.embedding_lookup(self.A, stories)
                m = tf.reduce_sum(m_emb * self._encoding, 2) + self.TA
                # hack to get around no reduce_dot
                u_temp = tf.transpose(tf.expand_dims(u[-1], -1), [0, 2, 1])
                dotted = tf.reduce_sum(m * u_temp, 2)
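                # dotted: similarity of the query with each memory slot, shape (batch_size, memory_size)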

                # Calculate probabilities
                probs = tf.nn.softmax(dotted)
                #pr= tf.Print(probs, [probs], message="probs: ", summarize=1000000000)
                #self.image_summary_t= tf.image_summary("probs", probs)

                probs_temp = tf.transpose(tf.expand_dims(probs, -1), [0, 2, 1])
                c_temp = tf.transpose(m, [0, 2, 1])
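                # o_k: attention-weighted sum of the memory vectors, shape (batch_size, embedding_size)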
                o_k = tf.reduce_sum(c_temp * probs_temp, 2)


                u_k = tf.matmul(u[-1], self.H) + o_k
                # nonlinearity
                if self._nonlin:
                    u_k = nonlin(u_k)

                u.append(u_k)

            self.mem_probs = probs  # added by joe

Modification in predict_proba:

return self._sess.run([self.predict_proba_op, self.mem_probs], feed_dict=feed_dict)
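
As a usage sketch (the names model, testS, and testQ are placeholders, assuming a trained MemN2N instance and encoded test stories/queries):

    # With this change, predict_proba returns both the answer distribution
    # and the last-hop memory probabilities.
    answer_probs, mem_probs = model.predict_proba(testS, testQ)
    print(mem_probs.shape)   # (batch_size, memory_size): attention over sentences at the final hop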


domluna commented Oct 22, 2016

Does #pr= tf.Print(probs, [probs], message="probs: ", summarize=1000000000) give the same probs as self.mem_probs?

joecheriross commented

Yes, I checked that as well to make sure I am not doing something wrong; it gives the same values.
