I think this is a bug: caption.py line 140 #196
Comments
I have the same bug in caption.py. Did you fix it?
I fixed it. You should consider the situation where your model is too weak to ever generate the <end> token.
Please add the code.
before … in the function …
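A minimal sketch of that kind of fallback, placed where the beam search in caption.py already gives up after 50 steps. The variable names (seqs, top_k_scores, complete_seqs, complete_seqs_scores, step) are assumed to match the surrounding beam-search code:

    # Inside the beam-search loop, at the existing 50-step cutoff.
    if step > 50:
        if len(complete_seqs) == 0:
            # No beam ever produced <end>, so treat the surviving incomplete
            # beams as finished; otherwise complete_seqs stays empty and the
            # final best-score lookup fails.
            complete_seqs.extend(seqs.tolist())
            complete_seqs_scores.extend(top_k_scores)
        break

With something like this, the final best-score selection always has at least one candidate, even when the model never emits <end>.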
Why don't you keep the matching attention maps as well? Is there any specific reason why you are returning 0 instead of the matching attention map?
You are right. Actually, the code in my project doesn't need to care about seqs_alpha, so I just used '0' to replace its return. Your code is more complete and better.
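For reference, a sketch of what keeping the attention maps in step with the sequences could look like, again assuming the variable names from caption.py (seqs_alpha updated in lockstep with seqs, complete_seqs_alpha collected next to complete_seqs):

    # When some beams finish, store their attention maps alongside their
    # sequences and scores.
    if len(complete_inds) > 0:
        complete_seqs.extend(seqs[complete_inds].tolist())
        complete_seqs_alpha.extend(seqs_alpha[complete_inds].tolist())
        complete_seqs_scores.extend(top_k_scores[complete_inds])

    # After the loop, return the attention map that matches the chosen
    # sequence instead of a constant 0.
    i = complete_seqs_scores.index(max(complete_seqs_scores))
    seq = complete_seqs[i]
    alphas = complete_seqs_alpha[i]

The 0 placeholder is fine if the caller never uses the attention maps, but returning complete_seqs_alpha[i] keeps the two outputs consistent.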
incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds) if next_word != word_map['<end>']]
incomplete_inds is always [0, 1, 2, 3, 4], and then

complete_seqs = list(set(range(len(next_word_inds))) - set(incomplete_inds))

complete_inds is empty, so complete_seqs stays empty and my model output ends in an error.
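A short, self-contained illustration of that failure path. The word indices and the <end> index below are made up, and the final selection line is assumed to look like the one in caption.py:

    # If none of the k beams emits <end>, complete_inds is empty on every step,
    # complete_seqs_scores stays empty, and picking the best sequence raises
    # a ValueError.
    end_ind = 9488                          # hypothetical index of <end> in word_map
    next_word_inds = [12, 305, 47, 88, 9]   # none of the 5 beams produced <end>

    incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds)
                       if next_word != end_ind]
    complete_inds = list(set(range(len(next_word_inds))) - set(incomplete_inds))

    print(incomplete_inds)   # [0, 1, 2, 3, 4]
    print(complete_inds)     # []

    complete_seqs_scores = []                # never extended
    try:
        i = complete_seqs_scores.index(max(complete_seqs_scores))
    except ValueError as err:
        print("error:", err)                 # max() arg is an empty sequence

This matches the fallback suggested earlier in the thread: either make sure complete_seqs ends up non-empty or fall back to the incomplete beams before selecting the best sequence.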