Paper "Analyzing Inverse Problems with Invertible Neural Netowrks": Error in implementation #4
Comments
Hi, thank you for raising the issue! Until then, if in doubt, stick with the description in the paper.
Ok, thanks for your answer, enjoy your holidays! :)
I have now been able to write a TensorFlow implementation of the invertible network. It reproduces the results of the toy-8 example. While implementing it, I discovered a few things others might benefit from when working with this paper:
But apart from these suggestions: great work, this will be valuable for lots of problems in science! P.S.: The problem I described in my first post is not present in the toy-8 demo notebook, but in the
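For readers attempting a similar reimplementation, a minimal sketch of a RealNVP-style affine coupling block in TensorFlow 2 is shown below. This is only an illustration of the general technique; the class name, subnet architecture, and clamping value are assumptions and are not taken from this repository or from the commenter's code.

```python
import tensorflow as tf

class AffineCouplingBlock(tf.keras.layers.Layer):
    """Illustrative RealNVP-style affine coupling block (not the repository's code).

    The input is split into two halves (u1, u2); each half is transformed with
    scale/translation subnets conditioned on the other half, which keeps the
    block analytically invertible.
    """

    def __init__(self, dim, hidden=128, clamp=2.0):
        super().__init__()
        self.d1 = dim // 2
        self.d2 = dim - self.d1
        self.clamp = clamp  # soft clamp on the log-scales for numerical stability

        def subnet(out_dim):
            return tf.keras.Sequential([
                tf.keras.layers.Dense(hidden, activation="relu"),
                tf.keras.layers.Dense(out_dim),
            ])

        # scale (s) and translation (t) subnets for the two halves
        self.s1, self.t1 = subnet(self.d2), subnet(self.d2)
        self.s2, self.t2 = subnet(self.d1), subnet(self.d1)

    def _clamped(self, s):
        return self.clamp * tf.tanh(s / self.clamp)

    def call(self, x):
        u1, u2 = x[:, :self.d1], x[:, self.d1:]
        v2 = u2 * tf.exp(self._clamped(self.s1(u1))) + self.t1(u1)
        v1 = u1 * tf.exp(self._clamped(self.s2(v2))) + self.t2(v2)
        return tf.concat([v1, v2], axis=-1)

    def inverse(self, y):
        v1, v2 = y[:, :self.d1], y[:, self.d1:]
        u1 = (v1 - self.t2(v2)) * tf.exp(-self._clamped(self.s2(v2)))
        u2 = (v2 - self.t1(u1)) * tf.exp(-self._clamped(self.s1(u1)))
        return tf.concat([u1, u2], axis=-1)
```

Stacking a few of these blocks (with a fixed permutation of the dimensions in between) gives an invertible map whose forward and inverse passes can be checked against each other on random inputs, which is a useful sanity test before training on the toy-8 data.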
Hi, I have the same confusion. Have you figured it out? Please let me know. I could really use your help. Thanks.
Hi,
I'm trying to implement the invertible network described in the paper in TensorFlow 2.
I am having some difficulties matching the descriptions of the loss functions with the code.
In particular, I think there might be an inconsistency in this file:
If I've understood correctly, the function loss_reconstruction (which is barely described in the paper) seems to use the following layout for the values that are fed to the sampling process:
However, the train_epoch function seems to use a different layout:
Is this a mistake, or does the output of the forward process really have a different format than the input of the inverse process?
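To make the layout question concrete, here is a small sketch of how such a mismatch can arise when the network output is a concatenation of latent, padding, and observation parts. The specific orderings, dimensions, and function names below are purely illustrative assumptions and are not the layouts actually used in this repository.

```python
import tensorflow as tf

# Hypothetical dimensions: x is padded up to dim_total, and the network output
# is interpreted as some arrangement of [z, padding, y] along the last axis.
dim_y, dim_z, dim_total = 3, 2, 8
dim_pad = dim_total - dim_y - dim_z

def pack_forward_output(y, z, pad):
    """Layout assumed here for the forward/training pass (illustration): [z | pad | y]."""
    return tf.concat([z, pad, y], axis=-1)

def pack_sampling_input(y, z, pad):
    """Layout assumed here for the reconstruction/sampling pass (illustration): [y | pad | z]."""
    return tf.concat([y, pad, z], axis=-1)

y = tf.random.normal([4, dim_y])
z = tf.random.normal([4, dim_z])
pad = tf.zeros([4, dim_pad])

# Both tensors have the same shape, so nothing fails loudly; but if the two
# layouts disagree, resampling z and running the inverse pass silently puts
# y-values into the z-slots (and vice versa) and reconstructs the wrong x.
assert pack_forward_output(y, z, pad).shape == pack_sampling_input(y, z, pad).shape
```

If the forward output and the inverse input really do use different orderings, the cleanest fix is to define a single packing/unpacking convention and use it in both the training step and the reconstruction loss.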