The details of model #10

Open
StonERMax opened this issue Mar 1, 2021 · 6 comments
@StonERMax

The training input is the clean image and the noise. Training runs along the forward direction, "Sdn->A4->Gain->A4", as in Figure 3 of the paper, while all layers use the inverse calculation (the train_multithread function in the code).
The sampling input is the clean image together with Gaussian (base) samples. Sampling runs along the inverse direction (the reversed model), while all layers use the forward calculation (the sample_multithread function in the code).
I wonder if my understanding above is correct.
Why does the model operate in the forward direction while using the inverse calculation?
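
To make my mental model concrete, here is a minimal NumPy sketch of how I picture the two passes in a generic normalizing flow (the class and function names are mine for illustration, not taken from the repo):

```python
import numpy as np

class AffineLayer:
    """One toy flow layer: forward maps base z -> data x; inverse maps x -> z."""
    def __init__(self, scale, shift):
        self.scale, self.shift = scale, shift

    def forward(self, z):  # used when sampling
        return z * self.scale + self.shift

    def inverse_and_log_det_jacobian(self, x):  # used when training
        z = (x - self.shift) / self.scale
        log_det = -x.size * np.log(np.abs(self.scale))
        return z, log_det

layers = [AffineLayer(2.0, 0.5), AffineLayer(0.3, -1.0)]

def log_prob(x):
    """Training objective: run the layers backwards (x -> z) and add log-dets."""
    total_log_det = 0.0
    for layer in reversed(layers):
        x, ld = layer.inverse_and_log_det_jacobian(x)
        total_log_det += ld
    base_log_prob = -0.5 * np.sum(x**2 + np.log(2.0 * np.pi))  # standard normal base
    return base_log_prob + total_log_det

def sample(n):
    """Sampling: draw from the base and run the layers forwards (z -> x)."""
    x = np.random.randn(n)
    for layer in layers:
        x = layer.forward(x)
    return x

print(log_prob(np.array([0.1, -0.2])))
print(sample(3))
```

In this picture, training composes the layers' inverse functions, which is what confuses me about the "forward direction" wording.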

@AbdoKamel
Collaborator

Hi,

This is just a convention. You may simply flip the figure and call the training direction "inverse" and the sampling direction "forward", and nothing would change; that is, the internal operations in the layers would stay the same. Hope this helps!

@StonERMax
Author

The training process maps the noise distribution (the latent space, like z in Glow) to the noisy image (the data space, like x in Glow). The goal of this work is ultimately to obtain the noise distribution (which corresponds to inference in Glow). Therefore, the inverse direction is used for training here, which differs from Glow, where training maps the data space to an approximation of the latent space.
I think my understanding may be the same as yours; if anything is wrong, please let me know!
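
For reference, the objective I have in mind is the standard change-of-variables formula (my paraphrase, so the notation may not match the paper exactly):

```latex
% f maps a noise sample n to the base sample x_0 = f(n); training maximizes
\log p(n) = \log p_{\mathrm{base}}\!\big(f(n)\big)
          + \log \left| \det \frac{\partial f(n)}{\partial n} \right|
```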

@StonERMax
Author

What's more, is there a PyTorch implementation of Noise Flow?

@AbdoKamel
Collaborator

Just to clarify, in the Noise Flow paper, Figure 3, and in the code:

  • Training is in the inverse direction: noise distribution --> normal distribution.
  • Sampling is in the forward direction: normal distribution --> noise distribution.
  • There might be some confusion from the fact that the noise distribution is denoted n in the paper but x in the code, and the normal distribution is denoted x_0 in the paper but z in the code.

So, in the end, I don't see a difference in the training/sampling directions compared to the Glow model.
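
In code terms, the correspondence is roughly the following (schematic only, not a verbatim excerpt from the repository):

```python
# Rough mapping between the paper's symbols and the code's variable names.
paper_to_code = {
    "n":   "x",  # noise sample: the model's input during training
    "x_0": "z",  # base (standard normal) sample: the output of the inverse pass
}

# training  (inverse direction): x --> z, maximizing log-likelihood
# sampling  (forward direction): z --> x
```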

@AbdoKamel
Collaborator

> What's more, is there a PyTorch implementation of Noise Flow?

Not currently; I hope we can do it in the future.

@StonERMax
Author

In Glow, the mapping data distribution --> normal distribution (which is the training direction in your comment) uses the forward calculation; however, the Noise Flow code uses the inverse calculation for it (e.g., the _inverse_and_log_det_jacobian function of all the layers).
I think that is the difference, and I am asking why the inverse calculation is used instead of the forward one.
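
My current guess is that this is the TensorFlow Probability bijector convention, where forward is defined as base --> data, so density evaluation (the training objective) always goes through the inverse methods. A minimal sketch of that convention (plain TFP, not the Noise Flow code itself; I am assuming the repo's layers follow the same convention):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

# TFP convention: bijector.forward maps base --> data.
flow = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0.0, scale=1.0),
    bijector=tfb.Chain([tfb.Shift(0.5), tfb.Scale(2.0)]),
)

x = tf.constant([1.0, -0.3])
# Density evaluation (training) internally calls bijector.inverse(x) and
# inverse_log_det_jacobian -- i.e., the "inverse calculation".
print(flow.log_prob(x))
# Sampling internally calls bijector.forward on base draws.
print(flow.sample(3))
```

If that is right, then Glow's paper notation (f: data --> latent as the "forward" function) and this API's naming are simply opposite conventions for the same computation.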
