Hello,
I'm new to PyTorch and GANs. Thanks for sharing such a good implementation. However, I'm confused about your training procedure for G and D. To the best of my knowledge, D should try to distinguish the different modalities while G does the opposite, and the two should be trained separately and alternately. But in your code, G and D are trained simultaneously, which causes D's loss to become very high while G is being trained, and then G's training becomes unstable because of D's large loss when it is D's turn.
When I try to apply the framework to other work, the loss quickly becomes NaN, so training cannot continue. Is there something wrong with my understanding, or with the code? Below is a minimal sketch of the alternating update scheme I have in mind.
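For reference, this is roughly what I mean by training D and G separately and alternately (a PyTorch sketch; `netG`, `netD`, and the optimizers are placeholder names, not taken from this repository):

```python
import torch
import torch.nn as nn

# Placeholder models -- illustrative only, not the repository's architecture.
netG = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
netD = nn.Sequential(nn.Linear(784, 1))
optG = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))
optD = torch.optim.Adam(netD.parameters(), lr=2e-4, betas=(0.5, 0.999))
criterion = nn.BCEWithLogitsLoss()

def train_step(real):
    batch = real.size(0)
    real_label = torch.ones(batch, 1)
    fake_label = torch.zeros(batch, 1)

    # Step 1: update D only. The fake samples are detached so no
    # gradients flow back into G during D's update.
    optD.zero_grad()
    noise = torch.randn(batch, 100)
    fake = netG(noise)
    lossD = criterion(netD(real), real_label) + \
            criterion(netD(fake.detach()), fake_label)
    lossD.backward()
    optD.step()

    # Step 2: update G only. Gradients flow through D, but only G's
    # parameters are stepped here; D's stale gradients are cleared at
    # the start of the next D update.
    optG.zero_grad()
    lossG = criterion(netD(fake), real_label)  # G tries to fool D
    lossG.backward()
    optG.step()
    return lossD.item(), lossG.item()
```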
Another question: is adding L2 regularization necessary to avoid overfitting here?
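In case it helps clarify the question, by L2 regularization I mean something like the optimizer's `weight_decay` argument (just how I would add it, not necessarily how this repository does it):

```python
# Hypothetical: L2 regularization applied through Adam's weight_decay argument.
optG = torch.optim.Adam(netG.parameters(), lr=2e-4, weight_decay=1e-5)
```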
Looking forward to your reply.
Thanks.