Hi,

I'm trying to reproduce your style-transfer results on the photo2monet, photo2vangogh, photo2ukiyoe, and photo2cezanne datasets (collection style). Did you use specific lambda values for this use case? The showcased results are really good.
I have another question regarding the conditional identity preserving loss mentioned in the paper:

> removing the conditional identity preserving loss, multi-scale SSIM loss and color cycle-consistency loss substantially degrades the performance, meaning that the proposed joint optimization objectives are particularly important to stabilize the training process and thus produce much better generation performance
Maybe I missed it, but I couldn't find this loss anywhere in the code. Is there a practical reason it was left out?
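For reference, here is a minimal sketch of what I understand the conditional identity preserving term to be: an L1 penalty asking the generator to act as an identity map when its input is already in the target style. The generator here is a hypothetical stand-in (the real model would actually inject the style condition), so only the loss itself reflects the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_generator(image, style_id):
    """Hypothetical stand-in for the conditional generator G(x, c).
    A real model would stylize `image` toward style `style_id`; here
    we just return a lightly perturbed copy so the sketch runs."""
    return image + 0.01 * rng.standard_normal(image.shape)

def conditional_identity_loss(generator, y, style_of_y):
    """L1 identity term: || G(y, c_y) - y ||_1 (mean-reduced), where
    y is already an image in the target style c_y."""
    return np.mean(np.abs(generator(y, style_of_y) - y))

y = rng.standard_normal((3, 64, 64))   # an image already in style c_y
loss = conditional_identity_loss(fake_generator, y, style_of_y=0)
# loss stays small because the stand-in barely changes its input
```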
Finally, I noticed that you did not use the LSGAN loss mentioned in the paper, but rather the loss from the Wasserstein GAN with gradient penalty (WGAN-GP). What was the reason for this change?
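To make sure we're talking about the same objectives, here is a framework-free sketch of the two discriminator/critic losses as I understand them. The gradient norms for the WGAN-GP penalty would normally come from autograd on interpolated samples; they are passed in as an argument here to keep the example self-contained:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares GAN discriminator loss: push scores on real
    samples toward 1 and scores on fake samples toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def wgan_gp_d_loss(d_real, d_fake, grad_norms, lam=10.0):
    """WGAN-GP critic loss: unbounded scores plus a penalty keeping
    the critic's gradient norms near 1 (the Lipschitz constraint)."""
    return (np.mean(d_fake) - np.mean(d_real)
            + lam * np.mean((grad_norms - 1.0) ** 2))

d_real = np.array([0.9, 1.1])   # scores on real images
d_fake = np.array([0.1, -0.1])  # scores on generated images
ls = lsgan_d_loss(d_real, d_fake)            # small: D separates well
wg = wgan_gp_d_loss(d_real, d_fake, np.ones(2))  # penalty is zero here
```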
Thanks