First Image Translation #37
Comments
For your first question, I tried both scenarios and got similar results, so you can train CycleGAN without the perceptual loss. For the second one, the model is pretrained on ImageNet.
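For readers who want to see what the perceptual-loss term looks like in practice, here is a minimal sketch. It uses an ImageNet-pretrained VGG16 from torchvision purely as a stand-in feature extractor and an L1 distance on mid-level features; the network and layer actually used in this repository may differ, so treat every choice below as an illustrative assumption.

```python
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """L1 distance between deep features of the translated and source images.

    VGG16 (ImageNet-pretrained, frozen) is used here only as a stand-in
    feature extractor; the repository's perceptual loss may use a different
    network and layer.
    """
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(pretrained=True).features[:16]  # up to relu3_3
        for p in vgg.parameters():
            p.requires_grad = False
        self.features = vgg.eval()
        self.l1 = nn.L1Loss()

    def forward(self, translated, source):
        return self.l1(self.features(translated), self.features(source))

# Sketch of where it would enter the generator objective (weights hypothetical):
# loss_G = loss_gan + lambda_cyc * loss_cycle + lambda_perc * perc(fake_t, real_s)
```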
Thanks for the quick reply! I found it very slow to train CycleGAN with the perceptual loss (it may take around a month in my situation; I mentioned this under another question), so I'm surprised that you spent only 4 days. Did you use a single GPU or multiple GPUs?
I use 4 GPUs.
Thanks, that might be normal. The GPU I used is not comparable to a Tesla V100. Would it be convenient for you to upload the first translated images? It's also fine if not. Since you train CycleGAN with a larger batch size, is the initial learning rate you used the same as in standard CycleGAN?
You can train with fewer epochs. I have uploaded the parameters I use; you can refer to them.
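Since the uploaded parameter file is authoritative, the following is only a sketch of how one might spread a larger CycleGAN batch over 4 GPUs and adjust the learning rate. The linear-scaling heuristic and every value shown are assumptions, not the author's confirmed settings.

```python
import torch
import torch.nn as nn

# Placeholder generator so the sketch is self-contained; the real CycleGAN
# generator is the ResNet-based network defined in the repository.
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)

BASE_LR = 2e-4        # standard CycleGAN learning rate at batch size 1
batch_size = 4        # assumption: one image per GPU on 4 GPUs
lr = BASE_LR * batch_size  # linear-scaling heuristic (assumption, not confirmed)

netG = TinyGenerator()
if torch.cuda.is_available():
    netG = nn.DataParallel(netG.cuda())  # split each batch across all visible GPUs
optimizer_G = torch.optim.Adam(netG.parameters(), lr=lr, betas=(0.5, 0.999))
```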
Hi, I have read the paper and still have some questions.
CycleGAN is trained with a perceptual loss. Does the first image translation use the perceptual loss? If so, which segmentation model parameters are used — the source-only model with 33.6 mIoU reported in the paper?
With the first translated images in hand, when starting the adversarial training of the segmentation model, are the initial model parameters the ImageNet-pretrained parameters or the source-only pretrained parameters with 33.6 mIoU?
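For context, the two initialization options this question distinguishes could look like the sketch below. The backbone class and checkpoint path are hypothetical placeholders; the author's answer above indicates that the ImageNet-pretrained option is the one used.

```python
import torch
from torchvision.models import resnet101

# Option A: initialize the segmentation backbone from ImageNet-pretrained
# weights (the option the author says is used). resnet101 is only a stand-in
# for whatever backbone the repository actually uses.
backbone = resnet101(pretrained=True)

# Option B: warm-start from the source-only segmentation checkpoint
# (~33.6 mIoU). The path below is a hypothetical placeholder.
# state_dict = torch.load("snapshots/source_only.pth", map_location="cpu")
# segmentation_model.load_state_dict(state_dict)
```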