Something strange when training #47
Comments
The model has definitely crashed... It seems you've activated the tv_loss and set its weight too high, so the output tends to be over-smooth and gets stuck in the local minima [1,0,0] and [0,1,1], producing the blue and yellow pattern... Do all predicted uv maps look like this? I don't know what else you changed.
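For context, a total variation (tv) loss penalizes differences between neighboring pixels; with too large a weight it pushes the output toward flat, constant-color patches like the ones described above. A minimal numpy sketch of the anisotropic form (this is my own illustration, not the repo's actual implementation; the function name, weight parameter, and `(C, H, W)` layout are assumptions):

```python
import numpy as np

def tv_loss(img, weight=1.0):
    """Anisotropic total variation of a (C, H, W) image array."""
    # Sum of absolute differences between vertically adjacent pixels.
    dh = np.abs(np.diff(img, axis=1)).sum()
    # Sum of absolute differences between horizontally adjacent pixels.
    dw = np.abs(np.diff(img, axis=2)).sum()
    return weight * (dh + dw)
```

A perfectly flat image has zero tv loss, which is why an over-weighted tv term can drag the network toward degenerate constant outputs.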
I just removed dilate when generating the uv map and set align_corners to True when upsampling. Now when I train again it gets better. I will post the results later.
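The `align_corners` flag changes how each output pixel index maps back to an input coordinate during upsampling; with `align_corners=True` the first and last output pixels land exactly on the first and last input pixels, while with `False` the mapping is based on pixel centers and can extrapolate past the last input pixel. A small sketch of the 1-D coordinate mapping (mirroring the convention PyTorch's `F.interpolate` documents, but this helper is my own, not code from the repo):

```python
def src_coord(i, in_size, out_size, align_corners):
    """Input-space coordinate sampled for output index i in linear upsampling."""
    if align_corners:
        # Endpoints map exactly onto endpoints.
        return i * (in_size - 1) / (out_size - 1)
    # Pixel-center convention: can fall outside [0, in_size - 1] at the edges.
    return (i + 0.5) * in_size / out_size - 0.5
```

With `in_size=4, out_size=8`, the last output index maps to exactly 3.0 with `align_corners=True` but to 3.25 with `False`, which is one plausible reason edge behavior of the upsampled uv map differs between the two settings.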
Interesting, looks like the training was successful. Random crashes happen frequently in my experiments; it could be a bad item in the provided toy dataset, but I'm not sure either. Anyway, I just load the last successful checkpoint and resume training whenever a crash happens, not a big deal.
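The resume-on-crash workflow described above can be sketched generically. In PyTorch one would normally `torch.save`/`torch.load` the model and optimizer `state_dict`s; below is a library-agnostic version using `pickle` so it stands alone (the file name and the shape of the state dict are assumptions, not the repo's layout):

```python
import os
import pickle

def save_checkpoint(state, path):
    """Persist the training state dict to disk."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def resume_or_start(path, init_state):
    """Load the last checkpoint if one exists, otherwise start fresh."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return init_state
```

Calling `resume_or_start` at the top of the training script makes restarting after a crash a no-op decision: the loop simply continues from the last saved epoch.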
Later, I will report the results of every epoch.
Hi, I modified train.sh as:
and ran it for about 16 hours (44 epochs); at the end, log.txt shows:
The total loss has not changed much since the 5th epoch,
and the intermediate output is strange (epoch 44):
I wonder if it is because the batch size is too small, since I don't have enough GPU memory. Or maybe I set some other option wrong?
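If GPU memory forces a very small batch size, gradient accumulation is a common workaround: average gradients over several micro-batches before stepping, which for losses that average over samples matches the gradient of one large batch. A numpy sketch of that equivalence, using a simple MSE objective (this is a generic illustration, not the repo's training loop):

```python
import numpy as np

def grad_mse(w, x, y):
    """Gradient of 0.5 * mean((x @ w - y)**2) with respect to w."""
    r = x @ w - y
    return x.T @ r / len(y)

def accumulated_grad(w, x, y, micro=4):
    """Accumulate gradients over micro-batches, weighted by batch size."""
    g = np.zeros_like(w)
    for xb, yb in zip(np.array_split(x, micro), np.array_split(y, micro)):
        g += grad_mse(w, xb, yb) * len(yb)
    return g / len(y)
```

The accumulated gradient is identical to the full-batch gradient, so one effectively trades memory for a few extra forward/backward passes per optimizer step.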