
Pre-Train Model #1

Open
jacobbieker opened this issue Sep 6, 2021 · 9 comments

Comments

@jacobbieker
Member

This GAN is a bit tricky to train, and I've had trouble so far getting it to learn much; the loss just stays very high. This could be because of the more limited compute I've tried training it on compared to DeepMind, but having a pre-trained model publicly available would be very helpful.

@jacobbieker
Member Author

We also have their data, which we can download from the repo, so we can first try their weights on their data and see if it works or not.

@jacobbieker
Member Author

The first pretrained weights are available! They come from a model trained for 100 epochs on the sample dataset; weights from models trained on the whole UK dataset and the US dataset are planned for later.

@jacobbieker jacobbieker reopened this Jun 20, 2022
@clearlyzero

Hello, sorry to bother you. During my training, the discriminator loss stays at around 2. Did you encounter this situation while training this network?

@jacobbieker
Member Author

It's been a while since I did the training, but if I remember correctly, yes, somewhat. It should still bounce around, but the loss stays relatively high.

@clearlyzero

> It's been a while since I did the training, but if I remember correctly, yes, somewhat. It should still bounce around, but the loss stays relatively high.

Thanks. Does this mean that the minimum value of the discriminator loss is 2?

@jacobbieker
Member Author

No, but it shouldn't drop to 0 or near 0; otherwise the discriminator is, essentially, too good at discriminating and the generator won't get better. Unfortunately, it seems the best way to see how well the network is training is to plot some outputs every once in a while.
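To make "plot some outputs every once in a while" concrete, here is a minimal sketch of a periodic visual check. Everything here is illustrative rather than code from this repo: `save_sample_grid` is a hypothetical helper, and the random arrays stand in for the generator's predicted radar frames.

```python
# Hypothetical monitoring sketch: every N training steps, dump a row of
# generated frames to disk so training quality can be judged by eye.
import matplotlib
matplotlib.use("Agg")  # headless backend, just writes image files
import matplotlib.pyplot as plt
import numpy as np

def save_sample_grid(frames, path):
    """Save a row of predicted frames side by side for visual inspection."""
    n = len(frames)
    fig, axes = plt.subplots(1, n, figsize=(3 * n, 3))
    if n == 1:
        axes = [axes]
    for ax, frame in zip(axes, frames):
        ax.imshow(frame, cmap="viridis")
        ax.axis("off")
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)

# Stand-in for generator output: 4 future frames of a 64x64 field.
fake_frames = [np.random.rand(64, 64) for _ in range(4)]
save_sample_grid(fake_frames, "samples_step_1000.png")
```

In a real training loop you would call something like this every few hundred steps, alongside the target frames, since the loss curves alone say little about sample quality for this kind of GAN.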

@clearlyzero

Thank you very much for your reply. Will the generator loss increase and then balance out as training progresses?

@jacobbieker
Member Author

Ideally, I think both the generator loss and the discriminator loss will go up and down during training; the combined loss should decrease somewhat overall, but it won't get anywhere close to zero.
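For intuition on why the discriminator loss hovers around 2 rather than that being a hard minimum, here is a pure-Python sketch, assuming the hinge losses used for GAN discriminators in the DGMR paper (this is an illustration of the loss arithmetic, not code from this repo):

```python
# Per-sample hinge losses commonly used for GAN discriminators;
# `real_score` / `fake_score` are the raw (unbounded) discriminator outputs.

def discriminator_hinge_loss(real_score, fake_score):
    # The discriminator wants real_score >= 1 and fake_score <= -1.
    return max(0.0, 1.0 - real_score) + max(0.0, 1.0 + fake_score)

def generator_hinge_loss(fake_score):
    # The generator wants the discriminator to score its samples highly.
    return -fake_score

# A maximally uncertain discriminator scores everything 0:
print(discriminator_hinge_loss(0.0, 0.0))   # 2.0 -- the "stuck at 2" value

# A confident, correct discriminator drives its loss toward 0:
print(discriminator_hinge_loss(1.5, -1.5))  # 0.0
```

So a loss near 2 means the discriminator can barely tell real from fake, which is roughly the balanced regime you want; a loss near 0 is the failure mode described above, where the discriminator wins and the generator stops receiving a useful signal.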

@clearlyzero

Thank you very much, I think I have a clue now. Thank you for taking the time to reply amid your busy work; it is very helpful to me.

Chevolier pushed a commit to Chevolier/skillful_nowcasting that referenced this issue May 21, 2024