Commit
Added gan
jeffheaton committed Dec 5, 2019
1 parent 6171556 commit 80b307a
Showing 1 changed file with 22 additions and 2 deletions.
24 changes: 22 additions & 2 deletions t81_558_class_07_2_Keras_gan.ipynb
@@ -96,7 +96,13 @@
"\n",
"The program created next will generate faces similar to these. While these faces are not perfect, they demonstrate how we can construct and train a GAN on our own. Later we will see how to import very advanced weights from NVIDIA to produce high-resolution, realistic-looking faces.\n",
"\n",
"![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan-3.png \"GAN Images\")\n",
"\n",
"As discussed in the previous module, a GAN is made up of two neural networks: the discriminator and the generator. The generator creates the images, while the discriminator detects whether a face is real or generated. These two neural networks work as follows:\n",
"\n",
"![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_1.png \"GAN\")\n",
"\n",
"The discriminator accepts an image as its input and produces a number that is the probability of the input image being real. The generator accepts a random seed vector and generates an image from that seed. An unlimited number of new images can be created by providing additional seeds."
]
},
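The two networks described above can be sketched in `tensorflow.keras` as follows. This is a minimal illustrative sketch, not the notebook's actual architecture: the dense layer sizes, the 100-element seed vector, and the 28x28x1 image shape are assumptions chosen for demonstration.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Input, Reshape

SEED_SIZE = 100          # length of the random seed vector (assumed)
IMG_SHAPE = (28, 28, 1)  # illustrative image size, not the notebook's

# Generator: random seed vector -> image
generator = Sequential([
    Input(shape=(SEED_SIZE,)),
    Dense(128, activation="relu"),
    Dense(int(np.prod(IMG_SHAPE)), activation="tanh"),
    Reshape(IMG_SHAPE),
])

# Discriminator: image -> probability that the image is real
discriminator = Sequential([
    Input(shape=IMG_SHAPE),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(1, activation="sigmoid"),
])

# Any number of new images can be produced by feeding fresh random seeds.
seeds = np.random.normal(size=(5, SEED_SIZE))
fake_images = generator.predict(seeds, verbose=0)
probabilities = discriminator.predict(fake_images, verbose=0)
```

Note how the shapes line up: the generator maps a `(batch, SEED_SIZE)` input to `(batch, 28, 28, 1)` images, which is exactly the input shape the discriminator expects.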
{
@@ -581,7 +587,21 @@
"colab_type": "text",
"id": "-ChOo3D1OsVc"
},
"source": [
"Loss functions must be constructed that allow the generator and discriminator to be trained adversarially. Because these two neural networks are trained independently, they must be trained in two separate passes, which requires two separate loss functions and two separate gradient updates. When the discriminator's gradients are applied to decrease the discriminator's loss, it is important that only the discriminator's weights are updated. It is neither fair, nor will it produce good results, to adversarially damage the generator's weights to help the discriminator. A naive backpropagation pass would do exactly that: it would simultaneously adjust the weights of both the generator and the discriminator to lower whatever loss it was assigned.\n",
"\n",
"The following diagram shows how the discriminator is trained.\n",
"\n",
"![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_2.png \"GAN\")\n",
"\n",
"Here a training set is generated with an equal number of real and fake images. The real images are randomly sampled (chosen) from the training data. An equal number of random images are generated from random seeds. For the discriminator training set, the $x$ contains the input images and the $y$ contains a value of 1 for real images and 0 for generated ones.\n",
"\n",
"Likewise, the following diagram shows how the generator is trained.\n",
"\n",
"![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_3.png \"GAN\")\n",
"\n",
"For the generator training set, the $x$ contains the random seeds used to generate images and the $y$ always contains the value 1, because the optimal outcome is for the generator to have produced images so good that the discriminator was fooled into assigning them a probability near 1."
]
},
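The two-pass update described above can be sketched as follows. This is a self-contained toy example under stated assumptions: the tiny dense models and 8x8 image size are illustrative, not the notebook's architecture. The key idea is that the discriminator's weights are frozen while the generator trains through it, so each pass updates only one network.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Input, Reshape

SEED_SIZE, IMG_SHAPE = 16, (8, 8, 1)  # tiny illustrative sizes

generator = Sequential([
    Input(shape=(SEED_SIZE,)),
    Dense(32, activation="relu"),
    Dense(int(np.prod(IMG_SHAPE)), activation="tanh"),
    Reshape(IMG_SHAPE),
])
discriminator = Sequential([
    Input(shape=IMG_SHAPE),
    Flatten(),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model used only to train the generator:
# seed -> generator -> image -> discriminator -> probability.
discriminator.trainable = False
combined = Sequential([Input(shape=(SEED_SIZE,)), generator, discriminator])
combined.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images):
    batch = real_images.shape[0]
    # Pass 1: train only the discriminator on an equal mix of
    # real images (y = 1) and generated images (y = 0).
    discriminator.trainable = True
    fakes = generator.predict(
        np.random.normal(size=(batch, SEED_SIZE)), verbose=0)
    x = np.concatenate([real_images, fakes])
    y = np.concatenate([np.ones(batch), np.zeros(batch)])
    d_loss = discriminator.train_on_batch(x, y)
    # Pass 2: train only the generator through the frozen discriminator;
    # y is all 1s because a perfect generator fools the discriminator.
    discriminator.trainable = False
    seeds = np.random.normal(size=(batch, SEED_SIZE))
    g_loss = combined.train_on_batch(seeds, np.ones(batch))
    return float(d_loss), float(g_loss)

real = np.random.uniform(-1, 1, size=(8, *IMG_SHAPE)).astype("float32")
d_loss, g_loss = train_step(real)
```

Toggling `discriminator.trainable` around each pass is what prevents the generator's update from "damaging" the discriminator's weights, and vice versa; each `train_on_batch` call then applies gradients to exactly one network.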
{
"cell_type": "code",
