barisozmen/autoaugment-unofficial

This project is a fork of github.com/rpmcruz's AutoAugment implementation, with improvements and bug fixes. See the commit history for details: https://github.com/barisozmen/autoaugment-unofficial/commits/master

Original README:


AutoAugment

My attempt at reproducing Google's AutoAugment paper, using Keras and TensorFlow.

There are two components to the code:

  1. Controller: a recurrent neural network that suggests transformations
  2. Child: the final neural network, trained with the transformations suggested by the controller.

Each child is trained start-to-finish using the policies produced by the recurrent neural network (the controller). The child is then evaluated on the validation set, and the tuple (child validation accuracy, controller softmax probabilities) is stored in a list.
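
A minimal, runnable sketch of that search loop, using toy stand-ins for the controller and child (the classes, methods, and fake accuracy below are illustrative, not this repository's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

class Controller:
    """Toy stand-in for the RNN controller: emits softmax
    probabilities over a handful of candidate transformations."""
    def __init__(self, n_ops=4):
        self.logits = np.zeros(n_ops)

    def predict(self):
        e = np.exp(self.logits - self.logits.max())
        return e / e.sum()

class Child:
    """Toy stand-in for the child network: 'training' just returns
    a fake validation accuracy that depends on the sampled policy."""
    def fit_and_evaluate(self, policy):
        return float(0.5 + 0.4 * policy[2] + 0.05 * rng.standard_normal())

controller = Controller()
history = []  # list of (validation accuracy, controller softmaxes) tuples

for trial in range(10):
    softmaxes = controller.predict()             # controller suggests a policy
    policy = rng.multinomial(1, softmaxes)       # sample one transformation
    accuracy = Child().fit_and_evaluate(policy)  # fresh child, trained start-to-finish
    history.append((accuracy, softmaxes))        # store the tuple
```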

The controller is trained to maximize the product of the derivative of its outputs with respect to each weight, $\frac{\partial y}{\partial w}$, and the [0, 1]-normalized accuracy scores from that list. The $y$ outputs are the "controller softmax probabilities" stored in the list.
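
In essence this is the REINFORCE score-function update: each trial's gradient is weighted by its normalized reward. A toy numpy sketch of one such step, using bare logits in place of the RNN's weights and recording the sampled action index alongside each accuracy (all names below are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def controller_update(logits, trials, lr=0.1):
    """One REINFORCE-style step: scale each trial's score-function
    gradient, d log p(action)/d logits = one_hot(action) - softmax(logits),
    by that trial's [0, 1]-normalized validation accuracy.
    `trials` holds (validation accuracy, sampled action index) pairs."""
    accs = np.array([acc for acc, _ in trials], dtype=float)
    rewards = (accs - accs.min()) / (accs.max() - accs.min() + 1e-8)
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    for reward, (_, action) in zip(rewards, trials):
        one_hot = np.zeros_like(logits)
        one_hot[action] = 1.0
        grad += reward * (one_hot - probs)    # reward-weighted gradient
    return logits + lr * grad / len(trials)   # one gradient-ascent step

# Toy usage: action 2 earned the best accuracies, so its probability rises.
logits = np.zeros(4)
trials = [(0.61, 0), (0.74, 2), (0.68, 2), (0.55, 3)]
print(softmax(controller_update(logits, trials)))
```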

All of this is implemented in the fit() method of each class.

Disclaimer: I am unsure how faithful this code is to the paper. I drew heavily on another paper, Neural Architecture Search, which is the main citation in AutoAugment. I have since been told that the official code has been published; you might want to have a look at it as well.

About

Reproduction of AutoAugment from Google


Languages

  • Jupyter Notebook 81.5%
  • Python 18.5%