This project is forked from github.com/rpmcruz, with improvements and bug fixes. See the commit history for details: https://github.com/barisozmen/autoaugment-unofficial/commits/master
This is my attempt at reproducing the following paper from Google, using Keras and TensorFlow.
- AutoAugment: Learning Augmentation Policies from Data. Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, Quoc V. Le.
There are two components to the code:
- Controller: a recurrent neural network that suggests transformations
- Child: the final neural network, trained with the policy suggested by the controller.
Each child is trained start-to-finish using the policies produced by the recurrent neural network (controller). The model is then evaluated on the validation set, and the tuple (child validation accuracy, controller softmax probabilities) is stored in a list.
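For orientation, here is a minimal sketch of that search loop. The `controller`/`make_child` interfaces and method names below are assumptions for illustration; the actual classes live in the repository code.

```python
def search(controller, make_child, x_train, y_train, x_val, y_val, n_iterations=100):
    """AutoAugment-style search loop (sketch).

    `controller` and `make_child` are assumed to expose the methods used
    below; the real interfaces are defined by the repository's classes.
    """
    memory = []  # (child validation accuracy, controller softmax probabilities)
    for _ in range(n_iterations):
        softmaxes, policy = controller.sample_policy()    # controller suggests transformations
        child = make_child()                              # fresh child network each iteration
        child.fit(x_train, y_train, augmentation=policy)  # train start-to-finish with the policy
        accuracy = child.evaluate(x_val, y_val)           # score on the validation set
        memory.append((accuracy, softmaxes))              # store the pair
        controller.fit(memory)                            # policy-gradient style controller update
    return memory
```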
The controller is trained to make high-scoring policies more likely: the derivatives of its outputs with respect to each weight are scaled by the child's validation accuracy, a policy-gradient style update in the spirit of the Neural Architecture Search paper. All of this is implemented in the fit() function found inside each class.
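As a rough illustration of that update (not the repository's exact implementation), here is a minimal REINFORCE-style sketch in TensorFlow. The names `controller_model`, `chosen_indices`, `reward`, and `baseline` are hypothetical, introduced only for this example.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def controller_update(controller_model, inputs, chosen_indices, reward, baseline=0.0):
    """One policy-gradient step (sketch): make the sampled choices more likely
    in proportion to (reward - baseline)."""
    with tf.GradientTape() as tape:
        probs = controller_model(inputs, training=True)          # softmax outputs, shape (batch, n_choices)
        chosen = tf.gather(probs, chosen_indices, batch_dims=1)  # probability of each sampled choice
        log_probs = tf.math.log(chosen + 1e-8)                   # small epsilon avoids log(0)
        # REINFORCE-style loss: negative advantage-weighted log-probability
        loss = -(reward - baseline) * tf.reduce_mean(log_probs)
    grads = tape.gradient(loss, controller_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, controller_model.trainable_variables))
    return loss
```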
Disclaimer: I am unsure how faithful the code is to the paper. I have relied heavily on another paper (Neural Architecture Search), which is the main reference cited by AutoAugment. I have since been told that code is published here; you might want to have a look at it as well.