Recurrent and Artificial Neural Networks
The vanishing gradient problem of automatic differentiation or backpropagation in neural networks was partially overcome in 1992 by an early generative model called the neural history compressor, implemented as an unsupervised stack of recurrent neural networks... (Wikipedia)
This is a from-scratch Golang implementation, inspired in part by the Stanford ML course.
The package implements gradient descent, SGD with mini-batching, L1/L2 regularization, and the Adagrad, Adadelta, RMSprop, and ADAM optimization algorithms, along with configurable hyper-parameters and other tunables, a selection of activation functions, and input/output normalization.
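To illustrate one of the optimizers listed above, here is a minimal, self-contained sketch of the ADAM update rule in Go. This is not gorann's API; the function and variable names are illustrative, and the hyper-parameter values are the commonly cited defaults.

```go
package main

import (
	"fmt"
	"math"
)

// adamStep applies one ADAM update to the weight vector w, given the
// gradient g computed on one mini-batch. m and v are the running first
// and second moment estimates; t is the 1-based step counter.
// (Illustrative sketch, not gorann's actual API.)
func adamStep(w, g, m, v []float64, t int) {
	const (
		alpha = 0.001 // learning rate
		beta1 = 0.9   // first-moment decay
		beta2 = 0.999 // second-moment decay
		eps   = 1e-8
	)
	for i := range w {
		m[i] = beta1*m[i] + (1-beta1)*g[i]
		v[i] = beta2*v[i] + (1-beta2)*g[i]*g[i]
		mhat := m[i] / (1 - math.Pow(beta1, float64(t)))
		vhat := v[i] / (1 - math.Pow(beta2, float64(t)))
		w[i] -= alpha * mhat / (math.Sqrt(vhat) + eps)
	}
}

func main() {
	w := []float64{0.5, -0.3}
	g := []float64{0.1, -0.2} // gradient from one mini-batch
	m := make([]float64, len(w))
	v := make([]float64, len(w))
	adamStep(w, g, m, v, 1)
	fmt.Println(w)
}
```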
Plus:
- RNNs: naive, fully connected ("unrolled"), and partially connected ("limited") - tested here
- Super-NN, weighted aggregation of Neural Networks - described here (a minimal aggregation sketch follows this list)
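As a rough illustration of the weighted-aggregation idea behind Super-NN, assuming it amounts to a weighted average of member-network outputs, here is a sketch with a trivial stand-in model. The types and names below are illustrative, not gorann's.

```go
package main

import "fmt"

// predictor is any model that maps an input vector to an output vector.
type predictor interface {
	Predict(x []float64) []float64
}

// scaler is a stand-in "network" used only to keep the example runnable:
// it multiplies every input component by a constant.
type scaler float64

func (s scaler) Predict(x []float64) []float64 {
	y := make([]float64, len(x))
	for i, v := range x {
		y[i] = float64(s) * v
	}
	return y
}

// superNN combines several predictors by a weighted average of their outputs.
type superNN struct {
	models  []predictor
	weights []float64 // typically normalized to sum to 1
}

func (s *superNN) Predict(x []float64) []float64 {
	var out []float64
	for i, m := range s.models {
		y := m.Predict(x)
		if out == nil {
			out = make([]float64, len(y))
		}
		for j := range y {
			out[j] += s.weights[i] * y[j]
		}
	}
	return out
}

func main() {
	ens := &superNN{
		models:  []predictor{scaler(1.0), scaler(2.0)},
		weights: []float64{0.75, 0.25},
	}
	fmt.Println(ens.Predict([]float64{1, 2, 3})) // [1.25 2.5 3.75]
}
```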
And also:
- Natural Evolution Strategies (NES) - see the sketch after this list
- Parallel computing: SGD and NES (initial)
- And more
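The sketch below shows the basic NES idea, with each perturbation evaluated in its own goroutine in the spirit of the parallel-computing item above: sample Gaussian perturbations of the parameters, evaluate their fitness, and take a step along the Monte-Carlo estimate of the search gradient. It is illustrative only and does not reflect gorann's actual API.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// nesStep performs one NES update of theta, maximizing fitness f.
// Each of the n perturbations is evaluated concurrently.
// (Illustrative sketch, not gorann's actual API.)
func nesStep(theta []float64, f func([]float64) float64, n int, sigma, alpha float64) {
	dim := len(theta)
	eps := make([][]float64, n)
	fit := make([]float64, n)

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		eps[i] = make([]float64, dim)
		for j := range eps[i] {
			eps[i][j] = rand.NormFloat64()
		}
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			cand := make([]float64, dim)
			for j := range cand {
				cand[j] = theta[j] + sigma*eps[i][j]
			}
			fit[i] = f(cand)
		}(i)
	}
	wg.Wait()

	// Monte-Carlo estimate of the search gradient, then a gradient-ascent step.
	for j := 0; j < dim; j++ {
		var g float64
		for i := 0; i < n; i++ {
			g += fit[i] * eps[i][j]
		}
		theta[j] += alpha * g / (float64(n) * sigma)
	}
}

func main() {
	// Toy objective: maximize -(x-3)^2; theta should approach 3.
	f := func(x []float64) float64 { return -(x[0] - 3) * (x[0] - 3) }
	theta := []float64{0}
	for t := 0; t < 200; t++ {
		nesStep(theta, f, 50, 0.1, 0.02)
	}
	fmt.Printf("theta ~ %.2f\n", theta[0])
}
```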
go get github.com/hqr/gorann
See Makefile for test, lint, command-line help, and other useful targets.