
Experimental training capability

@rouson released this 24 May 23:23 · ed0afa5

This is the first release with an experimental capability for training neural networks. The current unit tests verify convergence for single-hidden-layer networks using gradient descent with updates averaged across mini-batches of input/output pairs. Future work will include verifying the training of deep neural networks and introducing stochastic gradient descent.
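
To make the description above concrete, the sketch below shows the technique this release exercises: gradient descent on a single-hidden-layer sigmoid network, with the weight and bias updates averaged across one mini-batch (here, the XOR truth table). It is a minimal, self-contained illustration in standard Fortran; every name, size, and hyperparameter in it is hypothetical, and none of it is this library's API.

```fortran
! Hypothetical sketch of mini-batch-averaged gradient descent for a
! single-hidden-layer sigmoid network; not this library's API.
program train_sketch
  implicit none
  integer, parameter :: n_in = 2, n_hidden = 4, n_out = 1, batch = 4
  real :: w1(n_hidden, n_in), w2(n_out, n_hidden)  ! layer weights
  real :: b1(n_hidden), b2(n_out)                  ! layer biases
  real :: x(n_in, batch), y(n_out, batch)          ! XOR table as one mini-batch
  real :: z1(n_hidden, batch), a1(n_hidden, batch) ! hidden pre-/post-activation
  real :: z2(n_out, batch), a2(n_out, batch)       ! output pre-/post-activation
  real :: delta2(n_out, batch), delta1(n_hidden, batch)
  real, parameter :: eta = 0.5                     ! learning rate
  integer :: epoch, p

  x = reshape([0.,0., 0.,1., 1.,0., 1.,1.], shape(x))
  y = reshape([0., 1., 1., 0.], shape(y))
  call random_number(w1); call random_number(w2)
  w1 = w1 - 0.5; w2 = w2 - 0.5
  b1 = 0.; b2 = 0.

  do epoch = 1, 10000
    ! forward pass: keep the unactivated weighted/biased outputs (z1, z2)
    z1 = matmul(w1, x) + spread(b1, dim=2, ncopies=batch)
    a1 = sigmoid(z1)
    z2 = matmul(w2, a1) + spread(b2, dim=2, ncopies=batch)
    a2 = sigmoid(z2)
    ! back-propagate the squared-error loss via activation derivatives
    delta2 = (a2 - y) * sigmoid_prime(z2)
    delta1 = matmul(transpose(w2), delta2) * sigmoid_prime(z1)
    ! gradient-descent step with gradients averaged across the mini-batch
    w2 = w2 - eta * matmul(delta2, transpose(a1)) / batch
    w1 = w1 - eta * matmul(delta1, transpose(x))  / batch
    b2 = b2 - eta * sum(delta2, dim=2) / batch
    b1 = b1 - eta * sum(delta1, dim=2) / batch
  end do

  do p = 1, batch
    print '(2f4.1, f8.4)', x(:,p), a2(1,p)  ! inputs and trained prediction
  end do

contains

  elemental function sigmoid(z) result(a)
    real, intent(in) :: z
    real :: a
    a = 1. / (1. + exp(-z))
  end function

  elemental function sigmoid_prime(z) result(da)
    real, intent(in) :: z
    real :: da
    da = sigmoid(z) * (1. - sigmoid(z))
  end function

end program
```

Compiled with gfortran, the printed outputs should approach the XOR targets 0, 1, 1, 0, although convergence from a random start is not guaranteed on every run.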

What's Changed

  • CI: install gfortran on macOS by @everythingfunctional in #48
  • Encapsulate all output and store unactivated weighted/biased neuron output for training by @rouson in #47
  • Add activation-function derivative functions by @rouson in #46
  • fix(setup.sh): brew install netcdf-fortran by @rouson in #45
  • fix(netcdf-interfaces): patch for fpm v >= 0.8 by @rouson in #49
  • Test: add test for single-layer perceptron by @rouson in #50
  • Add back-propagation by @rouson in #51 (see the note after this list)
  • Export everything via one common module by @rouson in #52
  • Add nominally complete training algorithm by @rouson in #53
  • doc(README): mention training & additional future work by @rouson in #55
  • Group input/output pairs into mini-batches by @rouson in #54
  • doc(README): mention experimental training feature by @rouson in #56
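
Several of these changes fit together: back-propagation (#51) consumes both the unactivated neuron outputs stored by #47 and the activation-function derivatives added in #46. In the usual notation (an assumption here, not taken from this project's source), the error term for hidden layer $\ell$ is

$$\delta^{(\ell)} = \left(W^{(\ell+1)\,\mathsf{T}}\,\delta^{(\ell+1)}\right) \odot \sigma'\!\left(z^{(\ell)}\right),$$

so the pre-activation values $z^{(\ell)}$ must be retained from the forward pass; the mini-batch grouping in #54 then averages the resulting gradients before each update, as in the sketch above.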

New Contributors

  • @everythingfunctional made their first contribution in #48

Full Changelog: 0.5.0...0.6.0