
Mixed precision support

@ducksoup released this 05 Jul 10:11

This update adds back support for mixed precision training. The following combinations of input / parameter dtypes are now supported:

  • float32 input, float32 weight and bias
  • float64 input, float64 weight and bias
  • float16 input, float16 weight and bias
  • float16 input, float32 weight and bias

Note: in the float16 cases, all internal operations are still performed with float32 math, and float16 is not supported when running in CPU mode.
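
As a rough illustration of the float16 input / float32 parameter case, here is a minimal sketch of the usual pattern behind it, using a toy affine layer as a stand-in for the library's own modules (`ToyAffine` is hypothetical and not part of this release): the input is upcast to float32, all math runs in float32, and the result is cast back to the input's dtype.

```python
import torch
import torch.nn as nn

class ToyAffine(nn.Module):
    """Toy stand-in for a norm layer: float32 weight/bias, mixed precision input.

    Mirrors the behavior described above: internal operations always run in
    float32, and the output is cast back to the input's dtype.
    """

    def __init__(self, num_features):
        super().__init__()
        # Parameters stay in float32 even when the input is float16
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        # Upcast float16 inputs so all internal ops use float32 math
        y = x.float() * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
        # Cast back so the output dtype matches the input dtype
        return y.to(x.dtype)

# GPU only: per the note above, float16 is not supported in CPU mode
layer = ToyAffine(64).cuda()                              # float32 weight and bias
x16 = torch.randn(8, 64, 32, 32, device="cuda").half()   # float16 input
out = layer(x16)
assert out.dtype == torch.float16
```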