Mixed precision support
This update adds back support for mixed precision training. The following combinations of input and parameter types are now supported:

- float32 input, float32 weight and bias
- float64 input, float64 weight and bias
- float16 input, float16 weight and bias
- float16 input, float32 weight and bias
Note: in the float16 cases, all internal operations are still performed with float32 math, and float16 is not supported when operating in CPU mode.
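A minimal sketch of the mixed float16/float32 case, assuming a PyTorch-style setup; the specific layer (BatchNorm2d here) and tensor shapes are illustrative assumptions standing in for whichever operator this update covers:

```python
import torch
import torch.nn as nn

# Illustrative stand-in: assume the updated operator behaves like BatchNorm2d.
# Its weight and bias parameters are created as float32 by default.
layer = nn.BatchNorm2d(8).cuda()

# float16 input paired with float32 weight and bias (the mixed case above).
# CUDA only: per the note, float16 is not supported in CPU mode.
x = torch.randn(4, 8, 16, 16, device="cuda", dtype=torch.float16)

y = layer(x)
print(y.dtype)  # torch.float16; internal math is still carried out in float32
```

Keeping the parameters in float32 while feeding float16 activations preserves the accuracy of the accumulated statistics while still benefiting from the smaller activation footprint.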