This has been mentioned before in #254, but I want to elaborate on our difficulties.
These hardcoded dtypes make it extremely hard to move our programs to float64.
For example, if we use `tf.keras.backend.set_floatx('float64')` anywhere, we get errors within `tensorflow_ranking` due to conflicting dtypes, as in the sketch below.
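A minimal sketch of the failure mode (the tensor names and the specific constant are illustrative, not `tensorflow_ranking`'s actual internals):

```python
import tensorflow as tf

tf.keras.backend.set_floatx('float64')

# Tensors created at the global floatx come out as float64...
scores = tf.ones([2, 3], dtype=tf.keras.backend.floatx())

# ...but a constant hardcoded to float32, as happens inside the
# library, no longer matches them:
eps = tf.constant(1e-10, dtype=tf.float32)

# TensorFlow does not promote dtypes implicitly, so this raises
# InvalidArgumentError (double vs. float) instead of computing.
total = scores + eps
```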
Will the global floating-point policy (`tf.keras.mixed_precision.set_global_policy` and `tf.keras.backend.floatx`) be supported?
If the official stance is to ignore the global policy, can that be documented?
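For reference, the two controls in question, and the pattern a policy-aware library would follow (the `eps` constant is hypothetical):

```python
import tensorflow as tf

# The two global controls named above; either one requests float64:
tf.keras.backend.set_floatx('float64')
tf.keras.mixed_precision.set_global_policy('float64')

# A library honoring the policy would derive dtypes from it rather
# than hardcoding tf.float32:
compute_dtype = tf.keras.mixed_precision.global_policy().compute_dtype
eps = tf.constant(1e-10, dtype=compute_dtype)  # float64 here
```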
The issue of float precision affects many computations in `tensorflow_ranking`, such as `tensorflow_ranking/python/metrics_impl.py`, lines 603 to 628 at commit a928e2b. The pattern there looks roughly like the sketch below.
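Illustrative only (not the exact code at the cited lines): the metric implementations cast inputs to a hardcoded `tf.float32`, where a floatx-aware variant would defer to the global setting.

```python
import tensorflow as tf

# Roughly the current pattern: dtype pinned to float32.
def prepare_hardcoded(labels, predictions):
    labels = tf.cast(labels, dtype=tf.float32)
    predictions = tf.cast(predictions, dtype=tf.float32)
    return labels, predictions

# A floatx-aware variant: dtype follows the global setting,
# so set_floatx('float64') propagates through the metrics.
def prepare_floatx(labels, predictions):
    dtype = tf.keras.backend.floatx()
    return tf.cast(labels, dtype), tf.cast(predictions, dtype)
```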