
tf.keras.metrics.F1Score produces ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32 #33

meekus-fischer opened this issue Aug 23, 2023 · 8 comments
@meekus-fischer

meekus-fischer commented Aug 23, 2023

System information.

  • Have I written custom code (as opposed to using a stock example script provided in Keras): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): RHELS 7.9
  • TensorFlow installed from (source or binary): Pip, binary
  • TensorFlow version (use command below): 2.13.0
  • Python version: 3.9
  • GPU model and memory: NVIDIA A100-SXM4-40GB
  • Exact command to reproduce:

Describe the problem.

During training, the model monitors tf.keras.metrics.F1Score; however, when F1Score.update_state is called, a ValueError is thrown.

ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: <tf.Tensor 'cond/Identity_4:0' shape=(None,) dtype=int32>

which is the result of the following line of code in the FBetaScore Class:

 y_true = tf.convert_to_tensor(y_true, dtype=self.dtype)

Describe the current behavior.

The F1Score metric is unable to update its state; an error is thrown and the model cannot be trained.

Describe the expected behavior.

I would expect F1Score to update its state from a y_true tensor with int32 dtype and a y_pred tensor with float32 dtype without throwing an error.

In the tfa.metrics.FBetaScore code, the corresponding line is:

y_true = tf.cast(y_true, self.dtype)

Is it possible that the new tf.keras.metrics code should be using tf.cast(...) instead of tf.convert_to_tensor(...)?
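
The difference is easy to see in isolation. A minimal sketch (not from the original report) showing that tf.cast changes the dtype while tf.convert_to_tensor refuses and raises the same ValueError:

import tensorflow as tf

y_true = tf.constant([0, 1, 1, 0], dtype=tf.int32)

# tf.cast converts the existing int32 tensor to float32 without complaint.
y_cast = tf.cast(y_true, tf.float32)

# tf.convert_to_tensor will not change the dtype of an existing tensor and
# raises the ValueError quoted above.
try:
    tf.convert_to_tensor(y_true, dtype=tf.float32)
except ValueError as e:
    print(e)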

  • Do you want to contribute a PR? (yes/no): no

Standalone code to reproduce the issue.

I cannot share the full code, but I can share the custom model __init__ / train_step that causes the error.

import tensorflow as tf
from tensorflow import keras


class CustomModel(keras.Model):
    def __init__(self, model_type, val_samples, threshold, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='Loss')
        self.val_samples = val_samples
        self.precision = tf.keras.metrics.Precision(name='Precision')
        self.recall = tf.keras.metrics.Recall(name='Recall')
        self.f1 = tf.keras.metrics.F1Score(name="F1", threshold=threshold)
        if model_type == 'binary':
            self.accuracy = tf.keras.metrics.BinaryAccuracy(name='accuracy', threshold=threshold)
        else:
            self.accuracy = tf.keras.metrics.CategoricalAccuracy(name='accuracy')

    def train_step(self, data):
        inputs, targets = data

        with tf.GradientTape() as tape:
            predictions = self(inputs, training=True)
            loss = self.compiled_loss(targets, predictions, regularization_losses=self.losses)

        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))

        self.loss_tracker.update_state(loss)
        self.accuracy.update_state(targets, predictions)
        self.precision.update_state(targets, predictions)
        self.recall.update_state(targets, predictions)
        self.f1.update_state(targets, predictions)  # <-- this call raises the ValueError
        return {"Accuracy": self.accuracy.result(), "Loss": self.loss_tracker.result(),
                "Precision": self.precision.result(), "Recall": self.recall.result(),
                "F1": self.f1.result()}

    def test_step(self, data):
        inputs, targets = data

        # Average several stochastic forward passes (MC dropout style).
        predictions = []
        for _ in range(self.val_samples):
            predictions.append(self(inputs, training=False))
        predictions = tf.math.reduce_mean(tf.stack(predictions, axis=0), axis=0)

        loss = self.compiled_loss(targets, predictions, regularization_losses=self.losses)

        self.loss_tracker.update_state(loss)
        self.accuracy.update_state(targets, predictions)
        self.precision.update_state(targets, predictions)
        self.recall.update_state(targets, predictions)
        self.f1.update_state(targets, predictions)
        return {"Accuracy": self.accuracy.result(), "Loss": self.loss_tracker.result(),
                "Precision": self.precision.result(), "Recall": self.recall.result(),
                "F1": self.f1.result()}

    @property
    def metrics(self):
        return [self.accuracy, self.loss_tracker, self.precision, self.recall, self.f1]

Source code / logs.

Epoch 1/2000
Traceback (most recent call last):
  File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/kraken/john.fischer/projects/passive_sonar_models/scripts/../../passive_sonar_models/__main__.py", line 110, in <module>
    main()
  File "/data/kraken/john.fischer/projects/passive_sonar_models/scripts/../../passive_sonar_models/__main__.py", line 37, in main
    train_model.main(args)
  File "/data/kraken/john.fischer/projects/passive_sonar_models/scripts/../../passive_sonar_models/task/train_model.py", line 139, in main
    model.fit(train_data, epochs=args.num_epochs, validation_data=validate_data, #steps_per_epoch=steps_per_epoch, validation_steps=val_steps,
  File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_file3ywpkuyj.py", line 15, in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/data/kraken/john.fischer/projects/passive_sonar_models/scripts/../../passive_sonar_models/models/mc_dropout_CNN.py", line 42, in train_step
    self.f1.update_state(targets,predictions)
ValueError: in user code:

    File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/site-packages/keras/src/engine/training.py", line 1338, in train_function  *
        return step_function(self, iterator)
    File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/site-packages/keras/src/engine/training.py", line 1322, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/site-packages/keras/src/engine/training.py", line 1303, in run_step  **
        outputs = model.train_step(data)
    File "/data/kraken/john.fischer/projects/passive_sonar_models/scripts/../../passive_sonar_models/models/mc_dropout_CNN.py", line 42, in train_step
        self.f1.update_state(targets,predictions)
    File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/site-packages/keras/src/utils/metrics_utils.py", line 77, in decorated
        update_op = update_state_fn(*args, **kwargs)
    File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/site-packages/keras/src/metrics/base_metric.py", line 140, in update_state_fn
        return ag_update_state(*args, **kwargs)
    File "/home/john.fischer/.conda/envs/psonar2/lib/python3.9/site-packages/keras/src/metrics/f_score_metrics.py", line 176, in update_state  **
        y_true = tf.convert_to_tensor(y_true, dtype=self.dtype)

    ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: <tf.Tensor 'cond/Identity_4:0' shape=(None,) dtype=int32>
@meekus-fischer
Author

As an update, adjusting the update_state call to the following is a workaround:

self.f1.update_state(tf.cast(targets,dtype=predictions.dtype) ,predictions)

However, there is an additional issue when there is only a single output class, as in a binary classification problem using binary cross-entropy:

ValueError: FBetaScore expects 2D inputs with shape (batch_size, output_dim). Received input shapes: y_pred.shape=(None, 1) and y_true.shape=(None,).
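
For reference, this second error can be reproduced directly on the metric with a standalone sketch (toy values, not from the training code):

import tensorflow as tf

f1 = tf.keras.metrics.F1Score(threshold=0.5)
y_true = tf.constant([0.0, 1.0, 1.0])        # shape (3,), plain binary labels
y_pred = tf.constant([[0.2], [0.8], [0.6]])  # shape (3, 1), single sigmoid output

# Raises: ValueError: FBetaScore expects 2D inputs with shape (batch_size, output_dim). ...
f1.update_state(y_true, y_pred)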

@tilakrayal
Collaborator

@meekus-fischer,
I tried to execute the mentioned code. In the given code snippet you have defined the class and its methods but are not calling them anywhere. Could you please provide the complete code or a Colab gist? Kindly find the gist of it here.

ValueError: FBetaScore expects 2D inputs with shape (batch_size, output_dim). Received input shapes: y_pred.shape=(None, 1) and y_true.shape=(None,).

Regarding the above error: the input shapes provided are not compatible with the expected shape (batch_size, output_dim). Could you please provide the correct shapes and try again?
Thank you!

@meekus-fischer
Author

meekus-fischer commented Aug 24, 2023

@tilakrayal
I have edited the gist with a toy example, as I am not able to share my entire project / data. The input shape incompatibility arises in a typical binary classification case: the y_true tensor has shape (batch_size,) because it is just a tensor of 0s and 1s. The gist now encapsulates this issue. A workaround for both issues described above is contained in the gist and below:

# Workaround
self.f1.update_state(tf.expand_dims(tf.cast(targets,dtype=predictions.dtype),-1), predictions)

This type of workaround shouldn't be required for a binary classification problem. I should be able to use a model like this regardless of the type of classification problem, but if I used it for a multiclass or multilabel problem it would now break without additional logic to skip the expand_dims in those cases.
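
To make the workaround robust across problem types, something along these lines (a sketch of the extra caller-side logic, not part of Keras) would be needed inside train_step / test_step:

# Cast integer targets to the prediction dtype and only add a trailing axis
# when the targets are 1-D (plain binary labels), leaving multiclass /
# multilabel targets untouched.
def _prepare_targets_for_f1(targets, predictions):
    targets = tf.cast(targets, predictions.dtype)
    if targets.shape.rank == 1:
        targets = tf.expand_dims(targets, axis=-1)
    return targets

# Usage inside train_step / test_step:
# self.f1.update_state(_prepare_targets_for_f1(targets, predictions), predictions)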

@meekus-fischer
Author

As an update to the above: this workaround does not allow TensorBoard to be used.

model.fit(train_ds, epochs=50, validation_data=val_ds, validation_freq=1, callbacks=tf.keras.callbacks.TensorBoard())

results in the following error:

Epoch 1/50
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-16-039cc4c8c456> in <cell line: 1>()
----> 1 model.fit(train_ds, epochs=50, validation_data=val_ds, validation_freq=1, callbacks=tf.keras.callbacks.TensorBoard())

1 frames
/usr/local/lib/python3.10/dist-packages/tensorboard/plugins/scalar/summary_v2.py in scalar(name, data, step, description)
     86     )
     87     with summary_scope(name, "scalar_summary", values=[data, step]) as (tag, _):
---> 88         tf.debugging.assert_scalar(data)
     89         return tf.summary.write(
     90             tag=tag,

ValueError: Expected scalar shape, saw shape: (1,).
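
One way around the TensorBoard failure (a sketch, not an official fix) is to squeeze the (1,)-shaped F1 result down to a scalar before returning it from train_step / test_step:

return {"Accuracy": self.accuracy.result(), "Loss": self.loss_tracker.result(),
        "Precision": self.precision.result(), "Recall": self.recall.result(),
        # tf.squeeze turns the (1,)-shaped F1 result into the scalar that
        # TensorBoard's scalar summary expects.
        "F1": tf.squeeze(self.f1.result())}

Alternatively, constructing the metric with average="micro" or "macro" makes result() a scalar in the first place.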

@vanishinggrad

I am dealing with the same problem with tf.keras.metrics.F1Score for a binary classification problem.

@sachinprasadhs transferred this issue from keras-team/keras Sep 22, 2023
@giuliocn

giuliocn commented Dec 2, 2023

@tilakrayal I reproduced a very similar error on TF 2.14.0 (see below). Please have a look at the full notebook gist.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-13-3be8d2fa8bed> in <cell line: 1>()
----> 1 result = classifier_model.evaluate(test_ds, return_dict=True)

/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68             # To get the full stack trace, call:
     69             # `tf.debugging.disable_traceback_filtering()`
---> 70             raise e.with_traceback(filtered_tb) from None
     71         finally:
     72             del filtered_tb

/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py in tf__test_function(iterator)
     13                 try:
     14                     do_return = True
---> 15                     retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
     16                 except:
     17                     do_return = False

ValueError: in user code:

    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 2042, in test_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 2025, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 2013, in run_step  **
        outputs = model.test_step(data)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 1896, in test_step
        return self.compute_metrics(x, y, y_pred, sample_weight)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 1225, in compute_metrics
        self.compiled_metrics.update_state(y, y_pred, sample_weight)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/compile_utils.py", line 620, in update_state
        metric_obj.update_state(y_t, y_p, sample_weight=mask)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/metrics_utils.py", line 77, in decorated
        result = update_state_fn(*args, **kwargs)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/metrics/base_metric.py", line 140, in update_state_fn
        return ag_update_state(*args, **kwargs)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/metrics/f_score_metrics.py", line 176, in update_state  **
        y_true = tf.convert_to_tensor(y_true, dtype=self.dtype)

    ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: <tf.Tensor 'ExpandDims_1:0' shape=(None, 1) dtype=int32>

@rhuanbarros

I have the same problem with the training phase in the case of binary classification.

@see-saw-code

see-saw-code commented Jun 1, 2024

This is reproducible with a small change to the tutorial example at https://www.tensorflow.org/tutorials/keras/text_classification with TF 2.15.0

Simply adding an F1Score as a metric to compile raises the ValueError:

model.compile(optimizer=optimizer,
              loss=tf.losses.BinaryCrossentropy(from_logits=True),
              metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy'),
                       tf.keras.metrics.F1Score()])

This crashes with ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int64 during fit(). Re-compiling with it as a metric after fit raises the same error during evaluate().

A workaround to add F1Score to this particular tutorial is to change

train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"], 
                                  batch_size=-1, as_supervised=True)

train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)

to

train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"], 
                                  batch_size=-1, as_supervised=True)

train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
train_labels = train_labels.astype(np.float64)
test_labels = test_labels.astype(np.float64)
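
An alternative sketch (untested, and using a streaming tf.data pipeline rather than the tutorial's batch_size=-1 NumPy arrays) casts the labels on the fly instead:

import tensorflow as tf
import tensorflow_datasets as tfds

train_ds, test_ds = tfds.load(name="imdb_reviews", split=["train", "test"],
                              as_supervised=True)

# Cast the int64 labels to float32 so F1Score's
# tf.convert_to_tensor(y_true, dtype=float32) call succeeds.
cast_label = lambda text, label: (text, tf.cast(label, tf.float32))
train_ds = train_ds.map(cast_label).batch(512)  # batch size chosen arbitrarily here
test_ds = test_ds.map(cast_label).batch(512)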
