I am trying to run inference on a single image with a quantized model using the "detect.py" file. I quantized the original YOLOv4 model with QKeras and then converted it to SavedModel format. However, I am now getting the following error:
Traceback (most recent call last):
File "/mnt/beegfs/gap/laumecha/conda-qkeras/tensorflow-yolov4-tflite/detect.py", line 134, in <module>
app.run(main)
File "/mnt/beegfs/gap/laumecha/miniconda3/envs/qkeras_env/lib/python3.9/site-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/mnt/beegfs/gap/laumecha/miniconda3/envs/qkeras_env/lib/python3.9/site-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "/mnt/beegfs/gap/laumecha/conda-qkeras/tensorflow-yolov4-tflite/detect.py", line 113, in main
boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
File "/mnt/beegfs/gap/laumecha/miniconda3/envs/qkeras_env/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "/mnt/beegfs/gap/laumecha/miniconda3/envs/qkeras_env/lib/python3.9/site-packages/tensorflow/python/ops/image_ops_impl.py", line 5101, in combined_non_max_suppression
return gen_image_ops.combined_non_max_suppression(
File "/mnt/beegfs/gap/laumecha/miniconda3/envs/qkeras_env/lib/python3.9/site-packages/tensorflow/python/ops/gen_image_ops.py", line 358, in combined_non_max_suppression
_ops.raise_from_not_ok_status(e, name)
File "/mnt/beegfs/gap/laumecha/miniconda3/envs/qkeras_env/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 6897, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: scores has incompatible shape [Op:CombinedNonMaxSuppression]
Before converting to SavedModel format, I checked that the quantized and non-quantized models have the same layer shapes and outputs, so I assumed the outputs would match. However, I do not have much experience with SavedModel models.
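For reference, tf.image.combined_non_max_suppression expects boxes of shape [batch_size, num_boxes, q, 4] (where q is 1 or num_classes) and scores of shape [batch_size, num_boxes, num_classes]; the "scores has incompatible shape" error usually means one of these constraints is violated. Below is a minimal, framework-free sketch of that shape check (check_cnms_shapes is a hypothetical helper, and the YOLOv4 box count of 10647 for a 416x416 input is an illustrative value):

```python
# tf.image.combined_non_max_suppression shape requirements:
#   boxes:  [batch_size, num_boxes, q, 4]   with q == 1 or q == num_classes
#   scores: [batch_size, num_boxes, num_classes]
def check_cnms_shapes(boxes_shape, scores_shape):
    """Return True if the shapes are compatible with combined_non_max_suppression."""
    if len(boxes_shape) != 4 or boxes_shape[-1] != 4:
        return False
    if len(scores_shape) != 3:
        return False
    batch_b, num_b, q, _ = boxes_shape
    batch_s, num_s, num_classes = scores_shape
    return batch_b == batch_s and num_b == num_s and q in (1, num_classes)

# Hypothetical shapes for a single-image YOLOv4 run with 80 COCO classes:
print(check_cnms_shapes((1, 10647, 1, 4), (1, 10647, 80)))   # True: compatible
print(check_cnms_shapes((1, 10647, 1, 4), (1, 80, 10647)))   # False: scores transposed
```

Printing `pred_conf.shape` right before the combined_non_max_suppression call and comparing it against the boxes shape this way may reveal whether the SavedModel conversion reordered or reshaped the score tensor.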