
How to convert to .tflite? #15

Open
xieshenru opened this issue Jun 27, 2020 · 5 comments

Comments

@xieshenru

When using the following code:

import tensorflow as tf

# Convert the loaded Keras model with float16 quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

# Write the converted model to disk
open("./tflite_models/face_retinaface_mobilenetv2.tflite", "wb").write(tflite_model)
print('saved tflite model!')

the following error appears: "Tensor 'input_image' has invalid shape '[None, None, None, 3]'."
How can I convert the model to .tflite? Looking forward to your reply.

@btjhjeon

btjhjeon commented Aug 11, 2020

I converted the model to a TFLite file successfully with TF 2.3, but you cannot convert it with versions below TF 2.3: as of now, only TF 2.3 supports the ResizeNearestNeighbor operation used in the FPN layers.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Use the new MLIR-based converter, which handles ResizeNearestNeighbor
converter.experimental_new_converter = True
tflite_model = converter.convert()

@DavorJordacevic

(quoting @btjhjeon's comment and conversion snippet above)

For me, it was not so straightforward. However, I was able to convert using custom_opdefs. But it is not easy to do post-training integer quantization. Have you tried this? I am talking about the MobileNet backbone.
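
A hedged sketch of the commonly documented escape hatches for ops the converter rejects; this is not necessarily the exact custom_opdefs route described above, and `model` is assumed to be the loaded Keras RetinaFace model:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Let unsupported ops fall back to TensorFlow kernels (Flex delegate)...
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
# ...or allow them to be emitted as custom ops you must register at runtime
converter.allow_custom_ops = True
tflite_model = converter.convert()

Note that the SELECT_TF_OPS route requires linking the Flex delegate into the TFLite runtime, which increases binary size.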

@tucachmo2202

The RetinaFace model uses a dynamic input shape; you should fix the input shape to a static one before converting to TF Lite (see the sketch below).
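
As a sketch of what fixing the input shape can look like (the 640x640 resolution is an assumption; `model` is assumed to be the RetinaFace Keras model built for inference with your own code):

import tensorflow as tf

# Wrap the model in a tf.function with a fixed [1, 640, 640, 3] signature so the
# converter no longer sees the dynamic [None, None, None, 3] input.
run_model = tf.function(lambda x: model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 640, 640, 3], tf.float32, name="input_image"))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()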

@suraj-maniyar

@DavorJordacevic Could you please share your method for converting RetinaFace to tflite using custom_opdefs?

I get this error: [screenshot of the conversion error attached in the original comment]

Thanks.

@fmobrj

fmobrj commented Nov 26, 2023

(quoting @btjhjeon's comment and @DavorJordacevic's reply above)

Hi @DavorJordacevic. Have you managed to handle integer quantization of RetinaFace for TFLite? The TFLite version gives the same results as the original PyTorch model, but when I convert to a quantized integer (8-bit) model the results are messy, even if I use the quantization parameters to transform the output. Were you able to solve this?
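
For reference, a minimal sketch of full-integer post-training quantization; the representative_data generator and the 640x640 input size are assumptions and should be replaced with real images preprocessed exactly as at inference time:

import numpy as np
import tensorflow as tf

def representative_data():
    # Yield ~100 batches of real, preprocessed face images; random data here
    # only illustrates the expected shape and dtype.
    for _ in range(100):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # or tf.uint8, depending on deployment
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

Messy outputs after int8 quantization often point to a representative dataset that does not match the inference preprocessing, or to post-processing (box decoding, NMS) that is sensitive to the quantized output scales.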
