Hello folks,
I would like to start a topic addressing questions that some of you have already asked. So far, in the OD API, only SSD models can be converted into a TensorFlow Lite model (according to the authors), and these models must be fully quantised. The TF1 Model Zoo already offers quantised models for this; the TF2 Model Zoo does not yet. People have tried to train other models with quantisation via the "graph_rewriter" in TF1; some failed, others succeeded. For TF2 there is no quantised model in the zoo, and so far no one has managed to quantise one there.
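For reference, quantisation-aware training in the TF1 OD API is enabled by appending a graph_rewriter block to the model's pipeline.config. A minimal sketch (the delay value is an assumption and should be tuned to your total number of training steps):

```
graph_rewriter {
  quantization {
    delay: 48000        # start quantising after this many steps (assumed value)
    weight_bits: 8
    activation_bits: 8
  }
}
```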
Now I came across this repo and a few others and see that other models can be quantised and converted into a TensorFlow Lite model. As I understand it, we should be able to convert any trained model into a TensorFlow Lite model using the TensorFlow framework itself (note: not the OD API).
The questions are:
Can any model be quantised in an IDE using the TensorFlow framework and converted into a TensorFlow Lite model? (see the sketch after this list)
Do I only need the .pb file, or the weights of the model, to do this?
What is the pipeline for this repo: train the model in an IDE -> save the weights or .pb file -> convert to a TensorFlow model (SavedModel or HDF5?) -> convert to a TFLite model?
Can this approach, with a model converted to TFLite, then be used in the OD API?
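To make that pipeline concrete, here is a minimal sketch of full-integer post-training quantisation with the plain TF2 framework (not the OD API). The SavedModel path, input shape, and representative data below are assumptions for illustration; tf.lite.TFLiteConverter also accepts an in-memory Keras model via from_keras_model, and in TF1 a frozen .pb graph can be loaded with tf.compat.v1.lite.TFLiteConverter.from_frozen_graph:

```python
import numpy as np
import tensorflow as tf

# Assumed: a trained model exported as a SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("exported/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Assumed input shape; replace with ~100 real preprocessed samples
    # so the converter can calibrate activation ranges.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset

# Force full-integer quantisation of weights and activations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Whether the resulting .tflite file actually works then depends on every op in the graph having an int8 TFLite kernel, which is presumably why SSD is currently the only architecture the OD API authors support for conversion.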
I hope for clarifying answers so that beginners can understand what is possible :)