How to properly export/serve inference models from keras-cv #2203
Unanswered · luisliborio asked this question in Q&A
Multiple guides like these (guide 1, guide 2) detail how to manage your custom data and load models, whether built from scratch or pre-trained. They then explain clearly how to use the .predict() method, and that all works fine. However, I can't find how I should export such a model so I can load and serve it later. The guides never load a saved model and call .predict() on it; they just run prediction directly on the freshly trained model, which behaves very differently from a loaded one.
I was able to export the model to the TF Serving format with the model.export() method and later call the .serve() method, but its output does not seem to be postprocessed the way the original model's output is, and I can't find the postprocessing function. For example, the YOLOv8 .serve() output shape is (batch_size, 8400, 64), and I have no idea how to turn that into bounding boxes.
Please, does anyone know of a guide showing how to properly serve keras-cv models? Thank you!

Replies: 1 comment

@luisliborio have you tried https://keras.io/api/models/model_saving_apis/export/ ?