Unable to get batched input argument from exported Object Detection Saved Model for TensorFlow Batch Serving
I am using the TensorFlow 2 Object Detection API to build a custom object detection model with the Faster R-CNN ResNet-152 640x640 pretrained model from the TF2 Model Zoo. I have successfully trained the model, and the checkpoints created are in the format below -
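(The exact file listing from my run isn't reproduced here; a TF2 Object Detection API training run typically produces checkpoint files along these lines, where N is the checkpoint number.)

```
model_dir/
  checkpoint                      <- checkpoint state file pointing at the latest ckpt-N
  ckpt-N.data-00000-of-00001      <- variable values
  ckpt-N.index                    <- variable index
```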
I then used [exporter_main_v2.py](https://github.com/tensorflow/models/blob/master/research/object_detection/exporter_main_v2.py) to export the model, with --input_type set to 'image_tensor' the first time and 'float_image_tensor' the second time. After running the script, saved_model.pb was generated and the output folder looked like this -
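For completeness, the export command was roughly the following (paths are placeholders, not my actual ones), and exporter_main_v2.py typically writes the directory layout shown underneath it:

```bash
python exporter_main_v2.py \
  --input_type image_tensor \
  --pipeline_config_path /path/to/pipeline.config \
  --trained_checkpoint_dir /path/to/model_dir/ \
  --output_directory /path/to/exported_model/
```

```
exported_model/
  checkpoint/
  saved_model/
    assets/
    variables/
    saved_model.pb
  pipeline.config
```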
To get the metadata information, I ran the saved_model_cli command, and this is how it looks -
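Specifically, I inspected the serving_default signature (output abridged; the input name and dtype below are the usual ones for an OD API image_tensor export, DT_FLOAT in the float_image_tensor case):

```bash
saved_model_cli show \
  --dir /path/to/exported_model/saved_model \
  --tag_set serve \
  --signature_def serving_default
```

```
The given SavedModel SignatureDef contains the following input(s):
  inputs['input_tensor'] tensor_info:
      dtype: DT_UINT8
      shape: (1, -1, -1, 3)
      name: serving_default_input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['detection_boxes'] tensor_info: ...
  outputs['detection_scores'] tensor_info: ...
  outputs['detection_classes'] tensor_info: ...
  ...
Method name is: tensorflow/serving/predict
```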
I then used the same saved_model.pb for inference with TF Serving, started with this command -
tensorflow_model_server --port=8500 --model_name=detection_model --model_base_path=/tfserve_savedmodel/
I was able to get results for a single image, but I cannot perform batch inference.
Points to consider-
I am using a gRPC client to connect to the server, and I updated the client code for batch inference. For a single image I pass a NumPy array of shape (1, 640, 640, 3); for a batch of 10 images I tried creating an array of shape (10, 640, 640, 3) and sending that to the server (a sketch of the client is below, after these points).
I suspected that the input in the SavedModel SignatureDef having shape (1, -1, -1, 3) is the reason I cannot send multiple images, so I updated the code in [exporter_lib_v2.py](https://github.com/tensorflow/models/blob/master/research/object_detection/exporter_lib_v2.py), set the shape to shape=[None, None, None, 3], and re-exported the model. But even then the exported signature still reports the input shape as (1, -1, -1, 3). (The kind of change I made is sketched below.)
I also checked the batching configuration section of the TensorFlow Serving repo (https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md#batching-configuration) and tried passing a separate batch.config while running the server, but I still could not perform batch inference. I used the command below; my config files followed the documented format (see the sketch after these points) - tensorflow_model_server --port=8500 --model_config_file=/data1/root/lab/prime_team_projects/scripts/models.config.a --enable_batching=true --batching_parameters_file=/data1/root/lab/prime_team_projects/scripts/batch.config
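For reference, the batched gRPC client is essentially the standard TF Serving Predict client. A minimal sketch of what I'm doing (random data stands in for real images; the model name matches --model_name above, and the 'input_tensor' key is the one reported by saved_model_cli for my export):

```python
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Batch of 10 uint8 images of shape (640, 640, 3).
# Random data stands in for real preprocessed images.
batch = np.random.randint(0, 255, size=(10, 640, 640, 3), dtype=np.uint8)

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "detection_model"
request.model_spec.signature_name = "serving_default"
# 'input_tensor' is the input key shown by saved_model_cli for the export.
request.inputs["input_tensor"].CopyFrom(
    tf.make_tensor_proto(batch, shape=list(batch.shape)))

# This works with a (1, 640, 640, 3) array but fails for the batched array,
# since the exported signature pins the batch dimension to 1.
response = stub.Predict(request, timeout=30.0)

# 'detection_boxes' is one of the OD API detection outputs.
boxes = tf.make_ndarray(response.outputs["detection_boxes"])
print(boxes.shape)
```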
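This is not the exact diff, but the edit I made in exporter_lib_v2.py was of this form (assuming the serving input signature there is built from a tf.TensorSpec; only the leading batch dimension was changed):

```python
# Before: batch dimension pinned to 1 in the serving input signature.
input_signature = [tf.TensorSpec(shape=[1, None, None, 3], dtype=tf.uint8)]

# After: what I changed it to, hoping for a batched signature.
input_signature = [tf.TensorSpec(shape=[None, None, None, 3], dtype=tf.uint8)]
```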
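The model_config_file and batching_parameters_file follow the text-proto format from serving_config.md; roughly the following (values are illustrative, not my exact ones):

```
# models.config.a
model_config_list {
  config {
    name: 'detection_model'
    base_path: '/tfserve_savedmodel/'
    model_platform: 'tensorflow'
  }
}

# batch.config
max_batch_size { value: 16 }
batch_timeout_micros { value: 5000 }
max_enqueued_batches { value: 100 }
num_batch_threads { value: 4 }
```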
Please help me with this issue.