
Unable to get batched input argument from exported Object Detection Saved Model for TensorFlow Batch Serving #10552

Open
ayusheebrane opened this issue Mar 23, 2022 · 0 comments
Labels: models:research, type:bug

Comments


ayusheebrane commented Mar 23, 2022

https://github.com/tensorflow/models/blob/master/research/object_detection/exporter_lib_v2.py

I am using the TensorFlow 2 Object Detection API to build a custom object detection model using the Faster R-CNN ResNet-152 640x640 pretrained model from the TF2 Model Zoo. I have successfully trained my model, and the checkpoints created are in the format below:
[screenshot: checkpoint files in the training directory]
I then used [exporter_main_v2.py](https://github.com/tensorflow/models/blob/master/research/object_detection/exporter_main_v2.py) to export the model, with --input_type set to 'image_tensor' the first time and 'float_image_tensor' the second time. After running the script, saved_model.pb was generated and the output folder looked like this:
[screenshots: exported SavedModel output folders for the two exports]
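For reproducibility, this is roughly the programmatic equivalent of my exporter_main_v2.py invocation (the paths here are placeholders for my setup):

```python
# Sketch of the export step, equivalent to running exporter_main_v2.py.
from google.protobuf import text_format
from object_detection import exporter_lib_v2
from object_detection.protos import pipeline_pb2

pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with open('pipeline.config') as f:           # pipeline config used for training
    text_format.Merge(f.read(), pipeline_config)

exporter_lib_v2.export_inference_graph(
    input_type='image_tensor',               # 'float_image_tensor' on the second run
    pipeline_config=pipeline_config,
    trained_checkpoint_dir='training/',      # directory holding the ckpt-* files
    output_directory='tfserve_savedmodel/')
```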

To get the metadata information, I ran the saved_model_cli command; this is how the output looks:
[screenshot: saved_model_cli output showing the serving_default SignatureDef]
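The same SignatureDef can be checked from Python as well (the path is a placeholder for my export directory):

```python
# Inspect the serving signature without saved_model_cli.
import tensorflow as tf

loaded = tf.saved_model.load('tfserve_savedmodel/saved_model')
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)
# For my export this shows a TensorSpec with shape (1, None, None, 3),
# i.e. the batch dimension is pinned to 1.
```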

I used the same saved_model.pb for inference via the TF Serving command:
tensorflow_model_server --port=8500 --model_name=detection_model --model_base_path=path_to_saved_model(/tfserve_savedmodel/)
I was able to get results for a single image but cannot perform batch inference.
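For reference, my gRPC client looks roughly like this (host/port, model name, and the input key 'input_tensor' match my setup; treat them as assumptions):

```python
# Minimal gRPC Predict client for a single image.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

image = np.zeros((1, 640, 640, 3), dtype=np.uint8)  # stand-in for a real image

request = predict_pb2.PredictRequest()
request.model_spec.name = 'detection_model'
request.model_spec.signature_name = 'serving_default'
request.inputs['input_tensor'].CopyFrom(tf.make_tensor_proto(image))

result = stub.Predict(request, timeout=30.0)  # works for a single image
```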

Points to consider:

  1. I am using a gRPC client to connect to the server, and I updated the client code for batch inference. I was passing a NumPy array of shape (1, 640, 640, 3) for a single image, and I tried creating an array of shape (10, 640, 640, 3) for a batch of 10 images and passing it to the server (see the sketch after this list).
  2. I thought the input shape from the SavedModel SignatureDef, (1, -1, -1, 3), was the reason I wasn't able to run multiple images. So I updated the code in exporter_lib_v2.py, set the shape to shape=[None, None, None, 3], and exported the model again. But even then the input shape came out as (1, -1, -1, 3) (the edit is also sketched after this list).
  3. To dig deeper, I checked the metadata of the pretrained model (Faster R-CNN ResNet152 V1 640x640) from the model zoo (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) that I used for training. It has the same SignatureDef input shape, (1, -1, -1, 3). I suspect this is why my final SavedModel has a batch size of 1 even after updating exporter_lib_v2.py, and why I cannot perform batch inference.
  4. I also checked the TensorFlow Serving GitHub repo (https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md#batching-configuration) and tried passing a separate batch.config while running the server, but I still could not perform batch inference (an illustrative batch.config is shown at the end of this issue body). I used the command below: tensorflow_model_server --port=8500 --model_config_file=/data1/root/lab/prime_team_projects/scripts/models.config.a --enable_batching true --batching_parameters_file=/data1/root/lab/prime_team_projects/scripts/batch.config
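To make points 1 and 2 concrete: the batched request is just the client above with a stacked input:

```python
# Point 1: same client as above, but with 10 images stacked into one request.
batch = np.zeros((10, 640, 640, 3), dtype=np.uint8)  # stand-in for 10 real images
request.inputs['input_tensor'].CopyFrom(tf.make_tensor_proto(batch))
result = stub.Predict(request, timeout=30.0)  # fails: signature expects batch size 1
```

and the edit I made in exporter_lib_v2.py was, roughly, to relax the batch dimension of the input signature (quoting from memory):

```python
# Point 2: the original spec pins the batch dimension to 1; I changed it to None
# and re-exported, but saved_model_cli still reports (1, -1, -1, 3).
tf.TensorSpec(shape=[None, None, None, 3], dtype=tf.uint8)  # was shape=[1, None, None, 3]
```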

Please help me with this issue.
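For completeness, the batch.config I passed in point 4 follows the batching-parameters format from the TF Serving serving_config guide; the values below are illustrative, not tuned:

```
max_batch_size { value: 32 }
batch_timeout_micros { value: 5000 }
max_enqueued_batches { value: 100 }
num_batch_threads { value: 4 }
```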

System information

ayusheebrane added the models:research and type:bug labels on Mar 23, 2022
ayusheebrane changed the title from "exporting" to "Unable to get batched input argument from exported Object Detection Saved Model for TensorFlow Batch Serving" on Mar 23, 2022