Deploying YOLOv8n Model #2907
Hello @chunzhe-intel, I did not dig much into the post-processing part because it may vary per use case. Instead, I tried running the yolov8 notebook you mentioned: https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/yolov8-optimization/yolov8-object-detection.ipynb. With slight modifications (marked in green in the attachments) I was able to replace OpenVINO inference with OVMS inference over the gRPC endpoint. OVMS serves the model with the output named manually, re-exported as described in your original post. This is how I started OVMS:
You can see the input/output metadata during the model loading phase:
As you can see, the same boxes appear for OpenVINO and OVMS (the inference results are equal as well). Maybe this notebook could guide you to resolving the problem in the pre/post-processing code?
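For readers without the attachments, the gist of the modification can be sketched as follows. This is a sketch rather than the exact notebook diff: the endpoint address, the model name "yolov8n", and the tensor name "images" are assumptions that should be matched against the metadata OVMS logs at load time.

```python
import numpy as np

def to_model_input(image_640: np.ndarray) -> np.ndarray:
    """HWC uint8 (640x640x3) -> NCHW float32 in [0, 1], as YOLOv8 expects."""
    blob = image_640.astype(np.float32) / 255.0
    return np.expand_dims(blob.transpose(2, 0, 1), 0)

def infer_ovms(input_tensor: np.ndarray,
               address: str = "localhost:9000",
               model_name: str = "yolov8n") -> np.ndarray:
    """Drop-in replacement for the notebook's compiled_model(...) call,
    sending the same tensor to OVMS over gRPC instead."""
    from ovmsclient import make_grpc_client  # pip install ovmsclient
    client = make_grpc_client(address)
    # The key "images" must match the input name OVMS reports at load time.
    return client.predict(inputs={"images": input_tensor}, model_name=model_name)
```

Since the model has a single output, ovmsclient's `predict` should return the bare ndarray directly, so the notebook's post-processing code can stay unchanged.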
@dkalinowski Thanks for trying this out. I was able to replicate your setup on my side, and I would like to dive a little deeper. My setup requires the KServe API, and in the demo here the normalized numpy array is passed from the client to the OVMS server for processing. Instead of the numpy array, I would like to pass JPEG/PNG, but that would mean the preprocessing (normalization) would have to happen on the YOLO-model/OVMS side. The ResNet example in OVMS suggests that OVMS can accept JPEG/PNG, since it converts the image format to a tensor. I did try to pass the image as PNG/JPEG but was not able to get the expected results. Are there any settings or configuration needed for this to behave properly?
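For reference, a minimal sketch of the binary-input path over the KServe REST API, using the tritonclient package as in the OVMS binary-input demos. Everything here (endpoint, model name, input/output names "images"/"output0") is an assumption to adapt. Two things worth checking, as I understand the OVMS binary-inputs docs: the server only decodes encoded images into inputs with an NHWC-compatible layout, and the decode yields raw 0-255 pixel values, so the /255 normalization YOLOv8 expects would have to be embedded into the model itself (e.g. with OpenVINO's PrePostProcessor) rather than happening automatically.

```python
import numpy as np

def as_bytes_tensor(raw: bytes) -> np.ndarray:
    """Wrap an encoded JPEG/PNG as the 1-element BYTES tensor KServe expects."""
    return np.array([raw], dtype=np.object_)

def infer_jpeg(path: str,
               url: str = "localhost:8000",
               model_name: str = "yolov8n",
               input_name: str = "images",
               output_name: str = "output0") -> np.ndarray:
    """Send an encoded image file to OVMS over the KServe REST API."""
    import tritonclient.http as httpclient  # pip install tritonclient[http]
    with open(path, "rb") as f:
        tensor = as_bytes_tensor(f.read())
    client = httpclient.InferenceServerClient(url=url)
    # BYTES datatype with shape [1]: one encoded image, decoded server-side.
    inp = httpclient.InferInput(input_name, [1], "BYTES")
    inp.set_data_from_numpy(tensor, binary_data=True)
    return client.infer(model_name, [inp]).as_numpy(output_name)
```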
Describe the bug
I'm trying to deploy the YOLOv8n model using OVMS. The YOLO model was obtained from the example in https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/yolov8-optimization/yolov8-object-detection.ipynb. I added a little post-processing to the output tensor to obtain the classification result. Running the model through OpenVINO in the Jupyter notebook produces the correct result, as shown below:
Sample post-processing code:
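For readers without the attachment, decoding the standard YOLOv8 output layout of (1, 84, 8400), i.e. 4 box coordinates plus 80 COCO class scores per candidate, looks roughly like the sketch below. This is illustrative only (no NMS) and not necessarily identical to the attached code.

```python
import numpy as np

def postprocess(output: np.ndarray, conf_thres: float = 0.25):
    """Turn raw YOLOv8 output (1, 84, 8400) into (class_id, score, cx, cy, w, h).

    Illustrative sketch only: keeps every candidate above the confidence
    threshold and skips non-maximum suppression.
    """
    preds = output[0].T                      # (8400, 84)
    boxes, scores = preds[:, :4], preds[:, 4:]
    class_ids = scores.argmax(axis=1)        # best class per candidate
    confidences = scores.max(axis=1)
    keep = confidences > conf_thres
    return [(int(c), float(s), *map(float, b))
            for c, s, b in zip(class_ids[keep], confidences[keep], boxes[keep])]
```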
Result:
I have applied the same post-processing and model in OVMS but obtained a different result:
We should be getting, at a minimum, class id 16 (dog). From OVMS, we aren't seeing a dog at all.
Both input images are preprocessed to the model's required input size of 640x640 with letterbox fill.
This leads me to think that some configuration or preprocessing step is missing.
Does running the YOLO model on OVMS require any additional configuration compared to OV (other than the output tensor being named)?
Is there a similar example that I can reference (YOLO if possible)?
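For completeness, the 640x640 letterbox preprocessing mentioned above can be sketched as follows. This uses a NumPy-only nearest-neighbour resize to stay dependency-free (the notebook itself uses cv2 with linear interpolation), and the pad value 114 follows the Ultralytics default.

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Resize with padding to size x size, preserving aspect ratio."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour index maps for the scaled image
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # centre the resized image on a pad_value canvas
    canvas = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```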
Snippet of the model's XML: