Description
Please go to Stack Overflow for help and support:
http://stackoverflow.com/questions/tagged/tensorflow
Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy:
- It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead).
- The form below must be filled out.
Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
System information
- What is the top-level directory of the model you are using: SSD Mobilenet V3 (large and small), downloaded from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): I'm modifying the sample code given here to work with mobilenet_v3 instead of mobilenet_v1: https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (pip3)
- TensorFlow version (use command below): v1.12
- Bazel version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A
- Exact command to reproduce: N/A
Describe the problem
I can get other .tflite models, such as the default ssd_mobilenet_v1, to work for object detection, but the ssd_mobilenet_v3 model never produces a prediction confidence larger than about 10^-15; in other words, it never makes a prediction. Unless I'm missing some fundamental difference between the way ssd_mobilenet_v3 takes input data and the way ssd_mobilenet_v1 takes input data (which is possible, but I can't find any documentation indicating one), it seems like ssd_mobilenet_v3 simply doesn't work.
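To rule out an input-format mismatch, here is how I would inspect what input type and shape the converted v3 .tflite file actually declares (a minimal sketch, assuming Interpreter.getInputTensor() is available in the TF Lite version bundled with the app):

import java.util.Arrays;
import org.tensorflow.lite.Tensor;

// Ask the interpreter what the model itself declares for input 0.
Tensor input = tfLite.getInputTensor(0);
System.out.println("input type:  " + input.dataType());               // UINT8 (quantized) or FLOAT32
System.out.println("input shape: " + Arrays.toString(input.shape())); // expecting [1, 320, 320, 3]

If the model reports FLOAT32 here, the one-byte-per-channel buffer in the code below would be both the wrong size and the wrong scale.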
Source code / logs
Here is a summary of the code I'm using to feed input to the model and read its output:
// Note that the model takes a 320 x 320 image.
// Get the image data as packed ARGB integer values.
private static final int inputSize = 320;
private int[] intValues = new int[inputSize * inputSize];
private Bitmap croppedBitmap = Bitmap.createBitmap(inputSize, inputSize, Config.ARGB_8888);
// Read the pixels out of the already-cropped 320 x 320 bitmap.
croppedBitmap.getPixels(intValues, 0, croppedBitmap.getWidth(), 0, 0,
        croppedBitmap.getWidth(), croppedBitmap.getHeight());
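// How croppedBitmap gets filled isn't shown above; a minimal sketch, assuming
// fullFrameBitmap (hypothetical name) holds the camera frame, using
// android.graphics.Canvas and Rect rather than the Matrix transform in the demo app:
Canvas canvas = new Canvas(croppedBitmap);
canvas.drawBitmap(fullFrameBitmap, null, new Rect(0, 0, inputSize, inputSize), null);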
// Create a ByteBuffer as input for running ssd_mobilenet_v3.
// The quantized model takes 1 byte per channel: 320 * 320 * 3 bytes total.
private ByteBuffer imgData = ByteBuffer.allocateDirect(inputSize * inputSize * 3);
imgData.order(ByteOrder.nativeOrder());

// Fill the ByteBuffer row by row.
// Note that & 0xFF keeps just the low 8 bits, converting each packed
// ARGB pixel into its R, G, and B byte values.
imgData.rewind();
for (int i = 0; i < inputSize; ++i) {
    for (int j = 0; j < inputSize; ++j) {
        int pixelValue = intValues[i * inputSize + j];
        // Quantized model: raw uint8 channel values, no normalization.
        imgData.put((byte) ((pixelValue >> 16) & 0xFF)); // R
        imgData.put((byte) ((pixelValue >> 8) & 0xFF));  // G
        imgData.put((byte) (pixelValue & 0xFF));         // B
    }
}
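// If the v3 model turns out to declare FLOAT32 input (see the type check above;
// I have not confirmed this for the v3 model, it's an assumption to test), the
// buffer needs 4 bytes per channel plus the normalization the TF Lite demo app
// uses for float models (IMAGE_MEAN = 128.0f, IMAGE_STD = 128.0f):
ByteBuffer floatData = ByteBuffer.allocateDirect(inputSize * inputSize * 3 * 4);
floatData.order(ByteOrder.nativeOrder());
for (int i = 0; i < inputSize; ++i) {
    for (int j = 0; j < inputSize; ++j) {
        int pixelValue = intValues[i * inputSize + j];
        floatData.putFloat((((pixelValue >> 16) & 0xFF) - 128.0f) / 128.0f); // R
        floatData.putFloat((((pixelValue >> 8) & 0xFF) - 128.0f) / 128.0f);  // G
        floatData.putFloat(((pixelValue & 0xFF) - 128.0f) / 128.0f);         // B
    }
}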
// Set up output buffers, sized to match the converted model:
// [1, 2034, 91] class scores and [1, 2034, 1, 4] box encodings.
private float[][][] output0 = new float[1][2034][91];
private float[][][][] output1 = new float[1][2034][1][4];
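// Sanity check, assuming Interpreter.getOutputTensorCount()/getOutputTensor()
// are available in the bundled TF Lite version: confirm the declared output
// shapes match the buffers above (uses java.util.Arrays).
for (int i = 0; i < tfLite.getOutputTensorCount(); i++) {
    System.out.println("output " + i + ": "
            + Arrays.toString(tfLite.getOutputTensor(i).shape())
            + ", " + tfLite.getOutputTensor(i).dataType());
}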
// Build the input array and the output map, then run the model.
Object[] inputArray = {imgData};
Map<Integer, Object> outputMap = new HashMap<>();
outputMap.put(0, output0);
outputMap.put(1, output1);
tfLite.runForMultipleInputsOutputs(inputArray, outputMap);
// Examine the confidences: every value printed here is at most ~1e-15.
for (int i = 0; i < 2034; i++) {
    for (int j = 0; j < 91; j++) {
        System.out.println(output0[0][i][j]);
    }
}
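// A more digestible check than printing all 2034 * 91 values: track the single
// highest score. (If output0 holds raw logits rather than probabilities, which
// is only an assumption based on the [1, 2034, 91] shape, a sigmoid would be
// needed before reading these as confidences.)
float bestScore = 0f;
int bestAnchor = -1;
int bestClass = -1;
for (int i = 0; i < 2034; i++) {
    for (int j = 0; j < 91; j++) {
        if (output0[0][i][j] > bestScore) {
            bestScore = output0[0][i][j];
            bestAnchor = i;
            bestClass = j;
        }
    }
}
System.out.println("best score " + bestScore + " at anchor " + bestAnchor + ", class " + bestClass);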