Checklist
- [x] Checked the issue tracker for similar issues to ensure this is not a duplicate.
- [x] Provided a clear description of your suggestion.
- [x] Included any relevant context or examples.
Issue or Suggestion Description
We hit the following error while running model inference:
input->type == kTfLiteInt8 || Int16 || Uint8 was not true. Node DEQUANTIZE (number 0f) failed to prepare with status 1
We are using IDF v5.2 on an ESP32-S3-WROOM board with MediaPipe's hand_landmark_lite.tflite model, whose structure we inspected in Netron.
Following the Netron graph, I have already added every op the model needs to the resolver.
The input to this DEQUANTIZE op is not something I can control when I feed the model its input by hand, so why does this error occur? Could there be a difference between esp-tflite-micro's implementation of this op and the identically named op in the model?
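For reference, judging from the message text, the check that fails lives in the micro DEQUANTIZE kernel's prepare step and looks roughly like this (paraphrased from the error itself; the exact source may differ):

// Paraphrase of the type check behind the error above, reconstructed from
// the logged condition; the kernel only accepts quantized integer inputs.
TF_LITE_ENSURE(context, input->type == kTfLiteInt8 ||
                        input->type == kTfLiteInt16 ||
                        input->type == kTfLiteUInt8);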
The full code is a bit long: we are doing real-time gesture recognition, so there is a lot of camera code as well. Below is just the model-related part. hand_landmark_lite.tflite definitely runs fine on a PC.
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/c/common.h"
#include "model.h"
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;
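// NOTE: 20 KB is far too small for this model: the 224x224x3 float32 input
// alone needs ~588 KB, so the arena would likely have to live in PSRAM.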
constexpr int kTensorArenaSize = 20 * 1024;
uint8_t tensor_arena[kTensorArenaSize];
void model_setup();
void model_handle(uvc_frame_t* frame);
void model_setup() {
model = tflite::GetModel(g_model);
if (model->version() != TFLITE_SCHEMA_VERSION) {
MicroPrintf("Model provided is schema version %d not equal to supported "
"version %d.", model->version(), TFLITE_SCHEMA_VERSION);
return;
}
static tflite::MicroMutableOpResolver<8> resolver;
if (resolver.AddAdd() != kTfLiteOk)
return;
if (resolver.AddMean() != kTfLiteOk)
return;
if (resolver.AddLogistic() != kTfLiteOk)
return;
if (resolver.AddConv2D() != kTfLiteOk)
return;
if (resolver.AddFullyConnected() != kTfLiteOk)
return;
if (resolver.AddDequantize() != kTfLiteOk)
return;
if (resolver.AddDepthwiseConv2D() != kTfLiteOk)
return;
if (resolver.AddMaxPool2D() != kTfLiteOk)
return;
static tflite::MicroInterpreter static_interpreter(
model, resolver, tensor_arena, kTensorArenaSize);
interpreter = &static_interpreter;
// Allocate memory from the tensor_arena for the model's tensors.
TfLiteStatus allocate_status = interpreter->AllocateTensors();
if (allocate_status != kTfLiteOk) {
MicroPrintf("AllocateTensors() failed");
return;
}
input = interpreter->input(0);
output = interpreter->output(0);
}
float* normalize(const uint8_t* data, int size) {
  // Convert raw camera bytes to floats in [0, 1]. The caller owns (and
  // must delete[]) the returned buffer.
  float* ret = new float[size];
  for (int i = 0; i < size; i++)
    ret[i] = data[i] / 255.0f;
  return ret;
}
void model_handle(uvc_frame_t* frame) {
  // Do not overwrite input->type, dims, or bytes here: AllocateTensors()
  // already configured them from the model, and swapping data.f to a heap
  // pointer both leaks memory and bypasses the arena. Copy into the
  // tensor's own buffer instead (this assumes the frame is already
  // 224x224 RGB, matching the model input).
  constexpr int kInputCount = 224 * 224 * 3;
  float* pixels = normalize((const uint8_t*)frame->data, kInputCount);
  memcpy(input->data.f, pixels, kInputCount * sizeof(float));
  delete[] pixels;
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    MicroPrintf("Invoke failed");
    return;
  }
  // 63 outputs = 21 landmarks x (x, y, z). The buffer belongs to the
  // interpreter, so read it in place; do not new/delete it.
  const float* landmark = output->data.f;
  for (int i = 0; i < 63; i += 3)
    MicroPrintf("x: %f, y: %f, z: %f", landmark[i], landmark[i + 1], landmark[i + 2]);
}
Thanks!
github-actions bot changed the title to "Node DEQUANTIZE (number 0f) failed to prepare with status 1 (TFMIC-40)" on Oct 14, 2024
@Criminal-9527 tflite-micro models are expected to use quantised weights (inputs/outputs and filter values); only the scales are supposed to be float.
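If it helps to confirm what the flatbuffer actually contains, here is a minimal sketch (a hypothetical helper, using only APIs already in your snippet) that dumps the I/O tensor types and quantization params after a successful AllocateTensors(). If the weights are stored as float16, which is likely for MediaPipe's *_lite releases, the micro DEQUANTIZE kernel cannot prepare them; re-exporting the model with full int8 post-training quantization is the usual fix.

// Sketch: inspect what the model really expects. Hypothetical helper;
// adapt the names to your project. Call after AllocateTensors() succeeds.
void dump_io_tensors(tflite::MicroInterpreter* itp) {
  TfLiteTensor* in = itp->input(0);
  TfLiteTensor* out = itp->output(0);
  MicroPrintf("input:  type=%d bytes=%u scale=%f zero_point=%d",
              in->type, (unsigned)in->bytes,
              (double)in->params.scale, in->params.zero_point);
  MicroPrintf("output: type=%d bytes=%u", out->type, (unsigned)out->bytes);
}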