Node DEQUANTIZE (number 0f) failed to prepare with status 1 (TFMIC-40) #97

Open · 3 tasks done
Criminal-9527 opened this issue Oct 14, 2024 · 1 comment

@Criminal-9527
Checklist

  • Checked the issue tracker for similar issues to ensure this is not a duplicate.
  • Provided a clear description of your suggestion.
  • Included any relevant context or examples.

Issue or Suggestion Description

While running inference with our model, we get the following error:
input->type == kTfLiteInt8 || Int16 || Uint8 was not true. Node DEQUANTIZE (number 0f) failed to prepare with status 1.
We are using IDF v5.2 with an ESP32-S3-WROOM board and MediaPipe's hand_landmark_lite.tflite model. Its structure as shown in Netron:
[Netron screenshots: model inputs, outputs, and metadata]
Following Netron, I have already added every op the model needs to the resolver:
[Screenshot: op list]
The input to this DEQUANTIZE op is not something I can control when I set up the model input by hand, so why does it cause this error? Is there some difference between this op in esp-tflite-micro and the op of the same name in the model?
The complete code is rather long, since we are doing real-time gesture recognition and there is a lot of camera code; below is only the model-related part. The hand_landmark_lite.tflite model definitely runs on a PC.

#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/c/common.h"

#include "model.h"


const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 20 * 1024;
uint8_t tensor_arena[kTensorArenaSize];

void model_setup();
void model_handle(uvc_frame_t* frame);


void model_setup() {
  model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf("Model provided is schema version %d not equal to supported "
                "version %d.", model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  static tflite::MicroMutableOpResolver<8> resolver;
  if (resolver.AddAdd() != kTfLiteOk) 
    return;
  if (resolver.AddMean() != kTfLiteOk)
    return;
  if (resolver.AddLogistic() != kTfLiteOk) 
    return;
  if (resolver.AddConv2D() != kTfLiteOk) 
    return;
  if (resolver.AddFullyConnected() != kTfLiteOk) 
    return;
  if (resolver.AddDequantize() != kTfLiteOk) 
    return;
  if (resolver.AddDepthwiseConv2D() != kTfLiteOk) 
    return;
  if (resolver.AddMaxPool2D() != kTfLiteOk) 
    return;

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    MicroPrintf("AllocateTensors() failed");
    return;
  }

  input = interpreter->input(0);
  output = interpreter->output(0);
}

float* normalize(uint8_t* data, int size) {
  // Scale 8-bit pixel values into [0, 1]. The caller owns the returned
  // heap buffer and must free it with delete[].
  float* ret = new float[size];
  for (int i = 0; i < size; i++)
    ret[i] = (float)data[i] / 255.0f;
  return ret;
}

void model_handle(uvc_frame_t* frame) {
  // Repoint the input tensor at a float buffer of normalized pixels and
  // override the tensor metadata that AllocateTensors() set up.
  input->type = kTfLiteFloat32;
  float* pixels = normalize((uint8_t*)frame->data, frame->data_bytes);
  (input->data).f = pixels;

  input->dims->data[0] = 1;
  input->dims->data[1] = 224;
  input->dims->data[2] = 224;
  input->dims->data[3] = 3;

  (input->dims)->size = 4;
  input->bytes = 224 * 224 * 3 * sizeof(float);

  TfLiteStatus invoke_status = interpreter->Invoke();
  delete[] pixels;
  if (invoke_status != kTfLiteOk) {
    MicroPrintf("Invoke failed");
    return;
  }

  // The output is 63 floats: 21 landmarks with x/y/z each. It points into
  // the tensor arena, so it must not be freed here.
  float* landmark = (output->data).f;
  for (int i = 0; i < 63; i += 3)
    MicroPrintf("x: %f, y: %f, z: %f", landmark[i], landmark[i + 1], landmark[i + 2]);
}

Thanks!

@vikramdattu (Collaborator)

@Criminal-9527 tflite-micro expects models to use quantised tensors (inputs/outputs and filter values); only the scales are supposed to be float.
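For context, "only the scales are float" refers to the affine mapping TFLite uses for quantised tensors. A minimal sketch of what DEQUANTIZE computes; the scale and zero point values here are hypothetical, just to illustrate the int8 representation:

```python
import numpy as np

# Hypothetical quantisation parameters as stored with a .tflite tensor.
scale = 0.007843       # float32 scale
zero_point = -1        # int8 zero point

q = np.array([-128, -1, 127], dtype=np.int8)  # quantised values

# DEQUANTIZE computes: real = scale * (q - zero_point)
real = scale * (q.astype(np.float32) - zero_point)
print(real)  # approx. [-0.996, 0.0, 1.004]
```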

To make sure your model works with the tflite-micro framework, you need to quantise it to int8. This is a simple process and can be done with a small piece of code after you train the model.
An example can be found here: https://ai.google.dev/edge/litert/models/post_training_integer_quant#convert_using_integer-only_quantization
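Following the linked guide, a minimal post-training integer-quantisation sketch. It assumes you have the original trained model as a SavedModel (the path and the representative dataset below are illustrative, not from this thread):

```python
import tensorflow as tf

def representative_dataset():
    # Yield calibration samples shaped like the model input, (1, 224, 224, 3).
    # Real preprocessed camera frames give far better calibration than noise.
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3), 0.0, 1.0)]

converter = tf.lite.TFLiteConverter.from_saved_model("hand_landmark_saved_model")  # illustrative path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force integer-only quantisation so every op, including inputs and outputs, is int8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quant_model = converter.convert()
with open("hand_landmark_int8.tflite", "wb") as f:
    f.write(tflite_quant_model)
```

Note that the converter takes the original model as input, not a released .tflite file, so a downloaded MediaPipe .tflite cannot simply be re-quantised this way.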
