Automated sync from github.com/tensorflow/tensorflow (#2656)
TFLM-bot authored Aug 6, 2024
1 parent 39f9f19 commit 8f9a923
Showing 1 changed file with 2 additions and 2 deletions.
tensorflow/lite/kernels/internal/portable_tensor_utils.h
@@ -317,7 +317,7 @@ void ApplyLayerNormFloat(const int16_t* input,
void ApplySigmoid(const int16_t* input, int32_t n_batch, int32_t n_input,
int16_t* output);

-// Same as above but the internal calcualtion is float.
+// Same as above but the internal calculation is float.
void ApplySigmoidFloat(const int16_t* input, int32_t n_batch, int32_t n_input,
int16_t* output);

@@ -333,7 +333,7 @@ void ApplySigmoidFloat(const int16_t* input, int32_t n_batch, int32_t n_input,
void ApplyTanh(int32_t intger_bits, const int16_t* input, int32_t n_batch,
int32_t n_input, int16_t* output);

-// Apply Tanh to a quantized vector. Tbe internal calculation is in float.
+// Apply Tanh to a quantized vector. The internal calculation is in float.
// - Input has 2^(integer_bits) as scale.
// - Output has Q0.15 as scale.
void ApplyTanhFloat(const int16_t* input, int32_t n_batch, int32_t n_input,
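For context on the comments being fixed here, the sketch below shows how a caller might use ApplySigmoidFloat as declared in the first hunk. It is a minimal illustration and not part of the commit: the enclosing namespace (tflite::tensor_utils), the Q3.12 input format, and the batch and input sizes are assumptions for illustration; only the function signature, the header path, and the "internal calculation is float" behavior come from the diff above.

// Minimal usage sketch (not part of this commit). Assumes the declarations
// above live in namespace tflite::tensor_utils and that ApplySigmoidFloat
// reads Q3.12 input and writes Q0.15 output; only the signature and the
// header path are taken from the diff itself.
#include <cstdint>
#include <vector>

#include "tensorflow/lite/kernels/internal/portable_tensor_utils.h"

int main() {
  const int32_t n_batch = 1;
  const int32_t n_input = 4;

  // Q3.12 raw values: real value = raw / 2^12, so 4096 encodes 1.0.
  std::vector<int16_t> input = {0, 2048, 4096, -4096};  // 0.0, 0.5, 1.0, -1.0
  std::vector<int16_t> output(n_batch * n_input);

  // Element-wise sigmoid over the n_batch * n_input values; per the comment
  // fixed in this commit, the internal calculation is done in float before
  // the result is written back as 16-bit fixed point.
  tflite::tensor_utils::ApplySigmoidFloat(input.data(), n_batch, n_input,
                                          output.data());

  // If the output is Q0.15 (raw / 2^15), sigmoid(0.0) = 0.5 should come out
  // near 16384.
  return 0;
}

ApplyTanhFloat in the second hunk follows the same pattern, except that, per its comment, the input scale is 2^(integer_bits) rather than a fixed Q format and the output is Q0.15; its full parameter list is cut off by the hunk boundary above, so it is not shown in the sketch.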
