Embedding Layer trainable bug? #21201

Closed
@Fricker95

Description

In the Embedding(Layer) class in keras/src/layers/core/embedding.py:

    def build(self, input_shape=None):
        if self.built:
            return
        if self.quantization_mode is not None:
            self.quantized_build(input_shape, mode=self.quantization_mode)
        if self.quantization_mode != "int8":
            self._embeddings = self.add_weight(
                shape=(self.input_dim, self.output_dim),
                initializer=self.embeddings_initializer,
                name="embeddings",
                regularizer=self.embeddings_regularizer,
                constraint=self.embeddings_constraint,
                trainable=True,
            )
        self.built = True
        if self.lora_rank:
            self.enable_lora(self.lora_rank)

Here the embeddings weight is built with trainable=True, which does not reflect the _trainable attribute of the keras.layers.Layer parent class. Therefore, if we construct the Embedding layer with trainable=False, the underlying self._embeddings weight will still be created as trainable.
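A minimal self-contained sketch of the mismatch being described (using hypothetical stand-in Variable/Layer classes, not the real Keras API, so the mechanics are explicit):

```python
class Variable:
    """Stand-in for a weight variable with its own trainable flag."""

    def __init__(self, name, trainable):
        self.name = name
        self.trainable = trainable


class Layer:
    """Stand-in for keras.layers.Layer, storing _trainable from the constructor."""

    def __init__(self, trainable=True):
        self._trainable = trainable
        self.weights = []

    @property
    def trainable(self):
        return self._trainable

    def add_weight(self, name, trainable=True):
        v = Variable(name, trainable)
        self.weights.append(v)
        return v


class Embedding(Layer):
    def build(self):
        # As in the quoted snippet: trainable is hard-coded to True,
        # ignoring the trainable=False passed to the constructor.
        self._embeddings = self.add_weight("embeddings", trainable=True)
        # A fix consistent with the parent attribute might instead be:
        #   self.add_weight("embeddings", trainable=self.trainable)


layer = Embedding(trainable=False)
layer.build()
print(layer.trainable)              # False (layer-level flag)
print(layer._embeddings.trainable)  # True (the reported mismatch)
```

Whether this matters in practice may depend on how Keras computes trainable_weights (the layer-level flag may still filter the weight out at training time), but the per-variable flag itself does not match what was requested.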

Is this set to True for a specific reason, or is it simply a bug?
I thought I would point this out.
