Description
Hello,

I'm not 100% sure this is a bug.

I trained a model and saved it on Google Colab Enterprise, with:

- TensorFlow v2.17.0
- Keras v3.4.1

When I try to load the model using `tf.keras.models.load_model('model_v0-1 (1).keras')`, I get the following error:
```
ValueError                                Traceback (most recent call last)
in <cell line: 1>()
----> 1 model = tf.keras.models.load_model('model_v0-1 (1).keras')

11 frames
/usr/local/lib/python3.10/dist-packages/keras/src/layers/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    158         inputs = tree.flatten(inputs)
    159         if len(inputs) != len(input_spec):
--> 160             raise ValueError(
    161                 f'Layer "{layer_name}" expects {len(input_spec)} input(s),'
    162                 f" but it received {len(inputs)} input tensors. "

ValueError: Layer "dense" expects 1 input(s), but it received 2 input tensors. Inputs received: [<KerasTensor shape=(None, 11, 11, 1280), dtype=float32, sparse=False, name=keras_tensor_4552>, <KerasTensor shape=(None, 11, 11, 1280), dtype=float32, sparse=False, name=keras_tensor_4553>]
```
I trained EfficientNetB0 and added a few layers on top:
```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

# `gpus` and `filtered_data` are defined in earlier notebook cells;
# `gpus` is the list of visible GPU devices:
gpus = tf.config.list_physical_devices('GPU')

# Configure the strategy
if len(gpus) > 1:
    strategy = tf.distribute.MirroredStrategy()
else:
    strategy = tf.distribute.get_strategy()

with strategy.scope():
    base_model = tf.keras.applications.EfficientNetB0(
        include_top=False, input_shape=(336, 336, 3), weights='imagenet'
    )
    base_model.trainable = True  # Unfreeze the base model

    # Freeze the first few layers
    for layer in base_model.layers[:15]:
        layer.trainable = False

    # Hidden width halfway between the feature size (1280) and the class count
    num_neurons = (len(filtered_data['species'].unique()) + 1280) // 2

    # Add layers
    model = models.Sequential([
        base_model,
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_neurons, activation='relu',
                     kernel_regularizer=regularizers.l2(0.001)),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.Dense(num_neurons, activation='relu',
                     kernel_regularizer=regularizers.l2(0.001)),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.Dense(len(filtered_data['species'].unique()), activation='softmax'),
    ])
```
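For completeness, the save/load round trip is the standard Keras 3 flow. A minimal sketch (the save filename is an assumption on my part; the file I load back is a re-downloaded copy, hence the ` (1)` suffix):

```python
# After training (sketch): save in the native Keras format.
model.save('model_v0-1.keras')  # assumed save call

# In a fresh runtime, this is the call that raises the ValueError:
import tensorflow as tf

model = tf.keras.models.load_model('model_v0-1 (1).keras')
```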
```
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ efficientnetb0 (Functional)          │ (None, 11, 11, 1280)        │       4,049,571 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ global_average_pooling2d             │ (None, 1280)                │               0 │
│ (GlobalAveragePooling2D)             │                             │                 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense (Dense)                        │ (None, 672)                 │         860,832 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization                  │ (None, 672)                 │           2,688 │
│ (BatchNormalization)                 │                             │                 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout (Dropout)                    │ (None, 672)                 │               0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_1 (Dense)                      │ (None, 672)                 │         452,256 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization_1                │ (None, 672)                 │           2,688 │
│ (BatchNormalization)                 │                             │                 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout_1 (Dropout)                  │ (None, 672)                 │               0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_2 (Dense)                      │ (None, 65)                  │          43,745 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 16,142,256 (61.58 MB)
Trainable params: 5,365,237 (20.47 MB)
Non-trainable params: 46,543 (181.81 KB)
Optimizer params: 10,730,476 (40.93 MB)
```
Therefore, the only `dense` layers are the ones I added at the end, and each of them receives exactly one input in this Sequential stack, so I don't see where the second input tensor reported at load time comes from.

Am I doing something wrong? I read that other people have faced the same issue since TF 2.16 and Keras 3.4, so I suspect this is a Keras issue, but I'm not sure.
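In case it helps triage: since the architecture code is available, a possible workaround is to rebuild the model in code and load only the weights from the `.keras` archive, skipping the architecture deserialization step where `load_model` fails. A sketch I haven't fully verified (`build_model()` is a hypothetical helper wrapping the Sequential definition above):

```python
import tensorflow as tf

# Hypothetical helper that re-runs the Sequential definition shown above.
rebuilt = build_model()

# In Keras 3, load_weights() accepts a .keras archive and restores weights
# by matching the (identical) layer structure, without deserializing the
# saved architecture config.
rebuilt.load_weights('model_v0-1 (1).keras')
```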
Thank you for your help/review.