Layer "dense" expects 1 input(s), but it received 2 input tensors error while loading a keras model #20084
Comments
Try with keras-nightly.
OK, thank you. And do you mean both training and loading the model, or can I just load the previously trained model?
I tried to train a test model using tf v2.17.0 and keras-nightly v3.4.1.dev2024080503. I could train it properly, and it works correctly: the predictions I get right after training are correct.
However, when I load the model in a new environment (still with tf v2.17.0 and keras-nightly v3.4.1.dev2024080503) using tf.keras.models.load_model, the predictions are wrong: no matter which image I pass in, I always get the same result, with Dacelo as the first class at a score of 1.
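A minimal sketch of the load-and-predict flow described here (the load_model call matches the original report; the image path, the 352×352 input size inferred from the 11×11×1280 backbone output, and the top-class readout are assumptions):

```python
import numpy as np
import tensorflow as tf

# Load the saved model in the fresh environment (same call as in the report).
model = tf.keras.models.load_model('model_v0-1 (1).keras')

# Placeholder image path; 352x352 is an assumption (352 / 32 = 11, matching
# the 11x11x1280 EfficientNetB0 feature map in the summary below).
img = tf.keras.utils.load_img('test_bird.jpg', target_size=(352, 352))
x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]  # add batch dimension

preds = model.predict(x)
print('top class index:', np.argmax(preds[0]), 'score:', np.max(preds[0]))
```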
Hello,
I'm not 100% sure it's a bug.
I trained a model and saved it on Google Colab Enterprise with:
Tensorflow v2.17.0
Keras v3.4.1
Once I try to load the model using tf.keras.models.load_model('model_v0-1 (1).keras'), I get the following error:
ValueError Traceback (most recent call last)
in <cell line: 1>()
----> 1 model = tf.keras.models.load_model('model_v0-1 (1).keras')
11 frames
/usr/local/lib/python3.10/dist-packages/keras/src/layers/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
158 inputs = tree.flatten(inputs)
159 if len(inputs) != len(input_spec):
--> 160 raise ValueError(
161 f'Layer "{layer_name}" expects {len(input_spec)} input(s),'
162 f" but it received {len(inputs)} input tensors. "
ValueError: Layer "dense" expects 1 input(s), but it received 2 input tensors. Inputs received: [<KerasTensor shape=(None, 11, 11, 1280), dtype=float32, sparse=False, name=keras_tensor_4552>, <KerasTensor shape=(None, 11, 11, 1280), dtype=float32, sparse=False, name=keras_tensor_4553>]
I trained EfficientNetB0 and added some layers; a sketch of the construction follows the summary below.
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ efficientnetb0 (Functional) │ (None, 11, 11, 1280) │ 4,049,571 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ global_average_pooling2d │ (None, 1280) │ 0 │
│ (GlobalAveragePooling2D) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense (Dense) │ (None, 672) │ 860,832 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization │ (None, 672) │ 2,688 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout (Dropout) │ (None, 672) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_1 (Dense) │ (None, 672) │ 452,256 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization_1 │ (None, 672) │ 2,688 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout_1 (Dropout) │ (None, 672) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_2 (Dense) │ (None, 65) │ 43,745 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 16,142,256 (61.58 MB)
Trainable params: 5,365,237 (20.47 MB)
Non-trainable params: 46,543 (181.81 KB)
Optimizer params: 10,730,476 (40.93 MB)
Therefore, the only dense layers are the ones I added at the end.
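For reference, a sketch of how the architecture could have been assembled, reconstructed from the summary above (the input size, activations, and dropout rates are assumptions; the Dense parameter counts match the table):

```python
import tensorflow as tf

# Assumed input size: the 11x11x1280 backbone output implies ~352x352 inputs.
inputs = tf.keras.Input(shape=(352, 352, 3))
backbone = tf.keras.applications.EfficientNetB0(include_top=False, weights='imagenet')

x = backbone(inputs)                                   # (None, 11, 11, 1280)
x = tf.keras.layers.GlobalAveragePooling2D()(x)        # (None, 1280)
x = tf.keras.layers.Dense(672, activation='relu')(x)   # dense: 860,832 params
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.3)(x)                    # rate is a placeholder
x = tf.keras.layers.Dense(672, activation='relu')(x)   # dense_1: 452,256 params
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(65, activation='softmax')(x)  # dense_2: 65 classes

model = tf.keras.Model(inputs, outputs)
model.summary()
```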
Am I doing something wrong? I read that some other people have faced the same issue since TF 2.16 and Keras 3.4, so I guessed it is an issue in Keras, but I'm not sure.
Thank you for your help/review.
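In the meantime, a possible workaround sketch: rebuild the same architecture in code and restore only the weights, bypassing the failing architecture deserialization. This assumes Keras 3's Model.load_weights can read weights out of a .keras archive, and build_model is a hypothetical helper recreating the architecture sketched above:

```python
# Hypothetical workaround: skip load_model's architecture deserialization.
model = build_model()  # hypothetical helper recreating the architecture above
# Assumption: Keras 3's load_weights accepts a .keras archive and restores
# weights by matching the (identical) layer structure.
model.load_weights('model_v0-1 (1).keras')
```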