```python
import os
from tensorflow import keras

# `logs_path` and `vectorizer` (a TextVectorization layer) are defined
# earlier in the script.
metadata_filename = "metadata.tsv"
os.makedirs(logs_path, exist_ok=True)

# Save labels separately in a line-by-line manner.
with open(os.path.join(logs_path, metadata_filename), "w") as f:
    for token in vectorizer.get_vocabulary():
        f.write("{}\n".format(token))

keras.callbacks.TensorBoard(
    log_dir=logs_path,
    embeddings_freq=1,
    embeddings_metadata=metadata_filename,
)
```
However, the TensorBoard embedding tab only shows this HTML page.
Issues
The above HTML page is returned because `dataNotFound` is true. This happens because this route (http://localhost:6006/data/plugin/projector/runs) returns an empty JSON response. That route is handled by this Python function, which under the hood tries to find the latest checkpoint: it gets the path of the latest checkpoint using `tf.train.latest_checkpoint`. As its docstring states, this TF function finds a TensorFlow (2 or 1.x) checkpoint. However, the TensorBoard callback saves a checkpoint at the end of each epoch as a Keras checkpoint, which `tf.train.latest_checkpoint` cannot find.
Furthermore, `projector_config.pbtxt` is written in the wrong place: TensorBoard expects this file in the same directory where the checkpoints are saved.
Finally, choosing a fixed tensor name is a strong assumption. In my model, the tensor associated with the Embedding layer had a different name (obviously).
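For reference, `projector_config.pbtxt` is a `ProjectorConfig` text proto; a minimal file looks like the sketch below (the `tensor_name` value here is only an illustrative example, since, as noted above, the actual name depends on the model):

```
embeddings {
  tensor_name: "embedding/embeddings"
  metadata_path: "metadata.tsv"
}
```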
Notes
IMO this feature stopped working when the callback was updated to TF 2.0. Indeed, the callback for TF 1.x should work: for example, it saves the checkpoint using the TF format. But when the callback was updated to be compatible with TF 2.0, `tf.keras.Model.save_weights` was used instead of `tf.train.Checkpoint`, which is perfectly legitimate, as reported here.
Possible solution
Save only the weights of the Embedding layer. Here you can find an example. To get the model, you can use `self._model`. Moreover, it is not necessary to specify the tensor name, because there is only one tensor to save. The only drawback: how should two or more embeddings be handled?
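A minimal sketch of this kind of workaround, assuming the first layer of the model is the Embedding layer (names and paths below are illustrative, not the callback's actual ones): wrap only the embedding weights in a `tf.train.Checkpoint` and save it into the log directory, so that `tf.train.latest_checkpoint` can find it.

```python
import os
import tensorflow as tf

log_dir = "logs"  # illustrative path
os.makedirs(log_dir, exist_ok=True)

# Stand-in model; in the callback this would be self._model.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=100, output_dim=8),
])
model.build(input_shape=(None, 4))

# Save only the embedding weights with tf.train.Checkpoint, which writes
# the 'checkpoint' state file that the projector plugin looks for.
weights = tf.Variable(model.layers[0].get_weights()[0])
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))
```

After this, `tf.train.latest_checkpoint(log_dir)` resolves to the saved prefix, and the tensor can be referenced in `projector_config.pbtxt` under the checkpointed attribute name (e.g. `embedding/.ATTRIBUTES/VARIABLE_VALUE` for the `embedding=` keyword used here).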
Environment
How to reproduce it?
I tried to visualize data using the Embedding Projector in TensorBoard, so I added the arguments shown in the snippet at the top of this issue to the TensorBoard callback.