
example_video_and_image_colorization #25

Open · Jonathhhan opened this issue Jul 1, 2022 · 7 comments

@Jonathhhan (Contributor) commented Jul 1, 2022

Finding and understanding a pretrained colorization model was not easy for me.
I got it working with this pretrained model (I had to convert it to a SavedModel first):
https://github.com/EnigmAI/Artistia/blob/main/ImageColorization/trained_models_v2/U-Net-epoch-100-loss-0.006095.hdf5
I tried this model before, but it seems they use two models together (I do not really understand it yet): https://github.com/pvitoria/ChromaGAN

I converted the model like this (with Python):

# load the Keras .hdf5 model and re-export it in TensorFlow SavedModel format
from tensorflow.keras.models import load_model
model = load_model(MODEL_FULLPATH)
model.save(MODEL_FULLPATH_MINUS_EXTENSION)  # a path without a file extension saves as a SavedModel
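
As a quick sanity check (a sketch of my own, not part of the conversion itself), the exported directory can be reloaded to confirm it is a valid SavedModel:

import tensorflow as tf

# reload the exported SavedModel directory and list its serving signatures
reloaded = tf.saved_model.load(MODEL_FULLPATH_MINUS_EXTENSION)
print(reloaded.signatures)  # e.g. 'serving_default'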

Anyway, here is the example: https://github.com/Jonathhhan/ofxTensorFlow2/tree/example_video_and_image_colorization/example_video_and_image_colorization

@bytosaur (Member) commented

Hey @Jonathhhan,

great work! I have adjusted a few things and wanted to push the changes soon. However, I noticed that videos didn't look that good, so I dug into the Python inference code. I saw that the authors divide by 256, then convert from RGB to LAB, and finally take the first channel as the input to the model. You divide the first channel by 2.55. Could you please elaborate on that? Thanks :)
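
For reference, a minimal sketch of the preprocessing described above (my reading of the authors' steps, not their exact code; the function name is illustrative and skimage is assumed for the color conversion):

import numpy as np
from skimage.color import rgb2lab

def to_l_channel(rgb_uint8):
    rgb = rgb_uint8.astype(np.float32) / 256.0  # divide by 256, as in the authors' code
    lab = rgb2lab(rgb)                          # L in [0, 100], a/b roughly in [-128, 127]
    return lab[..., :1]                         # keep only the L channel as the model input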

@Jonathhhan (Contributor, Author) commented

@bytosaur thanks for the hint. I will look into that.

@Jonathhhan (Contributor, Author) commented Jul 20, 2022

@bytosaur I made a version that converts to the LAB color space (plus some other improvements): https://github.com/Jonathhhan/ofxTensorFlow2/blob/example_video_and_image_colorization/example_video_and_image_colorization_2/src/ofApp.cpp
I think it gives a better result.
The idea behind dividing by 2.55 was to get the LAB lightness value (0-100) without a color space conversion, but I got rid of it.
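
For context, the dropped shortcut was roughly this (a sketch, assuming an 8-bit grayscale frame; the variable names are illustrative):

import numpy as np

gray_uint8 = np.zeros((256, 256), dtype=np.uint8)  # placeholder 8-bit grayscale frame
# Rescale 0..255 to 0..100 (255 / 2.55 == 100) and treat the result as the LAB
# L channel. This skips the proper RGB -> LAB conversion, so it only
# approximates the true lightness.
l_approx = gray_uint8.astype(np.float32) / 2.55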

@bytosaur (Member) commented

This example did not make it into #29 since the results weren't that useful. Anyway, thanks a lot for the contribution.

@Jonathhhan (Contributor, Author) commented Sep 13, 2022

@bytosaur no problem, I already knew that. Just out of interest: is the colorization itself a boring use case, or is the result not good enough? (If I remember right, it was trained on Hitchcock movies, and coloring sky and water works really badly, while coloring skin and trees, for example, works much better.) Thanks for improving and including the other two examples. I would love to see more examples from other users, for a better understanding of how to implement different networks with ofxTensorFlow2...

@bytosaur (Member) commented

Hey @Jonathhhan,
no, I actually think the use case is OK; it is just that the quality for video was poor. But yeah, in my eyes openFrameworks is most useful for realtime applications, where image colorization may not be very interesting.
However, YOLOv4 is quite cool and we are already using it for fun projects :)

Still, I want to keep the thread open, at least for a while.

@danomatika (Member) commented Sep 14, 2022 via email
