Questions about the results #6
Hi, thanks for your interest. However, I think there may be a bug in your inference pipeline; it is almost impossible for our model to corrupt the original incomplete UV map. Is the second figure the same view as the first figure?
What is your mask map?
There are two mask maps you need to use. The first one is the UV mask (the one you show above). The second one is produced in our code (this line): it bakes the already-known texture into the input and provides a mask indicating which pixels are known, so the network will not change those regions.
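For anyone reading along, here is a minimal sketch of the idea behind the second mask, using assumed function and variable names rather than the repository's actual code: the known pixels are baked into the input, and the mask is used to copy them back over the prediction so they are never altered.

```python
# Minimal sketch (assumed names, not the repository's actual API) of how a
# known-pixel mask keeps already-baked texture regions unchanged.
import numpy as np

def composite_known_regions(pred_texture: np.ndarray,
                            input_texture: np.ndarray,
                            known_mask: np.ndarray) -> np.ndarray:
    """Blend a predicted texture with the incomplete input texture.

    pred_texture:  (H, W, 3) network output over the full UV map
    input_texture: (H, W, 3) incomplete texture with known pixels baked in
    known_mask:    (H, W, 1) 1.0 where the texture is already known, 0.0 elsewhere
    """
    # Keep the original pixels wherever they are known; take the prediction elsewhere.
    return known_mask * input_texture + (1.0 - known_mask) * pred_texture
```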
OK! I will test it again with the two masks. Thanks for your help.
Thanks for testing. I should admit that this case is challenging for the current model version, because it was trained from scratch on only about 120k samples. In this case, what needs to be completed is a semantic pattern rather than low-level details such as wood strips, which is more difficult for this model. We are working on this and trying to develop a new model that handles it better.
Seems interesting. Looking forward to the new model!
Hi,
First of all, I appreciate the great work you have done.
However, I have some concerns about the generalizability of the proposed method. When I feed a pre-generated texture map extracted from a robust multi-view image generation pipeline into your model, I encounter issues. Specifically, I am trying to use your work to perform inpainting and refinement on an incomplete texture.
Unfortunately, the results are extremely poor and even corrupt the original UV map, as shown in the attached figures. The first figure is the initial texture that I used as input, and the second figure is the texture generated by your model.
Could you please help me understand why this is happening? Is this the expected behavior, or is there something I might be doing wrong?
Thank you for your assistance. FYI, the prompt I used is "a beautiful LV leather bag."