Does the metric support input with mesh and texture? #2

Open
jiangzhongshi opened this issue Apr 3, 2024 · 3 comments

@jiangzhongshi

Hi,

Thanks for the great work and code.
I want to estimate the quality of a new mesh, given an OBJ file and a texture file. Is there an instruction for inputting them and going through the renderings to get the final score?

Thanks,
Zhongshi

@glavoue
Contributor

glavoue commented Apr 5, 2024

Dear Zhongshi,
The metric takes local patches as input, so you have to make your renderings on your own and create your 64×64 patches before calling the metric.
The global quality score of the model is computed as the average of the local patch qualities.
I hope it's clear!
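
In code, the aggregation is just a mean over the per-patch predictions, something like this (a rough sketch, not code from our repository; `score_patch` is a hypothetical placeholder for the network):

```python
import numpy as np

def global_score(patches, score_patch):
    # `patches`: iterable of 64x64 patches extracted from your renderings
    # `score_patch`: any callable returning a scalar quality for one patch
    return float(np.mean([score_patch(p) for p in patches]))
```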

Guillaume

@jiangzhongshi
Author

Hi Guillaume,

Thank you so much for the prompt response!

Do you have any code pointers for making the renderings, and/or any specifications or recommendations for the material/camera parameters, image count, and camera positioning?

Thanks,
Zhongshi

@glavoue
Contributor

glavoue commented Apr 8, 2024

Here is what we did in our experiments, which you can follow:

The rendering was done using Unity with a Lambertian material and mipmapping activated (650 × 550 image size). We used a directional light coming from the top right.

For computing the metric, we patchified the images into small overlapping patches of size 64 × 64 sampled every 32 pixels, and we removed patches containing less than 65% stimulus information.

For the number of views, we selected either the most relevant view or computed 4 views around the object (see Sec. 5.5 of the paper).

Below are the parts of the paper that describe this information in detail:

“We rendered the dynamic test stimuli in a neutral room (light gray walls), without shadows and under a directional light coming from the top right of the room. All models were approximately the same size and rendered with a lambertian material; mipmapping was activated”

“The image size, 650 × 550, is the video resolution of the stimuli seen by the participants in the subjective experiment. We divided (patchified) the images into small overlapping patches of size 64 × 64 sampled every 32 pixels. We removed patches containing less than 65% stimulus information (i.e., the percentage of background pixels in the patch is greater than 35%). We got an average of 60 patches per stimulus."
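
As an illustration only (this is not the code we used), the patchification step could be reproduced with a few lines of NumPy, assuming you can compute a background mask for each rendered image:

```python
import numpy as np

def patchify(image, background_mask, patch_size=64, stride=32, min_stimulus=0.65):
    """Cut an HxWxC render into overlapping patches and keep only those
    containing at least 65% stimulus (i.e. at most 35% background) pixels."""
    h, w = background_mask.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            bg = background_mask[y:y + patch_size, x:x + patch_size]
            if 1.0 - bg.mean() >= min_stimulus:   # fraction of object pixels
                patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches
```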

Sec. 5.5: "For each stimuli, we thus generated 4 snapshots taken from 4 camera positions regularly sampled on its bounding box and prepared the data as for the above network. We used the same training and testing set as our representative fold and used the same training parameters, randomly sampling N_p = 300 patches from all possible viewpoints"
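
If you want to reproduce the 4-view setup outside Unity, one possible way (again, just a sketch of the idea, not the exact setup from the paper) is to place 4 cameras regularly around the bounding box, all looking at its center:

```python
import numpy as np

def four_views(bbox_min, bbox_max, distance_factor=2.5):
    # Returns 4 camera positions regularly sampled around the bounding box.
    # `distance_factor` is a free parameter you will have to tune.
    bbox_min = np.asarray(bbox_min, dtype=float)
    bbox_max = np.asarray(bbox_max, dtype=float)
    center = (bbox_min + bbox_max) / 2.0
    radius = distance_factor * np.linalg.norm(bbox_max - bbox_min) / 2.0
    views = []
    for angle in np.deg2rad([0.0, 90.0, 180.0, 270.0]):
        offset = radius * np.array([np.cos(angle), 0.0, np.sin(angle)])
        views.append({"position": center + offset, "look_at": center})
    return views
```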
