BIG grad difference w.r.t. inria implementation #91

Closed
kwea123 opened this issue Dec 23, 2023 · 1 comment

Comments


kwea123 (Contributor) commented Dec 23, 2023

I find there is a big gradient difference between the two implementations given the same input. Attached are the toy example and the files I used for this repo (nerfstudio) and the inria one.

compare.zip

The output is very different:
inria:

radii
tensor([4], device='cuda:0', dtype=torch.int32)
GRAD:
xyz
tensor([[ 3.4769e-07, -1.5832e-06, -1.2264e+01]], device='cuda:0')
dc
tensor([[8.1292, 0.0000, 0.0000]], device='cuda:0')
activated_opacity
tensor([8.1392], device='cuda:0')
activated_scaling
tensor([[61.3198, 61.3198,  0.0000]], device='cuda:0')
activated_rotation
tensor([[0., 0., 0., 0.]], device='cuda:0')
uvs
tensor([[ 4.1723e-07, -1.2666e-06,  0.0000e+00]], device='cuda:0')

yours

radii
tensor([4], device='cuda:0', dtype=torch.int32)
GRAD:
xyz
tensor([[ 1.0468,  0.0000, -0.2396]], device='cuda:0')
dc
tensor([[0.0325, 0.0000, 0.0000]], device='cuda:0')
activated_opacity
tensor([0.0324], device='cuda:0')
activated_scaling
tensor([[2.3485, 0.0479, 0.0000]], device='cuda:0')
activated_rotation
tensor([[0., 0., 0., 0.]], device='cuda:0')
uvs
tensor([[0.0872, 0.0000]], device='cuda:0')

The relative tolerance should be within 1e-6 or so; here the discrepancy is obviously far larger.
I'm confident their implementation is correct, and in some other repos that use this code I observe that the gaussians look more circular than ellipsoidal, so I'm pretty sure something is drastically different...
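For reference, the kind of elementwise check being applied here can be sketched in plain Python. This is a hypothetical helper that mirrors the formula used by torch.allclose (it is not code from either repo), with the xyz gradient values copied from the two outputs above:

```python
import math

def allclose(a, b, rtol=1e-6, atol=1e-8):
    """Elementwise |a - b| <= atol + rtol * |b|, same rule as torch.allclose."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))

# xyz gradients from the two outputs above, flattened to plain lists
grad_inria = [3.4769e-07, -1.5832e-06, -12.264]
grad_ours = [1.0468, 0.0000, -0.2396]

# At rtol=1e-6 the two implementations should agree; they clearly do not.
print(allclose(grad_inria, grad_ours))  # False
```

The small atol term only matters for entries near zero; for the large entries here the rtol=1e-6 bound dominates, and the mismatch is many orders of magnitude beyond it.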


vye16 (Collaborator) commented Jan 3, 2024

Hi, thank you for bringing this up. One thing to note is that the example you sent uses a different projection matrix for their implementation. However, this example did surface a latent bug, which is fixed in this PR. Once these two things are corrected, the outputs look like this:

Theirs:

xyz
tensor([[ 4.7684e-07,  1.1623e-06, -1.2168e+01]], device='cuda:0')
dc
tensor([[8.1262, 0.0000, 0.0000]], device='cuda:0')
activated_opacity
tensor([8.1262], device='cuda:0')
activated_scaling
tensor([[60.8423, 60.8423,  0.0000]], device='cuda:0')
activated_rotation
tensor([[ 0.0000e+00,  0.0000e+00,  0.0000e+00, -1.3476e-15]], device='cuda:0')
uvs
tensor([[4.7684e-07, 1.1623e-06, 0.0000e+00]], device='cuda:0')

Ours:

radii
tensor([4], device='cuda:0', dtype=torch.int32)
GRAD:
xyz
tensor([[ 0.0000e+00,  4.7684e-07, -1.2168e+01]], device='cuda:0')
dc
tensor([[8.1262, 0.0000, 0.0000]], device='cuda:0')
activated_opacity
tensor([8.1262], device='cuda:0')
activated_scaling
tensor([[60.8423, 60.8423,  0.0000]], device='cuda:0')
activated_rotation
tensor([[0., 0., 0., 0.]], device='cuda:0')
uvs
tensor([[0.0000e+00, 5.9605e-08]], device='cuda:0')

@vye16 vye16 closed this as completed Jan 3, 2024