Feature request: support rendering of color and N-D features simultaneously #529
Rasterization of N-dimensional features is already supported by the gsplat backend; see here.
@maturk Thanks for your response!
Correct, simultaneous rendering with a single forward-pass call is not supported, apart from rgb+depth rendering. I met a similar problem in my dn-splatter project, where I wanted to render rgb+d and normals in a single forward pass. It's easier to do it twice, unless performance is necessary, but this approach is not so elegant. Happy to discuss this problem further, since it might be a useful thing to support better.
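For reference, a minimal sketch of the two-pass workaround described above, assuming gsplat's `rasterization` API; the input tensors (`means`, `quats`, `scales`, `opacities`, `sh_coeffs`, `normals`, `viewmats`, `Ks`) stand in for your own model's data and are not taken from dn-splatter:

```python
import torch
from gsplat import rasterization

# Pass 1: RGB plus depth in one call; render_mode="RGB+ED" appends the
# expected depth as an extra channel after the color channels.
rgbd, alphas, meta = rasterization(
    means, quats, scales, opacities, sh_coeffs,
    viewmats, Ks, width, height,
    sh_degree=3, render_mode="RGB+ED",
)
rgb, depth = rgbd[..., :3], rgbd[..., 3:]

# Pass 2: rasterize per-Gaussian normals as if they were colors.
# sh_degree=None tells gsplat to treat them as precomputed values.
rendered_normals, _, _ = rasterization(
    means, quats, scales, opacities, normals,  # normals: [N, 3]
    viewmats, Ks, width, height,
    sh_degree=None,
)
```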
But I think it is possible if you are using the gsplat backend: evaluate the spherical harmonics yourself, concatenate the extra features onto the resulting RGBs, and pass everything as precomputed colors. Example:
```python
import torch
from gsplat import rasterization, spherical_harmonics

camera = ...

# Evaluate the SH coefficients manually to obtain per-Gaussian RGB values.
dirs = model.get_means() - camera.camera_center
colors = spherical_harmonics(model.active_sh_degree, dirs, model.get_shs())
colors = torch.clamp_min(colors + 0.5, 0.0)

# Concatenate any extra per-Gaussian features (semantics, normals, ...) onto the RGBs.
extra_features = ...
input_features = torch.concat([colors, extra_features], dim=-1)

# With sh_degree=None, all channels of `colors` are alpha-blended directly.
rendered_colors, rendered_alphas, meta = rasterization(
    ...
    colors=input_features,
    sh_degree=None,
    ...
)

# Split the rendered channels back into RGB and feature maps.
rendered_rgbs = rendered_colors[..., :3]
rendered_features = rendered_colors[..., 3:]
```

Another option is to re-implement your own rasterization pipeline using …
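Note that `sh_degree=None` makes gsplat interpret `colors` as precomputed per-Gaussian values rather than SH coefficients, so every extra channel is alpha-blended with exactly the same weights as the RGB channels.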
@yzslab I tried this strategy and it worked. Thanks a lot!
Hi, may I ask if this works well for you?
@RayYoh Yes, it works perfectly in my case. My code is as follows (using the original Inria 3DGS naming convention):
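A hypothetical reconstruction of that approach, assuming the Inria `GaussianModel` accessors (`pc.get_xyz`, `pc.get_rotation`, `pc.get_scaling`, `pc.get_opacity`, `pc.get_features`) and gsplat's `rasterization`; the `pc.get_extra_features` accessor and the camera tensors `viewmat`, `K`, `W`, `H` are illustrative assumptions, not the original snippet:

```python
import torch
from gsplat import rasterization, spherical_harmonics

# Per-Gaussian RGBs from the SH coefficients, Inria-style.
dirs = pc.get_xyz - viewpoint_camera.camera_center
rgbs = spherical_harmonics(pc.active_sh_degree, dirs, pc.get_features)
rgbs = torch.clamp_min(rgbs + 0.5, 0.0)

# Append any learnable per-Gaussian features (assumed accessor, shape [N, C]).
input_features = torch.cat([rgbs, pc.get_extra_features], dim=-1)

rendered, alphas, meta = rasterization(
    means=pc.get_xyz,
    quats=pc.get_rotation,
    scales=pc.get_scaling,
    opacities=pc.get_opacity.squeeze(-1),
    colors=input_features,
    viewmats=viewmat[None],  # [1, 4, 4] world-to-camera
    Ks=K[None],              # [1, 3, 3] intrinsics
    width=W, height=H,
    sh_degree=None,
    render_mode="RGB+ED",    # appends expected depth as the last channel
)
rgb = rendered[..., :3]
features = rendered[..., 3:-1]
depth = rendered[..., -1:]
```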
Hi @SuperXia123, thanks for your kind reply. I have also made it work; the problem was caused by AMP training, i.e., fp16. I noticed that you also render the depth map. May I ask whether you used the GT depth map to supervise it?
@RayYoh I used the inverse depth map (which is also in the range 0-1) to supervise geometry, and it converges. I think using a depth loss directly may be very sensitive to noise, meaning an extremely large loss could be introduced by only a slight inconsistency.
Hi @SuperXia123, thanks for your kind reply. Is the inverse depth map obtained as 1/d from the original depth map? May I ask if there is any codebase that can be referred to for this implementation?
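For what it's worth, a minimal sketch of the 1/d supervision described above; the epsilon guard against division by zero and the L1 loss are assumptions, not details given in this thread:

```python
import torch

def inverse_depth(depth: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Map metric depth to inverse depth; invalid/background pixels (depth <= eps) map to 0.
    return torch.where(depth > eps, 1.0 / depth, torch.zeros_like(depth))

# L1 loss between rendered and ground-truth inverse depth.
loss_depth = torch.abs(inverse_depth(rendered_depth) - inverse_depth(gt_depth)).mean()
```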
Support a user-defined feature map as an optional input, which could be used to flexibly encode any desired feature, such as semantics, normals, etc.
The input feature map is a torch tensor of shape [N, C], where N is the number of Gaussians and C is the number of channels, which depends on the features the user wants. The output feature map has shape [C, H, W] and contains the alpha-blended Gaussian features.
The user then decodes the output feature map according to their encoding strategy; the rasterization engine does not need to care about the meaning of the features.
A similar implementation of this feature can be found here.
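To make the request concrete, a hypothetical sketch of what such an interface might look like; `rasterize_features`, `gaussians`, and `camera` are illustrative names, not an existing API:

```python
import torch

N, C, H, W = 100_000, 16, 720, 1280

# Per-Gaussian features with any user-defined encoding (semantics, normals, ...).
features = torch.randn(N, C, device="cuda")

# Hypothetical call: the rasterizer alpha-blends the features exactly like colors
# and returns a [C, H, W] feature image; decoding is left entirely to the user.
feature_map = rasterize_features(gaussians, camera, features=features)
assert feature_map.shape == (C, H, W)
```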