
Feature request: support rendering of color and N-D features simultaneously #529

Open
SuperXia123 opened this issue Jan 6, 2025 · 5 comments

Comments

@SuperXia123

SuperXia123 commented Jan 6, 2025

Support a user-defined feature map as an optional input, which could be used to flexibly encode any desired feature, such as semantics, normals, etc.

The input feature map is a torch tensor of shape [N, C], where N is the number of Gaussians and C is the number of channels, which depends on the features the user wants. The output feature map is [C, H, W], containing the alpha-blended Gaussian features.

The user then decodes the output feature map according to their own encoding strategy, while the rasterization engine does not care about the meaning of the feature map.

A similar implementation of this feature can be found here.
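The alpha-blending the request describes is channel-agnostic: the rasterizer composites whatever C channels it is given, front to back, weighted by opacity, without ever inspecting what the channels mean. A minimal numpy sketch of that per-pixel compositing (all names here are illustrative, not gsplat API):

```python
import numpy as np

def composite_features(features, alphas):
    """Front-to-back alpha compositing of per-Gaussian features at one pixel.

    features: [N, C] array, one C-channel feature vector per Gaussian,
              sorted front to back.
    alphas:   [N] array of per-Gaussian opacities at this pixel.
    Returns a [C] blended feature vector; C can be anything (RGB, normals,
    semantic logits, ...) -- the compositing never inspects the channels.
    """
    out = np.zeros(features.shape[1])
    transmittance = 1.0
    for f, a in zip(features, alphas):
        out += transmittance * a * f
        transmittance *= 1.0 - a
    return out

# Two Gaussians with 5 channels each (e.g. RGB + 2 extra feature dims).
feats = np.array([[1.0, 0.0, 0.0, 2.0, -1.0],
                  [0.0, 1.0, 0.0, 0.0,  4.0]])
alphas = np.array([0.5, 1.0])
blended = composite_features(feats, alphas)  # -> [0.5, 0.5, 0.0, 1.0, 1.5]
```

The loop is exactly the compositing a Gaussian-splatting rasterizer performs for RGB; nothing in it constrains C to 3.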

@maturk
Collaborator

maturk commented Jan 6, 2025

Rasterization of N-dimensional features is already supported by the gsplat backend; see here.

@SuperXia123
Author

@maturk Thanks for your response!
If I understand correctly, one cannot render colors and self-defined feature channels simultaneously in a single function call.
If that is true, should I call the rasterization function twice if I want both colors and features to be rasterized?

@maturk
Collaborator

maturk commented Jan 6, 2025

Correct, simultaneous rendering in a single forward pass is not supported, apart from RGB + depth rendering.

I ran into a similar problem in my dn-splatter project, where I wanted to render RGB-D and normals in a single forward pass. It is easier to rasterize twice, unless performance is critical, but that approach is not very elegant.

Happy to discuss this problem further, since it might be a useful thing to support better.
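One reason a single pass can in principle cover both cases: alpha compositing is linear and per-channel, so blending a concatenated [RGB | features] tensor once gives exactly the same result as blending RGB and features in two separate passes. A quick numpy check (the blend function here is an illustrative stand-in for the rasterizer's per-pixel compositing, not gsplat code):

```python
import numpy as np

def blend(features, alphas):
    """Front-to-back alpha compositing at a single pixel ([N, C] -> [C])."""
    out = np.zeros(features.shape[1])
    transmittance = 1.0
    for f, a in zip(features, alphas):
        out += transmittance * a * f
        transmittance *= 1.0 - a
    return out

rng = np.random.default_rng(0)
rgb = rng.random((4, 3))     # per-Gaussian RGB
extra = rng.random((4, 8))   # per-Gaussian extra features
alphas = rng.random(4)       # per-Gaussian opacities at this pixel

# One pass over the concatenated channels...
joint = blend(np.concatenate([rgb, extra], axis=-1), alphas)

# ...matches two separate passes, channel for channel.
assert np.allclose(joint[:3], blend(rgb, alphas))
assert np.allclose(joint[3:], blend(extra, alphas))
```

This linearity is what makes the concatenate-then-split workaround lossless: splitting the rendered image back into RGB and feature channels recovers exactly what two passes would have produced.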

@SuperXia123 SuperXia123 changed the title Feature request: support rendering of arbitrary feature map Feature request: support rendering of color and N-D features simultaneously Jan 7, 2025
@yzslab
Contributor

yzslab commented Jan 7, 2025

But, I think it is possible.

If you are using rasterization:

  1. Manually call spherical_harmonics to convert your SHs to RGB
  2. Concatenate your RGB and features into a new tensor of shape [N, 3+F], where F is the number of feature dimensions
  3. Call rasterization with the concatenated tensor and sh_degree=None

Example:

import torch
from gsplat import rasterization, spherical_harmonics

camera = ...

# Evaluate the SH coefficients along the view directions to get per-Gaussian RGB.
dirs = model.get_means() - camera.camera_center
colors = spherical_harmonics(model.active_sh_degree, dirs, model.get_shs())
colors = torch.clamp_min(colors + 0.5, 0.0)

extra_features = ...  # [N, F] user-defined features

# Concatenate into a single [N, 3+F] tensor so one pass blends everything.
input_features = torch.concat([colors, extra_features], dim=-1)

rendered_colors, rendered_alphas, meta = rasterization(
    ...
    colors=input_features,
    sh_degree=None,  # colors are already RGB, so skip SH evaluation
    ...
)

# Split the rendered [..., 3+F] image back into RGB and feature channels.
rendered_rgbs = rendered_colors[..., :3]
rendered_features = rendered_colors[..., 3:]

Another option is to re-implement your own rasterization pipeline using fully_fused_projection and rasterize_to_pixels, instead of relying on the provided rasterization interface.

@SuperXia123
Author

SuperXia123 commented Jan 10, 2025

@yzslab I tried this strategy and it worked. Thanks a lot!
