
Ray sampling #17

Open
PotatoPalooza opened this issue Oct 5, 2022 · 0 comments
Labels
enhancement New feature or request

Comments

@PotatoPalooza
Member

Ray sampling is currently highly inefficient: it is done during model inference and frequently samples empty space. For the refactor, ray sampling should be moved into the data loader and done in batch, although this may raise memory concerns since storing the point data can be expensive. It can also be further parallelized through PyTorch's built-in asynchronous data loading, which runs on the CPU and thus reduces GPU blocking and load. https://pytorch.org/docs/stable/data.html
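
A minimal sketch of what that could look like, assuming we wrap the existing ray origins/directions in a `Dataset` (the names `RaySampleDataset`, `rays_o`, `rays_d`, `near`, `far` are placeholders, not the repo's current API):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RaySampleDataset(Dataset):
    """Hypothetical dataset that pre-samples points along each ray on the CPU,
    so the model only ever sees ready-made (points, direction, depths) batches."""

    def __init__(self, rays_o, rays_d, near, far, n_samples=64):
        # rays_o, rays_d: (N, 3) ray origins and directions from the existing loader
        self.rays_o, self.rays_d = rays_o, rays_d
        self.near, self.far = near, far
        self.n_samples = n_samples

    def __len__(self):
        return self.rays_o.shape[0]

    def __getitem__(self, idx):
        o, d = self.rays_o[idx], self.rays_d[idx]
        # Stratified depths along the ray, computed here instead of inside inference
        t = torch.linspace(self.near, self.far, self.n_samples)
        t = t + torch.rand(self.n_samples) * (self.far - self.near) / self.n_samples
        pts = o[None, :] + t[:, None] * d[None, :]  # (n_samples, 3) world-space points
        return pts, d, t

# Asynchronous CPU-side loading: workers pre-sample rays while the GPU trains
loader = DataLoader(
    RaySampleDataset(rays_o, rays_d, near=2.0, far=6.0),
    batch_size=1024, num_workers=4, pin_memory=True,
)
```

The memory concern above is exactly the `(n_samples, 3)` blow-up per ray; keeping `num_workers > 0` and generating points in `__getitem__` (rather than precomputing them all) keeps that cost bounded per batch.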

As for the inefficiency in the sampling itself, sampling should be done at higher rates in regions where many rays intersect. That is, areas with high ray density (especially after the first dozen iterations of the model) are more likely to need additional samples to converge, and the refactor should account for this as well (see the sketch below).
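
One way to express that density-aware sampling, sketched here under the assumption that a coarse pass already gives per-sample weights along each ray (the function name `importance_resample` and its arguments are hypothetical):

```python
import torch

def importance_resample(t_coarse, weights, n_fine=128):
    """Hypothetical hierarchical resampling: draw extra depth samples where the
    coarse pass assigned high weight, i.e. where rays hit dense/occupied space."""
    # t_coarse: (n_rays, n_coarse) coarse depths; weights: same shape, non-negative
    probs = weights + 1e-5                           # avoid zero-probability bins
    probs = probs / probs.sum(dim=-1, keepdim=True)
    # Draw bin indices proportionally to density, with replacement
    idx = torch.multinomial(probs, n_fine, replacement=True)  # (n_rays, n_fine)
    t_fine = torch.gather(t_coarse, -1, idx)
    # Jitter within a bin so the fine samples are not all identical
    bin_width = (t_coarse[:, -1:] - t_coarse[:, :1]) / t_coarse.shape[-1]
    t_fine = t_fine + (torch.rand_like(t_fine) - 0.5) * bin_width
    # Merge coarse and fine depths, sorted for downstream compositing
    return torch.sort(torch.cat([t_coarse, t_fine], dim=-1), dim=-1).values
```

This could run in the data loader as well once weights from earlier iterations are cached, which ties the two refactor points together.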

@PotatoPalooza PotatoPalooza added the enhancement New feature or request label Oct 5, 2022