Ray sampling is currently highly inefficient: it happens during model inference and frequently samples empty space. For the refactor, ray sampling should be moved into the data loader and done in batches, although this may raise memory concerns since storing the sampled point data can be expensive. Sampling can also be parallelized with PyTorch's built-in asynchronous data loading, which runs on the CPU and therefore reduces GPU blocking and load (see the sketch below): https://pytorch.org/docs/stable/data.html
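A minimal sketch of what this could look like, assuming a NeRF-style setup where per-ray origins and directions are already available as tensors (`rays_o`, `rays_d`, the near/far bounds, and the class/variable names below are all placeholders, not existing code in this repo). The `Dataset` does the stratified point sampling on the CPU inside `__getitem__`, and `num_workers > 0` in the `DataLoader` runs that work in background processes so it overlaps with GPU compute:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RaySampleDataset(Dataset):
    """Samples points along each ray on the CPU so the GPU only
    receives ready-to-use (points, directions, depths) batches."""

    def __init__(self, rays_o, rays_d, n_samples=64, near=2.0, far=6.0):
        self.rays_o = rays_o          # (N, 3) ray origins
        self.rays_d = rays_d          # (N, 3) unit ray directions
        self.n_samples = n_samples
        self.near, self.far = near, far

    def __len__(self):
        return self.rays_o.shape[0]

    def __getitem__(self, idx):
        o, d = self.rays_o[idx], self.rays_d[idx]
        # Stratified sampling of depths along the ray: one jittered
        # sample per bin between near and far.
        edges = torch.linspace(self.near, self.far, self.n_samples + 1)
        t = edges[:-1] + torch.rand(self.n_samples) * (edges[1:] - edges[:-1])
        pts = o[None, :] + t[:, None] * d[None, :]      # (n_samples, 3)
        return pts, d.expand(self.n_samples, 3), t

# Hypothetical ray tensors; in the real pipeline these would come from camera poses.
rays_o = torch.zeros(4096, 3)
rays_d = torch.nn.functional.normalize(torch.randn(4096, 3), dim=-1)

# num_workers > 0 makes loading asynchronous (background CPU workers);
# pin_memory speeds up the host-to-GPU copy of the batched samples.
loader = DataLoader(RaySampleDataset(rays_o, rays_d),
                    batch_size=1024, shuffle=True,
                    num_workers=4, pin_memory=True)
```

The memory concern mentioned above shows up here as the collated `(batch, n_samples, 3)` point tensor per batch; generating points lazily in `__getitem__` (rather than precomputing them for every ray up front) keeps that cost bounded to one batch at a time.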
As for the inefficiency of the sampling itself, sampling should be done at higher rates in regions where many rays intersect. That is, areas with high ray density (especially after the first dozen or so training iterations) are the most likely to need additional samples to converge, and the refactor should account for this as well. One way to bias samples toward those regions is sketched below.
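A possible approach, assuming per-bin weights from an earlier (coarse) pass are available as a density signal: inverse-transform sampling of extra depths from the weight distribution, so bins the model already considers dense receive more samples. This is a sketch in the spirit of hierarchical sampling, not code that exists in the repo; `sample_pdf`, `bin_edges`, and `weights` are illustrative names.

```python
import torch

def sample_pdf(bin_edges, weights, n_importance, eps=1e-5):
    """bin_edges: (R, M+1) depth bin boundaries per ray.
    weights:      (R, M)   per-bin density weights from a coarse pass.
    Returns (R, n_importance) extra depths, biased toward dense bins."""
    weights = weights + eps                                     # avoid empty bins
    pdf = weights / weights.sum(dim=-1, keepdim=True)           # (R, M)
    cdf = torch.cumsum(pdf, dim=-1)
    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], -1)  # (R, M+1)

    # Inverse-transform sampling: map uniform draws through the CDF.
    u = torch.rand(cdf.shape[0], n_importance)
    idx = torch.searchsorted(cdf, u, right=True).clamp(max=weights.shape[-1])
    lo, hi = idx - 1, idx
    cdf_lo = torch.gather(cdf, -1, lo)
    cdf_hi = torch.gather(cdf, -1, hi)
    edge_lo = torch.gather(bin_edges, -1, lo)
    edge_hi = torch.gather(bin_edges, -1, hi)
    frac = (u - cdf_lo) / (cdf_hi - cdf_lo).clamp(min=eps)
    return edge_lo + frac * (edge_hi - edge_lo)
```

These extra depths could then be merged (sorted) with the uniform stratified samples from the data loader before the next forward pass, so the dense regions get proportionally more evaluation points.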