As it is now, the mask makes "bad" pixels irrelevant for the likelihood calculation, but the image is still rendered over the full image plane. I was just chatting with @ntessore about the possibility of avoiding the computation of masked pixels. This would clearly improve speed, especially with a view to analysing larger areas, e.g. for galaxy clusters. It turns out that the simplest (and maybe the only reasonable) way to do it is to consider blocks of 16x16 pixels (as they are treated by the OpenCL kernels), and skip rendering a block if all of its pixels are masked. I think this would already help us a lot in cluster analysis.
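A minimal sketch of what that could look like on the host side, assuming a 16x16 tile size that matches the OpenCL work-groups: precompute one flag per tile saying whether every pixel in it is masked, so the kernel can bail out immediately for those tiles. The names (`BLOCK`, `build_block_skip`) and the mask convention (non-zero = bad pixel) are illustrative, not the existing code's API.

```c
/* Sketch only: precompute a per-tile "skip" flag from the pixel mask.
 * BLOCK is assumed to match the 16x16 tiles handled by the OpenCL kernels. */
#include <stdlib.h>

#define BLOCK 16

/* mask[y*width + x] != 0 means the pixel is masked ("bad").
 * Returns one entry per BLOCKxBLOCK tile: 1 = all pixels masked (skip), 0 = render. */
unsigned char *build_block_skip(const unsigned char *mask, int width, int height)
{
    int nbx = (width  + BLOCK - 1) / BLOCK;
    int nby = (height + BLOCK - 1) / BLOCK;
    unsigned char *skip = malloc((size_t)nbx * nby);
    if (!skip)
        return NULL;

    for (int by = 0; by < nby; ++by) {
        for (int bx = 0; bx < nbx; ++bx) {
            unsigned char all_masked = 1;
            for (int y = by * BLOCK; y < height && y < (by + 1) * BLOCK; ++y) {
                for (int x = bx * BLOCK; x < width && x < (bx + 1) * BLOCK; ++x) {
                    if (!mask[y * width + x]) {   /* found one good pixel */
                        all_masked = 0;
                        y = height;               /* leave both loops */
                        break;
                    }
                }
            }
            skip[by * nbx + bx] = all_masked;
        }
    }
    return skip;
}
```

The buffer would then be passed to the kernel, which could look up the flag for its own work-group (e.g. via `get_group_id`) and return early when the tile is fully masked, so only partially or fully unmasked tiles are actually rendered.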