Currently, fitting for a model in pixel space using the linear transforms returned by the `get_pixel_transforms` method requires ~4l^2 pixels for a map of degree l. There are more efficient algorithms from the signal processing literature which require only ~l^2 pixels (paper) or ~2l^2 pixels (paper). The algorithm requiring ~l^2 pixels is implemented in the SSHT package, which is written in C and Python. It provides the forward (Ylm -> pixels) and inverse (pixels -> Ylms) transforms (see docs), as well as the adjoints of both, and it seems to be well maintained.
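To make the parameter-count argument concrete, here is a minimal numpy sketch of pixel-space fitting through a linear transform. The matrix `A` is random and merely stands in for the (Ylm -> pixels) matrix that `get_pixel_transforms` would return; the shapes are the only part taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

l = 5                     # map degree
n_ylm = (l + 1) ** 2      # number of spherical-harmonic coefficients
n_pix = 4 * l ** 2        # ~4 l^2 pixels for the current sampling

# A stands in for the (Ylm -> pixels) transform; random, illustration only.
A = rng.standard_normal((n_pix, n_ylm))
y_true = rng.standard_normal(n_ylm)   # "true" coefficients
data = A @ y_true                     # noiseless pixel-space data

# Fitting in pixel space makes the pixel values the parameters,
# so halving n_pix halves the size of the inference problem.
y_fit, *_ = np.linalg.lstsq(A, data, rcond=None)
print(np.allclose(y_fit, y_true))
```

A sampling scheme with ~l^2 pixels would shrink `n_pix` (and hence the parameter vector) by roughly a factor of four relative to the current one.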
A related paper on sparse image reconstruction on the sphere defines the discrete TV norm on the sphere, together with the finite-difference gradients of the pixel map in latitude and longitude needed to compute it. These finite-difference operators differ slightly from the ones returned by `get_pixel_transforms`: the gradient operator in the latitude direction is the same (I think), but the longitudinal one is multiplied by 1/sin(theta), and they use additional regularization weights (q(theta_t) in the paper) to eliminate numerical instabilities close to the poles. In this paper, they claim that the "MW sampling algorithm" (the one which uses ~l^2 pixels) improves the sparsity of pixel maps when doing inference relative to an algorithm which uses twice as many pixels.
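A rough numpy sketch of that discrete TV norm on an equiangular theta/phi grid, assuming forward differences in both directions, the 1/sin(theta) factor on the longitudinal gradient, and per-ring weights near the poles. The exact q(theta_t) is defined in the paper; the sin(theta) stand-in used here is only an illustrative assumption.

```python
import numpy as np

def tv_norm(f, theta):
    """f: (n_theta, n_phi) pixel map; theta: (n_theta,) colatitudes."""
    # Forward differences; trim one row/column so the shapes match.
    d_theta = np.diff(f, axis=0)[:, :-1]
    d_phi = np.diff(f, axis=1)[:-1, :] / np.sin(theta[:-1, None])
    q = np.sin(theta[:-1, None])  # stand-in for the paper's q(theta_t)
    return np.sum(q * np.sqrt(d_theta**2 + d_phi**2))

n_theta, n_phi = 16, 31
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta  # avoid the poles
f = np.cos(theta)[:, None] * np.ones(n_phi)           # smooth test map
print(tv_norm(f, theta))
```

Without the q(theta_t) weights, the 1/sin(theta) factor blows up near the poles, which is the instability the paper's weighting is meant to suppress.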
The drawback of this method is that you can't precompute the transforms before doing inference, so it might be that the increased evaluation cost of the model outweighs the gains from reducing the number of parameters by half.
@rodluger, how complicated would it be to write a Theano Op which uses the SSHT package to evaluate the pixel transforms and their adjoints?
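For what it's worth, since the transforms are linear, the gradient such an Op would return is just the adjoint applied to the output gradient, so the main ingredient is a correct (transform, adjoint) pair. The standard dot-product check below is what the Op's `grad()` would need to satisfy; a random matrix stands in for the SSHT transform (illustration only, not the SSHT API).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 7))  # stand-in for the linear transform

forward = lambda x: A @ x    # e.g. the (Ylm -> pixels) transform
adjoint = lambda y: A.T @ y  # what grad() applies to the output gradient

x = rng.standard_normal(7)
y = rng.standard_normal(12)

# A correct adjoint pair must satisfy <forward(x), y> == <x, adjoint(y)>.
lhs = np.dot(forward(x), y)
rhs = np.dot(x, adjoint(y))
print(np.isclose(lhs, rhs))
```

Since SSHT already exposes the adjoints of both transforms, the Op mostly has to wire them into `perform()` and `grad()` and pass a check like this one.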