
Optics shift invariant

Brian Wandell edited this page Aug 12, 2024 · 19 revisions

Many ISETCam calculations use a shift-invariant optical calculation. Shift invariance means that the image of a point in the scene spreads by the same amount in the image, no matter the position of the point in the scene. The spread in the image, called the point spread function (PSF), varies depending on the wavelength of the point light. The image region over which shift-invariance is a good approximation for a given lens is called its isoplanatic region.
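Shift invariance can be stated concretely: if the scene point moves, the blurred image moves with it, but the shape of the blur does not change. ISETCam itself is written in MATLAB; the following is a small language-neutral Python sketch (not ISETCam code) that illustrates the property with a toy Gaussian PSF and two point sources at different positions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy PSF: a small Gaussian kernel (stand-in for a real optical PSF)
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()

# Two point sources at different scene positions
scene_a = np.zeros((32, 32)); scene_a[10, 10] = 1.0
scene_b = np.zeros((32, 32)); scene_b[20, 15] = 1.0

img_a = fftconvolve(scene_a, psf, mode='same')
img_b = fftconvolve(scene_b, psf, mode='same')

# Shift invariance: the blur patch around each point is identical
patch_a = img_a[10-3:10+4, 10-3:10+4]
patch_b = img_b[20-3:20+4, 15-3:15+4]
assert np.allclose(patch_a, patch_b)
```

Within an isoplanatic region, a real lens behaves approximately like this idealization: one PSF (per wavelength) describes the blur everywhere.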

In general, the PSF depends on the location of the point in the scene: both its distance from the center of the image (field height) and, most importantly, the distance from the lens to the point. But the shift-invariant approximation is very useful for many scenes, for example scenes with a modest (20 deg) field of view and with all the objects at approximately the same distance (in a plane). It is also useful when all the objects are far away, because for many lenses the PSF changes little for points beyond a certain distance. For the human eye, or for the classical double Gauss lens, that distance is 10-20 focal lengths.

ISETCam stores the shift-invariant PSF as a wavefront function

The PSF is a property of the optics, and thus we describe it with parameters in the optics structure. Over the years, we have stored information about the PSF in various representations. Starting in 2023, we store the information using wavefront aberrations. (To learn why, see the page on PSF representations).

The PSF is not stored explicitly in the optics struct; it is computed from the parameters stored there. We store the optics wavefront aberration as a set of polynomial coefficients, using the international standard Zernike representation. The polynomial coefficients correspond to aberrations with specific labels such as defocus, coma, and different types of astigmatism. We can compute the PSF from these polynomial coefficients, and this is done on the fly in oiCompute().
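The path from Zernike coefficients to a PSF is standard Fourier optics: sum the weighted Zernike polynomials over the pupil to get the wavefront error, form the complex pupil function, and take the squared magnitude of its Fourier transform. The sketch below (Python, not ISETCam code) shows the idea for a single wavelength with only a defocus term; the sample count and the coefficient value are illustrative choices, not ISETCam defaults.

```python
import numpy as np

# Build a circular pupil with a defocus aberration (Zernike Z(2,0)),
# then compute the monochromatic PSF as |FFT(pupil)|^2.
n = 256                                     # pupil-plane samples (illustrative)
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
rho = np.hypot(x, y)
aperture = (rho <= 1.0).astype(float)

c_defocus = 0.25                            # defocus coefficient in waves (assumed)
z_defocus = np.sqrt(3) * (2 * rho**2 - 1)   # Zernike defocus polynomial
wavefront = c_defocus * z_defocus           # wavefront aberration, in waves

pupil = aperture * np.exp(1j * 2 * np.pi * wavefront)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()                            # normalize the PSF to unit volume

# An aberration-free pupil gives a sharper (higher-peaked) PSF
psf0 = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2
psf0 /= psf0.sum()
```

With all coefficients set to zero, this same computation returns the diffraction-limited PSF for the circular aperture.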

The value of using the polynomial representations is that they are continuous. Thus, we can start with them and create a PSF that is properly sampled in space for the resolution of the scene or optical image (BW to say more here). If we store the PSF, we need to pick a spatial sampling resolution, and that may not be appropriate for a specific scene's spatial resolution.

In the spectral irradiance calculations within oiCompute, the wavefront aberrations are converted into a point spread function that is sampled at the resolution of the scene. The PSF is convolved with the scene, wavelength-by-wavelength. The implementation requires paying attention to the details of the sampling rate for the scene spectral radiance. There are many tutorials and examples of the oiCompute calculations in the ISETCam toolbox.
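The per-wavelength convolution described above can be sketched compactly. This is a minimal Python illustration, not the ISETCam implementation: the function name and the (rows, cols, nWave) array layout are hypothetical, and the Gaussian PSFs merely stand in for wavelength-dependent optical PSFs.

```python
import numpy as np
from scipy.signal import fftconvolve

def oi_from_scene(scene, psfs):
    """Convolve each wavelength plane of the scene with its own PSF.

    scene: (rows, cols, nWave) spectral radiance (hypothetical layout)
    psfs:  list of nWave 2-D PSFs, one per wavelength, at scene resolution
    """
    out = np.empty_like(scene)
    for w in range(scene.shape[2]):
        out[:, :, w] = fftconvolve(scene[:, :, w], psfs[w], mode='same')
    return out

# Example: different blur per wavelength (illustrative Gaussian PSFs)
yy, xx = np.mgrid[-5:6, -5:6]
def gauss(sigma):
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

scene = np.random.rand(64, 64, 3)           # toy 3-wavelength scene
psfs = [gauss(s) for s in (2.0, 1.5, 1.0)]  # one PSF per wavelength
oi = oi_from_scene(scene, psfs)
```

The key point mirrors the text: each PSF must be sampled on the same spatial grid as the scene plane it blurs, which is why ISETCam generates the PSF from the continuous wavefront representation at compute time.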

Diffraction limited

The diffraction-limited calculation is based on the formula for a diffraction-limited, circular aperture. The calculation is shift-invariant, applying the classic point spread uniformly across the scene to create the optical image. The point spread is wavelength-dependent, but shift-invariant.

The diffraction-limited PSF depends only on the f-number of the lens. We will explain the formula and the issues with spatial sampling here.
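Two standard quantities capture the f-number dependence: the radius of the first zero of the Airy pattern, r = 1.22 λ N, and the incoherent cutoff spatial frequency, fc = 1 / (λ N), where N is the f-number. A quick numeric check in Python (the wavelength and f-number are example values):

```python
# First zero of the Airy pattern: r = 1.22 * lambda * N   (N = f-number)
# Diffraction-limited OTF cutoff:  fc = 1 / (lambda * N)
wavelength = 550e-9       # meters (green light, example value)
f_number = 4.0            # example f-number

airy_radius = 1.22 * wavelength * f_number    # meters, in the image plane
cutoff_freq = 1.0 / (wavelength * f_number)   # cycles per meter

print(airy_radius * 1e6)  # -> about 2.68 micrometers
print(cutoff_freq / 1e3)  # -> about 454.5 cycles per millimeter
```

Note that the focal length and aperture diameter matter only through their ratio (the f-number), which is why the f-number alone determines the diffraction-limited PSF at a given wavelength.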

We will contrast this calculation with the wavefront calculation, which can also be diffraction limited. The two differ in how they handle spatial sampling; the wavefront calculation, which is a bit slower, is more accurate for very high dynamic range scenes.
