
Figures (Main)


Welcome to the isethdrsensor wiki!

Figure 3


Figure 3 source code

Scene light groups. The dataset consists of 2000 scenes, each defined by four spectral radiance maps representing illumination by the sky, headlights, streetlights, and other light sources (e.g., tail lights, bicycle lights). To simulate various lighting conditions, the four maps are combined with different weights. For example, a daytime scene (left) has a bright sky and headlights, while a nighttime scene (right) has a darker sky with prominent headlights and streetlights. A lens model that incorporates aperture and scratch effects (but excludes inter-reflections) converts the scene radiance to sensor irradiance. The graph on the right illustrates the illumination profile across a horizontal line. Note that headlight intensity remains constant between day and night, while reduced skylight lowers image contrast in darker areas. The software includes tools to select weights that achieve a desired dynamic range and low-light level (lightGroupDynamicRangeSet.m).
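The core operation is a weighted sum of the four light-group radiance maps. The sketch below is illustrative only, not the repository code (the actual weight-selection tool is lightGroupDynamicRangeSet.m); the scene variable names and weight values are assumptions.

```matlab
% Minimal sketch (not the repository code) of combining the four light-group
% radiance maps with user-chosen weights.  Assumes the four ISETCam scene
% structs are already loaded and share the same size and wavelength sampling;
% variable names and weight values are illustrative.
wgts   = [1.0, 0.8, 0.0, 0.2];            % sky, headlights, streetlights, other
groups = {skyScene, headlightScene, streetlightScene, otherScene};

photons = zeros(size(sceneGet(groups{1}, 'photons')));
for ii = 1:numel(groups)
    photons = photons + wgts(ii) * sceneGet(groups{ii}, 'photons');
end

combinedScene = sceneSet(groups{1}, 'photons', photons);  % weighted-sum scene
% sceneWindow(combinedScene);                             % visualize, if desired
```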

Figure 4


Figure 4 source code

Flare model. The figure depicts a series of simulated scenes featuring an array of bright lights, resembling headlights, against a dark background. The intensity of the bright lights steps down by a factor of ten across the image. Each scene was rendered using distinct flare parameters. The number of aperture blades increases from four (leftmost column) to a circular aperture (rightmost column). The density of simulated dust and scratches varies from high (top row) to minimal (bottom row).
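The flare pattern is governed by diffraction from the aperture shape together with dust and scratches on the lens. The sketch below illustrates that idea in plain MATLAB; it is not the repository's flare code, and all parameter values are illustrative.

```matlab
% Minimal sketch of the idea behind the flare model: a polygonal aperture
% with random dust specks produces the flare PSF through diffraction.
n      = 512;                    % aperture grid size (samples)
nSides = 5;                      % number of aperture blades
nDots  = 50;                     % dust-particle count (set 0 for a clean lens)

[x, y] = meshgrid(linspace(-1, 1, n));
theta  = linspace(0, 2*pi, nSides + 1);
ap     = double(inpolygon(x, y, 0.9*cos(theta), 0.9*sin(theta)));  % n-gon pupil

for k = 1:nDots                  % add opaque dust specks at random positions
    cx = 2*rand - 1;  cy = 2*rand - 1;
    ap((x - cx).^2 + (y - cy).^2 < 0.01^2) = 0;
end

psf = abs(fftshift(fft2(ap))).^2;    % far-field diffraction pattern
psf = psf / sum(psf(:));             % normalize; this PSF blurs the scene irradiance
imagesc(log10(psf + eps)); axis image; colormap gray;
```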

Figures 5 and 7


Figure 5/7 source code

Simulated and measured nighttime driving scenes. Left: Images in this column were captured in rapid succession by a Google Pixel 4a, with exposure duration increasing from top to bottom (see inset). Right: Images in this column were simulated using a model of the Google Pixel 4a [31]. The spatial extent of the flare, and the corresponding extent of the sensor saturation, are very similar in the two columns. The red boxes outline two vulnerable road users: a cyclist (left) and a motorcyclist (right). As exposure duration increases from 5 to 40 ms, both riders become more visible. At the longest duration, the flare arising from the headlights behind them expands and masks a significant part of these vulnerable road users.
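Such an exposure bracket can be simulated by recomputing the same sensor model at increasing exposure durations. This is a hedged sketch using standard ISETCam calls, not the repository's Figure 5/7 script; the variables `oi` and `sensor` are assumed to exist.

```matlab
% Minimal sketch (not the repository script) of the exposure bracket:
% recompute the same sensor model at increasing exposure durations.
% Assumes an optical image `oi` and a sensor model `sensor` already exist.
expTimes = [5, 10, 20, 40] * 1e-3;            % exposure durations (s)
for ii = 1:numel(expTimes)
    sensor = sensorSet(sensor, 'exp time', expTimes(ii));
    sensor = sensorCompute(sensor, oi);
    ip     = ipCompute(ipCreate, sensor);     % simple rendering for display
    % ipWindow(ip);                           % view each capture, if desired
end
```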

A split pixel 3-capture sensor and flare. We used the scene in Figure 5 to simulate the 3-capture split pixel image. The data from the large photodetector are saturated over large regions due to lens flare (top). The small photodetector data preserve image contrast in parts of the image that are saturated by flare, and thus the combined image enhances the visibility of the motorcyclist (blue boxes). The combined image also retains the visibility of the deer in the dark image region (red boxes).
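A minimal sketch of the combination idea described here: where flare saturates the large-photodetector capture, substitute rescaled small-photodetector data. The sensor variables and the sensitivity ratio are assumptions; see the Figure 5/7 source code for the actual reconstruction.

```matlab
% Minimal sketch (not the repository reconstruction).  `sensorLPLG` and
% `sensorSPLG` are assumed to be the simulated large- and small-photodetector
% captures; the sensitivity ratio is an illustrative assumption.
sensRatio = 8;                                       % assumed large/small sensitivity ratio
vSwing    = sensorGet(sensorLPLG, 'pixel voltage swing');
vLP       = sensorGet(sensorLPLG, 'volts');
vSP       = sensorGet(sensorSPLG, 'volts');

saturated = vLP >= 0.98 * vSwing;                    % flare-saturated pixels
vCombined = vLP;
vCombined(saturated) = sensRatio * vSP(saturated);   % rescaled small-pixel data
```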

Figure 6


Figure 6 source

A split pixel 3-capture sensor in a tunnel. We modeled a CMOS image sensor (CIS) with a split pixel, 3-capture design [32]. Each pixel contains a large and a small photodetector. The sensor acquires two images from the large photodetector: one read with low gain (LPLG) and a second with high gain (LPHG). The third image is acquired using the small photodetector with low gain (SPLG). Simulated images of these three captures are shown across the top of the figure. The image at the bottom left (Combined) is reconstructed from the three captures. The graph at the lower right shows the log relative voltage from the LPLG sensor (red) and the combined sensor (blue) across a horizontal image line. The saturation of the LPLG data in the tunnel opening is evident; the combined sensor data preserve image contrast across the entire scene. The two curves are displaced vertically from one another for clarity.
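The lower-right graph can be reproduced, in spirit, with a simple profile plot. This sketch assumes the voltage arrays `vLP` (LPLG capture) and `vCombined` (combined reconstruction) from the sketch under Figure 7 above; the row index and vertical offset are illustrative.

```matlab
% Minimal sketch of the profile plot: log relative voltage along one
% horizontal line for the LPLG capture and the combined reconstruction.
row    = round(size(vLP, 1) / 2);           % a horizontal line through the scene
offset = 1;                                 % displace the curves for clarity

plot(log10(vLP(row, :) + eps), 'r'); hold on;
plot(log10(vCombined(row, :) + eps) + offset, 'b');
xlabel('Horizontal position (pixels)'); ylabel('Log relative voltage');
legend('LPLG', 'Combined');
```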

Figures 8 and 9


Figure 8/9 source code

Comparing sensors with RGB and RGBW color filter arrays. Left: The two rows simulate images illuminated at different mean levels (0.1 lux, 0.25 lux). The two columns show simulations of RGB (left) and RGBW (right) sensor images reconstructed using trained Restormer networks for demosaicing and denoising. The image quality of the RGBW reconstructions, particularly at the darker illuminance level, is better than that of the RGB reconstructions. Right: The graphs compare the reconstructed log luminance of the RGB, RGBW, and ground truth (Ideal) measured along the dashed white line. The RGBW reconstructions are superior in darker image regions.
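One way to think about the RGBW sensor is as an RGB sensor with a clear (white) filter added to the color filter array. The sketch below follows ISETCam conventions but is an assumption, not the construction used for Figure 8/9; see the source code link above for the actual sensor definitions.

```matlab
% Minimal sketch (not the repository code): derive an RGBW sensor from a
% Bayer RGB sensor by appending a clear (white) filter and updating the CFA
% pattern.  Treat the parameter details as assumptions.
sensorRGB  = sensorCreate('bayer-rggb');                  % baseline RGB sensor
wave       = sensorGet(sensorRGB, 'wave');

filters    = sensorGet(sensorRGB, 'filter spectra');      % columns: R, G, B
filters(:, end+1) = ones(numel(wave), 1);                 % add a clear (W) filter

sensorRGBW = sensorSet(sensorRGB, 'filter spectra', filters);
sensorRGBW = sensorSet(sensorRGBW, 'filter names', {'r', 'g', 'b', 'w'});
sensorRGBW = sensorSet(sensorRGBW, 'pattern', [1 2; 4 3]); % R G / W B layout (illustrative)
```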
