This repository contains functions and scripts designed to evaluate High Dynamic Range (HDR) imaging sensor architectures, with a focus on nighttime driving scenes. The tools provided here are for researchers and engineers working on HDR imaging systems, particularly in automotive applications.
Z. Liu, D. Shah and B. A. Wandell, "ISETHDR: A Physics-based Synthetic Radiance Dataset for High Dynamic Range Driving Scenes," in IEEE Sensors Journal, doi: 10.1109/JSEN.2025.3550455.
Look at this repository's Wiki for scripts that create the images in that paper.
The code in this repository relies on the following tools:

- ISETCam, a Matlab toolbox for simulating camera imaging systems:

  ```
  git clone https://github.com/ISET/isetcam
  ```

- ISET3D-tiny, a Matlab toolbox for rendering scenes:

  ```
  git clone https://github.com/ISET/iset3d-tiny
  ```
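After cloning, both toolboxes need to be on your Matlab path. A minimal sketch, assuming you cloned the repositories into your current folder:

```matlab
% Add ISETCam and ISET3D-tiny to the Matlab path.
% Paths assume the repositories were cloned into the current directory.
addpath(genpath('isetcam'));
addpath(genpath('iset3d-tiny'));
ieInit;   % initialize the ISETCam session
```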
To run the neural network for denoising and demosaicing, you will also need a Python environment that Matlab can use. Follow the instructions for [creating a Python environment within Matlab](https://github.com/ISET/isetcam/wiki/Related-software) on the ISETCam wiki, and then install the Python libraries listed in `isethdrsensor/utility/python/requirements.txt`. These libraries are used by the neural network code that performs the demosaicing and denoising.
To install the requirements:

```
cd isethdrsensor/utility/python
pip install -r requirements.txt
```
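To confirm that Matlab sees the Python environment where you installed the requirements, you can query it from Matlab. The interpreter path below is a placeholder, and numpy is assumed to be one of the required libraries:

```matlab
% Check which Python interpreter Matlab will use.
pe = pyenv;
disp(pe.Executable)

% If Matlab points at the wrong interpreter, set it explicitly.
% The path below is a placeholder; use the environment where you
% ran pip install.
% pyenv('Version', '/usr/local/bin/python3');

% Verify that one of the required libraries imports cleanly.
py.importlib.import_module('numpy');
```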
The scripts in this repository use HDR driving scenes to simulate the responses of various sensors modeled in ISETCam (a minimal sketch of this flow follows the list below). Our primary focus is on evaluating two specific sensor architectures:
- RGBW Sensor: A sensor architecture that includes red, green, blue, and white pixels to capture a broader range of brightness levels.
- Omnivision 3-Capture Sensor: An Omnivision architecture that combines three captures to extend dynamic range.
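For orientation, here is a minimal sketch of the ISETCam flow the scripts follow: scene, optical image, then sensor response. The `'rgbw'` sensor name is an assumption; check `sensorCreate` for the sensor models available in your ISETCam version, and note that the scripts load HDR driving scenes rather than the default test scene used here:

```matlab
% Minimal ISETCam flow: scene -> optical image -> sensor response.
scene  = sceneCreate;           % placeholder; the scripts load HDR driving scenes
oi     = oiCreate;              % default optics model
oi     = oiCompute(oi, scene);
sensor = sensorCreate('rgbw');  % assumed name; see sensorCreate for options
sensor = sensorSet(sensor, 'exposure time', 0.016);  % 16 ms, illustrative
sensor = sensorCompute(sensor, oi);
sensorWindow(sensor);           % view the mosaicked response
```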
A second contribution is a set of scenes that are rendered into light groups (see the paper for details). By weighting and summing its layers, a light group can be used to create an ISETCam scene spectral radiance for day, dusk, or night conditions.
The light group data sets, which include labeled data, are shared freely through the Stanford Digital Repository. The data organization is described below.
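As an illustration, here is a minimal Matlab sketch of how the light-group layers (listed under the data organization below) might be weighted and summed into a single ISETCam scene. The loader `lightGroupRead` is a hypothetical placeholder for the repository's actual reader, and the weights are illustrative, not the values used in the paper; only `sceneGet`, `sceneSet`, and `sceneWindow` are standard ISETCam calls.

```matlab
% Sketch: weighted sum of light-group layers into one scene radiance.
% lightGroupRead is a hypothetical helper that returns an ISETCam scene
% for one EXR layer; substitute the repository's actual loader.
prefix = '1112153442';
layers = {'headlights','otherlights','streetlights','skymap'};
wgts   = [1.0 1.0 1.0 0.1];   % illustrative night-time weights

scene   = lightGroupRead(sprintf('%s_%s.exr', prefix, layers{1}));
photons = wgts(1) * sceneGet(scene, 'photons');
for ii = 2:numel(layers)
    s       = lightGroupRead(sprintf('%s_%s.exr', prefix, layers{ii}));
    photons = photons + wgts(ii) * sceneGet(s, 'photons');
end
scene = sceneSet(scene, 'photons', photons);   % combined spectral radiance
sceneWindow(scene);                            % visual check
```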
In the lab, we use fiftyone to browse the light groups. Install it with

```
pip install fiftyone
```

Then this command brings up a viewer for browsing the scenes:

```
fiftyone app connect --destination mux.stanford.edu
```

(August 2024: We are still working out the permissions of the program and mux.stanford.edu to enable secure access for viewing the data.)
To run the simulations, ensure that the dependencies above are installed. Then use the provided scripts to evaluate the sensor architectures on different driving scenes. The scripts generate response data and figures that illustrate sensor performance.
The scripts in this repository were used to generate most of the figures in the paper "ISETHDR: A Physics-based Synthetic Radiance Dataset for High Dynamic Range Driving Scenes" (cited above). You can modify the parameters in the scripts to explore different scenarios or to reproduce the figures.
The data required by the /figures and /scripts directories are stored in the Stanford Digital Repository (SDR); feel free to browse it in your web browser.
The scripts automatically pull the corresponding data from the SDR. A data folder is created for the scene data, and a networks folder is created for the pretrained ONNX network used for demosaicing and denoising, as described in the Experiments/RGBW Sensor section of the paper.
The complete dataset is available at https://searchworks.stanford.edu/view/zg292rq7608
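If you want to check the downloaded resources and the Python side manually, a small sketch (run from the repository root; the folder names come from the description above, and `onnxruntime` is assumed to be among the Python requirements):

```matlab
% Sanity checks after running one of the scripts.
assert(isfolder('data'), 'data folder not found; run a script to trigger the download.');
assert(isfolder('networks'), 'networks folder not found; run a script to trigger the download.');

% Confirm the ONNX runtime is importable from Matlab's Python environment.
py.importlib.import_module('onnxruntime');
```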
All rendered scenes are accompanied by metadata files stored in a separate folder. Each scene includes multiple `.exr` layers and a corresponding `.mat` file, all sharing the same numeric prefix (e.g., `1112153442`). This prefix serves as the unique identifier for each scene.

- `*_headlights.exr`: light contribution from headlights
- `*_otherlights.exr`: contributions from nearby cars or ambient sources
- `*_streetlights.exr`: light from fixed streetlights
- `*_skymap.exr`: sky and global illumination
- `*_instanceID.exr`: encodes object instance IDs for segmentation

The associated `.mat` file stores metadata such as the depth map, the segmentation map, and a list of object names.
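A short sketch for browsing one scene's files and inspecting its metadata; the exact `.mat` filename pattern and its field names are assumptions, so the code lists what it finds rather than relying on specific names:

```matlab
% List all files belonging to one scene and inspect its metadata.
prefix = '1112153442';
files  = dir([prefix '_*.exr']);
disp({files.name}')       % the five EXR layers for this scene

% The metadata file is assumed to be <prefix>.mat; adjust if the
% dataset uses a different pattern.
meta = load([prefix '.mat']);
disp(fieldnames(meta))    % e.g., depth map, segmentation map, object names
```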
Contributions are welcome! If you have suggestions for improvements or new features, please feel free to open an issue or submit a pull request.
This project is licensed under the MIT License.
Special thanks to the developers of ISETCam, ISETAuto, and ISET3d for their invaluable tools that made this work possible.