This is the PyTorch code for the CVPR 2021 paper "Seeing in Extra Darkness Using a Deep-red Flash".
Seeing in Extra Darkness Using a Deep-red Flash
Jinhui Xiong1*,
Jian Wang2*,
Wolfgang Heidrich1,
Shree Nayar2
1KAUST, 2Snap Research
*denotes equal contribution
CVPR 2021 (Oral)
Bottom left: We propose to use deep-red (e.g. 660 nm) light as a flash for low-light imaging at mesopic light levels. This new flash can be introduced into smartphones with a minor hardware adjustment.
Middle: The spectral sensitivity of the eye in a dimly lit environment (0.01 cd/m^2), the relative responses of the R, G and B color channels of the camera we used, and the emission spectrum of the red LED. Under dim lighting, rod vision dominates, yet the rods are nearly insensitive to deep-red light. Meanwhile, our LED flash can be sensed by the camera, especially in the red and green channels.
Right: The inputs to our videography pipeline are a sequence of alternating no-flash and flash frames, and the outputs are denoised, temporally stable videos with no loss of frame rate.
Python >= 3.7, PyTorch >= 1.3, CUDA >= 10.0
Minor changes to the code may be needed if you run into compatibility issues.
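Before running, you can sanity-check installed versions against these minimums. This is a minimal sketch; the helper `meets_min` and the version strings below are illustrative and not part of this repository (in practice you would pass in `platform.python_version()`, `torch.__version__`, and `torch.version.cuda`):

```python
def meets_min(version: str, minimum: str) -> bool:
    """True if dotted version string `version` is at least `minimum`."""
    def nums(v: str):
        # keep only the numeric dotted components ("1.10.0+cu113" -> [1, 10, 0])
        return [int(p) for p in v.split("+")[0].split(".") if p.isdigit()]
    return nums(version) >= nums(minimum)

# Example checks against the stated requirements (version strings are placeholders):
print(meets_min("3.8.10", "3.7"))        # Python  -> True
print(meets_min("1.10.0+cu113", "1.3"))  # PyTorch -> True
print(meets_min("9.2", "10.0"))          # CUDA    -> False
```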
To test image filtering on our data, we prepared a notebook
evaluate.ipynb
in the image_filtering folder.
Change directory to the video_filtering folder.
You may download the data for scene1 and scene2 for testing. Put the downloaded videos in a newly created "input" folder.
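The setup above can be sketched with the following shell commands, run from the repository root (the video filenames are placeholders; substitute the files you actually downloaded):

```shell
# create the "input" folder that the video filtering script reads from
mkdir -p video_filtering/input
# move the downloaded scene videos into it, e.g.:
# mv ~/Downloads/scene1.mp4 ~/Downloads/scene2.mp4 video_filtering/input/
```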
To test video filtering, you first need to install PWC-Net for flow computation and the Temporal Consistency Network for enhancing temporal consistency.
After installation, you need to import them correctly (sample import code is provided in comments) and run
python video_filtering.py
@inproceedings{xiong2021seeing,
title={Seeing in Extra Darkness Using a Deep-red Flash},
author={Jinhui Xiong and Jian Wang and Wolfgang Heidrich and Shree Nayar},
year={2021},
booktitle={CVPR}
}
Please contact Jinhui Xiong ([email protected]) if you have any questions or comments.