2018 IEEE International Conference on Computational Photography (ICCP)
Pittsburgh, PA, USA
May 4, 2018 to May 6, 2018
Guy Satat, MIT Media Lab
Matthew Tancik, MIT Media Lab
Ramesh Raskar, MIT Media Lab
Imaging through fog has important applications in self-driving cars, augmented driving, airplanes, helicopters, drones, and trains. Here we show that time profiles of light reflected from fog follow a distribution (Gamma) that is different from that of light reflected from objects occluded by fog (Gaussian). This makes it possible to distinguish background photons reflected from the fog from signal photons reflected from the occluded object. Based on this observation, we recover the reflectance and depth of a scene obstructed by dense, dynamic, and heterogeneous fog. For practical use cases, the imaging system is designed in optical reflection mode with a minimal footprint and is based on LIDAR hardware. Specifically, we use a single photon avalanche diode (SPAD) camera that time-tags individual detected photons. A probabilistic computational framework is developed to estimate the fog properties from the measurement itself, without prior knowledge. Other solutions are based on radar, which suffers from poor resolution (due to its long wavelength), or on time gating, which suffers from a low signal-to-noise ratio. The suggested technique is experimentally evaluated over a wide range of fog densities created in a fog chamber. It demonstrates recovering objects 57 cm away from the camera when the visibility is 37 cm. In that case it recovers depth with a resolution of 5 cm and scene reflectance with an improvement of 4 dB in PSNR and a 3.4x improvement in SSIM reconstruction quality over time-gating techniques.
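The core idea above (fog backscatter arrival times follow a Gamma distribution, while photons from the occluded object cluster as a Gaussian around its time of flight) can be sketched with synthetic data. This is a minimal illustration, not the paper's actual estimator: the photon counts, Gamma/Gaussian parameters, and the simple fit-and-subtract pipeline are all hypothetical placeholders for the probabilistic framework described in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical measurement: fog backscatter (Gamma-distributed arrival
# times) mixed with a sparse Gaussian signal at the object's round-trip
# time. All parameters are illustrative, not from the paper.
n_fog, n_sig = 20000, 2000
fog_times = rng.gamma(shape=2.0, scale=1.2, size=n_fog)       # ns
true_tof = 3.8                                                # ns
sig_times = rng.normal(loc=true_tof, scale=0.1, size=n_sig)   # ns
times = np.concatenate([fog_times, sig_times])

# Step 1: estimate the Gamma background from the measurement itself.
# The signal is sparse relative to the fog, so fitting all photons
# roughly recovers the background model (the paper is more refined).
shape_hat, loc_hat, scale_hat = stats.gamma.fit(times, floc=0)

# Step 2: histogram the time-tagged photons and subtract the expected
# Gamma background to expose the Gaussian signal peak.
edges = np.linspace(0.0, 10.0, 201)
counts, _ = np.histogram(times, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
expected_bg = (len(times)
               * stats.gamma.pdf(centers, shape_hat, loc_hat, scale_hat)
               * np.diff(edges))
residual = np.clip(counts - expected_bg, 0.0, None)

# Step 3: the residual peak gives the object's time of flight,
# which maps directly to depth via the speed of light.
tof_hat = centers[np.argmax(residual)]
```

In this toy setting the residual peak lands near the true 3.8 ns time of flight even though the fog contributes ten times more photons than the object, which is the intuition behind separating the two distributions.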
Photonics, Scattering, Signal to noise ratio, Estimation, Media, Cameras
G. Satat, M. Tancik and R. Raskar, "Towards photography through realistic fog," 2018 IEEE International Conference on Computational Photography (ICCP), Pittsburgh, PA, USA, 2018, pp. 1-10.