2017 IEEE International Conference on Computer Vision (ICCV) (2017)
Venice, Italy
Oct. 22, 2017 to Oct. 29, 2017
ISSN: 2380-7504
ISBN: 978-1-5386-1032-9
pp: 5459-5467
ABSTRACT
We introduce a novel formulation of temporal color constancy which considers multiple frames preceding the frame for which illumination is estimated. We propose an end-to-end trainable recurrent color constancy network - the RCC-Net - which exploits convolutional LSTMs and a simulated sequence to learn compositional representations in space and time. We use a standard single-frame color constancy benchmark, the SFU Gray Ball Dataset, which can be adapted to a temporal setting. Extensive experiments show that the proposed method consistently outperforms single-frame state-of-the-art methods and their temporal variants.
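To make the idea of temporal illuminant estimation concrete, the sketch below shows a minimal convolutional-LSTM estimator in PyTorch: per-frame CNN features are accumulated over a short clip by a ConvLSTM cell and pooled into a single RGB illuminant estimate. This is a hypothetical illustration only, not the authors' RCC-Net; the class names (ConvLSTMCell, TemporalIlluminantNet), layer sizes, and the simple global-average-pooling head are assumptions for the example.

```python
# Hypothetical minimal sketch of a ConvLSTM-based illuminant estimator.
# NOT the RCC-Net architecture; it only illustrates pooling spatio-temporal
# features from the preceding frames into one RGB illuminant estimate.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Single convolutional LSTM cell (minimal, assumed variant)."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class TemporalIlluminantNet(nn.Module):
    """Per-frame CNN features -> ConvLSTM over time -> global RGB illuminant."""

    def __init__(self, feat_ch=32, hid_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(           # per-frame spatial features
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cell = ConvLSTMCell(feat_ch, hid_ch)
        self.head = nn.Linear(hid_ch, 3)        # RGB illuminant estimate

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feat = self.encoder(frames[:, 0])
        hidden = frames.new_zeros(b, self.cell.hid_ch, feat.shape[2], feat.shape[3])
        cell = hidden.clone()
        for s in range(t):                      # accumulate evidence over the clip
            feat = self.encoder(frames[:, s])
            hidden, cell = self.cell(feat, (hidden, cell))
        pooled = hidden.mean(dim=(2, 3))        # global average pooling
        return F.normalize(self.head(pooled), dim=1)  # unit-norm illuminant colour


if __name__ == "__main__":
    net = TemporalIlluminantNet()
    clip = torch.rand(2, 5, 3, 64, 64)          # 2 clips of 5 preceding frames
    print(net(clip).shape)                      # torch.Size([2, 3])
```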
INDEX TERMS
convolution, image colour analysis, image representation, image sequences, learning (artificial intelligence), recurrent neural nets
CITATION

Y. Qian, K. Chen, J. Nikkanen, J. Kämäräinen and J. Matas, "Recurrent Color Constancy," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 5459-5467.
doi:10.1109/ICCV.2017.582