2017 IEEE International Conference on Computer Vision (ICCV) (2017)
Oct. 22, 2017 to Oct. 29, 2017
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ICCV.2017.582
We introduce a novel formulation of temporal color constancy that considers multiple frames preceding the frame for which the illumination is estimated. We propose an end-to-end trainable recurrent color constancy network, the RCC-Net, which exploits convolutional LSTMs and a simulated sequence to learn compositional representations in space and time. We use a standard single-frame color constancy benchmark, the SFU Gray Ball Dataset, which we adapt to the temporal setting. Extensive experiments show that the proposed method consistently outperforms single-frame state-of-the-art methods and their temporal variants.
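The temporal formulation above — estimating the illuminant of a frame from that frame plus its predecessors — can be illustrated with a minimal baseline sketch. This is not the RCC-Net (which uses trained convolutional LSTMs); it is a hypothetical temporal variant of the classic gray-world estimator, of the kind the paper compares against. The function names `gray_world` and `temporal_estimate` are illustrative, not from the paper.

```python
import numpy as np

def gray_world(frame):
    # Classic gray-world estimate for a single H x W x 3 frame:
    # the mean RGB value, normalized to a unit-length illuminant vector.
    e = frame.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def temporal_estimate(frames):
    # Temporal color constancy, baseline version: estimate the illuminant
    # of the last frame using it and its preceding frames, here by simply
    # averaging the per-frame gray-world estimates over the window.
    est = np.mean([gray_world(f) for f in frames], axis=0)
    return est / np.linalg.norm(est)

# Toy sequence: five 8x8 RGB frames of noise under a reddish illuminant.
rng = np.random.default_rng(0)
frames = [rng.random((8, 8, 3)) * np.array([1.0, 0.7, 0.5]) for _ in range(5)]
print(temporal_estimate(frames))  # unit vector, largest in the red channel
```

The RCC-Net replaces the fixed per-frame statistic with learned convolutional features and the naive averaging with a recurrent (ConvLSTM) aggregation over the sequence, trained end-to-end.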
convolution, image colour analysis, image representation, image sequences, learning (artificial intelligence), recurrent neural nets
Y. Qian, K. Chen, J. Nikkanen, J. Kamarainen and J. Matas, "Recurrent Color Constancy," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 5459-5467.