Issue No. 4 - April 2012 (vol. 18), pp. 573-580
Yanli Liu , Coll. of Comput. Sci., Sichuan Univ., Chengdu, China
X. Granier , LaBRI, INRIA Bordeaux Sud-Ouest, Bordeaux, France
ABSTRACT
In augmented reality, one of the key tasks in achieving a convincing visual consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. Since outdoor illumination depends largely on the weather, the lighting conditions may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations in videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight from a sparse set of planar feature points extracted from each frame. To address inevitable feature misalignments, a set of constraints is introduced to select the most reliable points. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are then estimated through an optimization process. We validate our technique on a set of real-life videos and show that the results obtained with our estimates remain visually coherent along the video sequences.
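As a rough illustration of the kind of per-frame estimation the abstract describes, the sketch below fits relative sun and sky intensities to tracked planar feature points by linear least squares, with a quadratic pull toward the previous frame's estimate standing in for the temporal-coherence term. It is a minimal sketch, not the paper's method: it assumes a simplified Lambertian model, a known sun direction, and a single reference frame, and all names (estimate_sun_sky, obs, ref, cos_sun) are hypothetical. The paper's actual formulation, including reliable-feature selection and the spatial-coherence term, differs in its details.

import numpy as np

# Minimal sketch, NOT the paper's implementation: a Lambertian model,
# a known sun direction, and a single reference frame are assumed.
# All names here (estimate_sun_sky, obs, ref, cos_sun) are hypothetical.

def estimate_sun_sky(obs, ref, cos_sun, prev=None, smooth=0.1):
    """Fit relative sun/sky intensities (s, a) for one frame.

    Model: obs[k] ~= (s * max(0, cos_sun[k]) + a) * ref[k] over the
    reliable planar feature points k, solved by linear least squares.
    A quadratic pull toward the previous frame's estimate `prev`
    stands in for the temporal-coherence term of the abstract.
    """
    c = np.maximum(0.0, np.asarray(cos_sun, dtype=float))
    r = np.asarray(ref, dtype=float)
    A = np.column_stack([r * c, r])      # columns multiply the unknowns [s, a]
    b = np.asarray(obs, dtype=float)
    if prev is not None:                 # temporal-smoothness rows
        w = np.sqrt(smooth)
        A = np.vstack([A, w * np.eye(2)])
        b = np.concatenate([b, w * np.asarray(prev, dtype=float)])
    (s, a), *_ = np.linalg.lstsq(A, b, rcond=None)
    return s, a

# Toy check with five synthetic feature points (ground truth s=0.8, a=0.3).
ref = np.array([0.5, 0.7, 0.6, 0.4, 0.9])
cos_sun = np.array([0.9, 0.2, 0.6, 0.0, 0.8])
obs = (0.8 * np.maximum(0.0, cos_sun) + 0.3) * ref
print(estimate_sun_sky(obs, ref, cos_sun, prev=(0.75, 0.35)))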
INDEX TERMS
video signal processing, augmented reality, cameras, feature extraction, image sequences, lighting, object tracking, optimization, online tracking, outdoor lighting variation, moving cameras, visual appearance consistency, virtual objects, video scenes, lighting conditions, full image-based approach, sunlight and skylight relative intensities, planar feature-point extraction, spatial and temporal coherence, illumination coherence, estimation, three-dimensional displays, buildings, geometry
CITATION
Yanli Liu, X. Granier, "Online Tracking of Outdoor Lighting Variations for Augmented Reality with Moving Cameras", IEEE Transactions on Visualization & Computer Graphics, vol. 18, no. 4, pp. 573-580, April 2012, doi:10.1109/TVCG.2012.53