Issue No. 3, May/June 2009 (vol. 15)
pp. 355-368
Taehee Lee, University of California, Los Angeles
Tobias Höllerer, University of California, Santa Barbara
ABSTRACT
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
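The abstract describes a pipeline in which a fast frame-to-frame optical-flow tracker runs at full frame rate while a slower invariant-feature detector runs in parallel, with capture and rendering as separate synchronized stages. The sketch below illustrates that producer/consumer structure only; it is not the authors' implementation, and all stage names, queue shapes, and the every-10th-frame detection schedule are illustrative assumptions.

```python
import queue
import threading

# Illustrative sketch (not the paper's code) of a synchronized multi-threaded
# tracking pipeline: capture feeds every frame to a fast tracking stage and a
# subset of frames to a slower detection stage, and a render loop consumes the
# tracking results. Frames are stand-in integers; real stages would carry
# images, feature lists, and camera poses.

def capture_stage(frames, track_q, detect_q):
    """Grab frames and fan them out to both the fast and the slow path."""
    for frame in frames:
        track_q.put(frame)           # fast path: frame-to-frame optical flow
        if frame % 10 == 0:          # slow path: invariant features, every 10th frame (assumed schedule)
            detect_q.put(frame)
    track_q.put(None)                # sentinel: end of stream
    detect_q.put(None)

def tracking_stage(track_q, render_q):
    """Runs at full frame rate; forwards a result per frame to the renderer."""
    while True:
        frame = track_q.get()
        if frame is None:
            render_q.put(None)
            break
        render_q.put(("tracked", frame))

def detection_stage(detect_q, results):
    """Runs asynchronously on selected frames without stalling tracking."""
    while True:
        frame = detect_q.get()
        if frame is None:
            break
        results.append(("detected", frame))

def run_pipeline(num_frames=30):
    track_q, detect_q, render_q = queue.Queue(), queue.Queue(), queue.Queue()
    detections = []
    threads = [
        threading.Thread(target=capture_stage,
                         args=(range(num_frames), track_q, detect_q)),
        threading.Thread(target=tracking_stage, args=(track_q, render_q)),
        threading.Thread(target=detection_stage, args=(detect_q, detections)),
    ]
    for t in threads:
        t.start()
    rendered = []
    while True:                      # the render loop is the final consumer
        item = render_q.get()
        if item is None:
            break
        rendered.append(item)
    for t in threads:
        t.join()
    return rendered, detections
```

The key property this structure models is that the slow detection stage never blocks the per-frame tracking/rendering path, which is how such a system can keep real-time output while heavyweight feature detection catches up in the background.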
INDEX TERMS
Virtual reality, Scene Analysis
CITATION
Taehee Lee, Tobias Höllerer, "Multithreaded Hybrid Feature Tracking for Markerless Augmented Reality", IEEE Transactions on Visualization & Computer Graphics, vol. 15, no. 3, pp. 355-368, May/June 2009, doi:10.1109/TVCG.2008.190