Virtualized Reality: Perspectives on 4D Digitization of Dynamic Events
Issue No. 3 - May/June 2007 (vol. 27)
pp. 32-40
Takeo Kanade, Carnegie Mellon University
P.J. Narayanan, International Institute of Information Technology
ABSTRACT
Digitally recording dynamic events, such as a sports event, a ballet performance, or a lecture, so that they can be experienced in a spatiotemporally distant and arbitrary setting requires 4D capture: three dimensions for their geometry and appearance, evolving over the fourth dimension of time. Cameras are well suited to this task because they are nonintrusive, universal, and inexpensive, and computer vision techniques have advanced sufficiently to make 4D capture possible. The authors present the process of 4D digitization of dynamic events, using the Virtualized Reality system developed at CMU as the running example, together with a discussion of the general problem of digitizing dynamic events.
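Purely as an illustration of what such a 4D representation might look like in code (this is not the Virtualized Reality implementation; every class, field, and function name below is invented for the sketch), a time-indexed collection of per-frame 3D models recovered from a set of calibrated cameras could be organized as follows in Python:

    # Illustrative sketch only: per-frame 3D geometry plus appearance,
    # indexed by time, as produced by a set of calibrated cameras.
    from dataclasses import dataclass, field
    from typing import Dict, List
    import numpy as np

    @dataclass
    class CalibratedCamera:
        K: np.ndarray          # 3x3 intrinsic matrix
        R: np.ndarray          # 3x3 rotation (world -> camera)
        t: np.ndarray          # translation, 3-vector (world -> camera)

        def project(self, X: np.ndarray) -> np.ndarray:
            """Project a 3D world point to pixel coordinates (pinhole model)."""
            x_cam = self.R @ X + self.t
            x_img = self.K @ x_cam
            return x_img[:2] / x_img[2]

    @dataclass
    class FrameModel:
        """3D geometry and appearance recovered for one time instant."""
        vertices: np.ndarray   # (N, 3) vertex positions
        faces: np.ndarray      # (M, 3) triangle indices
        colors: np.ndarray     # (N, 3) per-vertex RGB appearance

    @dataclass
    class Event4D:
        """A dynamic event: a time-indexed sequence of 3D models."""
        cameras: List[CalibratedCamera]
        frames: Dict[float, FrameModel] = field(default_factory=dict)

        def add_frame(self, time_s: float, model: FrameModel) -> None:
            self.frames[time_s] = model

        def project_vertices(self, time_s: float, cam_index: int) -> np.ndarray:
            """Project one frame's vertices into one camera; a stand-in for
            rendering the model from an arbitrary (possibly virtual) viewpoint."""
            model = self.frames[time_s]
            cam = self.cameras[cam_index]
            return np.array([cam.project(v) for v in model.vertices])

The point of the sketch is only that the captured data pair geometry with appearance at each time sample, and that calibrated camera parameters are what tie the per-camera images to a common 3D world.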
INDEX TERMS
image-based modeling, dynamic event capture, multiview stereo
CITATION
Takeo Kanade and P.J. Narayanan, "Virtualized Reality: Perspectives on 4D Digitization of Dynamic Events," IEEE Computer Graphics and Applications, vol. 27, no. 3, pp. 32-40, May/June 2007, doi:10.1109/MCG.2007.72
REFERENCES
1. T. Kanade et al., "Development of a Video-Rate Stereo Machine," Proc. Int'l Robotics and Systems Conf. (IROS), IEEE CS Press, vol. 3, 1995, pp. 95–100.
2. T. Kanade et al., Video-Rate Z Keying: A New Method for Merging Images, tech. report CMU-RI-TR-95-38, Robotics Inst., Carnegie Mellon Univ., 1995.
3. T. Kanade, P. Rander, and P.J. Narayanan, "Virtualized Reality: Constructing Virtual Worlds from Real Scenes," IEEE MultiMedia, vol. 4, no. 1, 1997, pp. 34–47.
4. P.J. Narayanan, P.W. Rander, and T. Kanade, "Constructing Virtual Worlds Using Dense Stereo," Proc. IEEE Int'l Conf. Computer Vision (ICCV), IEEE CS Press, 1998, pp. 3–10.
5. S. Moezzi, L.-C. Tai, and P. Gerard, "Virtual View Generation for 3D Digital Video," IEEE MultiMedia, vol. 4, no. 1, 1997, pp. 18–26.
6. P.E. Debevec, C.J. Taylor, and J. Malik, "Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach," Proc. 23rd Ann. Conf. Computer Graphics and Interactive Techniques, ACM Press, 1996, pp. 11–20.
7. S.J. Gortler et al., "The Lumigraph," Proc. 23rd Ann. Conf. Computer Graphics and Interactive Techniques, ACM Press, 1996, pp. 43–54.
8. M. Levoy and P. Hanrahan, "Light Field Rendering," Proc. 23rd Ann. Conf. Computer Graphics and Interactive Techniques, ACM Press, 1996, pp. 31–42.
9. G. Godin et al., "Active Optical 3D Imaging for Heritage Applications," IEEE Computer Graphics and Applications, vol. 22, no. 5, 2002, pp. 24–36.
10. M. Levoy et al., "The Digital Michelangelo Project: 3D Scanning of Large Statues," Proc. 27th Ann. Conf. Computer Graphics and Interactive Techniques, ACM Press, 2000, pp. 131–144.
11. T. Oishi et al., "Digital Restoration of the Original Great Buddha and Main Hall of Todaiji Temple," Trans. Virtual Reality Soc. Japan, vol. 10, no. 3, 2005, pp. 429–436.
12. M. Gross et al., "Blue-c: A Spatially Immersive Display and 3D Video Portal for Telepresence," ACM Trans. Graphics, vol. 22, no. 3, 2003, pp. 819–827.
13. S. Wurmlin et al., "3D Video Recorder," Proc. Pacific Conf. Computer Graphics and Applications, IEEE CS Press, 2002, pp. 325–334.
14. J. Carranza et al., "Free-Viewpoint Video of Human Actors," ACM Trans. Graphics, vol. 22, no. 3, 2003, pp. 569–577.
15. C.L. Zitnick et al., "High-Quality Video View Interpolation Using a Layered Representation," ACM Trans. Graphics, vol. 23, no. 3, 2004, pp. 600–608.
16. M. Tanimoto, "Free Viewpoint Television for 3D Scene Reproduction and Creation," Proc. Conf. Computer Vision and Pattern Recognition Workshop (CVPRW), IEEE CS Press, 2006, p. 172; http://www.tanimoto.nuee.nagoya-u.ac.jp.
17. P.J. Narayanan, P.W. Rander, and T. Kanade, Synchronizing and Capturing Every Frame from Multiple Cameras, tech. report CMU-RI-TR-95-25, Robotics Inst., Carnegie Mellon Univ., 1995.
18. R. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses," IEEE J. Robotics and Automation, vol. 3, no. 4, 1987, pp. 323–344.
19. Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000, pp. 1330–1334.
20. G.K.M. Cheung, S. Baker, and T. Kanade, "Shape-from-Silhouette Across Time Part II: Applications to Human Modeling and Markerless Motion Tracking," Int'l J. Computer Vision, vol. 63, no. 3, 2005, pp. 225–245.
21. H. Saito, S. Baba, and T. Kanade, "Appearance-Based Virtual View Generation from Multicamera Videos Captured in the 3D Room," IEEE Trans. Multimedia, vol. 5, no. 3, 2003, pp. 303–316.
22. S. Vedula, S. Baker, and T. Kanade, "Image-Based Spatio-Temporal Modeling and View Interpolation of Dynamic Events," ACM Trans. Graphics, vol. 24, no. 2, 2005, pp. 240–261.
23. T. Kanade et al., "Virtualized Reality: Digitizing a 3D Time-Varying Event as is and in Real Time," Mixed Reality: Merging Real and Virtual Worlds, Y. Ohta and H. Tamura, eds., Ohmsha, Ltd. and Springer, 1999, pp. 41–57.
24. M. Waschbusch, S. Wurmlin, and M. Gross, "Interactive 3D Video Editing," The Visual Computer, vol. 22, no. 9, 2006, pp. 631–641.
25. M. Oshita, "Motion-Capture-Based Avatar Control Framework in Third-Person View Virtual Environments," Proc. ACM SIG Computer-Human Interaction Int'l Conf. Advances in Computer Entertainment Technology, ACM Press, 2006, art. no. 2.
26. S.K. Nayar, "Computational Cameras: Redefining the Image," Computer, vol. 39, no. 8, 2006, pp. 30–38.