Issue No. 5, September/October 2009 (vol. 15), pp. 841-852
Yuichi Taguchi , The University of Tokyo, Tokyo
Takafumi Koike , Hitachi, Ltd., Kanagawa
Keita Takahashi , The University of Tokyo, Tokyo
Takeshi Naemura , The University of Tokyo, Tokyo
ABSTRACT
The system described in this paper provides a real-time 3D visual experience using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced on the full-color, full-parallax autostereoscopic display, with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all conversion processes are implemented on a GPU using GPGPU techniques. The conversion method also lets a user interactively control the viewing parameters of the displayed image, reproducing the dynamic 3D scene with the desired parameters. This control is performed purely in software, without reconfiguring the hardware system, by changing rendering parameters such as the convergence point of the rendering cameras and the interval between their viewpoints.
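The second conversion step, arranging rendered pixels into an integral photography image, can be illustrated with a minimal NumPy sketch. This is not the authors' GPU implementation: it assumes a simplified, regular rectangular lenslet layout in which each lenslet block contains one pixel from each of the 60 rendered views (the actual display uses an optimized color-filter layout), and the function name and array shapes are hypothetical.

```python
import numpy as np

def interleave_views(views, lens_w, lens_h):
    """Arrange per-view rendered images into an integral photography image.

    views: array of shape (lens_h * lens_w, H, W, C), one rendered image per
    viewing direction, where H x W is the lenslet grid resolution.
    Returns an (H * lens_h, W * lens_w, C) image in which the block behind
    lenslet (i, j) holds pixel (i, j) from every view.
    """
    n, H, W, C = views.shape
    assert n == lens_h * lens_w, "need one view per sub-pixel position"
    # Index views by their 2D sub-pixel position under a lenslet,
    # then move those sub-pixel axes inside each lenslet block.
    v = views.reshape(lens_h, lens_w, H, W, C)
    ip = v.transpose(2, 0, 3, 1, 4).reshape(H * lens_h, W * lens_w, C)
    return ip
```

With this layout, `ip[i * lens_h + a, j * lens_w + b]` equals `views[a * lens_w + b, i, j]`, i.e., each view contributes exactly one pixel per lenslet. A real system would fold this reindexing into the GPU rendering pass rather than materializing 60 intermediate images.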
INDEX TERMS
Virtual reality, three-dimensional displays, display algorithms, image-based rendering.
CITATION
Yuichi Taguchi, Takafumi Koike, Keita Takahashi, Takeshi Naemura, "TransCAIP: A Live 3D TV System Using a Camera Array and an Integral Photography Display with Interactive Control of Viewing Parameters", IEEE Transactions on Visualization & Computer Graphics, vol.15, no. 5, pp. 841-852, September/October 2009, doi:10.1109/TVCG.2009.30