Real-Time Video-Based Modeling and Rendering of 3D Scenes
March/April 2002 (vol. 22 no. 2)
pp. 66-73

In research on 3D image communication and virtual reality, synthesizing arbitrary views of a scene has become an important technical issue. Given a structural model of an object (such as a polygon or volume model), it is relatively easy to synthesize arbitrary views; generating such a structural model, however, is not. To avoid this problem, research has progressed on techniques known collectively as image-based modeling and rendering (IBMR), and a variety of specific IBMR techniques have been studied to date. In this article, we target 3D scenes in motion and describe a system that performs all processing, from image capture to interactive display, in real time. We call it a video-based rendering (VBR) system, since it uses video sequences instead of static images. The experimental results reported here show that our method is useful for interactive 3D rendering of real scenes.
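The core idea behind this family of techniques can be sketched in a few lines: rather than rendering from a geometric model, a novel view is synthesized directly from the images of nearby real cameras. The fragment below is an illustrative simplification (the function name, the 1D camera arrangement, and the linear-blend scheme are assumptions for exposition, not the authors' actual method), blending the two cameras that bracket a virtual viewpoint:

```python
def interpolate_view(images, positions, x):
    """Synthesize a virtual view at x from a 1D camera array.

    images    -- list of images, each a flat list of pixel intensities
    positions -- camera x-coordinates, sorted ascending
    x         -- desired virtual viewpoint along the camera baseline
    """
    # Clamp viewpoints outside the camera baseline to the end cameras.
    if x <= positions[0]:
        return list(images[0])
    if x >= positions[-1]:
        return list(images[-1])
    # Find the pair of real cameras bracketing x and blend their
    # images, weighted by the viewpoint's distance to each camera.
    for i in range(len(positions) - 1):
        if positions[i] <= x <= positions[i + 1]:
            t = (x - positions[i]) / (positions[i + 1] - positions[i])
            return [(1 - t) * a + t * b
                    for a, b in zip(images[i], images[i + 1])]

# Two cameras with 4-pixel "images"; virtual view halfway between them.
left  = [0.0, 0.0, 1.0, 1.0]
right = [1.0, 1.0, 0.0, 0.0]
view = interpolate_view([left, right], [0.0, 1.0], 0.5)
```

In a VBR system this interpolation runs per frame on live video streams instead of static images, which is what makes real-time capture-to-display processing the central engineering challenge.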

Index Terms:
light field, camera array, layered representation, image-based modeling and rendering, video-based rendering
Takeshi Naemura, Junji Tago, Hiroshi Harashima, "Real-Time Video-Based Modeling and Rendering of 3D Scenes," IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 66-73, March-April 2002, doi:10.1109/38.988748