Issue No. 11 - November (2001 vol. 23)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/34.969116
<p><b>Abstract</b>—Image-based and model-based methods are two representative rendering methods for generating virtual images of objects from their real images. However, both methods still have several drawbacks when we attempt to apply them to mixed reality where we integrate virtual images with real background images. To overcome these difficulties, we propose a new method, which we refer to as the Eigen-Texture method. The proposed method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface generated from a sequence of range images. The Eigen-Texture method is an example of a view-dependent texturing approach which combines the advantages of image-based and model-based approaches: No reflectance analysis of the object surface is needed, while an accurate 3D geometric model facilitates integration with other scenes. This paper describes the method and reports on its implementation.</p>
Image synthesis, texture, appearance, model-based rendering, image-based rendering, principal component analysis.
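The core compression step the abstract describes, reducing many sampled appearances of a surface patch to a small eigenspace via principal component analysis, can be sketched as follows. This is an illustrative toy example, not the authors' implementation; the array sizes, the random stand-in data, and names such as `eigen_textures` are assumptions for demonstration only.

```python
import numpy as np

# Stand-in data: appearances of one texture cell sampled under
# many viewing/illumination conditions, one flattened image per row.
rng = np.random.default_rng(0)
n_views, n_pixels = 60, 256          # e.g. 60 samples of a 16x16 cell (assumed sizes)
A = rng.standard_normal((n_views, n_pixels))

# PCA via SVD of the mean-centered sample matrix.
mean = A.mean(axis=0)                 # average appearance
U, s, Vt = np.linalg.svd(A - mean, full_matrices=False)

k = 8                                 # keep only the first k basis images
eigen_textures = Vt[:k]               # (k, n_pixels) eigenspace basis
coeffs = (A - mean) @ eigen_textures.T  # (n_views, k) per-view coefficients

# An appearance is re-synthesized from its low-dimensional coefficients;
# novel views would interpolate coefficients between sampled conditions.
A_hat = mean + coeffs @ eigen_textures
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

Storing only `mean`, `eigen_textures`, and `coeffs` replaces the full sample stack, which is where the compression comes from; the eigenspace dimension `k` trades storage against reconstruction error.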
Ko Nishino, Yoichi Sato, Katsushi Ikeuchi, "Eigen-Texture Method: Appearance Compression and Synthesis Based on a 3D Model", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 23, no. 11, pp. 1257-1265, November 2001, doi:10.1109/34.969116