MARCH/APRIL 2002 (Vol. 22, No. 2) pp. 24-25
0272-1716/02/$31.00 © 2002 IEEE
Published by the IEEE Computer Society
Guest Editors' Introduction: Image-Based Modeling, Rendering, and Lighting
In the past seven years, the adjective image-based has become a common fixture in the computer graphics vernacular. As a result, we've witnessed an explosion of new and original uses for photographs and other forms of acquired images in computer graphics applications. We can use collections of images to relight scenes, capture the subtle appearance qualities of real-world objects, derive geometric models, and synthesize images from original viewpoints. Not coincidentally, this explosion has occurred in conjunction with the availability of affordable digital photography. This special issue of IEEE Computer Graphics and Applications spotlights both the progress and recent advances in image-based approaches to computer graphics. It brings together a range of diverse points of view and demonstrates the indelible impact these approaches have had on our field.
HOW FAR WE'VE COME
The introduction of image-based approaches exemplifies one of many dramatic yet, in hindsight, obvious departures in computer graphics. Photorealistic rendering has long been the holy grail of the field. Before image-based methods, the most popular route to improved realism was to incorporate ever more physics into simulation-based rendering systems. Generally, these improvements came at the cost of increased simulation times and new material parameters that were often difficult to obtain. Image-based rendering is a refreshing alternative because it suggests that we might distill the essence of realism from example photographs. Furthermore, it has turned the computational focus away from simulation and toward a more manageable interpolation-based approach.
Image-based computer graphics techniques started gaining traction as a result of two important leaps. First was the realization that we could incorporate images as first-class components of an underlying scene representation. This contrasted with the prevailing practice where images played a more subordinate role, most frequently as static texture maps draped over geometry.
The second was the recognition that we could treat images as actual measurements of both geometric and radiometric information. This interpretation has proven rich with new opportunities, including new approaches to visibility, improved methods for modeling view-dependent variations in the appearance of materials, and the possibility of combining radiant energy measurements from different images to reilluminate scenes. It has also fostered new connections between the computer graphics and computer vision communities, as well as provided new perspectives on the problems of inverse rendering.
Current Applications and Developments
This special issue demonstrates the wide-ranging impact that image-based approaches have had on the computer graphics field. It brings together a collection of articles in which images play a central role. The issue begins with a tutorial by Paul Debevec on the exciting new topic of image-based lighting. This avenue of research is a significant extension of the existing family of image-based techniques. It lets us realistically illuminate computer synthesized models with lighting captured from real-world scenes.
The article "Image-Based Crowd Rendering" by Franco Tecchia, Céline Loscos, and Yiorgos Chrysanthou demonstrates the potential for using image-based methods in conjunction with traditional rendering approaches. Through the clever use of prerendered images, the authors demonstrate how to compose densely populated urban scenes with high visual fidelity while reducing the computational overhead of rendering. Approaches like these suggest one of many potential synergies between today's image-based and traditional approaches.
"Automated Mosaics via Topology Inference," by Steve Hsu, Harpreet S. Sawhney, and Rakesh Kumar, surveys the state of the art in constructing wide-field-of-view panoramas. Panoramas are an area where image-based techniques have already enjoyed significant commercial success: they are widely used to convey impressions of a remote scene and have proven useful in applications including live Web broadcasts, remote teleconferencing, and real estate sales. The article also addresses one of the remaining user-intensive aspects of panoramic content creation.
Within the image-based research community, researchers have expended a significant effort on building systems that let images be rerendered assuming only small changes of viewpoint. However, until recently, researchers have largely overlooked the inherent limitations of an image's resolution in this reconstruction process. The article "Example-Based Super-Resolution" by William T. Freeman, Thouis R. Jones, and Egon C. Pasztor suggests a new approach for interpolating information along this unexplored dimension. It also signifies yet another connection between our field and current research in computer vision.
One sign of maturity for any computer graphics innovation is the appearance of dedicated hardware support. Another current limitation is that image-based methods have so far been applied only to static scenery. Takeshi Naemura, Junji Tago, and Hiroshi Harashima address both issues in their article "Real-Time Video-Based Modeling and Rendering of 3D Scenes." They present a prototype image-acquisition system specifically targeted at capturing image-based representations of dynamic scenes. It's reasonable to expect that such systems might become inexpensive and commonplace in the near future, and the availability of such easily acquired, rich content might stimulate an even more rapid adoption of image-based methods.
At this point, the future of image-based methods in computer graphics appears boundless, with many opportunities for further innovation. We're pleased to see that the field has grown so much over these last few years, and we hope this special issue of IEEE Computer Graphics and Applications stimulates greater interest in this exciting and rapidly moving area.
Paul Debevec is an executive producer at the University of Southern California's Institute for Creative Technologies, where he directs research in virtual actors, virtual environments, and applying computer graphics to creative projects. For the past five years, he has worked on techniques for capturing real-world illumination and illuminating synthetic objects with real light, facilitating the realistic integration of real and computer-generated imagery. He has a BS in math and a BSE in computer engineering from the University of Michigan and a PhD in computer science from the University of California, Berkeley. In August 2001, he received the Significant New Researcher Award from ACM Siggraph for his innovative work in the image-based modeling and rendering field. He is a member of ACM Siggraph and the Visual Effects Society, and is a program cochair for the 2002 Eurographics Workshop on Rendering.
Leonard McMillan is an associate professor in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology. He is a member of the MIT Laboratory for Computer Science and a coleader of the MIT Computer Graphics Group. He is interested in image-based rendering and a wide range of related topics including computer graphics rendering; digital-imaging methods and technologies; 3D display technologies; computer graphics hardware; and the fusion of image processing, multimedia, and computer graphics. He has a BS and an MS in electrical engineering from Georgia Institute of Technology and a PhD in computer science from the University of North Carolina at Chapel Hill.