Issue No. 1, January/February 2011 (vol. 31)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MCG.2011.8
Holly Rushmeier, Yale University
Our ideas about what cameras are and how they're used have changed radically over the past decade. A camera used to be something employed in particular contexts. A professional photographer used a camera to document events, produce portraits, or create fine art. Most other people used cameras for special occasions such as birthdays and holidays or when traveling on vacation. Furthermore, a camera produced only a single image—usually one that the photographer saw some days after pressing the button on the camera. This special issue explores recent advances in the acquisition and use of camera images in conjunction with computer graphics techniques.
The emergence of inexpensive digital cameras initiated this change in our traditional view of photography. Now we see cameras everywhere. People use them on the spur of the moment. The Internet is flooded with digital photographs and video. We also see digital cameras that do more than produce 24-bit images. The field of computational photography is exploiting digital technology's versatility to produce variations of traditional cameras. These new computational cameras record data that let photographers create many images per button press.
Digital cameras' availability and advanced capabilities have made them an essential tool in computer graphics. The availability of large numbers of images and of images with data beyond just three color channels has enabled the expansion of image-based modeling and rendering—that is, the creation of novel imagery from existing imagery. Digital cameras' ease of use has opened up this creation to a wider range of users and applications. The articles in this special issue provide examples of the many ways camera images combined with computer graphics techniques are expanding our capability to author and share visual experiences.
In This Issue
Traditionally, digital images in computer graphics have served as texture maps in synthetic scenes. Libraries of images have been maintained to provide high-resolution detail on objects, such as the wrinkles on leather, or to provide backdrops, such as the view of a city through a window. In "Building and Using a Database of One Trillion Natural-Image Patches," Sean Arietta and Jason Lawrence extend this idea to a very large scale. The imagery available on the Internet offers not only full images but also phenomenal numbers of image patches you can use to synthesize textures for applications including generating new digital objects and repairing damaged or incomplete digital images.
Recently, there has been renewed interest in stereo imagery for feature films, or "3D movies." What distinguishes 3D as an effective storytelling device rather than a mere gimmick is user control over the 3D effect. Two articles in this issue address this topic. In "A Viewer-Centric Editor for 3D Movies," Sanjeev Koppal and his colleagues describe methods to plan 3D acquisition and modify the results. There's also great interest in using digital techniques to create stereo pairs from traditional 2D image sequences. In "Depth Director: A System for Adding Depth to Movies," Ben Ward and his colleagues add user control of depth perception to the extraction of stereo pairs from images.
Initially, digital cameras had lower resolution than traditional film and thus appeared inferior. In "A Digital Gigapixel Large-Format Tile-Scan Camera," Moshe Ben-Ezra describes a digital camera that produces images of extraordinarily high spatial resolution. Although high-resolution cameras aren't new, this camera offers a cost-effective solution for new application areas such as cultural-heritage documentation, in which requirements for detail and accuracy are high but budgets are always low.
An "ordinary" digital camera captures just three color channels (red, green, and blue) in some limited dynamic range of light intensity. These limits are predetermined by assumptions about how the image will be displayed. In "Using Focused Plenoptic Cameras for Rich Image Capture," Todor Georgiev and his colleagues describe how to capture much richer data from the light incident on the camera. They demonstrate how to capture high-dynamic-range intensity, light polarization, or multispectral data in a single camera exposure. Capture in a single exposure means that you can capture this rich data for dynamic scenes. The results are image data that you can render in many different ways.
Large numbers of networked cameras enable "social photography." In "Social Snapshot: A System for Temporally Coupled Social Photography," Robert Patro and his colleagues demonstrate how multiple users with synchronized cell phone cameras can record a dynamic event that you can reconstruct in 3D. They introduce both a new technical contribution and a new idea for social interaction.
We thank guest editors Ramesh Raskar and Irfan Essa for their efforts in producing this special issue. We expect to see many future articles on this theme as camera technologies and their availability, combined with computer graphics methods and techniques, continue to expand.
Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
Holly Rushmeier is a professor of computer science at Yale University. Her research interests are material and texture models, recovering shape and reflectance, sketching and alternative design techniques, modeling and interacting with architectural scale scenes, applications of perception to computer graphics, and cultural-heritage applications of computer graphics. Rushmeier has a PhD in mechanical engineering from Cornell University. She's a former associate editor in chief of IEEE Computer Graphics and Applications. Contact her at firstname.lastname@example.org.