
Guest Editor's Introduction: Computational Photography—The Next Big Step

Oliver Bimber, Bauhaus-University Weimar

Pages: 28–29

Abstract—Computational photography extends digital photography by providing the capability to record much more information and by offering the possibility of processing this information afterward.

The transition from analog to digital photography has certainly been a big step, and it is almost complete. Although a few professionals still prefer film, most of us have switched to digital cameras. Digital photography has opened many new possibilities, such as immediate image preview, postediting, and recording of short movie clips. The megapixel resolutions of today's digital cameras easily keep up with the quality of analog film for a broad range of consumer and professional applications.

This was reason enough for some of the major camera manufacturers, such as Kodak, Canon, and Nikon, to downscale or cease their production of analog film cameras and film. Yet another big step lies ahead of us, and it is not far off in the distance: computational photography.

Computational photography extends digital photography by providing the capability to record much more information and by offering the possibility of processing this information afterward.

Analog and digital photography share one main limitation: they record only the intensities and colors of light rays that a simple lens system projects linearly onto the image plane, at a single point in time and under fixed scene illumination. This is still essentially the principle of the camera obscura, known since antiquity. Most of the light rays propagating through space and time are therefore never recorded.

Dennis Gabor and Gabriel Jonas Lippmann, for example, addressed part of this problem on the analog side when they invented holography and what is now known as Lippmann photography. Digital recordings, however, can be postprocessed computationally. Computational photography will therefore enable features such as 3D recording, digital refocusing, synthetic re-illumination, improved motion compensation and noise reduction, and much more.

In this Issue

In "Computational Cameras: Redefining the Image," Shree K. Nayar describes several examples of computational cameras. Optical extensions, such as curved mirrors or spectral filters, combined with ingenious computer vision techniques, enable wide-angle, high-dynamic-range, multispectral, and depth imaging. While programmable imaging lets a single imaging system emulate several specialized functionalities, programmable illumination realizes smart flashes. Nayar provides examples of both.

Most digital cameras can capture short movie sequences. Instead of simply playing these back, however, future cameras will register the corresponding video frames into a spacetime slab. This data structure, together with appropriate processing techniques, offers higher image quality (less noise, larger depth of field, higher dynamic range) and opens completely new possibilities, such as consistent group shots, motion-invariant image stitching, and playback of motion loops. Michael F. Cohen and Richard Szeliski describe this technique in "The Moment Camera."
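As a rough illustration of the kind of processing a spacetime slab enables (a minimal sketch, not code from the article; the function name and the choice of a per-pixel temporal median are illustrative assumptions), noise in already-registered frames can be suppressed by fusing them along the time axis:

```python
import numpy as np

def fuse_slab(frames):
    """Collapse a registered spacetime slab of frames into one still.

    frames: sequence of aligned grayscale frames, stacked to (T, H, W).
    The per-pixel median over time suppresses sensor noise and
    transients that appear in only a few frames.
    """
    slab = np.asarray(frames, dtype=np.float64)
    return np.median(slab, axis=0)

# Demo: a constant scene corrupted by independent noise in every frame.
rng = np.random.default_rng(0)
scene = np.full((8, 8), 100.0)
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(31)]
fused = fuse_slab(frames)
```

With 31 frames, the fused result lies much closer to the true scene than any single noisy frame; a real system would first align the frames and apply more sophisticated per-pixel selection.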

Light that a point in space reflects or emits usually travels in all unoccluded directions. Conventional cameras, however, capture only the small subset of those rays that passes through their optical systems. In "Light Fields and Computational Imaging," Marc Levoy discusses new imaging devices that capture a much larger set of light rays, parameterized by the directions in which they travel: the 4D light field. This novel concept revolutionizes digital imaging in many areas and enables new applications, such as multiperspective panoramas and synthetic aperture photography.
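One light-field operation can be sketched in a few lines: synthetic refocusing by shifting and averaging sub-aperture views. The function and the unit-disparity toy scene below are illustrative assumptions, not Levoy's implementation:

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing of a sampled 4D light field.

    views: dict mapping sub-aperture coordinates (u, v) to 2D images.
    alpha: chosen focal plane; each view is shifted in proportion to
           its (u, v) position before averaging. Scene points whose
           parallax matches alpha re-align and stay sharp; points at
           other depths are averaged across positions and blur.
    """
    acc = None
    for (u, v), img in views.items():
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Toy light field: a single bright point with unit disparity, i.e. it
# appears at (4 - u, 4 - v) in the view from aperture position (u, v).
views = {}
for u in (-1, 0, 1):
    for v in (-1, 0, 1):
        img = np.zeros((9, 9))
        img[4 - u, 4 - v] = 1.0
        views[(u, v)] = img

in_focus = refocus(views, alpha=1.0)    # shifts cancel the parallax
out_of_focus = refocus(views, alpha=0.0)
```

At `alpha=1.0` all nine copies of the point land on the same pixel and it stays sharp; at `alpha=0.0` they remain scattered and the point is blurred over the aperture.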

Another limitation of conventional cameras is that they illuminate scene points with light rays coming from only a single direction: the integrated flash. In addition to capturing light rays propagating from the scene in multiple directions, advanced lighting systems can illuminate the scene with light rays coming from multiple directions. Paul Debevec describes such a system, called the Light Stage, in "Virtual Cinematography: Relighting through Computation." It enables virtual relighting of previously captured images or video sequences.
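The principle that makes such relighting possible is the linearity of light transport: the image under any combination of light sources is a weighted sum of images captured with each source switched on alone. A minimal sketch (the function name and data are illustrative, not the Light Stage software):

```python
import numpy as np

def relight(basis_images, weights):
    """Synthesize a new illumination from one-light-at-a-time captures.

    Because light transport is linear, weighting and summing the basis
    images yields the image the camera would have recorded under the
    corresponding mixture of light sources.
    """
    stack = np.stack([np.asarray(b, dtype=np.float64) for b in basis_images])
    return np.tensordot(np.asarray(weights, dtype=np.float64), stack, axes=1)

# Two basis captures (tiny grayscale images) and a new lighting mix:
key = np.array([[10.0, 20.0], [30.0, 40.0]])   # only light 1 on
fill = np.array([[1.0, 2.0], [3.0, 4.0]])      # only light 2 on
relit = relight([key, fill], [0.5, 2.0])       # dim light 1, boost light 2
```

Real systems capture hundreds of such basis images on a spherical rig and choose the weights from a target lighting environment.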


The articles in this special issue impressively outline how computational photography transforms photography from single-instant, single-direction imaging toward multiple-instant, multiple-direction imaging and illumination. I believe this step is as big as the step from analog to digital photography. It will open a variety of new possibilities for professionals and novices alike. Having just bought a new D-SLR camera, I'm already looking forward to selling it on eBay when I buy my first C-SLR model.

About the Author

Oliver Bimber is a junior professor of augmented reality at Bauhaus-University Weimar, Germany. His research interests include display technologies, computer graphics, and computer vision. Contact him at