Issue No.07 - July (1998 vol.31)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/2.689676
Computer graphics and computer vision are inverse problems. Traditional computer graphics starts with geometric models as input and produces image sequences; traditional computer vision starts with image sequences as input and produces geometric models. Lately the two fields have been meeting in the middle, and the center--the prize--is the ability to create stunning, photorealistic images in real time. Vision researchers now work backward from images only as far as necessary to build models that capture a scene, without recovering full geometric models. Graphics researchers now work with hybrid geometry-and-image models, which treat images as partial results and reuse them to take advantage of similarities in the image stream. Several current trends make this an exciting time for image synthesis:

- The combined graphics and vision approaches have a hybrid vigor, much of which stems from sampled representations. Captured scenes (enhanced by vision research) yield richer rendering and modeling methods for graphics than methods that synthesize everything from scratch.
- Exploiting temporal and spatial coherence (similarities in images) through layers and related techniques is boosting runtime performance.
- The explosion in PC graphics performance is making powerful computational techniques more practical.

This article surveys cutting-edge work in this exciting field, some of which will debut at Siggraph 98.
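To make the coherence idea concrete, the following is a minimal sketch (not the article's implementation) of reusing a cached image layer across frames: a layer is rendered once at full cost, then cheaply translated for nearby frames, and re-rendered only when the object has drifted too far. All function names, the 1-D "image" representation, and the drift tolerance are illustrative assumptions.

```python
# Hypothetical sketch of temporal coherence via cached image layers.
# A layer rendered at one frame is reused at later frames by a cheap
# 2-D warp (here simplified to a 1-D pixel shift); the layer is only
# re-rendered when the drift exceeds a tolerance.

def render_layer(object_x, width=16):
    """Expensive 'full render': a 1-D strip with the object at object_x."""
    return [1 if i == object_x else 0 for i in range(width)]

def shift_layer(layer, dx):
    """Cheap reuse: translate the cached strip by dx pixels."""
    width = len(layer)
    out = [0] * width
    for i, v in enumerate(layer):
        j = i + dx
        if 0 <= j < width:
            out[j] = v
    return out

def frame_sequence(positions, tolerance=3):
    """Warp the cached layer while drift <= tolerance, else re-render.

    Returns the frames after the first, plus the number of full renders.
    """
    cached = render_layer(positions[0])
    cached_at = positions[0]
    renders = 1
    frames = []
    for x in positions[1:]:
        drift = x - cached_at
        if abs(drift) <= tolerance:
            frames.append(shift_layer(cached, drift))  # reuse, no re-render
        else:
            cached = render_layer(x)  # coherence broken: pay full cost
            cached_at = x
            renders += 1
            frames.append(cached)
    return frames, renders
```

For a slowly moving object, most frames are produced by the cheap shift, so the number of full renders stays well below the number of frames; real layered renderers apply the same idea with affine or perspective warps per layer.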
Jed Lengyel, "The Convergence of Graphics and Vision", Computer, vol.31, no. 7, pp. 46-53, July 1998, doi:10.1109/2.689676