Issue No. 05 - May 2009 (vol. 31)
Justin Domke , University of Maryland, College Park
Yiannis Aloimonos , University of Maryland, College Park
Since cameras blur the incoming light during measurement, different images of the same surface do not contain the same information about that surface. Thus, in general, corresponding points in multiple views of a scene have different image intensities. While multiple-view geometry constrains the locations of corresponding points, it does not give relationships between the signals at corresponding locations. This paper offers an elementary treatment of these relationships. We first develop the notion of “ideal” and “real” images, corresponding to, respectively, the raw incoming light and the measured signal. This framework separates the filtering and geometric aspects of imaging. We then consider how to synthesize one view of a surface from another; if the transformation between the two views is affine, it emerges that this is possible if and only if the singular values of the affine matrix are positive. Next, we consider how to combine the information in several views of a surface into a single output image. By developing a new tool called “frequency segmentation,” we show how this can be done despite not knowing the blurring kernel.
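As a minimal illustration of the singular-value condition the abstract states, the sketch below computes the singular values of a hypothetical affine matrix relating two views and checks that they are positive. It assumes NumPy; the matrix A is an invented example, not one taken from the paper.

```python
import numpy as np

# Hypothetical affine matrix relating two views: a rotation
# combined with a uniform contraction (chosen for illustration).
theta = 0.2
A = 0.8 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

# The condition in the abstract concerns the singular values of
# the affine matrix, obtained here from the SVD.
singular_values = np.linalg.svd(A, compute_uv=False)

# For a scaled rotation, both singular values equal the scale factor.
print(singular_values)
print(bool(np.all(singular_values > 0)))
```

For this particular matrix both singular values are 0.8, so the positivity condition holds; a singular (rank-deficient) affine matrix would produce a zero singular value and fail the check.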
Reconstruction, restoration, sharpening and deblurring, smoothing.
Justin Domke, Yiannis Aloimonos, "Image Transformations and Blurring", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 31, no. 5, pp. 811-823, May 2009, doi:10.1109/TPAMI.2008.133