2011 10th IEEE Int'l Symp. Mixed and Augmented Reality, pp. 127–136

[Figure: agile sensor-motion-based reconstruction of the same scene, with the same reconstruction volume but different input images.]


KinectFusion: Real-time dense surface mapping and tracking

by Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon

We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available.
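The "single global implicit surface model" in the abstract is a truncated signed-distance function (TSDF) volume, into which each new depth frame is fused as a weighted running average per voxel. The following is a minimal NumPy sketch of that per-voxel update; the truncation distance, the weight cap, and the function name `fuse_tsdf` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

TRUNC = 0.1        # truncation band around the surface, in metres (assumed value)
MAX_WEIGHT = 64.0  # cap so the model stays responsive to new measurements (assumed value)

def fuse_tsdf(tsdf, weight, sdf_meas, w_meas=1.0):
    """Fuse one signed-distance measurement per voxel into the global model.

    tsdf, weight : current global TSDF values and accumulated weights (same-shape arrays)
    sdf_meas     : signed distance from each voxel to the newly observed surface
    w_meas       : weight of the new measurement
    Returns the updated (tsdf, weight) arrays.
    """
    # Truncate: only distances within a narrow band of the surface are meaningful.
    f = np.clip(sdf_meas, -TRUNC, TRUNC)
    # Weighted running average of all measurements seen so far.
    new_weight = weight + w_meas
    new_tsdf = (weight * tsdf + w_meas * f) / new_weight
    # Cap the weight so very old observations do not dominate forever.
    return new_tsdf, np.minimum(new_weight, MAX_WEIGHT)

# Fusing two noisy observations of the same surface averages out the noise:
t, w = fuse_tsdf(np.zeros(1), np.zeros(1), np.array([0.05]))
t, w = fuse_tsdf(t, w, np.array([0.03]))
print(t[0])  # 0.04
```

The surface itself is recovered as the zero level set of the fused TSDF, and the averaging is what lets many individually noisy Kinect frames converge to a smooth model.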
