2015 International Conference on 3D Vision (3DV) (2015)
Oct. 19, 2015 to Oct. 22, 2015
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/3DV.2015.40
This paper presents a system for 3D reconstruction of large-scale outdoor scenes based on monocular motion stereo. Ours is the first such system to run at interactive frame rates on a mobile device (Google Project Tango Tablet), thus allowing a user to reconstruct scenes "on the go" by simply walking around them. We utilize the device's GPU to compute depth maps using plane sweep stereo. We then fuse the depth maps into a global model of the environment, represented as a truncated signed distance function in a spatially hashed voxel grid. We observe that, in contrast to reconstructing objects within a small volume of interest or using the nearly outlier-free data provided by depth sensors, free-space measurements are less effective for suppressing outliers in unbounded large-scale scenes. Consequently, we propose a set of simple filtering operations to remove unreliable depth estimates and experimentally demonstrate the benefit of strongly filtering depth maps. We extensively evaluate the system on both real and synthetic datasets.
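To make the fusion step in the abstract concrete, below is a minimal, illustrative sketch of TSDF integration into a spatially hashed (sparse) voxel grid. It is not the paper's implementation: the voxel size, truncation distance, weighting scheme, and the orthographic ray simplification are all assumptions chosen for brevity; a Python dict stands in for the GPU hash table.

```python
import math

VOXEL_SIZE = 0.05   # assumed voxel edge length in metres
TRUNCATION = 0.15   # assumed truncation distance in metres
MIN_WEIGHT = 1.0    # assumed minimum weight for a voxel to be trusted

class HashedTSDF:
    def __init__(self):
        # Sparse storage: only voxels inside the truncation band of some
        # observation are allocated. The dict plays the role of the
        # spatial hash table from the paper's voxel-hashing representation.
        self.voxels = {}  # (ix, iy, iz) -> (tsdf, weight)

    def integrate(self, x, y, depth, weight=1.0):
        """Fuse one depth measurement along a ray parallel to the z axis
        (an orthographic simplification of the real perspective camera)."""
        ix = int(round(x / VOXEL_SIZE))
        iy = int(round(y / VOXEL_SIZE))
        lo = int(math.floor((depth - TRUNCATION) / VOXEL_SIZE))
        hi = int(math.ceil((depth + TRUNCATION) / VOXEL_SIZE))
        for iz in range(lo, hi + 1):
            z = iz * VOXEL_SIZE
            sdf = depth - z                          # > 0 in front of the surface
            tsdf = max(-1.0, min(1.0, sdf / TRUNCATION))
            t, w = self.voxels.get((ix, iy, iz), (0.0, 0.0))
            # Standard weighted running-average TSDF update rule.
            self.voxels[(ix, iy, iz)] = ((t * w + tsdf * weight) / (w + weight),
                                         w + weight)

    def surface_depth(self, x, y):
        """Locate the zero crossing of the TSDF along the ray (x, y),
        i.e. the fused surface estimate, by linear interpolation."""
        ix = int(round(x / VOXEL_SIZE))
        iy = int(round(y / VOXEL_SIZE))
        ray = sorted((iz, tw) for (vx, vy, iz), tw in self.voxels.items()
                     if vx == ix and vy == iy and tw[1] >= MIN_WEIGHT)
        for (iz0, (t0, _)), (iz1, (t1, _)) in zip(ray, ray[1:]):
            if t0 > 0.0 >= t1:                       # sign change: surface here
                frac = t0 / (t0 - t1)
                return (iz0 + frac * (iz1 - iz0)) * VOXEL_SIZE
        return None

tsdf = HashedTSDF()
# Two noisy observations of a surface at depth 1.0 m along the same ray;
# fusion averages the noise away.
tsdf.integrate(0.0, 0.0, 0.98)
tsdf.integrate(0.0, 0.0, 1.02)
print(tsdf.surface_depth(0.0, 0.0))
```

The sparse keying is what makes the representation scale to unbounded outdoor scenes: memory grows with observed surface area rather than with the bounding volume. The paper's filtering operations would act on the depth maps before `integrate` is ever called.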
Cameras, Three-dimensional displays, Uncertainty, Image reconstruction, Real-time systems, Mobile handsets, Sensors
T. Schöps, T. Sattler, C. Häne and M. Pollefeys, "3D Modeling on the Go: Interactive 3D Reconstruction of Large-Scale Scenes on Mobile Devices," 2015 International Conference on 3D Vision (3DV), Lyon, France, 2015, pp. 291-299.