2018 International Conference on 3D Vision (3DV) (2018)
Verona, Italy
Sep 5, 2018 to Sep 8, 2018
ISSN: 2475-7888
ISBN: 978-1-5386-8425-2
pp: 719-727
ABSTRACT
This work presents a novel deep neural network architecture that generates meshes approximating the surface of a 3D object from a single image. Compared to existing learning-based 3D reconstruction models, our architecture is characterized by (1) deep mesh deformation stacks with a residual network design, in which a simple mesh is transformed to approximate the target surface and undergoes multiple deformation steps that progressively refine the result and reduce the residuals, and (2) parallel paths per deformation step, which can exponentially enrich the generated meshes using a deeper structure and more model parameters. We also propose a novel regularization scheme that encourages the meshes to be both globally complementary, covering the target surface together, and locally consistent with each other. Empirical evaluations on benchmark datasets show the advantage of the proposed architecture over existing methods.
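The cascade structure described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: the learned deformation networks are replaced by stand-in residual displacements, and only the branching arithmetic is shown, where K deformation stages with B parallel branches each yield B**K candidate meshes.

```python
import numpy as np

def residual_deform(vertices, stages, branches_per_stage, rng):
    """Toy sketch of a residual mesh-deformation cascade.

    Each stage adds a per-branch residual displacement to every current
    mesh, so `stages` stages with `branches_per_stage` branches produce
    branches_per_stage ** stages candidate meshes (exponential enrichment).
    In the actual model, the residuals would be predicted by networks
    conditioned on image features; here they are random placeholders.
    """
    meshes = [vertices]
    for _ in range(stages):
        next_meshes = []
        for mesh in meshes:
            for _ in range(branches_per_stage):
                # Stand-in for a learned deformation network: a small
                # residual added to the current vertex positions.
                residual = 0.01 * rng.standard_normal(mesh.shape)
                next_meshes.append(mesh + residual)
        meshes = next_meshes
    return meshes

rng = np.random.default_rng(0)
base = np.zeros((4, 3))  # 4 vertices of a trivial initial mesh
out = residual_deform(base, stages=2, branches_per_stage=2, rng=rng)
print(len(out))  # 2**2 = 4 candidate meshes
```

Each output mesh keeps the vertex count of the initial mesh; refinement comes from accumulating residual displacements rather than adding vertices.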
INDEX TERMS
approximation theory, image reconstruction, learning (artificial intelligence), mesh generation, neural nets
CITATION

J. Pan, J. Li, X. Han and K. Jia, "Residual MeshNet: Learning to Deform Meshes for Single-View 3D Reconstruction," 2018 International Conference on 3D Vision (3DV), Verona, Italy, 2018, pp. 719-727.
doi:10.1109/3DV.2018.00087