2016 Fourth International Conference on 3D Vision (3DV)
Stanford, CA, USA
Oct. 25, 2016 to Oct. 28, 2016
ISBN: 978-1-5090-5408-4
pp: 611-619
ABSTRACT
Multi-scale deep CNNs have been used successfully for problems that map each pixel to a label, such as depth estimation and semantic segmentation. It has also been shown that such architectures are reusable and can serve multiple tasks. These networks are typically trained independently for each task by varying the output layer(s) and training objective. In this work we present a new model for simultaneous depth estimation and semantic segmentation from a single RGB image. Our approach demonstrates the feasibility of training parts of the model for each task separately and then fine-tuning the full, combined model on both tasks simultaneously using a single loss function. Furthermore, we couple the deep CNN with a fully connected CRF, which captures the contextual relationships and interactions between the semantic and depth cues, improving the accuracy of the final results. The proposed model is trained and evaluated on the NYU Depth V2 dataset [23], outperforming state-of-the-art methods on semantic segmentation and achieving comparable results on the task of depth estimation.
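The abstract's key idea of a "single loss function" for both tasks can be illustrated with a minimal sketch. This is not the authors' exact objective; it assumes a common form: a weighted sum of a per-pixel cross-entropy term (segmentation) and a squared-error term (depth regression). All names and the weight `w_depth` are illustrative.

```python
# Hypothetical sketch of a combined loss for joint semantic segmentation
# and depth estimation (not the paper's exact formulation).
import math

def joint_loss(seg_probs, seg_labels, depth_pred, depth_gt, w_depth=0.5):
    """seg_probs: per-pixel class-probability lists; seg_labels: class ids;
    depth_pred / depth_gt: per-pixel depth values.
    Returns cross-entropy + w_depth * squared error, averaged over pixels."""
    n = len(seg_labels)
    # Per-pixel cross-entropy on the probability of the true class.
    ce = -sum(math.log(p[y]) for p, y in zip(seg_probs, seg_labels)) / n
    # Per-pixel squared depth error.
    l2 = sum((d - g) ** 2 for d, g in zip(depth_pred, depth_gt)) / n
    return ce + w_depth * l2
```

Minimizing one such scalar lets gradients from both tasks flow through the shared layers during fine-tuning, which is what distinguishes this setup from training two independent networks.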
INDEX TERMS
Semantics, Estimation, Image segmentation, Feature extraction, Training, Proposals, Computer architecture
CITATION

A. Mousavian, H. Pirsiavash and J. Kosecka, "Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks," 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 2016, pp. 611-619.
doi:10.1109/3DV.2016.69