2014 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud) (2014)
Oxford, United Kingdom
April 8, 2014 to April 11, 2014
We propose a semi-automatic 2D-to-3D conversion method that requires low computation resources and precisely generates depth maps. First, user-defined points in the background scene of the first frame of the 2D video sequence are used to extract the vanishing line. The background depth map is then determined by vanishing-line tracking, and moving objects are assigned depth values according to their position in the background. Finally, the motion-based depth map and the geometry-based depth map are integrated into a single depth map by a depth fusion algorithm. From this depth map and the original 2D video, a 3D video is constructed.
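The pipeline described above can be illustrated with a minimal sketch. The abstract does not specify the depth models or the fusion rule, so the following is only an assumed toy version: geometry-based depth is taken as a gradient that increases toward a horizontal vanishing line, and fusion simply substitutes motion-based depth wherever a moving-object mask is set. The function names and parameters (`geometry_depth`, `fuse_depth`, `vanish_y`) are hypothetical, not from the paper.

```python
import numpy as np

def geometry_depth(h, w, vanish_y):
    # Hypothetical geometry-based depth map: rows closer to the
    # vanishing line (row index vanish_y) are treated as farther
    # away, so depth ramps from 0 at the image border nearest the
    # vanishing line... here normalized to [0, 1], 1 = farthest.
    rows = np.arange(h, dtype=np.float64)
    span = max(vanish_y, h - 1 - vanish_y)
    d = 1.0 - np.abs(rows - vanish_y) / span
    return np.tile(d[:, None], (1, w))   # shape (h, w)

def fuse_depth(geo, motion, moving_mask):
    # Toy depth fusion: keep the geometry-based depth for background
    # pixels and the motion-based depth where a moving object was
    # detected (the paper's actual fusion algorithm is not given).
    return np.where(moving_mask, motion, geo)

# Usage: a 4x3 frame with the vanishing line at the top row.
geo = geometry_depth(4, 3, vanish_y=0)
motion = np.full((4, 3), 0.5)
mask = np.zeros((4, 3), dtype=bool)
mask[2, 1] = True                        # one moving-object pixel
fused = fuse_depth(geo, motion, mask)
```

With the vanishing line at row 0, the background depth decreases linearly down the frame, and only the masked pixel takes its value from the motion-based map.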
T. Tsai and C. Fan, "Semi-Automatic 2D to 3D Video Conversion Based on Relative Velocity Estimation," 2014 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), Oxford, United Kingdom, 2014, pp. 248-249.