2017 IEEE Virtual Reality (VR)
Los Angeles, CA, USA
March 18, 2017 to March 22, 2017
ISSN: 2375-5334
ISBN: 978-1-5090-6648-3
pp: 37-44
Jingwei Huang, Stanford University, Adobe Research, USA
Zhili Chen, Adobe Research, USA
Duygu Ceylan, Adobe Research, USA
Hailin Jin, Adobe Research, USA
ABSTRACT
Recent breakthroughs in consumer-level virtual reality (VR) headsets are creating a growing user base that demands immersive, fully 3D VR experiences. While monoscopic 360-videos are perhaps the most prevalent type of content for VR headsets, they lack 3D information and thus cannot be viewed with full six degrees of freedom (6-DOF). We present an approach that addresses this limitation via a novel warping algorithm that synthesizes new views under both rotational and translational motion of the viewpoint. This enables VR playback of input monoscopic 360-video files in full stereo with 6-DOF head motion. Our method synthesizes novel views for each eye in accordance with the 6-DOF motion of the headset. Our solution tailors standard structure-from-motion and dense reconstruction algorithms to work accurately for 360-videos and is optimized for GPUs to achieve VR frame rates (>120 fps). We demonstrate the effectiveness of our approach on a variety of videos with interesting content.
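To make the idea of depth-based view synthesis from a monoscopic 360-video concrete, the sketch below forward-warps one equirectangular frame with a per-pixel depth map to a new 6-DOF pose. This is only an illustrative sketch under assumed inputs, not the paper's actual warping algorithm, SfM pipeline, or GPU implementation; all function and variable names (`equirect_to_rays`, `warp_to_new_view`, `pano`, `depth`, `R`, `t`) are hypothetical.

```python
# Illustrative sketch (assumption): naive depth-based forward warp of an
# equirectangular panorama to a rotated/translated viewpoint. Not the
# authors' method; no z-buffering or hole filling is performed.
import numpy as np

def equirect_to_rays(h, w):
    """Unit view rays for every pixel of an h x w equirectangular image."""
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi        # latitude in (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)                             # (h, w) grids
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)                          # (h, w, 3)

def warp_to_new_view(pano, depth, R, t):
    """Forward-warp pano (h, w, 3) with per-pixel depth (h, w) to the view
    given by rotation R (3x3) and translation t (3,), source -> new camera."""
    h, w, _ = pano.shape
    pts = equirect_to_rays(h, w) * depth[..., None]              # back-project to 3D
    pts = pts.reshape(-1, 3) @ R.T + t                           # rigid 6-DOF transform
    r = np.linalg.norm(pts, axis=-1)
    lon = np.arctan2(pts[:, 0], pts[:, 2])
    lat = np.arcsin(np.clip(pts[:, 1] / np.maximum(r, 1e-8), -1.0, 1.0))
    u = ((lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w      # re-project to pixels
    v = ((np.pi / 2.0 - lat) / np.pi * h).astype(int).clip(0, h - 1)
    out = np.zeros_like(pano)
    out[v, u] = pano.reshape(-1, 3)                              # splat colors (no z-test)
    return out
```

In a stereo playback setting, one would call such a warp twice per frame, with `t` offset to the left- and right-eye positions derived from the tracked headset pose; a practical renderer would also need depth-ordered splatting (or mesh-based warping) and hole filling for disoccluded regions.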
INDEX TERMS
Cameras, Three-dimensional displays, Videos, Image reconstruction, Headphones, Geometry, Tracking
CITATION

J. Huang, Z. Chen, D. Ceylan and H. Jin, "6-DOF VR videos with a single 360-camera," 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 2017, pp. 37-44.
doi:10.1109/VR.2017.7892229