Issue No. 5 - May 2013 (vol. 19), pp. 886-896
E. D. Ragan , Dept. of Comput. Sci., Virginia Tech, Blacksburg, VA, USA
R. Kopper , Dept. of Comput. & Inf. Sci. & Eng., Univ. of Florida, Gainesville, FL, USA
P. Schuchardt , Cavewhere, Blacksburg, VA, USA
D. A. Bowman , Dept. of Comput. Sci., Virginia Tech, Blacksburg, VA, USA
Spatial judgments are important for many real-world tasks in engineering and scientific visualization. While existing research provides evidence that higher levels of display and interaction fidelity in virtual reality systems offer advantages for spatial understanding, few investigations have focused on small-scale spatial judgments or employed experimental tasks similar to those used in real-world applications. Following an earlier study that broadly analyzed a variety of spatial understanding tasks, we present the results of a follow-up study focusing on small-scale spatial judgments. In this research, we independently controlled field of regard, stereoscopy, and head-tracked rendering to study their effects on the performance of a task involving precise spatial inspection of complex 3D structures. Measuring time and errors, we asked participants to distinguish between structural gaps and intersections between components of 3D models designed to resemble real underground cave systems. The overall results suggest that the addition of these higher fidelity system features supports performance improvements in making small-scale spatial judgments. Through analyses of the effects of individual system components, the experiment shows that participants made significantly fewer errors with either an increased field of regard or with the addition of head-tracked rendering. The results also indicate that participants performed significantly faster when the system provided the combination of stereo and head-tracked rendering.
Keywords: visualization, navigation, data visualization, head tracking, rendering (computer graphics), graphical user interfaces, artificial, augmented, and virtual realities
E. D. Ragan, R. Kopper, P. Schuchardt, D. A. Bowman, "Studying the Effects of Stereo, Head Tracking, and Field of Regard on a Small-Scale Spatial Judgment Task", IEEE Transactions on Visualization & Computer Graphics, vol.19, no. 5, pp. 886-896, May 2013, doi:10.1109/TVCG.2012.163