Perceptual Calibration for Immersive Display Environments
April 2013 (vol. 19, no. 4)
pp. 691-700
K. Ponto, Dept. of Comput. Sci., Univ. of Wisconsin, Madison, WI, USA
M. Gleicher, Dept. of Comput. Sci., Univ. of Wisconsin, Madison, WI, USA
R. G. Radwin, Dept. of Biomed. Eng., Univ. of Wisconsin, Madison, WI, USA
Hyun Joon Shin, Div. of Digital Media, Ajou Univ., Suwon, South Korea
The perception of objects, depth, and distance has been repeatedly shown to be divergent between virtual and physical environments. We hypothesize that many of these discrepancies stem from incorrect geometric viewing parameters, specifically that physical measurements of eye position are insufficiently precise to provide proper viewing parameters. In this paper, we introduce a perceptual calibration procedure derived from geometric models. While most research has used geometric models to predict perceptual errors, we instead use these models inversely to determine perceptually correct viewing parameters. We study the advantages of these new psychophysically determined viewing parameters compared to the commonly used measured viewing parameters in an experiment with 20 subjects. The perceptually calibrated viewing parameters for the subjects generally produced new virtual eye positions that were wider and deeper than standard practices would estimate. Our study shows that perceptually calibrated viewing parameters can significantly improve depth acuity, distance estimation, and the perception of shape.
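The abstract's key idea — substituting perceptually calibrated eye positions for physically measured ones when building per-eye virtual cameras — can be illustrated with a small sketch. This is not the authors' code; the function, parameter names, and offset values are illustrative assumptions. It simply shows how a "wider and deeper" calibrated eye position (as the study reports) would differ from the measured one in tracker space.

```python
# Hedged sketch (not the paper's implementation): computing per-eye
# virtual camera positions for a stereo display, first from measured
# viewing parameters and then with hypothetical calibration offsets
# that widen and deepen the virtual eye positions.

def virtual_eye_positions(head_center, ipd, ipd_offset=0.0, depth_offset=0.0):
    """Return (left, right) virtual eye positions in tracker space.

    head_center  : (x, y, z) measured midpoint between the eyes
    ipd          : measured interpupillary distance (meters)
    ipd_offset   : calibration widening (>0 moves the eyes farther apart)
    depth_offset : calibration deepening (>0 moves the eyes back along +z,
                   i.e., farther from the display surface)
    """
    x, y, z = head_center
    half = (ipd + ipd_offset) / 2.0
    left = (x - half, y, z + depth_offset)
    right = (x + half, y, z + depth_offset)
    return left, right

# Example: a measured IPD of 6.3 cm versus hypothetical calibrated
# offsets; the paper reports calibrated eye positions were generally
# wider and deeper than standard measurement would estimate.
measured = virtual_eye_positions((0.0, 1.6, 0.0), 0.063)
calibrated = virtual_eye_positions((0.0, 1.6, 0.0), 0.063,
                                   ipd_offset=0.005, depth_offset=0.01)
```

The resulting eye positions would then feed the off-axis projection matrices of the immersive display in the usual way; only the input parameters change.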
Index Terms:
Calibration, solid modeling, estimation, shape, virtual environments, cameras, stereo vision displays, virtual reality, perception, distance estimation, shape perception, depth compression
Citation:
K. Ponto, M. Gleicher, R. G. Radwin, Hyun Joon Shin, "Perceptual Calibration for Immersive Display Environments," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 4, pp. 691-700, April 2013, doi:10.1109/TVCG.2013.36