In this special section, we are pleased to present extended versions of four outstanding papers that were originally presented at the IEEE Virtual Reality 2006 Conference (VR 2006). IEEE Virtual Reality is the premier international conference on all aspects of virtual, augmented, and mixed reality. The conference program at VR 2006 consisted of nine sessions on the following topics: perception, simulation and visualization, applications of VR, distributed and collaborative systems, evaluation and user studies, augmented reality, tracking and projection displays, 3D interaction, and haptic and olfactory displays. For this special section, the international program committee selected four excellent papers from the 28 accepted research papers. As always, the choice was difficult, since many of the other papers were also excellent candidates.
The first paper, by Sean D. Young, Bernard D. Adelstein, and Stephen R. Ellis, received the best paper award at VR 2006 for its high relevance to the field of virtual reality and simulation. The authors asked the question, "Does taking a motion sickness questionnaire make you motion sick?" Surprisingly, their research indicates that the answer is "yes!" The paper demonstrates that the administration of the questionnaire itself makes the participant aware that the virtual environment may produce motion sickness. The study shows that reports of motion sickness after immersion are much greater when both pretest and posttest questionnaires are given than when only a posttest questionnaire is used. Since pretest questionnaires cannot simply be dropped in most cases, the authors suggest a number of ways to reduce this effect and discuss the implications of their observations.
Augmented reality (AR) systems, which combine real-world and virtual imagery, present a unique set of perceptual issues for the user. The paper by J. Edward Swan II, Adam Jones, Eric Kolstad, Mark A. Livingston, and Harvey S. Smallman addresses one such problem: the accuracy of depth judgments made by users of optical see-through AR displays. These displays allow users to view the physical world directly while overlaying virtual objects on the real scene. In many applications, it is critical that the user perceives the virtual objects to be in the correct position relative to the real world, but differences in depth perception between the virtual and real imagery may prevent this. Moreover, measuring the accuracy of users' depth judgments is not trivial. The authors review previous work and methods used to address this problem, and then present two experiments of their own. The experiments use a perceptual matching technique and a blind walking technique to measure depth judgments, and reveal some interesting and surprising results.
An emerging area of research in the VR community focuses on virtual humans. In the past, virtual human research has mainly addressed technical issues—making the virtual characters realistic in appearance, movements, emotions, behaviors, etc. With many of these problems at least partially solved, however, researchers can now begin to evaluate the social aspects of virtual humans; that is, how real users interact with virtual characters. Andrew B. Raij, Kyle Johnsen, Robert F. Dickerson, Benjamin C. Lok, Marc S. Cohen, Margaret Duerson, Rebecca Rainer Pauly, Amy O. Stevens, Peggy Wagner, and D. Scott Lind present a paper along these lines, describing two studies in which medical students interacted with a simulated patient. The simulated patient was either a real person acting the part of a patient or a virtual human playing this role. The studies show that while the interpersonal interactions with the virtual human were similar to interactions with the real human in many ways, there were also subtle differences in the participants' nonverbal behavior and attitude toward the virtual human. Such studies are critical for improving our understanding of how to use virtual characters in real-world VR applications.
Believable haptic interaction with complex virtual objects is still a challenging research topic. Michael Ortega, Stéphane Redon, and Sabine Coquillart have generalized the god-object method to enable high-quality haptic interaction with rigid bodies consisting of tens of thousands of triangles. They suggest separating the computation of the motion of the six-degree-of-freedom god-object from the computation of the force applied to the user. The constraint-based force felt by the user can be computed within a few microseconds, which is necessary for the tactile simulation of fine surface details. The force is computed using a novel constraint-based quasistatic approach, which allows the suppression of force artifacts typically found in previous methods. The update of the pose of the rigid god-object is performed within a few milliseconds, which allows visual display at appropriate frame rates.
All of these papers contain high quality work and valuable contributions to the body of knowledge on virtual, mixed, and augmented reality systems and environments. We thank the authors for their significantly extended versions of their IEEE VR 2006 papers and the reviewers for their constructive and detailed comments.
Bernd Fröhlich
Doug A. Bowman
Hiroo Iwata
Guest Editors
• B. Fröhlich is with the Virtual Reality Systems Group, Faculty of Media, Bauhaus-Universität Weimar, Bauhausstraße 11, 99423 Weimar, Germany. E-mail: email@example.com.
• D.A. Bowman is with the Department of Computer Science, 660 McBryde Hall, Virginia Tech, Blacksburg, VA 24061. E-mail: firstname.lastname@example.org.
• H. Iwata is with the Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba 305-8573, Japan.
For information on obtaining reprints of this article, please send e-mail to: email@example.com.
Bernd Fröhlich received the MS and PhD degrees in computer science from the Technical University of Braunschweig in 1988 and 1992, respectively. He is currently a full professor with the Media Faculty at Bauhaus-Universität Weimar, Germany. From 1997 to 2001, he held a position as a senior scientist at the German National Research Center for Information Technology (GMD), where he was involved in scientific visualization research. From 1995 to 1997, he worked as a research associate with the Computer Graphics Group at Stanford University. He has served as a program cochair for IEEE VR in 2003, 2005, and 2006, as well as a general cochair for EGVE/IPT in 2001. His group will host the IPT/EGVE event in 2007 in Weimar. He is also a coinitiator of the 3DUI Symposium Series and has served as a cochair for the preceding 3DUI Workshops and the first 3DUI Symposium. His research interests include real-time rendering, 2D and 3D input devices, 3D interaction techniques, display technology, and support for tight collaboration in colocated and distributed virtual environments.
Doug A. Bowman
received the BS degree in mathematics and computer science from Emory University (Atlanta, Georgia) in 1994, the MS degree in computer science from the Georgia Institute of Technology (Atlanta, Georgia) in 1997, and the PhD degree in computer science, also from Georgia Tech, in 1999. He is currently an associate professor in the Department of Computer Science at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, Virginia. He is also affiliated with the Center for Human-Computer Interaction at Virginia Tech. He is the lead author of 3D User Interfaces: Theory and Practice (Addison-Wesley, 2005), and has published more than 60 articles in peer-reviewed journals and conferences. His research interests include 3D user interfaces, 3D interaction techniques, and the benefits of immersion in virtual environments.
Hiroo Iwata received the BS, MS, and PhD degrees in engineering from the University of Tokyo in 1981, 1983, and 1986, respectively. He is a professor in the Graduate School of Systems and Information Engineering at the University of Tsukuba, where he teaches human interface and leads research projects on virtual reality. His research interests include haptic interfaces, locomotion interfaces, and spatially immersive displays. He is a board member of the Virtual Reality Society of Japan. He exhibited his work at the Emerging Technologies venue of SIGGRAPH from 1994 to 2006, as well as at the Ars Electronica Festival in 1996, 1997, 1999, and 2001.