Guest Editors' Introduction: Special Section on the IEEE Virtual Reality Conference (VR)
JULY 2012 (Vol. 18, No. 7) pp. 1013-1016
1077-2626/12/$31.00 © 2012 IEEE

Published by the IEEE Computer Society
Victoria Interrante, IEEE Senior Member

Benjamin C. Lok, IEEE Member

Aditi Majumder, IEEE Member

Michitaka Hirose, IEEE Member
The IEEE Virtual Reality Conference (VR) continues to be the leading venue for disseminating the latest in VR research, applications, and technologies. This special section features significantly extended versions of five of the best papers from the 2011 IEEE VR Conference, held in Singapore in March 2011. These selected papers describe a variety of outstanding advances in the field, and illustrate the depth and breadth of the work that appeared at the conference.
In 2011, the IEEE VR Conference received 134 submissions, 72 of which were long papers, the rest being short papers and posters. Of the 72 full paper submissions, 14 were accepted for publication and presentation as full papers, an acceptance rate of 19.4 percent (similar to previous years). An additional eight of the submitted full papers (11 percent) were accepted in short format. At the conference, a four-member awards committee was assembled, drawn from program committee members spanning four continents and representing a broad range of expertise in the field. The awards committee members attended the presentations of the 10 papers that had been most highly rated during the review process and, after lengthy deliberation, selected the six works that they felt made the most significant contribution to the field that year. The authors of those papers were invited to extend their work with at least 30 percent new original content and to submit their expanded papers for consideration for inclusion in this special section. Each of the submissions we received was sent out for another round of independent peer review by three external experts, at least one of whom had been a reviewer of the original conference submission and at least one of whom had not. After several rounds of revision, five submissions were accepted for publication and appear in this special section.
In “Modeling Object Pursuit for Desktop Virtual Reality,” Lei Liu and Robert van Liere introduce and validate a novel interaction model, in the spirit of Fitts' law for pointing to a static location and Accot and Zhai's steering law for following a known path, that can be used to quantify the relationship between human performance on a 3D object tracking task and the spatial and temporal characteristics of that task. This model can be used to quantitatively evaluate the efficiency of user interfaces that involve 3D interaction with moving objects, enabling the quantitative comparison of alternative interaction techniques and input devices used for this purpose.
Liu and van Liere's model decomposes the total time spent during object pursuit into tracking and correction phases. The time spent on object tracking is modeled as the ratio of path length to object velocity, and the correction time is modeled as an empirically derived function of path length, path curvature, object size, and object velocity. The parameters of the model were derived from data collected through an experiment in which participants used a tracked stylus to follow 3D objects of various sizes, displayed in stereo on a large, high-resolution monitor, as they moved with constant velocity along simple paths of various lengths and curvatures. The user's task was to position the tip of the stylus within the target and to maintain that relationship as the target moved. The target continued to move as long as it was being successfully tracked, but stopped moving when the offset between the stylus location and the center of the tracked object exceeded the object's radial size. The tracking phases were defined as the periods of time during which the target was moving, and the correction phases as the periods of time during which the stylus was being repositioned into coincidence with the target.
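In summary form (our notation, not the authors'), the decomposition reads:

```latex
T_{\text{pursuit}} = T_{\text{track}} + T_{\text{correct}}, \qquad
T_{\text{track}} = \frac{L}{v}, \qquad
T_{\text{correct}} = f(L, \kappa, W, v)
```

where L is the path length, v the object velocity, κ the path curvature, W the object size, and f the empirically fitted correction-time function whose parameters are derived from the experimental data.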
The derived model provides several valuable insights into user performance on object pursuit tasks. First, it was discovered that the collection of correction movements during object pursuit can be successfully modeled as pointing movements using Fitts' law. It was also found that the dominant factor influencing overall task performance is the velocity of the moving target, rather than the radius of path curvature or the ratio of path length to target width; that tracking time decreases and correction time increases as object velocity increases; and that a user's movements during object tracking under the conditions tested are more aptly characterized by a series of small unsteady submovements than by a smooth and uniform motion, despite the constant velocity of the moving target. Finally, it was shown how the model can be used, among other things, to determine the optimal velocity of a moving target for tracking purposes, given the path length, path curvature, and target size, for applications in diverse areas including computer games and data visualization.
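As a hedged illustration of that last use, the Python sketch below grid-searches for the time-optimal target velocity under the paper's decomposition. The correction-time function here is only a qualitative stand-in (growing with velocity and curvature, shrinking with target size, consistent with the findings above), not the authors' fitted model, and all numbers are hypothetical.

```python
import numpy as np

def predicted_pursuit_time(v, L, kappa, W, correction_fit):
    """Total pursuit time under the paper's decomposition (sketch).

    Tracking time is path length over velocity; correction time comes from an
    empirically fitted function. correction_fit is a placeholder for the
    regression reported in the paper -- its coefficients are not reproduced here.
    """
    return L / v + correction_fit(L, kappa, W, v)

def optimal_velocity(L, kappa, W, correction_fit, v_range=(0.01, 0.5), n=500):
    """Grid-search the target velocity minimizing predicted total pursuit time."""
    vs = np.linspace(*v_range, n)
    times = [predicted_pursuit_time(v, L, kappa, W, correction_fit) for v in vs]
    return vs[int(np.argmin(times))]

# Toy correction-time stand-in: grows with velocity and curvature, shrinks
# with target size. Units and coefficients are invented for illustration.
toy_fit = lambda L, kappa, W, v: 0.5 * L * kappa + 2.0 * v / W
print(optimal_velocity(L=0.4, kappa=2.0, W=0.02, correction_fit=toy_fit))
```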
In “Interactive Visibility Retargeting in VR Using Conformal Visualization,” Kaloian Petkov, Charilaos Papadopoulos, Min Zhang, Arie E. Kaufman, and Xianfeng Gu introduce a novel method, based on conformal mapping, that allows information from up to 360 degrees around a user to be effectively presented within the restricted field of regard afforded by a partially immersive display platform. Their mapping algorithm is globally angle-preserving, and locally shape-preserving, making it particularly well-suited for use with applications that require focused attention on relatively compact regions within a larger surrounding context. In contrast to purely image-based retargeting approaches, the proposed implementation is capable of constructing accurate stereoscopic images that contain minimal resampling artifacts. The authors apply their technique to a variety of piecewise planar display configurations, including partial CAVEs with three, four, and five screens, and angled arrangements of flat panel displays, and they validate the effectiveness of their method through a user study measuring the success rate of polyp detection in a virtual colonoscopy scenario.
The initial step in defining the mapping involves using a mesh-processing toolkit to manually generate template meshes for the source and target display surface geometries such that each mesh contains a single closed boundary, and three corresponding vertices are identified on each boundary. When the source mesh spans a full 360 degrees, it must be cut at a point, typically chosen to correspond to the center of the missing area in the target mesh, and a correspondence established between cut lines from that point and the boundary edges of the target display area. The source and target meshes are then finely tessellated and mapped to the complex plane. A discrete Ricci flow algorithm is used to compute a conformal mapping of each mesh to the unit disc via a process that is guaranteed to robustly converge to a unique global minimum, and a Möbius transformation is used to align the two resulting maps, based on the 3-vertex boundary correspondence. The aligned conformal mapping is stored in the texture coordinates at each vertex, and the viewing direction at each vertex in the source mesh is stored in the color attribute. Finally, the mesh is flattened to the unit disc by treating the texture coordinates as vertex positions, and the resulting circular texture is mapped onto the target mesh, with the view directions rendered onto the faces of a cubemap for efficient sampling during rendering.
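The Ricci-flow flattening itself requires a mesh-processing library, but the Möbius alignment step is compact enough to sketch. The Python code below constructs, via the classical cross-ratio formula, the unique Möbius transformation matching three marked boundary points of the source disc map to their counterparts on the target map; the specific boundary angles are hypothetical, and this is a minimal sketch rather than the authors' implementation.

```python
import numpy as np

def mobius_from_three_points(z, w):
    """Unique Moebius transformation f with f(z[k]) = w[k] for k = 0, 1, 2.

    Uses the cross-ratio construction: S_p(x) = ((x - p0)(p1 - p2)) /
    ((x - p2)(p1 - p0)) sends p0, p1, p2 to 0, 1, infinity, so
    f = S_w^{-1} o S_z. Returned as a 2x2 complex matrix acting by
    x -> (a x + b) / (c x + d); Moebius maps compose by matrix product.
    """
    def S(p):
        p0, p1, p2 = p
        return np.array([[p1 - p2, -p0 * (p1 - p2)],
                         [p1 - p0, -p2 * (p1 - p0)]], dtype=complex)
    A, B = S(w), S(z)
    A_inv = np.array([[A[1, 1], -A[0, 1]],
                      [-A[1, 0], A[0, 0]]], dtype=complex)  # inverse up to scale
    return A_inv @ B

def apply_mobius(M, x):
    (a, b), (c, d) = M
    return (a * x + b) / (c * x + d)

# Three corresponding boundary vertices on each unit-disc map (hypothetical
# angles). With both triples on the unit circle in the same cyclic order, the
# resulting transformation maps the unit disc onto itself.
z = np.exp(1j * np.array([0.0, 2.0, 4.0]))
w = np.exp(1j * np.array([0.3, 2.4, 4.2]))
M = mobius_from_three_points(z, w)
assert np.allclose(apply_mobius(M, z), w)
```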
The resulting remapping can be computed at interactive speeds, and applied to either polygonally-defined or volumetric data via a variety of rendering pipelines. During rasterization, a custom vertex shader can be used to transform the locations of each vertex in the input scene so that every triangle in the input file is rendered in its remapped configuration on the target display surface. GPU tessellation can be used to improve the quality of the result. Alternatively, in a ray-tracing or direct volume rendering implementation, the view directions themselves can be transformed, rather than the scene geometry. The computational overhead associated with the application of the remapping depends on the implementation, and is particularly low with the ray tracing approach. The viewpoint used for head tracking can be decoupled from the reference point used in defining the conformal mapping, giving the user flexible options for varying the mapping, e.g., to effectively zoom in on particular areas of interest.
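A minimal sketch of the ray-tracing variant, in Python: each target-display pixel fetches its conformally remapped view direction from the precomputed lookup (a cubemap texture fetch in the paper) and traces along it. The remapped_dir and trace_ray callables are placeholders for this illustration, not the authors' API.

```python
import numpy as np

def render_remapped(width, height, remapped_dir, trace_ray, eye):
    """Ray-traced rendering through a precomputed conformal remapping (sketch).

    remapped_dir(u, v) -> unit 3-vector: the view direction stored during the
    mapping stage for normalized target-display coordinates (u, v).
    trace_ray(origin, direction) -> RGB is any ray caster. Both callables are
    assumed interfaces, standing in for the paper's GPU pipeline.
    """
    image = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            u, v = (x + 0.5) / width, (y + 0.5) / height
            d = remapped_dir(u, v)           # conformally remapped view ray
            image[y, x] = trace_ray(eye, d)  # shade along the remapped ray
    return image
```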
The conformal visualization method was evaluated in a five-sided CAVE through a user study in which participants were asked to search for simulated polyps in a phantom colon data set. It was found that detection sensitivity increased from 91 percent to 93 percent with conformal visualization, while the total time taken and false positive rates remained unchanged. Users reported that the additional information provided by the conformal mapping was of particular value in helping them to navigate bends in the colon that pointed upward.
Redirected walking techniques enable people to actively explore a comparatively large immersive virtual environment by physically walking around in a smaller real-world space while wearing a head-mounted display. In “Velocity-Dependent Dynamic Curvature Gain for Redirected Walking,” Christian T. Neth, Jan L. Souman, David Engel, Uwe Kloos, Heinrich H. Bülthoff, and Betty J. Mohler propose two novel extensions to traditional redirection controllers that aim to extend the distance over which people are able to walk freely, during arbitrary navigation through a large virtual environment, before they encounter a boundary in the real world that forces them to stop. In the first extension, rather than introducing a small fixed amount of scene rotation while a person is walking (to induce them to follow a curved path in the real world while walking straight in the virtual world), the authors exploit the fact that people are less sensitive to being redirected when walking more slowly, designing a controller that dynamically varies the amount of injected scene rotation as a function of walking speed. In the second extension, inspired by studies in proxemics, they use individual third-person avatars to deflect people's forward movement away from boundaries of the real-world space, by having the avatars approach from the side and walk ahead of people in a way that blocks their intended trajectory.
The paper begins with a psychophysical study that quantitatively investigates the influence of walking speed on people's sensitivity to being redirected along a curved path. Participants followed a floating sphere that appeared to travel along a straight path while the virtual world was rotated, bending that path, either leftward or rightward, into a circular arc with a radius of between 20 and 200 m. The sphere traveled at a constant velocity of 0.75, 1.0, or 1.25 m/s for a duration of 6-7 seconds. Over the course of 600 trials, participants were asked to report the perceived direction of the path's bend, and psychometric functions were fitted to the data to determine the detection thresholds. At the slowest walking speed of 0.75 m/s (slightly over half of the typical average walking speed of 1.4 m/s), participants were significantly less sensitive to being redirected along a curved path than when they walked more quickly: they could not reliably identify the direction of the bend until the radius of curvature dropped to 10 m, as opposed to 24-27 m at the two faster speeds.
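For readers unfamiliar with the threshold-estimation step, the sketch below fits a cumulative-Gaussian psychometric function to synthetic left/right responses and converts the 75-percent point into a threshold radius. The data, the generating parameters, and the 75-percent criterion are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(c, mu, sigma):
    """P(report 'rightward bend') as a cumulative Gaussian of signed curvature c."""
    return norm.cdf(c, loc=mu, scale=sigma)

# Synthetic stand-in for the experiment: signed path curvature (1/m, positive =
# rightward bend) on each of 600 trials, spanning the paper's 20-200 m radius
# range, with simulated binary left/right responses. Illustrative data only.
rng = np.random.default_rng(1)
curvature = rng.uniform(-1/20, 1/20, 600)
responses = (rng.random(600) < psychometric(curvature, 0.0, 0.06)).astype(float)

(mu, sigma), _ = curve_fit(psychometric, curvature, responses,
                           p0=(0.0, 0.05), bounds=([-0.05, 1e-3], [0.05, 0.5]))
# Treat the 75%-correct point as the detection threshold and report it as a
# radius of curvature (the 75% criterion is an assumption made here).
c75 = mu + sigma * norm.ppf(0.75)
print(f"estimated threshold radius: {1.0 / abs(c75):.1f} m")
```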
These results were used to inform the implementation of a dynamic redirection controller, in which translational movement was scaled up by a constant factor of two to allow covering more ground in the virtual world than in the physical space, virtual head rotation was scaled either up or down relative to the physical movement of the head, by an amount that increased as people got closer to the boundaries of the tracked space, and a varying amount of curvature gain was introduced as people walked forward, in an attempt to redirect them along a curved path that circled the center of the real world walking area. A freeze-turn reorientation intervention was used whenever the user entered a configuration in which it was no longer possible for them to continue walking. In that case, a stop sign appeared, the view of the virtual world was frozen, and people were required to turn in place (either left or right) until the stop sign disappeared. Finally, two types of avatar redirection were introduced. In one, an avatar walks in front of the user at a distance depending on his walking velocity, in an attempt to slow the user down when he walks too quickly, and in the other, a different avatar appears from the side and walks along a path that intersects the user's direction of travel in an attempt to initiate collision-avoidance behaviors that could provide additional opportunities for redirection.
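A minimal sketch of the velocity-dependent curvature component, assuming the detection thresholds above are linearly interpolated between the tested speeds; both the interpolation and the split of the 24-27 m range between the two faster speeds are assumptions, not details from the paper.

```python
import numpy as np

# Detection-threshold radii from the study above: ~10 m at 0.75 m/s and
# ~24-27 m at the two faster speeds (the exact split is assumed here).
SPEEDS = np.array([0.75, 1.0, 1.25])            # tested walking speeds (m/s)
THRESHOLD_RADII = np.array([10.0, 24.0, 27.0])  # tightest undetected radius (m)

def max_rotation_rate(walking_speed):
    """Largest scene-rotation rate (rad/s) assumed to go unnoticed.

    Walking a circular arc of radius r at speed v corresponds to an angular
    rate of v / r, so slower walking (with its smaller threshold radius)
    tolerates proportionally stronger redirection.
    """
    r = np.interp(walking_speed, SPEEDS, THRESHOLD_RADII)
    return walking_speed / r

for v in (0.5, 0.75, 1.0, 1.25):
    print(f"v = {v:.2f} m/s -> up to {np.degrees(max_rotation_rate(v)):.1f} deg/s")
```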
Finally, the authors conducted an applied study in which they investigated the effectiveness of using velocity-dependent dynamic curvature gain and third-person avatars to increase the average distance that participants were able to freely explore a 500 x 150 m virtual space, within the confines of an 8.6 x 8.6 m physical space, before needing to stop and turn. They found that people were able to continue for a significantly longer distance when the dynamic rather than static gain controller was used (an average of 22 m as opposed to 15 m), but that the avatar controllers had no significant effect.
It is widely accepted that real walking, as a locomotion interface, is generally perceived as being more natural, and can enable better navigation and evoke a higher sense of presence, than walking-in-place or using a joystick to travel. What is not as well understood is the extent to which these advantages persist when redirection is necessary. In “The Design and Evaluation of a Large-Scale Real-Walking Locomotion Interface,” Tabitha C. Peck, Henry Fuchs, and Mary C. Whitton describe the design, implementation, and, most significantly, the evaluation of a system that effectively enables the free exploration of arbitrarily large immersive virtual environments by combining redirected walking with the use of distractors for reorientation.
Their redirection control algorithm begins by predicting where the user intends to go in the virtual world, and then steers the predicted direction of motion away from the boundaries of the real-world space. The paper reviews several alternative methods for inferring the direction of a user's intended future movement, based on factors such as instantaneous or time-averaged gaze direction, the direction of prior translational motion, and the relative locations of predefined landmark positions in the environment with respect to the user's current position and general heading. Efficient redirection is achieved by determining the direction of rotation (clockwise or counterclockwise) that would minimize the turning angle required to keep the user within the tracked space, and then rotating the virtual world, relative to the real world, by the largest amount that can be applied imperceptibly, which depends on the angular velocity of the user's head motion.
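The following Python sketch captures the shape of such a steering update: pick the rotation sense that needs the smaller corrective turn, then clamp the injected rotation to a budget that grows with the user's own head motion. The constants are illustrative placeholders rather than the paper's calibrated imperceptibility limits.

```python
import numpy as np

def steering_rotation_rate(pred_dir, desired_dir, head_yaw_rate,
                           base_rate=np.radians(1.0), head_gain=0.2):
    """One step of a steer-away-from-boundary controller (illustrative sketch).

    pred_dir: unit 2D vector, the predicted direction of future motion.
    desired_dir: unit 2D vector pointing away from the nearest boundary
    (e.g., toward the tracked-space center). head_yaw_rate: the user's own
    head angular speed (rad/s). base_rate and head_gain are placeholder
    constants, not values from the paper.
    Returns a signed world-rotation rate (rad/s) to apply this frame.
    """
    # Signed angle from predicted to desired direction; its sign selects the
    # rotation sense (CW vs CCW) needing the smaller corrective turn.
    err = np.arctan2(pred_dir[0] * desired_dir[1] - pred_dir[1] * desired_dir[0],
                     float(np.dot(pred_dir, desired_dir)))
    # Rotation budget grows with the user's own head rotation, during which
    # injected rotation is assumed to remain below the detection threshold.
    budget = base_rate + head_gain * abs(head_yaw_rate)
    return float(np.sign(err)) * min(abs(err), budget)
```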
When redirection fails and the user gets too close to the edge of the tracked space, they are stopped and reoriented away from the boundary through the use of a distractor—a small object that flies back and forth in front of them, evoking head turns during which imperceptible rotations of the virtual environment can be applied. The paper outlines several algorithms for controlling the appearance and disappearance of these distractor elements, then introduces the concept of deterrents—transiently shown fixed obstacles to motion in undesirable directions—and reports informal observations on the effects of their use.
Finally, the paper presents the results of a user study that compares performance on a suite of navigation and wayfinding tasks when participants use redirected free walking with distractors versus walking-in-place or joystick interfaces, finding significant differences on a number of behavioral measures. In particular, although no significant differences in ratings of subjective presence were found, participants using redirected free walking with distractors traveled significantly shorter distances, repeated routes and took wrong turns significantly less often, pointed to hidden targets more accurately and quickly, placed and labeled targets on maps more accurately, and estimated the size of the virtual environment more accurately.
Much attention has been devoted to the problem of distance underestimation in virtual environments. In “Tuning Self-Motion Perception in Virtual Reality with Visual Illusions,” Gerd Bruder, Frank Steinicke, Phil Wieland, and Markus Lappe probe the potential of using self-motion illusions to address this problem. Specifically, they introduce the idea of using four different types of optic flow manipulation to create self-motion illusions in a head-mounted-display-based immersive virtual environment, and investigate the resultant effect on self-motion judgments during active locomotion.
The self-motion illusions are stimulated by blending one of four different types of motion stimuli into the periphery or onto the ground plane. The first type of illusory motion is achieved using layered motion, in which a visual overlay of moving elements (particle flow fields, sine wave gratings, or an infinite tiled surface) is blended with the user's view of the virtual environment as he walks. The second is created using contour filtering, in which edge filters oriented in the direction of simulated optic flow are convolved with the image corresponding to the user's view to modulate the apparent location of luminance edges over time. The third and fourth types of motion illusion are obtained by using continuous scene motion that is reset at brief inter-stimulus intervals; during these intervals, the display is either blanked, so that the user may perceive a contrast-reversed afterimage, or an intervening sequence of contrast-reversed images is explicitly displayed.
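As a rough illustration of the first (layered motion) stimulus, the Python sketch below blends a horizontally scrolling overlay into the periphery of a rendered frame using a radial alpha mask. The blend weight and falloff are invented for illustration and are not the authors' parameters.

```python
import numpy as np

def blend_layered_motion(frame, overlay, t, flow_speed_px, alpha_max=0.3):
    """Blend a horizontally scrolling overlay into a frame's periphery (sketch).

    frame, overlay: HxWx3 float arrays in [0, 1]. The overlay (e.g., a tiled
    pattern) is shifted by flow_speed_px * t pixels to simulate optic flow
    consistent with forward motion. alpha_max and the radial falloff are
    illustrative choices, not parameters from the paper.
    """
    h, w = frame.shape[:2]
    shift = int(flow_speed_px * t) % w
    moving = np.roll(overlay, shift, axis=1)   # layered motion stimulus
    # Radial mask: zero at the image center, rising toward the periphery,
    # so the illusory motion is confined to peripheral vision.
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((xx - w / 2) / (w / 2), (yy - h / 2) / (h / 2))
    alpha = alpha_max * np.clip((r - 0.5) / 0.5, 0.0, 1.0)[..., None]
    return (1 - alpha) * frame + alpha * moving
```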
A series of experiments was conducted to assess the extent to which adding each of these different types of motion stimuli to a scene affects the user's perception of self-motion as he walks. Four different fixed levels of translational gain were applied to the participant's movement, along with multiple different speeds of the illusory motion elements, and participants had to indicate whether their virtual movement was smaller or larger than their physical movement. The authors found that the layered motion of a tiled surface stimulus had a significant impact on motion perception, but that layered motion of particle flow fields and sine wave gratings did not; the location of the blending, peripheral or ground plane, made no significant difference. Illusory motion evoked by contour filtering, change blindness, and contrast inversion all had a consistently significant effect on self-motion perception. These results suggest that illusory motion may be successfully used during scaled walking to counter the impression of increased virtual traveling speed, effectively enabling the exploration of larger immersive virtual environments on foot.
We are pleased to have this outstanding set of papers appear in TVCG and are grateful to our awards committee members, Kiyoshi Kiyokawa, Anatole Lécuyer, and Mark Billinghurst, for identifying the papers to be invited for extension, and to the anonymous reviewers who provided thorough and well-considered feedback that helped to ensure the high final quality of the submissions that appear in this special section.

    V. Interrante is with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455.

    E-mail: interran@cs.umn.edu.

B.C. Lok is with the Computer and Information Science and Engineering Department, University of Florida, Gainesville, FL 32611.

    E-mail: lok@cise.ufl.edu.

A. Majumder is with the Department of Computer Science, University of California, Irvine, Irvine, CA 92697. E-mail: majumder@ics.uci.edu.

    M. Hirose is with the Department of Mechano-Informatics, University of Tokyo, Japan. E-mail: hirose@cyber.t.u-tokyo.ac.jp.

For information on obtaining reprints of this article, please send e-mail to: tvcg@computer.org.





Victoria Interrante received the PhD degree in computer science in 1996 from the University of North Carolina at Chapel Hill, where she was coadvised by Henry Fuchs and Stephen Pizer. She worked as a staff scientist at the Institute for Computer Applications in Science and Engineering (ICASE) at NASA Langley for two years before joining the faculty of the Department of Computer Science and Engineering at the University of Minnesota, where she is currently an associate professor. Her research focuses on applying insights from perception to applications in virtual reality, visualization, and computer graphics. She received the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2000 for her work in this area. She was general cochair of the first ACM/SIGGRAPH Symposium on Applied Perception in Graphics and Visualization in 2004, and program cochair for the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging in 2008 and for the Joint Virtual Reality Conference in 2010. She currently serves on the editorial boards of ACM Transactions on Applied Perception (2004-2012) and Computers & Graphics (2011-2012). She is a senior member of the IEEE.





Benjamin C. Lok received the PhD degree (2002, advisor: Dr. Frederick P. Brooks, Jr.) and the MS degree (1999) from the University of North Carolina at Chapel Hill, and the BS degree in computer science (1997) from the University of Tulsa. He did a postdoctoral fellowship (2003) under Dr. Larry F. Hodges at the University of North Carolina at Charlotte. He is an associate professor in the Computer and Information Science and Engineering Department at the University of Florida (UF). He is also an adjunct associate professor in the Surgery Department at the Medical College of Georgia. His research focuses on virtual humans and mixed reality in the areas of computer graphics, virtual environments, and human-computer interaction. He received a US National Science Foundation (NSF) CAREER Award (2007-2012) and the UF ACM CISE Teacher of the Year Award in 2005-2006. He and his students in the Virtual Experiences Research Group have received Best Paper Awards at ACM I3D (top 3, 2003) and IEEE VR (2008). He currently serves on the steering committee of the IEEE Virtual Reality Conference; he was a program cochair of the ACM VRST 2009 and IEEE Virtual Reality 2010 and 2011 conferences, and area chair for IEEE ISMAR 2009. He will be general chair of IEEE VR in 2013. Professor Lok is on the editorial boards of the International Journal of Human-Computer Studies and Simulation: Transactions of the Society for Modeling and Simulation. He is a member of the IEEE.





Aditi Majumder is an associate professor in the Department of Computer Science at the University of California, Irvine. She received the PhD degree from the Department of Computer Science, University of North Carolina at Chapel Hill in 2003. Her research areas are computer graphics, vision, and image processing, with a primary focus on multiprojector displays. Her research aims to make multiprojector displays truly commodity products that are easily accessible to everyone. She has won three best paper awards in 2009-2010 in premier venues: IEEE Visualization, IEEE VR, and IEEE PROCAMS. She is the coauthor of the book Practical Multi-Projector Display Design. She was the program and general cochair of the Projector-Camera Workshop (PROCAMS) 2005 and the program chair of PROCAMS 2009. She was also the conference cochair for ACM Virtual Reality Software and Technology 2007 and general conference chair of IEEE Virtual Reality 2012. She played a key role in developing the first curved-screen multiprojector display, currently being marketed by NEC/Alienware, and is an advisor at Disney Imagineering for advances in their projection-based theme park rides. She was a recipient of the US National Science Foundation (NSF) CAREER award in 2009 for Ubiquitous Displays via a Distributed Framework. She is a member of the IEEE.





Michitaka Hirose received the PhD degree from the University of Tokyo. He is a professor at the University of Tokyo's Graduate School of Information Science and Technology. His research interests include human-computer interfaces, interactive computer graphics, wearable computers, and VR. He's a project leader of the Digital Public Art project and the head of Cyber Interface group of the Information and Robot Technology project, sponsored by Japan's Ministry of Education. He is a member of the ACM and IEEE and is vice president of the Virtual Reality Society of Japan.