Guest Editor's Introduction: Special Section on Virtual Reality
MAY/JUNE 2008 (Vol. 14, No. 3) pp. 485-486
1077-2626/08/$31.00 © 2008 IEEE

Published by the IEEE Computer Society
Anthony Steed

William Sherman

Ming C. Lin, IEEE Member
Virtual Reality (VR) systems combine computer graphics and interactive techniques in many interesting and novel ways. With recent advances in immersive displays, graphics hardware, commodity computing platforms, authoring software, and human factors studies, the field of virtual reality has seen tremendous progress in the development of systems that fit users' needs and expectations. The IEEE Virtual Reality (VR) Conference is the world's premier conference for disseminating the latest research results in virtual reality. IEEE VR 2007 was held in Charlotte, North Carolina. This special section comprises significantly revised and expanded versions of four of the best papers selected from the Proceedings of IEEE VR 2007. We relied heavily on the reviewers of this special section and warmly thank them for their diligent work.
Display systems remain one of the key challenges in virtual reality, and autostereoscopic displays, which deliver stereo imagery without special glasses, are among the dominant display technologies. Existing static-barrier autostereoscopic displays suffer from a fixed view-distance range, slow response to head movements, and a fixed stereo operating mode. The paper "Advances in the Dynallax Solid-State Dynamic Parallax Barrier Autostereoscopic Visualization Display System" by Tom Peterka, Robert L. Kooima, Daniel J. Sandin, Andrew Johnson, Jason Leigh, and Thomas A. DeFanti addresses several of these deficiencies. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move laterally faster than with a static barrier, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. The approach affords further benefits: an expanded view-distance working range, reduced sensitivity to system latency during head movement, elimination of physical barrier registration, and support for two independently tracked viewers, each with their own autostereo perspective of the virtual world. Applications can thus mix 2D text and images, 3D monoscopic scenes, single-viewer autostereo scenes, multiple users interacting with their own perspective of the same scene, and untracked multiview panoramagrams. With smaller form factors, pervasive Dynallax commodity desktop and laptop displays can be envisioned that support 2D and 3D modes seamlessly across applications. Although this paper demonstrates initial success, the greatest advantages are yet to be realized: a wall or an entire room tiled with Dynallax screens, with multiple modes simultaneously active without regard to physical tile borders.
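The barrier parameters that such a system recomputes each frame can be illustrated with the standard two-view parallax-barrier geometry. The sketch below is only an illustration of those textbook relations under assumed parameter names; it is not the Dynallax implementation described in the paper.

```python
# Illustrative two-view parallax-barrier geometry (hypothetical helper
# functions; not the paper's implementation). A dynamic barrier can
# re-evaluate these quantities per frame from the tracked head position.

def optimal_view_distance(eye_sep_mm, gap_mm, pixel_pitch_mm):
    """Distance at which each eye sees alternate pixel columns
    through the barrier slits (similar triangles)."""
    return eye_sep_mm * gap_mm / pixel_pitch_mm

def barrier_pitch(pixel_pitch_mm, view_dist_mm, gap_mm):
    """Barrier period is slightly under two pixel pitches so the
    slits stay aligned with pixel columns across the whole panel."""
    return 2.0 * pixel_pitch_mm * view_dist_mm / (view_dist_mm + gap_mm)

def barrier_phase(head_x_mm, view_dist_mm, gap_mm):
    """Lateral slit shift that re-centers the stereo zones on a
    viewer who has moved sideways by head_x_mm."""
    return head_x_mm * gap_mm / (view_dist_mm + gap_mm)

if __name__ == "__main__":
    P, g, E = 0.1, 1.0, 65.0            # pixel pitch, gap, eye separation (mm)
    D = optimal_view_distance(E, g, P)  # nominal viewing distance
    print(D, barrier_pitch(P, D, g), barrier_phase(10.0, D, g))
```

A static barrier fixes the pitch and phase at manufacture time; varying them per frame is what allows the view distance and lateral speed restrictions to be relaxed.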
The next two papers describe improved design and usability engineering of mixed reality (MR) and augmented reality (AR) systems and applications, whose goal is to mix real objects with synthetically generated information. In their paper "Heads Up and Camera Down: A Vision-Based Tracking Modality for Mobile Mixed Reality," Stephen DiVerdi and Tobias Höllerer describe a novel concept, Anywhere Augmentation, which lowers the initial time and cost of building mixed reality systems, bridging the gap between researchers in the field and regular users. They introduce the GroundCam, consisting of a camera and an orientation tracker, for both indoor and outdoor applications. The GroundCam is a vision-based local tracker with high resolution, good short-term accuracy, and an update rate appropriate for interactive VR applications. The authors also demonstrate the feasibility of a hybrid tracker that couples the GroundCam with a GPS receiver, as well as with a discrete beacon-based wide-area sensor. The GroundCam compares favorably to similar tracking modalities; in addition, it is cheap, readily available, and requires almost no time to set up in a new environment, making it suitable for a wide range of applications and for novice users.
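The appeal of such a hybrid is that a local vision tracker is smooth but drifts, while an absolute sensor such as GPS is drift-free but noisy and low-rate. A generic way to combine the two is a complementary filter; the sketch below illustrates that general idea under assumed names and data, not the paper's actual fusion scheme.

```python
# Minimal sketch of fusing a drifting high-rate local tracker with a
# low-rate absolute sensor (generic complementary filter; hypothetical
# names, not the GroundCam/GPS coupling from the paper).

def fuse(local_delta, abs_fix, estimate, alpha=0.8):
    """Integrate the local motion estimate, then pull the result
    toward the absolute fix when one is available. alpha weights the
    smooth-but-drifting local track against the noisy-but-absolute one."""
    predicted = estimate + local_delta
    if abs_fix is None:                 # no absolute fix this step
        return predicted
    return alpha * predicted + (1.0 - alpha) * abs_fix

if __name__ == "__main__":
    est, truth = 0.0, 0.0
    for step in range(1, 21):
        truth += 1.0                    # true 1 m per step
        delta = 1.05                    # local tracker with a small bias
        fix = truth if step % 5 == 0 else None   # absolute fix every 5th step
        est = fuse(delta, fix, est)
    print(est, truth)                   # fused error stays bounded
```

With no fusion the biased local track would drift without bound; the occasional absolute fixes keep the error bounded while preserving the local tracker's update rate between fixes.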
Designing effective user interfaces for emerging technologies such as VR, AR, and MR, which have no established design guidelines or interaction metaphors, is an important area in VR, as these technologies can introduce completely new ways for users to perceive and interact with technology and the world around them. In the paper "Usability Engineering for Augmented Reality: Employing User-Based Studies to Inform Design," Joseph L. Gabbard and J. Edward Swan II propose a usability engineering approach that iteratively inserts a series of user-based studies into a traditional usability engineering lifecycle to better inform initial user interface designs. Under this approach, user performance is explored against combinations of design parameters (i.e., experimental factors and levels) to discover which combinations best support user performance under various conditions. The approach differs from traditional HCI practice in that basic user interface and interaction issues are explored via user-based studies as part of the usability engineering of a specific application, rather than having application developers draw on a body of established guidelines produced by others performing low-level, or generic, user-based studies. The authors also describe a case study on text legibility in outdoor AR to illustrate how user-based studies can inform design.
Realistic simulation of virtual crowds has diverse applications in architectural design, emergency evacuation, urban planning, personnel training, education, and entertainment. The last paper, "Real-Time Path Planning in Dynamic Virtual Environments Using Multiagent Navigation Graphs," by Avneesh Sud, Erik Andersen, Sean Curtis, Ming C. Lin, and Dinesh Manocha, addresses real-time path planning and navigation for multiple virtual agents moving in a dynamic environment. The authors introduce a new data structure, the "multiagent navigation graph" (MaNG), and show how to compute it efficiently using GPU-accelerated discrete Voronoi diagrams. Instead of the traditional first-order Voronoi diagram, they compute the second-order Voronoi diagram of all obstacles and agents, which provides pairwise proximity information for all agents simultaneously. The MaNG combines the first- and second-order Voronoi graphs and offers a much more computationally efficient structure for global path planning among multiple agents in dynamic environments: only one MaNG is needed to plan for n agents, instead of the n first-order Voronoi diagrams required by the classical approach. Furthermore, the authors present techniques for computing the local dynamics of each agent, using the proximity relationships captured by the MaNG to compute interactions among multiple agents in real time. The approach can easily be integrated with rule-based approaches that model avatar behaviors for simulating realistic virtual crowds.
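The distinction between the first- and second-order diagrams can be made concrete on a discrete grid: the first-order diagram records each cell's nearest site, while the second-order diagram records its two nearest sites, so one pass yields every agent's closest-neighbor structure. The brute-force sketch below only illustrates these definitions; the paper computes the diagrams far more efficiently by GPU rasterization.

```python
# Brute-force discrete first- and second-order Voronoi computation
# (illustrates the definitions behind the MaNG; not the paper's
# GPU-accelerated algorithm).

def voronoi_orders(sites, width, height):
    """For every grid cell return (nearest site index, second-nearest
    site index). The second-order assignment exposes pairwise
    proximity for all sites in a single pass over the grid."""
    first, second = {}, {}
    for x in range(width):
        for y in range(height):
            ranked = sorted(
                range(len(sites)),
                key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2,
            )
            first[(x, y)] = ranked[0]
            second[(x, y)] = ranked[1]
    return first, second

if __name__ == "__main__":
    sites = [(1, 1), (8, 1), (4, 8)]   # e.g., two agents and an obstacle
    first, second = voronoi_orders(sites, 10, 10)
    print(first[(0, 0)], second[(0, 0)])
```

Boundaries where the second-order assignment changes are exactly where an agent's nearest neighbor changes, which is why a graph built from both diagrams can serve all n agents at once.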
This special section presents a collection of original research and contributions in virtual reality for IEEE Transactions on Visualization and Computer Graphics. It is our hope that it will stimulate novel and exciting ideas to further this field.

    A. Steed is with University College London, London, UK.


    W. Sherman is with the Desert Research Institute, 2215 Raggio Parkway, Reno, NV 89512. E-mail:

    M.C. Lin is with the University of North Carolina, Computer Science Department, Sitterson Hall, CB#3175, Chapel Hill, NC 27599-3175.


For information on obtaining reprints of this article, please send e-mail to:

Anthony Steed received the PhD degree from Queen Mary College, University of London in 1996. He is a reader (associate professor) in Virtual Environments at University College London. He is head of the Virtual Environments and Computer Graphics (VECG) group which currently numbers more than 20 academic staff, researchers, and doctoral students. His research interests are in very large 3D model rendering, immersion and presence in virtual environment displays, and interaction and collaboration between users of virtual environments systems. His long-term vision for virtual environments is that they should faithfully reproduce the real world at a distance. He has more than 100 refereed publications, and is coauthor of the book Computer Graphics and Virtual Environments: From Realism to Real-Time (Addison Wesley). He is also head of the Engineering Doctorate Center in Virtual Environments, Imaging, and Visualization, which funds doctorates in collaboration with industry.

William Sherman received the MS degree in computer science from the University of Illinois at Urbana-Champaign (UIUC). He is the technical director of the Center for Advanced Visualization, Computation, and Modeling (CAVCaM) at the Desert Research Institute (DRI) in Reno, Nevada. In 1989, he joined the scientific visualization team at the National Center for Supercomputing Applications (NCSA). At NCSA, he was responsible for the virtual reality lab from 1992 through 2004. In 2004, he joined the faculty at DRI to create a new virtual reality laboratory for the development of scientific visualization and training applications. His research interests span the entire field of virtual reality, with a focus on integration libraries and applications for science, education, and training. He is coauthor of the book Understanding Virtual Reality (Morgan Kaufmann). He is on the editorial board of the International Journal of Virtual Reality and is the general chair of the 2008 IEEE Conference on Virtual Reality.

Ming C. Lin received the PhD degree in electrical engineering and computer science from the University of California, Berkeley. She is currently the Beverly W. Long Distinguished Professor of Computer Science at the University of North Carolina (UNC) at Chapel Hill. She has received several honors and awards, including the US National Science Foundation (NSF) Young Faculty Career Award in 1995, the Honda Research Initiation Award in 1997, the UNC/IBM Junior Faculty Development Award in 1999, the UNC Hettleman Award for Scholarly Achievements in 2003, and six best paper awards at international conferences on computer graphics and virtual reality. Her research interests include physically based modeling, haptics, real-time 3D graphics for virtual environments, robotics, and geometric computing. She has (co)authored more than 170 refereed publications and coedited/authored three books: Applied Computational Geometry (Springer-Verlag), High-Fidelity Haptic Rendering (Morgan-Claypool), and Haptic Rendering: Foundations, Algorithms, and Applications (AK Peters). She has served on nearly 70 program committees of leading conferences on virtual reality, computer graphics, robotics, haptics, and computational geometry, and has cochaired more than 15 international conferences and workshops. She is the Associate Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics, a member of four editorial boards, and a guest editor for more than a dozen special issues of scientific journals and technical magazines. She has also served on four steering committees and advisory boards of international conferences, as well as six technical advisory committees constituted by government organizations and industry. She is a member of the IEEE and the IEEE Computer Society.