IEEE, pp. 485-486
Virtual Reality (VR) systems combine computer graphics and interactive techniques in many interesting and novel ways. With recent advances in immersive displays, graphics hardware, commodity computing platforms, authoring software, and human factors studies, the field of virtual reality has seen tremendous progress in the development of systems that fit users' needs and expectations. The IEEE Virtual Reality (VR) Conference is the world's premier conference for disseminating the latest research results in virtual reality. IEEE VR 2007 was held in Charlotte, North Carolina. This special section comprises significantly revised and expanded versions of four of the best papers selected from the Proceedings of IEEE VR 2007. We relied heavily on the reviewers of this special section, and we thank them warmly for their diligent work.
Display systems remain one of the key challenges in virtual reality, and autostereoscopic displays are among the dominant display technologies. Existing static-barrier autostereoscopic displays suffer from a fixed view-distance range, slow response to head movements, and a fixed stereo operating mode. The paper "Advances in the Dynallax Solid-State Dynamic Parallax Barrier Autostereoscopic Visualization Display System" by Tom Peterka, Robert L. Kooima, Daniel J. Sandin, Andrew Johnson, Jason Leigh, and Thomas A. DeFanti addresses several of these deficiencies. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. The approach affords further benefits, including an expanded view-distance working range, reduced sensitivity to system latency during head movement, elimination of physical barrier registration, and support for two independently tracked viewers, each with their own autostereo perspective of the virtual world. Applications can thus mix 2D text and images, 3D monoscopic scenes, single-viewer 3D autostereo scenes, multiple users interacting with their own perspectives of the same scene, and untracked multiview panoramagrams. With smaller form factors, pervasive Dynallax commodity desktop and laptop displays can be envisioned that support 2D and 3D modes seamlessly across applications. Although this paper demonstrates initial success, the greatest advantages are yet to be realized: a wall or an entire room tiled with Dynallax screens, with multiple modes simultaneously active without regard to physical tile borders.
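The basic geometry underlying a two-view parallax barrier can be sketched with similar triangles. The following is an illustrative calculation only, not the Dynallax implementation; the symbol names (gap g, pixel pitch p, interocular distance e) and the sample values are assumptions for the example.

```python
# Illustrative two-view parallax barrier geometry (NOT the Dynallax
# implementation): with a gap g between the barrier and the pixel plane,
# pixel pitch p, and interocular distance e, similar triangles give the
# optimal viewing distance and the barrier (slit) pitch.

def optimal_view_distance(g, p, e):
    """Distance at which adjacent pixel columns separate into the
    left and right eyes."""
    return g * e / p

def barrier_pitch(p, d, g):
    """Slit spacing so the barrier pattern repeats every two pixel
    columns as seen from viewing distance d."""
    return 2 * p * d / (d + g)

g = 0.004   # barrier-to-pixel gap in metres (assumed)
p = 0.0003  # pixel pitch in metres (assumed)
e = 0.065   # average interocular distance in metres

d = optimal_view_distance(g, p, e)  # optimal viewing distance
b = barrier_pitch(p, d, g)          # slightly less than two pixel pitches
```

A static barrier fixes b and hence d at manufacture; a dynamic barrier in the spirit of Dynallax can recompute these parameters in real time as a tracked viewer moves, which is what relaxes the fixed view-distance restriction described above.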
The next two papers describe improved design and usability engineering of mixed reality (MR) and augmented reality (AR) systems and applications, where the goal is to mix real objects with synthetically generated information. In their paper "Heads Up and Camera Down: A Vision-Based Tracking Modality for Mobile Mixed Reality," Stephen DiVerdi and Tobias Höllerer describe a novel concept, Anywhere Augmentation, intended to lower the initial time and cost of building mixed reality systems, bridging the gap between researchers in the field and everyday users. They introduce the GroundCam, consisting of a camera and an orientation tracker, for both indoor and outdoor applications. The GroundCam tracking modality is a vision-based local tracker with high resolution, good short-term accuracy, and an update rate appropriate for interactive VR applications. The feasibility of a hybrid tracker, coupling the GroundCam with a GPS receiver, has also been demonstrated, as has a discrete beacon-based wide-area sensor. The GroundCam compares favorably to other similar tracking modalities. In addition, it is cheap and readily available, and requires almost no time to set up in a new environment, making it suitable for a wide range of applications and accessible to novice users.
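The motivation for such a hybrid is that a local tracker drifts while an absolute sensor is noisy but drift-free. One common way to combine them is a simple complementary blend, sketched below; this is an illustration of the general pattern, not the paper's actual filter, and all names and numbers are assumptions.

```python
# Hedged illustration of hybrid tracking: a high-rate local tracker
# (e.g., vision-based odometry such as the GroundCam) accumulates drift,
# while a low-rate absolute sensor (e.g., GPS) is noisy but drift-free.
# A complementary blend pulls the local estimate toward each absolute
# fix, bounding the drift. This is NOT the paper's filter.

def fuse(position, delta, gps_fix=None, alpha=0.1):
    """Advance position by the local tracker's delta; when an absolute
    fix is available, pull the estimate toward it by factor alpha."""
    x = position[0] + delta[0]
    y = position[1] + delta[1]
    if gps_fix is not None:
        x += alpha * (gps_fix[0] - x)
        y += alpha * (gps_fix[1] - y)
    return (x, y)

# Simulate walking 10 m in x: the local tracker overshoots by 10%
# (0.11 m reported per 0.1 m step); a GPS fix arrives every 10 steps.
pos = (0.0, 0.0)
for step in range(100):
    pos = fuse(pos, (0.11, 0.0))
    if step % 10 == 9:
        truth = ((step + 1) * 0.1, 0.0)
        pos = fuse(pos, (0.0, 0.0), gps_fix=truth, alpha=0.5)
```

Without the periodic fixes the estimate would end at 11 m; with them, the drift error stays bounded near a fraction of a metre, which is the qualitative benefit the hybrid GroundCam/GPS tracker provides.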
Designing effective user interfaces for emerging technologies, such as VR, AR, and MR, that have no established design guidelines or interaction metaphors is an important area in VR, as these technologies can introduce completely new ways for users to perceive and interact with technology and the world around them. In the paper "Usability Engineering for Augmented Reality: Employing User-Based Studies to Inform Design," Joseph L. Gabbard and J. Edward Swan II propose a usability engineering approach that iteratively inserts a series of user-based studies into a traditional usability engineering lifecycle to better inform initial user interface designs. Under this approach, user performance can be explored against combinations of design parameters (i.e., experimental factors and levels) to discover which combinations support the best user performance under various conditions. The approach differs from traditional HCI practice in that basic user interface and interaction issues are explored through user-based studies as part of the usability engineering of a specific application, rather than having application developers draw on a body of established guidelines produced by others through low-level, generic user-based studies. They also describe a case study involving text legibility in outdoor AR to illustrate how user-based studies can inform design.
Realistic simulations of virtual crowds have diverse applications in architectural design, emergency evacuation, urban planning, personnel training, education, and entertainment. The last paper, "Real-Time Path Planning in Dynamic Virtual Environments Using Multiagent Navigation Graphs," by Avneesh Sud, Erik Andersen, Sean Curtis, Ming C. Lin, and Dinesh Manocha, addresses the problem of real-time path planning and navigation for multiple virtual agents moving in a dynamic environment. They introduce a new data structure called the "multi-agent navigation graph," or MaNG, and show how to compute it efficiently using GPU-accelerated discrete Voronoi diagrams. Instead of the traditional first-order Voronoi diagram, they compute the second-order Voronoi diagram of all obstacles and agents, which provides pairwise proximity information for all agents simultaneously. The MaNG is computed by combining the first- and second-order Voronoi graphs for global path planning of multiple virtual agents, and it offers a much more computationally efficient graph structure for path planning among multiple agents in dynamic environments: only one MaNG is needed to plan for n agents, instead of the n first-order Voronoi diagrams required by the classical approach. Furthermore, they present techniques for computing the local dynamics of each agent, using the proximity relationships captured by the MaNG to compute interactions among multiple agents in real time. This approach can be easily integrated with rule-based approaches to modeling avatar behavior for simulating realistic virtual crowds.
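The distinction between first- and second-order Voronoi diagrams is easy to see on a discrete grid. The brute-force sketch below labels each cell with its nearest site (first order) and its unordered nearest pair of sites (second order); it is illustrative only, since the paper computes these diagrams on the GPU, and the site coordinates are assumptions.

```python
# Brute-force discrete first- and second-order Voronoi labeling on a
# small grid (illustrative only; the paper uses GPU-accelerated discrete
# Voronoi diagrams). For each cell we record the nearest site and the
# nearest unordered pair of sites; boundaries where the nearest pair
# changes carry the pairwise proximity information exploited by the MaNG.

def discrete_voronoi(width, height, sites):
    first, second = {}, {}
    for x in range(width):
        for y in range(height):
            ranked = sorted(
                range(len(sites)),
                key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2,
            )
            first[(x, y)] = ranked[0]            # first-order region label
            second[(x, y)] = frozenset(ranked[:2])  # second-order region label
    return first, second

sites = [(2, 2), (7, 2), (4, 7)]  # agent/obstacle positions (assumed)
first, second = discrete_voronoi(10, 10, sites)
```

Every second-order label identifies the two locally closest agents or obstacles for that cell, so a single labeling yields proximity information for all pairs at once, which is why one such structure can replace per-agent first-order diagrams.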
This special section presents a collection of original research contributions in virtual reality for IEEE Transactions on Visualization and Computer Graphics. We hope it will stimulate novel and exciting ideas that further the field.