Guest Editors' Introduction: On Apples, Oranges, and the Interdisciplinary Nature of VR

Lawrence Rosenblum, Naval Research Laboratory
Sharon Stansfield, Sandia National Laboratories
Michael Zyda, Naval Postgraduate School

Pages: pp. 20-22

"Wow, virtual reality! How come it doesn't work like in <insert movie, book, or TV series here>?"

"So you do VR, huh? How is what you do different from <insert video game, theme park, or research institute here>?"

Those of us working in virtual reality have no doubt heard such questions many times. Flip responses aside, they're tough to answer. Why? Because it's like being asked the difference between an apple and an orange. Or how an apple differs from a fruit. Or worst of all, why your apple doesn't behave like the one in TV commercials. In reality, VR—that neat little Sci Fi concept—is a multidisciplinary effort covering everything from mechanical engineering to psychophysiology. And it's challenging, far more so than those of us enthusiastically embracing it might have originally thought.

People often ask us who's doing the best VR research. We respond by asking, which aspect of VR? Optics? Virtual humans? Data visualization? Many people do work relevant to VR. Some of them consider themselves VR researchers, some do not. All contribute to this exciting field.

Nowhere was the multidisciplinary nature of our field more evident than at the IEEE Virtual Reality Annual International Symposium (VRAIS 96) held in Santa Clara, California this past April. The technical program contained papers addressing many of the component technologies vital to VR's ultimate success. This special issue, which contains a selection of the best of these contributions, illustrates this diversity in microcosm. The articles here range from data analysis to telepresence and large multiuser virtual worlds, from reducing the number of polygons in a graphical model to reducing a person's fear of flying.

Each article has been expanded from the original and peer reviewed by experts in the relevant technology area. We hope that you find them as exciting as we do. And we extend to you an invitation to join us for VRAIS 97 in Albuquerque, New Mexico, March 2-5, 1997.

In this issue

Renze and Oliver tackle a long-standing problem facing the VR and real-time, interactive 3D graphics community—eliminating surplus polygons and reducing polygon flow to the graphics pipeline. They provide a general algorithm for reducing unstructured discretized data sets. We need reduced polygon synthetic models for virtual environments because graphics workstations run too slowly to process all the polygons we throw at them and still produce real-time frame updates.
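The general idea of reducing surplus polygons can be illustrated with a classic simplification technique, vertex clustering: snap vertices to a coarse grid, merge those that share a cell, and discard triangles that collapse. This is a hypothetical sketch of the concept, not Renze and Oliver's algorithm; the function name and grid-based approach are illustrative assumptions.

```python
from collections import defaultdict

def cluster_decimate(vertices, triangles, cell):
    """Simplify a triangle mesh by vertex clustering (illustrative only,
    not the authors' method): snap each vertex to a grid cell, merge
    vertices that share a cell, and drop triangles that collapse."""
    # Grid cell containing each vertex
    cell_of = [tuple(int(c // cell) for c in v) for v in vertices]
    # Collect the vertices that fall into each occupied cell
    members = defaultdict(list)
    for i, key in enumerate(cell_of):
        members[key].append(i)
    # One representative vertex per cell: the average of its members
    rep_index, new_vertices = {}, []
    for key, idxs in members.items():
        rep_index[key] = len(new_vertices)
        avg = tuple(sum(vertices[i][d] for i in idxs) / len(idxs)
                    for d in range(3))
        new_vertices.append(avg)
    # Re-index triangles; discard any made degenerate by the merge
    new_triangles = []
    for a, b, c in triangles:
        ra, rb, rc = rep_index[cell_of[a]], rep_index[cell_of[b]], rep_index[cell_of[c]]
        if len({ra, rb, rc}) == 3:
            new_triangles.append((ra, rb, rc))
    return new_vertices, new_triangles
```

Coarsening the cell size trades visual fidelity for fewer polygons sent down the graphics pipeline, which is the trade-off at the heart of real-time frame updates.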

The article by Hodges et al. continues their work on using VEs to help people deal with their situational fears. This time they focus on treating fear of flying—sure to be a booming business given the current state of the United States' deregulated and terrorized airline industry. In effect, the Georgia Tech group has brought VR into a mainstream research field while also beginning the all-important research on whether VEs are useful for training. Surprisingly, the traditional flight-simulation community almost never considered doing this. We need more such work in the next year for our field to continue to progress.

VR is heading toward internetworked VEs with large numbers of participants, and the work of Barrus, Waters, and Anderson advances this important area. They aim to achieve social VR with communication and interaction. Their work is one of the first efforts at large-scale, networked VEs not based on the interaction metaphor "shoot with a weapon." They focus on efficiently managing the flow of large amounts of data among large numbers of users by breaking up the virtual world into chunks that can be created, described, and communicated independently. Work like theirs will eventually achieve our ultimate vision of the virtual community.
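The payoff of breaking a virtual world into chunks is that an update need only be sent to users in the same or an adjacent chunk, rather than to everyone. The sketch below is a minimal, hypothetical illustration of that interest-management idea (the chunk size, grid partitioning, and function names are assumptions, not details of Barrus, Waters, and Anderson's system).

```python
CHUNK = 100.0  # hypothetical side length of one world chunk

def chunk_of(pos):
    """Grid cell containing a 2D world position."""
    return (int(pos[0] // CHUNK), int(pos[1] // CHUNK))

def neighbors(cell):
    """The cell itself plus its eight adjacent cells."""
    cx, cy = cell
    return {(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

def recipients(users, sender):
    """Users who should receive the sender's state update: only those
    whose chunk is the sender's chunk or one adjacent to it."""
    nearby = neighbors(chunk_of(users[sender]))
    return {u for u, pos in users.items()
            if u != sender and chunk_of(pos) in nearby}
```

With this filtering, network traffic per user stays roughly proportional to local population density rather than to the total number of participants.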

Risch et al. of Pacific Northwest National Labs highlight corporate and governmental uses of VR technology. Their Starlight system interactively generates information-dense 3D graphical representations of multimedia data interrelationships. Thematically similar documents are assigned to a 3D cluster that can be explored with a mouse click. This gives us an immediate ability to find similar research efforts going on across the world, as long as we have access to on-line documents referencing the work.
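The notion of grouping thematically similar documents can be conveyed with a toy greedy clustering over word-count vectors and cosine similarity. This is a stand-in for illustration only; Starlight's actual analysis pipeline is far more sophisticated, and the threshold and grouping strategy here are assumptions.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_similar(docs, threshold=0.5):
    """Greedy thematic grouping (toy illustration): each document joins
    the first existing group whose seed document it resembles closely
    enough, otherwise it starts a new group."""
    vecs = [Counter(d.lower().split()) for d in docs]
    groups, seeds = [], []
    for i, v in enumerate(vecs):
        for g, s in zip(groups, seeds):
            if cosine(v, s) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
            seeds.append(v)
    return groups
```

Each resulting group would then be rendered as one cluster in the 3D display, with position encoding thematic relatedness.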

Moezzi et al. take an extremely interesting step forward in our ability to track the human body with cameras. While their system is not yet real-time, they have demonstrated outstanding results. With this system, they can generate realistic 3D models of dynamic objects and recreate life-like motion for later use in video games and feature films. The Immersive Video system analyzes and composites recorded videos to create a full 3D sequence of the photographed event, which is then stored for immersive playback using a number of interfaces. Interactive viewers can explore the scene continuously from any perspective. Work like this gives us great appreciation for the technology synthesis that the field of virtual environments has become.


Figure    The Naval Research Laboratory's Responsive (Virtual) Workbench provides a new capability in 3D mission planning. The terrain, as well as dynamic and stationary objects in the scene, appears in 3D to a team of users wearing stereographic shutter glasses. The NRL approach to developing Workbench applications emphasizes natural, easy-to-use multimodal user interfaces such as gesture and speech recognition. (Image courtesy of Lawrence Rosenblum, Naval Research Laboratory)


Figure    Sandia National Laboratories is exploring the use of virtual reality for situational training and mission planning and rehearsal in high-stress, hazardous environments. Here, a trainee/avatar in protective gear rehearses medical triage procedures in response to a toxic chemical or biological incident. (Image courtesy of Sharon Stansfield, Sandia National Laboratories)


Figure    Army Captain Russell Storms moves through the NPSNet virtual environment on the omnidirectional treadmill (ODT). The NPSNetODT demonstration is a joint research effort by USA STRICOM, the NPSNet Research Group, and Virtual Space Devices. (Image courtesy of Michael Zyda, Naval Postgraduate School)

About the Authors

Lawrence Rosenblum is Director of Virtual Reality Systems and Research in the Information Technology Division of the Naval Research Laboratory and Program Officer for Visualization and Computer Graphics at the Office of Naval Research. His research interests include VR, scientific visualization, and human-computer interfaces.
Rosenblum received a BA in mathematics from Queens College (CUNY) and MS and PhD degrees in math from Ohio State University. He serves on the editorial boards of IEEE CG&A, Journal of the Virtual Reality Society, and IEEE Transactions on Visualization and Computer Graphics. In 1994 he was elected chairman of the IEEE Technical Committee on Computer Graphics. In 1995 he received an IEEE Computer Society Outstanding Contribution Certificate for co-founding the IEEE Visualization conference series. He is a senior member of the IEEE and a member of the IEEE Computer Society, ACM, ACM Siggraph, the American Geophysical Union, and Sigma Xi.
Sharon Stansfield has been a senior member of technical staff at Sandia National Laboratories since 1988. She has performed research in such diverse areas as machine intelligence, cognitive models of robotic perception, and medical image interpretation. In 1991 she established the Virtual Reality/Intelligent Simulation Laboratory at Sandia. Funded research projects in this lab have included multiplayer, distributed VR for situational training applied to battlefield medicine, law enforcement, and robot operation in hazardous environments; VR-based hypermedia navigation; virtual humans and their behaviors; and development of real-time avatars.
Stansfield received her PhD in computer science from the University of Pennsylvania in 1987. She is general chair for the 1997 IEEE Virtual Reality International Symposium and a member of several professional societies.
Michael Zyda is a professor and the Academic Associate Chair in the Department of Computer Science at the Naval Postgraduate School, Monterey, California. His main research focus is computer graphics, specifically the development of large-scale, networked 3D virtual environments.
Zyda received a BA in bioengineering from the University of California, San Diego in La Jolla in 1976, an MS in computer science/neurocybernetics from the University of Massachusetts, Amherst in 1978, and a DSc in computer science from Washington University, St. Louis, Missouri in 1984. He is a member of the National Research Council's Committee on Virtual Reality Research and Development, chair of the NRC CSTB Committee on Modeling and Simulation: Competitiveness through Collaboration, senior editor for virtual environments for Presence, and a member of the editorial advisory board of Computers & Graphics. Zyda is also a member of the Technical Advisory Board of the Fraunhofer Center for Research in Computer Graphics, Providence, Rhode Island.