Guest Editors' Introduction: VR Reborn
NOVEMBER/DECEMBER 1998 (Vol. 18, No. 6) pp. 21-23
0272-1716/98/$31.00 © 1998 IEEE

Published by the IEEE Computer Society
Lawrence Rosenblum, Naval Research Laboratory

Grigore Burdea, Rutgers University

Susumu Tachi, University of Tokyo
As the 1990s began, Howard Rheingold, in his book Virtual Reality,1 examined the previous 25 years of development leading to the research field and industry named virtual reality. The book, written for a mass audience, engendered more than a sense of achievement for the handful of pioneers who laid the groundwork for this new field. It also provided a sense of exuberance about how much realistic immersive environments would affect us. Unfortunately, the excitement turned into unrealizable "hype." The movie The Lawnmower Man portrayed a head-mounted display raising a person's IQ beyond the genius level. Every press report on the subject included the topic of cybersex (which still pervades TV commercials). Fox TV even aired a series called "VR.5."
Inevitably, the public (and, worse, research sponsors) developed entirely unrealistic expectations of the possibilities and the time scale for progress. Many advances occurred on different fronts, but they were rarely synthesized into full-scale systems. Instead, they demonstrated focused topics such as multiresolution techniques for displaying millions of polygons, the use of robotics hardware as force-feedback interfaces, the development of 3D audio, or novel interaction methods and devices. So, as time passed with few systems delivered to real customers for real applications, attention shifted elsewhere. Much of the funding for VR began to involve network issues for telepresence (or telexistence) that would enable remote users, each with their own VR system, to interact and collaborate. Medical, military, and engineering needs drove these advances.
Is the idea of virtual reality a failure? We think not, but the field does face difficult research problems involving many disciplines. Thus, it should have been expected that major progress would require decades rather than months. This holds especially true in the area of systems, which require synthesizing numerous advances. Sometimes, the next advance depends on progress by non-VR researchers. Thus, we may have to wait for the next robotics device, advanced flat-panel display, or new natural language technique before we can take the next step in VR.
Let's examine what's happening in just a few of the key areas.
The Visual Channel
For obvious reasons (without it, you don't have much), the visual channel has received most of the emphasis during this decade, in both the research and commercial arenas. Advances in rendering speed and reductions in memory costs let us display far more detailed virtual environments than was possible a decade ago. Driven by such needs as large-scale disaster relief and antiterrorism operations, a large amount of research has gone into reconstructing urban terrain. Typically, algorithms require a human-in-the-loop for registration—the fly-through of the Berkeley campus shown at the Siggraph 1997 film show is an excellent example of this technology. However, several university efforts seek to develop fully automated methods. Commercial systems available for both urban reconstruction and more general modeling problems have advanced the ability to generate scenes for VR systems.
As we all know, a fundamental problem in VR is the need to render ever larger scenes in response to the user's movements. Latency above 0.1 second degrades the illusion of immersion, and even lower latencies are desirable. Thus, VR needs have driven several threads of computer graphics research aimed at faster rendering. These include
  • Multiresolution: An object not being examined closely doesn't need a very detailed, complex geometrical description. Thus, researchers have developed algorithms that vary the number of polygons used to render an object depending on the user's distance from and orientation with respect to it (see the sketch following this list). A major thread of recent graphics research, this approach has had significant successes.
  • Texture mapping: The advent of cheap texture-map memory permits storing many objects as textures. Of course, we can't interact with textured objects as if they were real objects. However, recently developed algorithms replace textured objects with modeled objects when the user gets "close enough" to them. Again, this increases rendering speed.
  • Image-based rendering: To obtain even faster speeds, algorithms are being developed that essentially interpolate between precomputed scenes. Performing this interpolation without introducing artifacts remains difficult, but the speedups are significant.
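To make the multiresolution idea concrete, the following sketch selects among precomputed meshes by viewer distance. It is a minimal illustration of the technique, not any particular published algorithm; the structure names, polygon counts, and switch distances are assumptions.

    // Minimal sketch of distance-based level-of-detail (LOD) selection.
    // All thresholds and mesh data below are illustrative assumptions.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { double x, y, z; };

    double distance(const Vec3& a, const Vec3& b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Each object carries several precomputed meshes, coarsest last.
    struct LODObject {
        Vec3 position;
        std::vector<int> polygonCounts;      // e.g., {20000, 4000, 800, 150}
        std::vector<double> switchDistances; // meters; one fewer than meshes
    };

    // Pick the coarsest mesh whose switch distance the viewer has passed.
    int selectLOD(const LODObject& obj, const Vec3& viewer) {
        double d = distance(obj.position, viewer);
        int level = 0;
        while (level < (int)obj.switchDistances.size() &&
               d > obj.switchDistances[level])
            ++level;
        return level;
    }

    int main() {
        LODObject statue{{0, 0, 100}, {20000, 4000, 800, 150}, {10, 50, 200}};
        Vec3 viewer{0, 0, 0};
        int lod = selectLOD(statue, viewer);
        std::printf("distance 100 m -> LOD %d (%d polygons)\n",
                    lod, statue.polygonCounts[lod]);
        return 0;
    }

A real system would add hysteresis at the switch distances (or blend between levels) to avoid visible popping as the user moves.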
Similar advances have taken place in lighting, shadowing, and other computer graphics algorithms for realistic rendering.
Software for VR
While some commercial packages handle VR development, most VR research laboratories find it necessary to develop in-house software suited to their research and applications interests. What has changed is the underlying graphics. Early in the decade, these in-house systems included their own rendering components. Today, advanced architectures are built atop commercial software—most often SGI Performer.
In 1990, little software existed for performing distributed VR. Today, DIVE, Bamboo, Cavern, Spline, and several other software systems offer users a distributed VR capability. The emerging need to perform fine-grained interactions with multiple users will require additional developments.
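One technique underlying such distributed systems is worth sketching: dead reckoning, the DIS-style update-thinning scheme in which each host extrapolates remote entities forward in time and the owning host publishes a new state only when that extrapolation would drift too far from the truth. The sketch below reduces this to one axis; the structures, gains, and tolerances are illustrative assumptions, not the API of DIVE, Bamboo, Cavern, or Spline.

    // Minimal one-axis sketch of dead reckoning for distributed VR.
    // Structures, gains, and tolerances are illustrative assumptions.
    #include <cmath>
    #include <cstdio>

    struct State { double pos, vel; };  // last state published to peers

    // Remote hosts extrapolate the published state forward in time.
    double extrapolate(const State& s, double dt) { return s.pos + s.vel * dt; }

    // Publish only when remote hosts' extrapolation drifts too far
    // from the truth known to the owning host.
    bool needsUpdate(const State& published, double sincePublish,
                     double truePos, double tolerance) {
        return std::fabs(extrapolate(published, sincePublish) - truePos)
               > tolerance;
    }

    int main() {
        State published{0.0, 1.0};
        double sincePublish = 0.0, truePos = 0.0, trueVel = 1.0;
        for (int step = 0; step < 50; ++step) {
            sincePublish += 0.1;
            trueVel += 0.05;   // entity accelerates away from the prediction
            truePos += trueVel * 0.1;
            if (needsUpdate(published, sincePublish, truePos, 0.5)) {
                published = {truePos, trueVel};  // send over the network here
                sincePublish = 0.0;
                std::printf("publish at pos %.2f\n", truePos);
            }
        }
        return 0;
    }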
VR Display Technology
At the turn of the last decade, head-mounted displays predominated. Their limitations are well known: low resolution, limited field of view, ergonomic weaknesses, heavy weight, and the unwillingness of users to stay outside the real world for long periods of time. The 1990s saw a paradigm shift to projective displays that keep viewers in their natural environment. The two most prominent of these, the Responsive Workbench and the CAVE, have been described in CG&A's Projects in VR department several times over the past five years. Both use see-through, stereoscopic shutter glasses to generate 3D images. Current advances in generating lighter, sharper HMDs let low-budget VR researchers use them.
A variety of display technologies possess the potential to improve VR systems if they come to successful fruition. These include head-tracked autostereoscopic displays and retinal tracking displays that directly scan images on the eye using low-power lasers. The retinal tracking display, now under commercial development, has potential applications to outdoor augmented reality. It also serves as an illustration of the limitations imposed by the interdisciplinary nature of VR. Certain virtual and augmented reality systems would benefit from such a display. However, a lightweight, color retinal tracking system requires that a blue diode laser replace the bulky blue gas laser currently used. This is a problem for laser physics, not VR. It's a complex, interrelated world!
Interfaces and Nonvisual Modalities
Use of the other senses lags far behind the visual channel, although we see significant advances taking place.
Acoustics
Three-dimensional acoustics for VR has proved disappointing. Commercial systems to produce 3D sound are a decade old, but few implementations of this technology have made it into VR systems. The problem of 3D sound in an open room is more difficult than 3D sound through earphones. However, the CAVE provides an ideal display in which to incorporate 3D sound, and we know of laboratories performing the requisite research. Such 3D sound should not merely be directional, but should include second-order effects such as reverberation. Implementing sound as a cue in real, deliverable VR systems offers another area of opportunity.
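To give a flavor of what directional sound requires, the sketch below computes the interaural time difference (ITD), the arrival-time cue between the two ears, using the classical Woodworth spherical-head approximation. The head radius is an assumed typical value; a real system would add intensity, spectral (HRTF), and reverberation cues on top of this.

    // Minimal sketch of the Woodworth interaural-time-difference model:
    // ITD = (r / c) * (theta + sin(theta)), theta = azimuth from straight ahead.
    // The head radius below is an assumed typical value.
    #include <cmath>
    #include <cstdio>

    const double kPi = 3.14159265358979;

    double itdSeconds(double azimuthRad, double headRadiusM = 0.0875,
                      double speedOfSoundMps = 343.0) {
        return (headRadiusM / speedOfSoundMps)
               * (azimuthRad + std::sin(azimuthRad));
    }

    int main() {
        for (double deg : {0.0, 30.0, 60.0, 90.0}) {
            double rad = deg * kPi / 180.0;
            std::printf("azimuth %5.1f deg -> ITD %6.1f microseconds\n",
                        deg, itdSeconds(rad) * 1e6);
        }
        return 0;
    }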
Haptics
Touch and force-feedback interactions with virtual objects are an important component of simulation realism. Actuator technology cannot yet produce the requisite feedback without significantly limiting the user's freedom of motion or causing fatigue due to weight. Newer technologies (see Tachi's article in this issue or the earlier article on the Phantom in the Projects in VR department), although far from perfect, do begin to address critical needs in medical, military, and engineering applications. Better and cheaper products will follow.
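The simplest force-rendering scheme behind such interfaces is the penalty method: when the device's probe penetrates a virtual surface, push back proportionally to the penetration depth. The sketch below shows this for a flat wall; the stiffness and force cap are assumed values, not those of the Phantom or any other specific device.

    // Minimal sketch of penalty-based haptic rendering against a virtual wall
    // at x = 0 (free space is x > 0). Constants are illustrative assumptions;
    // real haptic loops run near 1 kHz.
    #include <algorithm>
    #include <cstdio>

    double wallForce(double probeX, double stiffness, double maxForce) {
        if (probeX >= 0.0) return 0.0;   // probe not touching the wall
        double f = -stiffness * probeX;  // Hooke's law on penetration depth
        return std::min(f, maxForce);    // respect actuator limits
    }

    int main() {
        const double kStiffness = 800.0; // N/m, assumed
        const double kMaxForce = 6.0;    // N, a typical small-device limit
        for (double x : {0.01, 0.0, -0.002, -0.005, -0.02}) {
            std::printf("probe at %+.3f m -> force %.2f N\n",
                        x, wallForce(x, kStiffness, kMaxForce));
        }
        return 0;
    }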
Olfactory
Research into incorporating smell into virtual environments lags well behind both sound and haptics. Medical doctors tell us that smell provides an important cue in the operating room, so this isn't a topic to ignore. However, how to do it remains largely a mystery.
Locomotion
Navigating through virtual worlds also requires improved natural interaction. Mostly, we hold a wand or equivalent 3D extension of 2D devices and use these to navigate—hardly a natural interaction paradigm. Other efforts have included bicycle- and unicycle-type devices and treadmills to simulate walking and running. Some researchers are now pursuing a novel idea: walking in place to navigate, using shoes with built-in sensors. The VR software then interprets the foot motions.
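A minimal sketch of that idea follows: heel-strike events from the shoe sensors advance the viewpoint by a fixed stride along the gaze direction, with a debounce interval to reject sensor chatter. The event interface, stride gain, and timing constants are assumptions for illustration.

    // Minimal sketch of a walking-in-place locomotion controller.
    // The sensor interface and constants are illustrative assumptions.
    #include <cstdio>

    struct WalkController {
        double position = 0.0;      // meters traveled along gaze direction
        double lastStepTime = -1.0; // seconds; -1 means no step yet
        double strideGain = 0.7;    // meters advanced per detected step

        // Called whenever a shoe sensor reports a heel strike.
        void onHeelStrike(double timeSec) {
            // Debounce: real steps arrive at least ~0.25 s apart.
            if (lastStepTime >= 0.0 && timeSec - lastStepTime < 0.25) return;
            lastStepTime = timeSec;
            position += strideGain; // advance the viewpoint one stride
        }
    };

    int main() {
        WalkController walk;
        // Simulated sensor stream; the 1.02 s event is switch bounce.
        for (double t : {0.5, 1.0, 1.02, 1.5, 2.1}) {
            walk.onHeelStrike(t);
            std::printf("t=%.2f s  position=%.2f m\n", t, walk.position);
        }
        return 0;
    }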
Multimodal interaction
We know how to use wands, gestures, speech recognition, and even natural language. However, 3D interaction is still fighting an old war. We need multimodal systems that integrate the best interaction methods so that, someday, 3D VR systems can meet that Holy Grail of the human-computer-interface community—having the computer successfully respond to "Put that there."
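The classic "Put that there" demonstration works by fusing the deictic words of a recognized utterance with whatever the pointing gesture selected at the moment each word was spoken. The sketch below shows only that fusion step; the recognizer output format and the picking function are illustrative stand-ins, not any real system's API.

    // Minimal sketch of deictic multimodal fusion ("Put that there").
    // The timestamped word list and picking function are assumptions.
    #include <cstdio>
    #include <map>
    #include <string>

    // Stand-in for ray casting: object under the pointing ray at time t.
    std::string pickedObjectAt(double t) {
        static const std::map<double, std::string> samples = {
            {1.0, "red chair"}, {2.0, "corner of the room"}};
        auto it = samples.lower_bound(t);
        return it == samples.end() ? "nothing" : it->second;
    }

    int main() {
        // Speech recognizer output: words with their timestamps.
        struct Word { std::string text; double time; };
        Word utterance[] = {{"put", 0.5}, {"that", 1.0}, {"there", 2.0}};

        std::string object, place;
        for (const Word& w : utterance) {
            if (w.text == "that")  object = pickedObjectAt(w.time);
            if (w.text == "there") place = pickedObjectAt(w.time);
        }
        std::printf("command: move '%s' to '%s'\n",
                    object.c_str(), place.c_str());
        return 0;
    }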
Rebirth
We predict that the next decade will see extensive growth in virtual reality, a process already beginning. The drivers of this growth include
  • Cost: A high-end graphics workstation plus VR displays and peripherals costs some $250K today. PC systems, costing an order of magnitude less and equipped with ample texture memory and special graphics boards, increasingly can drive VR applications. Display technology is also becoming cheaper. Lower costs will allow many more people, in both the research and industrial communities, to explore the utility of VR systems.
  • Software architectures: After a decade, we're now seeing software, both research and commercial, that others can use. This keeps every system builder from having to reinvent the wheel and repeat old mistakes.
  • Confluence: We see more interdisciplinary teams capable of combining the separate advances discussed above. This will be a critical factor for moving VR into the mainstream.
  • Fielded systems: Unlike the situation at the start of the decade, some VR systems now have been successfully field tested. The last few years of CG&A contain articles about the Georgia Institute of Technology system for treating acrophobia, the German National Research Center (GMD) system for automotive design, the Naval Research Laboratory systems for shipboard firefighting and for command and control, and the NASA/University of Houston system that trained astronauts for the Hubble Space Telescope repair mission. Such systems demonstrate that VR today can help solve real problems.
The articles in this special issue illustrate many of the points discussed above. We chose them for quality, diversity of topic, and readership interest from the best papers of VRAIS 98 (now renamed the IEEE Virtual Reality Conference Series). The authors significantly revised and updated the selected papers following extensive additional review.
Under funding from the US Office of Naval Research, one of VR's pioneers, Frederick Brooks of the University of North Carolina, is visiting leading sites to examine current VR systems and see what's working and what's failing (and why). Brooks' findings will be the topic of his keynote talk at IEEE VR 99 in Houston, Texas next March. Those who previously have had the pleasure of hearing Brooks talk know that this will be a defining moment for VR, with critical insights into how the field is progressing and what advances it needs to develop usable systems. See you there!

References
1. H. Rheingold, Virtual Reality, Summit Books, New York, 1991.
Lawrence Rosenblum is Director of Virtual Reality Systems and Research in the Information Technology Division of the Naval Research Laboratory and Program Officer for Visualization and Computer Graphics at the Office of Naval Research. His research interests include VR, scientific visualization, and human-computer interfaces. Rosenblum received a BA in mathematics from Queens College (CUNY) and MS and PhD degrees in mathematics from the Ohio State University. He serves on the editorial boards of IEEE Transactions on Visualization and Computer Graphics, Virtual Reality, and IEEE CG&A, where he edits the Projects in VR department. He is a co-founder of the IEEE Visualization conference series and a director of the IEEE Technical Committee on Computer Graphics. He is a senior member of the IEEE and a member of the IEEE Computer Society, ACM, Siggraph, and the American Geophysical Union.

Grigore Burdea obtained his PhD from New York University in 1987. He is an associate professor of Electrical and Computer Engineering and Director of the Human-Machine Interface Laboratory at Rutgers University. Burdea's current research interests are force feedback for virtual reality and its medical applications. He authored the books Virtual Reality Technology and Force and Touch Feedback for Virtual Reality, both published by John Wiley & Sons, and co-edited the book Computer-Aided Surgery, published by the MIT Press. Burdea will be the general chair of the IEEE Virtual Reality 2000 conference.

Susumu Tachi is a professor in the Department of Mathematical Engineering and Information Physics at the University of Tokyo. His present research covers telexistence in real and virtual environments, real-time remote robotics (R-Cubed), augmented reality, and haptic interfaces for virtual reality. See http://www.star.rcast.u-tokyo.ac.jp/. Tachi received BE, MS, and PhD degrees in mathematical engineering and information physics from the University of Tokyo in 1968, 1970, and 1973, respectively. He is a founding director of the Robotics Society of Japan, a Fellow of the Society of Instrument and Control Engineers (SICE), president of the Virtual Reality Society of Japan (VRSJ), and chairman of the IMEKO (International Measurement Confederation) Technical Committee 17 on Measurement in Robotics.