, Naval Research Laboratory
, Rutgers University
, University of Tokyo
Pages: pp. 21-23
As the 1990s began, Howard Rheingold in his book Virtual Reality examined the previous 25 years of development leading to the research field and industry named virtual reality. The book, written for a mass audience, engendered more than a sense of achievement for the handful of pioneers who laid the groundwork for this new field. It also provided a sense of exuberance about how much realistic immersive environments would affect us. Unfortunately, the excitement turned into unrealizable hype. The movie Lawnmower Man portrayed a head-mounted display raising a person's IQ beyond the genius level. Every press report on the subject included the topic of cybersex (which still pervades TV commercials). Fox TV even aired a series called "VR5."
Inevitably, the public (and, worse, research sponsors) developed entirely unrealistic expectations of the possibilities and the time scale for progress. Many advances occurred on different fronts, but they rarely synthesized into full-scale systems. Instead, they demonstrated focused topics such as multiresolution techniques for displaying millions of polygons, the use of robotics hardware as force-feedback interfaces, the development of 3D audio, or novel interaction methods and devices. So, as time passed with few systems delivered to real customers for real applications, attention shifted elsewhere. Much of the funding for VR began to involve network issues for telepresence (or telexistence) that would enable remote users, each with their own VR system, to interact and collaborate. Medical, military, and engineering needs drove these advances.
Is the idea of virtual reality a failure? We think not, but the field does face difficult research problems involving many disciplines. Thus, it should have been expected that major progress would require decades rather than months. This holds especially true in the area of systems, which require synthesizing numerous advances. Sometimes, the next advance depends on progress by non-VR researchers. Thus, we may have to wait for the next robotics device, advanced flat-panel display, or new natural language technique before we can take the next step in VR.
Let's examine what's happening in just a few of the key areas.
For obvious reasons (without it, you don't have much), the visual channel has received most of the emphasis during this decade, in both the research and commercial arenas. Advances in rendering speed and reductions in memory costs let us display far more detailed virtual environments than was possible a decade ago. Driven by such needs as large-scale disaster relief and antiterrorism operations, a large amount of research has gone into reconstructing urban terrain. Typically, algorithms require a human-in-the-loop for registration—the fly-through of the Berkeley campus shown at the Siggraph 1997 film show is an excellent example of this technology. However, several university efforts seek to develop fully automated methods. Commercial systems available for both urban reconstruction and more general modeling problems have advanced the ability to generate scenes for VR systems.
As we all know, a fundamental problem in VR is the need to render ever larger scenes in response to the user's movements. Latency above 0.1 second degrades the illusion of immersion, and even lower latency is desirable. Thus, VR needs have driven several threads of computer graphics research aimed at faster rendering. These include
Similar advances have taken place in lighting, shadowing, and other computer graphics algorithms for realistic rendering.
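The latency budget mentioned above can be made concrete with a small sketch. The following is purely illustrative (the cost model, constants, and function names are our assumptions, not any system described in this issue): it greedily refines per-object levels of detail, nearest objects first, while the estimated frame time stays under the 0.1-second threshold.

```python
# Illustrative sketch: choose a level of detail (LOD) per object so the
# estimated render time stays within a latency budget. The per-polygon
# cost constant and data layout are assumptions for demonstration only.

LATENCY_BUDGET_S = 0.1       # immersion degrades above ~0.1 s (see text)
COST_PER_POLYGON_S = 1e-6    # assumed per-polygon render cost

def choose_lods(objects, budget_s=LATENCY_BUDGET_S):
    """Each object is (distance, [coarse_polys, ..., fine_polys]).
    Start every object at its coarsest LOD, then refine objects
    (nearest first) while the total estimated cost fits the budget.
    Returns (chosen LOD index per object, estimated frame time)."""
    choice = [0] * len(objects)
    cost = sum(obj[1][0] for obj in objects) * COST_PER_POLYGON_S
    order = sorted(range(len(objects)), key=lambda i: objects[i][0])
    improved = True
    while improved:
        improved = False
        for i in order:
            _, lods = objects[i]
            if choice[i] + 1 < len(lods):
                extra = (lods[choice[i] + 1] - lods[choice[i]]) \
                        * COST_PER_POLYGON_S
                if cost + extra <= budget_s:
                    choice[i] += 1
                    cost += extra
                    improved = True
    return choice, cost
```

Real multiresolution systems use far more sophisticated error metrics, but the budget-driven structure is the same.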
While some commercial packages handle VR development, most VR research laboratories find it necessary to develop in-house software suited to their research and applications interests. What has changed is the underlying graphics. Early in the decade, these in-house systems included their own rendering components. Today, advanced architectures are built atop commercial software—most often SGI Performer.
In 1990, little software existed for performing distributed VR. Today, DIVE, Bamboo, Cavern, Spline, and several other software systems offer users a distributed VR capability. The emerging need to perform fine-grained interactions with multiple users will require additional developments.
At the turn of the last decade, head-mounted displays predominated. Their limitations are well known: low resolution, limited field of view, poor ergonomics, heavy weight, and the unwillingness of users to remain cut off from the real world for long periods of time. The 1990s saw a paradigm shift to projective displays that keep viewers in their natural environment. The two most prominent of these, the Responsive Workbench and the CAVE, have been described in CG&A's Projects in VR department several times over the past five years. Both use see-through, stereoscopic shutter glasses to generate 3D images. Meanwhile, advances in producing lighter, sharper HMDs have put them within reach of low-budget VR researchers.
A variety of display technologies could improve VR systems if they reach fruition. These include head-tracked autostereoscopic displays and retinal tracking displays that scan images directly onto the eye using low-power lasers. The retinal tracking display, now under commercial development, has potential applications to outdoor augmented reality. It also illustrates the limitations imposed by the interdisciplinary nature of VR. Certain virtual and augmented reality systems would benefit from such a display. However, a lightweight, color retinal tracking system requires that a blue diode laser replace the bulky blue gas laser currently used. This is a problem for laser physics, not VR. It's a complex, interrelated world!
Using other senses lags far behind the visual channel, although we see significant advances taking place.
Three-dimensional acoustics for VR has proved disappointing. Commercial systems to produce 3D sound are a decade old, but few implementations of this technology have made it into VR systems. The problem of 3D sound in an open room is more difficult than 3D sound through earphones. However, the CAVE provides an ideal display in which to incorporate 3D sound, and we know of laboratories performing the requisite research. Such 3D sound should not merely be directional, but should include second-order effects such as reverberation. Implementing sound as a cue in real, deliverable VR systems offers another area of opportunity.
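To make the directional-sound discussion concrete, here is a sketch of the simplest first-order cues a headphone-based system computes: interaural time difference (via Woodworth's classical approximation) and inverse-distance attenuation. The constants and function names are our illustrative assumptions; production systems layer head-related transfer functions and the second-order effects, such as reverberation, that the text calls for.

```python
# Illustrative first-order 3D-audio cues (assumptions, not from the
# article): interaural time difference (ITD) and distance attenuation.
import math

SPEED_OF_SOUND = 343.0   # m/s, air at room temperature
HEAD_RADIUS = 0.0875     # m, average adult head radius (assumed)

def itd_seconds(azimuth_deg):
    """Woodworth's approximation of the interaural time difference for
    a source at the given azimuth (0 = straight ahead, 90 = hard right).
    The far ear receives the wavefront later by (r/c)(theta + sin theta)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def distance_gain(distance_m, ref_m=1.0):
    """Inverse-distance amplitude attenuation, clamped at the reference
    distance so nearby sources don't blow up the gain."""
    return ref_m / max(distance_m, ref_m)
```

A renderer would delay and scale the far-ear signal by these amounts per source, per frame.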
Touch and force-feedback interactions with virtual objects are an important component of the simulation's realism. Current actuator technology cannot produce the requisite feedback without significantly limiting the user's freedom of motion or inducing fatigue through sheer weight. Newer technologies (see Tachi's article in this issue or the earlier article on the Phantom in the Projects in VR department), although far from perfect, do begin to address critical needs in medical, military, and engineering applications. Better and cheaper products will follow.
Research into incorporating smell into virtual environments lags well behind both sound and haptics. Medical doctors tell us that smell provides an important cue in the operating room, so this isn't a topic to ignore. However, how to do it remains largely a mystery.
Navigating through virtual worlds also requires improved natural interaction. Mostly, we hold a wand or equivalent 3D extension of 2D devices and use these to navigate—hardly a natural interaction paradigm. Other efforts have included bicycle- and unicycle-type devices and treadmills to simulate walking and running. Some researchers are now pursuing a novel idea: walking in place to navigate, using shoes with built-in sensors. The VR software then interprets the foot motions.
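The walking-in-place idea above reduces to a small signal-processing loop. The following sketch is hypothetical (thresholds, stride length, and names are our assumptions, not any published system): it detects foot-down events in a shoe pressure trace and maps step cadence to a virtual walking speed.

```python
# Hypothetical walking-in-place sketch: detect steps from shoe-sensor
# pressure samples and convert step cadence into forward motion.
# The threshold and stride length are illustrative assumptions.

STEP_THRESHOLD = 0.6     # normalized pressure above which a foot is "down"
METERS_PER_STEP = 0.7    # assumed virtual stride length

def count_steps(pressure_samples, threshold=STEP_THRESHOLD):
    """Count rising edges (foot-down events) in a pressure trace,
    ignoring samples while the foot stays planted."""
    steps, down = 0, False
    for p in pressure_samples:
        if p >= threshold and not down:
            steps += 1
            down = True
        elif p < threshold:
            down = False
    return steps

def forward_speed(pressure_samples, duration_s):
    """Map step cadence to a virtual walking speed in m/s."""
    return count_steps(pressure_samples) * METERS_PER_STEP / duration_s
```

Real implementations must also reject in-place fidgeting and infer heading, typically from head or torso tracking.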
We know how to use wands, gestures, speech recognition, and even natural language. However, 3D interaction is still fighting an old war. We need multimodal systems that integrate the best interaction methods so that, someday, 3D VR systems can meet that Holy Grail of the human-computer interface community—having the computer successfully respond to "Put that there."
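The essence of "Put that there" is binding deictic words to whatever the user was pointing at when each word was spoken. This toy sketch (all names and the data layout are illustrative assumptions) shows that timestamp-based fusion in miniature.

```python
# Toy multimodal fusion in the "Put that there" spirit: resolve the
# deictic words in a speech command against the object the pointing
# ray intersected at the moment each word was spoken. Illustrative only.

def fuse(command_words, pointed_objects):
    """command_words: list of (word, timestamp) pairs from a recognizer.
    pointed_objects: dict mapping timestamp -> name of the object the
    pointing ray hit at that instant. Deictic words ('that', 'there')
    are replaced by their pointed-at targets; others pass through."""
    resolved = []
    for word, t in command_words:
        if word in ("that", "there"):
            resolved.append(pointed_objects.get(t, word))
        else:
            resolved.append(word)
    return resolved
```

Production systems fuse continuous gaze, gesture, and speech streams with uncertainty, but the core alignment problem is the one shown here.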
We predict that the next decade will see extensive growth in virtual reality, a process already beginning. The drivers of this growth include
The articles in this special issue illustrate many of the points discussed above. We chose them for quality, diversity of topic, and readership interest from the best papers of VRAIS 98 (now renamed the IEEE Virtual Reality Conference Series). The authors significantly revised and updated the selected papers following extensive additional review.
Under funding from the US Office of Naval Research, one of VR's pioneers, Frederick Brooks of the University of North Carolina, is visiting leading sites to examine current VR systems and see what's working and what's failing (and why). Brooks' findings will be the topic of his keynote talk at IEEE VR 99 in Houston, Texas, next March. Those who have had the pleasure of hearing Brooks speak know that this will be a defining moment for VR, with critical insights into how the field is progressing and what advances it needs to deliver usable systems. See you there!