IEEE Computer Graphics and Applications, vol. 21, no. 6, November/December 2001, pp. 22-24. Published by the IEEE Computer Society.
Researchers' experience 10 years ago at GMD is representative of what was then the state of the art in virtual reality. With access to a so-called graphics supercomputer in 1991, GMD researchers waited more than six months for the Data Glove to arrive in Europe and spent another three months getting it running with the SGI. Then, they proudly sat in front of a stereo monitor while using a point-and-fly metaphor to navigate through a 5,000-polygon model of a nearly empty living room. They switched a virtual TV button on with the glove finger, saw a news anchor's still image, and heard her voice. Enthusiastically, they flew around this TV set, over and over ….
Times have changed. VR has become a useful and productive technique that industry uses for product development, data exploration, mission planning, and training. Augmented reality systems are still in the prototype stage, but research systems for medical, engineering, and mobile applications are now being tested.
Today's emerging success for VR and AR research and applications resulted from

    • stable hardware and reasonably priced software, which enabled increasingly powerful and successful VR demonstrations and industrial applications;

    • stable graphics application programming interfaces (APIs) such as OpenGL and Performer;

    • continuously increasing performance, allowing larger data sets to be processed in real time with improved visual quality; and

    • curious researchers exploring industrial applicability.

Today, research groups all over the world are investigating VR and AR. With less expensive projection solutions and PC-based graphics, VR has become affordable for university research and is even used to educate students.
Here we'll briefly review the state of the art in VR and AR systems and technology, point out upcoming solutions, and mention some ideas for future research.
Rendering
We often hear that rendering is solved. Really? APIs like OpenGL, Performer, or the OpenSG Initiative are based on scene-graph rendering of textured polygons, with volumetric data rendered using texture-mapping hardware. Real-time global illumination remains a big challenge, although researchers are investigating interactive ray tracing on PC clusters and developing radiosity techniques based on precomputing a scene's lighting characteristics. Interactive doesn't mean real time but rather a system response fast enough to sustain interactive applications.
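To make the scene-graph idea concrete, here's a minimal sketch of the depth-first, transform-stack traversal such APIs perform. The Node type is our own simplification (not Performer's or OpenSG's actual interface), and an active OpenGL context is assumed, so window setup is omitted:

```cpp
// Minimal scene-graph traversal: a depth-first walk that composes each
// node's local transform on the OpenGL matrix stack. Simplified types,
// not the Performer or OpenSG API; an active GL context is assumed.
#include <GL/gl.h>
#include <vector>

struct Node {
    float local[16];              // column-major local transform
    std::vector<Node*> children;  // subtree below this node
    void (*drawGeometry)();       // geometry callback; null for group nodes
};

void render(const Node* n) {
    glPushMatrix();               // save the parent's accumulated transform
    glMultMatrixf(n->local);      // compose this node's local transform
    if (n->drawGeometry)
        n->drawGeometry();        // issue this node's textured polygons
    for (const Node* c : n->children)
        render(c);                // recurse; siblings see the same parent state
    glPopMatrix();                // restore for the next sibling
}
```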
Newer rendering techniques, such as image-based rendering or surface splatting, are promising ways to reduce the effort of displaying virtual world models. Also, researchers have previously explored using a high-performance visualization server for image generation and a transport network for shipping images to a remote VR display system, and they'll continue to investigate these approaches.
Spatial audio rendering is still an advanced feature for VR and AR systems. We can't build human-to-human communication and collaboration without audio. Researchers have shown that audible feedback is a worthwhile supplement to visual feedback in interactive applications. Olfactory displays (limited to a handful of scents) have been implemented in Cave Automatic Virtual Environment (CAVE)-type systems with fairly good results in training scenarios involving hazardous environments.
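Even minimal spatial audio illustrates the rendering involved: per-ear gains derived from a source's distance and direction. The sketch below is a deliberately crude stand-in for the HRTF filtering real systems use; every name and number in it is illustrative:

```cpp
// Simplest spatial-audio cues: inverse-distance attenuation plus a linear
// stereo pan from the source's direction. Real VR systems use HRTF
// filtering; all names and numbers here are illustrative.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Per-ear gains for a mono source; the listener sits at the origin,
// looking down -z with +x to the right.
void spatialGains(Vec3 src, float refDist, float& left, float& right) {
    float d = std::sqrt(src.x * src.x + src.y * src.y + src.z * src.z);
    float atten = refDist / (d > refDist ? d : refDist);  // clamp near field
    float pan = 0.5f + 0.5f * (src.x / (d > 1e-6f ? d : 1e-6f)); // 0=L, 1=R
    left  = atten * (1.0f - pan);
    right = atten * pan;
}

int main() {
    float l, r;
    spatialGains({2.0f, 0.0f, -2.0f}, 1.0f, l, r);  // source ahead and right
    std::printf("gains: L=%.2f R=%.2f\n", l, r);
}
```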
Haptic displays are a different issue. Haptic rendering is based on physical simulations, which typically don't meet real-time constraints, especially considering the high refresh rate haptic devices demand. We doubt that we'll be able to make haptic rendering and presentation general purpose for VR in the near term. Instead, haptic feedback will stay application specific, with tailored haptic devices.
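A small sketch of the classic penalty-based approach shows why the rates are so demanding: the servo loop must recompute a stiff spring force roughly every millisecond. The constants below are illustrative and not tied to any device API:

```cpp
// Penalty-based haptic rendering against a virtual floor: the servo loop
// recomputes a stiff spring force every millisecond. Constants are
// illustrative and not tied to any particular device API.
#include <cstdio>

const float kStiffness = 800.0f;   // N/m, virtual wall stiffness
const float kRateHz    = 1000.0f;  // haptics needs ~1 kHz, far above the
                                   // 30-60 Hz typical of graphics updates

// Upward force on the tool tip for a floor at y = 0: zero above the
// surface, proportional to penetration depth below it.
float wallForceY(float toolY) {
    float penetration = -toolY;
    return (penetration > 0.0f) ? kStiffness * penetration : 0.0f;
}

int main() {
    float toolY = -0.002f;  // one servo tick: tip is 2 mm inside the floor
    std::printf("force: %.2f N, computed at %.0f Hz\n",
                wallForceY(toolY), kRateHz);
}
```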
Navigation, Interaction, Collaboration
VR environments require user-controlled navigation in virtual space. In the past, this feature was programmed individually; it's now supported by many commercial (and noncommercial) VR packages. Viewers can change the camera position in the scene graph as well as the objects' visual appearance and geometry attributes. Interaction with the virtual objects is more painful. So far, the VR industry hasn't established standards (except de facto standards like pointing and positioning with 6 degrees-of-freedom input devices). Developers have reimplemented basic 2D interaction techniques (like menus) for selecting data or functions in the 3D environment, but experience shows that 2D interaction techniques don't extend well to 3D environments. Researchers are now experimenting with wireless handheld devices, such as personal digital assistants (PDAs), to see whether they can successfully handle 2D-menu-based tasks for VR.
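The oldest de facto navigation standard is still point-and-fly, which reduces to integrating the camera along the wand's pointing ray each frame. Here's a minimal sketch; the Pose type and the hard-coded wand sample are stand-ins for whatever a real tracking API delivers:

```cpp
// Point-and-fly navigation: each frame, advance the camera along the
// tracked wand's pointing direction. The Pose type and the hard-coded
// wand sample stand in for a real tracking API.
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Pose { Vec3 pos; Vec3 dir; };  // 6-DOF sample: position + unit direction

Vec3 camera = {0.0f, 1.7f, 0.0f};     // start at standing eye height (meters)

void fly(const Pose& wand, float speed, float dt) {
    camera.x += wand.dir.x * speed * dt;  // integrate along the pointing ray
    camera.y += wand.dir.y * speed * dt;
    camera.z += wand.dir.z * speed * dt;
}

int main() {
    Pose wand = {{0.0f, 1.2f, 0.0f}, {0.0f, 0.0f, -1.0f}};  // pointing ahead
    for (int frame = 0; frame < 60; ++frame)
        fly(wand, 2.0f, 1.0f / 60.0f);  // 2 m/s for one second at 60 Hz
    std::printf("camera z after 1 s: %.2f m\n", camera.z);
}
```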
Meanwhile, dual-handed user interaction—one device per hand—is common for interacting in VR. The dominant hand does precise interactions while the other performs navigation and positioning tasks. Multiuser interaction within the same virtual scenario needs individual tracking and image generation for each user. Prop-type interaction devices are truly practical, robust, and easy to use and understand: they represent the virtual data set physically in the user's hand and allow complex interaction techniques with only one device (like a handheld PDA) operated with both hands. Multimodal interaction in 3D is under experimentation at several institutions, and this integration of voice, gesture, and other perceptual modalities is a promising advance in VR that will likely become the primary method of interaction in the future.
Human collaboration occurs in virtual and augmented environments when at least two interacting users work cooperatively. Time-sequential solutions—those that pass control to one user at a time—reduce rendering and data consistency problems but are troublesome and unintuitive. Collaboration today is perhaps more a social challenge than a technical one. Users collaborating in a team have their individual expertise, and the system must support the users according to their experience and role in a collaborative task. Therefore, workflow processes in group collaboration must be analyzed carefully to design and realize effective collaborative application scenarios.
The AR and VR Infrastructure
Hardware costs a decade ago limited VR to a handful of research labs and universities. Today, low-cost hardware lets many more researchers participate in AR and VR research and provides economically reasonable solutions for users.
A decade ago, VR was performed with gloves and helmets; most VR systems are now projection based. There are many reasons for this shift. To us, the most significant one is that projection-based virtual environments (PBVEs) let a (small) group of users share an audio-visual experience. CAVE-like configurations, ranging from three- to six-sided, have been set up around the world. Most practical, in terms of the cost-to-application-coverage ratio, is the traditional four-sided one. Being completely immersed in a six-sided environment is an incredible experience, but imagine a user working inside a closed cube for longer than 15 minutes—heat, disorientation, and other inconveniences such as poor audio limit the experience. Many industrial users prefer multichannel walls and cylinders. Cylindrical screens start at 130 degrees and go up to 230 or even 360 degrees for seamless large-screen, high-resolution presentations and images. Very few are equipped with the floor projections that would extend the immersive experience.
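What makes a projection wall feel like a window is the off-axis view frustum, rebuilt each frame from the tracked eye position. Here's a minimal sketch for a single front wall; the wall extents and eye position are illustrative, and the computed values would feed glFrustum:

```cpp
// Head-tracked off-axis projection for one projection wall: the frustum is
// rebuilt each frame from the tracked eye position, so the wall behaves
// like a window. Wall extents and the eye position are illustrative.
#include <cstdio>

// Front wall at z = 0 spanning x in [-1.5, 1.5] m, y in [0, 3] m; the
// viewer stands at positive z, looking toward -z.
const float XL = -1.5f, XR = 1.5f, YB = 0.0f, YT = 3.0f;

void offAxisFrustum(float ex, float ey, float ez, float nearZ) {
    float s = nearZ / ez;                // project wall extents to near plane
    float left   = (XL - ex) * s;
    float right  = (XR - ex) * s;
    float bottom = (YB - ey) * s;
    float top    = (YT - ey) * s;
    // These values would go to glFrustum(left, right, bottom, top, nearZ,
    // farZ), followed by translating the eye position into the view matrix.
    std::printf("frustum: L=%.3f R=%.3f B=%.3f T=%.3f\n",
                left, right, bottom, top);
}

int main() {
    offAxisFrustum(0.4f, 1.7f, 2.0f, 0.1f);  // eye 2 m from wall, off-center
}
```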
Many AR applications use head-mounted, see-through glasses combined with standard 6-DOF electromagnetic or acoustic tracking systems or, for outdoor applications, Global Positioning System (GPS) and inertial-sensor-based orientation tracking. Special-purpose AR systems for medical surgery, however, don't burden the surgeon with head-mounted displays. Instead, these systems use transparent monitors integrated into the operating table, set up with sufficiently high-precision tracking. Multiuser stereo displays are now in the works, based on the principle that each user (up to four at a time) sees only the part of the display screen showing the image rendered correctly for that user's perspective.
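For the outdoor case, registration amounts to turning a GPS fix and an inertial heading into a screen position for each annotation. The sketch below uses the flat-earth approximation common for small working areas; all coordinates and the heading value are hypothetical:

```cpp
// Outdoor AR registration in miniature: a GPS fix plus an inertial heading
// yields the screen bearing of a labeled landmark. The flat-earth scaling
// is a common small-area approximation; all coordinates are hypothetical.
#include <cmath>
#include <cstdio>

const double PI = 3.141592653589793;
const double M_PER_DEG_LAT = 111320.0;  // meters per degree of latitude

// Bearing (radians, clockwise from north) from the user to a landmark.
double bearingTo(double userLat, double userLon, double lmLat, double lmLon) {
    double dNorth = (lmLat - userLat) * M_PER_DEG_LAT;
    double dEast  = (lmLon - userLon) * M_PER_DEG_LAT *
                    std::cos(userLat * PI / 180.0);
    return std::atan2(dEast, dNorth);
}

int main() {
    double bearing = bearingTo(52.52, 13.40, 52.53, 13.41);  // GPS fixes
    double heading = 0.3;  // radians, from the inertial orientation sensor
    std::printf("draw label %.1f degrees right of center\n",
                (bearing - heading) * 180.0 / PI);
}
```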
Image generators are again an open issue. The past was easy because there was no choice. The high-cost image generators that we used were general-purpose machines suitable for numerous applications. Today, the application requirements in terms of image and display resolution, geometric complexity, and fill rate define the machine type and configuration. Consumer graphics boards have notable performance but don't yet support professional applications with respect to reliability and the rich feature sets that most industrial VR applications use. PC clusters for display-subdivision solutions and for parallel rendering with z-buffer compositing are already available as prototypes, and they will replace the higher-cost image generators of high-end graphics workstations.
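The principle behind such sort-last cluster rendering is simple: each PC renders part of the scene, and the partial images are merged per pixel by depth, just as the z-buffer does within one machine. A minimal sketch, with an illustrative buffer layout, follows:

```cpp
// Sort-last cluster rendering in miniature: each PC renders part of the
// scene, then partial images are merged per pixel by depth, exactly what
// the z-buffer does within one machine. Buffer layout is illustrative.
#include <cstdio>
#include <cstdint>
#include <vector>

struct Image {
    std::vector<float>    depth;  // z per pixel; smaller means closer
    std::vector<uint32_t> color;  // packed RGB per pixel
};

// Composite node B's image into node A's, keeping the nearer fragment.
void zComposite(Image& a, const Image& b) {
    for (size_t i = 0; i < a.depth.size(); ++i) {
        if (b.depth[i] < a.depth[i]) {
            a.depth[i] = b.depth[i];
            a.color[i] = b.color[i];
        }
    }
}

int main() {
    Image a{{1.0f}, {0xFF0000u}}, b{{0.5f}, {0x00FF00u}};  // one-pixel images
    zComposite(a, b);  // b's fragment is closer, so its color wins
    std::printf("composited color: %06x\n", (unsigned)a.color[0]);
}
```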
Modern VR software packages include scripting mechanisms for rapid application development, testing, and adjustment, which have proven efficient for implementing AR and VR scenarios. Few of these systems are designed for distributed applications with integrated scene-graph consistency techniques. As network bandwidth increases and networking costs decrease, the demand for geographically distributed VR applications—especially from globally operating companies that support collaboration in virtual teams—will increase. Future VR systems will need to integrate features such as audio and video streaming, and they'll then need to address session management and security issues.
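At its core, scene-graph consistency means replicated sites exchange small, ordered node updates rather than images. Here's a hedged sketch of a last-writer-wins update rule; the message layout and names are our own illustration, not any package's API:

```cpp
// Scene-graph consistency in miniature: replicated VR sites exchange small,
// timestamped node updates instead of images. The message layout and the
// last-writer-wins rule are our own illustration, not any package's API.
#include <cstdint>
#include <cstdio>
#include <map>

struct NodeUpdate {
    uint32_t nodeId;     // which scene-graph node changed
    uint32_t timestamp;  // sender's logical clock, used for ordering
    float    matrix[16]; // the node's new local transform
};

std::map<uint32_t, NodeUpdate> replica;  // this site's copy of remote state

// Apply an update only if it's newer than what we hold, so duplicated or
// reordered messages still leave every replica converging on one state.
void applyUpdate(const NodeUpdate& u) {
    auto it = replica.find(u.nodeId);
    if (it == replica.end() || it->second.timestamp < u.timestamp)
        replica[u.nodeId] = u;
    else
        std::printf("stale update for node %u dropped\n", (unsigned)u.nodeId);
}

int main() {
    NodeUpdate u = {7, 42, {}};
    applyUpdate(u);  // first copy is applied
    applyUpdate(u);  // a retransmission is recognized as stale and dropped
}
```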
We could discuss many other issues, but we need to leave space for the articles in this special issue. Our authors have done an excellent job spanning such areas as mixed reality, haptics, and VR applications. We've enjoyed assembling and processing this collection of articles and hope you'll enjoy and profit from reading them.

Martin Goebel is head of the Virtual Environments research division in the Institute for Media Communication of the German National Research Center for Information Technology (GMD). His research interests include augmented reality, virtual reality, scientific visualization, and real-time simulation. He initiated and chaired several Eurographics workshops on virtual environments between 1993 and 1998 and was program co-chair of the Eurographics 95 and 98 conferences and of the IEEE VR 2001 and 2002 conferences. He received his PhD (Dr.-Ing.) in 1990 from Darmstadt University. He is a member of the IEEE Computer Society, the Eurographics Association, and the German Computer Society (GI).

Michitaka Hirose is a professor at the Research Center for Advanced Science and Technology (RCAST) at the University of Tokyo. He received an ME and a PhD from the University of Tokyo in 1979 and 1982, respectively. His current research interests include systems engineering, human-computer interaction, and virtual reality. He is a member of the IEEE and the ACM.

Lawrence Rosenblum is the Director of Virtual Reality Systems and Research in the Information Technology Research Division at the Naval Research Laboratory and Program Officer for Visualization and Computer Graphics at the Office of Naval Research. His research interests include VR, AR, scientific visualization, and human-computer interfaces. He received a BA in mathematics from Queens College and an MS and PhD in mathematics from the Ohio State University. He serves on the advisory board of IEEE Transactions on Visualization and Computer Graphics and the editorial boards of Virtual Reality and IEEE CG&A, where he edits the Projects in VR department. He is a senior member of the IEEE and a member of the IEEE Computer Society, ACM, Siggraph, and the American Geophysical Union.