Augmented reality is a formidable method of presenting information. Its in-situ nature enables the presentation of just-in-time information and data visualization in the context of physical objects and locations. Virtual reality lets users view and explore environments that are literally out of their reach. Both augmented and virtual realities can provide users with 3D virtual information in an intuitive manner.
We’re on the eve of a virtual and augmented reality revolution. The research community has been waiting for this to happen for more than 30 years, and many of the final pieces are now falling into place. 3D graphics have been of sufficient quality for many years. The missing technologies have been head-worn displays and six-degrees-of-freedom tracking systems for the position and orientation of the user’s head and hands. Additionally, the applications for virtual environments and appropriate user interaction methods had to be developed. Head-worn displays and the required tracking systems appear to be on the horizon, as many new commercial products will be released within months of this article’s publication.
The visual information from virtual and augmented realities can be provided by technologies other than head-worn displays (sometimes referred to as head-mounted displays, or HMDs). As head-worn display technology becomes a consumer product, research into virtual environments will continue along several different paths. As with research into windows, icons, menus, and pointer (WIMP) systems, some researchers will pursue how to optimize these new display and tracking technologies for general use, whereas others will investigate new and novel technologies for future products. For this issue, I’ve chosen a collection of research articles that highlight some of the new and novel technologies of a post-HMD virtual environment.
In This Issue
Breaking away from desktop monitors is a key advantage of virtual reality. In “The Reality Deck — an Immersive Gigapixel Display,” Charilaos Papadopoulos and his colleagues describe a new level of immersive virtual reality in a Cave Automatic Virtual Environment (CAVE) setting. The Reality Deck offers more than 1.5 gigapixels of display resolution in a 360-degree, 33×19×11-foot rectangular enclosure. This display environment lets researchers investigate visual analytics with massively greater information density, reasonably sized groups, and true user movement.
In “A Full Body Steerable Wind Display for a Locomotion Interface,” Sandip D. Kulkarni and his colleagues extend the concept of a CAVE to include the tactile feedback of wind on the user, in which both the strength and direction of the wind are under computer control. The addition of environmental effects increases the realism of the user’s experience in the virtual reality environment. The article reports on the technology’s application and describes in detail the technical issues in creating such displays. The authors clearly show that future virtual environments will provide much more than visual and audio information alone.
Spatial augmented reality (SAR) is a method of displaying augmented reality information through the use of projectors. SAR systems can alter a physical object’s visual surface appearance, such as colors, textures, shininess, transparency, and small changes to geometry. In “Spatial User Interfaces for Large-Scale Projector-Based Augmented Reality,” Michael R. Marner and his colleagues describe new user-interaction paradigms for interacting with not only the virtual information, but also the physical object being projected upon. They outline a set of applications and appropriate spatial user interfaces for a set of real-world problems.
A major issue when projecting onto physical objects for SAR augmentations is that images from standard projectors remain in focus only on perpendicular planar surfaces. Daisuke Iwai, Shoichiro Mihara, and Kosuke Sato address this problem in “Extended Depth-of-Field Projector by Fast Focal Sweep Projection.” They leverage liquid lens technology to allow pixels in an image to remain in focus at different distances from the projector. The authors demonstrate that their method improves the projected image quality for both static and moving objects, compared with traditional fixed-focal-length projection technologies.
Ismo Rakkolainen, Antti Sand, and Karri Palovuori move away from projecting onto physical objects to projecting in the air in their article, “Midair User Interfaces Employing Particle Screens.” They provide an overview of particle screens (sometimes referred to as fogscreens), which produce a volume of reflective particles in midair that can be projected onto. These screens allow viewing from multiple angles in public spaces. The authors go on to describe a set of interaction techniques specifically designed for these interactive immaterial displays. Their work presents the intriguing prospect of surrounding an existing object with a particle screen, allowing the augmentation to float in midair above and around the object. Imagine, for example, projecting a 3D-captured avatar of a person into the middle of your living room and having a conversation with them.
Head-worn displays face several open research issues. “Semi-Parametric Color Reproduction Method for Optical See-Through Head-Mounted Displays,” by Yuta Itoh and colleagues, outlines a new method of reproducing color faithfully, which is of particular interest for photorealistic virtual and augmented realities. Producing photorealistic virtual imagery with optical see-through displays is particularly challenging, and this article provides an excellent overview of the issues. The authors offer an outstanding potential solution to the problem by providing a new color-calibration method for optical see-through head-worn displays.
In the video “Recent Trends in Augmented and Virtual Reality: The Industrial Point of View,” Matt Kammerait, VP of Innovation at DAQRI, outlines a set of challenges in developing and building tools for the industrial application of augmented reality. He explains how DAQRI has focused on “real world problems,” investigating how augmented reality could provide a cost savings for issues that industry is facing. Kammerait outlines several challenges regarding bringing a new device into the workplace and explains how the industrial setting provides a great set of opportunities. This video provides valuable insight into how to develop the partnership between augmented-reality provider and end-user company.
It appears that virtual and augmented reality will soon be well served by stable, affordable head-worn display devices. Computer graphics for generating virtual information continues to improve to serve the community, and sensing technologies for environment mapping and positioning are maturing at a steady pace. Yet, numerous challenges remain for wide-scale adoption of virtual and augmented reality.
At the top of the list are appropriate interaction methodologies, including devices and techniques. No standard methods of interaction currently exist for making selections or controlling menus, for instance, and there are no mature user interface software frameworks. We’re all waiting for an X Windows Motif-like framework to let us get on with building complex applications rather than creating everything from scratch. Nonetheless, the games community is giving us hope with development environments such as Unity and the Unreal Engine, and the overall trends for the future of virtual and augmented reality look very bright.
Bruce H. Thomas is a professor of computer science at the University of South Australia, where he is co-director of the Wearable Computer Laboratory. Thomas is currently an IEEE Senior Member, a Fellow of the Australian Computer Society, and a Senior Member of the ACM. He has a PhD in computer science from Flinders University. His current research interests include augmented reality, virtual reality, visualization, wearable computers, user interfaces, CSCW, and tabletop display interfaces. Thomas is an associate editor of IEEE Transactions on Visualization and Computer Graphics. Contact him at email@example.com.