Vol. 40, No. 7, July 2007
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2007.227
From Pixels to Semantic Spaces: Advances in Content-Based Image Retrieval
Technological advances in digital imaging, broadband networking, and data storage are motivating millions of people to communicate with one another and express themselves by sharing images, video, and other forms of media online. Acquiring, storing, and transmitting photos is now trivial, but manipulating, indexing, sorting, filtering, summarizing, and searching through them remains significantly harder.
The Statistical Visual Computing Laboratory at the University of California, San Diego, has been considering the problem of content-based image retrieval for years. This effort explores many issues in image representation and intelligent system design, including the evaluation of image similarity, automatic annotation of images with descriptive captions, and the ability to understand user feedback during image search.
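One of the issues the abstract mentions, evaluating image similarity, can be illustrated with a minimal sketch. This is a generic color-histogram comparison for illustration only, not the lab's actual retrieval method; all function names here are hypothetical.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Normalized per-channel color histogram for an RGB image
    (an H x W x 3 array of 0-255 values)."""
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity score in [0, 1]; 1.0 means identical histograms."""
    return float(np.minimum(h1, h2).sum())

# Toy retrieval step: score candidate images against a query image.
rng = np.random.default_rng(0)
query = rng.integers(0, 256, (32, 32, 3))
candidates = [rng.integers(0, 256, (32, 32, 3)) for _ in range(2)]
scores = [histogram_intersection(color_histogram(query), color_histogram(c))
          for c in candidates]
```

Ranking candidates by such a score is the simplest form of content-based retrieval; the research surveyed in the article goes well beyond it, toward semantic representations and learned similarity.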
3D Body Scanning and Healthcare Applications
Philip Treleaven and Jonathan Wells
Today, 3D body-surface scanners are transforming the ability to accurately measure a person's body size, shape, and skin-surface area. Although developed primarily for the clothing industry, 3D scanners' low cost, noninvasive character, and ease of use make them appealing for widespread clinical applications and large-scale epidemiological surveys.
The time is ripe for exploiting the potential of whole-body scanners in routine clinical practice, in the same way that other techniques, such as MRI, x-ray, and CT scanning, have revolutionized imaging of the internal body. Three-dimensional body-surface scanning is poised to become a mainstream medical tool of major value. Now that appropriate hardware is available, the limiting factor is the software, which is rapidly becoming more sophisticated.
Virtual Reality: How Much Immersion Is Enough?
Doug A. Bowman and Ryan P. McMahan
Immersive VR—complex technologies that replace real-world sensory information with synthetic stimuli such as 3D visual imagery, spatialized sound, and force or tactile feedback—has generated much excitement. The goal of immersive virtual environments is to let users experience a computer-generated world as if it were real, producing a sense of presence, or "being there," in the user's mind.
To a large degree, VR researchers have succeeded in achieving this goal. Immersive VR is clearly unique, yet relatively few immersive VR systems are in real-world use. Even so, the technology has already produced success stories in fields such as military training, phobia therapy, and entertainment.
Immersidata Analysis: Four Case Studies
Cyrus Shahabi, Kiyoung Yang, Hyunjin Yoon, Albert A. Rizzo, Margaret McLaughlin, Tim Marsh, and Minyoung Mun
The moment-to-moment interchanges of cues and responses between humans and technological devices, products, and digital media teem with untapped information that could be stored, analyzed, and used.
After several years of studying various immersive applications and immersidata sets, the authors developed AIMS, an immersidata management system consisting of four connected modules. AIMS treats immersidata as a set of multidimensional sensor data streams and addresses the challenges involved in their acquisition, storage, querying, and analysis.
3D Display Using Passive Optical Scatterers
Shree K. Nayar and Vijay N. Anand
Because we live in a 3D physical world, a system that displays static and dynamic 3D images would provide viewers with a more immersive experience. With that goal in mind, the authors are developing an inexpensive class of volumetric displays that can present certain types of 3D content at relatively low resolution, including simple 3D objects, extruded objects, and 3D surfaces that appear dynamic when projected with time-varying images.
Their displays use a simple light engine and a cloud of passive optical scatterers. The basic idea is to trade off the light engine's 2D spatial resolution to gain resolution in the third dimension.
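The trade-off described above can be made concrete with a back-of-the-envelope calculation. The numbers and function below are illustrative, not taken from the article: a projector's fixed 2D pixel budget is divided among depth layers, so each layer's spatial resolution shrinks as depth resolution grows.

```python
def volumetric_resolution(proj_w, proj_h, depth_layers):
    """Split a projector's 2D pixel budget across depth layers.

    Each of the `depth_layers` slices gets an equal share of the
    projector's pixels; per-layer spatial resolution drops as the
    number of depth layers grows.
    """
    pixels_per_layer = (proj_w * proj_h) // depth_layers
    # Assume each layer keeps the projector's aspect ratio.
    aspect = proj_w / proj_h
    layer_h = int((pixels_per_layer / aspect) ** 0.5)
    layer_w = int(layer_h * aspect)
    return layer_w, layer_h, depth_layers

# Illustrative example: a 1024x768 projector driving 16 depth layers.
w, h, d = volumetric_resolution(1024, 768, 16)
# → (256, 192, 16): a 256 x 192 x 16 voxel volume from the same pixel budget
```

The total voxel count equals the projector's original pixel count; only its distribution across dimensions changes, which is exactly the 2D-for-3D resolution trade the authors exploit.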
The Shannon Portal Installation: Interaction Design for Public Places
Luigina Ciolfi, Mikael Fernström, Liam J. Bannon, Parag Deshpande, Paul Gallagher, Colm McGettrick, Nicola Quinn, and Stephen Shirley
The Shared Worlds research project, funded by Science Foundation Ireland, investigates the design and deployment of interactive artifacts in public spaces. The authors' approach views technology as a tool or mediator in human activities, thus requiring careful observation and analysis of user activities as a prelude to concept design.
The authors present a case study documenting an interactive installation built for Shannon Airport, Ireland, that lets users select photographs, personalize them with annotations and drawings, and then e-mail them or upload them to a public image-wall gallery projected in the airport's transit lounge.