
Aware Computing

Carl Chang, Iowa State University
Bill N. Schilit, Google

Pages: 20–21

Abstract—Computing has entered an age when devices are increasingly aware of the world in which they are used, and aware software is ever smarter about extrapolating and transforming simple sensor data into nuanced information about the environment in which the devices exist.

Keywords—aware computing; sensor data; context-aware computing; situation-aware computing; affective computing; human-computer interaction

Computing's evolutionary arc began with behemoth numerical calculators cosseted in dust-free rooms and has morphed into palm-sized supercomputers processing user-friendly video, image, and audio data in mobile, dusty, real-world settings. Parallel to this shift in venue and use case is the move from manual data entry to input from hardware sensors. Smartphones and tablets lead the trend in sensing location, acceleration, temperature, touch, gravity, pressure, humidity, proximity, light, and other physical qualities.

Teams from industry and research have long used sensors in real-world settings to expand their systems’ layers of awareness. In the mid-1990s, research labs developed location-aware mobile computers, such as in the Active Badge project at Olivetti Research and later the PARCTab project at Xerox's Palo Alto Research Center. Context-aware computers came next and could detect small amounts of information about their surroundings, such as nearby people and devices. Researchers continued to investigate computing's ability to maintain an awareness of user activities, such as running, walking, and driving, as well as to determine the user's social situation—whether a person is alone, face-to-face with another, or participating in a group activity. As sensing and awareness improved, they became central to features in smartphones and applications, driving an “age of context” in which personal devices enable a previously unthinkable degree of awareness about the wide world around us.
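Activity awareness of the kind described above is often built by extracting simple statistical features from raw accelerometer streams. The following is a minimal, hypothetical sketch, not the method of any system named here: the chosen feature (standard deviation of acceleration magnitude) and the thresholds are illustrative assumptions.

```python
import math
import statistics

def classify_activity(samples, still_max=0.4, walk_max=3.0):
    """Guess a coarse activity from accelerometer samples.

    samples: list of (x, y, z) accelerations in m/s^2.
    The thresholds on the standard deviation of the magnitude
    are illustrative values, not calibrated ones.
    """
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    spread = statistics.pstdev(magnitudes)  # variability of motion
    if spread < still_max:
        return "still"
    if spread < walk_max:
        return "walking"
    return "running"

# Synthetic traces: a phone at rest reads about 9.8 m/s^2 (gravity only),
# while vigorous motion swings the magnitude widely around that baseline.
resting = [(0.0, 0.0, 9.8)] * 50
shaking = [(0.0, 0.0, 9.8 + (8 if i % 2 else -8)) for i in range(50)]
```

Production systems use far richer features (frequency-domain energy, learned classifiers) over windowed sensor data, but the pipeline shape is the same: raw signal, feature extraction, label.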

The data that drives computational awareness—once based solely on physical sensor signals—is still evolving. Aware applications can be based on processed data, such as Wi-Fi access points converted into geographic coordinates or coordinates converted into rich labels such as “good french fries.” The social layer—the data collected through a social network or by crowdsourcing—adds another dimension. As users enter small amounts of data about the world, applications can mine and synthesize this data, along with that from other devices, to provide ever-more nuanced awareness. The massive datasets emerging from the aggregation of user input, sensor readings, data mining, and crowdsourcing are driving computational awareness into new areas.
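One commonly described way to convert observed Wi-Fi access points into geographic coordinates, as mentioned above, is a signal-weighted centroid over a database of known access-point locations. The sketch below is a simplified illustration under stated assumptions: the access-point database, BSSIDs, and the linear-power weighting are all hypothetical choices, not a description of any deployed service.

```python
def wifi_centroid(observations, ap_database):
    """Estimate a position as the signal-weighted centroid of
    known access-point locations.

    observations: dict mapping BSSID -> RSSI in dBm.
    ap_database:  dict mapping BSSID -> (lat, lon).
    Weighting by linearized power (from dBm) is one illustrative
    choice; real systems use more sophisticated models.
    """
    total_w = lat = lon = 0.0
    for bssid, rssi in observations.items():
        if bssid not in ap_database:
            continue  # unknown access point: no position to contribute
        w = 10 ** (rssi / 10.0)  # convert dBm to relative linear power
        ap_lat, ap_lon = ap_database[bssid]
        lat += w * ap_lat
        lon += w * ap_lon
        total_w += w
    if total_w == 0:
        return None  # nothing recognized; no estimate possible
    return (lat / total_w, lon / total_w)

# Hypothetical database of known access points (BSSID -> lat/lon).
known_aps = {"aa:bb:cc:01": (0.0, 0.0), "aa:bb:cc:02": (0.0, 2.0)}
# Two equally strong readings place the estimate midway between the APs.
estimate = wifi_centroid({"aa:bb:cc:01": -50, "aa:bb:cc:02": -50}, known_aps)
```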

In This Issue

Aware computing, the theme of this month's issue, is the product of various cutting-edge technologies enabling rich sensing of the environment. Although this is a small selection from this vast field, these three articles give us a glimpse into the exciting new opportunities in aware computing.

Twitter, an online microblogging service, is for many the key to increasing awareness of the world around them. Although tweets take little effort or time to compose or transmit, when a community collectively engages in exchanging information with tweets, the resulting simultaneous dialogue can quickly become a formidable force, spreading information to and raising awareness in every participant. In “Rethinking Context: Leveraging Human and Machine Computation in Disaster Response,” Sarah Vieweg and Adam Hodges examine microblogging in the context of mass emergencies arising from natural hazards. They point out that understanding tweets can in fact be an extremely difficult task in the aftermath of a natural disaster, and they advocate for a more effective integration of machine and human intelligence. With such human–computer symbiosis, public officials and first responders can better understand microblogging subject matter and regain control of a disastrous situation.

In “Rich Nonverbal Sensing Technology for Automated Social Skills Training,” Mohammed (Ehsan) Hoque and Rosalind Picard explore exciting techniques and applications in nonverbal sensing. Their article showcases MACH, a prototype system for social skills training. The program's rich sensing tools incorporate information from audio/video (facial gestures, prosody, and speech), physiology, social media, and ubiquitous sensors such as GPS to capture and analyze behavior to provide feedback that helps participants improve social skill–based interactions in scenarios such as job interviews and public speaking. The article demonstrates how computing advances—with the support of rich sensing tools and environments—will continue to change the relationships humans have with computers.

In “Tracking Mental Well-Being: Balancing Rich Sensing and Patient Needs,” Mark Matthews, Saeed Abdullah, Geri Gay, and Tanzeem Choudhury examine awareness systems for mental health. Because inferring pertinent behaviors is essential to assessing mental wellness, they propose that continuous sensing of social and physical functioning can promote successful lifelong management of mental illness. The article focuses on three main benefits: providing early warning of changes in mental health, delivering context-aware micro-interventions as needed, and helping patients understand their illness. These mechanisms, built on top of passive sensing, have great potential for improving patient care, but not without raising concerns around control, privacy, and the risk of system errors with clear social implications. This article contributes both an exploration of emerging techniques and an opening dialogue on the balance between cutting-edge sensing technologies and patient needs.

Sensors for location, acceleration, and motion have enabled sophisticated awareness capabilities in mainstream products that have a major impact on people's daily lives. These technologies not only let us find people nearby but also enable richer experiences: not merely counting steps like a pedometer, but motivating us to exercise more often and more effectively.
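The pedometer mentioned above can be illustrated with a minimal sketch: count each upward crossing of an acceleration-magnitude threshold as a step. The threshold value is an assumption for demonstration; commercial step counters apply much more robust filtering and peak detection.

```python
def count_steps(magnitudes, threshold=11.0):
    """Count upward crossings of an acceleration-magnitude
    threshold as steps.

    magnitudes: acceleration magnitudes in m/s^2, sampled over time.
    threshold:  illustrative value just above gravity (9.8 m/s^2);
                not a calibrated constant.
    """
    steps = 0
    above = False
    for m in magnitudes:
        if not above and m > threshold:
            steps += 1   # rising edge: one footfall
            above = True
        elif above and m <= threshold:
            above = False  # reset once the peak has passed
    return steps

# A toy trace with three impact peaks separated by near-gravity readings.
trace = [9.8, 12.0, 9.8, 12.5, 9.5, 13.0, 9.8]
```

Turning such a count into motivation, as the editorial notes, is where application design takes over from sensing: goals, streaks, and social comparison are built on top of this raw signal.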

The next wave of aware computing—driven by more diverse sensors and richer data—is still emerging from academic and corporate research centers. Although 2014 marks the 20-year anniversary of the emergence of “context-aware computing,” the papers in this special issue remind us that rich and multifaceted awareness of the user's world remains an increasingly important trend in modern computing.

Carl Chang is a professor of computer science and director of the Software Engineering Lab at Iowa State University. His research interests include situational software engineering, human–computer interaction, and smart health. He is a fellow of both the American Association for the Advancement of Science and IEEE and a member of the IEEE Computer Society.
Bill N. Schilit is a research scientist at Google. His research interests include mobile and ubiquitous computing. Schilit received a PhD in context-aware computing from Columbia University. He is a fellow of the IEEE and a member of the IEEE Computer Society.