Physiological Computing

Giulio Jacucci, University of Helsinki
Stephen Fairclough, Liverpool John Moores University
Erin T. Solovey, Drexel University

Pages: 12–16

Abstract—As our minds and bodies increasingly drive effective control of computing technologies, our computers will adjust according to our physiological cues. Will it become difficult to distinguish where we end and our computers begin? The Web extra at https://youtu.be/_nLqjHTTVPg is an audio interview in which Rob Jacob talks with guest editors Giulio Jacucci, Stephen Fairclough, and Erin T. Solovey about how advances in physiological computing might someday blur the distinction between where our bodies end and our computers begin.

Keywords—physiological computing; human–computer interaction; brain–computer interfaces


Physiological computing—the use of human physiological data as system inputs in real time—enables the creation of user-state representations so that software can respond dynamically and specifically to changes in the user's psychophysiological state.1 Several human–computer interaction paradigms fall under this general rubric, including brain–computer interfaces (BCIs), affective computing, adaptive automation, and health informatics. By connecting brain and body to machine, we extend the central nervous system's boundaries, enabling us to communicate directly with technology via the physiological processes that underpin thoughts, emotions, and actions.

Physiological computing systems fall into two broad categories: body schema extensions and mental state determinations. Body schema extensions deal with sensory-motor functions, those we use every time we manipulate our environment through our body. Body schema functions are guided by a sense of agency: I am the one doing this. For example, a BCI offers an alternative input-control mode that extends the body schema.2 Mental state determinations deal with internal psychological states, including mental workload, emotions, and motivation.

Two important features distinguish these categories: mental state determinations, such as a change in mood, are unintentional and arise spontaneously through interactions with events in the environment or from internal thoughts; in contrast, extensions of the body schema involve volitional and intentional thought.

Biocybernetic Loop

Derived from cybernetic models of closed-loop control and communications,4 the biocybernetic loop serves as a unifying concept for all physiological computing systems5,6 and comprises three generic stages of real-time data processing: collection, analysis, and translation. In the first stage, physiological data are collected via sensors. In the second stage, the data are filtered, corrected for artifacts, and quantified to yield an accurate representation of the underlying signal. In the third stage, that quantified representation is translated into a command that is executed at the human–computer interface.

The data collection, analysis, and translation processes have a number of important requirements:

  • physiological measures of psychological concepts must be validated,
  • sensor technology must collect high-quality data in the field,
  • data must be analyzed and classified in real time, and
  • the translation from data to action at the interface must be responsive and coherent.

These four requirements can be studied in isolation from one another (and often are), but for successful integrated system development, each process within the closed loop should be mutually dependent on the others.
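
To make the closed loop concrete, here is a minimal sketch of the three stages running in sequence in Python; the sensor read, the artifact handling, the activation index, and the interface adaptation are hypothetical placeholders, not a specific device or published algorithm.

  import random
  import time

  def collect_sample(n_samples=256):
      """Stage 1: read one window of raw physiological data from a sensor."""
      # Placeholder: a real system would read from an EEG/ECG/EDA device driver.
      return [random.gauss(0.0, 1.0) for _ in range(n_samples)]

  def analyze(window):
      """Stage 2: filter, correct artifacts, and quantify the signal."""
      cleaned = [max(min(x, 3.0), -3.0) for x in window]   # crude artifact clipping
      return sum(abs(x) for x in cleaned) / len(cleaned)   # crude activation index

  def translate(index, threshold=1.0):
      """Stage 3: map the quantified state onto an action at the interface."""
      return "switch to a simplified layout" if index > threshold else "keep the full layout"

  for _ in range(5):                        # the closed loop, run at a fixed rate
      print(translate(analyze(collect_sample())))
      time.sleep(0.1)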

Simple applications of physiological computing are already evident in consumer electronics, including smartphones, smartwatches, wristbands, and smart rings that monitor stress, mood, heart rate, and so on. Beyond these established products, physiological sensing is also being adopted in health and sports to monitor signals such as brain activity, changes in skin conductance (electrodermal activity), facial muscle activity (facial electromyography [fEMG]), heart rate variability, eye movements, and many others. The emergence of sensor apparatuses that are comfortable to wear while maintaining signal fidelity is an essential development for reaping the full potential of physiological computing systems.
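
As one concrete example of these signals, heart rate variability is often summarized with simple time-domain statistics such as RMSSD, the root mean square of successive differences between inter-beat (RR) intervals; the short sketch below computes it from made-up interval values.

  import math

  def rmssd(rr_intervals_ms):
      """Root mean square of successive differences between RR intervals (ms)."""
      diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
      return math.sqrt(sum(d * d for d in diffs) / len(diffs))

  rr = [812, 798, 825, 840, 803, 790, 815]  # illustrative RR intervals in ms
  print(f"RMSSD: {rmssd(rr):.1f} ms")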

Current Work in Physiological Computing

Emerging research themes for physiological computing systems include sensor development, real-time signal processing in the field, inference processing (for example, between psychological states and objective measures), data classification methods, and interface/interaction design.

Recent advances in physiological sensor technology and machine learning have inspired increased development of such systems and expanded exploration of new paradigms; one example is human–computer symbiosis, which posits a deep mutual understanding between humans and the computers that exploit their implicit physiological signals.7 Design principles and patterns for this new class of interactive systems are shifting to better support changing cognitive or affective states in humans.8 This type of interactive symbiosis corresponds to symmetrical human–computer interactions in which information flows simultaneously from computer to user and vice versa.9 The implications of this nascent technology are potentially profound—offering the means to create technology that demonstrates intelligence through its task-context and user-intention sensitivity without any explicit information.10

Physiological computing faces challenges related to sensor robustness, sensor calibration, miniaturization, and integration in ergonomically designed, unobtrusive products. Moreover, identifying and recognizing physiological states remains an open research area requiring multidisciplinary investigations combining machine learning and psychophysiology.

Exploring potential physiological computing applications is ultimately contingent on how well we can identify psychological states that relate to our safety, health, and well-being, such as mental workload, stress, or positive mood. Recent work includes quantifying cognitive workload to determine safety in supervisory tasks or driving,11,12 detecting data relevance to provide implicit feedback for information retrieval,13–15 conducting research and driving adaptation in computer games,16–18 developing interactive storytelling,19 training cognitive performance,20 and testing usability.21

Applications of physiological computing offer several advantages, such as

  • enhanced interaction, particularly during eyes-busy or hands-busy applications;
  • improved implicit control and/or response mechanisms, such as automatic tagging of media content without explicit gesturing; and
  • promotion of desirable psychological states and mitigation of undesirable ones, with benefits ranging from better performance to greater overall health.

Such advantages will spur further development and improvement of sensor/actuator technologies and machine learning.

In this Issue

The contributions in this special issue exemplify advances in the field of physiological computing, highlighting its application areas, techniques, and open challenges.

In “Combining EEG with Pupillometry to Improve Cognitive Workload Detection,” David Rozado and Andreas Dünser demonstrate how multimodal approaches could be useful in designing more robust physiological computing systems. Their approach combines electroencephalography (EEG) with pupil-dilation measurements to detect cognitive workload in test subjects, showing how this combination improves detection rates when monitoring cognitive workload in real time.
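
One generic way to realize such multimodal detection is feature-level fusion, in which features from each modality are concatenated and passed to a single classifier. The sketch below uses synthetic features and scikit-learn to illustrate the idea; it is not the authors' pipeline.

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score

  # Synthetic stand-ins for real recordings; not the authors' data or method.
  rng = np.random.default_rng(0)
  n_trials = 200
  eeg_features = rng.normal(size=(n_trials, 8))    # e.g., EEG band powers
  pupil_features = rng.normal(size=(n_trials, 1))  # e.g., mean pupil diameter
  labels = rng.integers(0, 2, size=n_trials)       # low vs. high workload

  # Feature-level fusion: concatenate the two modalities before classification.
  fused = np.hstack([eeg_features, pupil_features])
  scores = cross_val_score(LogisticRegression(max_iter=1000), fused, labels, cv=5)
  print(f"cross-validated accuracy: {scores.mean():.2f}")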

In “Stress Detection Using Physiological Sensors,” Riccardo Sioni and Luca Chittaro provide an overview of various physiological sensors that capture stress-level data, and demonstrate these technologies using examples from their work in virtual reality. The article also includes a survey of related work, technological limitations, and opportunities for future research.

The increased availability and complexity of mobile devices taxes the finite human capacity for multitasking. In “Designing Brain–Computer Interfaces for Attention-Aware Systems,” Evan M. Peck, Emily Carlin, and Robert Jacob describe the use of neuroimaging to create attention-aware technologies that are capable of scheduling notifications around the user's current information load. This type of passive BCI has enormous potential, but there are important limitations associated with data complexity in this field. The authors describe sensor technology (functional near-infrared spectroscopy [fNIRS]), design principles for attention-aware systems, and an experimental demonstration of how this concept could work.
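
As a rough illustration of the scheduling idea (not of the authors' system), a notification layer might queue messages and release them only when an estimated information load falls below a threshold; the load estimator here is a hypothetical stand-in for a measure derived from a passive BCI.

  from collections import deque

  class AttentionAwareNotifier:
      """Defers notifications while the user's estimated information load is high."""

      def __init__(self, estimate_load, threshold=0.6):
          self.estimate_load = estimate_load  # callable returning load in [0, 1]
          self.threshold = threshold
          self.queue = deque()

      def notify(self, message):
          """Queue a notification instead of interrupting immediately."""
          self.queue.append(message)
          self.flush()

      def flush(self):
          """Deliver queued notifications only while the estimated load is low."""
          while self.queue and self.estimate_load() < self.threshold:
              print("delivering:", self.queue.popleft())

  # Usage with a stand-in load estimator (a real one would read the BCI signal).
  notifier = AttentionAwareNotifier(estimate_load=lambda: 0.4)
  notifier.notify("New message from a colleague")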

Physiological computing systems promise to further integrate our sense of self with computer technologies. As the associated sensors improve, and as our ability to capture and analyze data for integration with other technologies becomes more efficient, computers will continue to move closer to, and even into, our physical bodies. The future of this field is indeed very bright. We hope you enjoy this special issue. To join a discussion on this topic, please visit the Computer Society Members LinkedIn page: www.linkedin.com/grp/home?gid=52513&trk=my_groups-tile-flipgrp.

Acknowledgments

This work was partly supported by TEKES, the Finnish Funding Agency for Innovation (Re:KnoW), and the European Commission through the FP7 Project MindSee 611570.

References



Giulio Jacucci is a professor in the Department of Computer Science at the University of Helsinki, a director of the Network Society Program at the Helsinki Institute for Information Technology (HIIT), and founder and board member of MultiTouch Ltd (www.multitaction.com), the leading developer of interactive display systems. His research interests include multimodal interaction for search and information discovery, physiological computing, and ubiquitous computing for behavioral change. Jacucci received a PhD in information processing science from the University of Oulu. Contact him at giulio.jacucci@helsinki.fi.
Stephen Fairclough is a professor of psychophysiology at Liverpool John Moores University. His research interests include physiological computing systems, affective neuroscience, psychophysiology, human motivation, and the effects of stress on health. Fairclough received a PhD in psychology from Loughborough University. He is an associate editor of IEEE Transactions on Affective Computing and is an executive member of the Human Factors and Ergonomics Society (European Chapter). Contact him at s.fairclough@ljmu.ac.uk or www.physiologicalcomputing.org.
Erin T. Solovey is an assistant professor of computer science at Drexel University, with a secondary faculty appointment in the Drexel School of Biomedical Engineering, Science and Health Systems. She also directs Drexel's Advanced Interaction Research Lab. Solovey's research interests include emerging interaction modes and techniques, such as brain–computer interfaces, physiological computing, and reality-based interaction. She received a PhD in computer science from Tufts University. Contact her at erin.solovey@drexel.edu.