Twenty-five years ago, Rosalind Picard published her seminal book "Affective Computing," giving a unifying name to disparate research efforts that aimed to decipher and model emotions through sensory means and computational methods. Concurrent with the rise of affective computing in the 2000s was the emergence of unobtrusive approaches for measuring physiological variables, including electrodermal activity, heart function, and breathing function: adrenergic and cholinergic indicators of arousal, a state underlying all emotional responses and thus of keen interest to affective computing. These unobtrusive measurement methods promised the capacity to longitudinally monitor the psycho-physiological state of individuals in naturalistic conditions--the best, and perhaps only, way to unlock questions on affect.
In the early to mid 2000s, facial thermal imaging provided the first means of measuring signs of arousal at a distance. In the late 2000s and early 2010s, it became feasible to measure heart function through visual imaging. Concomitantly, progress in wearable sensing, exemplified by smartwatches, made unobtrusive heart measurements ubiquitous. In the late 2010s, machine learning (ML) algorithms, in particular deep learning (DL) approaches, met unobtrusive physiological measurement methods, yielding significant performance advances in the accurate tracking of arousal and emotional responses. Imaging methods have complemented wearable technologies by revealing new understanding of physiological responses to emotion, such as spatial variations in blood flow and perspiration.
Imaging and wearable physiological monitoring schemes have come a long way since the 2000s, but there is still significant room for improvement. There is an insatiable need for improved measurement accuracy, noise suppression, and acquisition of context, all of which will make these measurements more interpretable. In addition, significant challenges have emerged with respect to the generalizability, personalization, and trustworthiness of ML methods. Addressing such challenges requires bigger and better multimodal datasets, open code, and reproducible study designs. This special issue solicits research contributions in all these directions and more, including systematic reviews of the state of the art.
The special issue will act as a reference point for years to come in a topic that by its nature is scattered across different venues, a fragmentation that contributes to poor communication among research teams. For example, there are fault lines between wearable and imaging researchers, although there should not be. After all, there is only one heart, and if multimodal datasets routinely include heart measurements from different modalities, this will be a boon to exchanging, training, and testing deep methods. Consequently, the special issue will provide a comprehensive view of a fragmented field, facilitating synergistic and transparent interactions that will help affective computing overcome lingering research challenges. Potential topics of interest include, but are not limited to:
Measurement devices, methods, and analytical tools
Data and protocols
Social, ethical, and scientific context
Submissions due: September 26, 2022
Preliminary notification: January 1, 2023
Revisions due: January 9, 2023
Final notification: February 13, 2023
Final version due: February 27, 2023
Publication: July-September 2023
For author information and guidelines on submission criteria, visit the TAC Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published, or be currently under submission, elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. Abstracts should be sent by email directly to the guest editors at tac-upm@computer.org.
Contact the guest editors at tac-upm@computer.org.
Guest Editors
Ioannis Thomas Pavlidis
Computational Physiology Lab, University of Houston, USA
Daniel McDuff
Microsoft Research, USA
Theodora Chaspari
Human Bio-Behavioral Signals Lab, Texas A&M University, USA