[CLOSED] Call for Papers: Special Issue on Ethics in Affective Computing

Submissions Due: 30 May 2022

Affective computing (AC) has come a long way as a field since its inception in 1996. Whereas at first applications were considered only a distant, if exciting, possibility, technology for the analysis of face and voice behavior is now readily available, and systems that generate expressive behavior in virtual agents are beginning to challenge the uncanny valley. AC is now more or less routinely used for sentiment analysis of social media posts and for the creation of virtual assistants by banks and insurance companies, while game developers experiment with increasingly life-like synthetic emotion and personality. Expressive behavior has also been linked to health, personality, and social behavior.

But, as the field and its applications mature, concern about how the technology is used and described is growing; perhaps this is inevitable, as emotions seem central to what makes us human and what gives life meaning. In this light, it is problematic that AC applications sometimes proceed independently of findings in affective science, or of a broader consideration of human well-being. It is also problematic that the automatic recognition of emotion is framed as a capability of AC, often without clarifying what is actually measured: for example, whether a label reflects typical observers’ interpretations of an expression of affect, the emotion or feeling likely present at the time of the expression, or something else, and how much uncertainty should be accorded the label given the context. In the absence of a definitive theory of emotion, and with renewed questions about the validity of some theories of emotion, this is an issue that we, as a field, must address seriously.

Even more worryingly, the first reports are now emerging that, in some cases, AC technology is being used not to motivate pro-social behavior, but to punish anti-social behavior; not to help people overcome and live well with mental health issues, but to identify and exclude people with mental health problems; not to enable everyone to live free, safe lives, but to control the majority for the purposes of a minority; and not to empower consumers, but to manipulate them into buying products they don’t want or need.

Issues related to automated recognition are not the only concerns. The use of affect analysis to inform decision-making, predictions, and the timing of interventions, and the use of affect synthesis to give robots or agents the appearance of having emotions that they do not have, are also potentially problematic, and the list is far from exhaustive. Many of these issues are now taken seriously by policymakers and by researchers outside of the AC domain; the European Union is even seeking to create rules governing the use of emotion recognition technologies in its recently proposed AI regulation. There is a growing need for dialogue between those creating AC systems and those concerned about their ethical and legal impacts. Building an awareness of what the technology does, what its limitations are, and how it engages with human values is a key area of concern.

In this special issue, we are interested in analyses, from those within and outside the AC community, of how to build ethical AC technologies by considering benefits, risks, and use cases. Topics of interest include:

  • Dual uses of AC technology
  • Trade-offs of using personal data for the benefit of others (healthcare, security, etc.)
  • Anonymization techniques for primary data (audio, video) as well as for secondary data (identification of people by their behavior)
  • Overview of existing ethical software and data resources, and best practice for the creation of new ethical resources that can be widely shared
  • Misrepresentation of what the technology measures (e.g., with respect to emotion theory)
  • The roles of state actors, big tech, and private citizens
  • How to engage with policymakers and how to inform citizens about AC
  • Challenges and opportunities of implementing privacy and ethics by design
  • Responsible research and innovation best practice
  • Governance of AC
  • Strategies for managing unintended consequences
  • Analysis of use cases from across industry, healthcare, law enforcement, and security, including surveillance of Uyghurs in China; Vibraimage; iBorderCtrl; social media sentiment analysis and manipulation (e.g., the Facebook Emotional Contagion experiment); job application screening; smart toys; emotion sensing in classrooms; and EdTech

Important Dates

  • Abstracts due: 30 January 2022 (250 words)
  • Notification of invitation to submit full manuscript: Ongoing
  • Submissions due: 30 May 2022
  • Final decision: 30 July 2022
  • Final version due: 30 November 2022
  • Publication: Winter 2022/Early 2023

Submission Guidelines

For author information and guidelines on submission criteria, visit the TAC Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published elsewhere, nor be under review for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal; abstracts should be sent by email directly to the guest editors.

Questions?

Contact the guest editors at michel.valstar@nottingham.ac.uk.

Guest Editors

Jonathan Gratch, University of Southern California, USA
Gretchen Greene, The Hastings Center, USA
Rosalind Picard, MIT, USA
Lachlan Urquhart, University of Edinburgh, UK
Michel Valstar, University of Nottingham, UK