Issue No. 04 - Fourth Quarter (2012 vol. 3)
Matthias Scheutz, Tufts University, Medford
Humans are deeply affective beings that expect other human-like agents to be sensitive to and express their own affect. Hence, complex artificial agents that are not capable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine affect will themselves have the potential for suffering, which leads to the "Affect Dilemma" for artificial agents and, more generally, artificial systems. In this paper, we discuss this dilemma in detail and argue that we should nevertheless develop affective artificial agents; in fact, we might be morally obligated to do so if they end up being the lesser evil compared to (complex) artificial agents without affect. Specifically, we propose five independent reasons for the utility of developing artificial affective agents and also discuss some of the challenges that we have to address as part of this endeavor.
Human factors, Robots, Computational modeling, Ethics, Process control, Computer architecture, Artificial intelligence, Affect processing, Intelligent artificial agent, Affect dilemma
M. Scheutz, "The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?," in IEEE Transactions on Affective Computing, vol. 3, no. 4, pp. 424-433, 2012.