The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?
Fourth Quarter 2012 (vol. 3, no. 4)
pp. 424-433
Matthias Scheutz, Tufts University, Medford
Abstract:
Humans are deeply affective beings who expect other human-like agents to be sensitive to and to express their own affect. Hence, complex artificial agents that are incapable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet affective artificial agents with genuine affect will then themselves have the potential for suffering, which leads to the "Affect Dilemma" for artificial agents and, more generally, for artificial systems. In this paper, we discuss this dilemma in detail and argue that we should nevertheless develop affective artificial agents; in fact, we may be morally obligated to do so if they turn out to be the lesser evil compared to (complex) artificial agents without affect. Specifically, we propose five independent reasons for the utility of developing affective artificial agents and discuss some of the challenges that must be addressed as part of this endeavor.
Index Terms:
Human factors, Robots, Computational modeling, Ethics, Process control, Computer architecture, Artificial intelligence, Affect processing, Intelligent artificial agent, Affect dilemma
Citation:
Matthias Scheutz, "The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?," IEEE Transactions on Affective Computing, vol. 3, no. 4, pp. 424-433, Fourth Quarter 2012, doi:10.1109/T-AFFC.2012.29