Issue No. 04, Fourth Quarter 2012 (vol. 3), pp. 386-387
Published by the IEEE Computer Society
The rapid escalation of informational and computational technologies is quickly making possible things that were impossible just a few years ago. As these new possibilities become realities, very real ethical dilemmas arise that challenge the foundations of ethics, traditionally conceived. One need only consider the 3D printers about to hit the market, which will allow individuals to print working firearms at will. Such a possibility will no doubt leave policy makers wondering how to handle the situation in the absence of existing laws to cover it.
Challenges are mounting on other fronts as well, among them predator drones and autonomous weaponry. Such issues may well make the topic of this issue seem trivial. It is not. One of the ethical issues attached to affective computing reaches to the foundations of ethics by challenging our common sense belief that truth-telling is a value and that deception is simply wrong, at least in most contexts. In brief, the problem can be stated this way: if robots are to be widely adopted in society, they need to be like us. Giving them simulated emotions thus seems essential. When it comes to the use of robotic pets in eldercare, for instance, lifeless, unaffective robots would be poorly suited to the task for which they are designed. At the same time, giving such robotic pets the ability to act in ways that make us feel good seems simply deceptive. If deception is wrong simpliciter, then so are simulated emotions; but if the use of simulated emotions is wrong, then implementing the affective qualities needed to make some machines function as needed would also seem wrong. Either something is amiss with our common understanding of the ethics of deception, or research in affective computing, which often amounts to designing machines precisely in order to deceive us, is misguided. Nor is the situation limited to such innocuous creatures as mere pets; once we realize that a robotic pet may simultaneously be a weapon or a spy, the issues start to compound.
In the first paper of this section, “Are Emotional Robots Deceptive?”, Mark Coeckelbergh tackles the central issue just mentioned head-on. Taking a common sense approach, Coeckelbergh notes that robots must be designed to respond in ways that let humans understand what is genuinely being communicated, thereby facilitating open cross-entity communication. This must be done carefully, however, so that humans do not dismiss robot communication with what he calls a “deception response.”
In “Red-Pill Robots Only, Please,” Bringsjord and Clark challenge approaches like Coeckelbergh's. Playing off The Matrix of movie fame, they argue that blue-pill robots are engineered to deceive and that embracing them will lead to a cascade of moral issues, pushing our society further from values associated with truth and toward those associated with pleasure. Our love for “digital illusions” is consonant with their argument and may indicate that there is already cause for concern, even before affective, blue-pill machines become prevalent.
Sullins keeps us on the pleasure track with “Robots, Love and Sex: The Ethics of Building a Love Machine.” Admittedly, raising the topic of sex robots always sounds a little goofy and unimportant, if not slightly embarrassing, though few doubt that such robots will be among us in record numbers. Sullins invites us to take the issue seriously by putting forth the notion of “erotic wisdom,” while simultaneously arguing that we must lay down constraints on the design of machines that can manipulate human psychology at such a deep level.
Steering a sensible course between these positions, Cowie argues in “The Good Our Field Can Hope to Do, the Harm It Should Avoid” that, while most affective applications are morally neutral, simulated affects might well amount to a kind of deception. However, the situation is not a simple matter of good versus bad, since several moral positives can come from research in this area. The paper enumerates some of the moral positives and negatives at stake to underscore the balancing act that researchers must perform when approaching the design of affective machinery.
In “The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?”, Scheutz takes a somewhat different angle, noting that robots without affects and affective sensibilities may well cause more harm than those with them, though endowing them with affect also transforms them into patients of our moral regard. Scheutz argues that we must nonetheless build them, offering five reasons to do so before closing with a brief enumeration of the challenges ahead.
Finally, Guarini offers a critique of my own work in ethical theory with his paper “Conative Dimensions of Machine Ethics: A Defense of Duty.” I have argued elsewhere that ethics, traditionally conceived, hangs on a fundamental contest between our affective desires and our sense of obligation, and that it is therefore outmoded and ill-suited to solve problems arising from and within autonomous systems. (See Guarini's paper for references.) Guarini counters with a defense of deontology, noting that the affect-obligation conflict that motivates Kantian ethics might be reworked as an obligation-obligation conflict, preserving a notion of duty applicable to machines.
Together the papers make a nice set, and I am pleased with the way they (unintentionally) build on one another. Nonetheless, my hope here is that the reader will walk away from this volume with more questions than answers. Indeed, it is the job of the ethicist to complexify first, laying out the nuances of an issue before arriving at a conclusion. These are early days for the field, and answers at this point would be premature; but given the speed with which the field is developing, opening up the questions is essential.
I would like to thank the several referees who assisted with evaluating the papers included herein, and the authors, who received their criticism with grace and dignity. I would also like to thank Jonathan Gratch for the opportunity to compile this volume.
Anthony F. Beavers
Guest Editor

    A.F. Beavers is with the Department of Philosophy and Cognitive Science, The University of Evansville, 1800 Lincoln Ave., Evansville, IN 47722.

    E-mail: afbeavers@gmail.com

For information on obtaining reprints of this article, please send e-mail to: toac@computer.org.

Anthony F. Beavers is a professor of philosophy, director of cognitive science, and director of the Digital Humanities Laboratory at the University of Evansville in southern Indiana. His primary research is in the philosophy of information, including the challenges that informational and computational technologies pose for macroethics. He has published several papers on and around these themes and has edited several special journal issues. He is the 2012 recipient of the World Technology Award in Ethics and currently serves as president of the International Association for Computing and Philosophy.