Guest Editors' Introduction: Machine Ethics
IEEE Intelligent Systems, July/August 2006 (Vol. 21, No. 4), pp. 10-11
1541-1672/06/$31.00 © 2006 IEEE

Published by the IEEE Computer Society
Michael Anderson, University of Hartford

Susan Leigh Anderson, University of Connecticut
Past research concerning the relationship between technology and ethics has focused largely on the responsible and irresponsible uses humans make of technology; a few people have also been interested in how human beings ought to treat machines. In all cases, only the humans have engaged in ethical reasoning. We believe the time has come for adding an ethical dimension to at least some machines. Adding this dimension acknowledges the ethical ramifications of recent and potential developments in machine autonomy.

In contrast to computer hacking, software property issues, privacy, and other topics normally grouped under computer ethics, machine ethics is concerned with how machines behave toward human users and other machines. A goal of machine ethics is to create a machine guided by an acceptable ethical principle, or set of principles, in the decisions it makes about possible courses of action. The behavior of more fully autonomous machines, guided by such an ethical dimension, is likely to be more acceptable in real-world environments than that of machines without one.
AAAI 2005 Fall Symposium
This special issue stems from the AAAI 2005 Fall Symposium on Machine Ethics. The symposium brought together participants from computer science and philosophy to clarify the nature of this newly emerging field and discuss potential approaches toward realizing the goal of creating an ethical machine.
Researchers' projections for autonomous technology seem limitless. South Korea has recently enlisted more than 30 companies and 1,000 scientists in the goal of putting "a robot in every home by 2010." DARPA's Grand Challenge to have an autonomous vehicle drive itself across 132 miles of desert terrain has been met, and a new Grand Challenge is in the works in which vehicles will maneuver in an urban setting. The US Army's Future Combat Systems program is developing armed robotic vehicles that will support ground troops with "direct fire" and antitank weapons.
From family cars that drive themselves and machines that discharge our daily chores with little or no assistance from us, to fully autonomous robotic entities that will begin to challenge our notions of the very nature of intelligence, the behavior of autonomous systems will have ethical ramifications. We contend that machine ethics research is key to alleviating concerns about such systems. Indeed, it could be argued that the notion of autonomous machines without an ethical dimension is at the root of all fears concerning machine intelligence. Furthermore, by making ethics more precise than it has ever been before, the investigation of machine ethics could expose problems with current ethical theories, advancing our thinking about ethics in general.
The articles
In this special issue, two articles explore the nature and significance of machine ethics. Colin Allen, Wendell Wallach, and Iva Smit provide motivation for the discipline in "Why Machine Ethics?" James Moor considers different possible meanings of adding an ethical dimension to machines, as well as problems that might arise in trying to create such a machine, in "The Nature, Importance, and Difficulty of Machine Ethics."
Isaac Asimov's "laws of robotics" are often cited as an ideal set of ethical principles for machines to follow. In his fiction, however, Asimov himself wrote often and convincingly about the ambiguities, inconsistencies, and complexities inherent in these principles. Ultimately, in "The Bicentennial Man," he clearly rejected the laws, on ethical grounds, as an ideal basis for machine ethics.
Turning from speculative fiction to more plausible bases for machine ethics, three articles here explore a range of machine learning techniques to codify ethical reasoning from examples. In "Particularism and the Classification and Reclassification of Moral Cases," Marcello Guarini advocates a neural network approach that classifies particular ethical judgments as acceptable or unacceptable. Bruce McLaren's "Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions" details a case-based-reasoning approach to developing systems that can provide guidance in ethical dilemmas. In "An Approach to Computing Ethics," an invited article, we team with Chris Armen to develop a decision procedure for an ethical theory that has multiple prima facie duties. Using inductive-logic programming, the system learns relationships among these duties that reflect the intuitions of ethics experts.
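To give a concrete, if greatly simplified, flavor of a decision procedure over multiple prima facie duties, consider the following Python sketch. The duty names, scores, and weights here are invented for illustration; in the system our article describes, the relationships among duties are learned from cases labeled by ethics experts rather than fixed by hand.

    # Illustrative sketch only: a toy decision procedure over prima facie
    # duties. The duties, scores, and weights below are hypothetical; a real
    # system would learn how the duties trade off from expert-labeled cases.

    # Each action is rated on how strongly it satisfies (+) or violates (-)
    # each duty, on an arbitrary -2..+2 scale.
    ACTIONS = {
        "notify_overseer": {"nonmaleficence": +1, "beneficence": +1, "autonomy": -1},
        "accept_refusal":  {"nonmaleficence": -2, "beneficence": -1, "autonomy": +2},
    }

    # Hypothetical weights expressing how the duties trade off.
    WEIGHTS = {"nonmaleficence": 3.0, "beneficence": 1.0, "autonomy": 2.0}

    def choose_action(actions, weights):
        """Return the action whose weighted duty score is highest."""
        def score(duty_values):
            return sum(weights[d] * v for d, v in duty_values.items())
        return max(actions, key=lambda a: score(actions[a]))

    print(choose_action(ACTIONS, WEIGHTS))  # -> notify_overseer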
Deontic logic—a formalization of the notions of obligation, permission, and related concepts—is a prime candidate as a basis for machine ethics. In "Toward a General Logicist Methodology for Engineering Ethically Correct Robots," Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello describe how deontic logic might be used to incorporate any given set of ethical principles into an autonomous system's decision procedure. Tom Powers' "Prospects for a Kantian Machine," on the other hand, assesses the feasibility of using deontic and default logics to implement Immanuel Kant's categorical imperative.
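For readers new to deontic logic, a minimal sketch of its standard possible-worlds reading may help: a proposition is obligatory if it holds in every "ideal" world, permitted if it holds in some, and forbidden if it holds in none. The worlds and propositions below are invented for illustration and are far simpler than the logics these articles develop.

    # Illustrative sketch only: the core operators of standard deontic logic,
    # read over a hypothetical set of "ideal worlds" (worlds in which all
    # norms are met). Propositions are modeled as predicates on a world.

    IDEAL_WORLDS = [
        {"warn_human": True, "harm_human": False},
        {"warn_human": True, "harm_human": False},
    ]

    def obligatory(prop):
        """O(p): p holds in every ideal world."""
        return all(prop(w) for w in IDEAL_WORLDS)

    def permitted(prop):
        """P(p) = not O(not p): p holds in at least one ideal world."""
        return any(prop(w) for w in IDEAL_WORLDS)

    def forbidden(prop):
        """F(p) = O(not p): p holds in no ideal world."""
        return not permitted(prop)

    print(obligatory(lambda w: w["warn_human"]))  # True
    print(forbidden(lambda w: w["harm_human"]))   # True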
Christopher Grau discusses utilitarianism, another well-known ethical theory that might serve as a basis for implementation. "There Is No 'I' in 'Robot': Robots and Utilitarianism" investigates utilitarianism's viability as a foundation for machine ethics from both human-machine and machine-machine perspectives.
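As a point of comparison, act utilitarianism has a particularly direct computational reading: among the available actions, choose the one that maximizes total net utility across everyone affected. The actions and utility figures in this sketch are hypothetical; a real system would also face the far harder problem of estimating such utilities in the first place.

    # Illustrative sketch only: a bare-bones act-utilitarian choice procedure.
    # Each action maps each affected party to a hypothetical utility value.
    ACTIONS = {
        "reroute_power": {"patient_a": +8, "patient_b": -3},  # net +5
        "do_nothing":    {"patient_a": -5, "patient_b": +1},  # net -4
    }

    def best_action(actions):
        """Return the action with the greatest sum of utilities."""
        return max(actions, key=lambda a: sum(actions[a].values()))

    print(best_action(ACTIONS))  # -> reroute_power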
Conclusion
Not everyone is comfortable with the notion of machines making ethical decisions; some would prefer that machines always defer to human judgment. It's important to observe, however, that such a position entails curbing present and future machine autonomy to an extent that could severely hamper the investigation of machine intelligence itself. That said, there's every reason to believe we can develop ethically sensitive machines. Ethics experts continue to make progress toward consensus concerning the right way to behave in ethical dilemmas. The task for those working in machine ethics is to codify these insights, perhaps even before the ethics experts do so themselves. We hope this special issue will encourage you to join us in this challenge. Visit www.machineethics.org for more information.

Michael Anderson is an associate professor of computer science at the University of Hartford. His research interests include machine ethics and diagrammatic reasoning. He received his PhD in computer science from the University of Connecticut. He's a member of the Yale Bioethics and Technology Working Research Group, the AAAI, and Sigma Xi. Contact him at the Dept. of Computer Science, Univ. of Hartford, 200 Bloomfield Ave., West Hartford, CT 06117; anderson@hartford.edu.

Susan Leigh Anderson is a professor of philosophy at the University of Connecticut, Stamford campus. Her research interests focus on applied ethics. She received her PhD in philosophy from the University of California at Los Angeles. She's a member of the Yale Bioethics and Technology Working Research Group and the American Philosophical Association. Contact her at the Dept. of Philosophy, Univ. of Connecticut, 1 University Pl., Stamford, CT 06901; susan.anderson@uconn.edu.