Moral Machines: Teaching Robots Right From Wrong

Reviewed by Paul Scerri

As agents and robots move more and more from the lab to the real world, the possibility that they will cause physical, psychological, or monetary harm increases. In recent years, malfunctioning robots have caused deaths and serious physical injuries. Some of the blame for the recent global economic crisis has even been placed on intelligent trading agents that didn't fully comprehend the impact of their actions. As the prevalence, availability, capabilities, and autonomy of agents and robots increase, it's critical to examine how to minimize the harm caused by malfunctions or unintended consequences. As engineers, we need to develop practices and techniques that limit the harmful impacts of our technology.

Moral Machines by Wendell Wallach and Colin Allen provides one possible starting point for understanding how to make our agents better members of the societies in which they will reside. Specifically, the book promotes the idea of autonomous moral agents: agents that can reason about the moral consequences of their actions. The authors argue, mostly persuasively, that it isn't sufficient for designers to aim for agents that don't malfunction in harmful ways; the agents must explicitly take the moral implications of their actions into account. Agents must not simply achieve the goals they're designed for effectively; they must be capable of determining when it is morally right not to achieve those goals, and then choosing not to. For some types of agents, this argument is reasonable. A health care agent, for example, might need to balance its job of getting a patient to take medication against the patient's privacy. However, as the authors acknowledge, allowing some types of agents to reason morally is a tricky prospect, with military robots and stock-trading agents being the prime examples.
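The book stays at the conceptual level, but the health care example hints at what such a trade-off might look like in code. The sketch below is my own illustration rather than anything the authors propose: a hypothetical agent scores each candidate action by task utility minus a weighted estimate of moral cost and, when no option comes out ahead, explicitly declines to act.

```python
# Minimal sketch (not from the book) of an agent that can decline its own goal
# when the estimated moral cost outweighs the task benefit. The scoring scheme,
# weights, and action names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_utility: float  # how well the action serves the designed goal
    moral_cost: float    # estimated harm, e.g., loss of patient privacy

def choose_action(actions, moral_weight=1.0):
    """Return the best action, or None if every option does more moral harm
    than task good. Returning None is the agent choosing not to act."""
    best, best_score = None, 0.0
    for action in actions:
        score = action.task_utility - moral_weight * action.moral_cost
        if score > best_score:
            best, best_score = action, score
    return best

if __name__ == "__main__":
    options = [
        Action("tell_family_about_missed_doses", task_utility=0.8, moral_cost=0.9),
        Action("remind_patient_directly", task_utility=0.6, moral_cost=0.1),
    ]
    chosen = choose_action(options)
    print(chosen.name if chosen else "decline to act")
```

The point of the design is the explicit refusal path: achieving the goal is never the only outcome the agent can select.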

The book's primary technical contribution is its examination of the philosophy of morality and how various moral paradigms might be encoded in an agent. The high-level conclusion is that real moral reasoning will prove difficult for agents, just as it is for people. In fact, people can innately make moral judgments that appear incredibly difficult for current AI reasoning approaches. Wallach and Allen show why moral reasoning is computationally difficult and will require far more built-in knowledge, sensing capability, and reasoning than most current agents possess. This should concern society, because agents are taking on more power while remaining far from capable of moral reasoning.
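One way to make the computational difficulty concrete is to count what a naive, consequence-evaluating agent would face. The back-of-the-envelope snippet below is my own illustration, not an argument from the book; the branching factor and horizons are arbitrary assumptions.

```python
# Illustrative sketch (not from the book): exhaustively scoring every chain of
# consequences grows exponentially with how far ahead the agent must look.
def outcome_chains(branching_factor: int, horizon: int) -> int:
    """Number of consequence chains a naive agent would have to assess morally."""
    return branching_factor ** horizon

for horizon in (2, 5, 10):
    print(horizon, outcome_chains(branching_factor=4, horizon=horizon))
# horizon 2 -> 16 chains, 5 -> 1,024, 10 -> 1,048,576
```

And this counts only the search; judging the moral weight of each outcome requires the built-in knowledge and sensing the authors point to.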

While discussing a potential path toward autonomous moral agents, Wallach and Allen argue that designing algorithms for morality will force philosophers to move away from the general principles that typically describe a moral paradigm toward tractable, specific rules and procedures for acting morally. This process will be difficult, and the same set of concrete rules and procedures might not apply across cultures or settings; agents may have to learn the moral rules applicable in a particular setting. Philosophers' feedback on attempts to implement morality in agents will, in turn, be required to refine the rules and procedures. This two-way process between agent researchers and experts in other domains is already familiar, as agents make an impact in areas ranging from biology to economics to language development. If AI and philosophy do come together, we'll have come full circle, and it will be fascinating to see how the two fields' common objectives and vastly different techniques can be reconciled.
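As a toy illustration of what tractable, specific rules might look like, consider the sketch below. It is my own, with hypothetical settings and rules rather than anything the authors propose: a single general principle, "do not deceive," compiles into different concrete tests in different settings, and the per-setting entries are exactly the kind of thing an agent might have to learn.

```python
# Sketch (my own illustration, not the authors') of compiling a general principle,
# "do not deceive," into concrete, setting-specific rules. Settings and rules are
# hypothetical; an agent might have to learn such entries for each new context.
RULES_BY_SETTING = {
    # In a hospital, withholding a diagnosis from the patient counts as deception.
    "hospital": lambda action: not action.get("withholds_diagnosis", False),
    # In a negotiation, withholding a reserve price is tolerated; fabricating facts is not.
    "negotiation": lambda action: not action.get("fabricates_facts", False),
}

def is_permissible(action: dict, setting: str) -> bool:
    """Apply the setting-specific rule; be conservative in unknown settings."""
    rule = RULES_BY_SETTING.get(setting)
    return rule(action) if rule else False

print(is_permissible({"withholds_diagnosis": True}, "hospital"))         # False
print(is_permissible({"withholds_reserve_price": True}, "negotiation"))  # True
```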

Wallach and Allen limit themselves to discussing highly anthropomorphized agents, which can make some of their ideas moot. Some deployed agents might interact with the world in human-like ways and at human-like speeds, but most will probably interact in dramatically different ways and have dramatically different capabilities. Such agents might face moral challenges completely different from those humans face because of the speed and depth with which they can extrapolate events or access information. For example, will agents with incredible forecasting and information-gathering abilities need to decide to keep certain facts to themselves so that we can experience surprise, hope, or even the occasional defeat? Agents and robots are likely to fundamentally change how we live our lives over the next century, and it's an unfortunate oversight in an otherwise interesting book that the authors didn't consider the novel moral challenges that agents with capabilities very different from ours might face.

Although this book is targeted at philosophers, it also provides useful information and discussion for agent researchers. If you work in an area where the agents you develop could cause harm, the book offers a framework for thinking about how to deal with those issues. It might also be a useful starting point for a graduate-level course on ethics and morality for agents and robots.

Paul Scerri, Robotics Institute, Carnegie Mellon University