We are in the midst of a public debate about artificial intelligence. This discussion began sometime last fall, following a popular movie about Alan Turing and some announcements about robotic automobiles. Like many public debates, the discussion has been shaped by fear. Many of the comments reflect a concern that machines will soon replace human beings and make people obsolete. In December 2014, physicist Stephen Hawking told a BBC program that “The development of full artificial intelligence could spell the end of the human race.” PayPal cofounder Elon Musk was even more provocative when he claimed that our current approach to artificial intelligence was “summoning the demon.” Lost in this debate is the fact that artificial intelligence is a field with many different points of view and that many researchers aim to enhance human productivity rather than replace human beings.
Broadly speaking, we can divide artificial intelligence into four large groups. Each group has slightly different goals and often substantially different methods. We will call the first “classical artificial intelligence.” This group tries to build computer systems that replicate human behavior, and it could fairly be accused of actually trying to replace human beings with machines. Although this field is as old as computing itself, many identify classical artificial intelligence’s founders as people such as John McCarthy and Marvin Minsky, who worked at MIT in the mid-1950s.
Initially, researchers in classical artificial intelligence worked on problems such as natural language translation, symbolic reasoning, and game playing. This branch of research has created some interesting and useful technology but has often failed to reach its goals. An example of such technology is the expert systems that came from research in the 1970s and 1980s. These systems were supposed to capture the expertise of human beings by using sets of rules. One early system tried to model a physician’s diagnostic expertise. Although it worked in a limited framework and was able to exhibit behavior similar to that of physicians, it often reached conclusions that were at odds with conventional medical practice. As a result, such systems have rarely replaced physicians or other experts. Rather, they have served as useful tools and found some application in production management.
From the start, classical artificial intelligence research was plagued by exaggerated promises. Researchers regularly misjudged the effort required to reach the goals that they set for themselves. In the 1960s, some researchers turned away from the problem of duplicating human intelligence to the idea of creating computer systems that could augment human intelligence. The founder of this field is usually identified as J.C.R. Licklider, who worked for the research firm Bolt, Beranek and Newman in Cambridge, Massachusetts. In a paper called “Man-Computer Symbiosis,” Licklider wrote, “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”
This second field has come to be known as “human-computer interaction” and represents one of the larger subdisciplines of computer science today. Because it had more modest goals than classical artificial intelligence, it has done a better job of achieving its targets. It has been responsible for the graphical user interfaces (GUIs) that we commonly use and has done much of the research on the algorithms and processes that make cell phones and mobile platforms so appealing to us.
A third field emerged in the mid-1980s and is called “machine learning.” The leaders of this field recognized that classical artificial intelligence had not achieved its goals and that computer systems could do more than they were accomplishing in the field of human-computer interaction. The founders of the field traced their ideas back to work done by Herb Simon of Carnegie Mellon University. (Simon has proven to be a highly influential contributor to computing and has provided ideas that have shaped many parts of computer science.) Instead of trying to duplicate human intelligence, they sought to develop programs that monitor the operation of a machine or an organization, gather information, and use that information to refine its operation.
Machine learning drew heavily on tools from mathematical statistics to develop different kinds of identification and classification algorithms. One of the pioneering researchers in artificial intelligence, Edward Feigenbaum, has argued that these algorithms have proven remarkably successful. They have been used in practical systems that identify objects, find patterns in data, and develop strategies for robots.
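To make the idea concrete, the sketch below shows one of the simplest statistical classifiers, a nearest-centroid rule: the program summarizes labeled examples as per-class averages and assigns each new observation to the closest average. The toy data and the method are illustrative assumptions only, not anything described in this article; practical systems use far richer statistical models.

```python
# A minimal nearest-centroid classifier: learn one "average example"
# per class, then label new observations by the closest average.
# The data below is invented purely for illustration.
from math import dist  # Euclidean distance, available in Python 3.8+

# Labeled training observations: (feature vector, class label)
training = [
    ((1.0, 1.2), "small"), ((0.8, 1.0), "small"), ((1.1, 0.9), "small"),
    ((4.0, 4.2), "large"), ((4.3, 3.9), "large"), ((3.8, 4.1), "large"),
]

def fit(examples):
    """Compute one centroid (mean feature vector) per class label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: tuple(total / counts[label] for total in acc)
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid lies nearest to the observation."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

centroids = fit(training)
print(predict(centroids, (1.0, 1.1)))  # expected: small
print(predict(centroids, (4.1, 4.0)))  # expected: large
```

Even this toy version captures the pattern described above: the program’s behavior is not written out rule by rule but is shaped by the examples it has observed.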
The final subfield of artificial intelligence is the newest. It flips the relationship between computers and human beings. In classical artificial intelligence, computer systems attempt to duplicate the behavior of human beings. In this field, human beings handle tasks that are difficult for computers. Transcription is perhaps the best example of such a problem. Many medical records systems rely on human beings to transcribe doctors’ notes and then process the information using conventional computational methods. One of its founders, Carnegie Mellon professor Luis von Ahn, calls this field “artificial artificial intelligence.” More commonly, the field is known as “collective intelligence.”
Collective intelligence designers create computing systems that take advantage of two different aspects of human behavior. First, humans can recognize complex patterns in ways that cannot be easily computerized. From these patterns, we can recall a variety of ideas and reason our way to sophisticated conclusions. A simple melody, for example, can bring a host of ideas to mind.
Second, groups of humans generally know far more than any individual can know. A group will have many points of view, many assumptions, and many ways of reasoning to reach a conclusion. Experience has shown that these different points of view lead to a very rich understanding of an issue. According to one of the founders of the field, MIT professor Tom Malone, “collective intelligence is groups of individuals acting collectively in ways that seem intelligent.”
Perhaps one of the best-known products of collective intelligence is the online encyclopedia Wikipedia. Wikipedia contains the contributions of thousands of individuals. It is connected to artificial intelligence because it often provides general information for artificial intelligence systems. The IBM system Watson, which plays the quiz game “Jeopardy!,” used Wikipedia as one of its fundamental sources of information.
All four of these subjects contribute to the field of artificial intelligence. Hence, when we discuss how artificial intelligence will change human beings and human society, we are actually talking about four different approaches to the subject, each with a different vision. One would simulate human intelligence with computer systems. The next would augment that intelligence. The third would learn from observation how to do tasks more efficiently, and the last would combine the lessons that many people have gained over their lifetimes. While it seems likely that each of these four fields could radically change human experience, it is difficult to see how any of them would entirely replace human beings. At base, the human experience remains the foundation on which all of them build.
Although artificial intelligence seems unlikely to replace human beings, as Stephen Hawking speculated it might, it will likely change the roles that human beings play in society. This phenomenon is known as the “hollowing of the workforce.” Computer technology tends to create a few new jobs that require sophisticated skills and many jobs that require low-level skills, while it reduces the number of mid-skill jobs. As we look to the impact of artificial intelligence on human experience, we need to consider how best to match human workers to the roles that computers create, as well as how to design computers that best fit human society.
About the Author
David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee, and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at grier@computer.org.