The Old and the New; The Computer and the Brain
By David Alan Grier

There is great value in reading classic literature, but computer scientists are often reluctant to read anything but the most recent articles. Statistics from the IEEE and ACM digital libraries suggest that we rarely look at anything more than two or three years old. Yet, by constraining our vision, we overlook the richness of our field. We see only the questions that concern us today and miss ideas that have shaped 60 years of research. On the few occasions when we do turn to classic literature, we tend to use it only to justify our own work, to say that our contributions have always been central to the field or that we are solving universal problems. Rarely do we turn to this literature to test our ideas, to ask whether we have contributed valid and lasting ideas to the field.

Of the early works on computer science, three or four should be of interest to most computer scientists. High Speed Computing Devices (McGraw-Hill, 1952) shows how we developed the basic ideas of computer architecture. Preparation of Programs for Digital Computers by Wilkes, Wheeler, and Gill (Addison-Wesley, 1951) lays the foundation for software development. “Three Models for the Description of Language” by Noam Chomsky (MIT, 1956) gives the ideas that shaped computer languages.

To these three, a well-read computer scientist should add John von Neumann’s The Computer and the Brain (Yale University Press, 1957). It has recently been published in a new Chinese translation (Jiangsu People’s Publishing House, 2011). Less technical than High Speed Computing Devices or Preparation of Programs, it provides insight into one of the founders of computer science and into how the field evolved in the 1950s.

In the early 1950s, the computing field could be seen as having two distinct subgroups. The first was dominated by engineers and the second by mathematicians. The two groups were in constant discussion with each other, but they dealt with two slightly different problems. The engineers worked with physical devices and tried to turn them into computing machines. The mathematicians studied how computing machines could be applied to specific problems. Von Neumann made contributions to both, though he was always a mathematician. He became interested in computing in the late 1930s and worked with people as diverse as Alan Turing and John Todd. Turing developed an abstract computing machine in 1936 as a mental device for settling an abstract question in mathematical logic. Todd ran the Admiralty Computing Service for the British government, a practical office that employed human clerks and adding machines to compute the answers to engineering problems. From his experience with the two, von Neumann formulated the basic concepts of computing machinery and programs.

During the Second World War, von Neumann was involved with the ENIAC project, an effort by the US Army to create an electronic computing device. While working on the project, von Neumann contributed to a document called “The First Draft of a Report on the EDVAC.” This paper defines the basic architecture of the modern computer, including the memory, the processing unit, and the control unit. The authorship of this document is disputed. Recent scholarship has shown that several members of the project helped develop the ideas in the report. Indeed, certain elements of the report, such as the description of the binary circuits for addition and subtraction, were well known to the computing community. However, von Neumann helped to abstract and generalize these ideas. He helped connect the ideas about computing to the evolving scholarship on the brain, suggesting that computing was mechanizing certain aspects of thought.
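The architecture the report describes can be caricatured in a few lines of modern code. The sketch below is purely illustrative; the instruction set, names, and memory layout are all invented for this example. A single memory holds both program and data, the loop plays the role of the control unit, and the accumulator arithmetic plays the role of the processing unit.

```python
# An illustrative toy of a stored-program machine: one memory for both
# code and data, a control unit (the fetch-decode loop), and a processing
# unit (the accumulator arithmetic). The instruction set is invented.

memory = [
    ("LOAD", 8),    # 0: acc <- memory[8]
    ("ADD", 9),     # 1: acc <- acc + memory[9]
    ("STORE", 10),  # 2: memory[10] <- acc
    ("HALT", 0),    # 3: stop
    None, None, None, None,  # 4-7: unused
    5,              # 8: first operand
    7,              # 9: second operand
    0,              # 10: result
]

acc = 0   # processing-unit register (accumulator)
pc = 0    # control-unit program counter

while True:
    op, addr = memory[pc]   # fetch and decode
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[10])  # -> 12
```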

The Computer and the Brain followed the first draft report by approximately 10 years. In the intervening years, von Neumann had established himself as a leader in the computing field. He had built a computer at the Institute for Advanced Study, where he worked. That computer helped demonstrate the value of the ideas found in the draft report on the EDVAC. It served as a model for a dozen other machines, including the IBM 701 and the Soviet MESM, one of the machines that introduced computing to China.

In 1955, von Neumann agreed to give a series of talks at Yale University on the topic of the computer and the brain. Shortly after starting work on the project, he fell ill and was hospitalized with a disease that the doctors declared to be incurable. Everything “went from bad to worse,” explained his wife, “though there was still some hope left that with treatment and care the fatal illness might be arrested.” In the hospital, he continued to work on the lectures, though he soon reached the point where his “exceptional mind could not overcome the weariness of his body.”

As a result, The Computer and the Brain is a fragmentary and incomplete book. The pages show that he was racing to organize his ideas and put them into words. In many ways, the book builds upon the ideas that were included in the “First Draft of a Report on the EDVAC.” He presents the basic structure of a computing machine, a structure that had been refined and tested over the prior decade. As in the draft report, he describes the machine in terms of the basic elements of the brain: neurons, synapses, and connections. As he goes back and forth between the electronic computer and the biological brain, he gets close to the question, “To what extent does a computer think?”

As a mathematician, von Neumann knew that to show that a computer could think, he would need to show two related but slightly different things. First, he would need to show that the human brain could do everything that a computer could do. Next, he would need to show that a computer could do everything that the brain could do. In set theory, if set A contains every element of set B and set B contains every element of set A, then the two sets are equal.
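In symbols, this is the familiar double-inclusion argument for set equality:

$$(A \subseteq B) \;\text{and}\; (B \subseteq A) \;\Longrightarrow\; A = B$$

Read A as the set of things the brain can do and B as the set of things a computer can do; establishing both inclusions would establish the equivalence.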

Von Neumann had already completed half of this proof. In the “First Draft of a Report on the EDVAC,” he had used a biological metaphor as the basis of his description of the computer. In the process, he had shown that the brain could do everything that the computer could do. In The Computer and the Brain, von Neumann wanted to complete the second part of the proof. However, he could not. Furthermore, he knew that the mathematical literature showed that there were things that could be conceived by the human mind but could not be represented in a mechanical device.

Unable to complete a proof that the computer and the brain were equivalent, von Neumann began to lay a framework for future generations. He argued that there might be another way of looking at computing that would show how the brain and the computer were related. He introduced the concept of “short codes,” an idea that we now call macroprogramming. He argued that we might be able to find a new way of describing computation that would be based on his fundamental model but would better capture the way the brain worked. The “outward forms of our mathematics,” he wrote, “are not absolutely relevant from the point of view of evaluating what the mathematical or logical language truly used by the central nervous system is.” However, the two would be related, he argued, and hence the computer could help us understand how the brain operated.
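A “short code” is, in modern terms, a layer of pseudo-instructions that an interpreter expands into the machine’s primitive operations. The Python sketch below is a loose, hypothetical illustration of that layering, not code from the book; PRIMITIVES, MACROS, and run are names invented for the example.

```python
# A hypothetical modern rendering of a "short code": higher-level
# pseudo-instructions interpreted on top of a machine's primitives.
# All names here are invented for illustration.

# Primitive operations of the underlying "machine" (a simple stack machine).
PRIMITIVES = {
    "ADD": lambda stack: stack.append(stack.pop() + stack.pop()),
    "MUL": lambda stack: stack.append(stack.pop() * stack.pop()),
    "NEG": lambda stack: stack.append(-stack.pop()),
}

# The "short code": each pseudo-instruction expands into primitives.
MACROS = {
    "SUB": ["NEG", "ADD"],        # a - b == a + (-b), with b on top of the stack
    "DOUBLE": ["PUSH 2", "MUL"],  # 2 * a
}

def run(program):
    """Interpret a program, recursively expanding macros into primitives."""
    stack = []

    def execute(instr):
        if instr.startswith("PUSH"):
            stack.append(float(instr.split()[1]))
        elif instr in MACROS:
            for sub_instr in MACROS[instr]:
                execute(sub_instr)
        else:
            PRIMITIVES[instr](stack)

    for instr in program:
        execute(instr)
    return stack

# Compute (7 - 3) * 2 in the short code; the machine itself knows
# nothing of SUB or DOUBLE.
print(run(["PUSH 7", "PUSH 3", "SUB", "DOUBLE"]))  # -> [8.0]
```

The layering is the point: the program is written in a vocabulary the underlying machine does not natively possess, which is roughly the relationship von Neumann imagined between the nervous system’s own “language” and his fundamental model of computation.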

Reading books like von Neumann’s The Computer and the Brain helps us to become more sophisticated computer scientists. They tell us little about the current state of research. Certainly, the study of artificial intelligence has progressed substantially since 1955. The technologies that we are now pursuing, such as deep learning, have produced results far beyond anything that von Neumann saw in his own time. Yet, they also open some fundamental questions about the brain and consciousness, questions that our current methodologies may never be able to answer. Hence, like von Neumann, we might need to consolidate what we know and look ahead to new technologies.


About David Alan Grier

David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at grier@computer.org.