Issue No. 04 - July-Aug. (2012 vol. 27)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MIS.2012.73
Fei-Yue Wang , State Key Laboratory of Management and Control for Complex Systems
Coincidentally, 23 June 2012 marked anniversaries of two significant lives: it was the centennial of Alan Mathison Turing's birth, and it was also the day of the Dragon Boat Festival (also called the Duanwu Festival) in China, commemorating the death of the third-century BCE Chinese poet Qu Yuan. The father
of computer science and artificial intelligence, Turing was a powerful influence in shaping early concepts of computation, algorithms, and intelligence. Duanwu is a national celebration of Qu Yuan, who, according to folklore, committed ritual suicide by wading into the Miluo River weighted down with a large rock, supposedly in protest against the era's corruption.
I was struck by the fact that two such luminaries in their respective fields would cross paths in such a way, especially since Turing also took his own life. It is hard to fully grasp how instrumental Turing's contributions were—the Turing machine established the foundation of modern computer science and technology, and the Turing Test motivated and planted the seed for artificial and computational intelligence. His work changed our world and will continue to do so. What will the future look like? What will be the capabilities of future artificial intelligence and intelligent systems? In another 100 years, will we even be allowed to remember Turing's birthday?
If Raymond Kurzweil's predictions come true, in fewer than 20 years we may lose our right to remember Turing. At the point of what Kurzweil calls the "singularity," machine intelligence will exceed human intellect, and humans will succumb to machine control. Turing's test will continue, but in reverse: machines will become the experimenters while humans are the subjects. Under the watchful eyes of robot professors, robot graduates will study human intelligence, theorizing about when human intellect will reach that of their machine counterparts.
I have a hard time believing in the singularity, but I do believe that the rapid evolution of the Internet may lead to an "intelligence explosion." As Max Born wrote in My Life and My Views, discussing the relationship between eminent scholars such as Turing and society at large, "The total number of men increases with each improvement of the conditions of life. If the percentage of the gifted remains roughly constant, their absolute number grows at the same rate as the total number of men. And with each technical invention the possibility of new combinations increases. Hence the situation is similar to that of the calculus of compound interest; one has what the mathematicians call an exponential increase." Thus, "the knowledge of nature and the power springing from it have been steadily growing—if with fluctuations and retrogressions—with the increasing acceleration characteristic of a self-supporting (exponential) process." Finally, "the day inevitably had to come when the change in the conditions of life produced by this process would be considerable during one single generation, and there would appear a catastrophe. This impression of catastrophe is increased by the fact that certain nations have not taken part in this technical development and have to adapt themselves to it without preparation."
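Born's compound-interest analogy can be made concrete with a toy calculation. The following sketch is my own illustration, not anything from Born: the population size, growth rate, and gifted fraction are all assumed numbers chosen only to show that a constant fraction of an exponentially growing whole itself grows exponentially.

```python
def gifted_after(generations, pop0=1_000_000, r=0.02, p=0.001):
    """Gifted head count after n periods of compound population growth.

    Assumed toy parameters: initial population pop0, per-period growth
    rate r, and a constant gifted fraction p (Born's assumption).
    """
    population = pop0 * (1 + r) ** generations
    return p * population

# Because p is constant, the gifted count compounds by the same factor
# (1 + r)**n as the total population -- Born's "compound interest."
growth_factor = gifted_after(100) / gifted_after(0)
print(round(growth_factor, 2))  # prints 7.24, i.e. (1.02)**100
```

The point of the sketch is only Born's structural claim: nothing about the gifted themselves needs to change for their absolute number to explode; compounding in the whole population is enough.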
So far it seems that Born is correct. For the most part, our catastrophes have been self-created and have remained conflicts among people. Humankind still retains the option of self-correction. But if machines do end up controlling us, that option will be gone.
Even in the absence of a technological singularity, we have to find a way to avoid Born's proposed technological catastrophe. The difficulty lies in the fact that this is not a purely technical problem; it is a political problem—more precisely, a societal, human problem.
The Way Forward
So what of a solution? I personally have no idea where to find a perfect or complete answer; it may be an impossible task. But I do believe that whether we approach the issue from a political or a technological standpoint, Karl Popper's "open society" and its related "piecemeal engineering" are great starting points. In The Open Society and Its Enemies, Popper mounted a systematic critique of historicism, holism, and their related philosophies. In a sense, he rejected the notion that machines could overcome humanity and considered such an idea an outrageous claim to exact knowledge. To me, the idea of a technological singularity is not so much outrageous in itself as it is an outrageous imagining of exact knowledge.
Popper's critiques, particularly of Plato, Hegel, and Marx, also kept him from offering a comprehensive definition of any ideal social model. Thus, other than his statement that "the magical or tribal or collectivist society will also be called the closed society, and the society in which individuals are confronted with personal decisions, the open society," we cannot glean much of his thinking on open societies. Nevertheless, he writes that "the transition from the closed to the open society can be described as one of the deepest revolutions through which mankind has passed," and warns that the process will be a long one: "The Greeks started for us that great revolution which, it seems, is still in its beginning—the transition from the closed to the open society." Popper also dismisses the possibility of achieving an open society through utopian engineering, because the demands such an approach places on human intellect and morality are immense, and the necessary level of intellect and constancy of morality cannot be guaranteed; inevitably, such engineering leads to a totalitarian state. Popper believes the road to an open society can only be a gradual one, through "piecemeal engineering": that way, expectations of human intellect and morality remain realistic, avoiding the need for deception and, thus, a totalitarian or authoritarian government. Interestingly, the assumption underlying piecemeal engineering agrees with Herbert Simon's "bounded rationality."
Of course, an "open society" that presents individuals with personal decisions does not necessarily mean that those decisions will determine the formulation of social policy. If that were the case, an open society would be more utopian than a utopian society. Personally, I think the core principle of an open society can be described using a single sentence borrowed from Pericles's famous funeral speech: "Although only a few may originate a policy, we are all able to judge it."
These words set me to contemplating Twitter and its Chinese counterpart, Weibo; the relationship between piecemeal engineering and "Twitter technology"; and, from there, social computing and computational dialectics under the artificial, computational, and parallel (ACP) methodology, in which those three concepts are technical extensions of thesis, antithesis, and synthesis for dialogues, consensus building, and implementation, respectively. Finally, I thought of open societies and "computational societies." I still cannot describe exactly what a computational society is, but it is something like the following formula:
The computational society = the open society + cyberspace + intelligent systems.
I believe that in a computational society, bounded rationality could be extended to its maximum possible limit with the support of computing machines. The world would be more open, impartial, and fair, one where "individuals are confronted with personal decisions" and "we are all able to judge." We could avoid not only the technological singularity but also Born's technical catastrophe.
Let's move toward the computational society via Twitter-like smart technology!