I have a dream (not a nightmare) that one of these days a computer system will apply to become a member of the IEEE Computer Society. My nightmare is that we will refuse the application, expecting all of our members to be humans.
Ok... I can feel readers dividing into three camps. There is the "it will never happen" camp, which is a matter of faith on the part of some folks (even religious faith, in some cases). If this is your view, you can probably stop reading at this point. There is the "not in my lifetime" group, which concedes that computing systems may become conscious, intelligent, and/or otherwise capable of applying for membership, but holds that we are a long way from that point. The third group is saying "well, duh" -- the singularity identified by Vernor Vinge (1993 NASA presentation) and promoted by Ray Kurzweil (video from the Stanford Singularity Summit) is headed our way, and we, of all professionals, should be lined up to greet the newcomers.
This is one of the "impossible" things that I have raised with students, including retirees in some of the classes I teach. Multiple paths exist that could lead to an additional replicator with intelligence (Richard Dawkins's concept). These include emulating the human brain in a computer (one of the grand challenges), evolution via mechanisms such as genetic algorithms (patents have already been issued for inventions from this path: John Koza; Scientific American, Feb. 2003), or some brilliant stroke of insight from a computer scientist. But there are other paths that may lead this way as well, and more likely ones in my opinion. As we build complex systems, and complex networks of complex systems, there may well be emergent behaviors. This path would be unanticipated, becoming evident only as the entities' actions become evident. All of these are fairly common tropes in science fiction -- just as nuclear submarines emerged from Jules Verne and communications satellites from Arthur C. Clarke. The problem with science fiction is the need for bad things to happen to provide creative tension in the story, and typically this means the technology is evil -- the "Crichtonization" of technology, as I like to think of it (with Michael Crichton as exemplar of this form of fiction). Sometimes the technology is on the good side -- Heinlein's "The Moon is a Harsh Mistress" or Brunner's "Stand on Zanzibar", for example. I somehow suspect emergent sapient replicators won't be particularly bad or evil. They may not even choose to interact with us at all. And, I suppose, they may not feel the need to join the Computer Society. But just in case, I think we ought to make sure they can if they want to do so.
There are doubts about Ray Kurzweil's projections in the December issue of IEEE Spectrum -- however, that critique focuses on Kurzweil's timeline, not his hypothesis. The first question is "is it possible?" and only then "if so, when?". I do not doubt that computers will pass the Turing test one of these days, and when they do, we will complain that the test wasn't sufficient, or argue that the computer was programmed for that objective and is not really intelligent. I recall from my college years the assertion that an AI might win at checkers (Samuel had already proven this) but would never be a master at chess (Deep Blue didn't beat Kasparov for a few decades -- and of course the argument became "it just did that with brute calculation").
I segment Kurzweil's projections into two areas: what computational systems will be able to do (at some point; getting caught up in which year is not really relevant), and whether we can "upload a human mind" into a different host, be it a computer or whatever else is available. I anticipate the first will occur decades, at least, before the second. The "Rapture of the Nerds" (Spectrum, 2008) approach fails to make this separation, setting the bar at full mind upload before we reach the singularity.
But then check out the next article in Spectrum, "The Brain of a New Machine", which projects rat-like intelligence using new technology within five years. Here we see a significant distinction as well: evolution vs. intelligent design. It is interesting to look at John Koza's experience with genetic computing, where a bit of unnatural selection led to patentable devices with characteristics the humans involved could not have designed, and compare this with the idea of brute calculation applied to a problem where the algorithm is known. I, for one, expect emergence, perhaps guided by intent, to be the first source of machine candidates for Computer Society membership, rather than something designed to be intelligent.
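For readers who haven't met the evolutionary path in code, here is a minimal sketch of the genetic-algorithm loop: selection, crossover, and mutation applied to bit strings. This is a toy illustration only -- the fitness goal (maximize the number of 1 bits) and all parameters are arbitrary choices of mine, and Koza's patent-producing work used genetic programming over program trees, a far richer representation than this:

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=60, seed=0):
    """Toy genetic algorithm: tournament selection, crossover, mutation."""
    rng = random.Random(seed)
    # Start from a random population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)   # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(genome_len):          # occasional point mutation
                if rng.random() < 1.0 / genome_len:
                    child[i] = 1 - child[i]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# "Unnatural selection" toward an arbitrary goal: as many 1 bits as possible.
best = evolve(fitness=sum)
```

Nothing in the loop knows how to solve the problem; the solution emerges from selection pressure, which is the point of the evolution-vs.-design distinction above.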
So, here are a few questions we can ponder for the next decade or so: If machine intelligence is possible, when will it occur? And which path(s) are most likely to yield consciousness? I note that Versace and Chandler, in their "Brain of a New Machine" article, sidestep this issue, asserting that their devices will "behave as if they are intelligent, emotionally biased, and motivated without" consciousness -- I know some folks like that already.