Issue No. 1, January 2004 (Vol. 37)
Published by the IEEE Computer Society
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2004.10000
The Problem With AI
I read Neville Holmes's column about artificial intelligence with great interest ("Artificial Intelligence: Arrogance or Ignorance?" Nov. 2003, pp. 120, 118-119).
The family patriarch, a rocket scientist by profession, once said, "Before one can have 'artificial intelligence', one needs 'natural intelligence.'" As Mr. Holmes opined, the implication is that the people attempting to implement AI would first have to understand the fundamentals of human intelligence. It was also a jibe at those who employ the noble moniker to describe trivial mechanisms.
The problem with today's so-called AI is largely that, as Mr. Holmes describes, the processes cannot yet adapt their own algorithms. While Cellary's assertion regarding the ability to use knowledge to make decisions seems to be on the right track, I doubt that normal people would consider someone who spews facts yet is unable to assimilate new data to be intelligent.
The key to having an intelligent process is that the entity has enough sensory inputs and meaningful outputs—a feedback loop as per Norbert Wiener—to validate internally formed theories against reality. Without the ability to train in a substantially closed-loop fashion, new things cannot be learned.
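The closed-loop idea can be made concrete with a minimal sketch (an illustration, not anything from the letter; the function name, learning rate, and model are all assumptions): an online learner whose only route to improvement is a feedback signal comparing its prediction with observed reality.

```python
# Minimal sketch of Wiener-style closed-loop learning (illustrative only):
# the system forms a prediction, compares it with observed reality, and
# feeds the error back to adjust its internal model.

def train_closed_loop(observations, lr=0.1):
    """Fit y = w * x online; `w` is the 'internally formed theory'."""
    w = 0.0
    for x, y in observations:
        prediction = w * x       # output based on the current theory
        error = y - prediction   # sensory feedback from reality
        w += lr * error * x      # adapt the theory: the feedback loop
    return w

# With the loop intact, the model converges toward the true relation y = 2x.
data = [(x, 2.0 * x) for x in [1, 2, 3, 4] * 50]
print(round(train_closed_loop(data), 2))  # converges near 2.0
```

Cutting the error-feedback line leaves `w` fixed at its initial value forever, which is the letter's point: without the closed loop, nothing new is learned.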
The debasing of the AI concept by ascribing the term to applications unworthy of the name is why AI pioneer Marvin Minsky declared, "AI has been brain-dead since the 1970s." The software industry has relegated "AI technology" to interactive help and video game opponents. Of course, scientists often use terms that don't have the same meaning to the general public. Perhaps AI has become one of those words.
I like the term "algoristics" that Mr. Holmes proposes to distinguish static expert systems from the type of intelligent systems we have yet to implement, and I hope that others will adopt it—even if only to keep researchers on the same page.
Gerad Welch, Rochester, Minn.; email@example.com
Neville Holmes responds:
It is perhaps relevant to note that not so long ago, a column called Open Channel used to appear on the back page of Computer. Heading it was a quotation ascribed to Charles McCabe of the San Francisco Chronicle: "Any clod can have the facts, but having an opinion is an art."
In his discussion of AI, Neville Holmes does not sufficiently take into account the results from current brain research. This research, which uses positron emission tomography and magnetic resonance imaging, reveals two distinct types of memory: declarative (explicit) and nondeclarative (procedural).
Essentially, declarative memory is what we can express in words or bring to mind as a mental image; it is explicit or conscious memory. In contrast, nondeclarative memory is the collection of skills we acquire with practice or repetition—our habituation or conditioning. For example, artists, musicians, and athletes are masters of specific procedural knowledge. As Larry R. Squire and Eric R. Kandel explain in Memory: From Mind to Molecules (W.H. Freeman, 2000), much of what Holmes refers to is procedural memory.
Although these two divisions differ from Howard Gardner's independent dimensions of intelligence that Neville Holmes refers to, there is no conflict between them; they are simply different ways to study memory. One is more structure based, while the other explains various functional features of memory.
Human intelligence, understood as a problem-solving capacity of any kind, uses both kinds of knowledge. Basically, human intelligence does not really know which knowledge domain it is drawing on. This is where Gardner's multidimensional view of intelligence comes into play.
What we can put into a computer program directly, as software or as data, is basically of a declarative nature. A piece of software—whether AI or conventional—can collect procedural knowledge, but this occurs indirectly. The software must be prepared for that purpose and must be trained to perform a specific task, such as face or speech recognition.
This procedural knowledge is also restricted to what can be captured as digital data, unlike human intelligence, which uses every knowledge faculty available. This restriction limits a program's intelligence, as does our not knowing which data human intelligence actually uses.
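Giezen's distinction can be sketched in code (a hypothetical illustration, not anything from the letter): declarative knowledge goes into a program directly as data, while procedural knowledge exists only as a parameter the program acquires indirectly, by being trained on examples.

```python
# Declarative knowledge: stated directly, as data we can put into a program.
capitals = {"Netherlands": "Amsterdam", "France": "Paris"}

# Procedural knowledge: acquired indirectly, by training on examples.
# Here a trivial threshold "skill" is learned rather than written down.
def learn_threshold(examples):
    """Find a cut separating labels 0 and 1 from (value, label) pairs."""
    lo = max(v for v, label in examples if label == 0)
    hi = min(v for v, label in examples if label == 1)
    return (lo + hi) / 2  # the learned parameter embodies the 'skill'

cut = learn_threshold([(1.0, 0), (2.0, 0), (4.0, 1), (5.0, 1)])
print(capitals["France"], cut)  # declarative recall vs. learned behavior
```

The dictionary lookup can be inspected and explained in words; the threshold exists only because the program was prepared for that task and trained on examples, which is the sense in which software collects procedural knowledge indirectly.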
Jan Giezen, Delft, Netherlands; firstname.lastname@example.org
Neville Holmes responds:
The points that Jan Giezen makes all boil down to a fundamental issue that I have raised before: Computing people have much too simple ideas about the human brain ("Would a Digital Brain Have a Mind?" Computer, May 2002, pp. 112, 110-111).
To say that human memory is of only two kinds is a gross simplification. I haven't read the book that Giezen cites, but other books that I have read, such as Memory: Phenomena and Principles by Norman E. Spear and David C. Riccio (Allyn and Bacon, 1994), emphasize the complexity of human memory, and indeed that of other animals. Popular writings distinguish many other kinds of memory, episodic memory being one that springs to mind.
To equate human memory with intelligence is wrong. Dumb people can have good memories, and smart people can have bad memories. This is where Gardner's work comes in. There are many quite different and relatively independent intelligences. It seems to me that an intelligence in Gardner's sense is a talent for exploiting perception and memory to produce high-quality behavior in a particular area.
Divisible Load-scheduling Discovery
Since the publication of my article, "Ten Reasons to Use Divisible Load Theory" (Computer, May 2003, pp. 63-68), I have become aware of an article by R. Agrawal and H. V. Jagadish published in 1988 ("Partitioning Techniques for Large-Grained Parallelism," IEEE Transactions on Computers, Dec. 1988, pp. 1627-1634) that, independently of the earliest work mentioned in my article (also published in 1988), discusses divisible load modeling.
Although apparently unknown to many divisible load-scheduling researchers until recently, this article has some noteworthy features and firsts. It models the divisible load-scheduling problem using Gantt-like charts and includes solution reporting time in much the same way that others have done since. It appears to be the first paper to discuss a linear programming solution (unlike the algebraic solution discussed in our first paper), a proof of the optimal order for solution reporting, and an experimental evaluation of divisible load scheduling. All in all, it is quite a forward-looking paper.
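For readers unfamiliar with the model, the algebraic equal-finish-time approach that the letter contrasts with linear programming can be sketched roughly as follows (a simplified single-level star with sequential load distribution and no reporting time; the function and variable names are illustrative assumptions, not from either paper):

```python
def load_fractions(w, z):
    """Optimal load fractions for n workers on a star network.

    w[i]: time to compute one unit of load on worker i
    z[i]: time to transmit one unit of load to worker i
    Load is sent to workers one at a time; optimality requires that all
    workers finish computing simultaneously, giving the recursion
        a[i] * w[i] = a[i+1] * (z[i+1] + w[i+1]),
    which is then normalized so the fractions sum to 1.
    """
    a = [1.0]
    for i in range(1, len(w)):
        a.append(a[-1] * w[i - 1] / (z[i] + w[i]))
    total = sum(a)
    return [x / total for x in a]

# Two identical workers with free communication: the load splits evenly.
print(load_fractions([1.0, 1.0], [0.0, 0.0]))  # [0.5, 0.5]
```

With communication cost set to zero, the fractions reduce to being proportional to each worker's computing speed; nonzero link costs shift load toward the workers served earlier.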
Thomas Robertazzi, Stony Brook, N.Y.; email@example.com