Issue No. 03 - May/June (2004 vol. 19)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MIS.2004.10
Nigel Shadbolt , University of Southampton
Plenty of Room at the Bottom
In 1959, Richard Feynman delivered an extraordinarily prescient view of the potential of what we now call nanotechnologies. He discussed the problem of manipulating and controlling things on a small scale. He was well ahead of his time in imagining what small might mean. In the late '50s, while people imagined motors being reduced to the size of a fingernail, he foresaw a world by 2000 in which we could etch information onto materials at atomic scales. He calculated that ultimately, all the information then available in all the world's libraries would fit in material one two-hundredth of an inch wide.
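Feynman's figure can be reproduced with back-of-the-envelope arithmetic. The inputs below (roughly 10^15 bits for all the world's books, 100 atoms per bit, and an atomic spacing of about 2.5 angstroms) are assumptions drawn from the usual retelling of his talk, not figures stated in this column:

```python
# Back-of-the-envelope reconstruction of Feynman's library estimate.
# All input figures are rough assumptions, not data from this column.
BITS_IN_ALL_BOOKS = 1e15   # assumed total information in the world's libraries
ATOMS_PER_BIT = 100        # roughly a 5 x 5 x 5 cube of atoms per bit
ATOM_SPACING_M = 2.5e-10   # assumed atomic spacing, about 2.5 angstroms

total_atoms = BITS_IN_ALL_BOOKS * ATOMS_PER_BIT
edge_in_atoms = total_atoms ** (1 / 3)      # atoms along one edge of a cube
edge_in_m = edge_in_atoms * ATOM_SPACING_M
edge_in_inches = edge_in_m / 0.0254

print(f"cube edge: {edge_in_inches:.4f} inch")
```

Under these assumptions the cube edge comes out near 0.005 inch, in line with Feynman's "one two-hundredth of an inch."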
In several of these columns, I've tried to convey a sense of wonder for the orders of magnitude, large or small, that the area of intelligent systems now encompasses. This is difficult because such quantities are beyond our grasp. Our perceptions aren't tuned to the very large or the very small; we struggle to appreciate figures on a cosmic or nano scale.
Seeing Through the Magnifying Glass Darkly
We seem able to assimilate and admire a few orders of magnitude either side of our own natural scale. The cathedrals of the Middle Ages continue to impress us because we can compare their scale to our own and understand a structure tens of times taller than ourselves. We can still relate to, and so marvel at, a Saturn V moon rocket 60 times taller than a human.
So it is with the small: that first memorable glimpse of a piece of the natural world under an optical microscope. The details that appear at a hundred times magnification are still part of a structure we can discern. As we descend through the powers of magnification revealed by electron microscopes, we enter quite different worlds, alien worlds somehow removed from our everyday experience. And so it is with the very small in computing. This is a pity; it's one reason we no longer marvel at the modern equivalents of the medieval cathedral, the Pentium chip or the latest hard drive. We know at an intellectual level that they're of a certain size and complexity, but they seem hard to admire. Perhaps it's also because they're now ubiquitous, manufactured cheaply and reliably.
Every Little Bit Helps
We can already see computation being effected on the smallest scales Feynman envisaged. Researchers have demonstrated primitive information processing using single, or very few, molecules, atoms, and electrons. The physics of the very small is mind-bending stuff. In the late '90s, scientists were already showing that single molecules could conduct electricity. Work has been ongoing to produce electronic devices in which a single chemical molecule replaces a transistor and its associated wiring. This would lead to processors millions of times smaller, faster, and more energy efficient than current devices. So-called carbon-based computing uses, among other materials, carbon nanotubes to build transistors just 1.2 nanometers (1.2 billionths of a meter) in diameter, hundreds of times smaller than our current smallest mainstream transistor components.
The semiconductor industry's own assessment (G. Bourianoff, "The Future of Nanocomputing," Computer, Aug. 2003) assumes that by 2016 we'll have CMOS (complementary metal-oxide semiconductor) technology very close to its fundamental limits. This will be a world in which core CMOS components are 22 nm across, channel lengths are 9 nm, and a mere 50 electrons are used per switching event.
The very small ultimately takes us into realms outside classical physics altogether. Devices working at the quantum level use single electrons as the fundamental encoding of binary information states. Quantum dots are nanoscopic devices that can hold a well-defined number of electrons. Such devices are currently 30 nm across and will likely become even smaller. Current excitement surrounds the use of a number of these dots arranged to form quantum-dot cellular automata. Last year, Georgia Tech researchers demonstrated how we could use quantum devices' light-emitting properties to implement nanoscale photonic devices.
Of course, one really interesting possibility of the very, very small is exploiting the new information-processing algorithms that become available once we move into a world where superposition lets quantum registers record all possible binary states simultaneously. Attempts to build architectures at this level are active but extremely challenging.
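The "all possible binary states simultaneously" idea can be made concrete with a minimal sketch (ordinary Python, not any real quantum SDK): an n-qubit register is a vector of 2^n complex amplitudes, and putting every qubit into an equal superposition gives every one of the 2^n classical bit strings the same amplitude at once:

```python
import itertools
import math

# Minimal sketch of an n-qubit register: 2**n amplitudes, one per
# classical bit string. A uniform superposition (as produced by a
# Hadamard gate on each qubit) assigns each string equal amplitude.
n = 3
amp = 1 / math.sqrt(2 ** n)  # uniform amplitude over 2**n basis states
state = {bits: amp for bits in itertools.product("01", repeat=n)}

for bits, a in state.items():
    print("".join(bits), f"amplitude={a:.4f}", f"probability={a * a:.4f}")
```

The register "holds" all eight 3-bit values at once, and the squared amplitudes sum to 1; the difficulty the column alludes to is keeping such a state coherent in real hardware.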
The challenges at the very small invoke those other fundamentals of physics: power and heat. Some researchers regard these as the critical challenges to overcome. As we miniaturize, dissipating heat becomes more and more difficult, as anyone who has had a hot Pentium processor humming away on his or her lap can attest. Ironically, cooling the chip exacerbates the problem of powering it! A 100-watt chip cooled to a cryogenic 4 kelvins, just above absolute zero, might be great for conductivity but needs 7 kW to power its refrigeration.
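The 7 kW figure follows from the Carnot limit on refrigeration: an ideal refrigerator pumping heat from a cold chip to a warm room has a coefficient of performance of only T_cold / (T_hot - T_cold). A quick check, assuming a 300 K ambient temperature (the column doesn't state one):

```python
# Carnot limit on refrigerating a 100 W chip held at 4 K.
# The 300 K ambient temperature is an assumption, not from the column.
T_COLD = 4.0        # chip temperature, kelvins
T_HOT = 300.0       # ambient temperature, kelvins (assumed)
HEAT_LOAD_W = 100.0 # heat the chip dissipates

cop = T_COLD / (T_HOT - T_COLD)      # ideal coefficient of performance
power_needed_w = HEAT_LOAD_W / cop   # work needed to pump the heat out

print(f"COP = {cop:.4f}, refrigeration power >= {power_needed_w / 1000:.1f} kW")
```

Even a perfect refrigerator needs about 7.4 kW of input power to remove 100 W at 4 K; real cryocoolers need far more.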
Each of these problems has a legion of scientists and engineers working to alleviate it or to revolutionize the approach altogether. For example, problems of power and heat take you into fields such as spintronics: building devices that exploit an electron's spin, rather than its charge, to perform their functions. Manipulating spin takes much, much less energy than manipulating charge.
From Computers to Machines
It isn't just the computational circuitry that's shrinking and giving us more and more power all the time. Micromachines can now be built that approach some of the more fanciful aspects of Feynman's talk, in which he imagined hordes of micromachines turning out copies of themselves at a microscopic level. Researchers have recently built a micro pinball table in which silicon cantilevers act as the flippers and 150-micron-diameter magnetic beads are the balls. The table measures 25 millimeters square. A world of microelectromechanical machines shows us devices with gears no bigger than a grain of pollen, capable of sorting individual blood cells. It has produced steam engines the size of dust mites. This technology promises a world that could realize Feynman's notion of putting the surgeon inside the patient.
Here we can begin to see the opportunities and challenges afforded by the very small. Miniaturization on the scale discussed here will disrupt traditional business models and consumer markets. For example, the media industry might have to come up with entirely new business models because it's unclear whether you can enforce digital rights when a matchbox will hold all the recorded music there is, or entire film and book archives. Profligacy is one of the major features of the very small; you can afford to manufacture so many devices. How do you retain any control over the content in circulation on all these devices?
Countless devices everywhere would exploit the intrinsic redundancy of large populations to ensure that enough are working to support any desired performance level. Such information-processing profligacy will mean that all sorts of security will be, depending on your viewpoint, compromised or attainable. Surveillance, monitoring, and tracking will become easier and cheaper, more pervasive and routine. Lethality at the very small scale will also be available. Ubiquitous nanotechnology will present unique problems of decommissioning and clearance.
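The redundancy argument can be made concrete with a binomial survival calculation. The device count, per-device reliability, and required quorum below are purely illustrative assumptions, not figures from the column:

```python
from math import comb

def prob_at_least(n, k, p):
    """Probability that at least k of n independent devices,
    each working with probability p, are functioning."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Illustrative numbers: 1,000 cheap devices, each only 90% reliable,
# for a task that needs any 850 of them to be up at once.
print(f"{prob_at_least(1000, 850, 0.9):.6f}")
```

With these assumed numbers the population as a whole is far more dependable than any single device: individual units fail 10% of the time, yet the chance of falling below the 850-device quorum is vanishingly small. That is the sense in which profligacy buys reliability.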
How AI Fits in
What of the AI of the very small? At an obvious level, AI will benefit from miniaturization just as it has in the past. The increase in computational power will continue to enable new methods to solve hard problems in fields such as vision and pattern recognition, and language and speech processing. It will enhance almost every existing intelligent systems application context. However, we'll have specific new challenges to confront when we intersect AI with a world in which thousands, possibly millions, of small machines or devices interact. Research areas will include protocols for interagent communication, methods for agent coordination, techniques to exploit teams of agents in distributed problem solving, and ways of dealing with aberrant or failing components.
Is this all simply a restatement of accounts we can find on our bestseller lists, for example, Michael Crichton's Prey, which caused such a storm when it appeared late in 2002? Much has been written about the accuracy of the predictions and assumptions in Crichton's book. But it has had an effect. It's noteworthy that in 2003, the UK's Royal Society and Royal Academy of Engineering commissioned a study on the reality and hype, and the perceptions and misperceptions, surrounding nanotechnology. There's clearly an important job of science communication to be done here: we need to develop an informed discussion about the ethical and social implications of exploring all that room Feynman opened up. As AI researchers, we must be very aware that our subject is intrinsically tied to the science of diminishing scale. Some recent reports make that link explicit (for example, see Greenpeace's commissioned report in this area; the sidebar lists the URL). We need to understand for ourselves where our technology is headed and endeavor to appreciate that journey's moral and social dimensions.
On a completely different note, very many congratulations to IEEE Intelligent Systems Editorial Board members Subbarao Kambhampati and Craig Knoblock, who have just been elected AAAI Fellows.