, University of Southampton
Pages: 2-3
The influence of biology and the life sciences on our discipline is a topic I've discussed in this column on several occasions. I make no apologies for this. I believe that progress often occurs at the boundaries between disciplines. Insights from one subject inform thinking in another. Throughout history we've built artifacts whose inspiration derives from biology and the natural world. The earliest rock art of Homo sapiens features extraordinarily vibrant images of animals. Organic forms also inspired the exquisite architecture of Antoni Gaudí's Sagrada Família in Barcelona.
Some might argue that science and technology haven't been influenced by nature to the same extent. The wheel isn't a solution found in nature, nor is tool use in any sense inspired by examples from the animal kingdom. Our computers implement a binary logic that's hard to discern in nature, using a general architecture that's remote from anything that has evolved.
However, it isn't hard to recognize the influence of biological processes and methods on our science and technologies. Norbert Wiener's cybernetics was very much influenced by feedback and control processes that he observed in biological systems. Warren McCulloch and Walter Pitts' characterization of the neuron owed much to their understanding of biology, mathematics, and electronics. In artificial intelligence and intelligent systems, we've also kept the faith with living systems even when not aiming to build exact simulations. AI and IS have been fundamentally interested in the phenomenology of living systems—perception, decision making, action, and learning.
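The McCulloch-Pitts abstraction is worth making concrete: it reduces the neuron to a threshold unit whose binary inputs are weighted, summed, and compared against a firing threshold. A minimal sketch follows; the particular weights and thresholds are chosen for the example, not taken from their 1943 paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate: both inputs must be active to reach a threshold of 2.
assert mcculloch_pitts([1, 1], [1, 1], 2) == 1
assert mcculloch_pitts([1, 0], [1, 1], 2) == 0

# An OR gate: a single active input suffices at threshold 1.
assert mcculloch_pitts([0, 1], [1, 1], 1) == 1
```

Simple as it is, this unit already computes basic logical functions, which is what made the model such a suggestive bridge between biology and electronics.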
So we might say that we've been doing nature-inspired computing all along. However, NIC's modern exponents assert that we should look much more closely at nature in its totality—to consider phylogeny (evolution of species), ontogeny (development of individual organisms), and epigenesis (lifetime learning). If we pay attention to these aspects, we see that biological information processing is very different from classical computing architectures. Biological systems' elementary components respond slowly compared to solid-state switches—but they implement much higher-level operations. A second striking feature, particularly during development, is biological systems' self-assembly growth, which lets them achieve high interconnection densities. A third fundamental point is that biological systems are implemented without being planned.
No one can doubt that nature has been doing a great job maintaining life and solving complex problems for millions of years. Perceptual systems have evolved to recognize and classify complex patterns, immune systems have emerged that can recognize and eliminate foreign bodies, and ant colonies display swarm intelligence and find optimal paths to food sources. In each case, we can identify specific pieces of work in AI and IS that have developed computational methods informed by these behaviors and processes.
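The ant-colony case rewards a closer look: ants deposit pheromone as they travel, shorter paths accumulate pheromone faster, and later ants preferentially follow stronger trails. A toy model of two competing paths (all parameters invented for illustration) shows this positive feedback converging on the shorter route:

```python
import random

# Toy pheromone model: each ant picks a path with probability proportional
# to its pheromone level, then deposits an amount inversely proportional to
# the path's length. Evaporation lets the colony "forget" weak trails.
def simulate_colony(lengths, n_ants=100, n_rounds=50, evaporation=0.5, seed=0):
    rng = random.Random(seed)
    pheromone = [1.0] * len(lengths)
    for _ in range(n_rounds):
        total = sum(pheromone)
        deposits = [0.0] * len(lengths)
        for _ in range(n_ants):
            r = rng.uniform(0, total)       # roulette-wheel selection
            cum = 0.0
            for i, level in enumerate(pheromone):
                cum += level
                if r <= cum:
                    deposits[i] += 1.0 / lengths[i]
                    break
        pheromone = [(1 - evaporation) * p + d
                     for p, d in zip(pheromone, deposits)]
    return pheromone

levels = simulate_colony([2.0, 5.0])  # path 0 is shorter
assert levels[0] > levels[1]          # the colony converges on the shorter path
```

This is the germ of the ant-colony optimization methods developed in AI: no ant knows the global picture, yet the colony as a whole finds the better path.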
But it doesn't end here—in modern life sciences, we're beginning to approach and understand biological systems at all levels of scale. Biological systems are being investigated in powers of 10: from molecules at the nanometer scale to synapses measured in micrometers, from a single neural cell to networks of neurons, and from cortical maps of a few millimeters to those extending centimeters across the brain (see "Brain Power" in the May/June 2003 issue for more discussion of the neural facts of life). The ultimate ambition is an understanding of the central nervous system. But of course this isn't the end of the matter if we want to understand the individual organism's place in an immediate family group, a larger social unit, the overall population, or the total ecology.
Why should we be interested in these various levels of scale within biology? One of the most compelling reasons is complexity. We see complexity all around us in the natural world—from the cytology and fine structure of cells to the organization of the nervous system, and from molecular genomics to coral reef ecologies. Biological systems cope with and glory in complexity—they seem to scale, to be robust and inherently adaptable at the system level. Our technological systems are manifestly becoming more and more complex—from systems on chips to Internet router connectivity, and from software module interactions to new operating systems. So, can we exploit or recruit insights from biology to help understand the complex computational systems that we've produced?
In modern biological research, networks are ubiquitous at all levels of scale. A gene network is defined by connections between specific genes, whose outputs are influenced by proteins (expressed by other genes), metabolites, and small signaling molecules. Other networks are layered on top of genes, such as protein interaction networks, signaling networks, and metabolic networks, which themselves underpin arrays of interacting networks that link cells together. An organism's physiology requires the linking of many gene, protein, metabolic, and cellular networks, with feedback interactions between them.
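One common abstraction makes this concrete: a gene network can be modeled as a Boolean network, in which each gene is on or off and its next state is a logical function of the genes that regulate it. The genes and rules below are invented for illustration, not real biology:

```python
# A toy gene-regulatory network as a Boolean network: each gene is on (1)
# or off (0); its next state is a logical function of its regulators.
def step(state):
    return {
        "a": 1 - state["c"],              # gene c represses a
        "b": state["a"],                  # a activates b
        "c": state["a"] and state["b"],   # a and b jointly activate c
    }

state = {"a": 1, "b": 0, "c": 0}
for _ in range(5):
    state = step(state)
# After five updates the network returns to its starting pattern: the
# feedback loop produces a repeating cycle of expression states.
assert state == {"a": 1, "b": 0, "c": 0}
```

Even this three-gene example exhibits the signature behavior of such networks: feedback drives the system into a stable cycle, a crude analogue of a maintained expression pattern.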
If we look at the modern Internet, we see an example of a complex, multilayered, and modular structure. Physical devices provide the foundation, and frame formats determine how packets course through the physical links. The IP protocol lies above the link layer and enables communication between disparate networks, creating the experience of a single virtual network. The TCP transport layer running over IP ensures that large streams of data are transported reliably and in the right order. Applications such as FTP and the Web are layered on top of TCP, and thus we obtain our multilayered interacting network. In this network, the dynamics of decentralized packet transfer respond to congestion-control mechanisms at the higher levels, which are themselves invoked by demands generated at the application layer. The result is a complex tiered network responding to feedback loops between the various tiers.
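The layering just described can be sketched in a few lines: each layer wraps the data handed down from the layer above in its own header, and the receiver peels the headers off in reverse order. The header fields here are placeholders for illustration, not faithful protocol formats:

```python
# Minimal sketch of protocol layering: application data is wrapped in a
# TCP segment, then an IP packet, then a link-layer frame; the receiver
# unwraps in reverse. All header fields are illustrative placeholders.
def encapsulate(app_data):
    segment = {"layer": "TCP", "seq": 1, "payload": app_data}
    packet = {"layer": "IP", "dst": "203.0.113.7", "payload": segment}
    frame = {"layer": "Ethernet", "crc": 0xFFFF, "payload": packet}
    return frame

def decapsulate(frame):
    packet = frame["payload"]
    segment = packet["payload"]
    return segment["payload"]

assert decapsulate(encapsulate("GET /index.html")) == "GET /index.html"
```

The design choice this illustrates is the key to the Internet's evolvability: each layer sees only the one below it, so any layer can be replaced without disturbing the others.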
But can biological networks inform an understanding of our cyberspace infrastructure? Perhaps only if we take evolution seriously. Any biological network's properties will reflect not only current function but also evolutionary history. An evolutionary approach is at the heart of modern attempts to understand the role and function of biological networks. An evolutionary analysis is providing insights into the robustness and adaptive character of such networks. The Internet has grown and evolved by several orders of magnitude in 20 years, with innovations relying on the flexibility that the layered design affords. Perhaps we need to pay more attention to the evolving character of our most important infrastructure.
Nature might provide the most direct inspiration of all by letting us build devices that effect direct information processing. Molecular electronics attempts to mimic solid-state components with molecular structures. Molecular wires, rectifiers, and transistors have all been constructed on very small scales. Biomolecules such as the protein bacteriorhodopsin have shown great promise for natural computing. This protein serves as a solar power source for a bacterium. Light at a certain wavelength can change how the protein behaves; this behavior is accompanied by a color change. This lets us, in principle, encode states and then read them off. This biomolecule and others like it might have serious applications in high-density memory. Researchers have already used a genetically engineered version to build devices that can perform optical character recognition.
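In the abstract, such a biomolecule behaves like an optically switched bit: light of one wavelength writes each state, and the state can be read back as a color. The wavelengths and colors in this sketch are illustrative placeholders, not measured values for bacteriorhodopsin:

```python
# Toy model of an optically switched bistable molecule. The wavelengths
# and colors are invented placeholders, not real spectroscopic data.
class PhotochromicBit:
    WRITE_0, WRITE_1 = 570, 640   # write wavelengths in nm (illustrative)
    COLORS = {0: "purple", 1: "yellow"}

    def __init__(self):
        self.state = 0

    def illuminate(self, wavelength_nm):
        # Light at a write wavelength flips the molecule into that state.
        if wavelength_nm == self.WRITE_1:
            self.state = 1
        elif wavelength_nm == self.WRITE_0:
            self.state = 0

    def read(self):
        # Reading reports the color associated with the current state.
        return self.COLORS[self.state]

bit = PhotochromicBit()
bit.illuminate(640)
assert bit.read() == "yellow"
```

Pack many such molecules into a volume and you have, in principle, the high-density optical memory the paragraph alludes to.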
The most famous biomolecule is DNA. A decade ago, Leonard Adleman showed that by exploiting the pattern recognition inherent in DNA hybridization, you could implement combinatorial calculations. Moreover, this type of process is inherently parallel. The apparent ease with which you can manage DNA hybridization and use it to encode problems spurred a rush of research exploring architectures for DNA-based computing. Despite a considerable amount of research in the area, it isn't clear how far such approaches can scale.
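The logic of Adleman's experiment can be mimicked in silico: generate candidate paths (in the test tube they self-assemble by hybridization; here we simply enumerate them), then filter out everything that isn't a path traversing real edges and visiting every vertex exactly once. The graph below is a made-up four-vertex example, not Adleman's original instance:

```python
import itertools

# In-silico sketch of the generate-and-filter idea behind DNA computing.
# Edges are treated as undirected; the graph is purely illustrative.
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}
vertices = ["A", "B", "C", "D"]

def connected(u, v):
    return (u, v) in edges or (v, u) in edges

def hamiltonian_paths(start, end):
    """Keep only candidate orderings that start/end correctly and use real edges."""
    paths = []
    for perm in itertools.permutations(vertices):
        if perm[0] != start or perm[-1] != end:
            continue
        if all(connected(u, v) for u, v in zip(perm, perm[1:])):
            paths.append(perm)
    return paths

assert len(hamiltonian_paths("A", "D")) == 2
```

The enumeration here is sequential, of course; Adleman's point was that hybridization performs the generation step massively in parallel, with each strand exploring one candidate at once. The scaling question is whether that parallelism survives as problem instances grow.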
Biology and nature can, of course, inspire at the systems level, too. For a number of years, an annual Workshop on Neuromorphic Engineering has taken place in Telluride, Colorado. Carver Mead coined the term neuromorphic engineering; for him, the goal is to design, simulate, and build artificial neural systems whose architecture and design principles are based on those of the biological nervous system. Biological nervous systems are embedded in bodies that have complex sensors, that exhibit exquisite neuromuscular control, and that are outstanding examples of biomechanical efficiency. The Telluride workshops have a tradition of hands-on construction of systems. In the 2003 workshop (www.ini.unizh.ch/telluride/current/index.html), held this past summer, subjects ranged from visually triggered motor reflexes, to noise-suppression routines for the auditory processing of speech, to central pattern generators controlling locomotion, to the integration of input across modalities such as vision and audition.
In addition, funding bodies around the world are increasingly interested in trying to inspire technical innovation in computer science, AI, and IS by studying biology. The US Department of Defense has invested heavily in what it calls biomimetic research. How much real biological inspiration has found its way into deployed systems is perhaps still a moot point. The next large international conference in this area is the Eighth International Conference on the Simulation of Adaptive Behavior, scheduled to take place in Los Angeles in July (www.isab.org/sab04). SAB 04 will showcase some of the most interesting work demonstrating how AI and IS benefit from and contribute to work in biology, ethology, and a host of life sciences.
In the context of this special issue on E-Science, it's interesting to note that April last year saw NIDISC03, held in Nice, France (http://web.umr.edu/~ercal/nidisc/nidisc03.html). What was this event's topic? Nature-inspired (Grid) distributed computing!