Vol. 20, no. 5, September/October 2005, pp. 6-9
Published by the IEEE Computer Society
ABSTRACT
Machines Mimicking Humans (Danna Voth) and Dual Photography: New in Computer Vision? (Benjamin Alfonsi)




Machines Mimicking Humans
Danna Voth
The more we come to understand human biological systems, the more biomimesis, the imitation of nature, is proving to be a powerful inspiration to roboticists.
Lured by the promise of unlocking the secrets of highly efficient systems, some researchers are focused on replicating nature and creating machines that echo human robustness, multitasking capability, intelligence, and autonomy. Projects based on human systems include the development of a silicon retina, a tactile robot skin, and self-replicating machines.
Silicon sight
At the University of Pennsylvania, Kwabena Boahen directs research that explores how a human eye processes visual stimuli and uses that information to build neuromorphic microchips. The chips emulate the efficiency of the human retina's neuronal synapses as they send processed visual information to the brain. The human visual system can process large amounts of information without using a correspondingly large amount of energy by sending chemical signals between neurons, activating synapses between them, and sending electrical signals to the brain. "We are trying to process the incoming images in the same way that the retina would do it," Boahen says.
A typical computer system, which uses generic hardware with specialized software, differs from the biological system, which uses specialized "hardware" with specialized "software." The human visual system works by creating neural connections in the course of doing specific tasks, a process often described as "neurons that fire together wire together." This system depends on customized neural networks (the hardware) working in tandem with special chemical and electrical signals (the software).
Doctoral students in Boahen's lab recreated the natural system's data processing on two kinds of microchips. Kareem Zaghloul morphed all five layers of the retina in his Visio1 chip, which simulated the responses of the retina's four major types of ganglion cells. Ganglion cells receive signals from photoreceptors and then transmit pulses of electricity, known as spikes, along the optic nerve to the brain.
"Kareem's chip was designed by copying neural circuits, morphing them into silicon by replacing synapses with transistors," Boahen says. The transistors output a unique 13-bit address every time one of the ganglion cells spikes. A receiving chip decodes the addresses and recreates the spike at the right location in the silicon neuron.
Another student, Brian Taba, evolved the design philosophy from the circuit level to the developmental level. "Brian's chip was designed by copying developmental processes, morphing them into silicon using softwiring, which can be rerouted on the fly," Boahen says. Softwiring routes spikes instead of wires: each neuron is assigned an address, and a spike is delivered to a neuron by presenting that neuron's address to the chip. Conversely, when a neuron spikes, the chip outputs the neuron's address and uses it to access RAM, where the addresses of the neurons it connects to are stored. Taba exploited Zaghloul's chip's capability to output unique addresses. By substituting one address for another, Taba's chip could mimic the natural behavior of growth cones, the outreaching projections of neurons that connect to form synapses, firing together as they wire together.
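In rough pseudocode terms, the address-event scheme Boahen describes amounts to a table lookup: a spiking neuron emits its address, and a RAM table maps that address to the addresses of the neurons it drives. The sketch below is only an illustration of that idea, with invented addresses and function names, not the chips' actual logic.

```python
# Illustrative sketch of address-event "softwiring" (invented names and
# addresses, not the chips' actual logic): each silicon neuron has an address;
# when it spikes, the chip emits that address, and a RAM lookup table routes
# the event to the addresses of the neurons it is "wired" to.

routing_ram = {
    0x0A31: [0x0B02, 0x0B03],   # hypothetical neuron 0x0A31 drives two targets
    0x0B02: [0x0C10],
}

def deliver_spike(target_address):
    """Stand-in for presenting an address to the receiving chip, which
    recreates the spike at the corresponding silicon neuron."""
    print(f"spike delivered to neuron {target_address:#06x}")

def on_spike(source_address):
    """When a neuron spikes, look up its targets in RAM and route the event."""
    for target in routing_ram.get(source_address, []):
        deliver_spike(target)

on_spike(0x0A31)

# Rerouting "on the fly," as Boahen describes, amounts to swapping one stored
# address for another, with no physical rewiring:
routing_ram[0x0A31][1] = 0x0D20
on_spike(0x0A31)
```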
Now Boahen and his students are trying to create a silicon copy of the visual cortex's layers—a very complex task that he feels will become easier the more it is modeled on biological systems. "One copy of this chip, which will have a 120 × 160 array of neurons, will model each layer of cortex," Boahen says. The research has potential as a human prosthetic and in robotics applications in which a machine needs visual input to do a job, such as in space or a dangerous environment.
Robot skin
At the University of Illinois at Urbana-Champaign, Chang Liu is developing a tactile robot skin that can make four sensory determinations from a single touch. Embedded with a 4 × 4 array of sensors, a 1-in. square of polyimide less than 1 mm thick applies signal-processing algorithms to detect a material's hardness, roughness, temperature, and thermal conductivity. The skin then processes that information and makes a determination about the sensory input. In the lab, the skin has successfully identified five materials: rubber, plastic, wood, steel, and painted steel.


Researchers at the University of Illinois at Urbana-Champaign are working on a tactile robot skin with an embedded array of sensors, wiring, and collocated membranes.

"We want the sensor to tell information rather than sending a bunch of data," Liu says. "That's how biology does things in many cases—the peripheral neurons perform an amazingly wide variety of computations so that, by the time the signal gets to the brain, it's already highly processed."
The array produces 16 sensing nodes, called pixels. Each node communicates with the outside world through a collection of wires that lie on the substrate surface. Two collocated membranes in the substrate, along with the sensors, make the tactile measurements. The skin determines a material's hardness by measuring the deformation the contact produces in its two collocated membranes. Similarly, it records roughness from the protrusions and depressions revealed by the membranes' displacement. To measure temperature, it uses a temperature sensor. And to measure thermal conductivity, the skin turns on a tiny heater near the temperature sensor. Thermal conductivity helps indicate whether a material is wood or metal.
Liu uses a maximum-likelihood algorithm to make determinations about the data the skin senses. Because it relies on a lookup table, the algorithm requires advance knowledge of the array's response. And because building that response table takes time and memory, Liu wants to explore how readily the algorithm can be mapped onto an integrated circuit. Instead of digital computer signal processing, Liu says, "we want it done by analog integrated circuitry. That's how nature does things, and that's one of the best ways, if we can do it."
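As a rough illustration of the lookup-table, maximum-likelihood idea Liu describes, the sketch below compares a measured four-element feature vector (hardness, roughness, temperature, thermal conductivity) against tabulated responses for the five test materials and picks the best match. The reference values and the Gaussian noise model are placeholders, not measured data.

```python
# Hedged sketch of lookup-table classification in the spirit of the
# maximum-likelihood approach Liu describes. The reference values and the
# noise model are placeholders for illustration, not measured data.

# Hypothetical reference table: expected (hardness, roughness, temperature,
# thermal conductivity) response for each known material, in arbitrary units.
reference_table = {
    "rubber":        (0.2, 0.6, 22.0, 0.16),
    "plastic":       (0.5, 0.3, 22.0, 0.25),
    "wood":          (0.6, 0.7, 22.0, 0.15),
    "steel":         (0.9, 0.1, 22.0, 50.0),
    "painted steel": (0.9, 0.2, 22.0, 45.0),
}

def log_likelihood(measured, expected, sigma=1.0):
    """Gaussian log-likelihood of a measured feature vector given a material."""
    return -sum((m - e) ** 2 for m, e in zip(measured, expected)) / (2 * sigma ** 2)

def classify(measured):
    """Pick the material whose tabulated response best explains the measurement."""
    return max(reference_table,
               key=lambda material: log_likelihood(measured, reference_table[material]))

print(classify((0.85, 0.15, 22.5, 48.0)))  # -> "steel" under these placeholder values
```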
Liu cites significant signal-processing challenges in something so small and is working to overcome them. "Processing at surface level is a fairly sizable challenge because you want to put capable circuitry that can condition the signal and integrate four different data streams from each pixel," Liu says. "You want to put circuits right next to the sensor to minimize noise, but circuits and polymers, they don't mix that well."
The skin might find its way onto robotic arms and hands, helping robots perform tasks that use tactile senses, such as gripping slippery objects without breaking them and lining up ridges on mechanical parts—for example, threads on a screw. "Tactile sense is very important for robotics for exploration, for many applications of space activity, medical care, and health related practices," Liu says. "We'd like to make sensors with large areas, instead of 1 in. squared. We'd like to make them 1 ft. squared or 1 yd. squared sensor arrays."
Self-replicators
Biological systems' robustness depends on self-replication, and Hod Lipson, in his work at Cornell University, has designed robots that self-replicate. Lipson based his replicating robot on a single-cubical building block he calls a molecube. A molecube splits into two parts along a diagonal plane, letting one half swivel in increments of 120 degrees. Electromagnets are embedded on the faces, and each cube contains the program that enables its recreation through a sequence of swiveling and magnetic bonding commands. Provided with new material (other molecubes), the program in the original molecube engages and commences building a copy of itself by swiveling into position to attract, position, and then deposit the new material. Lipson has run the program on physical robots comprising three or four molecubes as well as on simulated 2D robots.
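To make the notion of a stored replication program concrete, here is a purely hypothetical sketch of such a command sequence and a toy executor; the command names and ordering are invented for illustration and are not Lipson's actual control code.

```python
# Purely hypothetical sketch of a stored self-replication program: a fixed
# sequence of swivel and electromagnet commands the original robot replays to
# attract, position, and deposit fresh molecubes. Command names and ordering
# are invented, not Lipson's actual control code.

replication_program = [
    ("swivel", 120),            # rotate one half about the diagonal plane
    ("magnet_on", "face_3"),    # attract a fresh molecube from the feed area
    ("swivel", 240),            # carry it over the growing copy
    ("magnet_off", "face_3"),   # deposit the cube and release it
]

def execute(program):
    """Toy executor; a real molecube would drive a motor and electromagnets."""
    for command, argument in program:
        print(f"{command}({argument})")

execute(replication_program)
```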


Cornell University's self-replicating robot is based on a single-cubical building block called a molecube. (Courtesy of Cornell University)

More interestingly, Lipson found that the molecubes can evolve their own program for self-replication. By giving the molecubes programs that mutate at random and placing them in a soup of other molecubes, Lipson discovered that the molecubes that evolved self-replicating programs proliferated. Those molecubes used the available resources most successfully until they gradually dominated the environment.
"Self-replication is its own reward in nature," Lipson says. Although lots of improvements are yet to be made (these robots can do nothing more than copy themselves), he sees a potential for deploying the self-replicating robots in space applications and hazardous environments where damaged robots will need to self-repair. If the robots could be designed with many more components and greater complexity, they could perform many useful tasks. Right now, the robots are dependent on their environment and must be fed appropriate material to copy themselves. Lipson hopes they can become capable of searching for, recognizing, and decomposing resources to rebuild themselves.
Much robotics research seeks ways to create machines that can go where humans can't go and do things humans can't do safely. It seems the way to such discoveries increasingly means studying human biological systems for inspiration and tools for creating robustness and efficiency. The challenges biological systems present offer us new ways to look at organization and communication—as well as the very definitions of human and mechanical.
Dual Photography: New in Computer Vision?
Benjamin Alfonsi
At SIGGRAPH 2005, the 32nd International Conference on Computer Graphics & Interactive Techniques, a group of Stanford University scientists presented a paper on dual photography, a technology that makes it possible to "see" behind objects. The research is at the forefront of a new field being referred to as computational photography.
Dual photography appears deceptively simple—a digital projector and digital camera work in unison to provide information about a subject obstructed from view by studying patterns of light reflecting on the subject. However, the project raises broad questions about the ever-growing field of computer vision: What is it? What is it not? Does computational photography fall within its parameters?
Pick a card, any card
To illustrate how dual photography functions, you need only consider its well-documented hook. The subject is a playing card, facing away from the camera, and the dual photography system can "see" what card it is.
"The 'card trick' makes it seem as if we are seeing through the card, but of course we are not," explains Mark Levoy, associate professor of computer science and electrical engineering at Stanford and member of the dual photography research team. "We are only seeing the front of the card by its reflection in another object."
According to Steven Marschner, an assistant professor of computer science at Cornell, the paper demonstrates that, similar to the way in which a surface doesn't have to be directly illuminated by a light source to be seen, a surface doesn't have to be directly observed to be measured.
"By arranging structured illumination patterns, this indirect observation can form images," says Marschner, who first conceived the dual photography project three years ago while completing a post-doctoral fellowship at Stanford. "The benefit is that we can obtain an image of anything that we can illuminate with a raster scan; where the camera or other light sensor is located is less important."
Inside the matrix
At the project's core is what researchers term a transport matrix algorithm, which arranges illumination patterns by determining what portions of the projector must be illuminated for the image to be made clear. Constructing this algorithm presented the project's biggest challenge, says Hendrik Lensch, a visiting assistant professor at Stanford's computer graphics group. Because today's average digital projector has about one million pixels, sampling each one individually would be too time consuming. The algorithm provides a solution, albeit a partial one.
Using a hierarchical selection method, the algorithm helps determine which areas (or blocks) to illuminate in parallel to identify an unknown object. The system measures the exact contribution of each projector pixel to each camera pixel (that is, the amount of light the scene reflects when a particular projector pixel is turned on) and stores these reflectance, or transport, values in the matrix.
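In rough linear-algebra terms, the transport matrix maps a projector pattern to the resulting camera image, and, by light-transport reciprocity, its transpose produces the "dual" image as seen from the projector's point of view. The sketch below uses a tiny random matrix as a stand-in for measured transport values; the sizes and numbers are placeholders.

```python
import numpy as np

# Toy stand-in for the transport matrix: T[i, j] is the light reaching camera
# pixel i when only projector pixel j is lit. Sizes and values are placeholders;
# the real matrix is measured, with projectors of roughly a million pixels.
n_camera_pixels, n_projector_pixels = 9, 16
T = np.random.rand(n_camera_pixels, n_projector_pixels) * 0.01

projector_pattern = np.ones(n_projector_pixels)   # fully lit projector
camera_image = T @ projector_pattern              # what the camera records

virtual_light = np.ones(n_camera_pixels)          # light "emitted" by the camera
dual_image = T.T @ virtual_light                  # the scene from the projector's viewpoint
```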
"We start by illuminating with all projector pixels turned on and then subdivide the screen space into four blocks, which are then further subdivided until we reach the pixel level," explains Lensch. " Whenever we see that the contribution [camera images] of two blocks are completely disjointed, we know that these two blocks do not interfere and all subsequent subdivisions of these blocks can be performed in parallel."
"I find their technique interesting in the speed of the algorithm, but computation time will definitely be an issue to consider," says Nikos Papanikolopoulos, professor of computer science and engineering at the University of Minnesota.
Computer vision vs. computer graphics
"Dual photography could lead to imaging in ways people hadn't thought about before," says Pradeep Sen, a doctoral candidate at Stanford. Perhaps, but does dual photography, and computational photography in general, really qualify as computer vision?
"This technique is definitely not in the computer vision field," says John S. Zelek, associate professor of systems design engineering at the University of Waterloo. "It is perhaps at the crossroads of computer graphics, image processing, and photography."
According to Zelek, some in the computer vision field might even take offense to the characterization of this research as computer vision.
"Computer vision usually refers to obtaining understanding of the scene, i.e., what is actually out there in the real world," he explains. "Computer graphics is the making of pictures, and image processing is the manipulation of a picture to make it easier for a person to understand, such as medical pictures, for example."
Cornell's Marschner disagrees.
"I think it is fair to say this is computer vision research," Marschner says. "It has the potential for advancing computer vision because it provides a new kind of image capture technique that will be useful in many settings."
Dual photography's future
Although no prototypes of the system are immediately in the works, researchers say future applications would likely include feature film lighting and virtual computer games. "One might imagine, for example, a computer game that has real world objects onto which the virtual characters cast shadows," says Sen.
"This technique may have applications in computer vision if lighting is controlled and the created virtual camera viewpoints can be used to acquire 3D scene information," says Zelek. "But in most robotic applications, one cannot control the lighting."
Papanikolopoulos believes the AI community will view dual photography more as a possible tool than a groundbreaking theory.
"The idea [behind dual photography] is a very appealing one," Papanikolopoulos says. "But I am concerned about the engineering complexities in implementing it.