Issue No. 3, May/June 2005 (vol. 20)
pp: 5-9
Published by the IEEE Computer Society
ABSTRACT
Segway to the Future, Danna Voth. DARPA's Segway Human Transporter carries sensors, manipulators, computers, and software packages to help researchers explore robotics problems on a human scale. DARPA gave modified Segway HTs to a dozen research groups and challenged them to create solutions for operating autonomous mobile robots in dynamic, unstructured environments. Many researchers have taken their experiments even further.
Mimicking Bat Echolocation, Benjamin Alfonsi. University of Maryland scientists are developing an advanced, integrative theory of brain-behavior relations that they can apply to robotics. Their Microchipoptera project aims to create a flying bat robot that uses silicon analogs of bat neural circuits to mimic the nocturnal creature's unique echolocation system.




Segway to the Future
Danna Voth
Imagine trying to solve a problem and being given just half the answer. That's what happened to a dozen research groups when DARPA presented them with a modified version of the Segway Human Transporter. Running on batteries, the Segway HT dynamically balances and moves on two wheels controlled by microprocessors and unique gyroscopes. But instead of transporting humans, the modified version carries sensors, manipulators, computers, and software packages that help researchers explore robotics problems such as perception, cognition, and manipulation on a human scale. The robotic mobility platform (RMP) has a small footprint, a zero turning radius, the ability to move over diverse terrains, and the capacity to carry up to 100 pounds. These features let researchers experiment without having to bother with creating the locomotive part of their robotic systems.
DARPA funded Segway's RMP development as part of its mobile autonomous robot software project and challenged researchers to create solutions for operating autonomous mobile robots in dynamic, unstructured environments. That challenge proved so enticing that many researchers have taken their experiments with the platform beyond the DARPA project. John Morrell, Segway's director of systems engineering, says the company has sold another 25 machines to universities, research centers, and small companies for robotics research.
See me, feel me
Perception problems and obstacle avoidance challenge Oliver Brock in his robotics work at the University of Massachusetts Amherst. Brock wants to create a "robotic mule" that can follow a human and avoid obstacles using the Segway platform, vision cameras, and a laser range finder. "To create a robot that could perform robustly in unstructured and dynamic environments, we had to come up with a framework that generates behavior that can satisfy contradictory objectives," Brock says. Drawing on control theory techniques, the UMass robot uses a prioritized null-space composition of controllers, a method called the cascading filter. Starting from all the commands the robot could execute, the first filter keeps only those that satisfy the highest-priority objective; each subsequent filter further narrows the set according to the next specified behavior. Using this method "allows you to argue in a consistent and coherent manner about possible conflicts that could occur between behaviors," Brock says. For example, if one behavior says "go left" while another says "go right," the robot wouldn't average the two commands and go straight. Instead, it would reason about whether going left is more important than going right and then pick the better path.
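The idea can be illustrated with a minimal sketch. The objectives and the small discrete command set below are hypothetical; the actual UMass system composes controllers in null space rather than filtering a command list:

```python
# Minimal sketch of a prioritized "cascading filter" over candidate commands.
# Objective names and the discrete command set are illustrative, not the
# UMass controllers; the real system composes controllers in null space.

def cascading_filter(candidates, objectives):
    """Apply objectives in priority order, never emptying the command set."""
    feasible = list(candidates)
    for objective in objectives:                      # highest priority first
        narrowed = [c for c in feasible if objective(c)]
        if narrowed:                                   # drop an objective only
            feasible = narrowed                        # if it would leave no
        # command at all; otherwise keep narrowing
    return feasible

# Hypothetical candidate velocity commands (linear m/s, angular rad/s)
candidates = [(v, w) for v in (0.0, 0.3, 0.6) for w in (-0.5, 0.0, 0.5)]

avoid_obstacle = lambda c: not (c[0] > 0.3 and c[1] == 0.0)  # don't rush straight ahead
follow_human   = lambda c: c[0] > 0.0                        # keep moving toward the person

print(cascading_filter(candidates, [avoid_obstacle, follow_human]))
```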
Another obstacle avoidance approach using the RMP combines a unique navigation system with a new laser range finder. The University of Michigan developed the Fuzzy Logic Expert Navigation system for NASA's 2009 Mars Rover and adapted it to the Segway platform. FLEXnav tells the robot where it is without always using GPS, so it works indoors or outdoors under conditions where GPS won't function. FLEXnav determines the robot's position by measuring relative displacement from a known starting point, much as an odometer estimates distance from wheel rotations or pace sizes. Odometry methods accumulate errors over time, however, and require occasional repositioning data from gyros and GPS.
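The relative-displacement idea behind odometry can be sketched as follows, assuming a generic differential-drive model with two wheel encoders; this is not FLEXnav's fuzzy-logic sensor fusion, only the dead-reckoning step it builds on:

```python
# A minimal dead-reckoning sketch: estimate pose by accumulating relative
# displacement from a known start, as odometry-based methods do. This is a
# generic differential-drive model, not FLEXnav itself.
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Update (x, y, heading) from incremental wheel travel in meters."""
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # forward travel of the midpoint
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)                 # known starting point
for dl, dr in [(0.10, 0.10), (0.10, 0.12), (0.11, 0.09)]:
    pose = integrate_odometry(pose, dl, dr, wheel_base=0.54)
print(pose)  # small encoder errors accumulate, hence occasional GPS/gyro fixes
```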
The Segway's unique balancing technology presents an interesting problem for position estimation, explains Johann Borenstein, a research professor at the University of Michigan Advanced Technologies Lab. "It tilts forward and backward without relationship to the terrain," he says, so the lab developed a special set of sensors that measure the Segway's tilt relative to the terrain. Because the navigation problem involves avoiding obstacles, the lab also developed a reflexive obstacle avoidance system. The system first used a SICK laser, which collected obstacle data and projected it onto a 2.5-dimensional grid, a checkerboard-like representation of the environment. Each cell represents 10 × 10 cm of the real world and holds obstacle height information. Using this 2.5D information about the environment, an algorithm computes the best direction for the robot to move. The lab now has a new sensor called the Swiss Ranger, a prototype that measures ranges to objects. "We are very excited about this sensor," Borenstein says. "It has a vastly better potential."
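A rough sketch of the height-grid idea follows; the cell size matches the article's 10 × 10 cm figure, but the heading-scoring rule and all numbers are assumptions for illustration, not the Michigan implementation:

```python
# A minimal sketch of a 2.5D height grid: each 10 x 10 cm cell stores the
# tallest obstacle seen there, and the robot steers toward the clearest
# heading. The scoring rule is illustrative only.
import math
from collections import defaultdict

CELL = 0.10  # meters per grid cell

def update_grid(grid, points):
    """points: (x, y, z) obstacle hits in the robot frame, in meters."""
    for x, y, z in points:
        cell = (int(x // CELL), int(y // CELL))
        grid[cell] = max(grid[cell], z)      # keep the tallest return per cell
    return grid

def best_heading(grid, headings, max_height=0.15, lookahead=2.0):
    """Pick the candidate heading whose corridor holds the lowest obstacles."""
    def corridor_cost(theta):
        radii = [i * CELL for i in range(1, int(lookahead / CELL))]
        cells = [(int(r * math.cos(theta) // CELL), int(r * math.sin(theta) // CELL))
                 for r in radii]
        return sum(grid[c] for c in cells if grid[c] > max_height)
    return min(headings, key=corridor_cost)

grid = update_grid(defaultdict(float), [(1.0, 0.0, 0.4), (1.2, 0.1, 0.5)])
print(best_heading(grid, [-0.5, -0.25, 0.0, 0.25, 0.5]))  # radians; avoids the blocked center
```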
Plays well with others?
Building a team of one RMP-based robot and one human on a Segway HT, Manuela Veloso hopes her robot will close the perception-cognition-action loop. Both Veloso at Carnegie Mellon University and Jeffrey Krichmar at the Neurosciences Institute are creating RMP-based soccer-playing robots scheduled to face off at the Robocup US Open on 8–10 May 2005. Veloso wants to find out if humans and robots can collaborate on specific tasks. Her robot follows an odometry model, which predicts the effects of its actions, checks its actual state, and updates its beliefs and predictions about further actions. "We are trying to have the robot combine the assessment of the world with the decision of what to do in the current situation," Veloso says.
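The predict-observe-update cycle Veloso describes can be sketched in one dimension; the blending constant and the distance-to-ball state below are illustrative assumptions, not her model:

```python
# A minimal predict/observe/update loop of the kind the article describes:
# the robot predicts the effect of an action, compares it with what it
# senses, and revises its belief. A 1-D blended estimate stands in for the
# full odometry model, which this sketch does not reproduce.

def step(belief, action, observation, trust_in_sensing=0.3):
    predicted = belief + action                  # expected effect of the action
    error = observation - predicted              # how far reality diverged
    return predicted + trust_in_sensing * error  # pull the belief toward sensing

belief = 5.0                                     # e.g., estimated distance to the ball (m)
for action, observed in [(-0.5, 4.6), (-0.5, 4.0), (-0.5, 3.6)]:
    belief = step(belief, action, observed)
    print(round(belief, 2))
```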
Krichmar wants to discover how the human brain parses multiple objects in a scene and what happens in the brain when learning occurs. His robot employs a computer brain called Darwin VIII, which has a detailed model of the visual cortex and the motor capabilities to play soccer. Darwin VIII theoretically solves the visual binding problem, which concerns how the brain connects visual features such as shape, color, and object motion, processed in different parts of the brain, so that it can understand their relationships to each other. Darwin VIII solves the problem, Krichmar says, "through synchronized neural activity among widely dispersed neural areas." Darwin VIII gains its soccer skills through reinforcement-learning algorithms. For instance, if the ball is valuable and the robot moves closer to the ball, the achieved increase in value reinforces that movement. The Darwin brain activates synaptic connections between neurons that are potentiated by this increase in value. Conversely, if the robot moves farther from the ball, the connections between the neurons active during that moment are depressed. "We try to be as basal to the neural biology as we can, and then we put our devices in a situation where they have to explore their environment, and we watch," Krichmar says. "So while we watch we can get a clue [about] how real brains work."
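A toy value-modulated update conveys the flavor of what Krichmar describes, though the real Darwin VIII brain is a large-scale neural model; the variables and learning rate here are assumptions for illustration:

```python
# A toy value-modulated Hebbian update in the spirit of the Darwin VIII
# description: connections between co-active neurons are strengthened when
# value rises (the ball gets closer) and weakened when it falls.

def update_weight(weight, pre_active, post_active, delta_value, rate=0.1):
    """Potentiate or depress a synapse based on coactivity and change in value."""
    if pre_active and post_active:
        weight += rate * delta_value     # positive delta_value potentiates,
    return max(0.0, weight)              # negative delta_value depresses

w = 0.5
w = update_weight(w, True, True, delta_value=+0.2)   # moved closer to the ball
w = update_weight(w, True, True, delta_value=-0.3)   # moved away from the ball
print(round(w, 2))
```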


A robot with a computer brain called Darwin VIII contemplates the value of a soccer ball in relation to its distance from it (photo courtesy of Jeffrey Krichmar).

Give me a hand
Manipulation problems also engage researchers. The Massachusetts Institute of Technology's Eduardo Torres-Jara says that "manipulation is never useful if you can only do it in a fixed space, so we want to have a mobile platform that can adjust the hand or arm to take objects from one place to another." MIT developed an arm called Cardea, which was attached to the RMP and could interact with objects without breaking them. MIT's next generation of arms has hands that are flexible and sensitive enough to fold around and grab objects. Combining the grasping technology with mobile capabilities lets Torres-Jara explore complex manipulation problems.
Working in behavior-based manipulation, Torres-Jara uses reactive behavior-based architectures with sensory data that build on previous behaviors. The method works better in an unstructured environment, he says, where planning methods require too much computation to map out every possibility and are too slow to react to the dynamic situations that occur when the robot is moving. "To do more complicated things, we try to inhibit the action of several sensors over the actuators," Torres-Jara says. One model he uses to do that is the subsumption architecture that MIT's Rodney Brooks devised. A subsumption architecture combines controllers so that under some conditions the output of one controller (such as driving) is inhibited while another controller (such as the avoiding-object controller) has priority. So, when a robot nears an obstacle, it stops moving before colliding with the object. Torres-Jara has created Obrero, an extremely sensitive robotic hand with force control and tactile sensing. He wants to put it onto the Segway platform to attempt such tasks as approaching a table and identifying objects on the table and then grabbing one and moving it to a different position. "We'd like to have platforms that can work in human environments so we can move robots out of labs and into houses so they can actually interact with humans," Torres-Jara says.
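A minimal subsumption-style sketch shows how a higher-priority layer can inhibit a lower one; the thresholds, commands, and layer names are illustrative, not Brooks's original architecture or Torres-Jara's controllers:

```python
# Minimal subsumption-style arbitration: layers are ordered by priority, and
# the avoid-obstacle layer overrides driving when something is close.

def drive_forward(_sensors):
    return "drive 0.4 m/s"

def avoid_obstacle(sensors):
    if sensors["range_ahead_m"] < 0.5:           # obstacle close: take over
        return "stop"
    return None                                   # otherwise stay silent

def subsumption(sensors, layers):
    """Layers ordered highest priority first; first non-None output wins."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command
    return "idle"

print(subsumption({"range_ahead_m": 0.3}, [avoid_obstacle, drive_forward]))  # -> stop
print(subsumption({"range_ahead_m": 2.0}, [avoid_obstacle, drive_forward]))  # -> drive 0.4 m/s
```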
Scientists are also combining some of the RMP-based robotics research to create robots that can aid humans. Myron Diftler, NASA's Robonaut deputy project manager, says, "We're looking for ways to reduce crew workloads. We've developed robots that have basically body parts that are very similar to humans, roughly the same size as a space-suited astronaut, with the level of dexterity approaching that of an astronaut wearing a pressurized glove." Robonaut B, NASA's second humanoid robot model, uses various lower bodies including the Segway RMP and has applied research from other institutions to gain grasping control, force control, short-term memory, and enhanced vision. The robot employs context-specific grasping primitives, cognitive models, template matching, and a superposition of behaviors developed through sensory motor learning when a human teaches it to do a specific task.
NASA demonstrated Robonaut's capabilities last spring at the DARPATech conference, when Robonaut (attached to the RMP) completed a series of tasks, beginning with scanning a room to locate human heads and acquire one as a goal. Once it processed that information, Robonaut moved toward the human, avoiding obstacles along the way, and stopped in front of the human. Then Robonaut scanned for a tool that the human held in his hand, grasped the tool, and moved it to a stowed position. After reacquiring the human's head as a goal, Robonaut followed the human to a work site. The mobility demonstration provided NASA with some insights into potential difficulties. When Robonaut encounters mirrors or other reflective surfaces, it sometimes confuses the human it has acquired as a target with that person's reflection. Similar confusion can occur when other humans step into the path of its primary target.
Another robot built on the RMP will follow soldiers as an aid, keeping pace with them in the field and recognizing arm and hand gestures as soldiers instruct it to perform specific tasks. Applied Perception, working with the Army's Future Combat Systems program, is creating a proof-of-concept mobile robot that can be embedded in a platoon and interact naturally with individual soldiers. Todd Jochem, president of Applied Perception, is using mapping and person-following technology developed in partnership with Gaurav Sukhatme at the University of Southern California. Jochem says that the prototype will use computer vision to track individual soldiers, and active sensors such as laser scanners to track a person at longer ranges or in conditions where passive camera sensors won't work. The robot will follow the soldier, map the environment, and provide feedback to the soldier through voice interfaces. The robot's first demonstration is scheduled for this summer. Jochem sees other uses for the robot, too. "We're interested in how we can get these things into general civilian use," he says, "employing this platform as a tool to interact with humans, such as an intelligent walker for the elderly."
Given a chance to work on robotics problems without spending time creating the mobility facet, researchers are discovering new questions as well as new answers. Many problems engaging scientists are based in biology, such as brain functions; visual, sound, and touch sensing; or human body parts such as arms and hands. However, mobile robotics must deal with problems that humans haven't mastered in the real world either, such as getting along. Complex decision-making solutions must help the mobile robot move into contact with objects while protecting itself and the objects from harm. Robots must have capacities for changing priorities mid-task when the environment changes, and robots must be able to distinguish similarities in objects and humans. As Morrell says, "Probably to some people, going out into the real world is just increasing the complexity without having mastered the simpler environments of being indoors in a lab. At the same time, it is a good sanity check on an awful lot of things."
Mimicking Bat Echolocation
Benjamin Alfonsi
Scientists at the University of Maryland are working to develop an advanced, integrative theory of brain-behavior relations that they can apply to robotics.
Their Microchipoptera project ( www.isr.umd.edu/Labs/CSSL/horiuchilab/horiuchilab.html) aims to create a flying bat robot that uses silicon analogs of bat neural circuits to mimic the nocturnal creature's unique echolocation system.


The Microchipoptera project's narrowband sonar system. Operating at a frequency of 40 kHz, the system can track moving targets in real time. Its fixed microphones produce a difference in echo amplitude with azimuthal direction.

Research approach
Echolocation happens when a creature emits a sound and then listens for its echo to determine direction and to recognize different locations and objects. For example, certain kinds of bats (some of the suborder Megachiroptera and almost all of the suborder Microchiroptera), dolphins, porpoises, and a few species of cave-dwelling birds use echolocation.
By comparing a bat's brain with that of a nonecholocating animal, you can gain insight into what's relevant to the bat, what's relevant to the other animal, and what's of interest to both, says Timothy K. Horiuchi, associate professor of electrical engineering and Microchipoptera's research director.
"This is a neural-modeling effort," explains Horiuchi. "[We are] trying to connect what we are learning from neurophysiologic studies of individual neurons and what we are learning about bat behavior into a computational framework that describes how relevant information is extracted from the storm of incoming sensory information."
Much of the research focuses on one of electronic engineering's central challenges—asynchronous signal processing. Like countless devices from cell phones to submarine sonar, bat echolocation involves the processing of analog acoustic data (sound waves) into electrical signals. In the case of bats, this happens through the neurons that make up the bat's brain and central nervous system.
"It appears that bats have different neurons that are tuned for different, specific ranges," Horiuchi says. "Whichever neuron responds tells the bat what the range is."
According to Horiuchi, this type of range representation is relatively simple to construct, and scientists can readily use it to trigger range-specific behaviors such as insect capture or collision avoidance. By contrast, the bat's azimuthal-navigation ability is centered (literally) on the cochlea, the inner-ear structure that bats depend on for echolocation.
"The cochlea decomposes sound spectra into parallel streams of electrical pulses (or 'action potentials')," explains Horiuchi. "This appears to be implemented by neurons in the auditory brainstem of many animals.
"When the input to a neuron exceeds a threshold, the neuron fires a pulse, so by different input weightings between the left and right ears, a population of neurons can be made to respond differentially to different echo angles."
Implementation and testing
In the project's lab, analog and asynchronous digital VLSI systems attempt to mimic neural circuits. Neural circuits typically exhibit a multilayered information flow, with analog computations occurring between neurons in a layer followed by digital transmission of the resulting pulses to the next layer, back to itself (in feedback), or to earlier stages.
The team simulates this process on a chip by building arrays of neurons that subtract logarithmically encoded input intensities.
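A toy version of that computation might look like the following, where left and right intensities are log-encoded, a neuron subtracts its weighted inputs, and it fires past a threshold; the weights and thresholds are assumptions for illustration, not values from the Maryland chips:

```python
# Toy azimuth-sensitive neurons: subtract log-encoded left/right intensities
# (a ratio in linear terms) and fire when the weighted difference crosses a
# threshold. Different left/right weightings make different neurons respond
# to different echo angles.
import math

def azimuth_neuron(left_level, right_level, w_left, w_right, threshold):
    """Fire (True) when the weighted log-intensity difference exceeds threshold."""
    drive = w_left * math.log(left_level) - w_right * math.log(right_level)
    return drive > threshold

echo_left, echo_right = 0.8, 0.3          # louder on the left: source is to the left
population = [(1.0, 1.0, 0.5), (1.0, 1.0, 1.2), (1.5, 0.5, 0.5)]  # (w_left, w_right, threshold)
print([azimuth_neuron(echo_left, echo_right, *p) for p in population])  # -> [True, False, False]
```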
"Generally this means that we're taking in analog data (sound waveforms from microphones) and mimicking the electrical processes found in neural circuits," Horiuchi says.
"In bats, neurons that respond to objects at a specific range are found in the inferior colliculus and respond to different ranges based on the internal dynamics of the individual neuron. When the ultrasonic vocalization is emitted, the cells are suppressed, and if the timing of the suppression ends right when an echo arrives, the neuron fires a pulse. On a chip, this is implemented by an array of neuron circuits that have slightly different internal dynamics."
From there, it becomes a matter of precision-manufacturing the microchips to exacting specifications. The University of Maryland team uses the MOSIS fabrication facility ( www.mosis.edu). When designed correctly, the chips can be exceptionally low power, with power consumption in the 1 µW to 1 mW range, depending on the resolution and system in question.
Of course, the Microchipoptera project has its own unique challenges.
"We have encountered some difficulty recording neurons while the bat is in flight, performing insect capture, and avoiding obstacles," says Horiuchi. "Although a tremendous amount of neurophysiological data has been collected from the bat while it is held stationary, the bat typically does not echolocate as it would when it is capturing insects or flying normally."
Still, Horiuchi says his team is pleased with the progress they've made in the past five years. "We've built several hybrid analog-digital VLSI chips that contain neural models of echo range detection."
Applications and implications
Possible future applications of bat echolocation research include prosthetic devices for the visually impaired (although finding a nonintrusive way of presenting information to the user is a significant challenge) and improved aids for the hearing impaired.
The most likely beneficiary of bat echolocation research, however, is robotics. Because bats routinely navigate and hunt prey in environments such as forests, they provide an enviable model for echolocation in robots, which often must navigate unpredictable environments.
"There is still much to be learned before manmade systems reach that same performance level," according to Jonas Reijniers, a principle scientist with the Circe (Chiroptera-Inspired Robotic Cephaloid) project, which will finalize construction of a bat head prototype later this year (see www.circe-project.org/index.htm).
Horiuchi believes bat echolocation research additionally illustrates the important interplay between biological systems and intelligent systems. "The concept of intelligence is not a singular thing, and the study of biological systems points more and more to smart creature design and smart strategies and the ability to learn from experience," he says.
Reijniers agrees, stressing the importance of viewing intelligent systems within the context of their physical embodiments and biological environments. "An organism's morphology is an essential component of its information-processing machinery," he says.