Pages: 4-9
A vision for robotics is for machines to do tasks that humans either can't do well or can't do safely, such as dangerous search-and-rescue work.
However, this vision will remain out of reach until robots can move in ways that currently are impossible or impractical. For example, to look for natural-disaster survivors, the machines would have to climb steep, uneven ground; navigate through debris; and fit into small openings. To provide such capabilities, researchers are working on robots that can move in new ways.
Better movement systems will let robots function in a wider variety of settings than is currently feasible and also become more resilient when, for example, they are bumped or pushed.
Researchers have struggled with such issues, explains Georgia Institute of Technology assistant professor Daniel Goldman.
Now, though, they are looking at embodied-intelligence techniques to enable the development of robots that carry out different movement patterns with less complexity.
Scientists are also interested in improving movement capabilities by better design of physical structures rather than via improved algorithms.
They hope these approaches will help them develop robots that can flexibly respond to environmental changes and initiate movements quickly enough to maintain balance under various conditions.
They also want to create machines that climb by swinging like an ape, burrow like a lizard, and run using components that mimic human muscles and tendons.
Several research projects are focusing on new types of robotic movement.
Robots have trouble flexibly shifting between different gaits or motion patterns on their own in response to changing environments, such as when they move from flat, hard ground to sloped, uneven terrain.
Researchers at the Bernstein Center for Computational Neuroscience Göttingen, Georg-August-Universität Göttingen, and the Max Planck Institute for Dynamics and Self-Organization are developing a new kind of robotic controller (www.nld.ds.mpg.de/research/projects/control-and-selforganization-for-autonomous-robots) that uses stronger links between sensors and neural networks to better adapt to changing conditions with less computational overhead.
In humans and other animals, different gaits and movement patterns are controlled in the nervous system by a type of neural network called a central pattern generator (CPG). Previous robotics research relied on a separate CPG for each type of desired gait. The robotic system monitored its cameras, accelerometers, and gyroscopes and then chose the best CPG for a given environment.
However, this approach doesn't adapt well to minor or sudden environmental variations.
The new research has led to a single adaptable CPG capable of flexibly generating multiple gaits as required by the surrounding terrain. The researchers have incorporated granular sensor data into their systems' neural networks, which process and coordinate the information so that the robot can adapt and move more smoothly.
The robots can then adopt either preprogrammed gaits or gaits they have determined are appropriate for various situations on the basis of machine learning from real-world experience.
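The single-CPG idea can be illustrated with a minimal sketch: a two-neuron network whose recurrent weights form a scaled rotation matrix oscillates on its own, and changing one frequency parameter shifts the rhythm, much as one adaptable CPG can shift between gaits. This is a generic textbook-style oscillator, not the Göttingen group's actual controller; all constants are chosen for illustration only.

```python
import math

def so2_step(state, phi, alpha=1.01):
    """One update of a two-neuron oscillator whose recurrent weights
    form a (slightly scaled) rotation matrix; phi sets the frequency."""
    o1, o2 = state
    a1 = alpha * (math.cos(phi) * o1 + math.sin(phi) * o2)
    a2 = alpha * (-math.sin(phi) * o1 + math.cos(phi) * o2)
    return (math.tanh(a1), math.tanh(a2))

def gait_signal(phi, steps=200):
    """Run the oscillator and return the first neuron's output,
    which could drive a leg joint; larger phi means a faster rhythm."""
    state = (0.2, 0.0)
    out = []
    for _ in range(steps):
        state = so2_step(state, phi)
        out.append(state[0])
    return out

slow = gait_signal(phi=0.1)   # e.g., a walking rhythm
fast = gait_signal(phi=0.4)   # e.g., a faster gait after a terrain change
```

The point of the sketch is that one small network covers a continuum of rhythms: instead of switching among separate CPG modules, the controller just retunes a parameter.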
The researchers plan to add support for incorporating data about environmental elements that the robot saw at one point but that are no longer visible. This would let a robot complete a large-scale maneuver even when there isn't ongoing sensor data for the entire route, explains Max Planck scientist Marc Timme.
He said many applications—including drone aircraft, highly interactive robots, and machines that help with exploration and disaster-victim searches—could benefit from this research.
Walking robots built using traditional actuators such as motors do well in controlled, predictable environments.
But in the real world, they typically have trouble maintaining balance because they don't respond quickly to unexpected forces such as a kick or push.
A key problem has been that the actuators don't have enough power to balance a large, bipedal robot that's wobbling. Also, they can't provide sufficient power quickly enough to maintain balance before the robot falls.
Researchers at the University of Tokyo's Johou Systems Kougaku Laboratory, led by Masayuki Inaba, have developed a new actuating system capable of high-speed, high-power movements (www.jsk.t.u-tokyo.ac.jp/research/system/urata.html).
They retrofitted a 53-kilogram (116.8-pound) Kawada Industries bipedal robot with actuators consisting of high-voltage, high-current, liquid-cooled motors capable of short, quick bursts of force.
To deliver the required short bursts of strong current, they use a powerful capacitor, rather than either batteries or electricity from an outlet whose cord could limit or interfere with a robot's movement.
The researchers also developed a sensor-based balance-control system capable of detecting if a robot starts to lose stability and identifying the best of 170 preprogrammed responses in just 1 millisecond.
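One plausible way to sketch such a lookup is nearest-neighbor matching: compare the sensed state against the trigger conditions of the stored responses and execute the closest match. The features, table entries, and distance metric below are invented for illustration; the article does not describe the Tokyo group's actual selection logic.

```python
def pick_response(sensed, responses):
    """Return the index of the preprogrammed response whose trigger
    state is closest (squared Euclidean distance) to the sensed state.
    'sensed' is a tuple like (tilt, tilt_rate, push_force) -- features
    chosen here purely for illustration."""
    best_i, best_d = 0, float("inf")
    for i, (trigger, _action) in enumerate(responses):
        d = sum((s - t) ** 2 for s, t in zip(sensed, trigger))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

# A tiny stand-in for a 170-entry response table.
table = [
    ((0.0, 0.0, 0.0), "stand"),
    ((0.3, 0.1, 5.0), "step_forward"),
    ((-0.3, -0.1, 5.0), "step_backward"),
]
choice = pick_response((0.28, 0.12, 4.5), table)  # nearest trigger: step_forward
```

Because the table is fixed and small, a linear scan like this can run well within a millisecond budget; a real controller would likely use precomputed indexing as well.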
Traditional research on climbing robots has focused on either their speed of ascent or the different ways they adhere to a wall.
University of Utah researchers, led by assistant professor of mechanical engineering William Provancher, are studying how robots could climb while using the least amount of energy.
By being energy efficient, the recursively named ROCR (ROCR is an Oscillating Climbing Robot; http://heml.eng.utah.edu/index.php/ClimbingRobots/ROCR) would extend the time that it could perform routine tasks such as surveillance, inspection, and maintenance of building exteriors, bridges, or other structures, says Provancher.
The ROCR climbs with 20 percent efficiency, meaning that 20 percent of the electrical energy it consumes is converted into movement. Other climbing robots are only 15 percent efficient.
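For a climbing robot, that efficiency figure follows from a simple energy ratio: useful work is the gravitational potential energy gained (mass × g × height) divided by the electrical energy consumed. The numbers below are illustrative only, not measured ROCR data.

```python
def climb_efficiency(mass_kg, height_m, electrical_j, g=9.81):
    """Fraction of consumed electrical energy converted into
    gravitational potential energy during a climb."""
    return (mass_kg * g * height_m) / electrical_j

# Illustrative numbers: a 0.54-kg robot climbing 1 m
# while drawing 26.5 J of electrical energy.
eff = climb_efficiency(0.54, 1.0, 26.5)  # about 0.20, i.e., 20 percent
```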
A motor causes the 0.54-kilogram (1.2-pound) robot's tail to move from side to side as its two appendages—which are covered with microspines—alternately attach themselves to a carpeted wall and let go, moving the machine in much the same way that a monkey uses its tail to swing through trees (see Figure 1).
Figure 1 University of Utah researchers are working on energy-efficient, battery-powered climbing robots that could perform tasks such as surveillance, inspection, and structure maintenance. A motor causes the robot's tail to move from side to side as its two microspine-covered appendages alternately attach themselves to a carpeted wall and let go. This moves the machine in much the same way as a monkey uses its tail to swing through trees. (Source: William Provancher, University of Utah; used with permission.)
The researchers explored a variety of tail and appendage movements, studying which were most effective in terms of working with gravity and momentum. They identified the most energy-efficient combinations for ROCR and for future climbing robots that might have different sizes, shapes, control systems, and gripping mechanisms.
Currently, ROCR climbs only flat surfaces. Provancher said future research will look at ways to climb around or over additional types of surfaces, as well as other gripping technologies such as magnets, suction cups, or electrostatic adhesives.
Robots often have trouble moving through granular material such as sand, gravel, or debris from a natural disaster because there haven't been good models for traveling through such complex, multifaceted environments, says Georgia Tech's Goldman.
His research team is working on better models (http://gtresearchnews.gatech.edu/sandfish-robot/), addressing issues such as how the material itself moves when an object passes through it.
Goldman says that scientists have done considerable work on models for robots moving through liquids. However, he notes, granular materials create more difficult problems because their surface is highly irregular, and a wider variety of chemical forces dictate how the grains adhere to one another.
Georgia Tech scientists have studied the ways that animals—particularly sandfish lizards, which can burrow into and swim through sand in the Sahara Desert—move through coarse substances. They are using this information to design a robot that moves horizontally by undulating its tail and vertically by varying the position of its head.
The scientists also evaluated the movement of different wedge-shaped blocks through a granular medium consisting of small plastic spheres. This could help identify the best shapes for robots that must travel through such material.
Traditional bipedal robots move by adjusting the angle of joints attached to rigid limbs. Humans use muscles, tendons, and bones.
MIT's Athlete project (www.isi.imi.i.u-tokyo.ac.jp/~niiyama/projects/proj_athlete_en.html) is studying animals' musculature to replicate its flexibility, which could let robots respond to shocks better than they can with traditional approaches.
Ryuma Niiyama, a postdoctoral fellow at MIT's Robot Locomotion Group, has developed a flexible, bipedal machine that mimics the way a human runs by using elements that act like the muscles and tendons in a person's leg, hip, and lower abdomen. Each robotic leg uses seven sets of air pistons, arranged like human muscles and tendons. The machine's foot is a flexible piece of bent metal.
The robot can run, jump half a meter (1.64 feet) before falling, and land softly after being dropped from a height of 1 meter (3.28 feet).
However, additional movement flexibility creates more ways a robot can fall.
So far, says Max Planck's Timme, most robots move in only a few ways and don't coordinate multiple motions well.
However, researchers plan to continue developing new models. They want to create robots that can move with greater flexibility and precision and machines that can run, burrow, and climb. This would enable their use in a greater range of activities.
The University of Utah's Provancher expects to see more work on miniaturization, which would allow the construction of smaller robots that could move effectively in more environments.
He also predicts the development of robots that could utilize data from their own and other sensors to guide their movements, as well as improvements in the computational approaches used for robotic control.
Future research will require close collaboration between physics and AI researchers to build robots that model the way that animals move.
In the short run, Max Planck's Timme says he expects to see more advanced autonomous robots such as drones, which eventually will lead to machines that interact with humans and each other in coordinated ways.
A University of Washington researcher has developed a way to use artificial intelligence, statistical theory, and other technologies to study the huge amounts of data associated with analyzing human genomes.
Assistant professor of biostatistics Daniela Witten has designed AI programs to do in days what used to take scientists years to accomplish.
Understanding human genomes could let scientists better understand how multiple genes work together or inhibit one another in the expression of cancer or other ailments and then develop personalized treatments, said Witten.
Witten is exploring how machine-learning algorithms could make sense of data available from gene-sequencing experiments. This would be important because, for example, a statistical study of a cancer cell's DNA could identify the DNA pairs responsible for certain characteristics, thereby making further analysis much more manageable.
Until a few years ago, though, researchers would have to spend years studying a single biomolecule—such as DNA, RNA, or a protein—because computer systems and algorithms were slow. They couldn't keep up with improved technology that can now measure billions of biomolecules and commonly generates 3 Gbytes of data per experiment.
Analyzing this information is particularly challenging because the number of variables is much greater than the number of samples analyzed, Witten explains. "You might be able to measure millions of biomolecules (variables) in a single experiment, but you'll never be able to enroll millions of patients (samples) in your study," she explains.
With current analytical approaches, researchers could draw inaccurate conclusions if they have only a few samples to work with.
Witten looked at ways to not only speed up the genome-study process but also to accurately analyze data when the number of variables exceeds the number of samples and to generate useful knowledge about what the information represents.
Her application uses statistical machine-learning techniques that focus on making predictions based on properties learned from a set of training data.
The technology speeds up the genome-analysis process and draws accurate conclusions from the resulting data by looking at the relationships and interaction patterns among multiple genes and proteins simultaneously, rather than just comparing two at a time. This provides useful information even when the number of variables is much larger than the number of samples.
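One standard way to keep estimates stable when variables outnumber samples is regularization. The sketch below uses ridge regression in its kernel (dual) form, which stays well-posed even at p >> n; it is a generic illustration of the problem setting on synthetic data, not Witten's actual method.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge regression via the kernel (dual) form: solve an n-by-n
    system even when the number of variables p far exceeds n."""
    n = X.shape[0]
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
    return X.T @ alpha          # p-dimensional coefficient vector

rng = np.random.default_rng(0)
n, p = 30, 2000                 # far more "biomolecules" than "patients"
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[:3] = [2.0, -1.5, 1.0]   # only a few variables actually matter
y = X @ true_w + 0.1 * rng.standard_normal(n)

w_hat = ridge_fit(X, y, lam=1.0)
# Ranking coefficients by magnitude suggests candidate variables for
# follow-up analysis (recovery is not guaranteed at this sample size).
candidates = np.argsort(-np.abs(w_hat))[:3]
```

An unregularized least-squares fit would be underdetermined here (infinitely many solutions); the penalty term is what makes the estimate unique and stable.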
Most recently, Witten has developed network-learning tools that require less computation than previous approaches to identify the relationships among variables.
Witten notes that her technology has benefited from today's increased computational and storage capabilities, as well as better statistical-learning tools.
University of Southern California professor of statistics Gareth James says previous work in this research area looked at ad hoc approaches for estimating the complex patterns among genes that help or inhibit disease processes. Witten, he notes, has developed highly efficient techniques for more accurately conducting these computations.
Witten says her research holds promise for several areas of biology such as uncovering cancer's underlying mechanisms.
According to James, the research could also help in other fields that use machine learning, such as financial analysis.
Additional fields that could benefit include signal processing, computational linguistics, computer vision, and weather forecasting.
A team of academic researchers is working on a system that would let people who aren't technical professionals design and make their own robots relatively inexpensively in just a few hours.
This would be far different from today's costly and time-consuming robot design and manufacturing process.
A multidisciplinary team from Harvard University, MIT, and the University of Pennsylvania is working on the five-year project, called An Expedition in Computing for Compiling Printable Robotic Machines, which recently received a $10 million US National Science Foundation grant.
Advances in printable-robotics technology could democratize robot design and use, enabling people who aren't scientists or who don't have a lot of money to more quickly and easily create, manufacture, and market creative, useful robots.
Producing, programming, and designing a functioning robot traditionally involve hardware and software design, machine learning and vision, and advanced programming techniques.
The process also often requires the purchase of special parts and the complex design of a robot control system, notes University of Pennsylvania professor Insup Lee.
The new academic project promises to utilize 3D printing technologies to build robots. 3D printers use specialized printing heads that deposit thin layers of a bulk material such as a plastic or metal so that they form the desired shape. The system then fuses the material as necessary via heat or an adhesive.
The researchers plan to both extend the uses of existing 3D printers and develop new techniques for working with multiple types of materials simultaneously.
Developers could specify what they want the robot to do, such as walking or climbing. A compiler would translate these high-level specifications into the mechanical parts and control system necessary to accomplish such behavior.
For example, the compiler would convert the 3D design into a series of instructions that tell a specific type of printer how to lay down different kinds of materials to make the necessary parts.
The researchers are designing algorithms for telling printers how to create complex, working robotic parts such as limbs, gears, and motors.
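The compilation step can be caricatured as a lookup plus code generation: behaviors expand into parts, and parts into printer instructions. Everything below (the rule table, part names, and instruction format) is invented to illustrate the idea and is not the project's actual compiler.

```python
# A toy "robot compiler": maps high-level behavior specs to the
# mechanical parts and print steps they imply. All names are invented.
PART_RULES = {
    "walk": ["leg_linkage", "hip_gear", "drive_motor_mount"],
    "grasp": ["gripper_jaw", "finger_spring", "servo_mount"],
}

def compile_robot(behaviors):
    """Expand behaviors into a de-duplicated part list plus one
    print instruction per part."""
    parts = []
    for b in behaviors:
        for part in PART_RULES.get(b, []):
            if part not in parts:
                parts.append(part)
    instructions = [f"PRINT {p} material=plastic layers=thin" for p in parts]
    return parts, instructions

parts, steps = compile_robot(["walk", "grasp"])
```

A real compiler would also have to size parts, route actuation, and emit printer-specific toolpaths, but the pipeline shape (spec, then parts, then instructions) is the same.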
All this reduces design complexity for people who aren't robotics experts.
The scientists are also developing a high-level, easy-to-use CAD-like tool that provides a GUI to guide the design process. Robotic developers could work with templates of desired functions, such as walking or grasping, which they could arrange into a model of a working robot. They would then use the compiler to generate the shape.
The researchers are designing an easy-to-use API that would let developers specify types of robotic behavior using a familiar programming language such as C++ or Java.
The academic team has already created prototypes for an insectlike robot that could be used to search hazardous areas and a gripper that could be attached to a robotic appendage.
Printable robots are an exciting and promising challenge for education and research, says Universidad Carlos III de Madrid visiting professor Alberto Valero-Gomez.
Last year, his school began exploring an integrated approach to teaching robotic-system design with printable robots, a few of which are shown in Figure 2.
Figure 2 Universidad Carlos III de Madrid researchers made these robots with 3D printers. (Source: Alberto Valero-Gomez, Universidad Carlos III de Madrid; used with permission.)
The classical design approach focuses on programming, Valero-Gomez says. "Thanks to 3D printers," he says, "the teaching program may also include mechanical design. In this way, students might discover the tight relation between hardware and software." Thus, he notes, students learn how mechanical design changes can sometimes solve problems better, faster, and more robustly than software approaches.
Moreover, he said, with 3D printers, production times and associated costs are small.
The technology could also make it easier to develop fleets of robots quickly and inexpensively, and let a decentralized community of developers independently produce robotic parts based on shared digital designs.
Two key challenges for the researchers are developing customized controllers that can be easily programmed for various robotic and printer systems, and creating parts that are easy to print but still sturdy.
Better open source tools for the development and sharing of hardware and software designs could help make the 3D printing of robots more accessible to a wider audience.
Affordable tools for sharing, evolving, and collaborating on 3D hardware designs are also scarce. 3D design tools such as Computer Aided Three-Dimensional Interactive Application (www.3ds.com/products/catia) and Solid Edge (www.plm.automation.siemens.com/en_us/products/velocity/solidedge/index.shtml) are relatively expensive and not well suited for community development, says Universidad Carlos III's Valero-Gomez.
OpenSCAD software (www.openscad.org/) for designing objects is promising but lacks the versatility and flexibility of programming languages such as C++ and Java, he notes.
To address this issue, Valero-Gomez and his associates are working on the C++ Object Oriented Mechanics Library (http://iearobotics.com/oomlwiki/doku.php).
He says technologies like this; improvements in open source, low-cost 3D printers; the development of better manufacturing materials; and increased interest in the technology will advance the use of printable robotics.
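The programmatic-CAD idea behind tools like OOML can be sketched in a few lines: solids are objects, transformations and Boolean operations wrap them, and the resulting tree emits CAD source. The sketch below is a toy in Python that emits OpenSCAD code; OOML itself is a C++ library with a different API.

```python
class Cube:
    """An axis-aligned box, emitted as OpenSCAD source."""
    def __init__(self, x, y, z):
        self.size = (x, y, z)
    def scad(self):
        return "cube([%g, %g, %g]);" % self.size

class Translate:
    """Moves a child solid by a fixed offset."""
    def __init__(self, dx, dy, dz, child):
        self.offset, self.child = (dx, dy, dz), child
    def scad(self):
        return "translate([%g, %g, %g]) %s" % (*self.offset, self.child.scad())

class Union:
    """Boolean union of several solids."""
    def __init__(self, *children):
        self.children = children
    def scad(self):
        body = "\n  ".join(c.scad() for c in self.children)
        return "union() {\n  %s\n}" % body

# An L-shaped bracket built from two cubes, as OpenSCAD source.
bracket = Union(Cube(20, 20, 3), Translate(0, 0, 3, Cube(3, 20, 17)))
source = bracket.scad()
```

Because the design is ordinary code, it can be versioned, parameterized, and shared the way software is, which is exactly the collaboration gap the text describes.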
In the short run, the University of Pennsylvania's Lee says, the new project's main goal is building a better platform for teaching students to think about robotic design and software together.
In the long run, he notes, the work will create an equal playing field for people who want to build robots but don't have access to the complex skills and expensive equipment that traditionally have been necessary.
"This is an expedition to explore the limits of 3D printing and robotic programming together, and it is not clear when it will be practical," Lee says. "Today it is possible to do simple things. Making robots that have practical uses is an ambitious task and may take several years."