Issue No. 12, December 2006 (vol. 39)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2006.432
B.S. Bourgeois , Naval Res. Lab., Washington, DC
D.A. Sofge , Naval Res. Lab., Washington, DC
P. McDowell , Naval Res. Lab., Washington, DC
The ultimate goal of our research is to give teams of unmanned underwater vehicles (UUVs) some of the abilities animals have to adapt to their environment using their memories, without requiring exhaustive trial-and-error testing or complex modeling of the environment. We focus on UUVs because they promise to make dangerous tasks, such as searching for underwater hazards or surveying the ocean bottom, safer and more economical for government and commercial operations. We adopt a team concept to reduce overall mission cost, using several low-cost subordinate UUVs to augment the sensor capabilities of a higher-capability lead UUV. Our goal is to develop a team of robots that can learn their roles and improve team strategies so that the team meets its overall goals in dynamic, unstructured environments. Our research uses a sensor-input-based metric for success combined with a training regimen based on recently collected memories (a temporal series of sensor/action relationships), in which robots with "ears" listen for a leader robot and attempt to follow it, and the ensuing formations are the result of emergent behavior.
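The memory-based scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hypothetical nearest-neighbor variant in which a follower robot stores recent (sensor, action) pairs scored by a sensor-derived success metric, and for new sensor inputs reuses the remembered action with the best score. The class name, the acoustic-bearing sensor model, and the success metric are all illustrative assumptions.

```python
# Hypothetical sketch of memory-based in situ learning (not the paper's
# actual algorithm): a follower UUV records (sensor, action, score)
# tuples and reuses the best-scoring action for similar sensor inputs.
import math
import random


class MemoryBasedFollower:
    def __init__(self, capacity=200):
        self.memory = []          # recent (bearing, action, score) tuples
        self.capacity = capacity  # bound on how much experience is kept

    def success(self, bearing):
        # Sensor-input-based success metric (assumed): highest when the
        # leader's acoustic bearing is dead ahead (0 radians).
        return math.cos(bearing)

    def choose_action(self, bearing, actions=(-0.2, 0.0, 0.2)):
        # Recall memories with a similar sensor input and reuse the
        # action that previously produced the best success score.
        similar = [m for m in self.memory if abs(m[0] - bearing) < 0.3]
        if similar:
            return max(similar, key=lambda m: m[2])[1]
        # No relevant memory yet: explore with a random steering action.
        return random.choice(actions)

    def record(self, bearing, action, new_bearing):
        # Store the experience, scored by the resulting sensor reading.
        self.memory.append((bearing, action, self.success(new_bearing)))
        if len(self.memory) > self.capacity:
            self.memory.pop(0)  # forget the oldest memory
```

For example, after recording that steering left from a bearing of 0.5 rad reduced the bearing error while steering right increased it, the follower will prefer the left turn the next time it hears the leader near that bearing. No environment model is required; the policy improves purely from accumulated sensor/action memories.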
underwater vehicles, learning (artificial intelligence), remotely operated vehicles, robots, sensors, UUV, memory-based learning, unmanned underwater vehicles, robot teams, sensor-input-based metric, Robot sensing systems, Underwater vehicles, Animals, Testing, Hazards, Oceans, Environmental economics, Government, Costs, Vehicle dynamics, Learning algorithms, Unmanned vehicles, Robotics
S. Iyengar, B. Bourgeois, D. Sofge and P. McDowell, "Memory-based in situ learning for unmanned vehicles," in Computer, vol. 39, no. 12, pp. 62-66, Dec. 2006.