AI practitioners are playing a critical role in developing NASA's next generation of flight software. The goal is spacecraft autonomy. If we achieve this key ingredient of space exploration's next phase, NASA can have many more space platforms operating at once, make more effective use of limited communications resources, and attempt bolder mission concepts involving direct investigation of remote environments.
The last three years have been ones of challenge and even vindication for AI practitioners at NASA. Opportunities have arisen that most of us had in mind when we first chose careers in the space program. We wanted, and are now finally getting, the chance to contribute directly to the spacecraft missions NASA conducts, by deploying AI software not only for ground support but also directly on the space platform. This software will play an integral part in the concept and success of NASA's missions.
The changes behind these opportunities have their roots in the well-known "faster, better, cheaper" challenge issued by NASA Administrator Daniel Goldin. Mission and spacecraft designers, flight project managers, and technologists all are asked to make thoughtful contributions toward new kinds of missions that utilize new technologies and manage risks in new ways. The goal is not only to find ways to shorten mission-development lifecycles and reduce launch and operations costs (the "faster, cheaper" parts), but also to initiate a new era of exploration characterized by sustained, in-depth scientific studies at increasingly remote environments (the "better" part). Spacecraft autonomy plays a specific and essential role in this view of NASA's future mission set: the closing of planning, decision, and control loops onboard the space platforms rather than through human operators on the ground, to not simply enhance, but to enable bolder and unprecedented space-mission concepts.
An early achievement within NASA's "faster, better, cheaper" paradigm was the recent Mars Pathfinder mission, with its endearing rover Sojourner. The method of landing at Mars was clearly new and aggressive: more or less throwing the lander and rover at the planet within a cushion of airbags to absorb the impact. The technique proved an unqualified success, and it was only a matter of hours before the first images of the Martian surface were available on the Web. Soon thereafter, Sojourner had crawled down a deployment ramp to begin months of valuable scientific studies directly on the planet's surface.
The rover employed some simple autonomy capabilities: Sojourner could terminate traverse activities by detecting an expected landmark, typically a rock, at the end of the traverse. This simple form of landmark-based navigation will become increasingly critical for the much longer traverses planned for future Mars rover missions, where techniques based on dead reckoning will not scale. The rover used a laser-based system for detecting obstacles, painting a known pattern of laser light on nearby objects and interpreting the size and distortion of the pattern to infer the proximity and crude shape of obstacles. Sojourner's locomotion system is a fine example of achieving a kind of autonomy through engineering design. The system is extremely robust, allowing the rover to safely negotiate objects up to one-half of its own height, thereby rendering them nonobstacles and eliminating the need to actively characterize them and reason about how to avoid them.
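The geometry behind this kind of structured-light ranging, and the half-height obstacle rule, can be sketched in a few lines. This is an illustrative toy, not flight code; the baseline, focal length, and rover dimensions are assumed values for the example:

```python
def stripe_range(pixel_offset, baseline_m=0.1, focal_px=500.0):
    """Triangulate range to a surface from the image-plane displacement
    of a projected laser stripe (larger displacement = closer surface)."""
    if pixel_offset <= 0:
        return float("inf")  # stripe undisplaced: no nearby surface
    return baseline_m * focal_px / pixel_offset

def is_obstacle(est_height_m, rover_height_m=0.3):
    """Sojourner-style rule of thumb: objects up to half the rover's own
    height are negotiable by the locomotion system and need no avoidance."""
    return est_height_m > 0.5 * rover_height_m
```

The design point is worth noting: by making small objects traversable through mechanical robustness, the rover avoids ever having to reason about them.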
Sojourner's limited autonomy capabilities grew out of research and development work carried out several years ago at NASA, when the relevance of AI techniques for the missions was much less generally accepted. The landscape has changed. There might still be disagreement about what forms of autonomy are needed, how best to go about developing and deploying these new capabilities, or even about what autonomy is, exactly, as NASA embraces a shift that is as much cultural as it is technological. (For more discussion on this, see my interview with Robert Rasmussen on p. 76.) Nevertheless, the importance of autonomy—and the AI that underlies it—for many of the future missions is readily apparent and agreed upon.1,2
This special issue on autonomous space vehicles reports on much of the current work at NASA aimed at designing, developing, deploying, and evaluating autonomy capabilities for space platforms (see the "Autonomous Space Vehicles" sidebar). This lead article places this exciting AI work fully in its NASA context, and specifically in the context of the future planned missions of exploration—fascinating in their own right—that require, and in some cases cry out for, autonomy.
The Strategic Value of Autonomy
Spacecraft autonomy will pay off for NASA in three ways:
reducing mission costs (an example of "cheaper"),
making more efficient use of always-limited communications links between the ground and the space platform (an example of "faster"), and
enabling whole new mission concepts, each involving some new form of loop closing onboard the remote vehicle (an example of "better").
AI, and in particular model-based techniques, can reduce costs across the entire NASA mission lifecycle. They can let system designers understand constraints explicitly and quantitatively in the earliest mission-concept design studies. They can also provide modeling languages and tools that capture appropriate knowledge in the first stages of detailed design and carry it forward through the rest of the mission lifecycle. These tools can contribute to new software-engineering concepts and techniques for generating, testing, and reusing autonomy software, and, perhaps most obviously, can improve mission operations. The degree of success in reducing operations costs (by migrating traditionally ground-based functions to the spacecraft, providing a more direct link between mission scientists and the space platform, and in general decoupling the space vehicle from the traditional form of ground support) will most likely be the first criterion against which autonomy capabilities are evaluated. Certainly, there must be a shift from a paradigm of large, dedicated ground teams for each mission to smaller ground teams shared among several missions.
While the imperative of reducing mission lifecycle costs is easily understood, the greater strategic value of autonomy might lie elsewhere. For years, data-collection technologies as embodied in sensors and instruments have been easily outstripping the capacity of data-analysis techniques and technologies. The normal science data-processing and data-analysis lifecycle for a NASA mission involves downlinking all raw data and assembling a ground-based archive, on which the mission science team and later the science community at large perform offline analysis, typically for years. With concomitant advances in data-mining, image-analysis, and machine-learning technologies on the one hand, and onboard computing technologies on the other, the very real possibility of performing some forms of science-data analysis onboard the spacecraft, in near real time, has emerged. Onboard analysis offers two advantages:
capturing transient opportunities that require the quick, reliable recognition of scientifically interesting events (such opportunities clearly are lost in the normal course of delayed offline analysis), and
making more efficient and flexible use of the precious downlink resource, through downlink prioritization and, in some cases, the onboard construction of more compact, perhaps more useful science products from the raw data.
But perhaps the most exciting—and important—use for spacecraft autonomy is in the enabling of new kinds of missions, ones not previously within reach because they require the space platform to operate in an unprecedented closed-loop fashion in its environment. The greatest strategic payoff for autonomy is here, because the potential is nothing less than the launching of the next major phase of space exploration, beyond the highly successful reconnaissance missions that have already been completed. These future missions will involve sustained in situ scientific studies, with themes as compelling as the search for life in the universe.
Future Mars rover missions provide a good example of the need to close loops between science-related detection and mission planning. During long traverses from one preselected science site to another, the rover should detect potentially significant scientific phenomena and halt the traverse, conducting preliminary analyses and waiting for further instructions.
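The loop just described can be sketched as follows. The detector function and its interrupt threshold are placeholders for whatever onboard science-event recognizer a rover might carry:

```python
def run_traverse(steps, detect, threshold=0.9):
    """Execute traverse steps, halting early when the science-event
    detector scores above the interrupt threshold."""
    for i, step in enumerate(steps):
        step()                   # drive one segment of the traverse
        score, label = detect()  # look for outcrops, mineral signatures, ...
        if score >= threshold:
            # Halt here; conduct preliminary analysis and await instructions
            return {"status": "halted", "after_step": i, "event": label}
    return {"status": "completed", "after_step": len(steps) - 1, "event": None}
```

A real rover would fold this into its executive, but the essential closed loop (drive, look, decide) is already visible here.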
Another new form of loop closing involves constellation missions comprising multiple space platforms. Here loop closing takes the form of coordination among the platforms, which is most interesting when they carry different assets. An example from Earth orbit is the spaceborne detection of environmental hazards such as forest fires or volcanic eruptions. The first satellite to detect such an event might not have the most appropriate instrument for studying it, but when it sends out an alert across an entire Earth-observing fleet, other instruments can be brought to bear, each platform making its own decision on whether and how to contribute to the study of the event.
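A minimal sketch of this self-selection pattern follows; the platform names, instrument names, and alert structure are made up for illustration:

```python
class Platform:
    """One spacecraft in an Earth-observing fleet; decides for itself
    whether its instruments can contribute to an alerted event."""
    def __init__(self, name, instruments):
        self.name = name
        self.instruments = set(instruments)

    def respond(self, alert):
        # Volunteer only the instruments that match the event's needs
        suitable = alert["needs"] & self.instruments
        return sorted(suitable) if suitable else None

def broadcast(alert, fleet):
    """Send the alert to every platform and gather volunteered observations."""
    plans = {}
    for p in fleet:
        obs = p.respond(alert)
        if obs:
            plans[p.name] = obs
    return plans
```

Note that no central ground controller appears anywhere in the loop: the alerting platform broadcasts, and each receiver decides locally.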
These are just a few examples of future mission concepts where the contributions of autonomy will be as essential as those coming from any traditional form of spacecraft-engineering or mission-design expertise. (For a discussion of the effect autonomy will have on space exploration by a wide range of experts from throughout the space-exploration community, see "World Impact" on pp. 78-80.)
Components of Spacecraft Autonomy
The capabilities that contribute to spacecraft autonomy fall into six categories: automated guidance, navigation, and control; mission planning, scheduling, and resource management; intelligent execution; model-based fault management; onboard science-data analysis; and autonomy architectures and software engineering. The articles of this special issue will treat nearly all of these areas in depth:
Automated guidance, navigation, and control is the form of autonomy with the longest history and is what most spacecraft and mission people first (sometimes only) think of when asked about autonomy. The area includes target-body characterization and orbit determination, maneuver planning and execution, precise pointing of instruments, landmark recognition and hazard detection during landing, and formation flying.
Mission planning, scheduling, and resource management addresses the generation of spacecraft activity plans from high-level mission goals, with replanning when science or engineering events occur. Planned activities are automatically checked against available spacecraft resources and hard temporal constraints from the mission timeline.
Intelligent execution is about task-level execution, monitoring and control, contingency management, and overall coordination of spacecraft activities. The capability also provides a measure of protection against software failures.
Model-based fault management comprises anomaly detection, fault diagnosis, and fault recovery. Model-based reasoning techniques let designers achieve reliable fault protection without the comprehensive space-platform safing, loss of mission context, or immediate ground intervention required when faults occur.
Onboard science data processing includes trainable object recognizers and knowledge-discovery methods applied to, among other objectives, prioritizing science data for downlink. Scientists evolve goals by modifying onboard software as a better scientific understanding of the target emerges throughout the mission.
Autonomy architectures and software engineering are the glue that binds together all these capabilities. This area addresses basic separation of reasoning engines from models and knowledge, design of modeling languages and modeling tools, code and test generation, development of specific autonomy software testing concepts, and creation of architectures and development environments that promote easy, flexible software reuse from mission to mission.
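To make the planning-and-resource-management category concrete, the kind of automated feasibility check described above might look like the following sketch. The activity fields and the single power resource are illustrative assumptions, not any mission's actual plan representation:

```python
def check_plan(activities, power_budget_w):
    """Reject a plan whose activities overrun their hard deadlines
    or whose concurrent power draw exceeds the available budget."""
    violations = []
    # Hard temporal constraints: each activity must fit its window
    for a in activities:
        if a["start"] + a["duration"] > a["deadline"]:
            violations.append((a["name"], "misses hard deadline"))
    # Resource constraint: sample concurrent draw at every activity start
    for t in {a["start"] for a in activities}:
        draw = sum(a["power_w"] for a in activities
                   if a["start"] <= t < a["start"] + a["duration"])
        if draw > power_budget_w:
            violations.append((t, "power budget exceeded"))
    return violations
```

An onboard planner would search for a plan with no violations rather than merely report them, but the constraint-checking core is the same.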
NASA is currently developing most of these autonomy capabilities as part of an initial emphasis on autonomy for spacecraft or engineering functions. Such capabilities directly address loop-closing and cost-reduction goals. But as time goes on, autonomy development will increasingly target the science side of the missions. That work has begun even now. Once a critical mass of autonomy capability is in place, we can also expect an intersection with other computer science technologies. In particular, we can easily imagine scenarios where alerts based on science-event detections on remote space platforms are downlinked, then broadcast over a future, extended version of the World Wide Web, where even members of the general public can receive light-delayed but otherwise real-time imagery of volcanic eruptions on Jupiter's moon Io, for example. Indeed, such a scenario seems almost a logical conclusion of the technology development taking place in autonomy and other areas right now.
The notion of spacecraft autonomy raises concerns in the minds of many people at NASA—about technological maturity, risk, feasibility from a systems-engineering viewpoint, and actual benefits. Here, I'll enumerate some of these concerns and provide short responses. Neither the list of concerns nor the responses should be taken as complete or final. The emergence of spacecraft autonomy at NASA is taking place against a general background of cultural change, but the questions concerning autonomy have already moved beyond "why?" to "how?"
A common concern is an example of a systems-engineering issue: Will there be adequate computing resources onboard future spacecraft to support the more sophisticated flight software that is implied by autonomy? The answer appears to be a relatively recent "yes." In parallel with autonomy technology development, NASA is also pursuing aggressive technology development in flight computers and memory. While this concern might have been a show-stopper only a few years ago, we now anticipate that scalable processors in the 100+ MIPS range and gigabytes of onboard storage will be routinely available for future missions. Such specifications are well within the real-time and footprint needs of autonomy software currently under development. New forms of software fault tolerance are being developed as well, to contribute to solving the problem of operating in high-radiation environments, usually approached solely as a hardware fault-tolerance problem.
The communications resource is more interesting. As I've noted, instrument and sensor technologies routinely advance the capacity for collecting data onboard space platforms. Concurrent technology development in communications, particularly in optical communications, will help to offset this trend by increasing link-bandwidth capacity. However, the situation is a classic technology race, and the gap will probably never be eliminated. Given this, capabilities for performing onboard science-data analysis, either to prioritize downlink or to intelligently summarize science data, will almost certainly play an important role in addressing this particular resource challenge as well.
Another concern has to do with whether autonomy development will really reduce cost. After all, first-use applications of new technology rarely provide cost savings. This is true, and the straightforward response is that technology development costs must be amortized across several mission uses before the savings are apparent. But there is a different and subtler answer to this concern as well.
When new concepts and technologies are introduced in one part of the mission life cycle, new costs often appear elsewhere in the life cycle, in a strange kind of manifestation of an apparent conservation law. The way to prevent this phenomenon is to introduce new concepts and technologies across the mission life cycle, not only for their direct and complementary contributions to cost reductions, but also so that there is completeness and no easy cracks for new costs to fall through. Without such awareness, autonomy software testing could easily represent one of these cracks. Autonomy software, which is intended to support reasonable decision-making in scenarios that have not been anticipated, cannot be meaningfully tested with a scenario-enumeration or even a scenario-sampling approach. New testing concepts and approaches must emerge (see the article by Michael Lowry and Daniel Dvorak in this issue). Fortunately, autonomy work has contributions to make to design, development, integration, test, and, of course, operations. The best way to realize autonomy's cost-reduction potential is to apply new ideas in software engineering, and probably systems engineering as well, right across the mission lifecycle.
Yet another concern has to do with the perceived additional risk implied by any new technology. However, a technology does not entail risk in itself; rather, how the technology is used determines the level of risk. For example, science-autonomy developments suggest that the critical downlink resource might be usefully partitioned on future spacecraft between raw data, data that matches a recognizer, and data that passes some form of "interestingness" measure and needs to be examined by a scientist as a candidate discovery. How to weight the use of these downlink partitions is up to the mission designers and scientists. In fact, the technology might be used differently, perhaps more boldly as the mission unfolds, for several reasons:
increased confidence in the technology,
satisfaction of the mission's primary science goals,
a better basis for using recognizers, and
reduced support for continuing the mission.
The point is that the technology provides more options and flexibility, but risk posture is still for mission and science personnel to decide upon.
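One way to picture the partition idea: score each candidate data product by a weight on its partition, with the weights set by mission and science personnel, and greedily fill the available downlink. The partition names, weights, and product structure here are illustrative assumptions:

```python
def prioritize_downlink(products, weights, budget_bits):
    """Greedily fill the downlink with products ranked by the
    mission-assigned weight of their partition."""
    ranked = sorted(products, key=lambda p: weights[p["partition"]],
                    reverse=True)
    sent, used = [], 0
    for p in ranked:
        if used + p["bits"] <= budget_bits:  # skip anything that won't fit
            sent.append(p["id"])
            used += p["bits"]
    return sent
```

Shifting the weights, say to favor raw data early in the mission and candidate discoveries later, is exactly the kind of risk-posture decision that stays with the mission personnel.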
The risk issue for autonomy also involves concern about loss of predictability of spacecraft events or, equivalently, loss of precise tracking of spacecraft state. Strictly speaking, this observation is true, but it typically ignores the reasons why it is true. Autonomy software consciously considers the onboard context in which activities occur; this context can include both the spacecraft internal state and the environment. This property of autonomy software makes it difficult to test, most certainly, but it is also targeted towards an unprecedented form of robustness that traditional spacecraft sequences do not provide. Autonomy software can be resilient, continuing to try to find alternate ways of executing commands and achieving mission goals despite execution glitches, faults, and other unanticipated events. Traditional sequences might safely preserve the spacecraft, but the mission gets interrupted, pending ground intervention, when a sequence or contingency cannot execute properly. The flip side of unpredictability is effectively grappling with uncertainty, and this is much of autonomy's promise. NASA's autonomy technology developers fully acknowledge that they have not yet convincingly demonstrated autonomy software's robustness. However, future in situ missions all involve space platforms interacting directly with their environments, raising the stakes on the amount of uncertainty with which developers will have to deal. Autonomy does imply a trade between predictability and robustness in execution, but it is a well-considered trade, and an appropriate one for the times, in light of the nature of the future missions.
The Missions of Exploration
As I've argued, autonomy has strategic importance for many of NASA's planned missions. These missions are organized into three so-called enterprises:
Space Science, with primary responsibility at the Jet Propulsion Lab,
Earth Science, with primary responsibility at the Goddard Space Flight Center, and
Human Exploration and Development of Space (HEDS), with primary responsibility at the Johnson Space Center.
The three mission sets impose different kinds of drivers on autonomy technology development. In the Space Science mission set, the central difficulties associated with light-time-delayed and tenuous communication, coupled with the sparse prior information available on deep-space planetary targets, make the need for autonomy to respond, in context, to unanticipated engineering and science events, fairly obvious and imperative. This is particularly true in the upcoming wave of in situ missions where direct interaction with a remote planetary environment adds more uncertainty to what is already largely unknown. Planetary exploration (and someday, extra-solar system exploration) will always place the most severe demands on autonomy. For this reason, the majority of the mission examples given here are drawn from the Space Science Enterprise, which in no way slights the contributions of autonomy to the Earth Science and HEDS Enterprises.
The looming challenge in the Earth Science Enterprise is grappling with truly overwhelming amounts of data—on the order of terabytes a day—that fleets of Earth-observing space platforms will collect and downlink. Another challenge is automated planetary monitoring for hazards such as forest fires, volcanic eruptions, and poorly understood phenomena such as El Niño (see Figure 1). Earth orbit is also the first place where formations and constellations of spacecraft will appear—with their attendant control and coordination challenges.
Figure 1. Automated analysis of Earth observing data (all images in this article courtesy of JPL).
The driving consideration in the HEDS Enterprise is to find the right ways to combine human and machine intelligence into a single, effective system. One unique challenge is to automatically track the state accurately enough when a human enters a control loop so that the updated context can be made available once control reverts to the machine—a kind of cognitive clutch. Any applications of autonomy in the HEDS Enterprise will always be stringently evaluated against human safety concerns.
I turn now to a quick survey of some of the fascinating upcoming missions, describing their science and exploration goals—many of them unprecedented—and examining specifically what autonomy has to offer.
The Mars 2003 and Mars 2005 missions will return rovers to the surface of Mars. These missions will have more ambitious goals than Pathfinder/Sojourner in the number of sites to be investigated, the breadth and depth of science investigations to be conducted, and the total amount of terrain to be traversed (see Figure 2). The basic mission goal in each case is to collect and cache a sample of Mars rocks and other surface material (one or the other cache will be retrieved and returned to Earth as part of the Mars 2005 mission), performing in situ analysis both to support the selection of cache material and to return intermediate data in the normal way during the missions. The '03 and '05 rovers will each carry a full complement of scientific instruments and sensors to continue the investigation of conditions and possibilities for life on ancient Mars, among other goals.
Figure 2. A Mars Rover for long-traverse missions.
The rovers will operate in two major modes—conducting science investigations at a site and traversing between sites. To maximize the scientific return of these missions, autonomy will give rovers the onboard capability to interrupt a traverse based on the detection of scientifically interesting phenomena (outcroppings, unusual mineralogical signatures, or evidence of water). Autonomy will also let the rover adapt its performance by learning models of rover performance in the Martian environment. Even a few-percent increase in locomotion efficiency and resource usage can translate into significant additional scientific throughput when integrated over the entire mission.
The Europa Orbiter mission, which is slated for launch in 2003, will perform focused investigations of this most intriguing of Jupiter's moons. Europa fires the imagination because of current theories on the existence of a subsurface ocean. Tidal effects due to the proximity of immense Jupiter and orbital resonances among the Jovian satellites exert forces of considerable magnitude at Europa, great enough perhaps to release the thermal energy that could result in a layer of liquid water beneath the surface (Europa has long been known to be mostly a water-ice object, from Earth-based spectroscopic studies). Recently, organic material has been detected on the surface of Ganymede and Callisto, two other Jovian satellites, raising the stakes further on the possibilities for Europa to harbor the three basic ingredients of life: water, an energy source, and organic material. (See "Europa: life elsewhere?" pp. 81-84, for a further discussion of these investigations.)
Europa has a dramatically disrupted surface; one indirect indication of a subsurface ocean is the scale of tectonic movements on the Europan surface (see Figure 3). Autonomy can help here. The Europa Orbiter spacecraft can arrive with archived image data of Europa's surface from the previous Voyager and Galileo missions. The spacecraft will also begin to collect new data that can be archived onboard. Then there is a local basis—at three different time scales—to detect change on the surface of Europa. If the orbiter finds such evidence of tectonics, it can tag the specific images for high-priority downlink, in a natural and compelling example of using onboard data analysis to pursue science goals while efficiently addressing the constraints of deep-space communications.
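The core of such onboard change detection can be sketched as tile-wise differencing between an archived image and a new one. Real flight software would first co-register the images and compensate for lighting and viewing geometry; this toy version, with assumed tile size and threshold, ignores all of that:

```python
def changed_tiles(old, new, tile=2, threshold=10.0):
    """Flag (row, col) tile origins whose mean absolute pixel change
    between an archived image and a new image exceeds the threshold."""
    h, w = len(old), len(old[0])
    flagged = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            diffs = [abs(new[i][j] - old[i][j])
                     for i in range(r, min(r + tile, h))
                     for j in range(c, min(c + tile, w))]
            if sum(diffs) / len(diffs) > threshold:
                flagged.append((r, c))
    return flagged
```

Images containing flagged tiles would be the ones tagged for high-priority downlink.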
Figure 3. Change detection on planetary surfaces.
Origins is a new NASA program charged with investigating the ultimate origins—of the universe, of galaxies, of life. The Planetfinder mission might turn out to be its flagship mission. Planetfinder will be a deep-space interferometer most likely comprising several elements. Using interferometry to null the light coming from nearby stars (out to 50 light-years), and then systematically searching for planetary companions of those stars, this mission aims to directly image Earth-class planets by 2010 or so, ultimately resolving continental masses on their surfaces. The search for life in the universe has recently taken a number of palpable and exciting forms at NASA.
The need for autonomy on Planetfinder stems from the mission's multiple-platform aspect. Because the interferometer would be composed from several spacecraft elements, the need to point the entire formation with unprecedented precision for truly deep-space observing creates a special challenge (see Figure 4). If this collective platform is to operate at low cost, the inevitable—and divergent—degradations of performance that will appear over time across the distinct platforms must be automatically detected, evaluated, and compensated to preserve the interferometer's overall coordinated pointing accuracy. On this mission's science side, automated classification of detected planets is a possibility, as is automated spectroscopic analysis of atmospheric constituents of Earth-like planets.
Figure 4. Deep-sky interferometry.
Before the Planetfinder mission is realized, formations and constellations of spacecraft in Earth orbit will appear, with objectives for Earth observing (natural-event detection, atmospheric and oceanographic studies, land-use and ecological management) and communications (networks such as Iridium and Teledesic). The salient differences between formations and constellations are whether the individual satellite assets are similar or not, and whether a strict geometric configuration is required to perform the mission. In general, homogeneous formations are appropriate for supporting low-Earth-orbit (LEO), satellite-based communication networks, while heterogeneous constellations provide the greater flexibility that Earth-observing objectives demand.
In formations and constellations, spacecraft functions become distributed and do not simply scale from the single-platform case. This applies to mission planning, resource management, execution, and fault protection, as well as to information sharing and problem solving. Multiple-platform missions also require shared approaches to operations to keep costs down—for example, with ongoing engineering-data summarization and paging alerts when problems occur. Finally, automated orbit maintenance, including onboard navigation, maneuver planning, and execution, along with de-orbiting of compromised satellites and automated promotion of satellites held in reserve, will be needed to maintain formations at low cost.
In its manned space programs, NASA planners are looking in earnest at autonomy for the next-generation Space Shuttle concept, to reduce the cost and turnaround time associated with the vehicle's flight. The specific goal is to cut payload launch costs tenfold and achieve a routine seven-day turnaround. NASA has examined a number of designs recently, with the one known as X-33 going forward to detailed design and initial flight tests.
Autonomy figures prominently in the emerging NASA concept to operate a low-cost, quick-turnaround reusable launch vehicle for LEO manned missions. Here, onboard software conducts ongoing fault and performance monitoring, while salient engineering data is downlinked automatically and requests for maintenance and repair are also generated automatically. Such requests are input to a ground-based automated planning and scheduling system, which generates and updates a maintenance plan and schedule for refurbishing the vehicle even while it is in flight, for immediate execution upon landing.
Returning to deep space, a mission that might fly by 2004 is the Pluto/Kuiper Express mission. Pluto is the only known planet yet to be visited by a spacecraft. Historically, Pluto has been something of an enigma. The first four planets (including Earth) are small and rocky, with thin atmospheres. The next four, known as the gas giants, are large and might be almost entirely composed of gas. Pluto, despite its great distance, seems more like the terrestrial planets than the gas giants. This mystery might now be solved, with the emerging understanding of a third class of objects in the solar system, the so-called Kuiper objects, of which Pluto might be the most outstanding member. A mission to Pluto is now even more compelling in the context of this new theory.
Any trajectory to Pluto is dominated by the extremely long cruise period required to reach this most distant planet. The Pluto Express mission calls for on the order of a 12-year cruise, including the benefit of a gravity assist at Jupiter. To keep costs reasonable, Pluto mission personnel conceived an innovative operations concept known as Beacon Operations. On a continuous basis, the spacecraft sends a simple signal that denotes the urgency with which it needs interaction with the ground. This concept assumes a certain level of autonomy on the spacecraft, certainly for fault protection, but perhaps for detecting science events as well. The Beacon Mode Operations concept includes the idea of onboard engineering-data summarization in an ongoing fashion, so that when an emergency signal arrives from the spacecraft, it is quickly followed—once a full communications link is established—by an anomaly report, including context and completed analysis, to bootstrap the ground-based troubleshooting effort.
Perhaps the mission currently on the NASA books that cries out for autonomy more than any other is Deep Space Four (DS-4), which is the rather generic name given to a mission with a planned 2003 launch that is to rendezvous with, land on, and return a sample from a comet (see Figure 5). Comets likely contain primordial material largely unaltered from the era of the formation of the solar system. DS-4 would rendezvous with its comet at the range from the sun where interaction with the solar wind begins to produce noticeable activity—the beginnings of the tail.
Figure 5. Sample return from a comet.
What makes the DS-4 mission so intriguing from an autonomy viewpoint is the extreme unpredictability of the cometary environment. Comets can spontaneously emit jets, eject particles, and even break up. A mission to rendezvous with, much less land on, a comet must detect events that represent potential hazards to the spacecraft and mission, and that are often science events in their own right. At a minimum, the relevant onboard autonomy capabilities are event and hazard detection, object tracking, navigation, and maneuver planning and execution. These capabilities must be tightly integrated so that decision loops can close quickly, for example to abort a landing or execute a safety maneuver.
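The tight integration described above can be illustrated with a toy decision loop. Everything here, including the event names, severity scale, and action vocabulary, is a hypothetical sketch rather than the DS-4 flight design; the point is only that every detected event is logged as science, and that a hazard during descent triggers an abort within the same control cycle, without waiting for ground contact.

```python
# Illustrative onboard decision loop (not the DS-4 flight design).
# Each cycle, detected cometary events are both recorded as science
# and checked as hazards; a hazard during descent aborts immediately.
def decision_loop(events, phase):
    """Return the actions for one control cycle.
    events: list of (kind, severity) tuples from onboard detectors.
    phase:  'cruise' or 'descent'."""
    actions = []
    for kind, severity in events:
        actions.append(("record_science", kind))        # every event has science value
        hazardous = kind in {"jet", "particle_ejection", "breakup"} and severity > 0.5
        if hazardous and phase == "descent":
            return actions + [("abort_landing", kind)]  # close the loop immediately
        if hazardous:
            actions.append(("plan_evasive_maneuver", kind))
    return actions
```

Closing this loop onboard is what distinguishes the autonomy requirement here from ordinary fault protection: at cometary distances, a round-trip signal to Earth takes far too long for a ground operator to call off a landing.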
Aerobots—or planetary hot-air balloons carrying serious scientific payloads—are a newly conceived space-platform concept that combines the wide coverage advantages of orbiting spacecraft with the in situ exploration advantages of surface vehicles such as rovers (see Figure 6
). The basic idea is to exploit the diurnal thermal cycle of a planetary environment to alternately rise into the prevailing winds and descend to the surface (where there is one) once a day, or once a sol, the day's equivalent in the local planetary environment. The concept works wherever a planetary atmosphere exists, including at Venus, Mars, Jupiter, and Saturn's moon Titan.
Figure 6. Planetary aerobots.
An aerobot would require a high degree of onboard autonomy, because entering a dense atmosphere (Mars, with its thin atmosphere, being the exception among the examples just given) makes communications difficult, so much of the mission must proceed without routine interaction with the ground. Aerobots also pose a unique form of the path-planning problem. Presumably, motion in the vertical dimension can be reasonably well controlled, but the two horizontal dimensions will have a significant stochastic element, and path planning would have to be based on models of wind patterns. This suggests arriving with crude wind models derived from Earth-based observations and refining them with actual experience in the planetary atmosphere. In aerobot missions, scientists might feel a certain frustration when an analysis cannot be completed in situ, because returning to a site would be nearly impossible.
Perhaps the mission with the most remarkable set of stretch goals, for both autonomy and general engineering functions, is the proposed Europa cryobot/hydrobot mission. This mission would land on the surface of Europa, melt through its icy crust, and release an underwater submersible into the suspected subsurface ocean (see Figure 7). The problems to be solved are mind-boggling. First, melting through perhaps several kilometers of ice at a temperature that gives ice the structural properties of rock, starting from vacuum, is unprecedented. Going tethered or untethered each presents unique challenges. A tethered mission would solve the communications problem, but reaching the Europan ocean floor, which might lie a hundred kilometers below the ice-water boundary, becomes problematic. Going untethered instead forces one to consider acoustic communication within the ocean, navigating back to the penetration site, or somehow reemerging through the ice crust at a different site.
Figure 7. Exploration of unknowable environments.
The need for autonomy on this mission is obvious, for the usual drivers of poor communications and an uncertain environment are multiplied many times. It's hard to imagine sending a spacecraft into a more alien environment. And yet, it's also hard to imagine a more compelling place to explore. Nowhere else in our solar system have we any reason to expect to find an ocean, perhaps the defining global characteristic of our own planet. I've already noted that Europa might harbor the basic ingredients of life. Would we be able to equip our intelligent envoy to know what to look for and the means to recognize it?
This quick survey is just a sample from the incredibly exciting set of future NASA missions. The space agency is returning to its most noble goals of exploration: the search for life in the universe and a new vision of sustained, vigilant intelligent presence in the solar system and eventually beyond, via a fleet of autonomous space vehicles. Autonomy done well means tapping the expertise not only of computer scientists, but of spacecraft engineers, mission designers, operations personnel, software engineers, and systems engineers. And, for the first time, AI practitioners will work side by side with these traditional contributors, to realize the future NASA mission set.
Richard J. Doyle
is technical section manager of the Information and Computing Technologies Research Section and program manager for the Autonomy Technology Program at JPL. His research interests are in causal reasoning about physical systems, machine learning, and quantum computing. He received a BA in mathematics from Boston University, an SM in electrical engineering and computer science from MIT, and his PhD in computer science at the MIT Artificial Intelligence Laboratory. He was the US Program Chair for the International Symposium on Artificial Intelligence, Robotics, and Automation for Space, held in Tokyo in July 1997. He is a member of the AAAI, Sigma Xi, Phi Beta Kappa, and Pi Mu Epsilon. Contact him at JPL, Mail Stop 126-347, California Inst. of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109-8099; firstname.lastname@example.org; http://autonomy.jpl.nasa.gov.