IEEE Internet Computing

PATTIE MAES on Software Agents: Humanizing the Global Computer

"Now that we have a network, it's as though we already have our intelligent machine. It's a huge distributed system in which, like an ant society, none of the components are critical."

--Pattie Maes

Anyone familiar with the current schisms in artificial intelligence knows that Pattie Maes, long-time AI researcher and student of Rodney Brooks, hopes to revolutionize the way we think about agent technologies. Brooks, Maes, and others represent a new paradigm in AI often termed the "bottom-up" school, in which biological structures have replaced the rules of logic in the quest to develop intelligent machines.

Traditional AI approaches, which use symbolic knowledge representations that embody fundamental "rules of thought," have been turned upside down by the new school, who write simple, small programs that are designed to let intelligence evolve as the programs interact. The small programs run without any central, complex governing program, which proponents point out is more closely akin to the actual neuronal structure of the brain than abstract symbolic languages will ever be. The first demonstrations of the possibilities of this new approach were Brooks' famous "bugbots," insectlike robots that wandered MIT's Artificial Intelligence Laboratory during the late 1980s.

Maes and her Software Agents Group at MIT have taken this principle of interaction and married it to the Internet with the development of software agents that interact with other agents or humans to provide useful services, usually using a Web interface. Maes's company, Firefly Network, Inc., based in Cambridge, Mass., was first to market in the tightly competitive AI world with innovative agent products that let Web sites develop personalized content and services for users. The now famous Firefly Web site,* which launched Maes into the public spotlight in 1996, was designed as a prototype for this company.  

Pattie Maes is clearly one of the most dynamic and imaginative thinkers on the forefront of agent research today. Internet Computing's Charles Petrie and Meredith Wiggins met with her for an hour of provocative conversation at her office at the Media Lab, where Maes began with an explanation of why she views agents as a critical technology in today's computing environment.

What do you think are the most important developments in agent technology right now?

Let's first say what agents are. You see the term used in so many different ways. I use the word agent to mean software that is proactive, personalized, and adaptive. Software that can actually act on behalf of people, take initiative, make suggestions, and so on. This is in contrast to today's software, which is really very passive: it just sits there and waits until you pick it up and make it do something. The metaphor used for today's software is that of a tool.

At the MIT Media Lab's Software Agents Group, we're trying to change the nature of human-computer interaction. I personally believe more proactive and more personalized software is of crucial importance because our computer environments are becoming more and more complex, and we as users can no longer stay on top of things.

The whole metaphor of direct manipulation, of viewing software as a tool that the user manipulates, was invented about 25 years ago when the personal computer was first emerging and when the situation for the user was completely different. Back then, the computer was being used for a very small number of tasks. It was being used by one person, who knew exactly where all the information was on the computer because he or she put it there. Nothing would happen unless that person made it happen. This was a very controlled, static, structured kind of environment.

The situation that a computer user faces today is completely different. Suddenly the computer is a window into a world of information, people, software. . . . And this world is vast, unstructured, and completely dynamic. It's no longer the case that a person can be in control of this world and master it. So there is actually a mismatch between the way in which we interact with computers, or the metaphor that we use for human-computer interaction, and what the computer environment really is like today. I think we need a new metaphor.

The one we are proposing is that of software agents, software that is personalized, that knows the user, knows what the user's interests, habits, and goals are. Software that takes an active role in helping the user with those goals and interests, making suggestions, acting on the user's behalf, performing tasks it thinks will be useful to support the user.

One of the phrases you've used in your work is the metaphor of indirect versus direct control.

Yes, actually Alan Kay1 originally came up with the great phrase of "indirect management" in contrast with direct manipulation. Our goal is to change the nature of human-computer interaction from the direct manipulation metaphor where the user has to initiate everything, to an indirect management style of interaction where every user has a whole army of agents that try to help with the user's different tasks, goals, and interests.

Sometimes I envision it as having digital alter-egos, extensions of yourself in a digital world that obviously aren't as complex and as smart as you are, but that look out for your particular interests and are continuously acting on your behalf. They may be monitoring some data of particular interest to you, like whether the stocks that you own are increasing or decreasing in value, and notify you when unusual changes take place. Buying and selling agents may actually represent you, engaging in transactions on your behalf, negotiating with other people for you, spending your money, or making you money. Other agents may make recommendations to you about things you may want to look into, much as the Firefly software now helps you find relevant people and information.

Your vision is one of autonomous intelligent, personal agents. Another competing vision people have been talking about is ubiquitous computing, as originally described by Mark Weiser* from Xerox PARC.

Actually, agents and ubiquitous computing are complementary visions rather than competing ones. We are actively involved in merging the two. The Media Lab has its own terminology for an idea that is very close to ubiquitous computing--we refer to it as Things That Think, or TTT. If software is to take a more active role in helping users, one of the first prerequisites is that it know their interests, goals, and behavior and be able to detect patterns in their actions. We have been working on ubiquitous computing in the sense of embedding sensors and computation and communication abilities in everyday objects. This will permit, for example, my refrigerator to monitor whether I'm out of milk and tell my remembrance agent (the agent that reminds me of things that may be important to me) to remind me to pick up some milk the next time I drive past the grocery store. The approaches are completely complementary: Ubiquitous computing makes it possible for agents to help users with physical world tasks as well as digital world tasks.

Agents need to have information about the user--not just about the user's behavior online but about the user's behavior in the physical world--for them to assist us with a range of tasks. Embedding not intelligence but capabilities in everyday objects is one crucial part of the solution.

This vision of intelligent personal agents has been around for a long time. You've written that we're quite a ways from seeing it come to pass. Is anything new in this regard?

Indeed this vision of agents is a very old one, and in fact it probably has been around for 25 years or so. But I think that for the last, say, 20 years we were on the wrong path toward trying to achieve it. We were trying to approach this goal by researching artificial intelligence, by attempting to make computers with the same level of intelligence and the same capacities that people have. Obviously this is a very ambitious goal. Although we have made some progress, it will still be a very, very long time before we actually have computers that really are as intelligent as people.

One of the critical things that has happened in the last five to 10 years is the emergence of a new approach toward building agents, much more of a brute-force approach--some people even refer to it as "cheating." We try to build software entities that demonstrate behavior that, to an observer, seems intelligent, even though the ways in which they achieve that intelligence may not truly be the way that people do it.

This is starting to sound like Eliza.*

Yes, actually it is. Nobody really took Eliza seriously. But now we are taking approaches that are similar to that of Eliza to build not just intelligent software agents but intelligent systems. The same approach is being taken in robotics2 and natural language understanding3 as well. These approaches are brute force and rely mostly on pattern recognition, recognizing patterns in large amounts of data, rather than relying on knowledge representation and other traditional AI techniques people have been working on for many years. There's no reasoning, no inferencing, none of that. It's just recognizing patterns and exploiting them.4

To give an example, Firefly is a system that can help you find information relevant to your interests in the areas of music, movies, Web sites, whatever. The Firefly system doesn't have any real knowledge of music. It appears as if it does because it can recommend to you artists or recordings, based on some knowledge of your interests, that you have a good chance of finding very relevant. But it does this simply by exploiting patterns it finds among users.

Because the users of Firefly tell the system what music they like and dislike, when you ask for recommendations for, let's say, blues artists, the system computes the users who are most similar to you in interests--the people who are your taste-mates, so to speak. It then checks for music they are interested in that you don't seem to know about yet and recommends that music to you. You can think of it as a way to facilitate the transfer of musical intelligence among people. That's just one example of how a brute-force, pattern-recognition approach can result in things that actually work, that are useful, that even can be called intelligent (that seem to be intelligent to an observer) even though there isn't any real musical intelligence or understanding behind the system.
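To make the mechanics concrete, here is a minimal Python sketch of the kind of user-based collaborative filtering Maes describes. The users, ratings, and similarity measure are all invented for illustration; Firefly's production algorithms are more sophisticated and are not shown here.

```python
# Minimal sketch of user-based collaborative filtering in the spirit
# of the approach described above. All users, items, and ratings are
# hypothetical; this is not Firefly's actual code.

def similarity(a, b):
    """Fraction of commonly rated items on which two users agree."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1.0 for item in shared if a[item] == b[item]) / len(shared)

def recommend(target, all_ratings):
    """Suggest items liked by the target's 'taste-mates' that the
    target has not rated yet, weighted by how similar each mate is."""
    scores = {}
    for ratings in all_ratings.values():
        if ratings is target:
            continue
        w = similarity(target, ratings)
        for item, liked in ratings.items():
            if item not in target and liked > 0:
                scores[item] = scores.get(item, 0.0) + w * liked
    return [i for i in sorted(scores, key=scores.get, reverse=True)
            if scores[i] > 0]

ratings = {  # +1 = likes, -1 = dislikes
    "you":   {"B.B. King": 1, "Muddy Waters": 1},
    "alice": {"B.B. King": 1, "Muddy Waters": 1, "John Lee Hooker": 1},
    "bob":   {"B.B. King": -1, "Buddy Guy": 1},
}
print(recommend(ratings["you"], ratings))  # ['John Lee Hooker']
```

Note that, exactly as Maes says, nothing in the sketch knows anything about music itself; the recommendations fall out purely from patterns of agreement among users.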

It occurs to me that in contrast to processes like browsing in a library or on the Web, which expose you to new ideas, systems like Firefly give users data that's very narrow in scope. Will people's horizons be narrowed if they rely on personal agents?

This is a very valid concern, but one which can be dealt with through good user interface design. If an agent only gives you what you like or what you ask for, then your view of the world will become more and more narrow. It's important to integrate the agent interface into a direct manipulation interface, or to integrate agent recommendations within an existing system where the user can also browse.

To take a concrete example, if you have an agent that puts together a personalized newspaper for you, one way to do this--and this is what we originally did--is to have the agent give you a list of articles. You give it feedback and it changes its profile of you. If you think the agent shouldn't be your sole source of news information, you go to another program that lets you browse news just like a real newspaper.

It turns out that's completely the wrong approach to take. Most people won't bother to do the browsing, they'll rely on the agent's articles, and after a while they'll get a much more narrow view of the world. They'll only be giving the agent feedback about stuff the agent already gives them. Instead, a much better approach is to take an existing direct manipulation metaphor--meaning the user can browse directly, like with a newspaper--and have the agent highlight articles it thinks the user will be interested in. It's as if someone who knows you very well has already gone through your newspaper and highlighted all the things you definitely should not miss. You will still see the other articles, and you may say, "Oh, this is interesting as well," and then the agent can learn that you're also interested in that and adapt its user model.
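As a rough sketch of this "highlight, don't filter" design, the following hypothetical Python fragment keeps every headline visible, marks the ones matching a learned keyword profile, and updates the profile when the user reads something. The scoring scheme is invented for the example.

```python
# Hypothetical sketch of the "highlight, don't filter" interface:
# every article stays visible; the agent only marks the ones whose
# words match the user's learned keyword profile.

from collections import Counter

profile = Counter()  # learned user model: word -> interest weight

def score(article):
    return sum(profile[w] for w in article.lower().split())

def show(articles, threshold=1):
    for a in articles:
        mark = "*" if score(a) >= threshold else " "
        print(mark, a)  # the user still sees every headline

def read(article):
    """Feedback: reading an article raises the weight of its words."""
    profile.update(article.lower().split())

read("Jazz festival announces lineup")
show(["Jazz club reopens downtown", "City council votes on budget"])
# * Jazz club reopens downtown
#   City council votes on budget
```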

Let's talk more about the technology behind your filtering agents. What new technology is involved--or are you suggesting you have simply applied existing technology in a different way?

Actually, there isn't much new technology involved. A lot of the pattern-recognition algorithms we're using are standard stuff. Of course, you want to make sure that things scale, and so on, so the algorithms need to be adapted, but it's not at all a completely new technical field. Often more research is needed in the user-interface design for all of this than in the actual algorithms involved.

What I'm really doing, I think, is shaking people up. Once they get it they say, "Oh yes, of course, we could make computers take more initiative, etcetera." But it's not necessarily that hard to make it happen--it doesn't take years and years of research. In fact, a lot of this kind of agent work is already becoming commercially available. So I'd say software agents are mostly a new way to think about software.

Let me disagree with you a little bit. It seems like you do have some new technology, or at least some new ideas. A case in point: if you're going to use this approach, it's very hard to jump-start one of these agents because it doesn't start with any supply of patterns to evaluate. It doesn't know what to recommend to you. It seems to me one of your novel ideas has been for these agents to talk to other agents who do know something about their users.

Definitely there's a very important idea here. For so many years AI has been trying to build intelligent computers--intelligent medical experts, for example. What we realized is that there's a different way to make computers do intelligent things, and that's by actually allowing them to channel or transfer intelligence and know-how among people who are dealing with similar problems. The technical term we use is "collaborative filtering."

For example, I recently did some research because I was buying a car. I looked at all the different online magazines, read reviews of the cars I was interested in, tried to find information about what prices the dealer really pays and whether they get money back at the end of the year if they sell more than a given number of cars. Even though there are many other people who are (or soon will be) trying to solve exactly that same problem, right now this type of knowledge is lost unless it's shared directly among friends. With something like movies it's easy to acquire knowledge since almost everybody watches movies fairly often (even so, you may have very different tastes in movies than most of your friends, in which case you're still not that well off). But with more unique kinds of problems like buying a car--you and your friends don't buy a car every other week--then it really is very difficult to make use of the knowledge acquired by other people. Our systems try to leverage this kind of knowledge.

In some of your papers you solved the problem of bootstrapping the system by starting off with virtual users. When I go on the Net and I look at the list of people in Firefly, are any of them virtual?

Actually we don't have virtual users in Firefly, because we had a lot of users from the start. For the first week, maybe, the system didn't give good recommendations, but from then on it was fine. So in practice if you have a lot of users, bootstrapping the system is not a problem.

I see, things are chaotic only for very short times because you have so many people. So the technical point here is that by using this technique you can substitute lots of people for learning experience. It's like a space versus time tradeoff, but it's number of users versus time instead.

Exactly.

So collaborative filtering is a pure AI learning technique that you developed. But would it have developed without the Internet?

Probably not. The Web wasn't really there in 1993, but the Net was there. And the technique definitely relies on the fact that there's lots of people connected and you can easily tap into their experience or opinions.

It's not completely limited to usage on the Internet, though, because you could imagine that, say, Tower Records had kiosks in their stores. When you put in your membership card it would ask you what you thought of the U2 album you bought last time you were there, and after you answer, it tells you some new things that you might be interested in that week.

But what I think is important is that it was in some real way an Internet-inspired technique, just like Tom Malone's Lens system.8-10 He wouldn't have developed Lens without the Internet either.

Yes, it's true. AI has tried to build stand-alone, intelligent systems--one machine that would be as intelligent as a person, the dual goals of this being to understand human intelligence by trying to synthesize it, but also to create smart machines that can do things for us. In terms of that second goal, now that we have a network, it's as though we already have our intelligent machine. It's a huge distributed system constituted by lots of people as well as machines that you can make act like an intelligent system.

For example, I can send a message to mailing lists and get the answer to any question I may have, or even get people to do anything I may need done. So in a way I think we already have an artificially intelligent system. Yes, it's sort of a mix of humans and machines, but it exists. It's a completely distributed system in which, like in an ant society, none of the components are critical. If any of the people who are part of this network are not logged in or die tomorrow, it's not going to affect its performance. I can still get the answers to all the questions I have. So it's an extremely robust, fault-tolerant, swarm-like or insect society-like kind of intelligent system. That's the kind of thing we are exploiting with systems like Firefly.

What a wonderful insight. Science fiction writers have been writing for years about developing a consciousness within a system that's sufficiently robust, and AI has been working on the structure of the consciousness. And what you're saying is, never mind, it's here!

We always think of intelligence as a centralized thing. We view even our own consciousness as centralized. It's called the homunculus metaphor--that there's a little person inside our brain running things. But it's more likely that intelligence is decentralized and distributed.11 It would be great to try to solve some other AI problems in this way--for example, the "common sense knowledge" problem. In the Cyc* project Doug Lenat is basically trying to build a computer that knows all the common-sense facts that a ten-year-old would know, like how many feet a horse has, and so on.

He's trying to build in what you would need to know in order to read an encyclopedia and understand it.

Right, all the stuff that isn't in the encyclopedia. Now you could try to reach that same goal in a completely distributed way by making use of the Internet. Instead of having ten people carefully craft a knowledge base and enter all the facts, you could have a system that asks anyone who's online at the moment how many feet a horse has. Well, actually it should ask 100 people how many feet a horse has, because some people may be malicious and say five or three. If it asks enough people, it could take the answer that is given most often, four. You could build up common sense like that in a very distributed way by using the power that the Net provides--the fact that there are always people connected and they're willing to do a little bit of work. And if everybody's willing to do a little bit of work and you have some interesting software to connect all of that, you can achieve behavior that is seemingly very intelligent. You can achieve very complex and sophisticated things.
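A minimal sketch of that voting scheme, assuming some hypothetical ask_user() mechanism for reaching people who happen to be online:

```python
# Sketch of the distributed common-sense idea: ask many people the
# same question and keep the most frequent answer, so a few malicious
# or mistaken replies are voted down. ask_user() is a stand-in for
# whatever network mechanism would actually reach connected users.

import random
from collections import Counter

def ask_user(question):
    # Placeholder: most people answer correctly, a few do not.
    return random.choices(["four", "three", "five"], weights=[90, 5, 5])[0]

def crowd_fact(question, n=100):
    answers = Counter(ask_user(question) for _ in range(n))
    return answers.most_common(1)[0][0]

print(crowd_fact("How many feet does a horse have?"))  # almost always "four"
```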

You've just outlined a very interesting research project which may be more promising than Cyc or Ontolingua or any of the other distributed, formal systems.

Initially the Web was designed with this very much in mind. It was designed as a sharing medium, a place where we would together build up our knowledge about things and have a medium for dialogue.

Yes, but there's nothing about formal knowledge sharing built into HTML or HTTP. Cyc and Ontolingua12 are trying to build up knowledge bases that software can use for reasoning and inference. The research project you just outlined would develop software that allows that using the Internet.

Yes, and this project wouldn't just be ratings of artists, it would be a database of facts that is being built up by having millions of people contribute. Even if each person contributed only two facts, together this would result in a very rich and powerful system.

But would those two facts be computer readable?

I think they could be, yes, if the system asks for them in the right way and if people answered in a structured format.

Is this project just an idea right now, or do you have plans to pursue it?

Well, we may pursue it, although I always have too many projects, and this would be a big one to pursue.

Let's go back to your current work. On one hand you're making this sound ready for prime time, while on the other hand, there are clearly some very difficult issues here. This new idea involves coming up with common ontologies and a common language. You are certainly also working on new learning algorithms.

I completely agree that there is a lot of work involved if you want to make things robust, if you want to make things scalable, and so on. Those are all very hard problems to solve, and you mentioned some of the other technical difficulties that we deal with. What language do you use to allow these agents to talk to each other? Which learning algorithms do you use? How do you provide enough features for the system to actually detect patterns? There are a whole set of technical challenges. While some of it is indeed ready for prime time, there's definitely a lot of work ahead of us as well. Enough to work on for the next 10 or 20 years.

How do you juggle the effort it takes to commercialize things with the effort it takes to do the research on these hard problems?

I'm trying to avoid a mistake we made in AI, which is that a lot of the problems we worked on weren't really relevant for any applications. I think I became a little bit disillusioned by having been in AI research for 10 years, and seeing so much work done to come up with a very general solution to problems--a generic architecture for x or a general language for y--but none of it ever got used. In all of my research I try to strike a difficult balance between doing basic research, coming up with results that are general and can be used by the research community, and building prototypes that are usable, that can inspire others to deploy them in commercial applications. I think it's important to try to do both of these things, and never to lose contact with the real world.

I've taken a very pragmatic approach. I'm doing what I call "applications-motivated" or "applications-driven" basic research. My goal is still to do basic research, but it's completely driven by applications. I only tackle certain research problems when there is a real need for tackling them, and I try to find a good balance between developing technology that applies to more than just one system, while avoiding the most general solution, because it typically ends up being too bulky, too big, and ends up never being used.

This seems like a controversial approach to science. It goes against a long history of research, not only in AI but in computer science in general, of coming up with the most general solution, and then applying it.

Often we say that Artificial Intelligence has "physics envy." AI researchers hope they'll find the general principles for x and y. For example, many people in AI hoped and still hope that there could be a generic problem solver and other such very general principles that would apply across all problem-solving domains. They would be the magic ingredients to solve all of your problems. I and some others as well are taking a different approach. We're approaching artificial intelligence more in the way a biologist would, rather than a physicist. There are a lot of principles, not just five or three. There are lots of different mechanisms that are all useful and interacting, just like what you would find in an organism or an ecosystem. For example, research into animal behavior shows us that the behavior of an animal is the result of many simple components interacting--a huge bag of tricks, so to speak--rather than the result of any generalized complex reasoning and representation modules.

Who else is taking this approach?

There's a whole new school of people taking this biology-inspired approach to AI. Rodney Brooks* is one of the researchers who started the new wave of AI research. Brooks' work shows how you can build robots that demonstrate sophisticated behavior by integrating a distributed set of very simple modules. Marvin Minsky* is another good example.

Didn't AI go to this general approach because it initially started out with people writing clever programs that did astounding things, but from which nothing general could be learned?

To some extent, yes, it's the pendulum swinging back to clever programs. However, we are trying to do more than that; we're trying to figure out how these different clever programs may interact, for example. One important idea is this idea of very distributed, adaptive systems, systems that continuously try to change themselves and adapt. How does this very distributed, decentralized collection of entities organize itself? And how does it adapt over time?

Let me get back to your different approach in research, which is to do specific things and then learn from it and generalize, which also has the advantage that it lets you commercialize the very practical things you develop. How much of your time does commercialization take away from your research?

At the moment it's up to one day per week, which is the amount of time MIT encourages its faculty to spend transferring know-how to industry. Most people spend that time consulting for existing companies. I use that time to create my own company. I haven't regretted it yet.

I've been building these applications of software agents for six years now (I've been doing more general AI-oriented agent work for much longer), and I used to tell many different companies about my work, trying to convince them to incorporate the ideas into products. It's amazing how slow most of the bigger companies are to adopt new ideas. A concrete example is Apple. They funded a lot of my work on software agents, which I am very grateful for, and although I was there every month talking about the newest stuff we had done and giving them code, they didn't end up doing anything with it. Now it's Microsoft that has the first agent in an application with Microsoft Office 97.* I felt I had to start my own company to make sure these things actually became commercially available. That was the only way to really make it happen.

Do you think the Microsoft Office 97 Advisor is an agent?

It's a simple example of an agent, but it definitely is one. It's just providing better help functionality, but it monitors your actions and, based upon the pattern of actions that you demonstrate, it recommends specific help topics to you. So it tries to recognize what your goal is and gives you help that is relevant to the current situation. It's not personalized yet--it's assisting me in the same way it's assisting you--but it's a first step. Hopefully people will like this first attempt, and Microsoft will take it further.
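The general mechanism described here--watch the user's actions, match them against known patterns, suggest a relevant help topic--can be sketched in a few lines. The action names and patterns below are hypothetical; this illustrates the idea, not Microsoft's implementation.

```python
# Toy sketch of the mechanism: watch the user's recent actions and
# suggest a help topic when they match a known pattern. Action names
# and patterns are invented for the example.

PATTERNS = {  # hypothetical action sequences -> help topics
    ("insert-table", "undo", "insert-table"): "Working with tables",
    ("print", "cancel", "print"): "Troubleshooting printing",
}

recent = []

def observe(action, window=3):
    recent.append(action)
    key = tuple(recent[-window:])
    if key in PATTERNS:
        print("Tip:", PATTERNS[key])

for a in ["insert-table", "undo", "insert-table"]:
    observe(a)  # -> Tip: Working with tables
```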

An aspect of what you call an agent is something that is personalized or personalizable, and perhaps can surprise you (something like what's meant by taking initiative). But you don't necessarily insist that an agent be sociable.

Sociable in the sense that they talk to other agents or to people?

I tend to mean it in terms of talking to other agents. But you also consider something to be an agent if it operates in a single platform environment and only talks to a single person?

Sure, yes. The key is that it takes initiative, that it can act autonomously, that it doesn't just sit there and wait, but that it always assesses the situation, monitors the environment, and decides to take action based on whatever happens to be going on. That is the way I use the word agent. It really stems from AI research, in which agent is an abstract concept. Basically, it is a program that has its own goals, that has sensors to sense its environment continuously, and that can decide what actions to engage in to make progress toward its goals based on what it senses in its environment.
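That definition maps directly onto a sense-decide-act loop. A minimal skeleton, with a placeholder environment and a trivial decision policy, might look like this (everything here is a generic illustration of the abstract concept, not any particular system):

```python
# Skeleton of the abstract agent described above: goals, sensors, and
# a continuous loop that picks actions based on what is sensed.

class Agent:
    def __init__(self, goal_test, behaviors):
        self.goal_test = goal_test      # when the agent is satisfied
        self.behaviors = behaviors      # list of (guard, action) pairs

    def sense(self, environment):
        return dict(environment)        # placeholder sensor: full copy

    def decide(self, percept):
        # Trivial policy: the first behavior whose guard matches fires.
        for guard, action in self.behaviors:
            if guard(percept):
                return action
        return None

    def run(self, environment, max_steps=10):
        for _ in range(max_steps):
            percept = self.sense(environment)
            if self.goal_test(percept):
                return "goal reached"
            action = self.decide(percept)
            if action is None:
                return "stuck"
            action(environment)         # act on the environment
        return "out of time"

# Tiny demonstration: an agent whose goal is to drive a counter to 3.
env = {"count": 0}
agent = Agent(
    goal_test=lambda p: p["count"] >= 3,
    behaviors=[(lambda p: True,
                lambda e: e.update(count=e["count"] + 1))],
)
print(agent.run(env))  # goal reached
```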

Some people will indeed add to that that agents have to be able to talk to other agents, but that's not a necessary characteristic. In fact, often what you see in AI research is this notion that agents communicate with one another through the environment without engaging in explicit communication. Ants, for example, don't talk to each other. Still they demonstrate collaborative work that requires them to communicate with each other passively by changing the environment. For example, if a couple of ants put down some food somewhere, then other ants are more likely to leave their food in the same location. Suddenly that's where the whole ant colony ends up storing its food. There isn't any real communication between these ants; they just change the environment and that affects the other ants' behavior. Software agents can communicate or collaborate in that kind of sense as well. For example, agents could leave the digital equivalent of a pheromone at documents they deemed relevant to their users, thereby attracting more agents toward those same documents.
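The "digital pheromone" idea can also be sketched briefly. In this hypothetical fragment, agents never exchange messages; they simply mark documents they found relevant, the marks fade over time, and other agents are probabilistically drawn toward heavily marked documents.

```python
# Sketch of stigmergy among software agents: communication happens
# only through marks left in the shared environment. All names and
# numbers are hypothetical.

import random

pheromone = {}  # document -> accumulated scent

def mark(doc, amount=1.0):
    pheromone[doc] = pheromone.get(doc, 0.0) + amount

def evaporate(rate=0.1):
    for doc in pheromone:
        pheromone[doc] *= (1.0 - rate)  # old trails fade over time

def choose(docs):
    """Pick a document to visit, biased toward stronger trails."""
    weights = [1.0 + pheromone.get(d, 0.0) for d in docs]
    return random.choices(docs, weights=weights)[0]

mark("agents-survey.html")
mark("agents-survey.html")
print(choose(["agents-survey.html", "misc.html"]))  # usually the marked one
```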

What problems have you solved launching systems like Firefly, and what's left to do?

I think we've made a lot of progress in learning algorithms, and in figuring out what learning algorithms to apply to what problems. Now we have one agent that helps the user with one specific problem like finding relevant music, and that agent can communicate with other people's agents to transfer knowledge among users. However, what we haven't really tackled yet is heterogeneous agents. Right now all of these agents are helping the user with the same things, so there aren't that many problems in terms of how these agents collaborate, how they communicate, what ontology they use, because it's all hand-coded and they're all homogeneous. We want to move toward a situation in which these agents could be heterogeneous and different vendors could be making them. Second, we want these heterogeneous agents to be able to collaborate with one another.

For example, if I buy an agent from one company to filter my e-mail, and I buy a personal news-filtering agent from another company, ideally the two agents should be able to exchange information and collaborate. For example, topics my e-mail agent has found I give high priority to are probably things I want to receive news stories about as well. Similarly, if I get lots of e-mail from a particular person at a particular company, I probably want to receive news stories that mention that same company, and so on. My buying and selling agents could collaborate with my recommendation agents or with my matchmaking agent, so that, for example, I could be matched with other people who are trying to buy the same car as me.
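A bare-bones sketch of that kind of collaboration: a mail agent publishes the topics the user prioritizes, and a news agent from a different vendor consumes them. The shared vocabulary here is just a set of topic strings, standing in for the richer agent language Maes goes on to discuss; all class and topic names are invented.

```python
# Sketch of two heterogeneous agents sharing one user model through a
# minimal common vocabulary (a set of topic strings). Hypothetical.

shared_topics = set()  # the "common language" both agents understand

class MailAgent:
    def prioritize(self, message, topic):
        # The user flagged this mail as important; publish its topic.
        shared_topics.add(topic)
        return f"filed '{message}' as high priority"

class NewsAgent:
    def select(self, stories):
        # stories: list of (headline, topic) pairs
        return [h for h, t in stories if t in shared_topics]

mail, news = MailAgent(), NewsAgent()
mail.prioritize("Re: Q3 roadmap", topic="acme-corp")
print(news.select([("Acme Corp. ships new product", "acme-corp"),
                   ("Weather update", "weather")]))
# ['Acme Corp. ships new product']
```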

This will require more generic languages that agents can use to exchange information. Some efforts have been made here, for example, DARPA sponsored an effort to come up with standards.

Their knowledge-sharing effort?

Yes. It still hasn't really been used extensively, though.

The hard problem here seems to be agreeing upon the standard. It was KQML, but in fact nobody is using standard KQML. Now the Physical Agents Society* has proposed something new (Agent Specification,* Foundation for Intelligent Physical Agents (FIPA)*). Do you think the standards will emerge in this fashion or in a more grassroots manner?

Well, I'm not an expert on standards and I personally don't find it terribly interesting to be involved with standardization. It's a slow and painful process. But I'm not convinced you can always impose standards like that in a top-down way, and I'm sure often it happens in very unexpected ways, for instance some products that are successful become the de facto standards. With Java that's definitely the case.

Yes, with Java, and with everything on the Internet from TCP/IP to SMTP for e-mail. So you may be setting standards by having commercial working products out there.

Definitely my priority is to build things that demonstrate the usefulness of this technology, so that it isn't simply the next fad that everybody has forgotten about a year from now. I want to make sure that there is something substantial there. I'm less interested in coming up with the standards before we even know whether users want this stuff.

But it's also important to know what it is you need in a standard, and you do that by experimentation.

Yes, definitely. And I think it's still too early to standardize agents and the languages they use. We need more experimentation first, more wild ideas that people try out, and different applications. Whenever you come up with standards you stop research and development right there, or at least slow things down a lot.

Let's switch tracks now and talk about mobile agents, code that can migrate from machine to machine, with a persistent identity so it's carrying its state with it. What is the application for this technology?

If you find out, let me know, because I haven't found out yet. There isn't one, as far as I can tell. I agree this idea of mobile agents is definitely an appealing one. It sounds very elegant and interesting. However, once you ask the question, what can you do with mobile agents that you cannot do with stationary agents, there is no satisfactory answer that I've come across. So again, I'm not going to worry about mobile agents unless I have a need for them.

Some people say, yes, but look at Java, isn't that useful?

Yes, but Java is not really moving program state around. And also, it's not generally mobile. It just means you're downloading a program from somewhere; it's not that that program itself is hopping around based on results from computation and deciding where to go. So Java per se is not an example of a mobile agent. Java is interesting for a lot of other reasons apart from the fact that you can download it on your machine, of course. Its portability, for example.

Let's take as an example your Challenger system, a network load-leveling system that uses agents that reside on each machine.15 If you used mobile agents, you would still have to put some sort of common framework on each machine. Do you see any advantage to using a mobile system for systems like Challenger?

No, honestly, you're asking the wrong person. I am known to be very critical of mobile agents. Whenever I give tutorials about agents I always warn the people in the audience that I think they are really not as important as some people make them seem.

Some people are frightened by the vision of ubiquitous computing and digital alter egos. Loss of privacy is clearly one component of this. What do you say to such people?

I think that most consumers (myself included) will only be willing to adopt agent technology if their privacy is safeguarded. Luckily agents do not necessarily imply a loss of privacy. We advocate that agent technology should be deployed in such a way that the consumer is the sole owner of the information that is captured about him/her, and also that the consumer has complete control over who gets access to what aspects of that information. Several commercial examples of agents, such as the Firefly software, have demonstrated successfully that this (however thin) line can be walked.

REFERENCES

1. A. Kay, "User Interface: A Personal View," in The Art of Human-Computer Interface Design, B. Laurel, ed., Addison-Wesley, Reading, Mass., 1990, pp. 191-207.

2. R. Brooks, "Elephants Don't Play Chess," Robotics and Autonomous Systems, Vol. 6, 1990.

3. M. Mauldin, "Chatterbots, TinyMuds, and the Turing Test," Proc. Natl Conf. AI (AAAI-94), MIT Press, Cambridge, Mass., 1994.

4. P. Maes, "Agents That Reduce Work and Information Overload," Comm. ACM, Vol. 37, No. 7, 1994.

5. P. Maes, "Intelligent Software," Scientific American, Vol. 273, No. 3, Sept. 1995, pp. 84-86.

6. U. Shardanand and P. Maes, "Social Information Filtering: Algorithms for Automating Word of Mouth," Proc. CHI-95 Conf., ACM Press, New York, May 1995.

7. Y. Lashkari, M. Metral, and P. Maes, "Collaborative Interface Agents," Proc. 12th Natl Conf. Artificial Intelligence, Vol. 1, AAAI Press, Seattle, Wash., Aug. 1994.

8. K. Lai, T. Malone, and K. Yu, "Object Lens: A Spreadsheet for Cooperative Work," ACM Trans. Office-Information Systems, Vol. 5, No. 4, 1988, pp. 297-326.

9. T.W. Malone, "Free on the Range: Tom Malone on the Implications of the Digital Age," IEEE Internet Computing, Vol. 1, No. 3, May/June 1997, pp. 8-20, http://computer.org/internet/xtras/malone9703.htm.

10. T.W. Malone et al., "Intelligent Information Sharing Systems," Comm. ACM, Vol. 30, 1987, pp. 390-402.

11. D. Dennett, Consciousness Explained, Little, Brown, Waltham, Mass., 1992.

12. T. Gruber, "A Translation Approach to Portable Ontology Specification," Knowledge Acquisition, Vol. 5, No. 2, 1993, pp. 199-220.

13. M. Minsky, The Society of Mind, Simon & Schuster, New York, 1986.

14. P. Maes, "Modeling Adaptive Autonomous Agents," Artificial Life J., C. Langton, ed., Vol. 1, Nos. 1&2, MIT Press, Cambridge, Mass., 1994, pp. 135-162.

15. A. Chavez, A. Moukas, and P. Maes, "Challenger: A Multiagent System for Distributed Resource Allocation," Proc. Intl Conf. Autonomous Agents, Marina del Rey, Calif., 1997, forthcoming.

URLs FOR THIS ARTICLE

Rodney Brooks and Cog
http://www.ai.mit.edu/projects/cog
Cog is a humanoid robot developed by Rodney Brooks, associate director of the AI Laboratory at MIT. In the March 1997 issue of Time magazine, Brooks said of Cog that "there is no there there," referring to the robot's decentralized intelligence.

Marvin Minsky on the symbolic vs. connectionist controversy
http://minsky.www.media.mit.edu/people/minsky/papers/SymbolicVs.Connectionist.txt
Turing Award recipient and professor at the MIT Media Lab, Minsky is well known for his seminal work in neural networks. He built SNARC, the first neural network simulator, in 1951. Other inventions include mechanical hands and other robotic devices, the confocal scanning microscope, the "Muse" synthesizer for musical variations (with E. Fredkin), and the first LOGO "turtle."  

CYC
http://www.cyc.com/tech.html
http://www.cyc.com/documentation.html
Doug Lenat developed the Cyc project in 1984 while at Microelectronics and Computer Technology Corporation (MCC). The Cyc system comprises a very large, multicontextual knowledge base, an inference engine, a set of interface tools, and a number of special-purpose application modules. In 1995 the project was spun off into the new company Cycorp, with Lenat as president.

Eliza
http://www-ai.ijs.si/eliza/eliza.html

Firefly, Inc.
http://www.firefly.com

Microsoft Office 97
http://www.microsoft.com/workshop/prog/agent

Ontolingua
http://www.cs.umbc.edu/agents/kse/ontology
http://www-ksl-svc.stanford.edu:5915/doc/frame-editor/index.html
http://www-ksl-svc.stanford.edu:5915
Ontolingua is a tool for the construction of collaborative ontologies developed by Tom Gruber.

Physical Agents Society specifications
Agent Specification, http://drogo.cselt.stet.it/fipa/spec/httoc.htm
Foundation for Intelligent Physical Agents (FIPA), http://drogo.cselt.stet.it/fipa

Mark Weiser
http://www.ubiq.com/weiser.html