University of Modena and Reggio Emilia
University of Southampton
Pages: 96-98
Abstract—A little confusion goes a long way—too far—with software-based agents. Engineering discipline is the solution.
It was a pleasant, warm evening. Franco found a soft grassy area in the park and sat down with a ball of spaghetti and a beer. He ate, drank, and rested quietly in the fading light of a beautiful spring evening. It wasn't a night to spend with the Salvation Army, he thought.
"Please stand up and report to the nearest police station! City laws forbid vagrants from sleeping in the park."
Startled by the harsh synthetic voice of the park control system, Franco looked up and saw one of those irritating all-look-the-same yuppies scrambling for a place to hide with his girlfriend. He saw, too, that—like every other boy of his age and social class that year—this one was wearing a pair of Timberland boots, carefully dirtied so he could appear as poor as possible.
Obviously, one of those ridiculously simple software agents that were dispersed in every corner of the city 20 years ago had never been deactivated. Now it had mistakenly identified the prosperous kid as a vagrant because of his scruffy boots.
In 2017, a tidal wave of enthusiasm had launched a program to integrate automatic intelligent control systems. At first, the program had strong support from governments, suppliers, merchants, and consumers. By embedding agents in streets and other public sites, the city expected to provide useful information and improve public safety. In the park, for example, intelligent software agents controlled embedded sensors designed to recognize dangerous or illegal situations, discourage lawbreakers, and alert the nearest police station.
Franco didn't care much about the sensors. Like any true vagrant, he knew exactly how to outwit those software agents by simply rearranging his clothing. When the young boy gave up and left the park with his girlfriend, the synthetic voice stopped. Franco settled down again to watch the stars that began appearing in the sky.
Then, as often happened on such lonely nights, Franco started thinking of those last days of his youth—right before his personal agent hell began.
On a Saturday night in July 2021, Franco had joined his friends at a café. He ordered a draft beer from a cheerful waitress, while his friend Paolo ordered one using a PDA. Everybody in the group began making bets on which of the two beers would come first.
Franco won the bet. Paolo paid the waitress for Franco's beer, ordered another one for himself, and then tried to track the status of his electronic beer order. The agent was stupidly smiling from the screen, asking Paolo to be patient, unable to say anything more.
Over the past few months, the waitress had consistently beaten the agent because of power reductions and congestion in e-marketplaces. In fact, most people no longer used a PDA to order a beer, but the fun of betting hadn't yet worn off for Franco and his friends.
At the same table, Andrea and Mario were talking about music. Mario was irritated because yesterday he had bought the latest Anastacia MP7 at almost twice the price Andrea paid just a few hours earlier. Such discrepancies were common with the auction-based pricing systems that e-marketplace service providers promoted as a technology destined to overtake human effort.
Paolo and Franco joined the discussion—yet another occasion to laugh about Paolo's PDA agent. The friends debated the recent European Commission resolution to impose strong price regulations on e-marketplaces. The resolution aimed to eliminate agent-based pricing systems.
Mario claimed to believe in a secret society that controlled the whole pricing system, but everyone knew this idea was just an urban myth. It was simply impossible to control the billions of agents negotiating in the network and the unbearable price fluctuations that resulted.
In the end, the friends all agreed on the usefulness of the EC's decision.
Laughing, talking, and drinking—a usual Saturday night. But Franco was expecting something unusual. At 10:00 p.m., he was to meet with Cecchetto, the city's most important music agent. Franco already had some success as a sax player in local discos and pubs. However, Cecchetto could change his life—opening doors to the big theaters, MP7 recordings for sale in major portals, and why not TV?
Franco had worked hard to arrange the meeting, which Cecchetto postponed several times. Now the time had come. But 10:00 passed, then 11:00, and Cecchetto did not appear. By midnight, Franco gave up.
He tried to contact Cecchetto the next day and the next, but couldn't get through to him. He began receiving messages from the discos and pubs where he usually played, saying, "We're sorry, but we have to cancel your performance." Soon, all his performances were cancelled, and he couldn't get any new gigs. Franco was without work or money. He eventually lost everything, including his old sax.
What happened was this: Cecchetto was simply running late that night and sent a personal message to Franco to apologize: "Franco, I am late. Please wait for me. I want your sax!"
Somehow, the last "a" of the message was misinterpreted as an "e." Franco had instructed his agents to automatically answer any sex-related spam messages, before deleting them, with messages like, "You moron! Stop wasting my time!"
Being very powerful and easily offended, Cecchetto shut every door in the city to Franco and his sax. Franco tried to explain what happened, but it was too late.
His story became one of many that led to the EC's decision a few years later to ban software agents.
Notable advances in both miniaturization and communications technology, together with advances in artificial intelligence and agent-based computing, enable us to imagine a world of pervasive computing technologies.
It is very likely that hardware technologies, properly empowered with agent-based software, will dramatically improve our quality of life. Applications in our homes and workplaces can use sensors and intelligent control systems to automatically perform tasks such as regulating room temperature and ordering supplies.
At closer range, agents can coordinate the activities of wristwatches, PDAs, and cellular phones via short-range wireless technologies that interconnect devices worn on a person's body. Connecting such agents to a city's computer-based infrastructure could allow, for example, users of augmented reality glasses to visualize environmental dangers.
Most commercial transactions can occur in agent-based marketplaces, using computerized mechanisms and dynamic pricing systems to monitor trends that humans might not have the time or inclination to track.
However, Franco's story points out the potential for agent technology to degenerate into agent hell.
For instance, the agents in the park scenario could not adapt effectively to changes in fashion, so a rich boy had to leave the park while Franco and other vagrants had learned how to cheat the system. Even if such agents could adapt to new situations over time, they would likely be useless—even disturbing—for long intervals.
Of course, deactivation is one way to circumvent a useless system. However, is it possible—or simply economically feasible—to remove millions of computer-based systems dispersed throughout a city? Alternatively, is it possible to deactivate the typically self-powered systems on which the agents reside? Or, to avoid deactivation, is it possible to globally reprogram millions of agents, forcing adaptation to a new situation?
The café scenario shows that agents do not necessarily improve the performance of even simple, useful tasks. Ordering a beer left Paolo's PDA agent stuck in the middle of a commercial network transaction.
In complex and critical social mechanisms, such as pricing, agent-based systems could dramatically increase the instability and chaotic behavior that already characterize today's market economies.
In fact, some observers have claimed that the rigid rationality of an agent-mediated economy might provide more economic stability, but their claims are backed up by neither experience nor realistic simulations. Nor do they account for the unpredictable behaviors that can emerge in a collective. In the café scenario, the price differences in Mario and Andrea's music files may have emerged from the global agent-based economy having reached a strange—and possibly chaotic—attractor, regardless of any actual change in the demand for such goods.
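This kind of instability can be illustrated with a toy cobweb market, in which each agent sets its supply from the previous round's clearing price. The function, its parameters, and the demand curve below are all invented for this sketch; it is a minimal model, not a realistic simulation, showing how simple and locally rational pricing rules can produce persistent boom-and-bust cycles with no change in demand:

```python
def cobweb_prices(responsiveness, agents=10, rounds=50, p0=10.0,
                  demand_a=100.0, demand_b=2.0):
    """Toy cobweb market: each agent supplies responsiveness * last
    clearing price; the new price clears aggregate supply against a
    linear demand curve D(p) = demand_a - demand_b * p. Prices are
    clamped at zero, since goods cannot sell at negative prices."""
    prices = [p0]
    for _ in range(rounds):
        supply = agents * responsiveness * prices[-1]  # homogeneous agents
        clearing = max((demand_a - supply) / demand_b, 0.0)
        prices.append(clearing)
    return prices
```

With a low responsiveness of 0.1, the price converges smoothly to its equilibrium of about 33.3. Raising the same agents' responsiveness to 0.4 drives the identical market into a permanent oscillation between 0 and 50, even though demand never changed: the fluctuations emerge from the interaction rule alone, exactly the kind of collective behavior the individual agents neither intend nor perceive.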
In general, multitudes of interacting autonomous components executing in a dynamic environment suggest an interactive system in which the global state evolves as a function more of environmental dynamics and interactions than of internal component-level intelligence and rationality. Thus, as software agents begin to populate everyday networks and environments, global behaviors will become increasingly important in all agent-based activity.
Unfortunately, the state of the art in complexity science is still very far from offering constructive methods for controlling global state in interactive systems. Without such methods, skeptics like Mario could easily reject agent systems as demonic entities under the control of an esoteric secret society.
Delegating work to agents requires trusting them, yet software agent technology is unlikely to achieve the complex human decision-making capabilities that numerous tasks require. Franco's story is a possibly naive and extreme example of how the lack of these capabilities in a message agent could ruin someone's life.
Even with much more intelligent agents, trust is a difficult issue. While we do not argue that trusting agents is and will always be wrong, we do contend that trust must be achieved gradually. Potential advantages must be carefully evaluated against potential drawbacks.
Consumer and developer enthusiasm for advanced technologies already characterizes the software market. However, this enthusiasm can lead to shortcutting best practices in product development and test. Because agents are autonomous, deploying them with poor testing and documentation—in the tradition of some large software companies—could yield disastrous results. Instead, software agents should undergo exhaustive tests defining their characteristics and limitations, learning processes (if any), and behavior in relation to environmental dynamics and uncertainty—all carefully documented.
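As a concrete, if simplistic, illustration of such characterization testing, consider a toy rule of the kind the story's park agent might use. The rule, the feature names, and the labeled cases below are all hypothetical; the point of the sketch is that sweeping a rule over labeled scenarios and recording where it misfires turns testing into documentation of an agent's limits:

```python
def looks_like_vagrant(person):
    # Hypothetical rule of the sort the park agent might apply:
    # scruffy boots plus sitting on the grass means "vagrant".
    return person["boots_scruffy"] and person["sitting_on_grass"]

def characterize(rule, labeled_cases):
    """Run a classification rule over labeled cases and collect the
    misclassified ones, so the agent's limits can be documented
    before deployment rather than discovered in the field."""
    return [case for case in labeled_cases
            if rule(case["features"]) != case["is_vagrant"]]

# Two cases straight from the story: both fool the naive rule.
cases = [
    {"features": {"boots_scruffy": True, "sitting_on_grass": True},
     "is_vagrant": False},   # fashionably scruffy rich kid
    {"features": {"boots_scruffy": False, "sitting_on_grass": True},
     "is_vagrant": True},    # vagrant who rearranged his clothing
]
```

Running `characterize(looks_like_vagrant, cases)` flags both cases as failures, making the agent's blind spots explicit before it is embedded in a park for twenty years.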
Our agent hell scenario aims to emphasize that it is not enough to explore methods of making agents more intelligent and autonomous or to analyze the ways and extent to which we can delegate work to them. Equally important is the need to advance the discipline related to engineering such systems. Agent-oriented software engineering research can address several areas that serve this end.
The research topics described here emerged explicitly from discussions at the meetings of the "Methodologies and Software Engineering for Agent Systems" Special Interest Group of the EC-funded Agentlink Research Network of Excellence (www.agentlink.org/activities/sigs/sig2.html).
We must study the social, political, and ecological implications of billions of agents executing in our physical environments, interacting with each other and with the environment in a globally interconnected network, and possibly capable of monitoring our everyday activities.
Prior to developing a software system, analysts should clearly understand its feasibility and likely impact. The pervasiveness and autonomous decision-making capabilities of agent-based systems make such considerations particularly important.
Environmental conditions change, but we may not necessarily have the option of updating an agent-based system's response to such changes.
We need to model the relationships of agent systems with their environments so that systems not only operate effectively and learn but also adapt to changing environmental conditions.
To help develop and maintain well-engineered agent systems, we must define good modeling tools and methodologies. Despite the impossibility of controlling both individual agents and environmental dynamism, we need ways to predict interactive behavior among large numbers of agents and to provide some sort of control for easily maintaining them.
We believe that, because agent systems are large and closely tied to the physical world, good tools and methodologies should take their inspiration from the science of complexity and, more generally, from all scientific disciplines dealing with complex macro systems. This means adding physics, biology, and sociology to the logical sciences that traditionally dominate computer science.
Finally, the need to study the scalability properties of multiagent systems, well before problems of scale arise, is both evident and pressing. Once agents begin to populate the world, their numbers will grow—resulting in potentially unmanageable systems of dramatic size. By then, it will be too late to rethink methodologies that we originally conceived for systems with only a few dozen agents.
We need new performance models specifically tuned to agent-based systems. Such models should go beyond integrating and extending well-established performance models for distributed systems (which are nevertheless needed); they should also define performance models for trust, characterizing how and to what extent we can trust an agent system to perform a given activity.
The moral of the story: Systematic development of agent-based systems requires rigorous software engineering processes and suitable tools to ensure a future agent heaven rather than agent hell.