Vol. 20, no. 2, March/April 2005, pp. 4-7
Published by the IEEE Computer Society
ABSTRACT
Practical Agents Help Out, Dana Voth. As AI research probes further into making machines more like humans, technology incorporating advances in speech recognition, conversational interactions, decision making, and emotion is being used to create virtual practical agents: text, voice, and graphical avatars that assist people with various tasks and difficult situations.
AI Robots Team Up to Win Prisoner's Dilemma, Benjamin Alfonsi. A team of intelligent agents from the University of Southampton worked together to win the 2004 Iterated Prisoner's Dilemma competition, defeating the Tit for Tat strategy, which had won for the last 20 years. Was this a case of bending the rules or a significant step toward advancing AI and gaming research?




Practical Agents Help Out
As artificial intelligence probes further into making machines more like humans, researchers are using technology advances in speech recognition, conversational interactions, decision making, and emotion to create virtual practical agents. These text, voice, and graphical avatars assist people with various tasks and difficult situations. They can help people better use services, make financial transactions, learn about technology, and even develop interpersonal negotiating skills and cope with stressful situations.
Are you typing to me?
Many businesses use practical agents to support customer care operations. Conversagent, a company that provides interactive conversational applications, deploys text-based automated service agents (ASAs) that "replicate the behavior of a very well-trained human service rep," says CEO Stephen Klein. Conversagent's ASAs interact with customers in a chatbot format and use AI technology to understand questions that customers input using natural language. Through pattern matching and semantic parsing, an ASA looks at the words in a sentence and their order. It then searches its knowledge repository to return information that corresponds to its recognized meaning of the sentence. To better understand exactly what a person wants, the system's automated ambiguity-resolution technology engages in a back-and-forth text dialog with the customer.
The Harris Direct brokerage company uses Conversagent's ASA technology to assist customers with tasks such as opening an IRA. When a customer types in "How do I open an IRA?" the virtual agent asks a clarifying question such as "Are you interested in a SEP, a Simple, or a Roth IRA?" The agent then provides information that corresponds to the customer's response.
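Conversagent hasn't published its engine, but the pattern-matching-plus-clarification flow described above can be sketched in a few lines of Python. Everything in this sketch (the keyword patterns, the clarifying question, and the tiny knowledge repository) is hypothetical and purely illustrative.

KNOWLEDGE = {
    "sep ira":    "Instructions for opening a SEP IRA ...",
    "simple ira": "Instructions for opening a Simple IRA ...",
    "roth ira":   "Instructions for opening a Roth IRA ...",
}

CLARIFYING_QUESTION = "Are you interested in a SEP, a Simple, or a Roth IRA?"

def respond(text):
    """Match words (and word pairs) in the customer's sentence against the
    repository; ask a clarifying question when the match is ambiguous."""
    words = text.lower()
    if "open" in words and "ira" in words:
        for product, answer in KNOWLEDGE.items():
            if product in words:
                return answer
        return CLARIFYING_QUESTION      # ambiguous: which kind of IRA?
    return "Could you rephrase that?"

print(respond("How do I open an IRA?"))        # -> clarifying question
print(respond("I want to open a Roth IRA"))    # -> Roth IRA instructions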
Can we talk?
Speech applications—also using natural language technologies—are being deployed for conversational telephone interactions. Peter Mahoney, vice president of worldwide marketing for the Speechworks division of Scansoft, notes that early speech applications basically involved a simple menu structure directing callers through a rigid path in a menu tree. "I think we've all experienced touch-tone hell," he says. "What you're seeing now with more advanced speech applications is a capability to have a more free-form conversation with the caller." Mahoney explains how Speechworks' voice applications operate: The natural language technology recognizes a human's voice, translates it into something the computer understands by relating it to a set of words and phrases stored in its memory (a grammar), and then matches the meaning to a set of commands.
Verizon's repair service uses a Speechworks application to serve its telephone customers. When a customer calls in, the virtual agent asks in an open-ended way what sort of problem the customer is experiencing. The virtual agent picks out key words from the customer's answer and asks questions to confirm its understanding, such as "Are you having a noisy line? Would you like me to fix that?" The agent then uses some of Verizon's proprietary repair systems, integrated in the virtual-agent system, to try to fix the line.
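Mahoney's description of relating recognized speech to a grammar and then to commands suggests a simple lookup structure. The sketch below is only an assumption-laden illustration, not SpeechWorks' or Verizon's actual code; the phrases and command names are invented.

GRAMMAR = {                    # recognized phrases mapped to commands
    "noisy line": "DIAGNOSE_LINE_NOISE",
    "static on the line": "DIAGNOSE_LINE_NOISE",
    "no dial tone": "DIAGNOSE_DIAL_TONE",
    "cannot make calls": "DIAGNOSE_OUTBOUND",
}

def interpret(recognized_text):
    """Relate the recognizer's output to known phrases and return the
    matching command, or None if the agent needs to ask again."""
    text = recognized_text.lower()
    for phrase, command in GRAMMAR.items():
        if phrase in text:
            # The agent would confirm before acting, e.g.,
            # "Are you having a noisy line? Would you like me to fix that?"
            return command
    return None

print(interpret("there is a lot of static on the line"))  # DIAGNOSE_LINE_NOISE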
Last fall, Edify, another company that creates speech applications, rolled out Kate, a virtual agent that specializes in helping bank customers over the phone. Kate's personality is designed to be empathetic and efficiently helpful. "If you want to do something like track a check or reorder checks or find out your account history or transfer funds," says Marie Jackson, Edify's vice president of marketing, "you can in essence have that whole transaction done by a virtual agent in a very natural, interactive-sounding way."
The system's speech recognizer uses neural networks and hidden Markov models to figure out what a customer says. A combination of statistical language models (SLMs)—a way of statistically classifying and inferring the key words and meaning of a complex phrase without having to understand the entire phrase's grammar—and business rules determines what the agent will say. Kate uses an SLM when it asks more open-ended questions, such as "How may I help you?" When Kate knows what data it needs to do a task, it uses a rules-based approach and directs the dialogue by asking the customer questions that have few answer choices, eventually getting specific data such as a bank account number.
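Edify hasn't detailed Kate's internals, but the split Jackson describes (an open-ended statistical step followed by rule-driven directed dialogue) might look roughly like this sketch, in which a crude keyword classifier stands in for the SLM and the slot lists stand in for business rules. All names here are hypothetical.

INTENT_KEYWORDS = {            # crude stand-in for a statistical language model
    "transfer": "TRANSFER_FUNDS",
    "reorder":  "REORDER_CHECKS",
    "history":  "ACCOUNT_HISTORY",
}

REQUIRED_SLOTS = {             # business rules: data each task needs
    "TRANSFER_FUNDS": ["from_account", "to_account", "amount"],
    "REORDER_CHECKS": ["account_number"],
    "ACCOUNT_HISTORY": ["account_number"],
}

def classify(utterance):
    """Open-ended step: infer the task from key words."""
    for keyword, intent in INTENT_KEYWORDS.items():
        if keyword in utterance.lower():
            return intent
    return None

def directed_dialogue(intent, answers):
    """Rule-based step: ask narrow questions until every slot is filled."""
    for slot in REQUIRED_SLOTS[intent]:
        if slot not in answers:
            return f"What is the {slot.replace('_', ' ')}?"
    return f"Completing {intent} with {answers}."

intent = classify("I'd like to transfer funds to savings")
print(directed_dialogue(intent, {"from_account": "checking"}))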
Edify technology also supports customers of a large consumer electronics company. The company's database contains a large amount of product information, including brand names as well as thousands of parts. When customers call in, the company's virtual agent responds to various statements, including "I am having problems with my camera" or "My CD player is broken." Chris Nichols, Edify's senior director of product marketing and management, explains that the agent identifies a specific product by picking up words it recognizes, such as product names, model numbers, and brands. Additionally, the virtual agent can check customer information, offer related services (such as buying a warranty), and connect customers to the appropriate support staff.
Putting a face on it
As practical agent technology becomes more interactive and emotive, adding an expressive visual face for customers to relate to can make virtual agents more effective. Graphical avatars can make routine service inquiries less impersonal and more engaging and can be programmed to acknowledge a customer's emotional state during the transaction.
The eGain Assistant is a self-service application that can function as a conversational agent on a Web site, in an instant messenger module, or in a chat room. The graphical avatar can talk and demonstrate facial expressions, responding to typed input in natural language from customers. Its responses are based on a knowledge base that's built for specific deployments. The Assistant's underlying AI technology is case-based reasoning, says Pedro Cortopassi, director of eGain. The natural language input gets built into an internal representation of key concepts that end users are trying to express. The application evaluates and matches concepts to the knowledge base of cases and presents the most relevant information to the user. Parsing the customer's words according to business rules, the avatar infers the customer's emotional state and reacts by showing empathy. The avatar can display a full range of emotions from happiness to sternness, which it might employ with an extremely abusive customer. A company can disable certain reactive expressions of the emotive range for particular customers, such as the stern response for a high-value strategic customer.
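eGain's case-based reasoning engine is proprietary; the following minimal sketch only illustrates the general idea of extracting concepts, retrieving the best-matching stored case, and applying a simple rule to pick a facial expression. The cases, word lists, and rules are invented for illustration.

import re

CASES = [
    {"concepts": {"password", "reset"}, "answer": "To reset your password ..."},
    {"concepts": {"wire", "transfer"},  "answer": "To send a wire transfer ..."},
]

ANGRY_WORDS = {"useless", "terrible", "ridiculous"}

def extract_concepts(text):
    """Turn free text into a bag of candidate concepts."""
    return set(re.findall(r"[a-z]+", text.lower()))

def respond(text):
    concepts = extract_concepts(text)
    # Retrieve the case with the largest concept overlap.
    best = max(CASES, key=lambda case: len(case["concepts"] & concepts))
    expression = "empathetic" if ANGRY_WORDS & concepts else "neutral"
    return best["answer"], expression

print(respond("This is useless. How do I reset my password?"))
# -> ('To reset your password ...', 'empathetic')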
The Assistant also practices assisted learning. It records each conversation it has and categorizes its response to customer input. The Assistant has programmed responses for things it knows, but it also has a level of knowledge for things it doesn't know about. Someone managing the system can use its reporting tool to locate what data the Assistant didn't recognize and then develop a new data structure that provides answers for those situations. The system automatically creates new cases for its knowledge base. Cortopassi says that the system can track contextual information, such as where the user is in the conversation, remembering what sort of information has been already given so that the system avoids asking repetitive or irrelevant questions, providing unsuitable information, or responding with an inappropriate expression. He adds that when the system is working well—or conversely, failing to help—"our system can also infer a little bit of the emotional content, such as frustration and happiness." Upon detecting such responses, the avatar can trigger actions based on business rules, such as transferring an unhappy customer to a human agent or up-selling or cross-selling a product to a happy customer.
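The assisted-learning loop Cortopassi describes (log what the Assistant couldn't answer, surface it through a reporting step, and let an administrator author new cases) could be sketched as follows. The data structures are hypothetical, not eGain's implementation.

knowledge_base = {"reset password": "To reset your password ..."}
unrecognized_log = []

def answer(query):
    for topic, response in knowledge_base.items():
        if topic in query.lower():
            return response
    unrecognized_log.append(query)        # recorded for later review
    return "I'm not sure about that yet."

def report_gaps():
    """What the reporting tool would show an administrator."""
    return list(unrecognized_log)

def add_case(topic, response):
    knowledge_base[topic] = response      # a new case joins the knowledge base

answer("How do I close my account?")
print(report_gaps())                      # ['How do I close my account?']
add_case("close my account", "To close your account ...")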
ABN AMRO Bank is using Rita (real-time Internet technical assistant), an eGain graphical avatar, to help its customers with tasks such as sending a wire money transfer. The avatar can drill down and walk customers through the steps; it can also direct them to related information. Rita is programmed to attend to rules and laws concerning fund transfers and knows when to request approval (such as for an amount that exceeds $500,000). If Rita doesn't understand a request, it can redirect the customer to another channel, such as email or live chat. At the end of Rita's interaction with a customer, it asks, "Did this information help you?" and tracks the feedback, counting and recording which questions it didn't answer adequately. The system then adds the necessary information, or the company can build the information into another point in the system, such as an FAQ list.
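The approval rule mentioned above lends itself to a tiny illustration. The $500,000 threshold comes from the article's example; the rest of the logic is hypothetical rather than ABN AMRO's or eGain's actual business rules.

APPROVAL_THRESHOLD = 500_000

def handle_wire_transfer(amount, understood=True):
    if not understood:
        return "Redirect the customer to another channel (email or live chat)"
    if amount > APPROVAL_THRESHOLD:
        return "Request approval before processing the transfer"
    return "Walk the customer through the transfer steps"

print(handle_wire_transfer(750_000))                  # needs approval
print(handle_wire_transfer(1_000, understood=False))  # redirect to email/chat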
Get smart
Jonathan Gratch, project leader of the Stress and Emotion project at the University of Southern California Institute for Creative Technologies, is interested in the AI aspects of coping. He's helped create a system called Stability and Support Operations (SASO), which focuses on negotiation skills, tactics, and planning. SASO incorporates speech recognition (using the Sonic system from the University of Colorado at Boulder), dialogue management, hierarchical STRIPS-style planning, USC's developments in natural language understanding, and Gratch's emotion-modeling work. The system creates a virtual environment with embodied characters that can interact, showing facial expressions and body language; the aim is to help people develop social intelligence. The program presents human players with situations to respond to, which unfold into scenarios based on the player's choices.
For example, a military application for teaching negotiation skills employs a virtual peacekeeping operation, in which the human player's job is to convince a virtual doctor to move a virtual clinic (see Figure 1). "There, the emotions help determine whether the character is negotiating in good faith or not with the trainee," Gratch says. The human player must determine from voice, facial, and body cues how the virtual character is responding to him or her. The human player can learn about emotion-focused coping strategies, such as assigning blame or abandoning goals, or problem-focused strategies, such as making plans to address a situation. "By changing some internal parameters and leveraging different coping models, we get a variety of strategies these characters can exhibit when they're negotiating with the trainee," Gratch says.


Figure 1. In this simulation of a peacekeeping mission, the user must try to convince a virtual doctor to move his clinic. (figure courtesy of USC Institute for Creative Technologies)
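Gratch's point about changing internal parameters to get different coping behavior can be caricatured in a few lines. The real SASO and emotion models are far richer; the parameter names and thresholds below are made-up stand-ins.

def choose_coping(controllability, blame_bias):
    """Pick a coping strategy from two internal parameters in [0, 1]."""
    if controllability > 0.6:
        return "problem-focused: make a plan to address the situation"
    if blame_bias > 0.5:
        return "emotion-focused: assign blame to the other party"
    return "emotion-focused: abandon the threatened goal"

# Different parameter settings yield different negotiating behavior.
print(choose_coping(controllability=0.8, blame_bias=0.2))
print(choose_coping(controllability=0.3, blame_bias=0.7))
print(choose_coping(controllability=0.3, blame_bias=0.2))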

Virtual agents are improving their ability to identify what we mean when we communicate with them, no matter how we put it. Advances in natural language AI, enhanced data-mining techniques, better decision-making capabilities, and developing emotive capabilities are helping virtual agents become more practical. As practical agents get better at appropriately responding to and interacting with us, they'll be able to help with more than simple tasks, such as teaching a wide variety of skills as well as helping with therapeutic goals such as smoking cessation, problem solving, coping with stress, and anger management.
AI Robots Team Up to Win Prisoner's Dilemma
Prisoner's Dilemma is a seemingly simple game in which autonomous agents—the prisoners—repeatedly choose whether to cooperate with or defect against one another. In the iterated tournament, agents face every other entrant in a round-robin format, accumulating points from each choice, and the highest-scoring entrant wins.
For 20 years, a strategy called Tit for Tat (TFT) proved unbeatable. The simple yet effective strategy has an agent cooperate on the first move and then mirror its opponent's previous move, cooperating with cooperators and defecting against defectors. However, in the 20th Anniversary Iterated Prisoner's Dilemma (IPD) Competition, held in November 2004, a new strategy proved victorious and ended TFT's winning streak.
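For readers unfamiliar with the game, here is Tit for Tat in a minimal iterated Prisoner's Dilemma sketch, using the conventional payoff values (temptation 5, reward 3, punishment 1, sucker's payoff 0); the 2004 competition's exact scoring may have differed.

PAYOFF = {                       # (my move, opponent's move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, opp_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(my_history, opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # (9, 14): exploited only once
print(play(tit_for_tat, tit_for_tat))     # (30, 30): mutual cooperation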
Rather than working against each other, the University of Southampton's AI-endowed software agents worked in unison to earn top honors.
While the victory raised a few eyebrows in the gaming world—with some attributing the win to bending the rules, if not outright cheating—researchers behind the strategy insist their victory is not only legitimate, but also a significant step toward advancing AI and gaming research.
Winning strategy
The winning strategy relies on the fact that IPD participants could enter multiple players in the competition, which let the Southampton team field a coordinated coalition.
"We divided our players into a single master player and several slaves," says team leader Nick Jennings, a computer science professor at the University of Southampton. "The slaves continually defect against other players but allow the master to defect against them; in essence, the slaves sacrifice their own chances of winning but increase the master's chances of winning."
The robots must communicate to successfully carry out the strategy.
"Since no outside form of communication is possible, our players use the initial sequence of moves that they make at the start of each interaction as a code word," explains Jennings.
However, some question the achievement. "The strategy is not new and from the perspective of the competition, I would even call it poor sportsmanship," says Jeroen Donkers, assistant professor of computer science at Universiteit Maastricht's Institute for Knowledge and Agent Technology.
"Whenever one allows a participant to enter with more than one player, such a strategy is possible," says Donkers. "The Southampton team used a code word to recognize a friendly player, but a query and answer pattern or even a probability distribution could be used."
Communicative agents
Jennings believes that finding ways for agents to better communicate with each other is the most valuable part of the research, one that can be applied to other AI areas.
"The key technique that our players in the IPD tournament are utilizing is the ability to form a coalition to tackle a problem—in this case, winning the tournament—that none of them were capable of achieving individually," he says.
"Much of the current research within the field of agent-based systems is exploring exactly how this should work in practice," says Jennings. "To do so, the work concentrates on the fundamental questions of how agents can learn to negotiate with one another, form coalitions, divide labor, share benefits, and even 'trust' each other."
Jennings uses the example of online auctions, in which agents will search out and bid for particular combinations of goods. Such agents are self-interested in the sense that they're attempting to keep costs down. However, it might be possible for an agent to form a coalition with other agents, perhaps agreeing not to participate in the same auctions and thereby avoiding the artificial price inflation that often occurs.
If the agents can reduce their costs by acting in this way, Jennings concludes, clearly their self-interest should direct them to cooperate rather than compete. According to Jennings, game theory and economics concepts should combine with more traditional AI subjects, such as planning.
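Jennings' auction example can be illustrated with a trivial coalition rule: divide the auctions so that no two members bid on the same lot. The sketch below is hypothetical and ignores everything that makes real coalition formation hard (negotiation, payment division, trust).

def assign_auctions(agents, auctions):
    """Give each auction to exactly one coalition member (round-robin)."""
    assignment = {agent: [] for agent in agents}
    for i, auction in enumerate(auctions):
        assignment[agents[i % len(agents)]].append(auction)
    return assignment

coalition = ["agent_a", "agent_b", "agent_c"]
auctions = ["camera_lot_1", "camera_lot_2", "laptop_lot_1", "laptop_lot_2"]
print(assign_auctions(coalition, auctions))
# Each lot has a single coalition bidder, so members avoid bidding
# each other's prices up.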
Donkers does not see anything particularly novel about the research.
"The concept of cooperating autonomous agents is the basic idea behind multiagent systems," he says. "The general question in this area of artificial intelligence research is whether it is possible to solve a problem by a group of independent autonomous agents that communicate among each other and negotiate to cooperate, without the need for a central planning agent."
Staying power?
Jennings' team believes that simple environments such as the IPD competition are prime testing grounds for developing these ideas.
Donkers remains skeptical. "The IPD competition is an ideal domain to test some ideas, but it is too academic for real-world problems."
Still, Southampton's team plans to defend its title at the next round of the IPD competition, to be held in April at the IEEE Symposium on Computational Intelligence and Games. In fact, the team is stepping up its game, anticipating stiffer competition.
"We will be entering some updated versions of our previous strategies," says Jennings. "One idea we have is the possibility of a master exploiting the slaves of other coalitions."
According to Donkers, the 2005 IPD competition organization has revised the rules in recognition of what he refers to as the collusion problem. "The IPD competition now consists of four separate competitions," he says. "The third one explicitly allows coalitions."
Still, almost everyone seems to agree on one thing: emerging AI-based strategies might have rendered TFT obsolete.
Donkers doesn't think the strategy will survive any longer in the IPD competition. "Strategies will become multilayered, just like the famous RoShamBo strategy Iocaine Powder by Dan Egnor." Egnor won the 1st International RoShamBo Programming Competition in 1999.
Jennings agrees. "Researchers have developed strategies that outperform TFT, such as Gradual by Bruno Beaufils, Jean-Paul Delahaye, and Philippe Mathieu, and Adaptive by Elpida Tzafestas."
However, Jennings also says that they're much more complex. "For biologists looking for explanations of altruism in real populations, the simplicity of Tit for Tat means that it will retain its importance."