Issue No. 5 - September/October 2004 (vol. 19), pp. 4-7
Published by the IEEE Computer Society
ABSTRACT
Gaming Technology Helps Troops Learn Language (Danna Voth). DARPA's Tactical Language Project aims to help soldiers in Arabic-speaking countries learn the language quickly using a game-based program that incorporates speech recognition technology. The program targets practical speaking skills combined with an understanding of cultural cues and appropriate nonverbal behaviors.
Also: Fantasy Island? Testing AI-Enabled Homeland Security (Benjamin Alfonsi)
GAMING TECHNOLOGY HELPS TROOPS LEARN LANGUAGE
Danna Voth
For many people, learning a language means years of classes and hours of audiotapes, and it might be months before you learn to say something useful. That simply won't work for US military personnel on missions in non-English-speaking countries: they need real communication skills quickly. To meet the demand, DARPA is sponsoring the Tactical Language Project as part of its training superiority program, nicknamed DARWARS.
As part of the project, scientists at the University of Southern California Center for Research in Technology for Education (CARTE) are creating an interactive, video-game-based program designed to teach troops on post-war missions how to communicate in Levantine Arabic. In parallel development, the researchers are creating a second tutor for Iraqi Arabic.
All a user needs is headphones, a computer with a good graphics card, and about 80 hours to use the program, which has two parts: an interactive game based on the popular Unreal Tournament and an interactive tutorial (see Figure 1).


Figure 1. The Tactical Language Project's interactive program, which features (a) a game and (b) a tutorial. The project aims to teach users not only the spoken language—a form of Arabic in this case—but also the body language.

Focused on practical skills
Rather than comprehensive language proficiency, the program targets more practical speaking skills combined with an understanding of cultural cues and appropriate nonverbal behaviors. "Most people assume that Arabic is a very hard language to learn, and, in fact, in the military you usually have to score very high on a language ability test to even qualify to learn Arabic," says Lewis Johnson, director of CARTE. "Our goal is to be able to provide effective instruction to a wider range of learners than is typical for language learning courses."
The program's tutorial portion introduces the language and social skills, offers hints and feedback, and provides encouragement as the user learns. The game portion lets users practice skills in virtual scenarios that resemble real-world situations they might soon encounter.
"Tactical language is a subset of language and culture that enables you to accomplish particular tasks or missions," Johnson says. By combining spoken language skills with social skills in a realistic, but virtual, setting, the program teaches users how to build relationships necessary for successful missions. "As you are playing the game, you can bring up a screen that shows how well you are building rapport with the different characters of the game," Johnson says.
In one scenario, the user's character walks into a cafe and explains to one gentleman why he's there. Another gentleman jumps up and accuses the user's character of being a CIA agent. The user must find a way to de-escalate the stress level while politely asserting himself. To help with this task, the program teaches polite forms of address, such as the word "sayyid," and appropriate gestures, such as one used for contradicting someone while holding your ground—moving your hands side to side at waist level. "Knowing this combination of politeness and address together with appropriate gestures is important," Johnson says.
Feedback a primary feature
Johnson prefers the program's approach over traditional language teaching methods. "One advantage is the intense amount of interaction and feedback that you experience going through the game," he says.
The scientists tested users on different versions of the program and found that the versions that provided interaction were most effective. "To be able to use your language and then see how people respond to your use seems to be extremely important," Johnson says.
The feedback feature was made possible in part by speech recognition technology. Shrikanth Narayanan, a scientist at USC's Signal and Image Processing Institute, worked on the program's speech recognition piece. "The speech technology is used to see how well people pronounce things," he says. Typically, speech recognition programs are designed to figure out what has been said, often converting speech to text. This program has an additional requirement: it must recognize not only what a user said but also how well the user pronounced it.
Those conflicting goals offered a research problem that fascinated Narayanan. "You want to make the machine recognize what they are saying regardless of the pronunciation because all the learners are non-native speakers of the language," he says. "At the same time, you want to be able to assess automatically how different they are from a canonical native speaker of the foreign language."
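The article doesn't describe the project's scoring method, but one common way to assess pronunciation against a canonical native model is a goodness-of-pronunciation style score: force-align the learner's speech to the expected phone sequence, then compare each expected phone's likelihood with that of the best competing phone. The sketch below only illustrates that idea; the acoustic-model interface and function names are assumptions, not the project's actual code.

# Illustrative sketch of a goodness-of-pronunciation (GOP) style score.
# The acoustic model interface (log_likelihood) is hypothetical; any
# phone-level recognizer returning per-segment log-likelihoods would do.

from dataclasses import dataclass
from typing import List

@dataclass
class AlignedPhone:
    phone: str            # expected phone label, e.g. "s", "a", "j"
    frames: List[list]    # acoustic feature frames aligned to this phone

def gop_scores(aligned: List[AlignedPhone], acoustic_model, phone_set) -> List[float]:
    """Return one score per expected phone: higher means closer to a
    canonical native pronunciation, lower flags a likely mispronunciation."""
    scores = []
    for seg in aligned:
        # Likelihood of the frames under the *expected* phone model.
        target = acoustic_model.log_likelihood(seg.frames, seg.phone)
        # Likelihood under the best competing phone (free phone loop).
        best = max(acoustic_model.log_likelihood(seg.frames, p) for p in phone_set)
        # Normalized log-likelihood ratio per frame.
        scores.append((target - best) / max(len(seg.frames), 1))
    return scores

def feedback(scores, aligned, threshold=-1.0):
    """Turn raw scores into tutor feedback on the worst-pronounced phones."""
    return [seg.phone for seg, s in zip(aligned, scores) if s < threshold]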
The program also offers chances to replay a scene and try a problem again. Ralph Chatham, the DARWARS program manager, emphasizes this comfort factor: Because this learning doesn't occur in a classroom, you can make errors in private without fear of embarrassment.
Intelligent agents at play
The program uses intelligent agents such as the nonplayer characters (the virtual townspeople in the game) and the pedagogical agent (the virtual tutor who works with the user throughout the program). Both agents employ speech-recognition technology in interacting with the user. Game AI based on a program called PsychSim controls the nonplayer characters' behavior. PsychSim, developed by David Pynadath and Stacy Marsella, scientists at USC's Information Sciences Institute, is a cognitive model used in multiagent simulations.
The AI uses a partially observable Markov decision process model for choosing actions in the virtual world. Characters' belief states about the virtual world drive their decisions among different actions. To train this model, the scientists sketch examples of dialog between the game's characters and then annotate them as sequences of speech acts by the different characters. The program uses behaviors produced in situations that follow those patterns to make reasonable action decisions in alternative dialogs.
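PsychSim itself isn't shown here, but the decision model the researchers describe can be pictured as a small belief-update-and-act loop: the character maintains a probability distribution over hidden states of the world (and of the player), revises it after each observed speech act, and picks the action with the highest expected utility. The toy states, observations, and utilities below are invented for illustration and are far simpler than PsychSim's actual models.

# Minimal sketch of belief-state action selection in the spirit of a
# partially observable Markov decision process (POMDP). The states,
# actions, and probability tables are invented for illustration.

def update_belief(belief, observation, obs_model, trans_model):
    """Bayes update of P(state) after hearing a speech act from the player."""
    new_belief = {}
    for s_next in belief:
        prior = sum(trans_model[s][s_next] * belief[s] for s in belief)
        new_belief[s_next] = obs_model[s_next].get(observation, 1e-6) * prior
    total = sum(new_belief.values()) or 1.0
    return {s: p / total for s, p in new_belief.items()}

def choose_action(belief, actions, utility):
    """Pick the action with the highest expected utility under the belief."""
    return max(actions, key=lambda a: sum(belief[s] * utility[s][a] for s in belief))

# Toy example: a cafe patron who is either 'suspicious' or 'trusting'.
belief = {"suspicious": 0.5, "trusting": 0.5}
trans_model = {"suspicious": {"suspicious": 0.9, "trusting": 0.1},
               "trusting":   {"suspicious": 0.1, "trusting": 0.9}}
obs_model = {"suspicious": {"polite_greeting": 0.3, "blunt_question": 0.7},
             "trusting":   {"polite_greeting": 0.8, "blunt_question": 0.2}}
utility = {"suspicious": {"accuse": 0.6, "answer": 0.1},
           "trusting":   {"accuse": 0.0, "answer": 0.9}}

belief = update_belief(belief, "polite_greeting", obs_model, trans_model)
print(choose_action(belief, ["accuse", "answer"], utility))  # -> "answer"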
Another intelligent agent, which Johnson calls the "director," observes interactions in a particular scene and makes decisions about the scene's dramatic flow. Taking into account the user's expertise, the director decides how difficult a scenario should be and issues an interaction policy to the individual agents. The director decides if a given set of characters should operate in a forgiving mode, tolerant of the user's mistakes, or a less forgiving mode, suited for the advanced user.
As each user works with the program, the software keeps track of the user's actions, whether they were appropriate, and what those actions indicate about the user's skill. Within the network of possible skills, the learner model checks off the skills in which the user demonstrates competency, providing an estimate of the user's proficiency level. The pedagogical agent uses the learner model to decide how to help the user; the more skill the user develops, the less help the pedagogical agent should provide.
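The article doesn't give implementation details for the learner model or for how the director picks its interaction policy. A minimal sketch of the idea, tracking evidence per skill, estimating mastery, and scaling back help and forgiveness as mastery grows, might look like the following. The skill names, thresholds, and policy labels are assumptions made for illustration.

# Hedged sketch of a simple learner model: count successes and attempts per
# skill, estimate mastery, and let the tutor and director choose how much
# help to give and how forgiving a scene should be.

from collections import defaultdict

class LearnerModel:
    def __init__(self):
        self.attempts = defaultdict(int)
        self.successes = defaultdict(int)

    def record(self, skill: str, appropriate: bool):
        self.attempts[skill] += 1
        if appropriate:
            self.successes[skill] += 1

    def mastery(self, skill: str) -> float:
        # Laplace-smoothed success rate as a crude proficiency estimate.
        return (self.successes[skill] + 1) / (self.attempts[skill] + 2)

def tutor_help_level(model, skill):
    m = model.mastery(skill)
    return "full_hint" if m < 0.4 else "gentle_prompt" if m < 0.75 else "no_help"

def director_policy(model, scene_skills):
    avg = sum(model.mastery(s) for s in scene_skills) / len(scene_skills)
    return "forgiving" if avg < 0.6 else "less_forgiving"

# Example: after a few exchanges in the cafe scene.
lm = LearnerModel()
lm.record("polite_address", True)
lm.record("polite_address", True)
lm.record("deescalation_gesture", False)
print(tutor_help_level(lm, "deescalation_gesture"))               # -> "full_hint"
print(director_policy(lm, ["polite_address", "deescalation_gesture"]))  # -> "forgiving"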
Scenarios help lessons stick
Hannes Vilhjalmsson, who designed the character control for gestures, studies nonverbal behavior in face-to-face encounters. "In a social situation, we make extensive use of the body," he says. To build in the virtual agents' nonverbal awareness, he gathered data on gestures, drawing from manuals outlining cultural taboos from the Defense Language Institute, videos of people filmed in the Middle East, and videos of the native Lebanese speakers filmed while recording their voices for use in the program.
"A simulated environment will improve your language learning because it will give you a rich social context for what you are learning," Vilhjalmsson says. "You don't just get something to repeat back to the system, but rather you have to take what you learn into an environment and communicate with people as if you were in real life."
Vilhjalmsson is working on making the intelligent agents even smarter in his "smart bodies" project. "Usually intelligent agents operate at a cognitive level," he says. "They come up with decisions about what they want to do for the next iteration such as tell you something or not tell you something, and then you can hear them speak or make an action. But there is a missing link, which is 'embodied intelligence.' You get the mind or the body, but not really the link between the two."
Lt. Ian Strand tested an early version of the program while a cadet at West Point last fall. "It was great," he says. "The play is really easy and the voice recognition software works really well. You don't get too frustrated. You can navigate quickly and easily just like you're actually playing the Unreal Tournament game."
He thought the program was fun to use and was glad to quickly learn some Arabic dialect. "Being able to use this program to pick up those fundamentals of spoken Arabic is very useful—you can really apply it when you go over to the Arab world."
FANTASY ISLAND? TESTING AI-ENABLED HOMELAND SECURITY
Benjamin Alfonsi
Some saw Ayers Island, an all-but-abandoned, 60-acre tract of land off the coast of Maine, as a wasteland. George Markowsky, chair of the University of Maine's Computer Science Department, saw it as a possibility.
Now plans are under way to transform Ayers Island (www.ayersisland.com) into a test bed for Markowsky's brainchild, Intelligent Island, a security technology that will use AI to provide unprecedented surveillance of and intelligence about the island. The greater goal is to one day adapt the same technology on a much larger scale.
"You can protect the office, you can protect the building, but the next step in homeland security is figuring out a way to protect larger areas," says Markowsky, who is also president of Ayers Island LLC.
Located in Orono, Maine, Ayers Island is accessible only by bridge or boat. As such, Markowsky says, it's an ideal place to undertake such an investigation. "Ayers Island is isolated, so it provides the perfect laboratory for this kind of research," he says.
Background
Markowsky first became interested in applying his technical savvy to the area of antiterrorism security after the 1995 bombing of the Murrah Federal Building in Oklahoma City, Okla. His interest only intensified after the 9/11 attacks.
"The federal building in Oklahoma City looked a lot like the one in Bangor [Maine], and I thought, 'It could have been me,'" Markowsky says. "At that time I knew we weren't taking the problem seriously enough, especially concerning first responders to an attack like that. Then, of course, September 11 happened."
Working with other scientists in the US and Europe, and in Ukraine in particular, Markowsky hopes to have a prototype in place within a year and sees this research as ongoing.
Ayers Island, which is already a military training site, employs basic security using standard video surveillance cameras. But in the next several months, the researchers expect to launch a software suite that will use computer sensors and motion detectors to identify visitors (and intruders), track their movements, and eventually study their behavior.
The software, as yet unnamed, is funded in part by the Civilian Research and Development Foundation (www.crdf.org) and will serve as the centerpiece of the Intelligent Island project.
Role of AI
"The first question that the sensors should be able to help answer is, 'How many people are on this island?'" Markowsky says. This information would be useful not only for security purposes, but also for the safety of first responders, he says, citing 9/11 as an example.
"Next, the question becomes how to evaluate these people's behavior," he says. "That's where artificial intelligence comes in. You need both data and intelligence for a successful implementation.
"AI will be implemented in a hierarchical manner. You will need it down at the sensor level to limit the amount of information going back to the central system, and you will also need it at a high level to integrate and manage the information coming from the sensors. This will involve a lot of filtering."
He compares the technology to the human brain. "It's surprising how much filtering is done by the various senses as they transmit information to the brain," Markowsky says.
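Markowsky doesn't describe the software's architecture beyond this hierarchy, but the two-level filtering he outlines can be sketched roughly as follows: each sensor node suppresses routine readings locally, and a central aggregator correlates the surviving events into situations worth a human's attention. Every class name, event type, and threshold below is hypothetical.

# Illustrative two-level filtering pipeline: sensor-level filters drop
# routine readings; the central aggregator fuses the remaining events into
# candidate situations for human review.

from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    sensor_id: str
    kind: str          # e.g. "motion", "camera"
    magnitude: float   # normalized 0..1 activity level
    timestamp: float   # seconds

class SensorNode:
    """Edge-level AI: report only readings that exceed a local threshold."""
    def __init__(self, sensor_id, threshold=0.7):
        self.sensor_id = sensor_id
        self.threshold = threshold

    def filter(self, readings: List[Reading]) -> List[Reading]:
        return [r for r in readings if r.magnitude >= self.threshold]

class CentralAggregator:
    """High-level AI: correlate events across sensors and flag situations."""
    def __init__(self, window=30.0, min_sensors=2):
        self.window = window          # seconds within which events correlate
        self.min_sensors = min_sensors

    def situations(self, events: List[Reading]) -> List[str]:
        events = sorted(events, key=lambda e: e.timestamp)
        flagged = []
        for e in events:
            nearby = {x.sensor_id for x in events
                      if abs(x.timestamp - e.timestamp) <= self.window}
            if len(nearby) >= self.min_sensors:
                flagged.append(f"Possible intrusion near {e.sensor_id} "
                               f"at t={e.timestamp:.0f}s")
        return flagged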
The complexities and challenges of homeland security are not lost on Markowsky. He even concedes that Intelligent Island, at present, is far from solving them.
"We're just laying the foundation," he says. "At this stage of the game, I see AI's role as managing data and identifying situations that humans need to examine. This role will probably evolve over time with AI playing a larger role as its abilities improve.
"A useful analogy might be that of controlling the temperature in a room. You can have a human turning the furnace on and off, as needed, to try to keep the room at a comfortable level. After a while you might decide that a thermometer hooked up to a switch [a thermostat] can do the job at least as well, and the human can move on to other things. Similarly, our initial efforts on the Intelligent Island project will be crude and primitive, but with time I expect that we will learn how to do things better and have AI play an ever-increasing role."
Larger implications
Still, as it unfolds, the Intelligent Island project is sure to raise a series of technological, ethical, and even legal questions. To what extent can AI-enhanced technology actually improve homeland security? And, with respect to privacy considerations, how much is too much?
According to Markowsky, a successful Intelligent Island project would serve as a large-scale, real-world proof of concept. "The idea is to keep expanding the scale," he says. "Solve a problem at one level, then move to the next level."
He says that, at present, Ayers Island offers a realistic scale. But suppose the technology can one day be scaled to accommodate a larger island, Manhattan, for example. Avi Rubin, professor of computer science at Johns Hopkins University and technical director of the Hopkins Information Security Institute, says it could seriously infringe upon privacy rights.
"In places where there is a real and immediate threat to personal safety, then perhaps the privacy compromise makes sense," he says. "However, if privacy invading technologies are deployed as a preventative measure, then the loss in personal freedom is devastating."
Markowsky acknowledges that privacy is a legitimate concern, but believes "a systematic, scientific approach to balancing homeland security and personal privacy is far better than the ad hoc approach we've been seeing in response to everything that's been happening in the US and the world."
Rubin has his doubts. "A video camera in plain view in a dark parking lot with a history of violent crime makes sense; cameras on every street corner do not," he counters. "The Ayers Island project demonstrates how you can achieve great security at the cost of total privacy compromise."