Issue No. 06, Nov.-Dec. 2012 (vol. 27), pp. 60-75
Anton Nijholt , University of Twente
Ronald C. Arkin , Georgia Institute of Technology
Sébastien Brault , University Rennes 2
Richard Kulpa , University Rennes 2
Franck Multon , University Rennes 2
Benoit Bideau , University Rennes 2
David Traum , University of Southern California
Hayley Hung , University of Amsterdam
Eugene Santos Jr. , Dartmouth College
Deqing Li , Dartmouth College
Fei Yu , Dartmouth College
Lina Zhou , University of Maryland, Baltimore County
Dongsong Zhang , University of Maryland, Baltimore County
ABSTRACT
Many applications, including those related to safety, security, and warfare, require knowledge about how to deceive. Speech and text analysis can help detect deception, as can cameras, microphones, physiological sensors, and intelligent software. Models of deception and noncooperation can make a virtual or mixed-reality training environment more realistic, improve immersion, and thus make it more suitable for training military or security personnel. Robots might need to operate in physical, nontraining environments where they must perform military activity, including misleading the enemy. The contributions to this installment of Trends & Controversies present state-of-the-art research approaches to the analysis and generation of noncooperative and deceptive behavior in virtual humans, agents, and robots; the analysis of multiparty interaction in the context of deceptive behavior; and methods to detect misleading information in texts and computer-mediated communication. Articles include: "Computational Deception and Noncooperation," by Anton Nijholt; "Robots that Need to Mislead: Biologically-Inspired Machine Deception," by Ronald C. Arkin; "Deception in Sports Using Immersive Environments," by Sébastien Brault, Richard Kulpa, Franck Multon, and Benoit Bideau; "Non-Cooperative and Deceptive Virtual Agents," by David Traum; "Deception Detection in Multiparty Contexts," by Hayley Hung; "Deception Detection, Human Reasoning, and Deception Intent," by Eugene Santos Jr., Deqing Li, and Fei Yu; and "Automatic Deception Detection in Computer-Mediated Communication," by Lina Zhou and Dongsong Zhang.
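To make the text-analysis angle concrete, the following is a minimal illustrative sketch of the kind of linguistic-cue scoring that automated deception-detection work on computer-mediated communication often discusses. The cue lists, weights, threshold-free heuristic, and the names cue_features and deception_score are assumptions introduced here for illustration; they are not the method of any of the articles in this installment.

# Illustrative sketch: score a text message against a few linguistic cues
# (message length, first-person pronoun rate, negation rate) of the sort
# examined in deception-detection research on computer-mediated communication.
# All cue lists and weights below are placeholder assumptions, not values
# taken from the articles in this installment.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIONS = {"no", "not", "never", "none", "nothing"}

def cue_features(message: str) -> dict:
    """Extract simple linguistic cue counts from a text message."""
    words = re.findall(r"[a-z']+", message.lower())
    total = len(words) or 1  # avoid division by zero for empty input
    return {
        "word_count": total,
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        "negation_rate": sum(w in NEGATIONS for w in words) / total,
    }

def deception_score(message: str) -> float:
    """Combine cues into a heuristic score; higher means more suspicious.
    Weights are arbitrary placeholders for illustration."""
    f = cue_features(message)
    return (0.5 * f["negation_rate"]
            + 0.3 * (1.0 - f["first_person_rate"])      # fewer self-references
            + 0.2 * min(f["word_count"] / 100.0, 1.0))  # longer, padded messages

if __name__ == "__main__":
    print(deception_score("I did not take the files, nothing happened."))

A deployed system would learn cue weights from labeled data and combine many more cues (and typically nonverbal signals as well) rather than hard-coding a handful of word lists as done here.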
INDEX TERMS
Virtual reality, Human-computer interaction, Training, agents, human-centered computing, deception detection, computer-mediated communication
CITATION
Anton Nijholt, Ronald C. Arkin, Sébastien Brault, Richard Kulpa, Franck Multon, Benoit Bideau, David Traum, Hayley Hung, Eugene Santos Jr., Deqing Li, Fei Yu, Lina Zhou, Dongsong Zhang, "Trends & Controversies," IEEE Intelligent Systems, vol. 27, no. 6, pp. 60-75, Nov.-Dec. 2012, doi:10.1109/MIS.2012.116