Issue No.01 - January-June (2010 vol.1)
Published by the IEEE Computer Society
Emotions profoundly shape behavior and lend meaning and direction to human existence. Even during the “age of reason,” when logic and dispassionate science reigned supreme, David Hume acknowledged that “reason is, and ought to be, only the slave of the passions” (Hume, 1740, p. 295). Around the same time, Enlightenment thinkers enshrined in the US Declaration of Independence the “unalienable right” to pursue happiness. And today, people passionately pursue this right through technology. The human race spends billions of dollars and hours each year stoking its emotions in computer games, finding love in virtual worlds, “flaming” in blogs and e-mail, and cursing bad interfaces. Perhaps more ominously, technology allows the emotions of individuals to magnify into a force that can shape a society, as demonstrated when international markets rise and fall depending on the weather (Hirshleifer and Shumway, 2003).
Affective Computing is the field of study concerned with understanding, recognizing, and utilizing human emotions and other affective phenomena in the design of technological systems. Research in the area is motivated by the fact that emotion pervades human life—emotions motivate and shape our individual thoughts and social behavior, they promote social bonds between people and between people and artifacts, and emotional cues play an important role in communicating attitudes and intentions to other social actors (be they human or computer). Technology is less acceptable if it disturbs our emotions; more efficient if it engages them productively; more attractive if it appeals to them; and often it is developed with the sole purpose of enabling us to experience them (as with entertainment technology). Since the coining of the term by Picard in 1995 (and see her personal perspective on the development of the field within this issue), affective computing has emerged as a cohesive interdisciplinary field with its own international conference (the International Conference on Affective Computing and Intelligent Interaction) and professional society (the HUMAINE Association), and today, its own journal.
The IEEE Transactions on Affective Computing is intended to be a cross-disciplinary journal aimed at disseminating results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. The journal will publish original research on the principles and theories explaining why and how affective factors condition interaction between humans and technology, on how affective sensing and simulation techniques can inform our understanding of human affective processes, and on the design, implementation, and evaluation of systems that carefully consider affect among the factors that influence their usability.
Readers will see that this first issue emphasizes an interdisciplinary perspective on the field and, for me, this is an essential characteristic of affective computing. Emotion has been studied as a science at least since Aristotle, and an enormous body of theoretical and empirical work exists across a wide range of disciplines, including neuroscience, ethology, psychology, anthropology, sociology, economics, art, and literature. Computer scientists and engineers are newcomers to the game, and bring new perspectives and new tools to the challenge of recognizing, understanding, and shaping human emotions: the ability to automatically acquire enormous datasets of human behavior will revolutionize how we understand, model, and shape human behavior (see Calvo, this issue); technology demands rigor, and attempts to “computationalize” an emotion theory can highlight its vagueness and inconsistent assumptions (see Gratch, Marsella, and Petta, 2009); further, being outsiders to the existing body of emotion research, technologists can bring fresh new perspectives and can avoid the pitfalls of the sometimes sterile theoretical disputes that can grip any mature science (see Kuhn, 1962). That said, technologists ignore well-established findings, theories, and methodologies at their peril. To one familiar with social science research on emotion, papers in affective computing can sometimes seem naive: ignoring important distinctions, rediscovering long-documented phenomena, or employing poor experimental design. To be successful as a field, affective computing must bring new ideas and technology, but not recapitulate the common confusions and missteps that other fields spent considerable effort to overcome.
Several features of the journal reinforce my interdisciplinary perspective on the field. The editorial board draws from a wide range of scientific expertise, and authors who submit to the journal can expect to be tested on their interdisciplinary knowledge. Papers will often be reviewed by reviewers with different disciplinary perspectives, and the review process should be seen as a valuable tool for disseminating knowledge from other perspectives on the phenomena of emotion. Occasionally, as in this first issue, I will select an article that facilitates discussion across disciplines and invite commentary articles that highlight differences of opinion, theory, or practice across fields.
As the inaugural issue of the journal, this volume contains two appetizers before we get to the meat of the field. We begin with a personal reflection by the founding figure of the field, Rosalind Picard of the MIT Media Lab. Roz first coined the term “affective computing” 15 years ago, at a time when the topic of emotion had a dim reputation within the computational sciences. Her commentary, “From Laughter to IEEE,” recounts these early challenges and the fellow visionaries who have led us to this point. We next introduce our impressive list of associate editors. Collectively, they bring a broad range of expertise to the field, spanning neuroscience, cognitive science, psychology, robotics, linguistics, human-computer interaction, the learning sciences, health, multimedia, design, and engineering.
The scientific articles begin with a conversation on affect detection. Rafael Calvo and Sidney D’Mello begin with a review article of models, methods, and applications of affect detection. This is followed by two commentary articles from prominent emotion psychologists. Arvid Kappas emphasizes some of the challenges and pitfalls that can confront techniques for automatically recognizing affect. Rainer Reisenzein offers a broader perspective on the topic, arguing that affect detection is best seen as a special case of, and should be performed within the context of, a larger effort on “mental state detection.”
Rounding out the issue, Ptaszynski, Maciejewski, Dybala, Rzepka, and Araki present work on inferring the emotional content of computer-mediated conversations, which often rely on obscure symbols such as OMG, ;-), and :-*. Finally, Bickmore, Fernando, Ring, and Schulman illustrate the strong effect that human touch has on our emotional state and subsequent decisions, and present a robot that yields the same beneficial outcomes.
I close this introduction with a few personal words. My own introduction to this field began 13 years ago when, somewhat by accident, Paul Rosenbloom gave me the opportunity to explore the role of emotion with Soar, a cognitive architecture he had co-developed with John Laird and Allen Newell. Since then, I’ve come to appreciate the strange and wonderful ways that emotion colors our thinking and our interactions with technology. And as I’ve grown into more of a leadership role within the community, I also appreciate the importance of sharing this knowledge and sense of wonderment with the next generation of affective computing researchers. The launch of an IEEE journal on this topic is an important milestone. It definitively signals that the topic of emotion has established itself as a serious domain of discourse within the engineering sciences. It is an achievement only made possible through considerable personal investment from many individuals in the field. But a journal is just a vehicle. Now it is up to you to drive this vehicle in whatever way your wonderment for emotion and technology may take you.
This journal would not have been possible without the tireless work of several individuals, only a few of whom are recognized in the steering committee and editorial board of the journal. We owe Roz Picard a considerable debt for giving a name to the field and tirelessly advancing the community through research and service. Efforts to launch a journal were started at the request of the membership of the HUMAINE Association. I became involved as an officer of this society and through the prodding of Roddy Cowie, and I leaned heavily on the support of Roddy, Maja Pantic, Björn Schuller, and Jianhua Tao. The initial proposal to the IEEE leveraged an earlier effort to create a journal, spearheaded by Fiorella de Rosis and Roz. Within the IEEE I received considerable support and advice from Sorel Reisman and Alicia Stickley. And ongoing operations are only made possible through the efforts of the IEEE and ScholarOne staff, including Joyce Arnold, Kathy Santa Maria, and Kristen Anderson.
J. Gratch, S. Marsella, and P. Petta, “Modeling the Antecedents and Consequences of Emotion,” J. Cognitive Systems Research, vol. 10, no. 1, pp. 1-5, 2009.
D. Hirshleifer and T. Shumway, “Good Day Sunshine: Stock Returns and the Weather,” J. Finance, vol. 58, pp. 1009-1032, 2003.
D. Hume, A Treatise of Human Nature. Oxford Univ. Press, 1740 (1967 ed.).
T.S. Kuhn, The Structure of Scientific Revolutions. Univ. of Chicago Press, 1962.
Jonathan Gratch received the PhD degree in computer science from the University of Illinois at Urbana-Champaign in 1995. He is an associate director for Virtual Humans Research at the University of Southern California’s (USC) Institute for Creative Technologies, a research associate professor in the Department of Computer Science, and co-director of USC’s Computational Emotion Group. His research focuses on virtual humans (artificially intelligent agents embodied in a human-like graphical body) and computational models of emotion. He studies the relationship between cognition and emotion, the cognitive processes underlying emotional responses, and the influence of emotion on decision making and physical behavior. A recent emphasis of this work is on social emotions, emphasizing the role of contingent nonverbal behavior in the co-construction of emotional trajectories between interaction partners. His research has been supported by the US National Science Foundation, DARPA, AFOSR, and RDECOM. He is on the editorial board of the journal Emotion Review and is the president of the HUMAINE Association for Research on Emotions and Human-Machine Interaction. He is a sitting member of the organizing committee for the International Conference on Intelligent Virtual Agents (IVA) and a frequent organizer of conferences and workshops on emotion and virtual humans. He belongs to the American Association for Artificial Intelligence (AAAI) and the International Society for Research on Emotion. He is the author of more than 100 technical articles.
Elisabeth André is a full professor of computer science at Augsburg University, Germany, and chair of the Laboratory for Multimedia Concepts and Their Applications. Prior to that, she worked as a principal researcher at DFKI GmbH, where she led various academic and industrial projects in the area of intelligent user interfaces, one of which was honored with the European IT Prize. Her current research interests include affective computing, multimodal user interfaces, and synthetic agents. She has been involved in a number of international research collaborations, such as the EU-funded projects and networks CALLAS, Dynalearn, eCIRCUS, METABO, and IRIS, and the DFG-funded project CUBE-G. She is also a member of the HUMAINE Association for researchers in emotion-oriented/affective computing. She has been program cochair of major international conferences, such as the Seventh International Conference on Intelligent Virtual Agents (IVA ’07), the 23rd Annual Conference on Computer Animation and Social Agents (CASA ’10), and the International Conference on Intelligent User Interfaces (IUI ’03 and IUI ’11). She has more than 160 papers in refereed journals and high-quality conferences. Among other things, she is the editor of a volume on Affective Dialogue Systems. In summer 2007 she was a fellow of the Alcatel-Lucent Foundation for Communications Research.
Jeremy Bailenson is founding director of Stanford University’s Virtual Human Interaction Lab and an associate professor in the Department of Communication at Stanford. He received the BA degree cum laude from the University of Michigan in 1994 and the PhD degree in cognitive psychology from Northwestern University in 1999. After receiving his doctorate, he spent four years at the Research Center for Virtual Environments and Behavior at the University of California, Santa Barbara, as a postdoctoral fellow and then as an assistant research professor. His main area of interest is the phenomenon of digital human representation, especially in the context of immersive virtual reality. He explores the manner in which people are able to represent themselves when the physical constraints of body and veridically-rendered behaviors are removed. Furthermore, he designs and studies collaborative virtual reality systems that allow physically remote individuals to meet in virtual space, and explores the manner in which these systems change the nature of verbal and nonverbal interaction. His findings have been published in more than 70 academic papers in the fields of communication, computer science, education, law, political science, and psychology. His work has been consistently funded by the US National Science Foundation for more than a decade, and he also receives grants from various Silicon Valley and international corporations. He consults regularly for US government agencies including the Army, the Department of Defense, the National Research Council, and the National Institutes of Health on policy issues surrounding virtual reality.
Anton Batliner received the MA degree in Scandinavian languages in 1973 and the DrPhil degree in phonetics in 1978, both from the University of Munich. From 1978 to 1984, he was an assistant professor at the Institute for Scandinavian Languages (University of Munich). His fields of research up to 1984 were Scandinavian literature, translation, language and gender, and phonology. From 1984 to 1996, he worked on several research projects on prosody that were financed by the German Research Council (DFG) and by the German Federal Ministry of Education, Science, Research, and Technology (BMBF). In winter 1992/1993, he was a visiting scientist at the Daimler Benz Research Center, Ulm, and in summer 1994, he was a visiting scientist at the IMS, University of Stuttgart. Since 1997 he has been a member of the research staff of the Pattern Recognition Lab. He is coeditor of one book and author/coauthor of some 200 articles. His research interests are the modeling and automatic recognition of emotional user states, all aspects of prosody in speech processing, the uni- and multimodal focus of attention, the automatic assessment of nonnative speech, and spontaneous speech phenomena such as disfluencies, irregular phonation, etc.
Cynthia Breazeal received the SB degree in 1989 in electrical and computer engineering from the University of California, Santa Barbara. She did her graduate work at the MIT Artificial Intelligence Lab, and received the SM (1993) and ScD (2000) degrees in electrical engineering and computer science from the Massachusetts Institute of Technology. She is an associate professor of media arts and sciences at the Massachusetts Institute of Technology, where she founded and directs the Personal Robots Group at the Media Lab. She is a pioneer of social robotics and Human Robot Interaction. She authored the book Designing Sociable Robots and has published more than 100 peer-reviewed articles in journals and conferences on the topics of autonomous robotics, artificial intelligence, human robot interaction, and robot learning. She serves on several editorial boards in the areas of autonomous robots, affective computing, entertainment technology, and multi-agent systems. She is also a member of the advisory board for the Science Channel. Her research focuses on developing the principles, techniques, and technologies for personal robots that are socially intelligent, interact and communicate with people in human-centric terms, work with humans as peers, and learn from people as an apprentice. She has developed some of the world’s most famous robotic creatures, ranging from small hexapod robots, to embedding robotic technologies into familiar everyday artifacts, to creating highly expressive humanoid robots and robot characters. Her recent work investigates the impact of social robots on helping people of all ages to achieve personal goals that contribute to quality of life, in domains such as physical performance, learning and education, health, and family communication over distance. She is recognized as a prominent young innovator. 
She is a recipient of the US National Academy of Engineering’s Gilbreth Lecture Award, Technology Review’s TR35 Award, and TIME magazine’s Best Inventions of 2008. She has won numerous best paper and best technology invention awards at top academic conferences. She has also been awarded an ONR Young Investigator Award, and was honored as a finalist in the National Design Awards in Communication.
Rafael A. Calvo holds a PhD in artificial intelligence applied to automatic document classification and has also worked at Carnegie Mellon University and Universidad Nacional de Rosario, and as a consultant for projects worldwide. He is a senior lecturer at the University of Sydney’s School of Electrical and Information Engineering and director of the Learning and Affect Technologies Engineering (Latte) research group. He is the author of numerous publications in the areas of affective computing, learning systems, and Web engineering, the recipient of three teaching awards, and a senior member of the IEEE. He is a member of the W3C EmotionML working group.
Jeffrey Cohn is a professor of psychology at the University of Pittsburgh and an adjunct faculty member at the Carnegie Mellon University Robotics Institute. He has led interdisciplinary and interinstitutional efforts to develop advanced methods of automatic analysis of facial expression and prosody, and applied those tools to research in human emotion, social development, nonverbal communication, psychopathology, and biomedicine. He cochaired the 2008 IEEE International Conference on Automatic Face and Gesture Recognition and the 2009 International Conference on Affective Computing and Intelligent Interaction. He has coedited two recent special issues of the Journal of Image and Vision Computing. His research has been supported by grants from the US National Institutes of Health, National Science Foundation, Autism Foundation, Office of Naval Research, Defense Advanced Research Projects Agency, and the Technical Support Working Group.
Cristina Conati received the “Laurea” degree (MSc equivalent) in computer science from the University of Milan, Italy, in 1988, as well as the MSc (1996) and PhD (1999) degrees in intelligent systems from the University of Pittsburgh. She is an associate professor of computer science at the University of British Columbia. Her areas of research include affective computing, adaptive interfaces, user modeling, and intelligent tutoring systems. She has published more than 50 strictly refereed articles, and her research has received awards from the International Conference on User Modeling (1997), the International Conference on AI in Education (1999), the International Conference on Intelligent User Interfaces (2007), and the Journal of User Modeling and User-Adapted Interaction (2002).
Jean-Marc Fellous is a researcher who focuses on the way large networks of neurons interact in the face of background noise and unreliable synaptic transmission. His background is in computer science and artificial intelligence. His previous work included automated face recognition using image processing techniques based on the biological features of neurons of the early stages of visual processing. This early work was followed up by a series of studies on the nature of facial information such as sex, age, or emotional expression. Higher cognitive functions such as face perception rely on the computations of large networks of neurons spanning many areas of the nervous system. Interestingly, in some cases, these large network computations can be understood at the level of single cells. For example, so-called “face cells” in the temporal lobe, dozens of synapses away from the eye, will fire only when the picture of a specific individual is presented. As of today, such selectivity, no matter what the details of the inputs are (e.g., face orientation, size, makeup, facial hair), cannot be achieved by any known artificial system, computerized or otherwise. Yet it is achieved effortlessly by face cells in the human and monkey brains. A similar selectivity has been observed in “place cells” in the hippocampus, a structure involved in short term memory. These cells are selective to a particular spatial location in a given environment, and are again dozens of synapses away from the basic sensory apparatus. His current research focuses on how large networks of neurons transfer and process information in the face of large amounts of neuronal and synaptic noise to yield such a reliable output. Members of his laboratory use a combination of experimental in vitro and in vivo techniques together with sophisticated computer simulations to understand the basic neural processing principles that are required to yield such effective and selective computations.
Other research interests of the laboratory include the neural basis of emotion, memory reconsolidation, and the computational roles of neuromodulation in the young and aged. Dr. Fellous teaches courses in neural data analysis, computational neuroscience, and physiological psychology. He supervises and advises undergraduate, graduate, and postdoctoral students in psychology, applied mathematics, neuroscience, and physiological science.
Alan Hanjalic received the PhD degree in 1999 from the Delft University of Technology, Delft, The Netherlands, and the Diplom-Ingenieur (Dipl.-Ing.) degree in 1995 from the Friedrich-Alexander University in Erlangen, Germany, both in electrical engineering. He is an associate professor and coordinator of the Delft Multimedia Information Retrieval Lab at the Delft University of Technology, The Netherlands. He was a visiting scientist at Hewlett-Packard Labs, British Telecom Labs, Philips Research, and Microsoft Research Asia. His research interests and expertise are in the broad area of multimedia computing, with focus on (affective) multimedia content analysis, multimedia information retrieval, and personalized multimedia content access and delivery. In his fields of expertise, he has (co)authored more than 80 publications, among which are the books Image and Video Databases: Restoration, Watermarking and Retrieval (Elsevier, 2000) and Content-Based Analysis of Digital Video (Kluwer Academic Publishers, 2004). Dr. Hanjalic has been a member of the editorial boards of a number of scientific journals in the multimedia field, including the IEEE Transactions on Multimedia (2006-2010), the Journal of Multimedia, Advances in Multimedia (Hindawi), and Image and Vision Computing (Elsevier). He was also a guest editor of several journal special issues, such as the IEEE Transactions on Multimedia, special issue on integration of content and context for multimedia management, January 2009, the Journal of Visual Communication and Image Representation, special issue on emerging techniques for multimedia content sharing, search, and understanding, February 2009, and the Proceedings of the IEEE, special issue on advances in multimedia information retrieval, April 2008. 
He has also served on the organizing committees of leading multimedia conferences, among which are the ACM Multimedia (General Chair 2009, Program Chair 2007, Workshops Chair 2006), ACM Conference on Image and Video Retrieval (Program Chair 2008), the International WWW conference (Track Chair 2008), the Multimedia Modeling Conference (Track Chair 2007), Pacific Rim Conference on Multimedia (Track Chair 2007), the IEEE International Conference on Multimedia and EXPO (Area Chair 2007), and the IEEE International Conference on Image Processing (Track Chair 2010). He was a Keynote Speaker at the Pacific-Rim Conference on Multimedia, Hong-Kong, December 2007, and has served regularly as a Program Committee member for more than 20 international conferences and workshops, including ACM Multimedia, ACM CIVR, ACM SIGIR, International Conference on Computer Vision (ICCV), IEEE ICME, IEEE ICIP, and IEEE ICASSP. He served as a member or organizer of panels at conferences like ACM Multimedia (2007), the Picture Coding Symposium (2007), and the ACM Multimedia Information Retrieval Conference (2010). He is a senior member of the IEEE.
Kristina Höök has been a full professor in the Department of Computer and Systems Science, Stockholm University/KTH since February 2003. She leads one of the groups of the Mobile Life Center and upholds a part-time position at the Swedish Institute of Computer Science (SICS). The focus of her group is on social and affective interaction and narrative intelligence in mobile settings. Methodologically, she works from a user-centered design perspective with a phenomenological grounding. She and her research group have been exploring the idea of involving users both physically and cognitively in what they name an affective loop. The idea of an affective loop is for users to step-by-step interpret, become influenced by, imitate, and be involved with a (computer or mobile) application, both physically and cognitively. She and her group have created several demos that embody the affective loop idea.
Qiang Ji received the PhD degree in electrical engineering from the University of Washington. He is currently a professor with the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute (RPI). He is also a program director at the US National Science Foundation, managing NSF’s computer vision and machine learning programs. He has also held teaching and research positions with the Beckman Institute at the University of Illinois at Urbana-Champaign, the Robotics Institute at Carnegie Mellon University, the Department of Computer Science at the University of Nevada at Reno, and the US Air Force Research Laboratory. He currently serves as the director of the Intelligent Systems Laboratory (ISL) at RPI. His research interests are in computer vision and probabilistic machine learning and their applications in various fields. He has published more than 150 papers in peer-reviewed journals and conferences. His research has been supported by major US governmental agencies, including NSF, NIH, DARPA, ONR, ARO, and AFOSR, as well as by major companies, including Honda and Boeing. He is an editor for several computer vision and pattern recognition related journals and he has served as program chair, technical area chair, and program committee member for numerous international conferences/workshops. He is a senior member of the IEEE.
Seong-Whan Lee received the BS degree in computer science and statistics from Seoul National University, Korea, in 1984 and the MS and PhD degrees in computer science from the Korea Advanced Institute of Science and Technology in 1986 and 1989, respectively. From February 1989 to February 1995, he was an assistant professor in the Department of Computer Science at Chungbuk National University, Cheongju, Korea. In March 1995, he joined the faculty of the Division of Computer and Communications Engineering at Korea University, Seoul, Korea, and now is a full professor. He is also the director of the Center for Artificial Vision Research (CAVR). In 2001, he worked in the Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, as a visiting professor. In 2009, he was appointed the Hyundai-Kia Motor Chair Professor. He has more than 250 publications on computer vision, pattern recognition, and brain engineering in international journals and conference proceedings, and has authored 10 books.
Christine Lætitia Lisetti received the BS degree cum laude, and the MS and PhD degrees in computer science from Florida International University (FIU) in 1988, 1992, and 1995, respectively. She then went to Stanford University, where she was a postdoctoral fellow jointly in the Department of Computer Science and the Department of Psychology. She is an associate professor in the School of Computing and Information Sciences (SCIS) of the College of Engineering and Computing at Florida International University, where she directs the Affective Social Computing Laboratory (ascl.cs.fiu.edu). She taught in the Business School at the University of South Florida and was an assistant professor in computer science at the University of Central Florida, and she joined FIU in 2007 from the Eurecom Institute, France, where she was a professor. She has been the principal investigator (PI) and co-PI of various research awards from federal agencies such as the NSF, NIH, ONR, the US Army STRICOM, and NASA Ames; from industries such as Interval (Microsoft) Research Corp., Intel Corp., and ST MicroElectronics Corp.; and from the European Commission (EC), the EUREKA Information Technology for European Advancement (ITEA) Programme, and the Provence-Alpes Cote d’Azur (PACA) Regional R&D Programme in France. She is the recipient of numerous awards. Her interdisciplinary research lies at the intersection of artificial intelligence (AI) and human-computer interaction (HCI) in computer science, of emotion theories in psychology, and of social interactions in communication. Her long-term research goal is to create digital, engaging, and helpful socially intelligent agents that can learn to interact naturally with humans via expressive multi-modalities in a variety of contexts involving socio-emotional content (e.g., social companions, cyber-therapy, intelligent tutoring systems, serious games).
Stacy C. Marsella received the PhD degree in computer science from Rutgers University. He is a research associate professor in the Department of Computer Science at the University of Southern California, associate director of social simulation research at the Institute for Creative Technologies (ICT), and a co-director of USC’s Computational Emotion Group. His general research interest is in the computational modeling of cognition, emotion, and social behavior, both as a basic research methodology in the study of human behavior as well as in the use of these computational models in a range of education and analysis applications. His current research spans the interplay of emotion and cognition, modeling of the influence that beliefs about the mental processes of others have on social interaction, and the role of nonverbal behavior in face-to-face interaction. He has extensive experience in the application of these models to the design of virtual humans, human-like virtual agents that can interact with people in a virtual environment using spoken dialog. He is a recipient of the ACM/SIGART Autonomous Agents Research Award for his work on emotion and social simulation. In addition to being an associate editor of the IEEE Transactions on Affective Computing, he is on the editorial boards of the Journal of Experimental & Theoretical Artificial Intelligence and the Journal of Intercultural Communication. He is also on the steering committee of the Intelligent Virtual Agents conference. He is a member of the International Society for Research on Emotions (ISRE) and has published more than 150 technical articles.
Shrikanth (Shri) Narayanan is the Andrew J. Viterbi Professor of Engineering at the University of Southern California (USC), and holds appointments as a professor of electrical engineering, computer science, linguistics, and psychology. Prior to joining USC, he was with AT&T Bell Labs and AT&T Research from 1995 to 2000. At USC he directs the Signal Analysis and Interpretation Laboratory. His research focuses on human-centered information processing and communication technologies. He is a fellow of the IEEE, the Acoustical Society of America, and the American Association for the Advancement of Science (AAAS), and a member of Tau Beta Pi, Phi Kappa Phi, and Eta Kappa Nu. He is a recipient of a number of honors, including Best Paper awards from the IEEE Signal Processing Society in 2005 (with Alex Potamianos) and in 2009 (with Chul Min Lee) and selection as an IEEE Signal Processing Society Distinguished Lecturer for 2010-2011. Papers with his students have won awards at ICSLP ’02, ICASSP ’05, MMSP ’06, MMSP ’07, and DCOSS ’09, and at the InterSpeech 2009 Emotion Challenge. He has published more than 350 papers and has seven granted US patents. He is also an editor for the Computer Speech and Language Journal and an associate editor for the IEEE Transactions on Multimedia and the Journal of the Acoustical Society of America. He was previously an associate editor of the IEEE Transactions on Speech and Audio Processing (2000-2004) and the IEEE Signal Processing Magazine (2005-2008). He served on the Speech Processing (2005-2008) and Multimedia Signal Processing (2004-2008) technical committees of the IEEE Signal Processing Society and presently serves on the Speech Communication committee of the Acoustical Society of America and the Advisory Council of the International Speech Communication Association.
Ana Paiva received the PhD degree from the University of Lancaster, United Kingdom. She is currently a research group leader of GAIPS at INESC-ID and an associate professor at the Instituto Superior Técnico, Technical University of Lisbon. She is well known in the areas of intelligent agents, artificial intelligence applied to education, and affective computing. She has worked in Germany (at GMD) and in France (CNRS-COAST team at ENS of Lyon). When she returned to Portugal in 1996, she created a group on intelligent agents and synthetic characters (GAIPS). Her research is focused on the affective elements in the interactions between users and computers. She has served on the program committees of numerous international conferences and workshops. She has (co)authored more than 100 publications in refereed journals, conferences, and books. She was a founding member of the Kaleidoscope Network of Excellence SIG on Narrative and Learning Environments and has been very active in the area of synthetic characters and intelligent agents. She coordinated the participation of INESC-ID in several European projects, such as NIMIS (an I3-ESE project), DiViLab, Safira (IST-5th Framework), where she was the prime contractor, VICTEC, COLDEX, MinRaces, E-Circus, and LIREC (in the 7th Framework).
Brian Parkinson is a social psychologist based at Oxford University, United Kingdom, who is interested in the interpersonal functions and effects of emotion. His books include Ideas and Realities of Emotion (1995), Changing Moods (with Totterdell, Briner, and Reynolds, 1996), and Emotion in Social Relations (with Fischer and Manstead, 2005). Over the last 10 years he has been chief editor of the British Journal of Social Psychology and associate editor of Cognition and Emotion.
Catherine Pelachaud received the PhD degree in computer graphics from the University of Pennsylvania, Philadelphia, in 1991. She is Director of Research at CNRS in the laboratory LTCI, TELECOM ParisTech. She participated in the development of one of the first embodied conversational agent systems, GestureJack, with Justine Cassell, Norman Badler, and Mark Steedman while a postdoctoral researcher at the University of Pennsylvania. Her research interests include representation languages for agents, embodied conversational agents, nonverbal communication (face, gaze, and gesture), expressive behaviors, and multimodal interfaces. She has been, and remains, involved in several European projects related to multimodal communication (EAGLES, IST-ISLE), believable embodied conversational agents (IST-MagiCster, FP5 PF-STAR), emotion (FP5 NoE Humaine, FP6 IP CALLAS, FP7 STREP SEMAINE), and social behaviors (FP7 NoE SSPNet).
Helmut Prendinger received the master’s degree in 1994 and the doctoral degree in 1998, both from the University of Salzburg, Austria, in the areas of logic and artificial intelligence. He is an associate professor in the Digital Content and Media Sciences Research Division at the National Institute of Informatics in Tokyo, and is co-opted as an associate professor in the Department of Informatics of the Graduate University for Advanced Studies. Previously, he held positions as a postdoctoral fellow and research associate at the University of Tokyo. During 1996-1997, he conducted part of his doctoral research at the University of California, Irvine. He has published extensively (more than 170 refereed papers in international journals and conferences, and book chapters) in the fields of virtual conversational agents, multimodal content creation tools, affective human-computer interaction, artificial intelligence, and, lately, 3D online virtual worlds. He won the Best Paper Award at the Pacific-Rim International Conference on Artificial Intelligence in 2000, and in 2004, he received the Future Program Special Contributor Award for his research in the Multi-modal Anthropomorphic Interface project of the Japan Society for the Promotion of Science. His work on attentive presentation agents was awarded the best application of life-like agents in the GALA competition held at the International Conference on Intelligent Virtual Agents in 2006. He is listed as a finalist for the Best Paper Award of the 23rd International Conference on Computational Linguistics (COLING 2010). He coedited (with Mitsuru Ishizuka from the University of Tokyo) a book on life-like characters (tools, affective functions, applications), which was published in the prestigious Springer Cognitive Technologies series in 2004, and he organized the Eighth International Conference on Intelligent Virtual Agents in 2008.
He has served as a program committee member and, more recently, as a senior program committee member or area chair for several international conferences and workshops, including the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004-2010), the International Conference on Intelligent Virtual Agents (IVA 2003-2010), the International Conference on Intelligent User Interfaces (IUI 2003-2010), and the International Conference on Affective Computing and Intelligent Interaction (ACII 2005, 2007).
Matthias Scheutz received degrees in philosophy (MA, 1989; PhD, 1995) and formal logic (MS, 1993) from the University of Vienna and in computer engineering (MS, 1993) from the Vienna University of Technology in Austria. He also received the joint PhD degree in cognitive science and computer science from Indiana University in 1999. He is currently an associate professor of computer and cognitive science in the Department of Computer Science at Tufts University. He has more than 100 peer-reviewed publications in artificial intelligence, artificial life, agent-based computing, natural language processing, cognitive modeling, robotics, human-robot interaction, and foundations of cognitive science. His current research and teaching interests include multiscale agent-based models of social behavior and complex cognitive and affective robots with natural language capabilities for natural human-robot interaction.
Marc Schröder is a senior researcher at DFKI and the leader of the DFKI text-to-speech group. Since 1998, he has been responsible for building up technology and research in expressive TTS at DFKI. Within the FP6 NoE HUMAINE, he built up the scientific portal http://emotion-research.net, which won the Grand Prize for the best IST project website in 2006. He is editor of the W3C Emotion Markup Language specification, coordinator of the FP7 STREP SEMAINE, and project leader of the nationally funded basic research project PAVOQUE. He is an author of more than 50 scientific publications and a program committee member for many conferences and workshops.
Bernd Carsten Stahl is a professor of critical research in technology in the Centre for Computing and Social Responsibility at De Montfort University, Leicester, United Kingdom. His interests cover philosophical issues arising from the intersections of business, technology, and information. This includes the ethics of computing and critical approaches to information systems.
Marilyn Walker received the PhD degree in 1993 in computer science from the University of Pennsylvania. She is a professor of computer science and head of the Natural Language and Dialogue Systems Lab in the Baskin School of Engineering at the University of California, Santa Cruz (UCSC). Before coming to UCSC, she was a professor of computer science at the University of Sheffield, where she was a Royal Society Wolfson Research Merit Fellow, recruited to the United Kingdom under Britain’s “Brain Gain” program. From 1996 to 2003, she was a principal member of the research staff in the Speech and Information Processing Lab at AT&T Bell Labs and AT&T Research. While she was at AT&T, she was a PI on two DARPA projects. The first was the Communicator Evaluation project, where she was the chair of the Evaluation Committee and led the design of the cross-site evaluation experiments with implementation help from NIST. The second was the AT&T Communicator project, where she developed a new architecture for spoken dialogue systems and statistical methods for dialogue management and generation. While at AT&T she received the AT&T Labs Mentoring Award in 2001 for her excellence in mentoring PhD students and junior researchers. At UCSC, her lab is part of the Computational Media Group, whose research focuses on next-generation computer games, incorporating concepts from dramatic theory and social interaction, and notably extending the language capabilities of current interactive games, focusing specifically on training, assistive, and educational games. She has given keynote addresses at AAAI ’97, LREC ’04, the NSF workshop on Question Generation 2008, IVA ’09, and SIGDIAL ’10. She has served on many program committees, both as a reviewer and as senior area chair, organized dozens of workshops, and was the program chair for ACL ’04.
She was a member of the founding board of the North American ACL, serving to set up and orchestrate its first conference between 1998 and 2001. Her h-index, a measure of research impact, is 37. She has supervised eight doctoral students and 10 undergraduate senior theses. She has published more than 200 papers and has 10 granted/pending US patents.
Chung-Hsien Wu received the BS degree in electronics engineering from National Chiao Tung University, Hsinchu, Taiwan, in 1981, and the MS and PhD degrees in electrical engineering from National Cheng Kung University, Tainan, Taiwan, Republic of China, in 1987 and 1991, respectively. Since August 1991, he has been with the Department of Computer Science and Information Engineering, National Cheng Kung University. He became a professor and a distinguished professor in August 1997 and August 2004, respectively. From 1999 to 2002, he served as the chairman of the department. Currently, he is the deputy dean of the College of Electrical Engineering and Computer Science, National Cheng Kung University. He also worked at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory, Cambridge, Massachusetts, in the summer of 2003 as a visiting scientist. He was the editor-in-chief of the International Journal of Computational Linguistics and Chinese Language Processing from 2004 to 2008. He served as a guest editor of the ACM Transactions on Asian Language Information Processing, the IEEE Transactions on Audio, Speech, and Language Processing, and the EURASIP Journal on Audio, Speech, and Music Processing in 2008-2009. He is currently the subject editor on information engineering of the Journal of the Chinese Institute of Engineers (JCIE) and is on the Editorial Advisory Board of The Open Artificial Intelligence Journal. His research interests include speech recognition, text-to-speech, and spoken language processing. He is a senior member of the IEEE and a member of the International Speech Communication Association (ISCA). He has been the president of the Association for Computational Linguistics and Chinese Language Processing (ACLCLP) since September 2009.
Georgios N. Yannakakis received both the five-year Diploma (1999) in production engineering and management and the MSc degree (2001) in financial engineering from the Technical University of Crete, and the PhD degree in informatics from the University of Edinburgh in 2005. He is an associate professor at the IT University of Copenhagen. Prior to joining the Center for Computer Games Research, IT University of Copenhagen, in 2007, he was a postdoctoral researcher at the Maersk Mc-Kinney Moller Institute, University of Southern Denmark. His current primary research focus is on the investigation of intelligent mechanisms for modeling and optimizing user experience. His research interests and publications lie in the fields of multimodal intelligent interaction, computational intelligence in games, player experience modeling, user modeling, affective computing, artificial life, and neuro-evolution. He has published approximately 50 journal and international conference papers in the aforementioned fields. Among his theoretical contributions, the most important are the establishment of generic computational models of user experience in specific genres of computer games, the design of a player experience modeling method based on user-expressed preferences, the design of frameworks for obtaining digital entertainment of richer interactivity and higher enjoyment, and the identification of physiological signal features that correspond to player enjoyment in physical activity games. He is the chair of the IEEE CIS Task Force on Player Satisfaction Modeling, the cochair of the HUMAINE SIG on Games and Entertainment, and the general chair of the 2010 IEEE Conference on Computational Intelligence and Games.
For information on obtaining reprints of this article, please send e-mail to: email@example.com.