Displaying 1-48 out of 48 total
AI's 10 to Watch
Found in: IEEE Intelligent Systems
By James Hendler, Philipp Cimiano, Dmitri Dolgov, Anat Levin, Peter Mika, Brian Milch, Louis-Philippe Morency, Boris Motik, Jennifer Neville, Erik B. Sudderth, Luis von Ahn
Issue Date:May 2008
pp. 9-19
The recipients of the 2008 IEEE Intelligent Systems 10 to Watch award—Philipp Cimiano, Dmitri Dolgov, Anat Levin, Peter Mika, Brian Milch, Louis-Philippe Morency, Boris Motik, Jennifer Neville, Erik Sudderth, and Luis von Ahn—discuss their current research...
 
Constrained Local Neural Fields for Robust Facial Landmark Detection in the Wild
Found in: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW)
By Tadas Baltrusaitis, Peter Robinson, Louis-Philippe Morency
Issue Date:December 2013
pp. 354-361
Facial feature detection algorithms have seen great progress over the recent years. However, they still struggle in poor lighting conditions and in the presence of extreme pose or occlusions. We present the Constrained Local Neural Field model for facial l...
 
Mutual Behaviors during Dyadic Negotiation: Automatic Prediction of Respondent Reactions
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Sunghyun Park, Stefan Scherer, Jonathan Gratch, Peter Carnevale, Louis-Philippe Morency
Issue Date:September 2013
pp. 423-428
In this paper, we analyze face-to-face negotiation interactions with the goal of predicting the respondent's immediate reaction (i.e., accept or reject) to a negotiation offer. Supported by the theory of social rapport, we focus on mutual behaviors which a...
 
Automatic Nonverbal Behavior Indicators of Depression and PTSD: Exploring Gender Differences
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Giota Stratou, Stefan Scherer, Jonathan Gratch, Louis-Philippe Morency
Issue Date:September 2013
pp. 147-152
In this paper, we show that gender plays an important role in the automatic assessment of psychological conditions such as depression and post-traumatic stress disorder (PTSD). We identify a directly interpretable and intuitive set of predictive indicators...
 
Fifth International Workshop on Affective Interaction in Natural Environments (AFFINE 2013): Interacting with Affective Artefacts in the Wild
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Ginevra Castellano, Kostas Karpouzis, Jean-Claude Martin, Louis-Philippe Morency, Christopher Peters, Laurel D. Riek
Issue Date:September 2013
pp. 727
This workshop covers real-time computational techniques for the recognition and interpretation of human affective and social behaviour, and techniques for synthesis of believable social behaviour supporting real-time adaptive human-agent and human-robot in...
 
YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context
Found in: IEEE Intelligent Systems
By Martin Wöllmer, Felix Weninger, Tobias Knaup, Björn Schuller, Congkai Sun, Kenji Sagae, Louis-Philippe Morency
Issue Date:May 2013
pp. 46-53
This work focuses on automatically analyzing a speaker's sentiment in online videos containing movie reviews. In addition to textual information, this approach considers adding audio features as typically used in speech-based emotion recognition as well as...
 
Multimodal Sentiment Analysis of Spanish Online Videos
Found in: IEEE Intelligent Systems
By Veronica Perez Rosas, Rada Mihalcea, Louis-Philippe Morency
Issue Date:May 2013
pp. 38-45
Using multimodal sentiment analysis, the presented method integrates linguistic, audio, and visual features to identify sentiment in online videos. In particular, experiments focus on a new dataset consisting of Spanish videos collected from YouTube that a...
 
Hidden Conditional Random Fields
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Ariadna Quattoni, Sy Bor Wang, Louis-Philippe Morency, Michael Collins, Trevor Darrell
Issue Date:October 2007
pp. 1848-1852
We present a discriminative latent variable model for classification problems in structured domains where inputs can be represented by a graph of local observations. A hidden-state Conditional Random Field framework learns a set of latent variables conditi...
 
Latent-Dynamic Discriminative Models for Continuous Gesture Recognition
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Louis-Philippe Morency, Ariadna Quattoni, Trevor Darrell
Issue Date:June 2007
pp. 1-8
Many problems in vision involve the prediction of a class label for each frame in an unsegmented sequence. In this paper, we develop a discriminative framework for simultaneous sequence segmentation and labeling which can capture both intrinsic and extrins...
 
Hidden Conditional Random Fields for Gesture Recognition
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Sy Bor Wang, Ariadna Quattoni, Louis-Philippe Morency, David Demirdjian, Trevor Darrell
Issue Date:June 2006
pp. 1521-1527
We introduce a discriminative hidden-state approach for the recognition of human gestures. Gesture sequences often have a complex underlying structure, and models that can incorporate hidden structures have proven to be advantageous for recognition tasks. ...
 
Pose Estimation using 3D View-Based Eigenspaces
Found in: Analysis and Modeling of Faces and Gestures, IEEE International Workshop on
By Louis-Philippe Morency, Patrik Sundberg, Trevor Darrell
Issue Date:October 2003
pp. 45
In this paper we present a method for estimating the absolute pose of a rigid object based on intensity and depth view-based eigenspaces, built across multiple views of example objects of the same class. Given an initial frame of an object with unknown pos...
 
Adaptive View-Based Appearance Models
Found in: Computer Vision and Pattern Recognition, IEEE Computer Society Conference on
By Louis-Philippe Morency, Ali Rahimi, Trevor Darrell
Issue Date:June 2003
pp. 803
We present a method for online rigid object tracking using an adaptive view-based appearance model. When the object's pose trajectory crosses itself, our tracker has bounded drift and can track objects undergoing large motion for long periods of time. Our ...
 
Stereo Tracking Using ICP and Normal Flow Constraint
Found in: Pattern Recognition, International Conference on
By Louis-Philippe Morency, Trevor Darrell
Issue Date:August 2002
pp. 40367
This paper presents a new approach for 3D view registration of stereo images. We introduce a hybrid error function which combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise fo...
 
Fast 3D Model Acquisition from Stereo Images
Found in: 3D Data Processing Visualization and Transmission, International Symposium on
By Louis-Philippe Morency, Ali Rahimi, Trevor Darrell
Issue Date:June 2002
pp. 172
We propose a fast 3D model acquisition system that aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on intensity gradient constraints and a global registration algorithm. We reconstru...
 
Fast Stereo-Based Head Tracking for Interactive Environments
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By Louis-Philippe Morency, Ali Rahimi, Neal Checka, Trevor Darrell
Issue Date:May 2002
pp. 0390
We present a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique....
 
Introduction to the special issue on affective interaction in natural environments
Found in: ACM Transactions on Interactive Intelligent Systems (TiiS)
By Christopher Peters, Ginevra Castellano, Kostas Karpouzis, Laurel D. Riek, Louis-Philippe Morency, Jean-Claude Martin
Issue Date:March 2012
pp. 1-4
Affect-sensitive systems such as social robots and virtual agents are increasingly being investigated in real-world settings. In order to work effectively in natural environments, these systems require the ability to infer the affective and mental states o...
     
Relative facial action unit detection
Found in: 2014 IEEE Winter Conference on Applications of Computer Vision (WACV)
By Mahmoud Khademi, Louis-Philippe Morency
Issue Date:March 2014
pp. 1090-1095
This paper presents a subject-independent facial action unit (AU) detection method by introducing the concept of relative AU detection, for scenarios where the neutral face is not provided. We propose a new classification objective function which analyzes ...
   
Action Recognition by Hierarchical Sequence Summarization
Found in: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Yale Song, Louis-Philippe Morency, Randall Davis
Issue Date:June 2013
pp. 3562-3569
Recent progress has shown that learning from hierarchical feature representations leads to improvements in various computer vision tasks. Motivated by the observation that human activity data contains information at various temporal resolutions, we present...
 
Distribution-sensitive learning for imbalanced datasets
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Yale Song, Louis-Philippe Morency, Randall Davis
Issue Date:April 2013
pp. 1-6
Many real-world face and gesture datasets are by nature imbalanced across classes. Conventional statistical learning models (e.g., SVM, HMM, CRF), however, are sensitive to imbalanced datasets. In this paper we show how an imbalanced dataset affects the pe...
   
Sequential emotion recognition using Latent-Dynamic Conditional Neural Fields
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Julien-Charles Levesque, Louis-Philippe Morency, Christian Gagne
Issue Date:April 2013
pp. 1-6
A wide number of problems in face and gesture analysis involve the labeling of temporal sequences. In this paper, we introduce a discriminative model for such sequence labeling tasks. This model involves two layers of latent dynamics, each with their separ...
   
Automatic behavior descriptors for psychological disorder analysis
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Stefan Scherer, Giota Stratou, Marwa Mahmoud, Jill Boberg, Jonathan Gratch, Albert Rizzo, Louis-Philippe Morency
Issue Date:April 2013
pp. 1-8
We investigate the capabilities of automatic nonverbal behavior descriptors to identify indicators of psychological disorders such as depression, anxiety, and post-traumatic stress disorder. We seek to confirm and enrich present state of the art, predomina...
   
ICMI 2013 grand challenge workshop on multimodal learning analytics
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Louis-Philippe Morency, Marcelo Worsley, Nadir Weibel, Sharon Oviatt, Stefan Scherer
Issue Date:December 2013
pp. 373-378
Advances in learning analytics are contributing new empirical findings, theories, methods, and metrics for understanding how students learn. These advances also contribute to improving pedagogical support for students' learning through assessment of new digital tools...
     
Automatic multimodal descriptors of rhythmic body movement
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Louis-Philippe Morency, Marwa Mahmoud, Peter Robinson
Issue Date:December 2013
pp. 429-436
Prolonged durations of rhythmic body gestures have been shown to correlate with different types of psychological disorders. To date, there is no automatic descriptor that can robustly detect those behaviours. In this paper, we propose a cyclic gestures des...
     
Interactive relevance search and modeling: support for expert-driven analysis of multimodal data
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Francis Quek, Chreston Miller, Louis-Philippe Morency
Issue Date:December 2013
pp. 149-156
In this paper we present the findings of three longitudinal case studies in which a new method for conducting multimodal analysis of human behavior is tested. The focus of this new method is to engage a researcher integrally in the analysis process and all...
     
Audiovisual behavior descriptors for depression assessment
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Giota Stratou, Louis-Philippe Morency, Stefan Scherer
Issue Date:December 2013
pp. 135-140
We investigate audiovisual indicators, in particular measures of reduced emotional expressivity and psycho-motor retardation, for depression within semi-structured virtual human interviews. Based on a standard self-assessment depression scale we investigat...
     
Speaker-adaptive multimodal prediction model for listener responses
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Louis-Philippe Morency, Dirk Heylen, Iwan de Kok
Issue Date:December 2013
pp. 51-58
The goal of this paper is to analyze and model the variability in speaking styles in dyadic interactions and build a predictive algorithm for listener responses that is able to adapt to these different styles. The end result of this research will be a virt...
     
Who is persuasive?: the role of perceived personality and communication modality in social multimedia
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Alessandro Vinciarelli, Kenji Sagae, Louis-Philippe Morency, Gelareh Mohammadi, Sunghyun Park
Issue Date:December 2013
pp. 19-26
Persuasive communication is part of everyone's daily life. With the emergence of social websites like YouTube, Facebook and Twitter, persuasive communication is now seen online on a daily basis. This paper explores the effect of multi-modality and perceive...
     
Learning a sparse codebook of facial and body microexpressions for emotion recognition
Found in: Proceedings of the 15th ACM on International conference on multimodal interaction (ICMI '13)
By Louis-Philippe Morency, Randall Davis, Yale Song
Issue Date:December 2013
pp. 237-244
Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of ...
     
Multimodal prediction of expertise and leadership in learning groups
Found in: Proceedings of the 1st International Workshop on Multimodal Learning Analytics (MLA '12)
By Louis-Philippe Morency, Nadir Weibel, Sharon Oviatt, Stefan Scherer
Issue Date:October 2012
pp. 1-8
In this study, we investigate low level predictors from audio and writing modalities for the separation and identification of socially dominant leaders and experts within a study group. We use a multimodal dataset of situated computer assisted group learnin...
     
1st international workshop on multimodal learning analytics: extended abstract
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Louis-Philippe Morency, Marcelo Worsley, Stefan Scherer
Issue Date:October 2012
pp. 609-610
This summary describes the 1st International Workshop on Multimodal Learning Analytics. This area of study brings together the technologies of multimodal analysis with the learning sciences. The intersection of these domains should enable researchers to fo...
     
Step-wise emotion recognition using concatenated-HMM
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Derya Ozkan, Louis-Philippe Morency, Stefan Scherer
Issue Date:October 2012
pp. 477-484
Human emotion is an important part of human-human communication, since the emotional state of an individual often affects the way that he/she reacts to others. In this paper, we present a method based on concatenated Hidden Markov Model (co-HMM) to infer t...
     
Towards sensing the influence of visual narratives on human affect
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Alexis Narvaez, Daniel McDuff, Louis-Philippe Morency, Mihai Burzo, Rada Mihalcea, Veronica Perez-Rosas
Issue Date:October 2012
pp. 153-160
In this paper, we explore a multimodal approach to sensing affective state during exposure to visual narratives. Using four different modalities, consisting of visual facial behaviors, thermal imaging, heart rate measurements, and verbal descriptions, we s...
     
Structural and temporal inference search (STIS): pattern identification in multimodal data
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Chreston Miller, Francis Quek, Louis-Philippe Morency
Issue Date:October 2012
pp. 101-108
There are a multitude of annotated behavior corpora (manual and automatic annotations) available as research expands in multimodal analysis of human behavior. Despite the rich representations within these datasets, search strategies are limited with respec...
     
Multimodal human behavior analysis: learning correlation and interaction across modalities
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Louis-Philippe Morency, Randall Davis, Yale Song
Issue Date:October 2012
pp. 27-30
Multimodal human behavior analysis is a challenging task due to the presence of complex nonlinear correlations and interactions across modalities. We present a novel approach to this problem based on Kernel Canonical Correlation Analysis (KCCA) and Multi-v...
     
I already know your answer: using nonverbal behaviors to predict immediate outcomes in a dyadic negotiation
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Jonathan Gratch, Louis-Philippe Morency, Sunghyun Park
Issue Date:October 2012
pp. 19-22
Be it in our workplace or with our family or friends, negotiation comprises a fundamental fabric of our everyday life, and it is apparent that a system that can automatically predict negotiation outcomes will have substantial implications. In this paper, w...
     
Computational study of human communication dynamic
Found in: Proceedings of the 2011 joint ACM workshop on Human gesture and behavior understanding (J-HGBU '11)
By Louis-Philippe Morency
Issue Date:December 2011
pp. 13-18
Face-to-face communication is a highly dynamic process where participants mutually exchange and interpret linguistic and gestural signals. Even when only one person speaks at a time, other participants exchange information continuously amongst themselves...
     
Towards multimodal sentiment analysis: harvesting opinions from the web
Found in: Proceedings of the 13th international conference on multimodal interfaces (ICMI '11)
By Louis-Philippe Morency, Payal Doshi, Rada Mihalcea
Issue Date:November 2011
pp. 169-176
With more than 10,000 new videos posted online every day on social websites such as YouTube and Facebook, the internet is becoming an almost infinite source of information. One crucial challenge for the coming decade is to be able to harvest relevant infor...
     
Learning and evaluating response prediction models using parallel listener consensus
Found in: International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI '10)
By Derya Ozkan, Dirk Heylen, Iwan de Kok, Louis-Philippe Morency
Issue Date:November 2010
pp. 1-8
Traditionally listener response prediction models are learned from pre-recorded dyadic interactions. Because of individual differences in behavior, these recordings do not capture the complete ground truth. Where the recorded listener did not respond to an...
     
3rd international workshop on affective interaction in natural environments (AFFINE)
Found in: Proceedings of the international conference on Multimedia (MM '10)
By Christopher Peters, Ginevra Castellano, Jean-Claude Martin, Kostas Karpouzis, Laurel D. Riek, Louis-Philippe Morency
Issue Date:October 2010
pp. 1759-1760
The 3rd International Workshop on Affective Interaction in Natural Environments, AFFINE, follows a number of successful AFFINE workshops and events commencing in 2008. A key aim of AFFINE is the identification and investigation of significant open issues in...
     
Co-occurrence graphs: contextual representation for head gesture recognition during multi-party interactions
Found in: Proceedings of the Workshop on Use of Context in Vision Processing (UCVP '09)
By Louis-Philippe Morency
Issue Date:November 2009
pp. 1-6
Head pose and gesture offer several conversational grounding cues and are used extensively in face-to-face interaction among people. To accurately recognize visual feedback, humans often use contextual knowledge from previous and current events to anticipa...
     
Use of context in vision processing: an introduction to the UCVP 2009 workshop
Found in: Proceedings of the Workshop on Use of Context in Vision Processing (UCVP '09)
By Anton Nijholt, Hamid Aghajan, Louis-Philippe Morency, Maja Pantic, Ming-Hsuan Yang, Ralph Braspenning, Yuri Ivanov
Issue Date:November 2009
pp. 1-3
Recent efforts in defining ambient intelligence applications based on user-centric concepts, the advent of technology in different sensing modalities as well as the expanding interest in multi-modal information fusion and situation-aware and dynamic vision...
     
Context-based recognition during human interactions: automatic feature selection and encoding dictionary
Found in: Proceedings of the 10th international conference on Multimodal interfaces (IMCI '08)
By Iwan de Kok, Jonathan Gratch, Louis-Philippe Morency
Issue Date:October 2008
pp. 203-204
During face-to-face conversation, people use visual feedback such as head nods to communicate relevant information and to synchronize rhythm between participants. In this paper we describe how contextual information from other participants can be used to p...
     
Recognizing gaze aversion gestures in embodied conversational discourse
Found in: Proceedings of the 8th international conference on Multimodal interfaces (ICMI '06)
By C. Mario Christoudias, Louis-Philippe Morency, Trevor Darrell
Issue Date:November 2006
pp. 287-294
Eye gaze offers several key cues regarding conversational discourse during face-to-face interaction between people. While a large body of research results exist to document the use of gaze in human-to-human interaction, and in animating realistic embodied ...
     
Co-Adaptation of audio-visual speech and gesture classifiers
Found in: Proceedings of the 8th international conference on Multimodal interfaces (ICMI '06)
By C. Mario Christoudias, Kate Saenko, Louis-Philippe Morency, Trevor Darrell
Issue Date:November 2006
pp. 84-91
The construction of robust multimodal interfaces often requires large amounts of labeled training data to account for cross-user differences and variation in the environment. In this work, we investigate whether unlabeled training data can be leveraged to ...
     
The effect of head-nod recognition in human-robot conversation
Found in: Proceeding of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction (HRI '06)
By Candace L. Sidner, Christopher Lee, Clifton Forlines, Louis-Philippe Morency
Issue Date:March 2006
pp. 290-296
This paper reports on a study of human participants with a robot designed to participate in a collaborative conversation with a human. The purpose of the study was to investigate a particular kind of gestural feedback from the human to the robot in these conve...
     
Head gesture recognition in intelligent interfaces: the role of context in improving recognition
Found in: Proceedings of the 11th international conference on Intelligent user interfaces (IUI '06)
By Louis-Philippe Morency, Trevor Darrell
Issue Date:January 2006
pp. 32-38
Acknowledging an interruption with a nod of the head is a natural and intuitive communication gesture which can be performed without significantly disturbing a primary interface activity. In this paper we describe vision-based head gesture recognition tech...
     
From conversational tooltips to grounded discourse: head pose tracking in interactive dialog systems
Found in: Proceedings of the 6th international conference on Multimodal interfaces (ICMI '04)
By Louis-Philippe Morency, Trevor Darrell
Issue Date:October 2004
pp. 32-37
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. While the machine interpretation of these cues has previously been limited to output modalities, recent advances in fac...
     
Evaluating look-to-talk: a gaze-aware interface in a collaborative environment
Found in: CHI '02 extended abstracts on Human factors in computer systems (CHI '02)
By Aaron Adler, Alice Oh, Harold Fox, Krzysztof Gajos, Louis-Philippe Morency, Max Van Kleek, Trevor Darrell
Issue Date:April 2002
pp. 650-651
We present "look-to-talk", a gaze-aware interface for directing a spoken utterance to a software agent in a multi-user collaborative environment. Through a prototype and a Wizard-of-Oz (Woz) experiment, we show that "look-to-talk" is indeed a natural alter...
     