Displaying 1-50 out of 54 total
Audiovisual Detection of Behavioural Mimicry
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Sanjay Bilakhia, Stavros Petridis, Maja Pantic
Issue Date:September 2013
pp. 123-128
Human mimicry is a behavioural cue occurring during social interaction that can inform us about the participants' inter-personal states and attitudes. It occurs when a participant in an interaction exhibits some behaviour as a result of a co-participant's p...
 
A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Zhihong Zeng, Maja Pantic, Glenn I. Roisman, Thomas S. Huang
Issue Date:January 2009
pp. 39-58
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately display...
 
Full-Angle Quaternions for Robustly Matching Vectors of 3D Rotations
Found in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Stephan Liwicki, Minh-Tri Pham, Stefanos Zafeiriou, Maja Pantic, Bjorn Stenger
Issue Date:June 2014
pp. 105-112
In this paper we introduce a new distance for robustly matching vectors of 3D rotations. A special representation of 3D rotations, which we coin full-angle quaternion (FAQ), allows us to express this distance as Euclidean. We apply the distance to the prob...
 
Gauss-Newton Deformable Part Models for Face Alignment In-the-Wild
Found in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Georgios Tzimiropoulos, Maja Pantic
Issue Date:June 2014
pp. 1851-1858
Arguably, Deformable Part Models (DPMs) are one of the most prominent approaches for face alignment with impressive results being recently reported for both controlled lab and unconstrained settings. Fitting in most DPM methods is typically formulated as a...
 
Merging SVMs with Linear Discriminant Analysis: A Combined Model
Found in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Symeon Nikitidis, Stefanos Zafeiriou, Maja Pantic
Issue Date:June 2014
pp. 1067-1074
A key problem often encountered by many learning algorithms in computer vision dealing with high dimensional data is the so-called "curse of dimensionality", which arises when the available training samples are fewer than the input feature space di...
 
RAPS: Robust and Efficient Automatic Construction of Person-Specific Deformable Models
Found in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Christos Sagonas, Yannis Panagakis, Stefanos Zafeiriou, Maja Pantic
Issue Date:June 2014
pp. 1789-1796
The construction of Facial Deformable Models (FDMs) is a very challenging computer vision problem, since the face is a highly deformable object and its appearance drastically changes under different poses, expressions, and illuminations. Although several m...
 
300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge
Found in: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW)
By Christos Sagonas, Georgios Tzimiropoulos, Stefanos Zafeiriou, Maja Pantic
Issue Date:December 2013
pp. 397-403
Automatic facial point detection plays arguably the most important role in face analysis. Several methods have been proposed which reported their results on databases of both constrained and unconstrained conditions. Most of these databases provide annotat...
 
Context-Sensitive Conditional Ordinal Random Fields for Facial Action Intensity Estimation
Found in: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW)
By Ognjen Rudovic, Vladimir Pavlovic, Maja Pantic
Issue Date:December 2013
pp. 492-499
We address the problem of modeling intensity levels of facial actions in video sequences. The intensity sequences often exhibit large variability due to context factors, such as person-specific facial expressiveness or changes in illumination. Ex...
 
Markov Random Field Structures for Facial Action Unit Intensity Estimation
Found in: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW)
By Georgia Sandbach, Stefanos Zafeiriou, Maja Pantic
Issue Date:December 2013
pp. 738-745
We present a novel Markov Random Field (MRF) structure-based approach to the problem of facial action unit (AU) intensity estimation. AUs generally appear in common combinations, and exhibit strong relationships between the intensities of a number of AUs. ...
 
Optimization Problems for Fast AAM Fitting in-the-Wild
Found in: 2013 IEEE International Conference on Computer Vision (ICCV)
By Georgios Tzimiropoulos, Maja Pantic
Issue Date:December 2013
pp. 593-600
We describe a very simple framework for deriving the most-well known optimization problems in Active Appearance Models (AAMs), and most importantly for providing efficient solutions. Our formulation results in two optimization problems for fast and exact A...
 
Learning Slow Features for Behaviour Analysis
Found in: 2013 IEEE International Conference on Computer Vision (ICCV)
By Lazaros Zafeiriou, Mihalis A. Nicolaou, Stefanos Zafeiriou, Symeon Nikitidis, Maja Pantic
Issue Date:December 2013
pp. 2840-2847
A recently introduced latent feature learning technique for time varying dynamic phenomena analysis is the so called Slow Feature Analysis (SFA). SFA is a deterministic component analysis technique for multi-dimensional sequences that by minimizing the var...
 
Audiovisual Detection of Laughter in Human-Machine Interaction
Found in: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII)
By Stavros Petridis, Maelle Leveque, Maja Pantic
Issue Date:September 2013
pp. 129-134
Laughter is clearly an audiovisual event, consisting of the laughter vocalization and of facial activity, mainly around the mouth and sometimes in the upper face. However, past research on laughter recognition has mainly focused on the information availabl...
 
Coupled Gaussian processes for pose-invariant facial expression recognition
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Ognjen Rudovic, Maja Pantic, Ioannis Patras
Issue Date:June 2013
pp. 1357-1369
We propose a method for head-pose invariant facial expression recognition that is based on a set of characteristic facial points. To achieve head-pose invariance, we propose the Coupled Scaled Gaussian Process Regression (CSGPR) model for head-pose normali...
 
Shape-constrained Gaussian process regression for facial-point-based head-pose normalization
Found in: IEEE International Conference on Computer Vision (ICCV)
By Ognjen Rudovic, Maja Pantic
Issue Date:November 2011
pp. 1495-1502
Given the facial points extracted from an image of a face in an arbitrary pose, the goal of facial-point-based head-pose normalization is to obtain the corresponding facial points in a predefined pose (e.g., frontal). This involves inference of complex and...
 
Robust and efficient parametric face alignment
Found in: IEEE International Conference on Computer Vision (ICCV)
By Georgios Tzimiropoulos, Stefanos Zafeiriou, Maja Pantic
Issue Date:November 2011
pp. 1847-1854
We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iterative...
 
Regression-Based Multi-view Facial Expression Recognition
Found in: International Conference on Pattern Recognition (ICPR)
By Ognjen Rudovic, Ioannis Patras, Maja Pantic
Issue Date:August 2010
pp. 4121-4124
We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the express...
 
Audio-Visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space
Found in: International Conference on Pattern Recognition (ICPR)
By Mihalis A. Nicolaou, Hatice Gunes, Maja Pantic
Issue Date:August 2010
pp. 3695-3699
This paper focuses on audio-visual (using facial expression, shoulder and audio cues) classification of spontaneous affect, utilising generative models for classification (i) in terms of Maximum Likelihood Classification with the assumption that the genera...
 
The SEMAINE corpus of emotionally coloured character interactions
Found in: IEEE International Conference on Multimedia and Expo (ICME)
By Gary McKeown, Michel F. Valstar, Roderick Cowie, Maja Pantic
Issue Date:July 2010
pp. 1079-1084
We have recorded a new corpus of emotionally coloured conversations. Users were recorded while holding conversations with an operator who adopts in sequence four roles designed to evoke emotional reactions. The operator and the user are seated in separate ...
 
Facial point detection using boosted regression and graph models
Found in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)
By Michel Valstar, Brais Martinez, Xavier Binefa, Maja Pantic
Issue Date:June 2010
pp. 2729-2736
Finding fiducial facial points in any frame of a video showing rich naturalistic facial behaviour is an unsolved problem. Yet this is a crucial step for geometric-feature-based facial expression analysis, and methods that use appearance-based features extr...
 
A Dynamic Texture-Based Approach to Recognition of Facial Actions and Their Temporal Models
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Sander Koelstra, Maja Pantic, Ioannis (Yiannis) Patras
Issue Date:November 2010
pp. 1940-1954
In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face v...
 
Cost-Effective Solution to Synchronized Audio-Visual Capture Using Multiple Sensors
Found in: IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS)
By Jeroen Lichtenauer, Michel Valstar, Jie Shen, Maja Pantic
Issue Date:September 2009
pp. 324-329
Applications such as surveillance and human motion capture require high-bandwidth recording from multiple cameras. Furthermore, the recent increase in research on sensor fusion has raised the demand for synchronization accuracy between video, audio and othe...
 
Fully Automatic Facial Action Unit Detection and Temporal Analysis
Found in: Computer Vision and Pattern Recognition Workshop
By Michel Valstar, Maja Pantic
Issue Date:June 2006
pp. 149
In this work we report on the progress of building a system that enables fully automated, fast and robust facial expression recognition from face video. We analyse subtle changes in facial expression by recognizing facial muscle action units (AUs) and analy...
 
Automatic Analysis of Facial Expressions: The State of the Art
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Maja Pantic, Leon J.M. Rothkrantz
Issue Date:December 2000
pp. 1424-1445
Humans detect and interpret faces and facial expressions in a scene with little or no effort. Still, development of an automated system that accomplishes this task is rather difficult. There are several related problems...
 
From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By Akshay Asthana, Stefanos Zafeiriou, Georgios Tzimiropoulos, Shiyang Cheng, Maja Pantic
Issue Date:February 2015
pp. 1
We propose a face alignment framework that relies on the texture model generated by the responses of discriminatively trained part-based filters. Unlike standard texture models built from pixel intensities or responses generated by generic filters (e.g. Ga...
 
Incremental Face Alignment in the Wild
Found in: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Akshay Asthana, Stefanos Zafeiriou, Shiyang Cheng, Maja Pantic
Issue Date:June 2014
pp. 1859-1866
The development of facial databases with an abundance of annotated facial data captured under unconstrained 'in-the-wild' conditions has made discriminative facial deformable models the de facto choice for generic facial landmark localization. Even though...
 
Robust Canonical Time Warping for the Alignment of Grossly Corrupted Sequences
Found in: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Yannis Panagakis, Mihalis A. Nicolaou, Stefanos Zafeiriou, Maja Pantic
Issue Date:June 2013
pp. 540-547
Temporal alignment of human behaviour from visual data is a very challenging problem due to numerous reasons, including possible large temporal scale differences, inter/intra subject variability and, more importantly, the presence of gross errors ...
 
Robust Discriminative Response Map Fitting with Constrained Local Models
Found in: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By Akshay Asthana, Stefanos Zafeiriou, Shiyang Cheng, Maja Pantic
Issue Date:June 2013
pp. 3444-3451
We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. Th...
 
2013 10th IEEE International Conference and workshops on Automatic Face and Gesture Recognition (FG)
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Rama Chellappa, Xilin Chen, Qiang Ji, Maja Pantic, Stan Sclaroff, Lijun Yin
Issue Date:April 2013
pp. 1-3
Welcome to the 10th IEEE International Conference on Automatic Face and Gesture Recognition (FG13) in Shanghai, China. The conference is the premier world conference on vision-based facial and body gesture modeling, analysis, and recognition. Since its fir...
   
Online learning and fusion of orientation appearance models for robust rigid object tracking
Found in: 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)
By Ioannis Marras, Joan Alabort Medina, Georgios Tzimiropoulos, Stefanos Zafeiriou, Maja Pantic
Issue Date:April 2013
pp. 1-8
We present a robust framework for learning and fusing different modalities for rigid object tracking. Our method fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To comb...
   
Automatic analysis of facial expressions
Found in: Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (HRI '14)
By Maja Pantic
Issue Date:March 2014
pp. 390-390
Facial behaviour is our preeminent means of communicating affective and social signals. This talk discusses a number of components of human facial behaviour, how they can be automatically sensed and analysed by computers, what the past research in the f...
     
The development and real-world deployment of FROG, the fun robotic outdoor guide
Found in: Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (HRI '14)
By Daphne Karreman, Dariu Gavrila, Fernando Nabais, Luis Merino, Maja Pantic, Nuno Menezes, Paulo Alvito, Vanessa Evers
Issue Date:March 2014
pp. 100-100
This video details the development of an intelligent outdoor Guide robot. The main objective is to deploy an innovative robotic guide which is not only able to show information, but to react to the affective states of the users, and to offer location-based...
     
The development and real-world application of FROG, the fun robotic outdoor guide
Found in: Proceedings of the companion publication of the 17th ACM conference on Computer supported cooperative work & social computing (CSCW Companion '14)
By Dariu Gavrila, Fernando Nabais, Luis Merino, Maja Pantic, Nuno Menezes, Paulo Alvito, Vanessa Evers
Issue Date:February 2014
pp. 281-284
This video details the development of an intelligent outdoor guide robot. The main objective is to deploy an innovative robotic guide which is not only able to show information, but to react to the affective states of the users, and to offer location-based...
     
Workshop summary for the 3rd international audio/visual emotion challenge and workshop (AVEC'13)
Found in: Proceedings of the 21st ACM international conference on Multimedia (MM '13)
By Roddy Cowie, Björn Schuller, Jarek Krajewski, Maja Pantic, Michel Valstar
Issue Date:October 2013
pp. 1085-1086
The third Audio-Visual Emotion Challenge and workshop AVEC 2013 will be held in conjunction with ACM Multimedia'13. Like the 2012 edition of AVEC, the workshop/challenge addresses the interpretation of social signals represented in both audio and video in terms...
     
Bimodal log-linear regression for fusion of audio and visual features
Found in: Proceedings of the 21st ACM international conference on Multimedia (MM '13)
By Stavros Petridis, Maja Pantic, Ognjen Rudovic
Issue Date:October 2013
pp. 789-792
One of the most commonly used audiovisual fusion approaches is feature-level fusion where the audio and visual features are concatenated. Although this approach has been successfully used in several applications, it does not take into account interactions ...
     
Correlated-spaces regression for learning continuous emotion dimensions
Found in: Proceedings of the 21st ACM international conference on Multimedia (MM '13)
By Maja Pantic, Mihalis A. Nicolaou, Stefanos Zafeiriou
Issue Date:October 2013
pp. 773-776
Adopting continuous dimensional annotations for affective analysis has been gaining increasing attention from researchers over the past years. Due to the idiosyncratic nature of this problem, many subproblems have been identified, spanning from the fusion of mul...
     
Human behavior sensing for tag relevance assessment
Found in: Proceedings of the 21st ACM international conference on Multimedia (MM '13)
By Maja Pantic, Sebastian Kaltwang, Mohammad Soleymani
Issue Date:October 2013
pp. 657-660
Users react differently to non-relevant and relevant tags associated with content. These spontaneous reactions can be used for labeling large multimedia databases. We present a method to assess tag relevance to images using the non-verbal bodily responses,...
     
AVEC 2013: the continuous audio/visual emotion and depression recognition challenge
Found in: Proceedings of the 3rd ACM international workshop on Audio/visual emotion challenge (AVEC '13)
By Bihan Jiang, Björn Schuller, Florian Eyben, Kirsty Smith, Maja Pantic, Michel Valstar, Roddy Cowie, Sanjay Bilakhia, Sebastian Schnieder
Issue Date:October 2013
pp. 3-10
Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence and arousal. In addition, psycholog...
     
AVEC 2012: the continuous audio/visual emotion challenge
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Björn Schuller, Florian Eyben, Maja Pantic, Michel Valstar, Roddy Cowie
Issue Date:October 2012
pp. 449-456
We present the second Audio-Visual Emotion recognition Challenge and workshop (AVEC 2012), which aims to bring together researchers from the audio and video analysis communities around the topic of emotion recognition. The goal of the challenge is to recog...
     
AVEC 2012: the continuous audio/visual emotion challenge - an introduction
Found in: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
By Björn Schuller, Maja Pantic, Michel Valstar, Roddy Cowie
Issue Date:October 2012
pp. 361-362
The second international Audio/Visual Emotion Challenge and Workshop 2012 (AVEC 2012) is briefly introduced. 34 teams from 12 countries signed up for the Challenge. The SEMAINE database serves as the basis for prediction of four-dimensional continuous affect in audio a...
     
Joint ACM workshop on human gesture and behavior understanding: (J-HGBU'11)
Found in: Proceedings of the 19th ACM international conference on Multimedia (MM '11)
By Alberto Del Bimbo, Alessandro Vinciarelli, Alex Pentland, Maja Pantic, Mohamed Daoudi, Rita Cucchiara
Issue Date:November 2011
pp. 615-616
The ability to understand social signals of a person we are communicating with is the core of social intelligence. Social Intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in l...
     
A multi-layer hybrid framework for dimensional emotion classification
Found in: Proceedings of the 19th ACM international conference on Multimedia (MM '11)
By Hatice Gunes, Maja Pantic, Mihalis A. Nicolaou
Issue Date:November 2011
pp. 933-936
This paper investigates dimensional emotion prediction and classification from naturalistic facial expressions. Similarly to many pattern recognition problems, dimensional emotion classification requires generating multi-dimensional outputs. To date, class...
     
Implicit image tagging via facial information
Found in: Proceedings of the 2nd international workshop on Social signal processing (SSPW '10)
By Jun Jiao, Maja Pantic
Issue Date:October 2010
pp. 59-64
Implicit Tagging is the technique of annotating multimedia data based on users' spontaneous nonverbal reactions. In this paper, a study is conducted to test whether a user's facial expression can be used to predict the correctness of tags of images. The basic ...
     
Discriminative space-time voting for joint recognition and localization of actions.
Found in: Proceedings of the 2nd international workshop on Social signal processing (SSPW '10)
By Antonios Oikonomopoulos, Ioannis Patras, Maja Pantic
Issue Date:October 2010
pp. 11-16
In this paper we address the problem of activity detection in unsegmented image sequences. Our main contribution is the use of an implicit representation of the spatiotemporal shape of the activity which relies on the spatiotemporal localization of charact...
     
MM'10 workshop summary for SSPW: ACM workshop on social signal processing 2010
Found in: Proceedings of the international conference on Multimedia (MM '10)
By Alessandro Vinciarelli, Alex Pentland, Maja Pantic
Issue Date:October 2010
pp. 1765-1766
The Workshop on Social Signal Processing (SSPW) is the yearly event of the Social Signal Processing Network (EU-FP7 SSPNet project). This year's workshop programme consists of 4 premium keynote talks by Jeff Cohn, Alex Pentland, Justine Cassell, and Toyoa...
     
Use of context in vision processing: an introduction to the UCVP 2009 workshop
Found in: Proceedings of the Workshop on Use of Context in Vision Processing (UCVP '09)
By Anton Nijholt, Hamid Aghajan, Louis-Philippe Morency, Maja Pantic, Ming-Hsuan Yang, Ralph Braspenning, Yuri Ivanov
Issue Date:November 2009
pp. 1-3
Recent efforts in defining ambient intelligence applications based on user-centric concepts, the advent of technology in different sensing modalities as well as the expanding interest in multi-modal information fusion and situation-aware and dynamic vision...
     
Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities
Found in: Proceedings of the 2009 international conference on Multimodal interfaces (ICMI-MLMI '09)
By Hatice Gunes, Maja Pantic, Sebastian Kaltwang, Stavros Petridis
Issue Date:November 2009
pp. 23-30
Human nonverbal behavior recognition from multiple cues and modalities has attracted a lot of interest in recent years. Despite the interest, many research questions, including the type of feature representation, choice of static vs. dynamic classification...
     
Social signal processing: state-of-the-art and future perspectives of an emerging domain
Found in: Proceeding of the 16th ACM international conference on Multimedia (MM '08)
By Alessandro Vinciarelli, Alex Pentland, Herve Bourlard, Maja Pantic
Issue Date:October 2008
pp. 40-42
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for s...
     
Audiovisual laughter detection based on temporal features
Found in: Proceedings of the 10th international conference on Multimodal interfaces (IMCI '08)
By Maja Pantic, Stavros Petridis
Issue Date:October 2008
pp. 203-204
Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audio-visual approach to distinguishing laughter from speech based on temporal features and we show that integrating the informa...
     
Emotionally aware automated portrait painting
Found in: Proceedings of the 3rd international conference on Digital Interactive Media in Entertainment and Arts (DIMEA '08)
By Maja Pantic, Michel F. Valstar, Simon Colton
Issue Date:September 2008
pp. N/A
We combine a machine vision system that recognises emotions and a non-photorealistic rendering (NPR) system to automatically produce portraits which heighten the emotion of the sitter. To do this, the vision system analyses a short video clip of a person e...
     
Fusion of audio and visual cues for laughter detection
Found in: Proceedings of the 2008 international conference on Content-based image and video retrieval (CIVR '08)
By Maja Pantic, Stavros Petridis
Issue Date:July 2008
pp. 569-570
Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audio-visual approach to distinguishing laughter from speech and we show that integrating the information from audio and video channels leads to i...
     