Displaying 1-23 out of 23 total
Local Evidence Aggregation for Regression-Based Facial Point Detection
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By B. Martinez, M. F. Valstar, X. Binefa, M. Pantic
Issue Date: May 2013
pp. 1149-1163
We propose a new algorithm to detect facial points in frontal and near-frontal face images. It combines a regression-based approach with a probabilistic graphical model-based face shape model that restricts the search to anthropomorphically consistent regi...
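As a side note on the technique this abstract names, below is a minimal Python sketch of regression-based voting for a single facial point: hypothetical local patch regressors each predict an offset to the point, and their votes are aggregated into a response map whose peak is taken as the estimate. This is an illustration only, not the authors' implementation; it omits the graphical shape model the paper uses to keep point configurations anthropomorphically consistent, and all names and parameters are invented.

```python
import numpy as np

def aggregate_local_votes(patch_centers, predicted_offsets, image_shape, sigma=2.0):
    """Accumulate Gaussian-weighted votes for one facial point.

    Each (hypothetical) local patch regressor predicts an offset from its own
    centre to the target point; the votes from all patches are summed into a
    response map and the peak is taken as the point estimate.
    """
    vote_map = np.zeros(image_shape, dtype=float)
    ys, xs = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    for (cy, cx), (dy, dx) in zip(patch_centers, predicted_offsets):
        ty, tx = cy + dy, cx + dx  # location this patch votes for
        vote_map += np.exp(-((ys - ty) ** 2 + (xs - tx) ** 2) / (2.0 * sigma ** 2))
    peak = np.unravel_index(np.argmax(vote_map), vote_map.shape)
    return vote_map, peak

# Example: three patches whose offsets all point at (10, 12) in a 32x32 image.
centers = [(5, 5), (15, 20), (25, 10)]
offsets = [(5, 7), (-5, -8), (-15, 2)]
_, estimate = aggregate_local_votes(centers, offsets, (32, 32))
print(estimate)  # approximately (10, 12)
```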
 
Building Autonomous Sensitive Artificial Listeners
Found in: IEEE Transactions on Affective Computing
By M. Schroder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar, M. Wollmer
Issue Date: April 2012
pp. 165-183
This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and nonverbal interaction capabilities. The work is motivated by the aim to provide technology with competences in perceiving an...
 
The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent
Found in: IEEE Transactions on Affective Computing
By G. McKeown, M. Valstar, R. Cowie, M. Pantic, M. Schroder
Issue Date: January 2012
pp. 5-17
SEMAINE has created a large audiovisual database as a part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from in...
 
Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing
Found in: IEEE Transactions on Affective Computing
By A. Vinciarelli, M. Pantic, D. Heylen, C. Pelachaud, I. Poggi, F. D'Errico, M. Schroeder
Issue Date: January 2012
pp. 69-87
Social Signal Processing is the research domain aimed at bridging the social intelligence gap between humans and machines. This paper is the first survey of the domain that jointly considers its three major aspects, namely, modeling, analysis, and synthesi...
 
Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space
Found in: IEEE Transactions on Affective Computing
By M. A. Nicolaou, H. Gunes, M. Pantic
Issue Date: April 2011
pp. 92-105
Past research in analysis of human affect has focused on recognition of prototypic expressions of six basic emotions based on posed data acquired in laboratory settings. Recently, there has been a shift toward subtle, continuous, and context-specific inter...
 
Human body gesture recognition using adapted auxiliary particle filtering
Found in: Advanced Video and Signal Based Surveillance, IEEE Conference on
By A. Oikonomopoulos, M. Pantic
Issue Date: September 2007
pp. 441-446
In this paper we propose a tracking scheme specifically tailored for tracking human body parts in cluttered scenes. We model the background and the human skin using Gaussian Mixture Models and we combine these estimates to localize the features to be track...
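To make the colour-modelling step concrete, here is a small illustrative sketch (not the authors' code) of fitting Gaussian Mixture Models to skin and background pixel samples and computing a per-pixel log-likelihood ratio, using scikit-learn's GaussianMixture; the function names and the choice of five mixture components are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_color_model(pixels, n_components=5):
    """Fit a Gaussian Mixture Model to an (N, 3) array of colour samples."""
    gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                          random_state=0)
    gmm.fit(pixels)
    return gmm

def skin_log_ratio(image, skin_gmm, bg_gmm):
    """Per-pixel log-likelihood ratio of the skin vs. background colour models."""
    flat = image.reshape(-1, 3).astype(float)
    log_skin = skin_gmm.score_samples(flat)  # log p(x | skin)
    log_bg = bg_gmm.score_samples(flat)      # log p(x | background)
    return (log_skin - log_bg).reshape(image.shape[:2])

# Pixels with a large positive ratio are candidate skin regions and could be
# used to (re)initialise a tracker for the body parts of interest.
```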
 
Active Learning of Introductory Machine Learning
Found in: Frontiers in Education, Annual
By M. Pantic, R. Zwitserloot
Issue Date: October 2006
pp. 1-6
This paper describes a computer-based training program for active learning of agent technology, expert systems, neural networks and case-based reasoning by undergraduate students using a simple agent framework. While many machine learning (ML) and artifici...
 
A Multimodal Database for Affect Recognition and Implicit Tagging
Found in: IEEE Transactions on Affective Computing
By M. Soleymani, J. Lichtenauer, T. Pun, M. Pantic
Issue Date: January 2012
pp. 42-55
MAHNOB-HCI is a multimodal database recorded in response to affective stimuli with the goal of emotion recognition and implicit tagging research. A multimodal setup was arranged for synchronized recording of face videos, audio signals, eye gaze data, and p...
 
Multimodal Emotion Recognition in Response to Videos
Found in: IEEE Transactions on Affective Computing
By M. Soleymani, M. Pantic, T. Pun
Issue Date: April 2012
pp. 211-223
This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response and gaze distance. We first selected 20 video clips with extrinsic emotional conten...
 
Social Signal Processing: Understanding social interactions through nonverbal behavior analysis
Found in: Computer Vision and Pattern Recognition Workshop
By A. Vinciarelli, H. Salamin, M. Pantic
Issue Date: June 2009
pp. 42-49
This paper introduces social signal processing (SSP), the domain aimed at automatic understanding of social interactions through analysis of nonverbal behavior. The core idea of SSP is that nonverbal behavior is machine detectable evidence of social signal...
 
B-spline polynomial descriptors for human activity recognition
Found in: Computer Vision and Pattern Recognition Workshop
By A. Oikonomopoulos, M. Pantic, I. Patras
Issue Date: June 2008
pp. 1-6
The extraction and quantization of local image and video descriptors for the subsequent creation of visual codebooks is a technique that has proved extremely effective for image and video retrieval applications. In this paper we build on this concept and e...
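The codebook step this abstract refers to is the standard bag-of-words pipeline; a generic sketch follows (it does not implement the B-spline polynomial descriptors themselves). It clusters local descriptors with k-means and quantises a video's descriptors into a normalised codeword histogram; the codebook size and function names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, codebook_size=256):
    """Cluster local descriptors (N, D) into a visual codebook with k-means."""
    return KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(descriptors)

def bag_of_words(codebook, video_descriptors):
    """Quantise one video's descriptors against the codebook and return a
    normalised histogram of codeword occurrences."""
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```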
 
Subspace Learning from Image Gradient Orientations
Found in: IEEE Transactions on Pattern Analysis and Machine Intelligence
By G. Tzimiropoulos, S. Zafeiriou, M. Pantic
Issue Date: December 2012
pp. 2454-2466
We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data are typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensitie...
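A simplified stand-in for the idea in this abstract: map each pixel's gradient orientation to a point on the unit circle (its cosine and sine) and run ordinary PCA on that representation instead of on raw intensities. The sketch below is illustrative only, not the paper's exact formulation; the feature layout and component count are assumptions.

```python
import numpy as np

def orientation_features(images):
    """Map each image to [cos(phi), sin(phi)] of its gradient orientations.

    images: array of shape (N, H, W). Returns an (N, 2*H*W) feature matrix.
    """
    feats = []
    for img in images.astype(float):
        gy, gx = np.gradient(img)          # image gradients along rows/columns
        phi = np.arctan2(gy, gx)           # gradient orientation per pixel
        feats.append(np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()]))
    return np.array(feats)

def pca(X, n_components=20):
    """Plain PCA via SVD on mean-centred features; returns the subspace basis."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_components]
```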
 
Multi-output Laplacian dynamic ordinal regression for facial expression recognition and intensity estimation
Found in: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
By O. Rudovic, V. Pavlovic, M. Pantic
Issue Date: June 2012
pp. 2634-2641
Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly ...
 
Kernel-based Recognition of Human Actions Using Spatiotemporal Salient Points
Found in: Computer Vision and Pattern Recognition Workshop
By A. Oikonomopoulos, I. Patras, M. Pantic
Issue Date: June 2006
p. 151
This paper addresses the problem of human action recognition by introducing a sparse representation of image sequences as a collection of spatiotemporal events that are localized at points that are salient both in space and time. We detect the spatiotempor...
 
Biologically vs. Logic Inspired Encoding of Facial Actions and Emotions in Video
Found in: Multimedia and Expo, IEEE International Conference on
By M.F. Valstar, M. Pantic
Issue Date: July 2006
pp. 325-328
Automatic facial expression analysis is an important aspect of Human Machine Interaction as the face is an important communicative medium. We use our face to signal interest, disagreement, intentions or mood through subtle facial motions and expressions. W...
 
An Expert System for Multiple Emotional Classification of Facial Expressions
Found in: Tools with Artificial Intelligence, IEEE International Conference on
By M. Pantic, L.J.M. Rothkrantz
Issue Date: November 1999
p. 113
This paper discusses Integrated System for Facial Expression Recognition (ISFER), which performs facial expression analysis from a still dual facial view image. The system consists of three major parts: facial data generator, facial data evaluator and faci...
 
An implicit spatiotemporal shape model for human activity localization and recognition
Found in: Computer Vision and Pattern Recognition Workshop
By A. Oikonomopoulos, I. Patras, M. Pantic
Issue Date: June 2009
pp. 27-33
In this paper we address the problem of localisation and recognition of human activities in unsegmented image sequences. The main contribution of the proposed method is the use of an implicit representation of the spatiotemporal shape of the activity which...
 
Work in Progress: Learner-Centered Online Learning Facility
Found in: Frontiers in Education, Annual
By M. Pantic, R. Zwitserloot, M. de Weerdt
Issue Date: October 2006
pp. 19-20
This paper describes a novel, learner-centered technology for authoring Web lectures. Besides seamless integration of video and audio feeds, Microsoft PowerPoint slides, and Web-pages, the proposed online learning facility (OLF) also facilitates online int...
 
Spatiotemporal saliency for human action recognition
Found in: Multimedia and Expo, IEEE International Conference on
By A. Oikonomopoulos, I. Patras, M. Pantic
Issue Date: July 2005
4 pp.
This paper addresses the problem of human action recognition by introducing a sparse representation of image sequences as a collection of spatiotemporal events that are localized at points that are salient both in space and time. We detect the spatiotempor...
 
Web-based database for facial expression analysis
Found in: Multimedia and Expo, IEEE International Conference on
By M. Pantic, M. Valstar, R. Rademaker, L. Maat
Issue Date: July 2005
5 pp.
In the last decade, the research topic of automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be...
 
Facial Action Unit Detection using Probabilistic Actively Learned Support Vector Machines on Tracked Facial Point Data
Found in: Computer Vision and Pattern Recognition Workshop
By M.F. Valstar, I. Patras, M. Pantic
Issue Date: June 2005
p. 76
A system that could enable fast and robust facial expression recognition would have many applications in behavioral science, medicine, security and human-machine interaction. While working toward that goal, we do not attempt to recognize prototypi...
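As an illustration of the "probabilistic actively learned SVM" terminology, the sketch below shows one round of generic pool-based uncertainty sampling with a probabilistic SVM (scikit-learn's SVC with probability estimates): train on the current labelled set, then pick the pool samples whose posterior is closest to 0.5 for manual AU labelling. It is a stand-in, not the authors' method; feature extraction from tracked facial points is assumed to have happened upstream, and all names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def uncertainty_sampling_round(clf, X_labeled, y_labeled, X_pool, batch=10):
    """One round of pool-based active learning with a probabilistic SVM.

    Trains on the labelled set, then returns indices of the pool samples the
    classifier is least certain about (posterior closest to 0.5), which would
    be sent to a human coder for AU labelling.
    """
    clf.fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)   # higher value = less certain
    return np.argsort(uncertainty)[-batch:]

clf = SVC(kernel='rbf', probability=True)  # SVM with posterior estimates
# query_idx = uncertainty_sampling_round(clf, X_lab, y_lab, X_pool)
```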
 
Particle Filtering with Factorized Likelihoods for Tracking Facial Features
Found in: Automatic Face and Gesture Recognition, IEEE International Conference on
By I. Patras, M. Pantic
Issue Date: May 2004
p. 97
In the recent years particle filtering has been the dominant paradigm for tracking facial and body features, recognizing temporal events and reasoning in uncertainty. A major problem associated with it is that its performance deteriorates drastically when ...
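For readers unfamiliar with the terminology, here is a generic bootstrap particle filter step (predict, weight, resample) in Python; it illustrates the baseline such trackers build on rather than the paper's factorized-likelihood scheme itself, and the random-walk motion model and parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, likelihood, motion_std=2.0):
    """One predict-weight-resample step of a bootstrap particle filter.

    particles: (N, D) state hypotheses (e.g. facial-feature positions).
    likelihood(particles, observation): per-particle observation likelihoods.
    """
    # Predict: propagate particles with a random-walk motion model.
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # Weight: multiply by the observation likelihood and renormalise.
    weights = weights * likelihood(particles, observation)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```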
 
The Detection of Concept Frames Using Clustering Multi-instance Learning
Found in: Pattern Recognition, International Conference on
By D.M.J. Tax, E. Hendriks, M.F. Valstar, M. Pantic
Issue Date: August 2010
pp. 2917-2920
The classification of sequences requires the combination of information from different time points. In this paper the detection of facial expressions is considered. Experiments on the detection of certain facial muscle activations in videos show that it is...
 