Natural Eye Motion Synthesis by Modeling Gaze-Head Coupling
Found in: Virtual Reality Conference, IEEE
By Xiaohan Ma, Zhigang Deng
Issue Date: March 2009
pp. 143-150
Due to the intrinsic subtlety and dynamics of eye movements, automated generation of natural and engaging eye motion has been a challenging task for decades. In this paper we present an effective technique to synthesize natural eye gazes given a head motio...
 
Characterizing the Performance and Power Consumption of 3D Mobile Games
Found in: Computer
By Xiaohan Ma, Zhigang Deng, Mian Dong, Lin Zhong
Issue Date: April 2013
pp. 76-82
A preliminary study using the Quake 3 and XRace games as benchmarks on three mainstream mobile system-on-chip architectures reveals that the geometry stage is the main bottleneck in 3D mobile games and confirms that game logic significantly affects power c...
 
Live Speech Driven Head-and-Eye Motion Generators
Found in: IEEE Transactions on Visualization and Computer Graphics
By B. H. Le, Xiaohan Ma, Zhigang Deng
Issue Date: November 2012
pp. 1902-1914
This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each comp...
 
A Statistical Quality Model for Data-Driven Speech Animation
Found in: IEEE Transactions on Visualization and Computer Graphics
By Xiaohan Ma, Zhigang Deng
Issue Date: November 2012
pp. 1915-1927
In recent years, data-driven speech animation approaches have achieved significant successes in terms of animation quality. However, how to automatically evaluate the realism of novel synthesized speech animations has been an important yet unsolved researc...
 
Perceptual Analysis of Talking Avatar Head Movements: A Quantitative Perspective
Found in: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI '11)
By Binh Huy Le, Xiaohan Ma, Zhigang Deng
Issue Date: May 2011
pp. 2699-2702
Lifelike interface agents (e.g. talking avatars) have been increasingly used in human-computer interaction applications. In this work, we quantitatively analyze how human perception is affected by audio-head motion characteristics of talking avatars. Speci...
 
Style Learning and Transferring for Facial Animation Editing
Found in: Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '09)
By Binh Huy Le, Xiaohan Ma, Zhigang Deng
Issue Date: August 2009
pp. 123-132
Most current facial animation editing techniques are frame-based approaches (i.e., manually editing one keyframe every several frames), which is ineffective, time-consuming, and prone to editing inconsistency. In this paper, we present a novel facial editi...