IEEE Virtual Reality Conference 2001 (VR 2001)
Personalized Face and Speech Communication over the Internet
Yokohama, Japan
March 13–17, 2001
ISBN: 0-7695-0948-7
Sumedha Kshirsagar, MIRALab, University of Geneva
Chris Joslin, MIRALab, University of Geneva
Won-Sook Lee, MIRALab, University of Geneva
Nadia Magnenat-Thalmann, MIRALab, University of Geneva
We present a system for personalized face and speech communication over the Internet. The overall system consists of three parts: the cloning of real human faces for use as representative avatars; the Networked Virtual Environment system, which performs the basic tasks of network and device management; and the speech system, which includes a text-to-speech engine and an engine for real-time phoneme extraction from natural speech. Together, these three elements allow real humans, represented by their virtual counterparts, to communicate with each other even when they are geographically remote. All elements use MPEG-4 as a common communication and animation standard, and were designed and tested on the Windows Operating System (OS). The paper presents the main aim of the work, the methodology, and the resulting communication system.
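A central step in such a pipeline is converting extracted phonemes into MPEG-4 viseme indices that drive the facial animation. The sketch below is illustrative only, not the authors' implementation: the phoneme symbols and the mapping table are assumptions, though MPEG-4 facial animation does define a small fixed set of visemes (with index 0 as the neutral/none viseme).

```python
# Illustrative sketch: mapping phonemes (e.g. from a real-time phoneme
# extraction engine) to MPEG-4 viseme indices for facial animation.
# The phoneme labels and table entries below are assumptions, not the
# mapping used in the paper.

PHONEME_TO_VISEME = {
    "none": 0,               # viseme 0: neutral face
    "p": 1, "b": 1, "m": 1,  # bilabials typically share one viseme
    "f": 2, "v": 2,          # labiodentals share another
}

def visemes_for(phonemes):
    """Map a phoneme sequence to viseme indices, defaulting to neutral (0)."""
    return [PHONEME_TO_VISEME.get(p, 0) for p in phonemes]
```

Collapsing many phonemes onto few visemes reflects the fact that visually distinct mouth shapes are far fewer than acoustically distinct phonemes, which keeps the animation stream compact for transmission over the network.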
Index Terms:
Facial communication, Network Virtual Environment, speech communication, facial cloning, Internet, MPEG-4
Sumedha Kshirsagar, Chris Joslin, Won-Sook Lee, Nadia Magnenat-Thalmann, "Personalized Face and Speech Communication over the Internet," Proceedings of the IEEE Virtual Reality Conference 2001 (VR 2001), p. 37, 2001.