Virtual Human Representation and Communication in VLNet
March-April 1997 (vol. 17, no. 2)
pp. 42-53
Realistic participant representation in networked virtual environments involves two elements: believable appearance and natural movement. Virtual human figures fulfill both requirements, because they provide a direct relationship between how we control our avatar in the virtual world and how the avatar moves in response. Including a virtual human representation is not straightforward, however: the virtual body must move naturally, in accordance with the participant's actual body, even when only a small number of degrees of freedom are tracked; and facial communication must be part of the representation. In addition, the architecture combining motion control with the virtual environment should be efficient and modular. We describe three types of motion control: direct control, where the body geometry is changed directly; user-guided actors, where the actor's motor skills are exploited by assigning high-level tasks; and autonomous actors, controlled by high-level motivations. Similarly, the face can be animated using video, speech, or higher level parameters. The articulated structure of the human body and face introduces a new complexity in the use of network resources, because the message needed to convey a body posture is larger than that needed for a simple, nonarticulated object. We analyze the network requirements of the different message types used to animate the human body and face, comparing them with respect to coding computation at the sender site, transmission overhead, and decoding computation at the receiver site.
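The trade-off the abstract describes, between messages for articulated bodies and those for simple rigid objects, can be made concrete with a back-of-the-envelope size comparison. The sketch below is illustrative only: the vertex, joint-angle, and parameter counts are assumptions chosen for plausibility, not figures taken from VLNet, and the message layouts are hypothetical.

```python
import struct

# Hypothetical model sizes (assumptions, not VLNet's actual figures):
NUM_VERTICES = 5000      # vertices in a deformable body mesh
NUM_JOINT_ANGLES = 75    # degrees of freedom of an articulated skeleton

def rigid_object_msg() -> bytes:
    """A nonarticulated object needs only a 4x4 transformation matrix."""
    return struct.pack("<16f", *([0.0] * 16))

def posture_msg() -> bytes:
    """An articulated body: a root matrix plus one angle per joint."""
    values = [0.0] * (16 + NUM_JOINT_ANGLES)
    return struct.pack(f"<{len(values)}f", *values)

def geometry_msg() -> bytes:
    """Direct geometry update: every vertex position, every frame."""
    values = [0.0] * (3 * NUM_VERTICES)
    return struct.pack(f"<{len(values)}f", *values)

for name, msg in [("rigid object", rigid_object_msg()),
                  ("joint angles", posture_msg()),
                  ("raw geometry", geometry_msg())]:
    print(f"{name}: {len(msg)} bytes per update")
```

Under these assumptions a posture message is a few hundred bytes, several times larger than a rigid-object update but orders of magnitude smaller than shipping the deformed geometry itself, which is why posture-level encoding (and, at the other extreme, higher level task or motivation parameters) matters for network scalability.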


Index Terms:
virtual humans, avatars, motion control, artificial life, networked virtual environments, broadband networks.
Tolga K. Capin, Hansrudi Noser, Daniel Thalmann, Igor Sunday Pandzic, Nadia Magnenat Thalmann, "Virtual Human Representation and Communication in VLNet," IEEE Computer Graphics and Applications, vol. 17, no. 2, pp. 42-53, March-April 1997, doi:10.1109/38.574680