Realistic participant representation in networked virtual environments involves two elements: believable appearance and realistic movement. Virtual human figures fulfill both requirements, because they provide a direct relationship between how we control our avatar in the virtual world and how the avatar moves in response to that control. Including a virtual human representation is not straightforward: the virtual body should move naturally, in accordance with the user's actual body, even with a small number of tracked degrees of freedom; and facial communication should be part of the representation. In addition, the architecture combining motion control with the virtual environment should be efficient and modular. We describe three types of motion control: direct control, where the geometry is changed directly; user-guided actors, where the actor's motor skills are exploited by assigning high-level tasks to perform; and autonomous actors, which are controlled by high-level motivations. Similarly, the face can be animated using video, speech, or higher-level parameters. The articulated structure of the human body, together with the face, introduces new complexity in the usage of network resources, because the message needed to convey a body posture is larger than that needed for simple, nonarticulated objects. We analyze the network requirements of the different message types used to animate the human body and face, comparing them with respect to coding computation at the sender site, transmission overhead, and decoding computation at the receiver site.
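The size gap between posture messages and rigid-object messages can be illustrated with a minimal sketch. The figures below are assumptions for illustration only, not VLNet's actual wire format: a rigid object is encoded as position plus quaternion (7 floats), while an articulated body is encoded as one angle per joint degree of freedom, with 75 DOFs assumed for a full-body skeleton.

```python
import struct

# Hypothetical encodings (illustrative assumptions, not the VLNet protocol).
# A rigid, nonarticulated object needs only position + orientation:
# 3 floats for position, 4 floats for a quaternion.
rigid_msg = struct.pack("<3f4f", *([0.0] * 7))

# An articulated virtual human needs one angle per degree of freedom;
# 75 joint DOFs is assumed here as a plausible full-body figure.
NUM_DOFS = 75
posture_msg = struct.pack(f"<{NUM_DOFS}f", *([0.0] * NUM_DOFS))

print(len(rigid_msg))    # 28 bytes per update
print(len(posture_msg))  # 300 bytes per update
```

Under these assumptions a posture update is roughly an order of magnitude larger than a rigid-object update, which is why the choice of message type matters for network load.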
virtual humans, avatars, motion control, artificial life, networked virtual environments, broadband networks.
Tolga K. Capin, Hansrudi Noser, Daniel Thalmann, Nadia Magnenat Thalmann, Igor Sunday Pandzic, "Virtual Human Representation and Communication in VLNet", IEEE Computer Graphics and Applications, vol. 17, no. , pp. 42-53, March-April 1997, doi:10.1109/38.574680