Guest Editors' Introduction: Computer Animation for Virtual Humans
SEPTEMBER/OCTOBER 1998 (Vol. 18, No. 5) pp. 20-23
0272-1716/98/$31.00 © 1998 IEEE

Published by the IEEE Computer Society
Rae Earnshaw, University of Bradford

Nadia Magnenat-Thalmann, University of Geneva

Demetri Terzopoulos, University of Toronto and Intel

Daniel Thalmann, Swiss Federal Institute of Technology
Advances in computer animation techniques have spurred increasing levels of realism in virtual characters, whose movements now closely mimic physical reality. Increases in computational power and advances in control methods enable the creation of 3D virtual humans for real-time interactive applications [1]. Artificial intelligence techniques and autonomous agents give computer-generated characters a life of their own and let them interact with other characters in virtual worlds. Developments in networking and virtual reality (VR) let multiple participants share virtual worlds and interact with applications or each other.
Controlling Actions and Behavior
High-level control procedures make it possible to give computer-generated characters behaviors that make them appear "intelligent": they interact with other characters that have similar properties and respond to environmental situations in a meaningful and constructive way. Such systems have the potential to receive script information as input and produce computer-generated sequences as output, as the sketch below illustrates. Application areas include production animation and interactive computer games. In addition, researchers are currently investigating ways of having virtual humans perform complex tasks reliably [1].
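As a minimal sketch of this script-to-behavior idea, consider the following Python fragment. It is our illustration rather than anything from the articles in this issue: the commands, motion primitives, and reactive rules are all invented, and a production system would replace the rule tables with a genuine behavioral model.

from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    motion_queue: list = field(default_factory=list)

    def perform(self, primitive: str) -> None:
        # A real system would drive a motion generator here; we just
        # record the primitive to be played back.
        self.motion_queue.append(primitive)

# Script verbs expand into sequences of low-level motion primitives.
SCRIPT_RULES = {
    "greet":   ["turn_to_target", "wave", "smile"],
    "walk_to": ["plan_path", "walk_cycle", "stop"],
}

# Reactive rules: environmental events interleave with the script.
REACTIVE_RULES = {
    "obstacle_ahead": ["stop", "step_around"],
}

def run_script(character, script, events):
    """Expand each script command into motion primitives, letting
    environmental events trigger reactive behavior first."""
    for command in script:
        for event in events:
            for primitive in REACTIVE_RULES.get(event, []):
                character.perform(primitive)
        for primitive in SCRIPT_RULES.get(command, []):
            character.perform(primitive)

actor = Character("guide")
run_script(actor, ["greet", "walk_to"], events={"obstacle_ahead"})
print(actor.motion_queue)

Here the reactive rule fires before the scripted command, one simple way of letting environmental events take priority over the script.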
Shared Environments
Computer-supported collaborative work (CSCW) often involves interaction and discussion about computer-generated information such as models, simulations, annotations, and data accessed in shared virtual environments (VEs). Representing users by computer-generated characters (avatars) facilitates communication and interaction. An interesting question is what form such avatars should take to best promote life-like and interesting behaviors that mirror the owner and evoke meaningful and creative responses from other avatars' owners in the virtual world. A shared experience in an artificial computer-generated world implies, in some sense, a belief that the world is real (that is, the suspension of disbelief). Research to date makes clear that creating environments that look real and believable is easier than creating moving characters that look real. Moreover, increasing the characters' fidelity doesn't necessarily increase the feeling that their world is real. Engaging users in the required tasks appears to be the first step toward making the interface transparent and enhancing the relationship with other objects or users in the virtual world. Computer games such as "Doom" and the SimNet tank interface [2] both get the user to concentrate on task performance at an early stage. Pausch et al. [3] reported similar results.
Distributed VEs
Avatars and agents have an interesting relationship: an avatar represents a user, whereas an agent personalizes information. The presence of avatars and agents in the same environment seems a fruitful area for further work. Current evidence suggests that avatars link the user to the virtual world very well initially. From then on, however, less sophisticated representations suffice to convey information and facilitate communication, except in application domains where the framework is just as important as the action (for example, when playing tennis in a public forum). Even then, the tennis players themselves could operate on more basic physical models and representations, since they concentrate on the task rather than on the framework or the event as a whole. This is probably one reason why computer games succeed.
A second aspect of the rapid rate of change is the increasing degree of real-time control passed on to the user or viewer by giving them access to new forms of interactive content. A third aspect is the increasing importance and prominence of the Internet and the facilitation of distributed VEs that Web technology provides. We're thus seeing a convergence of content creation and technology delivery as well as a migration of infrastructure technologies down to the Internet [5, 6]. Both these trends increase the relevance and importance of tools and techniques for realistic modeling and movement of human-like characters to populate scenes or represent human users in geographically dispersed places.
This Issue
This special issue features five articles on computer animation for virtual humans. The first is a survey of virtual humans and the techniques used to control the face and body. The article also covers higher level interfaces that allow direct speech input and examines issues associated with real-time control. Real-time control is particularly important in avatar rehearsal scenarios for animation production, where the director requires characters to interact in real time during the production. When the director shouts "Stop" or "Move now," the real-time constraints are considerable: providing an instantaneous response requires a behavioral model for the characters more sophisticated than those currently available.
The article by Rose, Bodenheimer, and Cohen presents a technique for interpolating between basis motions derived from annotated motion-capture data or traditional animation. The interpolation is defined over a space of adverbs such as emotional characteristics or physical traits. Radial basis functions and linear regression are used to map a desired point in adverb space to the appropriate combination of basis motions. At runtime, the motion is controlled by a set of parameters called "adverbs" and through a graph of motions (such as walking or running) called "verbs." The graph defines the possible transitions between verbs and how they must be performed. Verbs, adverbs, and verb graphs are defined offline in an authoring system. User annotations place example basis motions along dimensions such as "happiness," or more generally at some point in the adverb space. During a transition between two graph nodes, only a simple blending is performed due to real-time constraints. The authoring system permits the definition of kinematic constraints, allowing, for example, a hand to hold on to a lever during a particular time period (via standard inverse kinematic techniques).
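As a minimal sketch of the core interpolation step, the following Python fragment uses radial basis functions to blend example motions placed in an adverb space. It is our illustration, not the authors' code: the cubic kernel and the toy one-dimensional "happiness" axis are assumptions, and the linear-regression term of the published method is omitted for brevity.

import numpy as np

def rbf_weights(adverb_points, kernel):
    """Solve for weights that make the interpolant reproduce each
    example motion exactly at its own adverb-space location."""
    # Pairwise distances between the example locations.
    d = np.linalg.norm(adverb_points[:, None, :] - adverb_points[None, :, :],
                       axis=-1)
    return np.linalg.solve(kernel(d), np.eye(len(adverb_points)))

def interpolate_motion(adverb_points, basis_motions, query,
                       kernel=lambda r: r**3):
    """Blend example motions (frames x joint angles) at a query point
    in adverb space, e.g. a desired degree of happiness."""
    W = rbf_weights(adverb_points, kernel)
    r = np.linalg.norm(adverb_points - query, axis=-1)
    blend = kernel(r) @ W      # one interpolation weight per example
    blend /= blend.sum()       # normalize so the pose stays plausible
    return np.tensordot(blend, basis_motions, axes=1)

# Two example walks annotated along a one-dimensional "happiness" axis.
points  = np.array([[0.0], [1.0]])             # sad walk, happy walk
motions = np.stack([np.zeros((60, 20)),        # 60 frames x 20 joint angles
                    np.ones((60, 20))])
half_happy = interpolate_motion(points, motions, np.array([0.5]))
print(half_happy.shape)                        # (60, 20)

In the full technique, interpolation of this kind runs within each verb, while the verb graph handles transitions between verbs at runtime.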
Moccozet et al. describe an innovative interactive animation system for building and simulating real-time virtual humans. The system emphasizes aspects of modeling and deformation that increase the realism of virtual humans' appearance. Two applications illustrate the system's usability and performance. The first, virtual tennis, allows two virtual humans to play a game of tennis judged by an autonomous virtual referee. In the second, CyberDance, a real choreographer is linked via sensors to a metallic robot. A further sequence links a real dancer to a virtual one.
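For readers unfamiliar with real-time body deformation, the following Python sketch shows linear blend skinning, a common baseline in which each skin vertex follows a weighted blend of bone transforms. This is a generic stand-in for illustration only, not the authors' method; their system employs its own, more sophisticated deformation models.

import numpy as np

def blend_skin(rest_vertices, bone_transforms, weights):
    """Deform rest-pose vertices by a weighted blend of per-bone rigid
    transforms (4x4 homogeneous matrices)."""
    homogeneous = np.hstack([rest_vertices,
                             np.ones((len(rest_vertices), 1))])
    deformed = np.zeros_like(rest_vertices)
    for b, transform in enumerate(bone_transforms):
        # Each bone moves every vertex; the weight says how much counts.
        moved = homogeneous @ transform.T
        deformed += weights[:, b:b + 1] * moved[:, :3]
    return deformed

# Two bones, three vertices; the middle vertex is shared between bones.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
still = np.eye(4)
lift = np.eye(4)
lift[1, 3] = 0.5                                    # second bone rises in y
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # per-vertex bone weights
print(blend_skin(rest, [still, lift], w))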
The article by Brogan, Metoyer, and Hodgins describes two VEs showing novel uses of dynamically simulated characters. The first is a border collie environment; the second, an Olympic bicycle race. Both examples use dynamically simulated, animated characters in networked VEs, and thus let the user interact intuitively with responsive characters. The article presents a real-time solution with 16 dynamically controlled characters. The system architecture for integrating the various components to achieve the required real-time performance is also a significant contribution. Such an environment can test the hypothesis that generating complex and interesting behaviors in response to real-time user actions increases the user's involvement in the simulated scenarios.
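Dynamically simulated characters of this kind are typically driven by proportional-derivative (PD) controllers that compute joint torques toward desired angles. The following Python sketch shows the idea for a single hinge joint with unit inertia; the gains and the explicit Euler integration are illustrative assumptions, not details from the article.

def pd_torque(theta, theta_dot, theta_desired, kp=300.0, kd=30.0):
    """Proportional-derivative torque: pull the joint toward the
    desired angle while damping its velocity."""
    return kp * (theta_desired - theta) - kd * theta_dot

# Integrate one hinge joint with unit inertia toward a target angle.
theta, theta_dot, dt = 0.0, 0.0, 0.001
for _ in range(2000):                    # two simulated seconds
    torque = pd_torque(theta, theta_dot, theta_desired=1.0)
    theta_dot += torque * dt             # acceleration = torque / inertia
    theta += theta_dot * dt
print(round(theta, 3))                   # settles near the 1.0 target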
Eisert and Girod present a technique for analyzing video sequences of people's heads and faces. The rigid movement and deformation of the face are estimated from the sequence by combining optical flow techniques with a synthetic 3D model of the person. This leads to a robust, linear algorithm that estimates facial animation parameters with low computational complexity. A multiresolution framework overcomes the restriction to small object motion. A head model constrains the motion and deformation of the face to a set of facial animation parameters defined by the MPEG-4 video standard, yielding a description of both global and local 3D head motion as a function of the unknown facial parameters.
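To make the linearity concrete, the following Python sketch sets up the kind of least-squares system the article describes: each pixel contributes an optical-flow constraint that is linear in the facial animation parameters once the head model supplies the per-parameter image motion. The Jacobian and image gradients here are synthetic stand-ins, so only the structure of the computation, not the data, reflects the method.

import numpy as np

def estimate_faps(I_x, I_y, I_t, flow_jacobian):
    """Least-squares estimate of facial animation parameters (FAPs).

    Each pixel supplies the optical-flow constraint I_x*u + I_y*v + I_t = 0,
    and the head model predicts (u, v) = J @ params, so stacking all pixels
    gives an overdetermined linear system A @ params = -I_t."""
    grads = np.stack([I_x, I_y], axis=1)               # (pixels, 2)
    A = np.einsum('pc,pck->pk', grads, flow_jacobian)  # (pixels, params)
    params, *_ = np.linalg.lstsq(A, -I_t, rcond=None)
    return params

# Synthetic check: 500 pixels, 3 parameters, known ground truth.
rng = np.random.default_rng(0)
J = rng.normal(size=(500, 2, 3))               # model flow per parameter
true_params = np.array([0.2, -0.1, 0.05])
Ix, Iy = rng.normal(size=(2, 500))
flow = np.einsum('pck,k->pc', J, true_params)  # model-predicted (u, v)
It = -(Ix * flow[:, 0] + Iy * flow[:, 1])      # consistent temporal gradient
print(np.round(estimate_faps(Ix, Iy, It, J), 3))  # recovers true_params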
The Way Forward
This issue presents significant developments in computer animation for virtual humans, particularly in the context of networked environments with distributed users. These developments have great potential as technologies converge and tools for content creation become increasingly synergetic with those for shared environments and interaction. High-level tools are needed to translate content scripts into life-like, realistic behaviors of computer-generated characters capable of emotional responses (just as real actors are). In turn, this will engage users, who should achieve the same levels of satisfaction in shared applications as they currently do with entertainment applications.
We acknowledge input received from David Leevers, chair of the European Commission special interest group on distributed environments (SID) Chain on Telepresence and Shared Virtual Environments. Work in progress and current documents may be found at http://www.infowin.org/acts/analysys/concertation/chains/si/home/ch_sid/. This Web site also contains a proposed Reference Model for Telepresence and Shared Virtual Environments.
A list of Virtual Human Web pointers may be found at http://www.cis.upenn.edu/badler/vhlist.html/.

References

Rae Earnshaw is professor and head of electronic imaging and media communications at the University of Bradford, UK. His research interests include imaging, graphics, visualization, animation, multimedia, virtual reality, art, design, and the convergence of computing, telephony, media, and broadcasting. He obtained his PhD in computer science from the University of Leeds. He is a member of the editorial boards of The Visual Computer, IEEE Computer Graphics and Applications, and The Journal of Visualization and Computer Animation; managing editor of Virtual Reality; vice-president of the Computer Graphics Society; chair of the British Computer Society Computer Graphics and Displays Group; and a fellow of the British Computer Society. He is a member of ACM, IEEE, and Eurographics.

Nadia Magnenat-Thalmann has researched virtual humans for more than 20 years. She studied psychology, biology, and chemistry at the University of Geneva and obtained her PhD in computer science (cum laude) in 1977. In 1989 she founded Miralab, an interdisciplinary creative research laboratory at the University of Geneva. Recent recognition of her work includes the 1992 Moebius Prize for the best multimedia system, awarded by the European Community; a "Best Paper" award at the British Computer Graphics Society congress in 1993; election to the Brussels Film Academy for her work in virtual worlds in 1993; and election to the Swiss Academy of Technical Sciences in 1997. She is president of the Computer Graphics Society and chair of the IFIP Working Group 5.10 in computer graphics and virtual worlds.

Demetri Terzopoulos is a professor of computer science and electrical engineering at the University of Toronto and heads the computer animation research group at Intel. He received his PhD from MIT. He has held fellowships from the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research. In 1998 he was named a Killam Fellow of the Canada Council for the Arts. He has written extensively about computer vision and graphics, medical imaging, computer-aided design, artificial intelligence, and artificial life. He has received awards from the IEEE, AAAI, Nicograph, Ars Electronica, the International Digital Media Foundation, the Canadian Image Processing and Pattern Recognition Society, and the University of Toronto. He was program chair of the 1998 conference on Computer Vision and Pattern Recognition (CVPR 98).

Daniel Thalmann researches real-time virtual humans in virtual reality, networked virtual environments, artificial life, and multimedia at the Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne, EPFL). He received a diploma in nuclear physics in 1970, a certificate in statistics and computer science in 1972, and a PhD in computer science (cum laude) in 1977 from the University of Geneva. He is co-editor-in-chief of The Journal of Visualization and Computer Animation and a member of the editorial boards of The Visual Computer, the CADDM Journal (China Engineering Society), and Computer Graphics (Russia). He is co-chair of the Eurographics Working Group on Computer Simulation and Animation and a member of the executive board of the Computer Graphics Society.