Issue No. 5, September-October 1998 (Vol. 18), pp. 24-31
Technologies for computer animation let users generate, control, and interact with lifelike representations of humans in virtual worlds. Such worlds may be 2D, 3D, real-time 3D, or real-time 3D shared with participants at remote locations. Users can produce virtual humans and interact with them. Doing so requires an interface and a model of how the virtual human behaves in response to an external stimulus or to the presence of another virtual human in the environment. Programming behavioral models with emotional responses and encapsulating them in virtual humans remains an open research challenge. In particular, current work aims to elicit interesting and engaging responses from virtual humans whose nondeterministic behaviors reflect how real people interact in real time.
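The idea of a behavioral model that maps an external stimulus to a nondeterministic response can be sketched minimally as follows. This is a hypothetical illustration, not the architecture described in the article: the class name `VirtualHuman`, the stimulus labels, and the weighted-response table are all invented for the example.

```python
import random

class VirtualHuman:
    """Hypothetical sketch of a virtual human with a simple behavioral model.

    The model maps each stimulus to a list of (response, weight) pairs;
    a response is sampled nondeterministically, so repeated encounters
    with the same stimulus need not play out identically.
    """

    def __init__(self, name, behavior_model, seed=None):
        self.name = name
        self.behavior_model = behavior_model  # stimulus -> [(response, weight)]
        self.rng = random.Random(seed)        # per-agent RNG for reproducibility

    def react(self, stimulus):
        candidates = self.behavior_model.get(stimulus)
        if not candidates:
            return "idle"  # no rule for this stimulus
        responses, weights = zip(*candidates)
        return self.rng.choices(responses, weights=weights, k=1)[0]

# Example model: reactions to another virtual human entering the
# environment, weighted toward a greeting (weights are illustrative).
model = {
    "actor_approaches": [("wave", 0.6), ("step_back", 0.3), ("ignore", 0.1)],
    "loud_noise": [("startle", 0.8), ("turn_head", 0.2)],
}

actor = VirtualHuman("avatar_1", model, seed=42)
print(actor.react("actor_approaches"))  # one of: wave, step_back, ignore
```

A real system would replace the lookup table with a richer model (sensor input, memory, emotional state), but the stimulus-to-weighted-response loop captures the nondeterminism the abstract refers to.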
Marc Cavazza, Rae Earnshaw, Nadia Magnenat-Thalmann, Daniel Thalmann, "Motion Control of Virtual Humans", IEEE Computer Graphics and Applications, vol.18, no. 5, pp. 24-31, September-October 1998, doi:10.1109/38.708558
1. N.I. Badler et al., "Positioning and Animating Figures in a Task-Oriented Environment," The Visual Computer, Vol.1, No.4, 1985, pp. 212-220.
2. R. Boulic, R. Mas, and D. Thalmann, "Complex Character Positioning Based on a Compatible Flow Model of Multiple Supports," IEEE Trans. Visualization and Computer Graphics, Vol. 3, No. 3, 1997, pp. 245-261.
3. W.W. Armstrong, M. Green, and R. Lake, "Near Real-Time Control of Human Figure Models," IEEE CG&A, Vol.7, No. 6, Nov. 1987, pp. 28-38.
4. A. Witkin and M. Kass, "Spacetime Constraints," Computer Graphics, Vol. 22, No. 4, 1988, pp. 159-168.
5. M.F. Cohen, "Interactive Spacetime Control for Animation," Computer Graphics, Vol. 26, No. 2, 1992, pp. 293-302.
6. D. Zeltzer, "Towards an Integrated View of 3D Computer Animation," The Visual Computer, Vol. 1, No. 4, 1985, pp. 249-259.
7. N. Magnenat-Thalmann and D. Thalmann, "Complex Models for Animating Synthetic Actors," IEEE CG&A, Vol. 11, No. 5, Sept. 1991, pp. 32-44.
8. D. Thalmann, "A New Generation of Synthetic Actors: the Interactive Perceptive Actors," Proc. Pacific Graphics 96, National Chiao Tung University Press, Hsinchu, Taiwan, 1996, pp. 200-219.
9. D. Thalmann, "Using Virtual Reality Techniques in the Animation Process," Virtual Reality Systems, Academic Press, San Diego, 1993, pp. 143-159.
10. H. Noser et al., "Navigation for Digital Actors Based on Synthetic Vision, Memory, and Learning," Computers and Graphics, Pergamon Press, Exeter, UK, Vol. 19, No. 1, 1995, pp. 7-19.
11. O. Renault, N. Magnenat-Thalmann, and D. Thalmann, "A Vision-Based Approach to Behavioral Animation," J. Visualization and Computer Animation, Vol. 1, No. 1, 1990, pp. 18-21.
12. L. Emering, R. Boulic, and D. Thalmann, "Interacting with Virtual Humans through Body Actions," IEEE CG&A, Vol. 18, No. 1, 1998, pp. 8-11.
13. K. Mase and A. Pentland, "Automatic Lipreading by Computer," Trans. Inst. Elect. Information and Communication Eng., Vol. J73-D-II, No. 6, 1990, pp. 796-803.
14. I. Essa and A. Pentland, "Facial Expression Recognition Using Visually Extracted Facial Action Parameters," Proc. Int'l Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, 1995.
15. K. Waters and D. Terzopoulos, "Modeling and Animating Faces using Scanned Data," J. Visualization and Computer Animation, Vol. 2, No. 4, 1991, pp. 123-128.
16. D. Terzopoulos and K. Waters, "Techniques for Realistic Facial Modeling and Animation," Proc. Computer Animation 91, Springer-Verlag, New York, 1991, pp. 59-74.
17. E.M. Caldognetto et al., "Automatic Analysis of Lips and Jaw Kinematics in VCV Sequences," Proc. Eurospeech '89 Conf. 2, European Speech Communication Assoc. (ESCA), Grenoble, France, 1989, pp. 453-456.
18. E.C. Patterson, P.C. Litwinowicz, and N. Greene, "Facial Animation by Spatial Mapping," Proc. Computer Animation 91, Springer-Verlag, New York, 1991, pp. 31-44.
19. A. Azarbayejani, T. Starner, B. Horowitz, and A. Pentland, "Visually Controlled Graphics," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 15, No. 6, June 1993, pp. 602-605 (Special Section on 3D Modeling in Image Analysis and Synthesis).
20. H. Li, P. Roivainen, and R. Forchheimer, "3D Motion Estimation in Model-Based Facial Image Coding," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 15, No. 6, June 1993, pp. 545-555.
21. I.S. Pandzic et al., "Real-Time Facial Interaction," Displays, Vol. 15, No. 3, 1994, pp. 157-163.
22. T. Capin et al., "Virtual Human Representation and Communication in the VLNet Networked Virtual Environments," IEEE CG&A, Vol. 17, No. 2, 1997, pp. 42-53.
23. B. Webber et al., Instructions, Intentions, and Expectations, IRCS Report 94-01, Institute for Research in Cognitive Science, University of Pennsylvania, 1994.
24. C.R. Reeve and I.J. Palmer, "Virtual Rehearsal over Networks," to be published in Proc. Digital Convergence, British Computer Society, Bradford, UK.
25. P.R. Cohen and S.L. Oviatt, "The Role of Voice in Human-Machine Communication," Voice Communication between Humans and Machines, D. Roe and J. Wilpon, eds., National Academy of Sciences Press, Washington, DC, 1994, pp. 34-75.
26. C. Geib, L. Levison, and M.B. Moore, Sodajack: An Architecture for Agents that Search for and Manipulate Objects, Tech. Report MS-CIS-94-13/Linc Lab 265, Dept. of Computer and Information Science, University of Pennsylvania, 1994.
27. A. Joshi, L. Levy, and M. Takahashi, "Tree Adjunct Grammars," J. Computer and System Sciences, Vol. 10, No. 1, 1975, pp. 136-163.
28. M. Cavazza, "An Integrated TFG Parser with Explicit Tree Typing," to be published in Proc. Fourth TAG + Workshop, University of Pennsylvania, 1998.
29. C. Beardon and V. Ye, "Using Behavioural Rules in Animation," Computer Graphics: Developments in Virtual Environments, R.A. Earnshaw and J.A. Vince, eds., Academic Press, London, 1995, pp. 217-234.
30. K. Perlin and A. Goldberg, "Improv: A System for Scripting Interactive Actors in Virtual Worlds," Proc. Siggraph 96, ACM Press, New York, 1996, pp. 205-216.
31. D. Kurlander and D.T. Ling, "Planning-Based Control of Interface Animation," Proc. CHI 95 Conf., ACM Press, New York, 1995, pp. 472-479.
32. N. Magnenat-Thalmann and D. Thalmann, "Digital Actors for Interactive Television," Proc. IEEE, Special Issue on Digital Television, Part 2, July 1995, pp. 1022-1031.
33. R.C. Schank and R.P. Abelson, Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures, Lawrence Erlbaum Associates, Hillsdale, N.J., 1977.
34. X. Tu and D. Terzopoulos, "Artificial Fishes: Physics, Locomotion, Perception, Behavior," Proc. Siggraph 94, ACM, New York, July 1994, pp. 43-50.
35. S.R. Clay and J. Wilhelms, "Put: Language-Based Interactive Manipulation of Objects," IEEE CG&A, Vol. 16, No. 2, March 1996, pp. 31-39.
36. R. Bolt, "Put-That-There: Voice and Gesture at the Graphics Interface," Computer Graphics, Vol. 14, No. 3, 1980, pp. 262-270.
37. J.K. Hodgins, "Animating Human Motion," Scientific American, Vol. 278, No. 3, March 1998, pp. 46-51.
38. J.K. Hodgins and N.S. Pollard, "Adapting Simulated Behaviors for New Characters," Proc. Siggraph 97, ACM Press, New York, 1997, pp. 153-162.
39. N.I. Badler, "Real-Time Virtual Humans," Proc. Pacific Graphics 97, IEEE CS Press, Los Alamitos, Calif., 1997, pp. 4-13.