Social-Event-Driven Camera Control for Multicharacter Animations
Sept. 2012 (vol. 18 no. 9)
pp. 1496-1510
Tong-Yee Lee, Dept. of Comput. Sci. & Inf. Eng., Nat. Cheng Kung Univ., Tainan, Taiwan
Wen-Chieh Lin, Dept. of Comput. Sci., Nat. Chiao Tung Univ., Hsinchu, Taiwan
I-Cheng Yeh, Dept. of Comput. Sci. & Inf. Eng., Nat. Cheng Kung Univ., Tainan, Taiwan
Hsin-Ju Han, Dept. of Comput. Sci. & Inf. Eng., Nat. Cheng Kung Univ., Tainan, Taiwan
Jehee Lee, Sch. of Comput. Sci. & Eng., Seoul Nat. Univ., Seoul, South Korea
Manmyung Kim, Sch. of Comput. Sci. & Eng., Seoul Nat. Univ., Seoul, South Korea
In a virtual world, a group of virtual characters can interact with each other, and characters may leave one group to join another. The interactions among individuals and groups often produce interesting events in an animation sequence. The goal of this paper is to discover social events involving mutual interactions or group activities in multicharacter animations and to automatically plan a smooth camera motion that views interesting events suggested by our system or relevant events specified by a user. Inspired by sociology studies, we borrow knowledge from Proxemics, the social force model, and social network analysis to model the dynamic relations among social events and among the participants within each event. By analyzing the variation of relation strength among participants and the spatiotemporal correlation among events, we discover salient social events in a motion clip and generate an overview video of these events with smooth camera motion using a simulated annealing optimization method. We tested our approach on different motions performed by multiple characters. Our user study shows that our results are preferred in 66.19 percent of the comparisons against a camera control approach without event analysis and are comparable (51.79 percent) to professional results produced by an artist.
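The camera planning step described in the abstract optimizes a smooth camera motion with simulated annealing. The sketch below is a minimal illustration of that idea only: a generic annealing loop over per-frame camera positions, where the cost terms (path_cost, its smoothness and event-coverage penalties) and all names are hypothetical placeholders rather than the objective actually used in the paper.

    # Python sketch: simulated-annealing search for a smooth camera path.
    # All names and cost terms are illustrative assumptions, not the paper's method.
    import math
    import random

    def path_cost(path, events):
        """Penalize camera jerkiness plus distance from salient-event anchors.

        path   -- list of (x, y, z) camera positions, one per frame
        events -- list of (frame_index, (x, y, z)) event anchor points
        """
        # Smoothness: squared second differences along the trajectory.
        smooth = sum(
            sum((2 * b[i] - a[i] - c[i]) ** 2 for i in range(3))
            for a, b, c in zip(path, path[1:], path[2:])
        )
        # Coverage: squared distance between the camera and each event anchor.
        coverage = sum(
            sum((path[f][i] - p[i]) ** 2 for i in range(3))
            for f, p in events
        )
        return smooth + coverage

    def anneal_camera_path(path, events, iters=20000, t0=1.0, cooling=0.9995):
        """Perturb one keyframe at a time; accept worse paths with prob. exp(-delta/T)."""
        cur = [list(p) for p in path]
        cur_c = path_cost(cur, events)
        best, best_c = cur, cur_c
        t = t0
        for _ in range(iters):
            cand = [p[:] for p in cur]
            k = random.randrange(len(cand))
            cand[k] = [v + random.gauss(0.0, 0.1) for v in cand[k]]
            c = path_cost(cand, events)
            if c < cur_c or random.random() < math.exp((cur_c - c) / t):
                cur, cur_c = cand, c
                if c < best_c:
                    best, best_c = cand, c
            t *= cooling
        return best

The annealing acceptance rule is what lets the search escape locally smooth but poorly framed trajectories; the paper's actual objective presumably also encodes the discovered event saliency and cinematographic constraints, which this sketch omits.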

Index Terms:
virtual reality, cameras, computer animation, simulated annealing optimization method, social-event-driven camera control, multicharacter animation, virtual world, virtual characters, social events, group activities, mutual interactions, smooth camera motion, social network analysis, social force, Proxemics, spatiotemporal correlation, motion clip, trajectory, animation, social network services, force, particle measurements, atmospheric measurements, MOCAP, event analysis
Citation:
Tong-Yee Lee, Wen-Chieh Lin, I-Cheng Yeh, Hsin-Ju Han, Jehee Lee, Manmyung Kim, "Social-Event-Driven Camera Control for Multicharacter Animations," IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 9, pp. 1496-1510, Sept. 2012, doi:10.1109/TVCG.2011.273