Issue No. 03 - March 2012 (vol. 18), pp. 501-515
Rukun Fan, College of Computer Science, Zhejiang University (Yuquan Campus), Hangzhou, China
Songhua Xu, Oak Ridge National Laboratory, Oak Ridge, TN, USA
Weidong Geng, College of Computer Science, Zhejiang University (Yuquan Campus), Hangzhou, China
ABSTRACT
We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music-to-motion mapping relationship embodied in example dance motions and their accompanying background music. A key step in our method is to train a music-to-motion matching quality rating function by learning the mapping relationship exhibited in synchronized music and dance motion data captured from professional human dance performances. To generate an optimal sequence of dance motion segments for a piece of music, we introduce a constraint-based dynamic programming procedure that considers both the music-to-motion matching quality and the visual smoothness of the resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, that executes the dynamic programming process in parallel and yields a significant speedup. To evaluate the effectiveness of our method, we quantitatively compare its synthesized dance motions with the results of several peer methods, using motions captured from professional human dancers as the gold standard. We also conducted several medium-scale user studies to explore how our method perceptually outperforms existing methods in synthesizing dance motions that match a piece of music. These user studies produced very positive results for several Asian dance genres, confirming the advantages of our method.
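To make the selection step concrete, below is a minimal Python sketch of the constraint-based dynamic programming idea the abstract describes: pick one candidate motion segment per music segment so that the summed music-to-motion matching score plus transition-smoothness score is maximized. The functions match_score and transition_score are hypothetical placeholders standing in for the paper's learned rating function and visual-smoothness term; they are not the authors' actual implementation.

import numpy as np

def match_score(music_feat, motion_feat):
    # Placeholder for the learned music-to-motion matching quality rating.
    return -np.linalg.norm(music_feat - motion_feat)

def transition_score(prev_motion, next_motion):
    # Placeholder for the visual-smoothness term between consecutive segments.
    return -np.linalg.norm(prev_motion - next_motion)

def synthesize(music_feats, motion_feats, w=0.5):
    # Viterbi-style DP: T music segments, K candidate motion segments.
    T, K = len(music_feats), len(motion_feats)
    score = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    for k in range(K):
        score[0, k] = match_score(music_feats[0], motion_feats[k])
    for t in range(1, T):
        for k in range(K):
            # Best predecessor j under accumulated score + weighted smoothness.
            cand = [score[t - 1, j] + w * transition_score(motion_feats[j], motion_feats[k])
                    for j in range(K)]
            j = int(np.argmax(cand))
            score[t, k] = cand[j] + match_score(music_feats[t], motion_feats[k])
            back[t, k] = j
    # Trace back the optimal sequence of motion segment indices.
    k = int(np.argmax(score[T - 1]))
    path = [k]
    for t in range(T - 1, 0, -1):
        k = int(back[t, k])
        path.append(k)
    return path[::-1]

rng = np.random.default_rng(0)
music = rng.normal(size=(8, 4))     # 8 music segments with toy 4-D features
motions = rng.normal(size=(20, 4))  # 20 candidate motion segments
print(synthesize(music, motions))   # indices of the chosen motion segments

Note that the inner maximization over predecessors is independent across (t, k) pairs, which is the kind of structure a GPU-based parallel evaluation, such as the paper's two-way strategy, can exploit.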
INDEX TERMS
music, dynamic programming, graphics processing units, image matching, image motion analysis, image sequences, learning (artificial intelligence), Asian dance genres, example-based automatic music-driven conventional dance motion synthesis, learning-based approach, music-to-motion mapping relationship, music-to-motion matching quality rating function, synchronized music, professional human dance performance, optimal sequence, dance motion segments, constraint-based dynamic programming, visual smoothness, resultant dance motion sequence, two-way evaluation strategy, GPU-based implementation, peer methods, motion segmentation, feature extraction, correlation, training, joints, synchronization, humans, learning-based dance motion synthesis, dance motion and music mapping relationship, music-driven dance motion synthesis
CITATION
Rukun Fan, Songhua Xu, Weidong Geng, "Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis", IEEE Transactions on Visualization & Computer Graphics, vol. 18, no. 3, pp. 501-515, March 2012, doi:10.1109/TVCG.2011.73
REFERENCES
[1] G. Alankus, A. Bayazit, and O. Bayazit, “Automated Motion Synthesis for Virtual Choreography,” J. Computer Animation and Virtual Worlds, vol. 16, no. 3/4, pp. 259-271, 2005.
[2] O. Arikan and D. Forsyth, “Interactive Motion Generation from Examples,” ACM Trans. Graphics, vol. 21, no. 3, pp. 483-490, 2002.
[3] C. Bregler, M. Covell, and M. Slaney, “Video Rewrite: Driving Visual Speech with Audio,” Proc. ACM SIGGRAPH '97, pp. 353-360, 1997.
[4] L. Ren, G. Shakhnarovich, J. Hodgins, H. Pfister, and P. Viola, “Learning Silhouette Features for Control of Human Motion,” Proc. ACM SIGGRAPH '04, 2004.
[5] J. Kim, H. Fouad, J. Sibert, and J. Hahn, “Perceptually Motivated Automatic Dance Motion Generation for Music,” Computer Animation and Virtual Worlds, vol. 20, no. 2/3, pp. 375-384, 2009.
[6] M. Brand and A. Hertzmann, “Style Machines,” Proc. ACM SIGGRAPH '00, pp. 183-192, 2000.
[7] M. Cardle, L. Barthe, S. Brooks, and P. Robinson, “Music-Driven Motion Editing: Local Motion Transformations Guided by Music Analysis,” Proc. 20th UK Conf. Eurographics (EGUK '02), pp. 38-44, 2002.
[8] J. Chen and T. Li, “Rhythmic Character Animation: Interactive Chinese Lion Dance,” Proc. ACM SIGGRAPH '05, 2005.
[9] D. Ellis, “Beat Tracking by Dynamic Programming,” J. New Music Research, vol. 36, no. 1, pp. 51-60, 2007.
[10] J. Friedman, “Fast MARS,” Technical Report LCS 110, Dept. of Statistics, Stanford Univ., 1993.
[11] K. Grochow, S.L. Martin, A. Hertzmann, and Z. Popović, “Style-Based Inverse Kinematics,” Proc. ACM SIGGRAPH '04, pp. 522-531, 2004.
[12] A. Hoerl and R. Kennard, “Ridge Regression: Biased Estimation for Nonorthogonal Problems,” Technometrics, vol. 42, no. 1, pp. 80-86, 2000.
[13] E. Hsu, S. Gentry, and J. Popović, “Example-Based Control of Human Motion,” Proc. Symp. Computer Animation, pp. 69-77, 2004.
[14] E. Hsu, K. Pulli, and J. Popović, “Style Translation for Human Motion,” Proc. ACM SIGGRAPH '05, pp. 1082-1089, 2005.
[15] E. Keogh and C. Ratanamahatana, “Exact Indexing of Dynamic Time Warping,” Knowledge and Information Systems, vol. 7, no. 3, pp. 358-386, 2005.
[16] T. Kim, S. Park, and S. Shin, “Rhythmic-Motion Synthesis Based on Motion-Beat Analysis,” ACM Trans. Graphics, vol. 22, no. 3, pp. 392-401, 2003.
[17] L. Kovar, M. Gleicher, and F. Pighin, “Motion Graphs,” ACM Trans. Graphics, vol. 21, no. 3, pp. 473-482, 2002.
[18] R. Laban and L. Ullmann, The Mastery of Movement, 1971.
[19] O. Lartillot and P. Toiviainen, “MIR in Matlab (II): A Toolbox for Musical Feature Extraction from Audio,” Proc. Int'l Conf. Music Information Retrieval (ISMIR '07), pp. 237-244, 2007.
[20] H. Lee and I. Lee, “Automatic Synchronization of Background Music and Motion in Computer Animation,” Computer Graphics Forum, vol. 24, pp. 353-361, 2005.
[21] J. Lee, J. Chai, P. Reitsma, J. Hodgins, and N. Pollard, “Interactive Control of Avatars Animated with Human Motion Data,” ACM Trans. Graphics, vol. 21, no. 3, pp. 491-500, 2002.
[22] Y. Li, T. Wang, and H. Shum, “Motion Texture: A Two-Level Statistical Model for Character Motion Synthesis,” Proc. ACM SIGGRAPH '02, pp. 465-472, 2002.
[23] M. Maltamo and A. Kangas, “Methods Based on k-Nearest Neighbor Regression in the Prediction of Basal Area Diameter Distribution,” Canadian J. Forest Research, vol. 28, no.8, pp. 1107-1115, 1998.
[24] E. Moulines and F. Charpentier, “Pitch-Synchronous Waveform Processing Techniques for Text-to-Speech Synthesis Using Diphones,” Speech Comm., vol. 9, no. 5/6, pp. 453-467, 1990.
[25] P. Nardiello, F. Sebastiani, and A. Sperduti, “Discretizing Continuous Attributes in AdaBoost for Text Categorization,” Proc. 25th European Conf. IR Research (ECIR '03), pp. 320-334, 2003.
[26] M. Neff, I. Albrecht, and H. Seidel, “Layered Performance Animation with Correlation Maps,” Computer Graphics Forum, vol. 26, no. 3, pp. 675-684, 2007.
[27] M. Nørgaard, Neural Networks for Modelling and Control of Dynamic Systems: A Practitioner's Handbook. Springer, 2000.
[28] S. Oore and Y. Akiyama, “Learning to Synthesize Arm Motion to Music By Example,” Proc. Int'l Conf. Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG '06), 2006.
[29] M. Orr, “Introduction to Radial Basis Function Networks,” technical report, Inst. for Adaptive and Neural Computation, Edinburgh Univ., 1996.
[30] E. Pampalk, “A Matlab Toolbox to Compute Music Similarity from Audio,” Proc. Fifth Int'l Conf. Music Information Retrieval (ISMIR '04), pp. 254-257, 2004.
[31] S. Park, H. Shin, and S. Shin, “On-Line Locomotion Generation Based on Motion Blending,” Proc. Symp. Computer Animation, pp. 105-111, 2002.
[32] H. Peng, F. Long, and C. Ding, “Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226-1238, Aug. 2005.
[33] R. Schapire, “The Boosting Approach to Machine Learning: An Overview,” Lecture Notes in Statistics, Springer, pp. 149-172, 2003.
[34] T. Shiratori, A. Nakazawa, and K. Ikeuchi, “Dancing-to-Music Character Animation,” Computer Graphics Forum, vol. 25, pp. 449-458, 2006.
[35] T. Strohmann and G. Grudic, “A Formulation for Minimax Probability Machine Regression,” Proc. Advances in Neural Information Processing Systems, pp. 785-792, 2003.
[36] J. Suykens and J. Vandewalle, “Least Squares Support Vector Machine Classifiers,” Neural Processing Letters, vol. 9, no. 3, pp. 293-300, 1999.
[37] J. Wang and B. Bodenheimer, “An Evaluation of a Cost Metric for Selecting Transitions between Motion Segments,” Proc. Symp. Computer Animation, pp. 232-238, 2003.
[38] J. Wichard and C. Merkwirth, “ENTOOL: A Matlab Toolbox for Ensemble Modeling,” http://www.j-wichard.de/entool, 2007.
[39] A. Witkin and Z. Popović, “Motion Warping,” Proc. ACM SIGGRAPH '95, pp. 105-108, 1995.
[40] L. Zhao and A. Safonova, “Achieving Good Connectivity in Motion Graphs,” Proc. Symp. Computer Animation, 2008.
[41] J. Zhu, S. Rosset, H. Zou, and T. Hastie, “Multi-Class AdaBoost,” technical report, Stanford Univ., 2005.
[42] F. Ofli, E. Erzin, Y. Yemez, and A.M. Tekalp, “Multi-Modal Analysis of Dance Performances for Music-Driven Choreography Synthesis,” Proc. IEEE Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '10), 2010.
[43] R. Fan, J. Fu, S. Cheng, X. Zhang, and W. Geng, “Rhythm Based Motion-Music Matching Model,” J. Computer-Aided Design and Computer Graphics, vol. 22, pp. 990-996, 2010.
[44] D. Cooke, The Language of Music. Oxford Univ. Press, 2010.
[45] M. Goto, “An Audio-Based Real-Time Beat Tracking System for Music with or without Drum-Sounds,” J. New Music Research, vol. 30, pp. 159-171, 2001.
[46] M.J. Carey, E.S. Parris, and H. Lloyd-Thomas, “A Comparison of Features for Speech, Music Discrimination,” Proc. Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '99), 1999.
[47] D. Liu, L. Lu, and H.J. Zhang, “Automatic Mood Detection from Acoustic Music Data,” Proc. Int'l Conf. Music Information Retrieval (ISMIR '03), 2003.
[48] D. Liu, L. Lu, and H.J. Zhang, “Phase-Based Note Onset Detection for Music Signals,” Proc. Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '03), 2003.
[49] O. Izmirli, “Using a Spectral Flatness Based Feature for Audio Segmentation and Retrieval,” Proc. Int'l Conf. Music Information Retrieval (ISMIR '00), 2000.
[50] L. Knopoff and W. Hutchinson, “Entropy as a Measure of Style: The Influence of Sample Length,” J. Music Theory, vol. 27, pp. 75-97, 1983.