Issue No. 05 - May (2013 vol. 19)
ISSN: 1077-2626
pp: 811-823
Yin Yang , Dept. of Comput. Sci., Univ. of Texas at Dallas, Richardson, TX, USA
Xiaohu Guo , Dept. of Comput. Sci., Univ. of Texas at Dallas, Richardson, TX, USA
J. Vick , Dept. of Psychological Sci., Case Western Reserve Univ., Cleveland, OH, USA
L. G. Torres , Dept. of Comput. Sci., Univ. of North Carolina at Chapel Hill, Chapel Hill, NC, USA
T. F. Campbell , Callier Center for Commun. Disorders, Univ. of Texas at Dallas, Richardson, TX, USA
In this paper, a physics-based framework is presented for visualizing human tongue deformation. The tongue is modeled with the Finite Element Method (FEM) and driven by motion capture data gathered during speech production. Several novel deformation visualization techniques are presented for in-depth data analysis and exploration. To reveal the hidden semantic information of the tongue deformation, we present a novel physics-based volume segmentation algorithm: the tongue model is decomposed into segments according to its deformation pattern by computing deformation subspaces and fitting the target deformation locally within each segment. In addition, strain energy is utilized to provide an intuitive low-dimensional visualization of the high-dimensional sequential motion. Energy-interpolation-based morphing is also employed to effectively highlight subtle differences between 3D deformed shapes without visual occlusion. Our experimental results and analysis demonstrate the effectiveness of this framework. The proposed methods, though originally designed for exploring tongue deformation, are also applicable to general deformation analysis of other shapes.
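To illustrate the strain-energy measure the abstract uses for low-dimensional visualization, the following is a minimal sketch, not the authors' implementation: it computes the small-strain elastic energy of a single tetrahedral FEM element from its rest and deformed vertex positions. The Lame parameters `mu` and `lam`, the function name, and the test geometry are all illustrative assumptions; summing this quantity over all elements of a mesh would give one scalar per frame of a motion sequence.

```python
import numpy as np

def strain_energy(rest, deformed, mu=1.0, lam=1.0):
    """Small-strain elastic energy of one tetrahedral element.

    rest, deformed: (4, 3) arrays of vertex positions.
    mu, lam: Lame parameters (illustrative values, not from the paper).
    """
    Dm = (rest[1:] - rest[0]).T          # rest-shape edge matrix (3x3)
    Ds = (deformed[1:] - deformed[0]).T  # deformed-shape edge matrix
    F = Ds @ np.linalg.inv(Dm)           # deformation gradient
    eps = 0.5 * (F + F.T) - np.eye(3)    # linear (small-strain) tensor
    # energy density: mu * ||eps||_F^2 + (lam / 2) * tr(eps)^2
    W = mu * np.sum(eps * eps) + 0.5 * lam * np.trace(eps) ** 2
    vol = abs(np.linalg.det(Dm)) / 6.0   # rest volume of the tetrahedron
    return W * vol

# Unit tetrahedron stretched 10% along the x axis
rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
deformed = rest * np.array([1.1, 1.0, 1.0])
print(strain_energy(rest, deformed))  # → 0.0025
```

Because the energy vanishes for rigid translations and grows with local stretching, plotting the total per-frame energy over a recorded utterance yields the kind of one-dimensional summary curve of a high-dimensional motion that the abstract describes.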
Tongue, Sensors, Speech, Production, Shape, Deformable models, Visualization

Yin Yang, Xiaohu Guo, J. Vick, L. G. Torres and T. F. Campbell, "Physics-Based Deformable Tongue Visualization," in IEEE Transactions on Visualization & Computer Graphics, vol. 19, no. 5, pp. 811-823, 2013.