Extracting Social Semantics from Multimodal Meeting Content
April-June 2013 (vol. 12 no. 2)
pp. 68-75
Zhiwen Yu, Northwestern Polytechnical University
Xingshe Zhou, Northwestern Polytechnical University
Yuichi Nakamura, Kyoto University
Extracting social semantics from multimodal meeting content can help meeting participants, organizers, and sponsors better understand the social dynamics of exchanging information. The authors present a framework for extracting both low-level (individual) and high-level (group) semantics.
Index Terms:
Semantics, Human factors, Data mining, Feature extraction, Speech recognition, Support vector machines, Social factors, multimodal, social semantics, human interaction, meeting
Citation:
Zhiwen Yu, Xingshe Zhou, Yuichi Nakamura, "Extracting Social Semantics from Multimodal Meeting Content," IEEE Pervasive Computing, vol. 12, no. 2, pp. 68-75, April-June 2013, doi:10.1109/MPRV.2012.55