TagSense: Leveraging Smartphones for Automatic Image Tagging
Jan. 2014 (vol. 13 no. 1)
pp. 61-74
Chuan Qin, University of South Carolina, Columbia
Xuan Bao, Duke University, Durham
Romit Roy Choudhury, Duke University, Durham
Srihari Nelakuditi, University of South Carolina, Columbia
Mobile phones are becoming the convergent platform for personal sensing, computing, and communication. This paper attempts to exploit this convergence toward the problem of automatic image tagging. We envision TagSense, a mobile phone-based collaborative system that senses the people, activity, and context in a picture, and merges them carefully to create tags on-the-fly. The main challenge pertains to discriminating phone users who are in the picture from those who are not. We deploy a prototype of TagSense on eight Android phones, and demonstrate its effectiveness through 200 pictures taken in various social settings. While research in face recognition continues to improve image tagging, TagSense is an attempt to embrace additional dimensions of sensing toward this goal. Performance comparison with Apple iPhoto and Google Picasa shows that such an out-of-band approach is valuable, especially with increasing device density and greater sophistication in sensing and learning algorithms.
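To illustrate the kind of out-of-band reasoning the abstract describes, the sketch below shows one plausible heuristic: using each phone's compass heading and motion state at shutter time to guess which owners face the camera, then merging their names and activities into tags. This is a minimal assumed example, not the authors' actual algorithm; all field names, the `Snapshot` type, and the tolerance threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    owner: str          # phone owner's name (hypothetical field)
    heading_deg: float  # compass heading of the subject's phone
    moving: bool        # coarse motion state from the accelerometer
    activity: str       # inferred activity label, e.g. "standing"

def angular_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two compass headings."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def tag_picture(camera_heading_deg: float, snapshots: list[Snapshot],
                facing_tol_deg: float = 60.0) -> list[str]:
    """Keep tags only for subjects roughly facing the camera.

    A subject facing the camera should have a heading about 180 degrees
    from the camera's heading; others are assumed out of frame.
    """
    opposite = (camera_heading_deg + 180.0) % 360.0
    tags: list[str] = []
    for s in snapshots:
        if angular_diff(s.heading_deg, opposite) <= facing_tol_deg:
            tags.append(s.owner)
            tags.append(s.activity)
    return sorted(set(tags))
```

For example, with the camera pointing north (0 degrees), a subject whose phone reports a heading near 180 degrees would be tagged, while one facing away would not. A real system would combine many more signals (sound, light, motion correlation), as the paper's evaluation against iPhoto and Picasa suggests.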
Index Terms:
Image tagging, face recognition, activity recognition, context-awareness, sensing, smartphones, sensors, accelerometers, cameras, compass, tagging
Citation:
Chuan Qin, Xuan Bao, Romit Roy Choudhury, Srihari Nelakuditi, "TagSense: Leveraging Smartphones for Automatic Image Tagging," IEEE Transactions on Mobile Computing, vol. 13, no. 1, pp. 61-74, Jan. 2014, doi:10.1109/TMC.2012.235