2014 12th International Conference on Frontiers of Information Technology (FIT) (2014)
Dec. 17, 2014 to Dec. 19, 2014
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/FIT.2014.64
This paper proposes an Automated Sign Language to Speech Interpreter (ASLSI) that first captures a 3D video stream through Kinect and then processes the joints of interest in the tracked human skeleton. The proposed system addresses the difficulties mute people face in conveying their message through Pakistani Sign Language (PSL). The research applies a 3D trajectory algorithm to the normalized joint data, and the performed gestures are classified using the robust ensemble learning technique. Once recognized, a gesture is translated to speech. The system has been tested on several signs taken from PSL, demonstrating the real-time practicality of ASLSI.
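The pipeline the abstract describes (normalize skeleton joint trajectories, extract 3D trajectory features, classify with a bootstrap ensemble, then vote) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the normalization scheme, displacement features, `NearestMean` weak learner, and all function names are assumptions chosen for brevity.

```python
import random
from statistics import mode

def normalize(traj, origin_idx=0):
    """Translate each frame so a reference joint (assumed: index 0, e.g. the
    torso) sits at the origin, then scale to unit range for signer-size
    invariance. `traj` is a list of frames; each frame is a list of
    (x, y, z) joint coordinates."""
    out = []
    for frame in traj:
        ox, oy, oz = frame[origin_idx]
        out.append([(x - ox, y - oy, z - oz) for x, y, z in frame])
    scale = max(abs(c) for frame in out for joint in frame for c in joint) or 1.0
    return [[(x / scale, y / scale, z / scale) for x, y, z in frame]
            for frame in out]

def features(traj):
    """Turn a normalized trajectory into one feature vector by concatenating
    per-joint displacements between consecutive frames (a crude stand-in
    for a 3D trajectory descriptor)."""
    feats = []
    for prev, cur in zip(traj, traj[1:]):
        for (px, py, pz), (cx, cy, cz) in zip(prev, cur):
            feats.extend((cx - px, cy - py, cz - pz))
    return feats

class NearestMean:
    """Weak learner: classify by distance to the per-class mean vector."""
    def fit(self, X, y):
        self.means = {}
        for label in set(y):
            rows = [x for x, lbl in zip(X, y) if lbl == label]
            self.means[label] = [sum(col) / len(rows) for col in zip(*rows)]
        return self
    def predict(self, x):
        return min(self.means,
                   key=lambda lbl: sum((a - b) ** 2
                                       for a, b in zip(x, self.means[lbl])))

def bagging_predict(X, y, query, n_estimators=15, seed=0):
    """Bagging: train weak learners on bootstrap resamples of (X, y) and
    return the majority vote for `query`."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in X]
        clf = NearestMean().fit([X[i] for i in idx], [y[i] for i in idx])
        votes.append(clf.predict(query))
    return mode(votes)
```

In a real system the recognized label would then be handed to a text-to-speech engine; normalization up front is what lets gestures from signers of different sizes and positions map onto comparable trajectories.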
Assistive technology, Gesture recognition, Joints, Three-dimensional displays, Classification algorithms, Speech, Hidden Markov models, Xbox, 3D, algorithm, automated, bagging, deaf, depth, gestures, interpreter, joints, Kinect, language, network, neural, mute, Pakistan, PSL, sign, signers, skeleton, speech, stream, trajectory
Fariha Nasir, Umer Farooq, Zunaira Jamil, Maham Sana, Kashif Zafar, "Automated Sign Language to Speech Interpreter," 2014 12th International Conference on Frontiers of Information Technology (FIT), pp. 307-312, 2014, doi:10.1109/FIT.2014.64