2015 13th International Conference on Frontiers of Information Technology (FIT) (2015)
Dec. 14, 2015 to Dec. 16, 2015
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/FIT.2015.26
For years, researchers have sought to develop interfaces controlled by the human brain. For this purpose, electroencephalography (EEG) signals recorded from the brain are used to control not only machines but also artificial prostheses. In this paper, we present a brain-machine interface based on a hybrid system that utilizes steady-state visually evoked potentials (SSVEP), alpha waves, and electromyography (EMG) signals. For the interface, two LED screens, each flickering at a different fixed frequency, were used to induce SSVEP signals. Signals were recorded from channels over the occipital region of the brain and from regions just above the eyes and jaws. To extract features and classify the EEG data, we used Canonical Correlation Analysis (CCA) along with the Fast Fourier Transform (FFT) and Power Spectral Density Analysis (PSDA). We mapped our classification results onto a robotic crane with three degrees of freedom. We successfully implemented a brain-machine interface that required no extensive training, had a maximum latency of 16 seconds, and achieved an overall classification accuracy of 97%.
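The CCA step described in the abstract can be illustrated with a minimal sketch: for each candidate flicker frequency, a set of sine/cosine reference signals (at the frequency and its harmonics) is correlated canonically with the multi-channel EEG segment, and the frequency with the highest canonical correlation is selected. The function names (`ssvep_cca_classify`, `max_canonical_corr`), the choice of two harmonics, and the synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def max_canonical_corr(X, Y):
    # Center both blocks, orthonormalize each via QR; the singular
    # values of Qx^T Qy are then the canonical correlations, and the
    # largest one is the CCA score used for SSVEP detection.
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_cca_classify(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the flicker frequency whose sin/cos reference set is most
    canonically correlated with the EEG segment (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        # Reference matrix: sines and cosines at f and its harmonics.
        refs = np.column_stack(
            [np.sin(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)]
            + [np.cos(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)])
        scores.append(max_canonical_corr(eeg, refs))
    return candidate_freqs[int(np.argmax(scores))], scores
```

With two targets (as with the paper's two LED screens), a segment dominated by one flicker frequency yields a clearly higher canonical correlation for that target's reference set.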
Electroencephalography, Headphones, Electromyography, Training, Real-time systems, MATLAB, Robots
M. A. Shah, A. A. Sheikh, A. M. Sajjad and M. Uppal, "A Hybrid Training-Less Brain-Machine Interface Using SSVEP and EMG Signal," 2015 13th International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 2015, pp. 93-97.