2016 International Conference on Frontiers of Information Technology (FIT) (2016)
Dec. 19, 2016 to Dec. 21, 2016
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/FIT.2016.054
In this paper, we generated an activity recognition model using an artificial neural network (ANN) and trained it with backpropagation learning. We considered a sandwich-making scenario and identified the hand-motion-based activities of reaching, sprinkling, spreading, and cutting. The contribution of this paper is twofold. First, because many image-processing steps such as feature identification are computation intensive, and execution time increases sharply as more images are added, we show that it is not always useful to add more data. We trained our system using (i) a single (front) camera only and (ii) multiple (left, front, right) cameras, and show that adding the extra cameras decreased the recognition precision from 89.22% to 79.99%. Hence, a single properly positioned camera yields higher precision than multiple inappropriately positioned cameras. Second, in the ANN training phase, we show that adding extra hidden layers/neurons leads to unnecessary complexity, which in turn results in longer computation time and lower precision. In our experiments, a single hidden layer achieved a precision of 90.77%, and training completed in fewer than 1200 cycles. Adding or deleting hidden layers not only decreased the precision but also increased the training time many-fold.
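The single-hidden-layer setup described above can be sketched as a classic backpropagation loop. This is a minimal illustration only: the layer sizes, learning rate, and synthetic feature data below are assumptions for demonstration, not the authors' actual configuration or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper):
# 8 input features, 12 hidden units, 4 output classes
# (reaching, sprinkling, spreading, cutting).
N_IN, N_HIDDEN, N_OUT = 8, 12, 4
LR, EPOCHS = 0.5, 1200  # the paper reports convergence in under 1200 cycles

# Synthetic stand-in data: each class clustered around a random centroid.
centroids = rng.normal(size=(N_OUT, N_IN))
labels = rng.integers(0, N_OUT, size=200)
X = centroids[labels] + 0.1 * rng.normal(size=(200, N_IN))
Y = np.eye(N_OUT)[labels]  # one-hot targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random weight initialization for the two weight matrices.
W1 = rng.normal(scale=0.5, size=(N_IN, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.5, size=(N_HIDDEN, N_OUT)); b2 = np.zeros(N_OUT)

for epoch in range(EPOCHS):
    # Forward pass through the single hidden layer.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    # Backward pass: squared-error loss with classic backprop deltas.
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= LR * H.T @ dO / len(X); b2 -= LR * dO.mean(axis=0)
    W1 -= LR * X.T @ dH / len(X); b1 -= LR * dH.mean(axis=0)

accuracy = (O.argmax(axis=1) == labels).mean()
print(f"training accuracy after {EPOCHS} cycles: {accuracy:.2%}")
```

The sketch mirrors the paper's finding only in structure (one hidden layer, backpropagation, a fixed cycle budget); the feature-extraction front end that produces the input vectors is outside its scope.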
Activity recognition, Feature extraction, Cameras, Training, Computational modeling, Object recognition, Image segmentation
S. Noor and V. Uddin, "Using ANN for Multi-View Activity Recognition in Indoor Environment," 2016 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 2016, pp. 258-263.