Issue No. 12, December 2010 (vol. 32)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2010.33
Massimiliano Albanese , University of Maryland, College Park
Rama Chellappa , University of Maryland, College Park
Naresh Cuntoor , Kitware Inc., Clifton Park
Vincenzo Moscato , Università di Napoli "Federico II", Napoli
Antonio Picariello , Università di Napoli "Federico II", Napoli
V.S. Subrahmanian , University of Maryland, College Park
Octavian Udrea , IBM T.J. Watson Research Center, Hawthorne
There is a growing need to identify various kinds of activities that occur in videos. In this paper, we first present a logical language called Probabilistic Activity Description Language (PADL) in which users can specify activities of interest. We then develop a probabilistic framework which assigns to any subvideo of a given video sequence a probability that the subvideo contains the given activity, and finally we develop two fast algorithms to detect activities within this framework. OffPad finds all minimal segments of a video that contain a given activity with a probability exceeding a given threshold. In contrast, the OnPad algorithm examines a video during playout (rather than afterwards, as OffPad does) and computes the probability that a given activity is occurring, even if the activity is only partially complete. Our prototype Probabilistic Activity Detection System (PADS) implements the framework and the two algorithms, building on top of existing image processing algorithms. We have conducted detailed experiments and compared our approach to four different approaches presented in the literature. We show that, for complex activity definitions, our approach outperforms all the other approaches.
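The abstract does not give OffPad's actual pseudocode; the sketch below only illustrates, under invented names, the generic "minimal segments above a threshold" idea it describes. It assumes a caller-supplied segment-probability function `seg_prob(i, j)` (hypothetical, e.g. backed by an activity model) that is monotone under extension: enlarging a segment never lowers the probability that it contains the activity.

```python
def minimal_segments(seg_prob, n, threshold):
    """Return all minimal segments [i, j] (0-indexed, inclusive) of an
    n-frame video whose activity probability reaches `threshold`.

    `seg_prob(i, j)` is an assumed, caller-supplied function; it must be
    monotone under extension (growing a segment never lowers its score).
    A segment is minimal if no proper subsegment also reaches the threshold.
    """
    candidates = []
    j = 0
    for i in range(n):
        if j < i:
            j = i
        # Grow the right end until the segment crosses the threshold.
        while j < n and seg_prob(i, j) < threshold:
            j += 1
        if j == n:
            break  # no segment starting at i (or later) can qualify
        candidates.append((i, j))  # smallest qualifying end for this start
    # Keep only segments that contain no other qualifying segment.
    return [s for s in candidates
            if not any(t != s and s[0] <= t[0] and t[1] <= s[1]
                       for t in candidates)]
```

With the monotonicity assumption, the right endpoint never moves backwards, so the scan is linear in the number of probability evaluations; the final filter then discards non-minimal candidates.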
Applications and expert knowledge-intensive systems, computer vision, vision and scene understanding, video analysis, image processing.
M. Albanese et al., "PADS: A Probabilistic Activity Detection Framework for Video Data," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 32, no. 12, pp. 2246-2261, 2010.