Issue No. 03, March 2013 (vol. 35)
Xiaowei Zhou , Dept. of Electron. & Comput. Eng., Hong Kong Univ. of Sci. & Technol., Hong Kong, China
Can Yang , Dept. of Electron. & Comput. Eng., Hong Kong Univ. of Sci. & Technol., Hong Kong, China
Weichuan Yu , Dept. of Electron. & Comput. Eng., Hong Kong Univ. of Sci. & Technol., Hong Kong, China
Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. Researchers have attempted to tackle this task using motion information, but existing motion-based methods are often limited when coping with complex scenarios such as nonrigid motion and dynamic backgrounds. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single optimization process, which can be solved efficiently by an alternating algorithm. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms state-of-the-art approaches and works effectively on a wide range of complex scenarios.
Moving object detection, object detection, motion segmentation, low-rank modeling, Markov random fields, hidden Markov models, cameras, computer vision, estimation, computational modeling
Xiaowei Zhou, Can Yang and Weichuan Yu, "Moving Object Detection by Detecting Contiguous Outliers in the Low-Rank Representation," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 3, pp. 597-610, March 2013.