Incremental Learning of 3D-DCT Compact Representations for Robust Visual Tracking
April 2013 (vol. 35, no. 4)
pp. 863-881
Xi Li, Australian Centre for Visual Technologies, University of Adelaide, Adelaide, SA, Australia
A. Dick, Australian Centre for Visual Technologies, University of Adelaide, Adelaide, SA, Australia
Chunhua Shen, Australian Centre for Visual Technologies, University of Adelaide, Adelaide, SA, Australia
A. van den Hengel, Australian Centre for Visual Technologies, University of Adelaide, Adelaide, SA, Australia
Hanzi Wang, School of Information Science and Technology, Xiamen University, Xiamen, China
Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples from previous frames to form the bases upon which the object appearance model is built. This approach has two limitations: 1) the bases are data-driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions that are determined by the dimensions of the 3D signal and are thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm that decomposes the 3D-DCT into successive 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) operations on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames and the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental algorithm, we design a discriminative criterion to evaluate the likelihood that a test sample belongs to the foreground object. We then embed this criterion in a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.
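The separable decomposition described in the abstract can be sketched in a few lines. The sketch below is illustrative only (the function names and the toy 2x2 frames are not from the paper): because the 3D-DCT factors into a 2D-DCT per frame followed by a 1D-DCT along time, caching each frame's 2D spectrum means that adding a frame costs one new 2D-DCT plus the temporal 1D-DCT, and for similar appearance samples the temporal high-frequency coefficients carry almost no energy, which is what allows the compact truncated representation.

```python
import math

def dct1d(x):
    """Orthonormal DCT-II of a 1-D sequence (pure-Python sketch)."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)) * s)
    return out

def dct2d(frame):
    """Separable 2-D DCT: 1-D DCT over rows, then over columns."""
    rows = [dct1d(r) for r in frame]
    cols = [dct1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def incremental_3d_dct(cache, new_frame):
    """Append a frame and return the 3-D DCT coefficients of the volume.

    Only the new frame needs a fresh 2-D DCT; the cached 2-D spectra of
    earlier frames are reused, and just the cheap 1-D temporal DCT is redone.
    """
    cache.append(dct2d(new_frame))          # 2-D DCT for the new frame only
    T, H, W = len(cache), len(cache[0]), len(cache[0][0])
    coeffs = [[[0.0] * T for _ in range(W)] for _ in range(H)]
    for i in range(H):
        for j in range(W):
            temporal = dct1d([cache[t][i][j] for t in range(T)])
            for t in range(T):
                coeffs[i][j][t] = temporal[t]
    return coeffs

# Three identical appearance samples: all temporal energy collapses into
# the zero-frequency slice, so the high-frequency slices can be discarded
# with no information loss -- the compaction the tracker exploits.
cache = []
for _ in range(3):
    coeffs = incremental_3d_dct(cache, [[1.0, 2.0], [3.0, 4.0]])
high_freq_energy = sum(coeffs[i][j][t] ** 2
                       for i in range(2) for j in range(2) for t in (1, 2))
print(round(high_freq_energy, 9))  # ~0 for identical frames
```

Dropping the near-zero high-frequency temporal slices yields the compact representation, and the energy in the discarded coefficients corresponds to the reconstruction loss that the paper turns into a similarity measure.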
Index Terms:
video signal processing, computational complexity, discrete cosine transform (DCT), image representation, learning (artificial intelligence), object tracking, particle filtering (numerical methods), signal reconstruction, object state inference, incremental learning, 3D-DCT compact representations, robust visual tracking, object appearance model, changing illumination, appearance samples, 3D discrete cosine transform, cosine basis functions, compact energy spectrum, high-frequency coefficients, similarity measure, information loss, 2D-DCT, 1D-DCT, discriminative criterion, particle filtering framework, algorithm design and analysis, visualization, robustness, loss measurement, image reconstruction, adaptation models, template matching, appearance model, compact representation
Xi Li, A. Dick, Chunhua Shen, A. van den Hengel, Hanzi Wang, "Incremental Learning of 3D-DCT Compact Representations for Robust Visual Tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 4, pp. 863-881, April 2013, doi:10.1109/TPAMI.2012.166