Issue No. 07 - July (2009 vol. 31)
ISSN: 0162-8828
pp: 1195-1209
Ming Yang , Northwestern University, Evanston
Ying Wu , Northwestern University, Evanston
Gang Hua , Microsoft Research, Redmond
Enormous uncertainties in unconstrained environments lead to a fundamental dilemma that many tracking algorithms have to face in practice: Tracking has to be computationally efficient, but verifying whether the tracker is following the true target tends to be demanding, especially when the background is cluttered and/or when occlusion occurs. Lacking a good solution to this problem, many existing methods tend to be either effective but computationally intensive, relying on sophisticated image observation models, or efficient but vulnerable to false alarms. This greatly challenges long-duration robust tracking. This paper presents a novel solution to this dilemma by considering the context of the tracking scene. Specifically, we integrate into the tracking process a set of auxiliary objects that are automatically discovered in the video on the fly by data mining. Auxiliary objects have three properties, at least within a short time interval: 1) persistent co-occurrence with the target, 2) consistent motion correlation with the target, and 3) ease of tracking. Regarding these auxiliary objects as the context of the target, the collaborative tracking of these auxiliary objects leads to efficient computation as well as strong verification. Our extensive experiments exhibit excellent performance in very challenging real-world test cases.
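The three properties above suggest a simple filtering criterion for mining auxiliary objects from candidate tracks over a short time window. The sketch below is an illustrative assumption, not the paper's mining algorithm: it measures property 1 as the fraction of frames a candidate appears in, property 2 as the mean cosine similarity between per-frame displacements, and treats property 3 ("easy to track") as handled upstream by the short-term trackers that produced the candidate tracks. All function names and thresholds are hypothetical.

```python
import math

def _steps(track):
    """Frame-to-frame displacement vectors; None where the object
    was missing in either of the two frames."""
    out = []
    for a, b in zip(track, track[1:]):
        out.append(None if a is None or b is None
                   else (b[0] - a[0], b[1] - a[1]))
    return out

def cooccurrence(track):
    """Property 1: fraction of frames in which the candidate appears."""
    return sum(p is not None for p in track) / len(track)

def motion_correlation(cand, target):
    """Property 2: mean cosine similarity between the candidate's and
    the target's per-frame displacements (an assumed stand-in for the
    paper's motion-correlation measure)."""
    sims = []
    for c, t in zip(_steps(cand), _steps(target)):
        if c is None or t is None:
            continue
        nc, nt = math.hypot(*c), math.hypot(*t)
        # A stationary candidate or target carries no motion evidence.
        sims.append(0.0 if nc == 0 or nt == 0
                    else (c[0] * t[0] + c[1] * t[1]) / (nc * nt))
    return sum(sims) / len(sims) if sims else 0.0

def select_auxiliary(candidates, target, min_cooc=0.8, min_corr=0.9):
    """Keep candidate tracks that satisfy properties 1 and 2 over the
    window; thresholds are illustrative."""
    return [name for name, tr in candidates.items()
            if cooccurrence(tr) >= min_cooc
            and motion_correlation(tr, target) >= min_corr]

# Toy window: the target moves right; one candidate co-moves with it,
# one is a static background patch, one leaves the scene early.
target = [(i, 0) for i in range(10)]
candidates = {
    "shirt": [(i, 3) for i in range(10)],                    # co-moving
    "poster": [(7, 7)] * 10,                                 # static
    "car": [(i, 1) if i < 4 else None for i in range(10)],   # disappears
}
print(select_auxiliary(candidates, target))  # -> ['shirt']
```

Only the co-moving region survives both filters: the static patch fails the motion-correlation test, and the disappearing one fails the co-occurrence test.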
Computer vision, visual object tracking, context aware, collaborative tracking, data mining, robust fusion, belief inconsistency.

M. Yang, Y. Wu and G. Hua, "Context-Aware Visual Tracking," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 7, pp. 1195-1209, July 2009.