2007 11th IEEE International Conference on Computer Vision (ICCV)
Rio de Janeiro, Brazil
Oct. 14, 2007 to Oct. 21, 2007
ISBN: 978-1-4244-1630-1
pp: 1-8
Feng Tang , UC Santa Cruz, Santa Cruz, CA, USA. tang@soe.ucsc.edu
Shane Brennan , UC Santa Cruz, Santa Cruz, CA, USA. shanerb@soe.ucsc.edu
Qi Zhao , UC Santa Cruz, Santa Cruz, CA, USA. zhaoqi@soe.ucsc.edu
Hai Tao , UC Santa Cruz, Santa Cruz, CA, USA. tao@soe.ucsc.edu
ABSTRACT
This paper treats tracking as a foreground/background classification problem and proposes an online semi-supervised learning framework. Initialized with a small number of labeled samples, semi-supervised learning treats each new sample as unlabeled data. Classification of new data and updating of the classifier are achieved simultaneously in a co-training framework. The object is represented using independent features, and an online support vector machine (SVM) is built for each feature. The predictions from the different features are fused by combining the confidence map from each classifier using a classifier weighting method, which creates a final classifier that performs better than any classifier based on a single feature. The semi-supervised learning approach then uses the output of the combined confidence map to generate new samples and update the SVMs online. With this approach, the tracker gains increasing knowledge of the object and background and continually improves itself over time. Compared to other discriminative trackers, the online semi-supervised learning approach improves each individual classifier using the information from the other features, leading to a more robust tracker. Experiments show that this framework performs better than state-of-the-art tracking algorithms on challenging sequences.
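The following is a minimal sketch (not the authors' implementation) of the co-training loop the abstract describes: one online SVM per feature channel, a confidence-weighted fusion of the per-feature predictions, and self-labeled updates from the fused confidence map. The feature channels, the fusion weights, the confidence thresholds, and the use of scikit-learn's SGDClassifier with hinge loss as a stand-in for the paper's online SVM are all assumptions made for illustration.

```python
# Hedged sketch of per-feature online SVMs, confidence fusion, and co-training
# updates; all class/parameter names here are illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import SGDClassifier

class CoTracker:
    def __init__(self, n_features=2, seed=0):
        # One linear online-SVM-like learner per independent feature channel.
        self.clfs = [SGDClassifier(loss="hinge", random_state=seed + i)
                     for i in range(n_features)]
        self.weights = np.ones(n_features) / n_features  # fusion weights (assumed uniform)

    def init_fit(self, feats, labels):
        # feats: list of (n_samples, dim_i) arrays, one per feature channel.
        for clf, X in zip(self.clfs, feats):
            clf.partial_fit(X, labels, classes=np.array([0, 1]))

    def confidence_maps(self, feats):
        # Signed distance to each hyperplane, squashed to (0, 1) as a
        # foreground confidence; one map per feature channel.
        return [1.0 / (1.0 + np.exp(-clf.decision_function(X)))
                for clf, X in zip(self.clfs, feats)]

    def fuse(self, maps):
        # Weighted combination of the per-feature confidence maps.
        return sum(w * m for w, m in zip(self.weights, maps))

    def update(self, feats, fused, hi=0.9, lo=0.1):
        # Co-training step: samples the fused classifier is confident about
        # become new labeled data for every individual classifier.
        conf_idx = np.where((fused > hi) | (fused < lo))[0]
        if conf_idx.size == 0:
            return
        pseudo = (fused[conf_idx] > 0.5).astype(int)
        for clf, X in zip(self.clfs, feats):
            clf.partial_fit(X[conf_idx], pseudo)

# Toy usage with random stand-ins for two feature channels of candidate patches.
rng = np.random.default_rng(0)
color0, texture0 = rng.normal(size=(20, 8)), rng.normal(size=(20, 16))
labels0 = rng.integers(0, 2, size=20)           # small set of initial labeled samples
tracker = CoTracker()
tracker.init_fit([color0, texture0], labels0)

color1, texture1 = rng.normal(size=(50, 8)), rng.normal(size=(50, 16))
maps = tracker.confidence_maps([color1, texture1])
fused = tracker.fuse(maps)                       # combined foreground confidence
tracker.update([color1, texture1], fused)        # online self-update from confident samples
```

In this sketch the fusion weights stay uniform; the paper's classifier weighting method would instead adapt the per-feature weights so that more reliable features dominate the combined confidence map.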
CITATION

F. Tang, S. Brennan, Q. Zhao and H. Tao, "Co-Tracking Using Semi-Supervised Support Vector Machines," 2007 11th IEEE International Conference on Computer Vision (ICCV), Rio de Janeiro, 2007, pp. 1-8.
doi:10.1109/ICCV.2007.4408954