2017 IEEE International Conference on Computer Vision (ICCV) (2017)
Venice, Italy
Oct. 22, 2017 to Oct. 29, 2017
ISSN: 2380-7504
ISBN: 978-1-5386-1032-9
pp: 1462-1471
ABSTRACT
In this paper, we address the problem of spatio-temporal person retrieval from videos using a natural language query, in which we output a tube (i.e., a sequence of bounding boxes) that encloses the person described by the query. For this problem, we introduce a novel dataset consisting of videos containing people annotated with bounding boxes at each second and with five natural language descriptions. To retrieve the tube of the person described by a given natural language query, we design a model that combines methods for spatio-temporal human detection and multimodal retrieval. We conduct comprehensive experiments comparing a variety of tube and text representations and multimodal retrieval methods, presenting a strong baseline for this task and demonstrating the efficacy of our tube representation and multimodal feature embedding technique. Finally, we demonstrate the versatility of our model by applying it to two other important tasks.
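To make the retrieval step concrete, the sketch below shows one generic way to rank candidate person tubes against a text query once both have been embedded in a shared space. It is an illustration only, not the paper's specific model: the hypothetical inputs `tube_features` (one pooled visual feature vector per candidate tube) and `query_feature` (a sentence embedding of the query) are assumed to come from separate, unspecified encoders.

```python
import numpy as np

def rank_tubes(tube_features: np.ndarray, query_feature: np.ndarray):
    """Rank candidate tubes by cosine similarity to a text query embedded in
    the same joint space (a minimal sketch, not the authors' exact method).

    tube_features: (num_tubes, dim) array, one embedding per candidate tube.
    query_feature: (dim,) array, embedding of the natural language query.
    """
    # L2-normalize both modalities so the dot product equals cosine similarity.
    tubes = tube_features / np.linalg.norm(tube_features, axis=1, keepdims=True)
    query = query_feature / np.linalg.norm(query_feature)

    scores = tubes @ query                 # similarity of each tube to the query
    order = np.argsort(-scores)            # indices of tubes, best match first
    return order, scores

# Usage (with random stand-in features):
# order, scores = rank_tubes(np.random.randn(50, 512), np.random.randn(512))
# best_tube_index = order[0]
```

In practice the joint embedding would be learned so that matching tube/description pairs score higher than mismatched ones; the ranking step itself stays as simple as shown here.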
INDEX TERMS
image motion analysis, image retrieval, natural language processing, query processing, text analysis, video signal processing
CITATION

M. Yamaguchi, K. Saito, Y. Ushiku and T. Harada, "Spatio-Temporal Person Retrieval via Natural Language Queries," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 1462-1471.
doi:10.1109/ICCV.2017.162