2017 IEEE International Conference on Computer Vision (ICCV) (2017)
Venice, Italy
Oct. 22, 2017 to Oct. 29, 2017
ISSN: 2380-7504
ISBN: 978-1-5386-1032-9
pp: 1068-1076
ABSTRACT
Understanding the visual relationship between two objects involves identifying the subject, the object, and a predicate relating them. We leverage the strong correlations between the predicate and the ⟨subj, obj⟩ pair (both semantically and spatially) to predict predicates conditioned on the subjects and the objects. Modeling the three entities jointly more accurately reflects their relationships compared to modeling them independently, but it complicates learning since the semantic space of visual relationships is huge and training data is limited, especially for long-tail relationships that have few instances. To overcome this, we use knowledge of linguistic statistics to regularize visual model learning. We obtain linguistic knowledge by mining from both training annotations (internal knowledge) and publicly available text, e.g., Wikipedia (external knowledge), computing the conditional probability distribution of a predicate given a ⟨subj, obj⟩ pair. As we train the visual model, we distill this knowledge into the deep model to achieve better generalization. Our experimental results on the Visual Relationship Detection (VRD) and Visual Genome datasets suggest that with this linguistic knowledge distillation, our model outperforms the state-of-the-art methods significantly, especially when predicting unseen relationships (e.g., recall improved from 8.45% to 19.17% on the VRD zero-shot testing set).
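The abstract describes two ingredients: a linguistic prior P(predicate | subj, obj) estimated from mined triplets, and a training objective that distills that prior into the visual model. The following is a minimal sketch of those two pieces, not the authors' implementation; the toy triplet list, the `alpha` weighting, and the simple CE-plus-KL loss are illustrative assumptions.

```python
# Sketch (assumed, not the paper's code): (1) estimate the linguistic prior
# P(predicate | subj, obj) by counting mined (subj, predicate, obj) triplets,
# (2) a distillation-style loss that mixes the ground-truth cross-entropy
# with a KL term pulling the model's predicate distribution toward the prior.
from collections import Counter, defaultdict
import numpy as np

# Hypothetical mined triplets (from training annotations or parsed text).
TRIPLETS = [
    ("person", "ride", "horse"),
    ("person", "ride", "bike"),
    ("person", "next to", "horse"),
    ("cup", "on", "table"),
]
PREDICATES = sorted({p for _, p, _ in TRIPLETS})

def linguistic_prior(triplets, predicates, smoothing=1e-6):
    """P(predicate | subj, obj) via smoothed relative frequency."""
    counts = defaultdict(Counter)
    for subj, pred, obj in triplets:
        counts[(subj, obj)][pred] += 1
    prior = {}
    for pair, c in counts.items():
        vec = np.array([c[p] + smoothing for p in predicates], dtype=float)
        prior[pair] = vec / vec.sum()
    return prior

def distillation_loss(logits, gt_index, prior_vec, alpha=0.5):
    """Cross-entropy on the annotated predicate plus a KL term that
    regularizes the predicted distribution toward the linguistic prior."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    ce = -np.log(probs[gt_index] + 1e-12)
    kl = np.sum(prior_vec * (np.log(prior_vec + 1e-12) - np.log(probs + 1e-12)))
    return alpha * ce + (1.0 - alpha) * kl

prior = linguistic_prior(TRIPLETS, PREDICATES)
logits = np.random.randn(len(PREDICATES))   # stand-in for visual-model predicate scores
pair = ("person", "horse")
loss = distillation_loss(logits, PREDICATES.index("ride"), prior[pair])
print("P(pred | person, horse):", dict(zip(PREDICATES, prior[pair].round(3))))
print(f"toy distillation loss = {loss:.3f}")
```

In the zero-shot case the abstract highlights, a ⟨subj, obj⟩ pair unseen in the visual training data can still receive a meaningful predicate distribution from the external (text-mined) prior, which is the motivation for the KL regularization term in the sketch above.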
INDEX TERMS
biology computing, data mining, genomics, learning (artificial intelligence), text analysis
CITATION

R. Yu, A. Li, V. I. Morariu and L. S. Davis, "Visual Relationship Detection with Internal and External Linguistic Knowledge Distillation," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 1068-1076.
doi:10.1109/ICCV.2017.121