This paper proposes a novel representation space for multimodal information, enabling fast and efficient retrieval of video data. We suggest describing documents not directly by selected multimodal features (audio, visual, or text), but rather by considering cross-document similarities relative to their multimodal characteristics. This idea leads us to propose a particular form of \emph{dissimilarity space} that is adapted to the asymmetric classification problem, and in turn to the \emph{query-by-example} and \emph{relevance feedback} paradigms widely used in information retrieval. Based on the proposed dissimilarity space, we then define various strategies to fuse modalities through a kernel-based learning approach. The problem of automatically setting the kernel to adapt the learning process to the queries is also discussed. The properties of our strategies are studied and validated on artificial data. In a second phase, a large annotated video corpus (\emph{i.e.}, TRECVID-05), indexed by visual, audio, and text features, is considered to evaluate the overall performance of the dissimilarity space and fusion strategies. The obtained results confirm the validity of the proposed approach for the representation and retrieval of multimodal information in a real-time framework.
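The core idea of a dissimilarity space is to represent each document by its vector of dissimilarities to a set of reference (prototype) documents, rather than by its raw feature vector. A minimal sketch of this mapping is shown below; the function and variable names, the Euclidean metric, and the toy data are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dissimilarity_space(features, prototypes, metric=None):
    """Map each document's feature vector to a vector of dissimilarities
    to a set of prototype documents (illustrative sketch only).

    features   : (n_docs, d) array of per-document feature vectors
    prototypes : (n_protos, d) array of reference documents
    metric     : pairwise dissimilarity; defaults to Euclidean distance
    """
    if metric is None:
        metric = lambda a, b: float(np.linalg.norm(a - b))
    # Each row of the result is one document expressed in the
    # dissimilarity space spanned by the prototypes.
    return np.array([[metric(x, p) for p in prototypes] for x in features])

# Toy example: 4 documents with 3-dimensional features, 2 prototypes.
docs = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 1.0, 1.0]])
protos = docs[:2]
D = dissimilarity_space(docs, protos)
print(D.shape)  # one dissimilarity vector per document: (4, 2)
```

In a multimodal setting, one such representation could be built per modality (audio, visual, text) and the resulting spaces combined through kernels, which is the fusion step the abstract refers to.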
Multimedia databases, Image/video retrieval, Concept learning, Machine learning

E. Bruno, S. Marchand-Maillet and N. Moenne-Loccoz, "Design of Multimodal Dissimilarity Spaces for Retrieval of Video Documents," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 30, pp. 1520-1533, 2007.