1999 International Symposium on Database Applications in Non-Traditional Environments (DANTE'99)
Querying Video Contents by Motion Example
Kyoto, Japan
November 28-November 30
ISBN: 0-7695-0496-5
Pu-Jien Cheng, National Chiao Tung University
Wei-Pang Yang, National Chiao Tung University
This paper presents a new conceptual model for representing visual information about moving objects in video data. Building on available automatic scene-segmentation and object-tracking algorithms, the proposed model computes object motions at multiple levels of semantic granularity. It represents the trajectory, color, and dimensions of a single moving object, as well as the directional and topological relations among multiple objects over a time interval. To facilitate query processing, two optimal approximate-matching algorithms are designed to match the time-series visual features of moving objects. Experimental results indicate that the proposed algorithms substantially outperform conventional subsequence-matching methods in measuring the similarity between two trajectories.
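The index terms below name approximate string matching as the underlying matching technique. A minimal sketch of how trajectory matching along those lines is commonly realized (this is an illustrative assumption, not the paper's actual algorithm): each trajectory is quantized into a string of 8-direction symbols, and two trajectories are compared by the Levenshtein edit distance between their symbol strings.

```python
import math

def directions(points):
    """Quantize a trajectory (list of (x, y) points) into 8-direction
    symbols 0-7, one per consecutive point pair (assumed encoding)."""
    syms = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        # Snap the heading to the nearest of 8 compass sectors.
        syms.append(int((angle + math.pi / 8) / (math.pi / 4)) % 8)
    return syms

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between
    two symbol sequences, using a single rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # prev holds dp[i-1][j-1]; dp[j] still holds dp[i-1][j].
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]
```

With this encoding, a lower edit distance between two direction strings indicates more similar motion paths; a subsequence variant of the same dynamic program would support the query-by-motion-example scenario the paper describes.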
Index Terms:
content-based retrieval, video data modeling, approximate string matching, spatio-temporal representation
Citation:
Pu-Jien Cheng, Wei-Pang Yang, "Querying Video Contents by Motion Example," dante, pp.287, 1999 International Symposium on Database Applications in Non-Traditional Environments (DANTE'99), 1999