We introduce a behavior-based similarity measure which tells us whether two space-time intensity patterns of two different video segments could have resulted from a similar underlying motion field. This is done directly from the intensity information, without explicitly computing the underlying motions. Such a measure allows us to detect similarity between video segments of differently dressed people performing the same type of activity. It requires no foreground/background segmentation, no prior learning of activities, and no motion estimation or tracking. Using this behavior-based similarity measure, we extend the notion of 2-dimensional image correlation into the 3-dimensional space-time volume, thus allowing us to correlate dynamic behaviors and actions. Small space-time video segments (small video clips) are "correlated" against entire video sequences in all three dimensions (x, y, and t). Peak correlation values correspond to video locations with similar dynamic behaviors. Our approach can detect very complex behaviors in video sequences (e.g., ballet movements, pool dives, running water), even when multiple complex activities occur simultaneously within the field of view of the camera. We further show its robustness to small changes in scale and orientation of the correlated behavior.
Space-time analysis, motion analysis, action recognition, motion similarity measure, template matching, video correlation, video indexing, video browsing
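The sliding space-time correlation described in the abstract can be sketched in code. The snippet below is a simplified illustration only: it slides a small video clip over a larger video volume in all three dimensions (t, y, x) and scores each offset with plain normalized intensity cross-correlation, whereas the paper's actual measure compares the consistency of the underlying motion fields rather than raw intensities. All names here (`correlate_3d`, the array shapes) are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def correlate_3d(video, template):
    """Slide a small space-time template over a video volume and return
    a normalized cross-correlation score at every (t, y, x) offset.

    NOTE: plain intensity NCC, used here only as a stand-in for the
    paper's behavior-based similarity measure. Arrays are indexed
    (frame, row, col); peaks mark locations resembling the template.
    """
    T, H, W = template.shape
    vt, vh, vw = video.shape
    out = np.empty((vt - T + 1, vh - H + 1, vw - W + 1))
    # Zero-mean, unit-variance template (epsilon guards flat patches).
    tz = (template - template.mean()) / (template.std() + 1e-8)
    n = template.size
    for t in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                patch = video[t:t + T, y:y + H, x:x + W]
                pz = (patch - patch.mean()) / (patch.std() + 1e-8)
                out[t, y, x] = (tz * pz).sum() / n  # NCC in [-1, 1]
    return out
```

A usage sketch: cut a clip out of a video volume, correlate it back, and the highest score lands at the clip's original space-time location. The triple loop is for clarity; an FFT-based 3D correlation would be used for realistic video sizes.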

M. Irani and E. Shechtman, "Space-Time Behavior-Based Correlation—OR—How to Tell If Two Underlying Motion Fields Are Similar Without Computing Them?," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, pp. 2045-2056, 2007.