Multimedia Metadata and Semantic Management
Multimedia semantics is more than developing ontologies to describe the nature of multimedia content; it is a key research area for interoperable, intelligent access to and management of multimedia materials.
There are many metadata standards. More than 10 organizations vie for leadership in content description, including the Dublin Core Metadata Initiative, ISO/IEC’s MPEG working group, and the World Wide Web Consortium (W3C). For a complete list, see the “Semantic Standards” section in the Related Resources below.
Recent studies show that this diversity is a major hindrance to a common multimedia semantic understanding. So, the first challenge to address in this area is the heterogeneity in metadata description and query languages. We must build better bridges across semantic gaps. We also need to cleverly aggregate and concisely present results for users while providing security and related access-control techniques appropriate to multimedia content. Other challenges include synchronizing metadata information to media and vice versa and managing this relationship throughout the metadata life cycle.
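To make the heterogeneity problem concrete, consider a minimal sketch of a metadata crosswalk. The field names and mappings below are illustrative assumptions, not taken from any particular standard; real crosswalks (say, between Dublin Core and EXIF) are far richer and often lossy.

```python
# Minimal sketch of bridging heterogeneous metadata vocabularies.
# Crosswalk tables map source fields to a common vocabulary; the
# specific field names here are illustrative assumptions.

DC_TO_COMMON = {"dc:title": "title", "dc:creator": "creator", "dc:date": "date"}
EXIF_TO_COMMON = {"ImageDescription": "title", "Artist": "creator", "DateTime": "date"}

def normalize(record, crosswalk):
    """Map a source record into the common vocabulary, dropping unmapped fields."""
    return {common: record[src] for src, common in crosswalk.items() if src in record}

dc_record = {"dc:title": "Sunset", "dc:creator": "A. Painter"}
exif_record = {"ImageDescription": "Sunset", "DateTime": "2009:10:01 12:00:00"}

print(normalize(dc_record, DC_TO_COMMON))
print(normalize(exif_record, EXIF_TO_COMMON))
```

Even this toy example shows why the bridges are hard to build: unmapped fields are silently lost, and nothing reconciles differing value formats such as the EXIF date string.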
Effective multimedia management must span the metadata life cycle—from its creation through processing, storage, distribution, and deployment—and work whether the metadata is tightly connected with or independent of the media it describes.
Finally, we need better integration of situational context. This includes not only domain knowledge but also legal and cultural issues, metadata and semantic quality, and compression and encryption techniques.
Combining the Semantic Web with multimedia semantics offers interesting research opportunities for social-information management, such as collaborative multimedia tagging, semantics-aware social-media engineering, and multimedia mash-ups. These opportunities were well represented at the 2009 International Conference on Semantic and Digital Media Technologies (SAMT 09). The Virtual Campfire exemplifies emerging systems for integrating social multimedia. This project, led by Ralf Klamma at RWTH Aachen University, establishes an advanced framework to create, search, and share multimedia artifacts with context awareness across communities.
Selected Articles on Multimedia Semantics
This month’s theme includes the following featured articles:
In “Managing and Querying Efficiently Distributed Semantic Multimedia Metadata Collections” (IEEE MultiMedia, Oct.–Dec. 2009, pp. 12–20, special issue on Multimedia Metadata and Semantic Management; login required for full text), Sebastien Laborie, Ana-Maria Manzat, and Florence Sedes propose an original model of a centralized metadata resume. This resume is a concise version of the complete metadata that links to the desired multimedia content on remote servers and in databases. The authors also propose an automatic construction process for the metadata resume. They demonstrate the framework with current Semantic Web technologies for representing and querying semantic metadata, and their experimental results show the benefits of their approach.
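The idea of a metadata resume can be sketched in a few lines. The choice of "essential" fields and the record layout below are assumptions for illustration only; the authors' construction process is automatic and considerably more sophisticated.

```python
# Illustrative sketch of a "metadata resume": a concise, centralized summary
# extracted from full metadata held on remote servers, with links back to
# the full records. Field selection here is an assumption.

FULL_METADATA = {  # full metadata as it might sit on two remote servers
    "server-a/video42": {"title": "Lecture 1", "codec": "H.264", "duration": 3600,
                         "keywords": ["semantics", "metadata"], "bitrate": 1200},
    "server-b/img7": {"title": "Campfire", "format": "JPEG",
                      "keywords": ["community", "sharing"], "width": 1024},
}

RESUME_FIELDS = ("title", "keywords")  # assumed selection of essential fields

def build_resume(collections):
    """Keep only the essential fields, plus a link back to the full record."""
    return {loc: {**{f: md[f] for f in RESUME_FIELDS if f in md}, "link": loc}
            for loc, md in collections.items()}

def query_resume(resume, keyword):
    """Answer a keyword query against the resume; return links to full metadata."""
    return [entry["link"] for entry in resume.values()
            if keyword in entry.get("keywords", [])]

resume = build_resume(FULL_METADATA)
print(query_resume(resume, "semantics"))
```

The point of the design is that most queries can be answered (or at least routed) from the small central resume, and only the matching links need to be resolved against the remote collections.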
In “Semantic MPEG Query Format Validation and Processing,” also from IEEE MultiMedia’s special issue (Oct.–Dec. 2009, pp. 22–33; login required for full text), Mario Doeller, Ruben Tous, Matthias Gruhne, Miran Choi, Tae-Beom Lim, Jaime Delgado, and Armelle Yakou describe the semantic validation of the MPEG Query Format (MPQF) and the implementation of an MPQF engine on top of an Oracle database management system. MPQF enables interoperable querying among heterogeneous databases that use different metadata standards for describing multimedia content. This article introduces methods for evaluating MPQF semantic-validation rules not expressed by syntactic means within the XML Schema used by the databases. The authors highlight a prototype implementation of an MPQF-capable processing engine using QueryByFreeText, QueryByXQuery, QueryByDescription, and QueryByMedia query types on a set of MPEG-7 based image annotations.
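The distinction between syntactic and semantic validation is easy to illustrate: an XML Schema can check that each element is well-formed, but not constraints that relate values in different parts of a query. The query structure and the rules below are invented for this sketch and are not the actual MPQF rules.

```python
# Sketch of "semantic validation": checking constraints that span multiple
# parts of a query, which a plain XML Schema cannot express. The dict-based
# query layout and both rules are illustrative assumptions, not MPQF itself.

def validate(query):
    """Return a list of semantic-rule violations (an empty list means valid)."""
    errors = []
    declared = set(query.get("declared_fields", []))
    for field in query.get("output_fields", []):
        # Cross-element rule: every requested output field must be declared.
        if field not in declared:
            errors.append(f"output field '{field}' is not declared")
    # Value rule: a result limit must be positive.
    if query.get("max_results", 1) < 1:
        errors.append("max_results must be positive")
    return errors

good = {"declared_fields": ["title", "creator"], "output_fields": ["title"],
        "max_results": 10}
bad = {"declared_fields": ["title"], "output_fields": ["creator"],
       "max_results": 0}
print(validate(good))  # []
print(validate(bad))
```

A real engine would run such checks after schema validation and before query execution, rejecting semantically invalid queries with a diagnostic rather than returning wrong results.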
In “Using Social Networking and Collections to Enable Video Semantics Acquisition,” a third article from IEEE MultiMedia’s special issue (Oct.–Dec. 2009, pp. 52–60; login required for full text), Stephen Davis, Ian Burnett, and Christian Ritz consider the multimedia value chain’s first elements: media production, acquisition, and metadata gathering. The authors bring together methods from video content annotation and social networking to solve problems associated with gathering metadata that describes user interactions with and opinions about video content. Then they aggregate individual users’ interaction metadata to form semantic metadata for a given video. The authors have successfully implemented their techniques in a custom Flex application based on the popular Facebook API.
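The aggregation step can be sketched simply. The event format, segment size, and agreement threshold below are assumptions for illustration; the authors' Flex/Facebook application gathers much richer interaction data.

```python
# Illustrative aggregation of individual users' interaction metadata into
# semantic metadata for one video: bucket events into time segments and
# keep the tags that enough users agree on.
from collections import Counter

# (user, time_in_seconds, tag) -- e.g., tags left while watching
events = [
    ("alice", 30, "goal"), ("bob", 30, "goal"), ("carol", 35, "goal"),
    ("alice", 90, "interview"), ("bob", 92, "boring"),
]

def aggregate(events, bucket=30, min_votes=2):
    """Bucket events into fixed-length segments; keep consensus tags."""
    votes = {}
    for user, t, tag in events:
        seg = t // bucket * bucket
        votes.setdefault(seg, Counter())[tag] += 1
    return {seg: [tag for tag, n in c.items() if n >= min_votes]
            for seg, c in votes.items()}

print(aggregate(events))
```

The threshold is what turns noisy individual opinions into usable semantics: in the sample data, three users agree the first segment shows a goal, while the later segment gets no consensus tag at all.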
In “The Ariadne Infrastructure for Managing and Storing Metadata” (IEEE Internet Computing, July/Aug. 2009, pp. 18–25; login required for full text), Stefaan Ternier, Katrien Verbert, Gonzalo Parra, Bram Vandeputte, Joris Klerkx, Erik Duval, Vicente Ordóñez, and Xavier Ochoa analyze the standards-based Ariadne infrastructure for managing learning objects in an open, scalable architecture. The core infrastructure comprises several components such as the repository, federated search engine, finder, harvester, and metadata validation service. This infrastructure enables the integration of learning objects in multiple, distributed repository networks. Finally, the authors review several architectural patterns that they found useful in searching repositories in this area—namely, federated search, search on harvest, search adapter, and harvest adapter. It would be interesting to see this infrastructure working with multimedia metadata.
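Two of those patterns can be contrasted in a few lines. The repositories below are stand-ins (plain dicts); the real Ariadne components communicate through standardized interfaces, which the article describes in detail.

```python
# Sketch contrasting two repository-search patterns: federated search
# (query every repository at query time) versus search on harvest
# (periodically copy all metadata into one local index, search locally).
# Repository contents are illustrative.

REPO_A = {"lo1": "intro to metadata", "lo2": "video coding basics"}
REPO_B = {"lo3": "metadata life cycle"}

def federated_search(term, repos):
    """Forward the query to every repository at query time, merge results."""
    return sorted(oid for repo in repos for oid, text in repo.items() if term in text)

def harvest(repos):
    """Periodically copy all metadata into one local index..."""
    index = {}
    for repo in repos:
        index.update(repo)
    return index

def search_on_harvest(term, index):
    """...then answer queries against the local index."""
    return sorted(oid for oid, text in index.items() if term in text)

repos = [REPO_A, REPO_B]
print(federated_search("metadata", repos))
print(search_on_harvest("metadata", harvest(repos)))
```

Both return the same results on static data; the trade-off is freshness versus query latency, since a harvested index can lag behind the repositories but answers without any network round trips.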
In “Data-Sharing P2P Networks with Semantic Approximation Capabilities” (IEEE Internet Computing, Sept./Oct. 2009, pp. 60–70; login required for full text), Federica Mandreoli, Riccardo Martoglia, Simona Sassatelli, and Wilma Penzo tackle the new information-retrieval challenges posed by heterogeneous data representations within peer-to-peer systems. The authors suggest leveraging the presence of semantic approximations between peers’ schemas to improve query routing. Their approach identifies the peers that best satisfy a user’s query and ranks the answers through a mechanism that promotes the most semantically relevant results. Their work applies to a scenario in which various actors in a multimedia value-chain network (such as network and telecom operators and service providers) must actively collaborate.
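The routing idea can be illustrated with a deliberately naive similarity measure. Jaccard overlap between term sets is an assumption of this sketch; the article's semantic-approximation mechanism is considerably richer.

```python
# Sketch of semantic query routing in a P2P network: rank peers by how
# similar their schema vocabulary is to the query's terms, and forward
# the query only to the best candidates. Similarity here is plain
# Jaccard overlap -- an illustrative simplification.

PEER_SCHEMAS = {
    "peer1": {"movie", "director", "genre"},
    "peer2": {"song", "artist", "album"},
    "peer3": {"film", "movie", "actor"},
}

def jaccard(a, b):
    """Overlap between two term sets, in [0, 1]."""
    return len(a & b) / len(a | b)

def route(query_terms, schemas, top_k=2):
    """Return the top_k peers whose schemas best match the query terms."""
    ranked = sorted(schemas, key=lambda p: jaccard(query_terms, schemas[p]),
                    reverse=True)
    return ranked[:top_k]

print(route({"movie", "actor"}, PEER_SCHEMAS))
```

Routing by semantic closeness avoids flooding the whole network: the music-oriented peer is never contacted for a movie query, while both film-related peers are.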
In “3D Media and the Semantic Web” (IEEE Intelligent Systems, Mar./Apr. 2009, pp. 90–96; login required for full text), Michela Spagnuolo and Bianca Falcidieno introduce ways to integrate 3D media with Semantic Web technologies. Tools for coding, extracting, sharing, and retrieving the semantic content of 3D media are still far from satisfactory. The authors describe a means for embedding 3D into the Semantic Web, documenting and annotating 3D media for sharing, understanding its meaning, and retrieving it on the basis of content.
Numerous other articles from a wide range of journals and conferences deal with topics related to Multimedia Semantics; see our accompanying list of recommendations below. (Some links may require login.)
The Multimedia Metadata Community Portal presents recent standardization efforts related to multimedia metadata and semantics. It also lists tools for multimedia management, calls for research papers, pointers to educational material and books, and announcements for workshops organized by the community to address different facets and challenges in multimedia metadata and multimedia semantics.
The World Wide Web Consortium (W3C) Media Annotations working group is part of the W3C’s Video in the Web activity. The group defines methods and models (including a general ontology and API) to facilitate cross-community integration of information related to Web media objects.
The Moving Picture Experts Group (MPEG) is an ISO/IEC working group in charge of developing standards for the coded representations of digital audio, video, and related data. These representations include schemes for declaring and describing multimedia-content data structures and related information—in other words, multimedia metadata and its management. The group’s standards include MPEG-7, MPEG-21, MXM (MPEG Extensible Middleware), and AIT (Advanced IPTV Terminal).
Organizations and Standards
- Dublin Core
- EBUCore (European Broadcasting Union)
- EXIF (Exchangeable Image Format)
- IPTC (International Press Telecommunications Council)
- LOM (Learning Object Metadata; IEEE Learning Technology Standards Committee working group P1484.12)
- W3C Media Annotations working group
- METS (Metadata Encoding and Transmission Standard)
- MXM (MPEG Extensible Middleware)
- NISO MIX (National Information Standards Organization’s Metadata for Images in XML Schema)
- SearchMonkey Media
- SMPTE DMS-1 (Society of Motion Picture and Television Engineers’ Descriptive Metadata Scheme-1, defined in SMPTE 380M-2004 Television–Material Exchange Format)
- VRA (Visual Resources Association) Core 4.0
- XMP (Extensible Metadata Platform)
- YouTube Data API Protocol
- V. Rodriguez-Doncel and J. Delgado, “A Media Value Chain Ontology for MPEG-21,” IEEE Multimedia, Oct.–Dec. 2009, pp. 44–51.
- A. Batzios and P.A. Mitkas, “db4OWL: An Alternative Approach to Organizing and Storing Semantic Data,” IEEE Internet Computing, Nov./Dec. 2009, pp. 48–55.
- S. Park and S.-H. Jeong, “Mobile IPTV: Approaches, Challenges, Standards, and QoS Support,” IEEE Internet Computing, May/June 2009, pp. 23–31.
- C. Petrie, “The Semantics of ‘Semantics,’” IEEE Internet Computing, Sept./Oct. 2009, pp. 94–96.
- C. Bizer, “The Emerging Web of Linked Data,” IEEE Intelligent Systems, Sept./Oct. 2009, pp. 87–92.
- H. Kosch et al., “The Life Cycle of Multimedia Metadata,” IEEE Multimedia, Jan.–Mar. 2005, pp. 80–86.
- M. Eberhard et al., “An Interoperable Streaming Framework for Scalable Video Coding Based on MPEG-21,” IEEE Wireless Comm., vol. 16, no. 5, 2009, pp. 58–63.
Harald Kosch is a full professor at the Faculty of Informatics and Mathematics, University of Passau, Germany. His research interests include multimedia metadata, multimedia databases, middleware, and Internet applications. Kosch has a PhD in computer science from École Normale Supérieure de Lyon, France. Contact him at Harald.Kosch@uni-passau.de.
Christian Timmerer is an assistant professor in the Department of Information Technology, Multimedia Communication Group, Klagenfurt University, Austria. His research interests include the transport of multimedia content, multimedia adaptation in constrained and streaming environments, distributed multimedia adaptation, and Quality of Service/Quality of Experience. Timmerer has a PhD in applied informatics from Klagenfurt University. Contact him at firstname.lastname@example.org. His publications and MPEG contributions are available at http://research.timmerer.com; you can also follow him at http://www.twitter.com/timse7 or subscribe to his blog at http://blog.timmerer.com.