Large-Scale Multimedia Data Collections
Submission Deadline: 1 October 2011
Publication Issue: July-September 2012
Pivotal to many tasks in multimedia research and development is the availability of a sufficiently large data set and its corresponding ground truth. Currently, most available data sets for multimedia research are either too small, such as the Corel or Pascal data sets; too specific, such as the Text Retrieval Conference Video (Trecvid) data set; or without ground truth, such as the recent efforts by the Massachusetts Institute of Technology and Microsoft Research Asia that gathered millions of Web images for testing. While it's relatively easy to crawl and store a huge amount of data, creating the ground truth necessary to systematically train, test, evaluate, and compare the performance of various algorithms and systems is a major problem. For this reason, more and more research groups are individually putting effort into building such corpora to carry out research on large-scale data sets. These individual efforts should be brought together into a unified Web-scale repository that would benefit the entire multimedia research community.
The purpose of this special issue is to present and report on the construction and analysis of large-scale multimedia data sets and resources, and to provide a strong reference for multimedia researchers interested in large-scale multimedia data sets. The issue will specifically address the construction of data sets; the creation of ground truths; and the sharing and extending of such resources in terms of results and analysis related to ground truth, features, algorithms, and tools.
The IEEE MultiMedia special issue on large-scale multimedia data collections solicits original papers that will be of interest to IEEE MultiMedia readers. The list of possible topics includes, but is not limited to:
- Construction, unification, and evolution of corpora: the state of use, lessons learned and their impact, scalability of results, and range of applications.
- Frameworks for sharing data sets, ground truths, features, algorithms, and tools, as well as for comparison and analysis of results.
- Large-scale corpus analysis techniques: knowledge mining from large-scale multimedia corpora, optimization techniques on large-scale multimedia data for efficiency, and techniques for large-scale, content-based multimedia retrieval.
- Performance evaluation methodologies and standards.
For more information, please contact the Guest Editors:
- Benoit Huet, EURECOM
- Alexander Hauptmann, Carnegie Mellon University
- Tat-Seng Chua, National University of Singapore
Submit your paper at https://mc.manuscriptcentral.com/cs-ieee. When uploading your paper, please select the appropriate special issue title under the category "Manuscript Type." If you have any questions regarding the submission system, please contact Andy Morton at email@example.com. All submissions will undergo a blind peer review by at least two expert reviewers to ensure a high standard of quality. Referees will consider originality, significance, technical soundness, clarity of exposition, and relevance to the special issue topics. All submissions must contain original, previously unpublished research or engineering work. Papers must stay within the following limits: 6,500 words maximum, 12 total combined figures and tables with each figure counting as 200 words toward the total word count, and 18 references.
To submit a paper to the July-September 2012 special issue, please observe the following deadlines:
- 1 October 2011: Full paper must be submitted using our online manuscript submission service and prepared according to the instructions for authors (please see the Author Resources page at http://www.computer.org/multimedia/author.htm).
- 15 January 2012: Authors notified of acceptance, rejection, or needed revisions.
- 5 April 2012: Final versions due.