CLOSED – TPAMI Special Issue on Fine-Grained Visual Categorization

IEEE Transactions on Pattern Analysis and Machine Intelligence seeks submissions for an upcoming special issue by 31 October 2018
Submissions Due: 31 October 2018

Aims and Scope
With techniques for standard supervised image classification becoming increasingly practical, fine-grained image categorization, where images are classified into subordinate categories, has recently attracted a lot of attention and become an important task in computer vision. Examples of fine-grained visual categorization include, but are not limited to, recognizing detailed animal species, specific subgroups of plants, and car makes and models. Compared with general-purpose image categorization tasks, such as the ImageNet Challenge with its 1,000 general categories, fine-grained categorization pays attention to subtle details that are not easily captured by off-the-shelf image classifiers, making it a promising direction in visual perception and image understanding beyond generic labels. In addition, the scarcity of training data in the presence of a large number of fine-grained categories, e.g., about 10K species of birds and over 250K species of flowers, makes the problem of fine-grained visual categorization particularly interesting and challenging.

Applying deep neural networks, originally proposed for general-purpose image classification, to fine-grained visual categorization has led to notable performance improvements, but the fine-grained categorization problem cannot be solved merely by training modern deep convolutional neural networks. In the past, results for fine-grained image categorization have mostly been attained using classifiers with strong supervision, where detailed labels such as body parts, attributes, and viewpoints are manually annotated and used in training. Many questions arise when the fine-grained categorization task is made more general and broad: How do we alleviate the burden of fine-grained manual annotation? How can top-down information and domain knowledge be incorporated? How can we make the best use of web data and online resources such as Mechanical Turk?

Topics and Guidelines
This special issue targets researchers and practitioners from both industry and academia, providing a forum for publishing recent state-of-the-art achievements in fine-grained image recognition. Topics of interest include, but are not limited to:

  • Fine-grained image recognition and categorization
  • Fine-grained vehicle categorization and verification
  • Visual logo detection, categorization and verification
  • Person re-identification
  • Vehicle re-identification
  • Fashion image recognition, search and attribute prediction
  • Fine-grained food recognition and ingredient analysis
  • Transfer-learning from categories to subcategories
  • Part-based models for fine-grained categorization
  • Attribute-based models for fine-grained feature learning
  • Ontology-based fine-grained visual categorization
  • Fine-grained categorization with human in the loop
  • Fine-grained categorization in the wild by domain adaptation
  • Multi-modal data for fine-grained categorization
  • Novel benchmark data
  • Novel annotation and crowdsourcing approaches, and tools for fine-grained data labeling

Before submitting your manuscript, please ensure you have carefully read the Instructions for Authors for IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI). The complete manuscript should be submitted through T-PAMI’s submission system (https://mc.manuscriptcentral.com/tpami-cs). To ensure that you submit to the correct special issue, please select the appropriate section in the dropdown menu upon submission. In your cover letter, please also clearly mention the title of the special issue.

Important Dates

  • Paper submission due: October 31, 2018
  • First notification: December 31, 2018
  • Revision: February 28, 2019
  • Final decision: May 15, 2019
  • Publication date: August 2019 (tentative)

Guest Editors

Contact e-mails: jianf@microsoft.com, jingdw@microsoft.com