Call for Papers: Special Section on Parallel and Distributed Computing Techniques for AI, ML, and DL
IEEE Transactions on Parallel and Distributed Systems (TPDS) seeks submissions for this upcoming special section.

Artificial intelligence (AI), machine learning (ML), and deep learning (DL) have established themselves in a multitude of domains because of their ability to process and model unstructured input data. As these fields become increasingly integrated into our daily lives, there is significant interest in the community both in improving AI/ML/DL through parallel and distributed computing techniques (sometimes referred to as “PDC for AI/ML/DL”) and in applying AI/ML/DL techniques to improve traditional parallel and distributed computing systems (sometimes referred to as “AI/ML/DL for PDC”). In this special section, we hope to bring together the community’s research in this area into a curated selection of articles.

About TPDS special sections

TPDS has recently started a new initiative called “special sections.” Compared with regular TPDS submissions, special sections differ in three ways: (1) submissions focus on a special topic of interest (similar to special issues); (2) there are fixed deadlines for submission and notification; and (3) reviews are handled by a standing committee of reviewers, similar to a conference. This is the first such special section that we are planning.

Timeline

The timeline for the submission and review process is as follows. All deadlines are 23:59 Anywhere on Earth (AoE; https://www.worldtimeserver.com/time-zones/aoe/).

Round 1:

  • Submission deadline: 1 September 2021 (no extensions)
  • First-round review notification: 13 October 2021 (6 weeks for reviews)
  • The notification will be one of ACCEPT, REJECT, MAJOR REVISIONS, or MINOR REVISIONS

Round 2a (only for papers that get a minor revision in Round 1):

  • Second-round submission deadline: 27 October 2021 (2 weeks for re-submission)
  • Second-round review notification: 10 November 2021 (2 weeks for reviews)
  • The notification will be one of ACCEPT or REJECT

Round 2b (only for papers that get a major revision in Round 1):

  • Second-round submission deadline: 10 November 2021 (4 weeks for re-submission)
  • Second-round review notification: 8 December 2021 (4 weeks for reviews)
  • The notification will be one of ACCEPT, REJECT, or MINOR REVISIONS

Round 3 (only for papers that get a minor revision in Round 2b):

  • Third-round submission deadline: 22 December 2021 (2 weeks for re-submission)
  • Third-round review notification: 5 January 2022 (2 weeks for reviews)
  • The notification will be one of ACCEPT or REJECT

Topics of interest

The special section is dedicated to parallel and distributed computing (PDC) techniques for AI/ML/DL. This includes both “PDC for AI/ML/DL”- and “AI/ML/DL for PDC”-oriented articles (see the description above). Topics of interest include, but are not limited to:

  • AI/ML/DL for PDC and PDC for AI/ML/DL
  • Data parallelism and model parallelism
  • Efficient hardware for AI, ML, and DL
  • Hardware-efficient training and inference
  • Performance modeling of AI/ML/DL applications
  • Scalable optimization methods for AI/ML/DL
  • Scalable hyper-parameter optimization
  • Scalable neural architecture search
  • Scalable IO for AI/ML/DL
  • Systems, compilers, and languages for AI/ML/DL at scale
  • Testing, debugging, and profiling AI/ML/DL applications
  • Visualization for AI/ML/DL at scale

Submission instructions

Submissions to the special section will be handled as TPDS regular papers (survey and comment-style papers are not allowed). Please check the submission instructions, including the page limit, manuscript format, and submission guidance, on the TPDS Author Information page. Please note that review versions of papers are limited to 12 pages, and overlength page charges apply only to the final versions of accepted papers.

Submissions are *NOT* double-blind. Authors may disclose their names and may freely cite their previous work without referring to it in the third person.

Authors can submit papers until the deadline through ScholarOne. In Step 1 of the submission process, you will be asked to pick a “Type” for the paper; please pick “SS for Parallel and Distributed Computing Techniques for AI, ML, and DL.”

Extensions of prior papers

All papers need to have sufficient new content and contributions (see examples of extension material below) to warrant a separate publication. While the specific amount of acceptable new content is subjective and depends on the reviewer, we expect most reviewers to look for new material that represents novel research contributions beyond the original publication. Acceptance of the paper is based on this new content and its contributions. Old content from previous conference papers is included mainly to help reviewers understand the context and should be clearly attributed to the original source. Furthermore, any content used verbatim from previous publications should be appropriately quoted and cited to avoid self-plagiarism.

Authors submitting an extension of a prior publication should clearly respond to the following questions:

  1. What are the novel contributions of the submitted paper (beyond the authors’ previous publication(s))?
  2. What is the new content and in which sections does this content appear in the submission?
  3. How do the contributions (and content) build on the previously published material?

Examples of extension material:

Acceptable new content and contributions

  1. New conceptual extensions
  2. Experiments that provide new insights
  3. New theoretical analysis and/or proofs supporting empirical results

Allowable but insufficient content and contributions

  1. Extension to background and/or related work
  2. Elaboration on the same points in the introduction, observations, and conclusions
  3. Additional figures/plots that merely illustrate already-published content
  4. Additional experimental results without new insights

Unacceptable content and contributions

  1. Simple union of content from multiple prior publications

Co-editors

  • Antonio J. Peña (Barcelona Supercomputing Center)
  • Min Si (Argonne National Laboratory)
  • Jidong Zhai (Tsinghua University)

Committee members

Junya Arai, Nippon Telegraph and Telephone Corporation, Japan
Neelima Bayyapu, NITK Surathkal, India
Adrián Castelló, Universitat Jaume I de Castelló, Spain
Quan Chen, Shanghai Jiao Tong University, China
Amelie Chi Zhou, Shenzhen University, China
Bronis de Supinski, Lawrence Livermore National Laboratory, USA
Sheng Di, Argonne National Laboratory, USA
Lin Gan, Tsinghua University, China
Balazs Gerofi, RIKEN Center for Computational Science, Japan
Stephen Herbein, Lawrence Livermore National Laboratory, USA
Zhiyi Huang, University of Otago, New Zealand
Jithin Jose, Microsoft, USA
Ang Li, Pacific Northwest National Laboratory, USA
Dong Li, University of California, Merced, USA
Jiajia Li, Pacific Northwest National Laboratory, USA
Haikun Liu, Huazhong University of Science and Technology, China
Weifeng Liu, China University of Petroleum-Beijing, China
Naoya Maruyama, NVIDIA, USA
Xuehai Qian, University of Southern California, USA
Dandan Song, Beijing Institute of Technology, China
Shanjiang Tang, Tianjin University, China
Hao Wang, The Ohio State University, USA
Zhaoguo Wang, Shanghai Jiao Tong University, China
Rio Yokota, Tokyo Institute of Technology, Japan
Yang You, National University of Singapore, Singapore
Teng Yu, Tsinghua University, UK
Feng Zhang, Renmin University, China
