Call for Papers (Closed): Special Issue on Hardware Acceleration of Machine Learning

Submissions Due: 15 January 2022

The confluence of the end of Dennard scaling and the surging demand for machine learning (ML) processing has given rise to an arms race for the highest throughput, lowest latency, and lowest power consumption. In this environment, special-purpose ML hardware accelerators have skyrocketed in popularity. This special issue aims to describe the state of the art in developing, optimizing, programming, and deploying hardware accelerator technologies for ML. We seek papers that demonstrate the performance and power-efficiency levels enabling the ML revolution today and in the foreseeable future. Topics of interest to this special issue include, but are not limited to:

  • Hardware accelerator architectures and methodologies for ML inference and training at chip, wafer, system, and datacenter scale
  • Specialized hardware acceleration tailored for CNNs, transformers, recommender systems, reinforcement learning, and other leading DNN algorithms
  • ML acceleration on GPUs, FPGAs, CGRAs, and ASICs, including extensions of these hardware platforms with features specific to ML workloads
  • Performance characterization and analysis of ML workloads running on hardware accelerators
  • Compilers and ISAs for ML accelerators
  • Reduced precision, bit-serial evaluation, and structured and unstructured sparsity support
  • Approximate, error-aware, and error-resilient ML accelerators and development methodologies
  • Co-optimization of ML algorithms and accelerator architectures
  • ML accelerator deployment, multi-tenancy, and virtualization
  • On-chip and off-chip networks for ML accelerators, including photonics and other forward-looking interconnection technologies
  • ML acceleration in the cloud and at the edge, including integration into end-user devices

Important Dates

Submission Deadline: January 15, 2022 [submission site opens on January 1]
Reviews Completed: February 24, 2022
Major Revisions Due: March 16, 2022
Reviews of Revisions Completed: April 12, 2022
Notification of Final Acceptance: April 22, 2022
Publication Materials for Final Manuscripts Due: May 4, 2022
Publication: June 2022

Submission Guidelines

For author information and guidelines on submission criteria, please visit the Author Information page. Submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not have been published previously or be under consideration for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.

Questions?

Please address correspondence regarding this special issue to Lead Guest Editor Michael Ferdman (mferdman@cs.stonybrook.edu).

Guest Editors

Michael Ferdman

Michael Ferdman is an Associate Professor of Computer Science at Stony Brook University, where he co-directs the Computer Architecture Stony Brook Lab. His research interests are in the area of computer architecture, with particular emphasis on the server computing stack. His current projects center on FPGA accelerators for machine learning, emerging memory technologies, and speculative microarchitectural techniques. Mike received a BS in computer science, as well as BS, MS, and PhD degrees in electrical and computer engineering, from Carnegie Mellon University. He is a senior member of IEEE.

Jorge Albericio

Jorge Albericio is an Applied Deep Learning Research Scientist at NVIDIA. His research focuses on accelerators for machine intelligence, work he has pursued at Cerebras Systems and as a Deep Learning Architect at NVIDIA. Jorge has a PhD in systems engineering and computing from the University of Zaragoza. He was a postdoctoral fellow at the University of Toronto from 2013 to 2016, where he worked on branch prediction, approximate computing, and hardware accelerators for machine learning. He is a member of IEEE.

Tushar Krishna

Tushar Krishna is an Associate Professor in the School of Electrical and Computer Engineering at Georgia Tech. He has a PhD in electrical engineering and computer science from MIT (2014), an MSE in electrical engineering from Princeton University (2009), and a BTech in electrical engineering from the Indian Institute of Technology (IIT) Delhi (2007). Before joining Georgia Tech in 2015, Tushar spent a year as a post-doctoral researcher at Intel, Massachusetts. His research spans computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators, with a focus on optimizing data movement in modern computing systems. He is a member of IEEE.

Coordinating Topical Editor

Peter Milder

Peter Milder is an Associate Professor in the Department of Electrical and Computer Engineering at Stony Brook University. His research focuses on FPGA hardware acceleration, exploring how tools and systems can make FPGA acceleration more efficient and easier to use. Peter received BS, MS, and PhD degrees in electrical and computer engineering from Carnegie Mellon University in 2004, 2005, and 2010, respectively. From 2010 to 2012, he was a post-doctoral researcher at Carnegie Mellon, and in 2012 he joined the faculty of Stony Brook. He is a senior member of IEEE.

Review Committee

  • Tor Aamodt, UBC
  • Mieszko Lis, UBC
  • Yu-Hsin Chen, Facebook
  • Divya Mahajan, Microsoft
  • Peter Y. K. Cheung, Imperial College
  • Brett Meyer, McGill University
  • Jungwook Choi, Hanyang University
  • Jongse Park, KAIST
  • Jason Cong, UCLA
  • Brandon Reagen, NYU
  • Hadi Esmaeilzadeh, UCSD
  • Joshua San Miguel, University of Wisconsin
  • Patrick Judd, NVIDIA
  • Muhammad Shafique, NYU
  • EJ Kim, Texas A&M
  • Hardik Sharma, Google
  • Jangwoo Kim, SNU
  • Yongming Shen, Waymo
  • Hyoukjun Kwon, Facebook
  • Ganesh Venkatesh, Facebook
  • Jae Lee, SNU
  • Gabriel Weisz, Microsoft