Artificial intelligence (AI) algorithms require powerful and efficient computer architectures for two distinct tasks: training and inference. The former is extremely compute- and memory-intensive and in some cases can take days or weeks to complete; the latter usually demands real-time performance. As an increasing number of smart devices come online and perform AI inference at the edge (far from the cloud), they can enable lower latency, greater privacy, better data governance, and higher availability than cloud/datacenter-based inference. Training can also benefit from new federated learning computational models that perform part of the training concurrently on each edge node and later merge parameters on the cloud/server. A thorough understanding of the interaction between hardware and software in edge architectures creates a unique opportunity to improve the energy efficiency, performance, and usability of AI systems at the edge.
This special issue of IEEE Micro will explore academic and industrial research on topics that combine AI and edge computing. Contributions should relate to the design, performance, or application of microprocessors and microcomputers. Topics include, but are not limited to:
- Digital or analog AI systems at the edge
- Low-power neural processing engines
- Energy efficiency of AI systems at the edge
- Domain-specific languages (DSLs) and parallel programming models for AI systems at the edge
- FPGAs and low-power GPUs for AI at the edge
- Customized accelerator architectures and functional unit design
- Methods for efficient reconfigurability and adaptability at the edge
- High-level synthesis design of customized AI engines at the edge
- Emerging memory technologies for edge AI, such as processing-in-memory systems
- Computer systems abstractions for federated learning models
- Computation and communication tradeoffs in federated learning models
- Tools and libraries to support AI systems at the edge, such as IoT, virtual reality/augmented reality (VR/AR) systems, and wearables
- AI systems for big-data processing at the edge, such as bioinformatics, genomics, digital agriculture, and weather prediction
- Formal methods and techniques for data privacy in medical data, financial data, etc.
- Security implications of AI computing at the edge
- Sensor preprocessing and platform integration
- Techniques and analysis of quantization and network pruning
- Real-time performance, accuracy, and system complexity constraint analysis
Submission deadline: March 1, 2022
Initial notifications: May 17, 2022
Revised papers due: June 7, 2022
Final notifications: July 12, 2022
Final versions due: July 26, 2022
Publication: September/October 2022
Manuscripts should not exceed 5,000 words including references, with each average-size figure counting as 250 words toward this limit. Please include all figures and tables, as well as a cover page with author contact information (name, postal address, phone, fax, and email address) and a 200-word abstract. Submitted manuscripts must not have been previously published or be currently submitted for publication elsewhere, and all manuscripts must be cleared for publication. All previously published papers must have at least 30% new content compared to any conference (or other) publication. Acceptable file formats include Microsoft Word and PDF. Please read the Author Information page before submitting. When you are ready to submit, go to ScholarOne Manuscripts.
Contact guest editors Gabriel Falcao and Joseph R. Cavallaro at firstname.lastname@example.org or Editor-in-Chief Lizy John at email@example.com.
For questions about the ScholarOne submission system, contact the IEEE Micro magazine assistant at firstname.lastname@example.org.