FPGAs have become ubiquitous in designing, evaluating, and accelerating heterogeneous computing architectures. From early uses such as accelerating network packet processing, signal processing, and logic emulation, FPGAs are now additionally used across the computing fabric: in IoT devices, in memory and storage systems, and as compute-node accelerators. With the profusion of system-on-chip designs, FPGAs with embedded processors offer unique opportunities for microarchitectural innovation by closely integrating custom logic with the cache hierarchy and CPU cores.
This special issue of IEEE Micro will explore academic and industrial research on topics related to FPGAs in computing. Topics include, but are not limited to:
- FPGA compute-node accelerators in server and HPC data centers for compute- and data-intensive offload
- FPGAs in edge/IoT computing
- FPGAs in embedded architectures such as near memory/storage, “fog” computing, and sensor-integrated processing
- Acceleration of architecture evaluation through FPGA emulation of SoC components, including cores, networks-on-chip, and cache hierarchies
- Interaction of FPGA components with CPU architecture
  - shared scratchpads
  - communication between conventional cores and FPGA IP blocks
  - memory coherence for FPGA accelerators independently accessing CPU memory systems
  - cache coherence
- NUMA domains for FPGA-managed memories (on-chip scratchpads and FPGA board-level DRAM) and reconfigurable cores
Submission Deadline: January 20, 2021
Initial notifications: March 15, 2021
Revised papers due: April 12, 2021
Final notifications: May 11, 2021
Final versions due: May 25, 2021
Publication: July/August 2021
For more information, please see the Author Information page and the Magazine Peer Review page. Submit electronically through ScholarOne Manuscripts, selecting this special-issue option.
Contact guest editors Maya Gokhale and Lesley Shannon at firstname.lastname@example.org or editor-in-chief Lizy John at email@example.com.