Closed Call for Papers: Special Section on Emerging In-Memory Computing Architectures and Applications
Submissions Due: 1 October 2022
Initial Notification (Accept/Reject/Revise): (Tentative) 31 December 2022
Revisions Due: 1 February 2023
Final Notifications: (Tentative) 7 April 2023
Publication: (Tentative) Before July 2023
Computer architecture stands at a critical crossroads, facing several severe challenges. For more than four decades, the performance of computing systems improved by 20-50% per year. In the last decade, this rate dropped to less than 7% per year, and it currently stands at only about 3% per year. The demand for performance improvement, however, keeps increasing and diversifying. Moreover, this higher performance must often be delivered at lower power consumption, adding to the complexity of the problem.
Both today’s computer architectures and the device technologies used to manufacture them face major challenges that make them incapable of delivering the performance required by complex applications such as artificial intelligence (AI). The complexity stems from the extremely high number of operations to be computed and the amount of data involved. As a direct consequence, the computational workload of such applications is limited by the well-known walls of current computing systems: (1) the memory wall, caused by the growing gap between processor and memory speeds and by limited memory bandwidth, which makes memory access the dominant performance and power bottleneck in memory-access-dominated applications; and (2) the power wall, the practical power limit imposed by cooling, which prevents further increases in CPU clock speed.
Nanoscale CMOS technology, which has been the enabler of the computing revolution, also faces three walls: (1) the reliability wall, as technology scaling leads to reduced device lifetimes and higher failure rates; (2) the leakage wall, as static power becomes dominant at smaller technology nodes (due to the volatile nature of the technology and lower supply voltages); and (3) the cost wall, as the cost per device achieved through pure geometric scaling of process technology is plateauing. Together, these challenges have slowed traditional device scaling.
In order for computing systems to continue delivering sustainable benefits for the foreseeable future, alternative computing architectures and paradigms have to be explored in conjunction with emerging device technologies. This special section aims to promote in-memory computing (IMC) and its applications as a promising solution. In doing so, we consider both well-established memory technologies (such as SRAM, DRAM, and FLASH) that can lead to more immediate solutions, and emerging memory technologies (such as RRAM, PCM, MRAM, and FeFET) that hold the promise for longer-term solutions.
Authors are invited to submit a manuscript to the special section on emerging IMC architectures and applications. Relevant topics of interest to this special section include (but are not limited to):
Emerging IMC-based systems: Architectures, design methodologies and frameworks, circuits, and device modeling
Emerging logic and circuit design concepts using memory devices: Threshold logic, stateful logic, and multi-level logic
Test and reliability for IMC circuits and systems: Defect, fault modeling, test generation, DfT, and fault tolerance techniques applied to IMC circuits and systems
Security for IMC systems and IMC paradigm for security: Threats, attacks and countermeasures for IMC, and exploiting IMC paradigm to enhance the security of a computing system
Emerging paradigms for IMC programming: Code generation, optimization, and programming models
Real-world applications of IMC
For author information and guidelines on submission criteria, please visit IEEE TETC's Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-section name. Manuscripts must not have been published or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.