CLOSED: Thematic Section on Memory-Centric Designs: Processing-in-Memory, In-Memory Computing, and Near-Memory Computing for Real-World Applications

The von Neumann architecture has been the status quo since the dawn of modern computing. Computers built on the von Neumann architecture are composed of an intelligent master processor (e.g., a CPU) and memory/storage devices incapable of computation (e.g., memory and disk). However, the skyrocketing data volume in modern computing is calling this status quo into question. The excessive data movement between processor and memory/storage in a growing number of real-world applications (e.g., machine learning and AI applications) has made the processor-centric design a severe power and performance bottleneck. The slowing of Moore's Law further strengthens the case for memory-centric design, which builds on recent advances in materials and manufacturing to open a paradigm shift. By performing computation right inside or near the memory, memory-centric designs promise massive throughput gains and energy savings.

Due to the fast-growing demands and developments of memory-centric designs, the definition of memory-centric design keeps evolving, and people from different disciplines use different terms for the underlying technologies. For example, researchers in computer systems and architecture consider processing-in-memory (PIM) the key emerging memory-centric design technique for reducing costly data movement, known as the von Neumann bottleneck, by enabling computation to be executed in the memory modules. PIM can be divided into two categories: (1) processing-near-memory, which adds computing logic/hardware close to or inside the memory modules, and (2) processing-using-memory, which exploits the intrinsic properties of memory cells so that the cells themselves can perform computation. Another group of researchers frames memory-centric design as memory-centric computing, which revolves around two technologies: (1) near-memory computing, which incorporates memory and logic in an advanced IC package, and (2) in-memory computing, which brings processing tasks near or inside the memory. Differently, the database world and data scientists use in-memory computing to mean caching and placing application data in memory. All of the above perspectives and definitions are within the scope of memory-centric designs and fall within the topics of interest of this thematic section.

Although many memory-centric designs and technologies have been proposed to resolve the severe power and performance bottleneck of traditional processor-centric designs, many new challenges remain for adopting memory-centric designs in real-world applications. These applications range from IoT to data-center workloads, and applications in different domains usually have diverse requirements and constraints. Beyond the challenges posed by these diverse applications, memory-centric designs also create challenges at multiple levels of computer systems, from the circuit/device level to the architecture and system levels. For example, a PIM accelerator that performs computation in the analog domain may face accuracy issues at the circuit and device levels; at the same time, to better utilize such an accelerator, systems and applications must be redesigned to offload suitable computation workloads and to improve data placement while accounting for the special characteristics of PIM accelerators. Thus, for memory-centric designs to resolve the severe power and performance bottleneck of processor-centric designs built on the von Neumann architecture, there is an urgent need for technology, innovation, modeling, analysis, design, and applications, ranging from the circuit/device level to the architecture/system level and the application level.

This thematic section aims to present the technological advancements in memory-centric designs, including processing-in-memory, in-memory computing, and near-memory computing, for real-world applications. Cross-disciplinary and emerging applications of memory-centric designs are also welcome. Topics of interest to this thematic section include (but are not limited to):

  • Efficient, low-power, and novel implementation of memory-centric designs: circuits, devices, architectures, and systems.
  • Test, verification, formal proof, computer aided design (CAD) automation, and fault/error-tolerance for memory-centric designs.
  • Memory-centric designs for specific application domains such as cryptography, security, neural networks, deep learning, signal processing, computer graphics, multimedia, computer vision, distributed and parallel computing (e.g., HPC), finance, etc.
  • Emerging material, circuit, device, architecture, and system technologies for the advancement of memory-centric designs.
  • Memory-centric designs for next-generation machine learning and AI applications.
  • Emerging applications with memory-centric designs.

Important Dates

  • Deadline for submissions: April 1, 2022
  • First decision (accept/reject/revise, tentative): June 15, 2022
  • Submission of revised papers: August 15, 2022
  • Notification of final decision (tentative): October 15, 2022
  • Journal publication (tentative): first half of 2023

Submission Guidelines

This thematic section only accepts submissions upon invitation. To submit to this thematic section, authors must have received a prior written invitation from the guest editors; non-invited manuscripts will be withdrawn/unsubmitted.

Submitted papers must include significant new research-based technical contributions within the scope of the journal. Papers that are purely theoretical or technological, or that lack methodology and generality, are not suitable for this thematic section. Submissions must include clear evaluations of the proposed solutions (based on simulation and/or implementation results) and comparisons to state-of-the-art solutions. For additional information, please contact the guest editors by email.

Papers under review elsewhere are not acceptable for submission. Extended versions of published conference papers (to be included as part of the submission together with a summary of differences) are welcome, but the submitted journal version must contain at least 40% new impactful technical or scientific material, and the verbatim similarity level, as reported by a tool such as CrossRef, should be below 50%.

Guidelines concerning the submission process, as well as LaTeX and Word templates, can be found here. As per TETC policies, only full-length papers (10-16 pages of technical material, double column; papers beyond 12 pages will be subject to MOPC, as per CS policies) can be submitted to special/thematic sections. References should not exceed 45 items, and each author's bio should not exceed 150 words. In particular, kindly comply with the detailed policies on MOPC and bibliographies (including self-citations) as reported on the Author Information page. When submitting through ScholarOne, please select the thematic section name in the Manuscript Type section.

Guest Editors

Yuan-Hao Chang, Academia Sinica, Taiwan (IEEE Senior Member)
Vincenzo Piuri, Università degli Studi di Milano, Italy (IEEE Fellow)