
An Interview with Hai (Helen) Li, Marie Foote Reel E’46 Distinguished Professor and Department Chair of Electrical and Computer Engineering at Duke University, whose pioneering work in neuromorphic computing, AI hardware design, and memory architecture has helped shape the future of intelligent systems. As a leading researcher and educator, Dr. Li bridges academic excellence with industrial impact, advancing sustainable, trustworthy AI through innovations in hardware-software co-design and brain-inspired computing. We connected with Dr. Li to explore her journey, her vision for edge intelligence, and the collaborative ecosystems driving the next wave of AI hardware breakthroughs.
Your research encompasses neuromorphic circuits and systems for brain-inspired computing. How do you envision the future of neuromorphic computing in mainstream applications?
Neuromorphic computing draws inspiration directly from the structure and function of the human brain, representing a fundamental shift from traditional computing paradigms. Unlike conventional processors such as CPUs and GPUs that execute sequential or parallel instructions through fixed architectures, neuromorphic systems use networks of artificial neurons and synapses to process information in a distributed, event-driven, and highly energy-efficient manner. This brain-inspired design allows computation and memory to coexist, drastically reducing data movement, the major bottleneck in modern computing.
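To make the event-driven idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It is purely illustrative (the parameters, time step, and spike times are invented for this example) and is not tied to any particular neuromorphic chip or to Dr. Li’s own designs; it simply shows how computation is triggered by incoming events rather than by a fixed instruction stream.

```python
import math

# Illustrative leaky integrate-and-fire (LIF) neuron: state decays passively,
# charge is added only when an input event (spike) arrives, and an output
# event is emitted only when the membrane potential crosses a threshold.
# All parameter values here are made up for demonstration.

def lif_neuron(input_spikes, weight=0.6, tau=20.0, threshold=1.0, t_end=100, dt=1):
    """Return output spike times for one neuron driven by a list of input spike times."""
    v = 0.0                                # membrane potential
    output_spikes = []
    events = set(input_spikes)
    for t in range(0, t_end, dt):
        v *= math.exp(-dt / tau)           # passive leak between events
        if t in events:                    # event-driven update: work happens only on a spike
            v += weight
        if v >= threshold:                 # fire and reset when the threshold is crossed
            output_spikes.append(t)
            v = 0.0
    return output_spikes

# Bursts of input events yield a sparse train of output events.
print(lif_neuron([5, 6, 7, 40, 41, 42, 43]))   # e.g. [6, 41, 43]
```

Because each neuron’s state lives with the neuron and updates happen only on events, most of a large array sits idle most of the time, which is where the sparsity and energy savings described above come from.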
The original vision of neuromorphic computing was to create machines capable of learning, adapting, and assisting human intelligence in daily life. Over the past decade, we’ve witnessed remarkable progress in AI, with large-scale models, including large language models (LLMs), achieving impressive capabilities in language, vision, and reasoning. However, these achievements come at a steep cost. Today’s AI systems require massive computational resources and consume extraordinary amounts of energy; training a single large model can use as much power as hundreds of homes. This model of growth is neither sustainable nor accessible for most individuals or organizations. To bring intelligence to the edge, where devices operate in real time and under strict energy constraints, we must rethink the underlying hardware. Neuromorphic systems offer a promising path forward. By mimicking the brain’s ability to process information efficiently and sparsely, they can enable AI that is scalable, low-power, and responsive. I envision a future where neuromorphic hardware becomes a foundational technology for mainstream applications, from autonomous systems and healthcare devices to personalized, always-on assistants, ushering in a new era of sustainable and pervasive intelligence. Ultimately, reimagining computing from the ground up will be essential for the next wave of AI innovation.
As the founding director of the Duke Center for Computational Evolutionary Intelligence and co-founder of the first NSF IUCRC center dedicated to AI computing hardware, what are the primary goals of these centers, and how do they contribute to the advancement of AI hardware?
Dr. Yiran Chen and I founded the Center for Computational Evolutionary Intelligence over fifteen years ago, when we were colleagues at the University of Pittsburgh. Our shared vision was to explore how computational models could emulate the adaptability and efficiency of natural intelligence. After joining Duke University, we expanded the center’s mission further to integrate machine learning, optimization, and neuromorphic hardware design into a unified research framework. Our goal is to advance fundamental research in intelligent systems while developing computing technologies that are more efficient, scalable, and sustainable. Complementing this academic foundation, the NSF IUCRC Center for ASIC (Accelerated System for Intelligent Computing) represents the first NSF Industry-University Cooperative Research Center devoted to AI computing hardware. This center brings together multiple universities and principal investigators, fostering deep collaboration across academia and industry. While our Duke center focuses on long-term, curiosity-driven research, the IUCRC center emphasizes technology transfer, bridging scientific discoveries with industrial applications. Together, they create a complementary ecosystem that moves innovation seamlessly from concept to commercialization. Through close partnerships with leading technology companies, these centers serve as catalysts for real-world impact. Our students and researchers work side-by-side with industry partners, addressing pressing challenges in AI hardware efficiency, reliability, and scalability. The collaborative culture has also inspired entrepreneurship. For example, several of our alumni have founded startups such as TetraMem and Nexsys, translating neuromorphic and AI-centric innovations into emerging products and platforms. Ultimately, both centers share a common goal: to redefine the boundaries of intelligent computing. By uniting interdisciplinary research with industrial collaboration, we aim to accelerate the development of AI hardware that is not only powerful but also energy-efficient and broadly accessible, paving the way for the next generation of intelligent systems.
Your experience spans both industry and academia, including positions at Qualcomm, Intel, and Seagate Technology. How has your industry experience influenced your academic research and teaching?
My industrial experience has profoundly shaped both my research philosophy and my approach to teaching. Working at companies taught me the importance of grounding academic research in real-world challenges and ensuring that our innovations deliver tangible, practical impact. I believe that effective research should not only push scientific boundaries but also address problems that matter to industry and society. After completing my Ph.D., I began my career at Qualcomm, where I focused on advanced memory design. That experience gave me a deep appreciation for the intersection of circuit efficiency, reliability, and manufacturability, concepts that remain central to my academic work today. At Intel, I was part of the Penryn processor team, whose dual-core architecture helped lay the foundation for today’s ubiquitous multi-core processors. Witnessing how architectural innovation could fundamentally reshape computing performance inspired my later exploration of neuromorphic and heterogeneous architectures. At Seagate Technology, I led circuit design efforts within the Advanced Technology Group, resulting in the first STT-RAM (Spin-Transfer Torque Random Access Memory) test chip in the US. That exposure to emerging memory technologies deepened my long-term interest in hardware systems and continues to guide much of my research in AI hardware design. In both my lab and classroom, I emphasize translating theoretical ideas into practical outcomes. I encourage students to think beyond papers and simulations and to design, prototype, and test their ideas. Whether tackling short-term engineering challenges or anticipating the long-term evolution of computing, my goal is to prepare students to think like innovators. For example, our work on structured sparsity in AI models focused not only on theoretical model reduction but also on achieving real speedups in deployment. This research, now widely adopted in industrial products, exemplifies how solutions rooted in real-world constraints can yield both scientific advances and industrial impact.
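The structured-sparsity point illustrates why hardware awareness matters: removing whole rows or channels, rather than scattering individual zeros, lets a dense kernel operate on a genuinely smaller matrix. The sketch below, assuming a toy weight matrix and an arbitrary 50% pruning ratio, is only meant to convey that idea and is not the published method.

```python
import numpy as np

# Toy illustration of structured sparsity: score whole rows (output neurons)
# of a weight matrix, drop the weakest ones, and shrink the matrix so the
# remaining multiply is physically smaller. Shapes and thresholds are invented.

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))                    # dense layer: 256 outputs, 512 inputs
x = rng.normal(size=512)

row_norms = np.linalg.norm(W, axis=1)              # one importance score per output row
keep = row_norms >= np.percentile(row_norms, 50)   # keep the strongest half of the rows

W_pruned = W[keep]                                 # structured pruning: whole rows removed
y = W_pruned @ x                                   # smaller dense product gives a real speedup,
                                                   # unlike unstructured zeros a dense kernel still visits
print(W.shape, "->", W_pruned.shape)               # (256, 512) -> (128, 512)
```

In practice the pruning criterion, granularity (rows, channels, or blocks), and retraining schedule all matter; the point here is simply that the saved computation shows up on real hardware because the data structure itself shrinks.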
What are your future research directions and goals? Are there any specific areas or challenges within AI that you are particularly interested in exploring? Any challenges you hope to overcome in the near future?
Looking ahead, my research aims to enable intelligence at the edge, bringing AI capabilities closer to users in a way that is accessible, energy-efficient, and trustworthy. I envision a future where intelligent systems are seamlessly embedded in our daily environments, assisting individuals in real time without relying on massive cloud infrastructures. Achieving this vision requires a deep rethinking of computing hardware. I believe the next revolution in AI will come from advances in AI hardware systems and their tight integration with sensing technologies for data acquisition and adaptive algorithms for learning. This convergence of new devices, novel architectures, and processing paradigms will remain at the heart of my research. Another important direction I am pursuing is privacy and trustworthiness in AI systems. As we push toward ubiquitous intelligence, it is crucial that we preserve the privacy and integrity of individual data. However, balancing privacy with computational efficiency often introduces trade-offs; for instance, privacy-preserving computation can demand higher energy and resource costs. Addressing this tension calls for holistic approaches that rethink algorithms, architectures, and hardware together. I see hardware-software co-design as a critical enabler for future AI breakthroughs. In our recent publication, “A Survey: Collaborative Hardware and Software Design in the Era of Large Language Models,” my team and I explored how tightly coupling hardware design with software optimization can dramatically improve the performance, efficiency, and scalability of large AI models. This co-design philosophy will guide much of our future work, especially as AI workloads continue to evolve rapidly. In short, my long-term goal is to build systems that are not only powerful and sustainable but also human-centered: AI that serves people reliably, respects privacy, and remains within everyone’s reach.
Dr. Hai (Helen) Li’s journey exemplifies how visionary leadership, interdisciplinary research, and a commitment to real-world impact can redefine the boundaries of computing. Her contributions to neuromorphic systems, memory technologies, and AI acceleration reflect a deep understanding of both theoretical foundations and practical constraints, and they are why she is a recipient of the 2025 Edward J. McCluskey Technical Achievement Award. Through her roles in academia, industry, and national research initiatives, Dr. Li continues to shape a future where intelligent systems are not only powerful and efficient but also sustainable, secure, and accessible, bringing AI closer to people and the environments they live in.