Tarek El-Ghazawi is a Professor in the Department of Electrical and Computer Engineering at The George Washington University, where he leads the university-wide Strategic Academic Program in High-Performance Computing. He is the founding director of The GW Institute for Massively Parallel Applications and Computing Technologies (IMPACT) and was a founding Co-Director of the NSF Industry/University Center for High-Performance Reconfigurable Computing (CHREC), established with funding from NSF, government, and industry. El-Ghazawi’s research interests include high-performance computing, computer architectures, reconfigurable and embedded computing, nanophotonic-based computing, and computer vision and remote sensing. He is one of the principal co-authors of the UPC parallel programming language and the first author of the UPC book from John Wiley and Sons. El-Ghazawi is also one of the pioneers of the area of High-Performance Reconfigurable Computing (HPRC).
Dr. El-Ghazawi was also one of the early researchers in cluster computing and built the first GW cluster in 1995. At present he is leading efforts toward rebooting computing based on new paradigms, including analog, nanophotonic, and neuromorphic computing. He has served on many boards and as a consultant for organizations such as CESDIS and RIACS at NASA GSFC and NASA ARC, IBM, and ARSC. He received his Ph.D. degree in Electrical and Computer Engineering from New Mexico State University in 1988. El-Ghazawi has published over 250 refereed research publications in his area. His research has been funded extensively by government organizations such as DARPA, NSF, AFOSR, NASA, and DoD, and by industrial organizations such as Intel, AMD, HP, and SGI. Dr. El-Ghazawi has served in many editorial roles, including as an Associate Editor for the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. He has chaired and co-chaired many IEEE international conferences and symposia, including IEEE PGAS 2015, IEEE/ACM CCGrid 2018, and IEEE HPCC/SmartCity/DSS 2017, to name a few. Professor El-Ghazawi is a Fellow of the IEEE and was selected as a Research Faculty Fellow of the IBM Center for Advanced Studies, Toronto. He was also awarded the Alexander von Humboldt Research Award from the Humboldt Foundation in Germany (given yearly to 100 scientists across all areas from around the world), the Alexander Schwarzkopf Prize for Technical Innovation, and the GW SEAS Distinguished Researcher Award. El-Ghazawi has also served as a senior U.S. Fulbright Scholar.
George Washington University
DVP term expires December 2020
Rebooting Computing—The Search for Post-Moore’s Law Breakthroughs
The field of high-performance computing (HPC), or supercomputing, refers to building and using computing systems that are orders of magnitude faster than our common systems. The top supercomputer, Summit, can perform 148,600 trillion calculations in one second (148.6 PF on LINPACK). The top two supercomputers are now in the USA, followed by two Chinese supercomputers. Many countries are racing to break the record and build an ExaFLOP supercomputer that can perform more than one million trillion (one quintillion) calculations per second. In fact, the USA is planning two such supercomputers for 2021, one of which (Frontier) will perform at 1.5 EF when fully operational. Scientists, however, are concerned that we are reaching many physical limits and that we need new, innovative ideas to make it to the next generation of computing. This talk will consider where we stand and where we are going with the current state of supercomputing, with emphasis on future processors, and some of the ideas that scientists are exploring to reinvent computing. A comparative understanding of neuromorphic and brain-inspired computing, quantum computing, and other innovative computing paradigms will be provided, along with an assessment of progress so far and the road ahead. Further, I will cover some of our own progress on nanophotonic post-Moore’s-law processing efforts.
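The unit arithmetic behind the figures quoted above can be checked with a short script (a minimal sketch; the Summit and Frontier numbers are those cited in the abstract):

```python
# FLOPS unit arithmetic for the performance figures quoted above.
PETA = 10**15  # 1 petaFLOP (PF) = one thousand trillion operations per second
EXA = 10**18   # 1 exaFLOP (EF) = one million trillion (quintillion) ops per second

summit_flops = 148_600 * 10**12        # Summit: 148,600 trillion calculations/s
print(summit_flops / PETA)             # -> 148.6, i.e., 148.6 PF on LINPACK

frontier_flops = 1.5 * EXA             # Frontier's planned 1.5 EF
print(frontier_flops / summit_flops)   # roughly a 10x jump over Summit
```

As the ratio shows, the planned exascale machine represents about an order-of-magnitude step beyond today's top petascale system.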
Exascale and the Convergence of HPC, Big Data, AI and IoT
The field of high-performance computing (HPC), or supercomputing, refers to building and using computing systems that are orders of magnitude faster than our common systems. The top supercomputer, Summit, can perform 148,600 trillion calculations in one second (148.6 PF on LINPACK). The top two supercomputers are now in the USA, followed by two Chinese supercomputers. Many countries are racing to break the record and build an ExaFLOP supercomputer that can perform more than one million trillion (one quintillion) calculations per second. In fact, the USA is planning two such supercomputers for 2021, one of which (Frontier) will perform at 1.5 EF when fully operational. Incidentally, data volumes due to social media and the Internet of Things (IoT) have been exploding, and AI, with advances in deep learning, has become a successful technique for leveraging those large volumes of data. These concurrent developments have resulted in what is seen as the convergence of Big Data and HPC, as processing massive amounts of data becomes impractical without HPC. In this talk we examine the progress in HPC and the potential applications and capabilities of such convergence as the basis for a future smart world.
Exploiting Hierarchical Locality for Productive Extreme Computing
Modern high-performance computers are characterized by massive hardware parallelism and deep hierarchies. Hierarchical levels may include cores, dies, chips, and nodes, to name a few. Locality exploitation at all levels of the hierarchy is a must, as the cost of data transfers can be high. Programmers’ knowledge and the expressivity of locality-aware programming models, such as the Partitioned Global Address Space (PGAS) model, can be very useful here. However, locality awareness can come at a high cost: asking programmers to express locality relations at multiple levels of the architecture hierarchy is detrimental to productivity, so systems and hardware must provide adequate support for exploiting hierarchical locality. In this talk I will discuss a framework for understanding and exploiting hierarchical locality in preparation for the next era of extreme computing. The role of system and hardware support will be stressed, and examples will be shared.