Dr. Anne C. Elster is a Professor of HPC in Computer Science at the Norwegian University of Science and Technology (NTNU), where she was Co-founder and Co-director of NTNU's Computational Science and Visualization program and established the IDI/NTNU HPC-Lab, a well-respected research lab in heterogeneous computing that regularly receives international visitors. She is also a Visiting Scientist at the University of Texas at Austin.
Her current research interests are in high-performance parallel computing, focusing on developing good models and tools for heterogeneous computing and parallel software environments. Her methods include applying machine learning to code optimization and image processing, and developing parallel scientific codes that interact visually with users by taking advantage of the power of modern GPUs. Her novel fast linear bit-reversal algorithm remains noteworthy.
She has been an active participant and committee member of ACM/IEEE SC (Supercomputing) since 1990, served on the MPI 1 & 2 Forums (1993–96), and has served on several other professional committees. She is a Senior Member of IEEE, a Life Member of AGU (American Geophysical Union), and a member of ACM, SIAM, and Tekna. Funding partners/collaborators include EU H2020, The Research Council of Norway, AMD, ARM, NVIDIA, Statoil, and Schlumberger.
She works very closely with her graduate students and has so far been the main advisor for more than 75 master's theses, several of which have received prizes. She has also supervised several PhD students and postdocs, and has served on PhD committees internationally, including in Denmark, Italy, Saudi Arabia, Sweden, and the United States. She is currently main advisor for 1 postdoc and 3 PhD students (with 2 more to be hired in 2018/19), co-supervisor for 2 more PhD students, and advisor for several master's students. She has published widely in the field of high-performance computing (HPC).
Dr. Elster has also given many invited lectures throughout her career. Recent invited talks include "AI for HPC: Experiences and Opportunities," given at the ASC Workshop in Nanchang, China, May 2018, and "Supercomputing and AI: Impact and Opportunities," to be given at Supercomputing Frontiers 2019 in Warsaw, Poland, in March 2019. A version of the latter was also presented at MIT and Stony Brook during her sabbatical at The University of Texas at Austin in Fall 2018.
Norwegian University of Science and Technology (NTNU-Trondheim)
Phone: +1-512-751-8962 / +47 981 02 638
DVP term expires December 2022
Parallel Computing and AI: Impact and Opportunities
Parallel computing has for many years been needed to achieve the desired performance for computationally intensive tasks, ranging from image processing to astrophysics and weather simulations. Traditionally, the HPC field drove companies like Cray, IBM, and others to develop processors for supercomputing. However, since the proliferation of COTS (commercial off-the-shelf) processors, including GPUs for gaming and, more recently, AI, market forces in other fields have driven the innovation in processor design. This means that algorithms, tools, and applications should now adapt and take advantage of tensor processors, machine learning techniques, and other related technology, rather than expecting old computational models to hold true. In this talk, we will discuss these issues, including how this is also an opportunity to help develop better graph algorithms for AI and to apply some of the techniques from AI to HPC challenges.
AI for HPC: Experiences and Opportunities
This talk will focus on how AI techniques can be used in the development of HPC environments and tools. As larger HPC systems become more and more heterogeneous by adding GPUs and other devices for performance and energy efficiency, they also become more complex to write and optimize HPC applications for. For instance, both CPUs and GPUs have several types of memories and caches that codes need to be optimized for. We show how AI techniques can help us pick among the tens of thousands of parameter combinations one ends up needing to search to get the best possible performance out of a given complex application. Ideas for future opportunities will also be discussed.
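To make the scale of such a parameter search concrete, here is a minimal sketch (my illustration, not the speaker's actual tooling) of a random-search autotuner over a hypothetical tuning space for a tiled GPU kernel; the parameter names and the stand-in cost model are assumptions for illustration only. In practice, `run_kernel` would compile and time the real kernel, and ML-guided search would replace the uniform sampling.

```python
import random

# Hypothetical tuning space for a tiled kernel (illustrative names only).
SPACE = {
    "block_size": [32, 64, 128, 256],
    "tile_size": [2, 4, 8, 16],
    "unroll": [1, 2, 4, 8],
    "use_shared_mem": [False, True],
}

def run_kernel(cfg):
    """Stand-in cost model: a real autotuner would compile and time the
    actual kernel here. Lower is better (think: runtime in ms)."""
    penalty = abs(cfg["block_size"] - 128) / 128
    penalty += abs(cfg["tile_size"] - 8) / 8
    penalty += abs(cfg["unroll"] - 4) / 4
    penalty += 0.0 if cfg["use_shared_mem"] else 0.5
    return 1.0 + penalty

def random_search(space, trials=200, seed=0):
    """Sample configurations uniformly and keep the best -- a common
    baseline that smarter, ML-guided search methods try to beat."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        cost = run_kernel(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

best, cost = random_search(SPACE)
print(best, cost)
```

This toy space has only 128 configurations, but real heterogeneous codes multiply such choices across many kernels and memory levels, which is where the "tens of thousands of parameters" arise and where learned models can prune the search.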
Optimizing for Energy Efficiency and Performance: From Embedded Systems to Supercomputers
Whether you are developing compute-intensive codes for embedded devices or supercomputers, you will have to use the parallel computing capabilities of current architectures to get the best performance. In this talk, we will discuss how optimizing for performance is often also the most energy-efficient approach, since the task mostly boils down to the 3 main challenges of high-performance computing: 1) location of data, 2) …, and 3) … . The talk will describe all 3 and what tools are needed to help application programmers get there.
Depending on interests, recent features from OpenMP and MPI, as well as task scheduling, can also be discussed.
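As a small illustration of challenge 1), location of data (my sketch, not material from the talk), the snippet below contrasts traversing a row-major 2D array in storage order with traversing it column by column; the strided accesses of the latter defeat spatial locality in the cache hierarchy.

```python
import time

N = 1000
# Row-major 2D array stored as nested Python lists.
matrix = [[float(i + j) for j in range(N)] for i in range(N)]

def sum_rows(m):
    """Traverse in storage order: consecutive accesses, cache-friendly."""
    total = 0.0
    for row in m:
        for x in row:
            total += x
    return total

def sum_cols(m):
    """Traverse column by column: strided accesses with poor locality."""
    total = 0.0
    n = len(m)
    for j in range(n):
        for i in range(n):
            total += m[i][j]
    return total

for fn in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    s = fn(matrix)
    dt = time.perf_counter() - t0
    print(f"{fn.__name__}: sum={s:.0f} time={dt*1000:.1f} ms")
```

Both loops compute the same sum; only the access order differs. In pure Python the gap is muted by interpreter overhead, but in C, Fortran, or CUDA the same access-order change can cost an order of magnitude, which is why data layout is listed as the first challenge.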
- Parallel Computing and AI: Impact and Opportunities
- AI for HPC: Experiences and Opportunities
- Optimizing for Energy Efficiency and Performance: From Embedded Systems to Supercomputers