• Are there limits to manageable levels of parallelism? Are millions of threads tractable? What are the programming models that support application development within reasonable levels of effort, while allowing high performance and efficiency?
• Is there a limit to the number of cores that can be used for building a single computer? What is the significance of heterogeneity and hybrid designs in this respect?
• Are there fundamental limits to the growing footprint of the interconnect? What are the performance/reliability tradeoffs?
• What are the factors that hinder high levels of sustained performance? What are the best ways to assess, model, and predict performance in extreme-scale regimes?
• What are the system software challenges, limitations, and opportunities? Can we develop system software that harnesses heterogeneity and asynchronous designs?
• What are design considerations for the I/O and storage subsystems given the vast amounts of data generated by such simulations?
• What are the main characteristics of, and challenges in, providing a high quality of service on current and future extreme-scale systems? Given the size and complexity of the systems enabling extreme-scale computing, can we overcome their intrinsic limitations in reliability and resilience?
• Is it inevitable that extreme-scale supercomputers will be delivered together with an associated power plant? Can we reduce power consumption as much as possible, both to save energy for a greener planet and to enable the design of even faster computers?