"The times they are a-changin'." —Bob Dylan
• Massively parallel codes are more complex than sequential codes, and programming tools are rudimentary.
• Modifying existing applications to exploit these new computer architectures will be a substantial job, and it might not even be feasible, because not all sequential algorithms can be parallelized.
• The architectures for massively parallel computers are still evolving rapidly, so programmers will be trying to hit a moving target.
• algorithms that can exploit parallel processing;
• new computing "stacks" (applications, programming languages, compilers, runtime/virtual machines, operating systems, and architectures) that execute parallel rather than sequential programs and effectively manage software parallelism, hardware parallelism, power, memory, and other resources;
• portable programming models that allow expert and typical programmers to express parallelism easily and allow software to be efficiently reused on multiple generations of evolving hardware;
• parallel-computing architectures driven by applications, including enhancements of chip multiprocessors, conventional data parallel architectures, application-specific architectures, and radically different architectures;
• open interface standards for parallel programming systems that promote cooperation and innovation to accelerate the transition to practical parallel computing systems; and
• engineering and computer science educational programs that place increased emphasis on parallelism and use a variety of methods and approaches to better prepare students for the types of computing resources that they will encounter in their careers.
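To make the first two items above concrete, consider the simplest kind of parallel algorithm: a data-parallel reduction, in which independent workers process disjoint slices of the input and a sequential step combines their results. The sketch below is illustrative only and is not drawn from the text; it uses Python's standard `concurrent.futures` module, and the function names (`partial_sum`, `parallel_sum`) and the worker count are arbitrary choices for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker reduces its own slice independently --
    # this is the data-parallel portion of the algorithm.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Run the partial reductions concurrently, then combine
        # the independent results with a short sequential step.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))
```

Even this toy example hints at why parallel programming is harder than sequential programming: the programmer must choose how to partition the data, how many workers to use, and how to combine results, and the right answers depend on the hardware. (In CPython, a process pool rather than a thread pool would be needed for true CPU-bound speedup, another hardware- and runtime-dependent detail of the kind the "stack" bullet above refers to.)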