Data-Intensive Computing in the 21st Century
April 2008 (vol. 41, no. 4), pp. 30–32
Ian Gorton, Pacific Northwest National Laboratory
Paul Greenfield, Australia's Commonwealth Scientific and Industrial Research Organisation
Alex Szalay, Johns Hopkins University
Roy Williams, Caltech
The deluge of data that future applications must process—in domains ranging from science to business informatics—creates a compelling argument for substantially increased R&D targeted at discovering scalable hardware and software solutions for data-intensive problems.


Index Terms:
data-intensive computing, compute-intensive problems
Ian Gorton, Paul Greenfield, Alex Szalay, Roy Williams, "Data-Intensive Computing in the 21st Century," Computer, vol. 41, no. 4, pp. 30-32, April 2008, doi:10.1109/MC.2008.122