Issue No. 4, April 2008 (vol. 41)
pp. 30-32
Paul Greenfield , Australia's Commonwealth Scientific and Industrial Research Organisation
Alex Szalay , Johns Hopkins University
Ian Gorton , Pacific Northwest National Laboratory
The deluge of data that future applications must process—in domains ranging from science to business informatics—creates a compelling argument for substantially increased R&D targeted at discovering scalable hardware and software solutions for data-intensive problems.
Keywords: data-intensive computing, compute-intensive problems
P. Greenfield, A. Szalay, and I. Gorton, "Data-Intensive Computing in the 21st Century," Computer, vol. 41, no. 4, pp. 30-32, Apr. 2008, doi:10.1109/MC.2008.122.