Issue No. 4, April 2008 (vol. 41), pp. 30-32
Ian Gorton , Pacific Northwest National Laboratory
Paul Greenfield , Australia's Commonwealth Scientific and Industrial Research Organisation
Alex Szalay , Johns Hopkins University
Roy Williams , Caltech
ABSTRACT
The deluge of data that future applications must process—in domains ranging from science to business informatics—creates a compelling argument for substantially increased R&D targeted at discovering scalable hardware and software solutions for data-intensive problems.
INDEX TERMS
data-intensive computing, compute-intensive problems
CITATION
Ian Gorton, Paul Greenfield, Alex Szalay, and Roy Williams, "Data-Intensive Computing in the 21st Century," Computer, vol. 41, no. 4, pp. 30-32, April 2008, doi:10.1109/MC.2008.122