2015 IEEE 8th International Conference on Cloud Computing (CLOUD)
New York City, NY, USA
June 27, 2015 to July 2, 2015
ISSN: 2159-6190
ISBN: 978-1-4673-7286-2
pp: 389-396
ABSTRACT
As the wealth of information available on the web keeps growing, being able to harvest massive amounts of data has become a major challenge. Web crawlers are the core components used to retrieve such vast collections of publicly available data. However, the key limiting factor of any crawler architecture is its large infrastructure cost. To reduce this cost, and in particular the high upfront investments, we present in this paper a geo-distributed crawler solution, UniCrawl. UniCrawl orchestrates several geographically distributed sites. Each site operates an independent crawler and relies on well-established techniques for fetching and parsing the content of the web. UniCrawl splits the crawled domain space across the sites and federates their storage and computing resources, while minimizing the inter-site communication cost. To assess our design choices, we evaluate UniCrawl in a controlled environment using the ClueWeb12 dataset, and in the wild when deployed over several remote locations. We conducted several experiments over 3 sites spread across Germany. Compared to a centralized architecture with a single crawler simply stretched over several locations, UniCrawl shows a performance improvement of 93.6% in terms of network bandwidth consumption, and a speedup factor of 1.75.
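The abstract's key idea is that the crawled domain space is split across sites so that each site crawls its own share and inter-site traffic stays low. As an illustration only, the minimal Python sketch below shows one plausible way such a split could be realized, by hashing each URL's domain to a site; the site names, function names, and hashing scheme are assumptions made for this sketch and are not taken from the paper.

# Illustrative sketch (not the paper's implementation): assign each URL's
# domain to one crawl site by hashing, so all pages of a domain are fetched
# by the same site and cross-site coordination is kept to a minimum.
import hashlib
from urllib.parse import urlparse

SITES = ["site-de-1", "site-de-2", "site-de-3"]  # hypothetical site identifiers

def site_for_url(url: str) -> str:
    """Map a URL to a crawl site based on its domain name."""
    domain = urlparse(url).netloc.lower()
    digest = hashlib.sha1(domain.encode("utf-8")).hexdigest()
    return SITES[int(digest, 16) % len(SITES)]

# Example: URLs from the same domain always land on the same site.
print(site_for_url("http://example.org/a"), site_for_url("http://example.org/b"))

Under this kind of scheme, a site that discovers a link whose domain hashes to another site would forward only that URL rather than the fetched content, which is consistent with the low inter-site bandwidth reported in the abstract.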
INDEX TERMS
Crawlers, Uniform resource locators, Computer architecture, Distributed databases, Internet, Web pages
CITATION

D. L. Quoc, C. Fetzer, P. Felber, E. Riviere, V. Schiavoni and P. Sutra, "UniCrawl: A Practical Geographically Distributed Web Crawler," 2015 IEEE 8th International Conference on Cloud Computing (CLOUD), New York City, NY, USA, 2015, pp. 389-396.
doi:10.1109/CLOUD.2015.59