2013 IEEE Seventh International Conference on Semantic Computing (2013)
Irvine, CA, USA
Sept. 16, 2013 to Sept. 18, 2013
pp: 228-235
ABSTRACT
In recent years, an increasing amount of structured data has been published on the Web as Linked Open Data (LOD). Despite recent advances, consuming and using Linked Open Data within an organization remains a substantial challenge. Many LOD datasets are quite large and, despite progress in RDF data management, loading and querying them in a triple store is extremely time-consuming and resource-demanding. To overcome this consumption obstacle, we propose a process inspired by the classical Extract-Transform-Load (ETL) paradigm. In this article, we focus particularly on the selection and extraction steps of this process. We devise a fragment of SPARQL, dubbed SliceSPARQL, which enables the selection of well-defined slices of datasets fulfilling typical information needs. SliceSPARQL supports graph patterns for which each connected subgraph pattern involves at most one variable or IRI in its join conditions. This restriction guarantees efficient processing of the query against a sequential dataset dump stream. As a result, our evaluation shows that dataset slices can be generated an order of magnitude faster than with the conventional approach of loading the whole dataset into a triple store and retrieving the slice by executing the query against the triple store's SPARQL endpoint.
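To illustrate the general idea of slicing a dataset dump by streaming rather than loading it into a triple store, the following minimal Python sketch extracts all triples about resources that satisfy a pattern whose triple patterns join on a single variable (e.g. { ?s rdf:type dbo:City . ?s dbo:country dbr:Germany }). This is not the authors' implementation; the file names, the example pattern, the simplified N-Triples parsing, and the two-pass strategy are illustrative assumptions.

```python
# Sketch: slice an N-Triples dump with a single-join-variable pattern,
# streaming the file instead of loading it into a triple store.
import re

# Very simplified N-Triples line parser (assumes well-formed lines;
# a real implementation would use a proper RDF parser).
TRIPLE = re.compile(r'^(\S+)\s+(\S+)\s+(.+?)\s*\.\s*$')

# Hypothetical pattern: predicate/object pairs the join variable ?s must satisfy.
PATTERNS = [
    ("<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>",
     "<http://dbpedia.org/ontology/City>"),
    ("<http://dbpedia.org/ontology/country>",
     "<http://dbpedia.org/resource/Germany>"),
]

def parse(line):
    m = TRIPLE.match(line)
    return m.groups() if m else None

def slice_dump(dump_path, out_path):
    # Pass 1: stream the dump and record, per subject, which patterns matched.
    matched = {}
    with open(dump_path, encoding="utf-8") as f:
        for line in f:
            t = parse(line)
            if not t:
                continue
            s, p, o = t
            for i, (pp, po) in enumerate(PATTERNS):
                if p == pp and o == po:
                    matched.setdefault(s, set()).add(i)
    # Subjects that satisfy every triple pattern form the slice.
    keep = {s for s, hits in matched.items() if len(hits) == len(PATTERNS)}

    # Pass 2: stream again and write every triple whose subject is in the slice.
    with open(dump_path, encoding="utf-8") as f, \
         open(out_path, "w", encoding="utf-8") as out:
        for line in f:
            t = parse(line)
            if t and t[0] in keep:
                out.write(line)

if __name__ == "__main__":
    slice_dump("dataset.nt", "slice.nt")  # assumed file names
```

Because every triple pattern joins on the same variable, the sketch only needs to remember per-subject match flags while scanning the dump sequentially, which is what makes stream-based slicing feasible without a triple store.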
CITATION

E. Marx, S. Shekarpour, S. Auer and A. N. Ngomo, "Large-Scale RDF Dataset Slicing," 2013 IEEE Seventh International Conference on Semantic Computing (ICSC), Irvine, CA, USA, 2013, pp. 228-235.
doi:10.1109/ICSC.2013.47