2018 IEEE 34th International Conference on Data Engineering (ICDE) (2018)
Apr 16, 2018 to Apr 19, 2018
To fully use the resources of a main memory database cluster, we take independent parallelism into account to parallelize multiple pipelines of one query. However, scheduling resources to multiple pipelines is an intractable problem. Traditional static approaches to this problem may lead to serious waste of resources and a suboptimal execution order of pipelines, because it is hard to predict the actual data distribution and fluctuating workloads at compile time. In response, we propose a dynamic scheduling algorithm, List with Filling and Preemption (LFPS), based on two techniques. (1) Adaptive filling improves resource utilization by issuing extra pipelines to adaptively fill idle resource "holes" during execution. (2) Cost-based preemption guarantees that pipelines on the critical path are scheduled first at run time. We implement LFPS in our prototype database system. Under TPC-H workloads, experiments show that our work shortens the finish time of the parallelizable pipelines of one query by up to 2.3x compared with a static approach and 1.7x compared with serialized execution.
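The abstract outlines LFPS only at a high level. As a rough illustration of the two ideas, the sketch below models a list scheduler over a pipeline DAG: each pipeline's priority is its critical-path length (longest remaining cost to a sink), the top-ranked ready pipelines run first (re-selecting every time step, which crudely approximates cost-based preemption), and leftover slots are filled with any other ready pipeline (adaptive filling). All names, costs, and the time-stepped model are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of LFPS-style scheduling; NOT the authors' code.
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    name: str
    cost: int                        # estimated run time, in abstract units
    deps: list = field(default_factory=list)  # names of prerequisite pipelines

def critical_path_priority(pipes):
    """Priority of a pipeline = longest-cost path from it to any sink."""
    by_name = {p.name: p for p in pipes}
    children = {p.name: [] for p in pipes}
    for p in pipes:
        for d in p.deps:
            children[d].append(p.name)
    memo = {}
    def rank(name):
        if name not in memo:
            kids = children[name]
            memo[name] = by_name[name].cost + (max(rank(k) for k in kids) if kids else 0)
        return memo[name]
    return {p.name: rank(p.name) for p in pipes}

def lfps_schedule(pipes, slots=2):
    """Time-stepped greedy scheduler. Each step, the highest-priority
    ready pipelines occupy the slots (re-picking per step stands in for
    preemption); remaining slots are filled with other ready pipelines.
    Returns a list of (pipeline name, finish time) pairs."""
    prio = critical_path_priority(pipes)
    by_name = {p.name: p for p in pipes}
    remaining = {p.name: p.cost for p in pipes}
    done, order, t = set(), [], 0
    while len(done) < len(pipes):
        ready = [n for n in remaining
                 if remaining[n] > 0 and all(d in done for d in by_name[n].deps)]
        for n in sorted(ready, key=lambda n: -prio[n])[:slots]:
            remaining[n] -= 1
            if remaining[n] == 0:
                done.add(n)
                order.append((n, t + 1))
        t += 1
    return order
```

For example, with pipelines A (cost 2) and B (cost 1) feeding C (cost 2, after A) and D (cost 1, after B) on two slots, A outranks B because it heads the longer chain, so the A→C path is never delayed while B and D fill the second slot.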
data handling, database management systems, query processing, resource allocation, scheduling, storage management
Z. Fang, C. Weng, L. Wang and A. Zhou, "Parallelizing Multiple Pipelines of One Query in a Main Memory Database Cluster," 2018 IEEE 34th International Conference on Data Engineering (ICDE), Paris, France, 2018, pp. 1252-1255.