Issue No. 4 - April 1995 (vol. 6), pp. 388-399
ABSTRACT
Abstract—We address a new variant of the scheduling problem by investigating how the schedule length scales with the required number of processors, performing scheduling partly at compile time and partly at run time.

Assuming an infinite number of processors, the compile-time schedule is found using a new concept, the threshold of a task, which quantifies a trade-off between the schedule length and the degree of parallelism. The schedule is constructed to minimize either the schedule length or the number of required processors, and it satisfies:

- A feasibility condition, which guarantees that the delay of a task's scheduled start beyond its earliest start time stays below the threshold, and
- An optimality condition, which uses a merit function to decide the best task-processor match among a set of tasks competing for a given processor.

At run time, tasks are merged to produce a schedule for a smaller number of available processors, allowing the program to be scaled down to the processors actually available at run time. The usefulness of this scheduling heuristic has been demonstrated by incorporating the scheduler into the compiler backend targeting Sisal (Streams and Iterations in a Single Assignment Language) on the iPSC/860.

Index Terms—Compile-time scheduling, dataflow graphs, distributed memory multiprocessors, functional parallelism, run-time scheduling, scaling, schedule length.
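The two-phase idea described in the abstract can be sketched as follows. This Python sketch is not the paper's algorithm: the task graph, costs, uniform threshold, the merit rule (earliest feasible start), and the run-time merging rule (folding virtual processors modulo the available count) are all illustrative assumptions. It only shows the shape of the scheme: a compile-time pass that places tasks on as many virtual processors as needed while keeping each task within its threshold of its earliest start time, followed by a run-time pass that merges the virtual processors onto the processors actually available.

def earliest_start_times(tasks, deps, cost):
    # Longest-path earliest start time of each task in the precedence DAG.
    est = {}
    def visit(t):
        if t not in est:
            est[t] = max((visit(p) + cost[p] for p in deps.get(t, [])), default=0)
        return est[t]
    for t in tasks:
        visit(t)
    return est

def compile_time_schedule(tasks, deps, cost, threshold):
    # Phase 1 (compile time, unbounded processors): start each task within
    # threshold[t] of its earliest start time (feasibility condition) and,
    # as a stand-in merit function, prefer the earliest feasible start.
    est = earliest_start_times(tasks, deps, cost)
    proc_free = []    # finish time of the last task on each virtual processor
    placement = {}    # task -> (virtual processor, start time)
    for t in sorted(tasks, key=lambda x: est[x]):   # topological for positive costs
        ready = max((placement[p][1] + cost[p] for p in deps.get(t, [])), default=0)
        best, best_start = None, None
        for i, free in enumerate(proc_free):
            start = max(free, ready)
            if start <= est[t] + threshold[t] and (best is None or start < best_start):
                best, best_start = i, start
        if best is None:                     # no feasible processor: open a new one
            proc_free.append(0)
            best, best_start = len(proc_free) - 1, ready
        placement[t] = (best, best_start)
        proc_free[best] = best_start + cost[t]
    return placement, len(proc_free)

def run_time_merge(placement, deps, cost, n_physical):
    # Phase 2 (run time): fold the virtual processors onto the processors
    # actually available, re-serialising tasks in start order while respecting
    # precedence (a simplified stand-in for the paper's merging step).
    merged, free = {}, [0] * n_physical
    for t, (vp, start) in sorted(placement.items(), key=lambda kv: kv[1][1]):
        p = vp % n_physical
        ready = max((merged[q][1] + cost[q] for q in deps.get(t, [])), default=0)
        merged[t] = (p, max(free[p], ready))
        free[p] = merged[t][1] + cost[t]
    return merged

if __name__ == "__main__":
    tasks = ["a", "b", "c", "d"]
    deps = {"d": ["a", "b", "c"]}            # predecessors of each task
    cost = {"a": 2, "b": 3, "c": 1, "d": 2}
    threshold = {t: 0 for t in tasks}        # threshold 0: maximum parallelism
    placement, n_virtual = compile_time_schedule(tasks, deps, cost, threshold)
    print(n_virtual, placement)              # compile-time schedule on 3 virtual processors
    print(run_time_merge(placement, deps, cost, n_physical=2))  # scaled down to 2

In this toy run, a larger threshold lets more tasks share a virtual processor at compile time (fewer processors, longer schedule), while a threshold of zero forces maximum parallelism; the run-time merge then folds whatever was produced onto the processors actually present, which is the scaling behaviour the abstract refers to.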
CITATION
Santosh Pande, Dharma P. Agrawal, Jon Mauney, "A Scalable Scheduling Scheme for Functional Parallelism on Distributed Memory Multiprocessor Systems", IEEE Transactions on Parallel & Distributed Systems, vol. 6, no. 4, pp. 388-399, April 1995, doi:10.1109/71.372792