<p><b>Abstract</b>—We consider a real-time task model where a task receives a "reward" that depends on the amount of service received prior to its deadline. The reward of the task is assumed to be an increasing function of the amount of service that it receives, i.e., the task has the property that it receives <it>increasing reward with increasing service (IRIS)</it>. We focus on the problem of on-line scheduling of a random arrival sequence of IRIS tasks on a single processor with the goal of maximizing the average reward accrued per task and per unit time. We describe and evaluate several policies for this system through simulation and through a comparison with an unachievable upper bound. We observe that the best performance is exhibited by a two-level policy where the top-level algorithm is responsible for allocating the amount of service to tasks and the bottom-level algorithm, using the earliest deadline first (EDF) rule, is responsible for determining the order in which tasks are executed. Furthermore, the performance of this policy approaches the theoretical upper bound in many cases. We also show that the average number of preemptions of a task under this two-level policy is very small.</p>
Index Terms—Real-time systems, on-line scheduling, deadline-based scheduling, priority scheduling, reward functions for tasks, maximizing reward rates.
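The abstract describes a two-level policy in which a top-level algorithm allocates service amounts and a bottom-level algorithm orders execution by the earliest deadline first (EDF) rule. The sketch below illustrates only the generic EDF ordering step, under assumed task fields (`name`, `deadline`, `service`); it is not the paper's policy, and the top-level service-allocation algorithm is omitted.

```python
import heapq

class Task:
    """Illustrative task record; field names are assumptions, not from the paper."""
    def __init__(self, name, deadline, service):
        self.name = name          # task identifier
        self.deadline = deadline  # absolute deadline
        self.service = service    # service amount, as allocated by a top-level policy

def edf_order(tasks):
    """Return tasks in earliest-deadline-first execution order."""
    # The index i breaks ties between equal deadlines deterministically.
    heap = [(t.deadline, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, t = heapq.heappop(heap)
        order.append(t)
    return order

tasks = [Task("a", 10, 3), Task("b", 4, 2), Task("c", 7, 1)]
print([t.name for t in edf_order(tasks)])  # earliest deadline runs first
```

In a preemptive setting, EDF would be re-evaluated on each arrival; the paper's observation is that, under its two-level policy, such preemptions are rare on average.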

J. K. Dey, J. Kurose, and D. Towsley, "On-Line Scheduling Policies for a Class of IRIS (Increasing Reward with Increasing Service) Real-Time Tasks," IEEE Transactions on Computers, vol. 45, pp. 802-813, 1996.