A Framework for Reinforcement-Based Scheduling in Parallel Processor Systems
March 1998 (vol. 9, no. 3)
pp. 249-260

Abstract—Task scheduling is important for the proper functioning of parallel processor systems. The static scheduling of tasks onto networks of parallel processors is well defined and documented in the literature. However, in many practical situations a priori information about the tasks to be scheduled is not available; tasks usually arrive dynamically, and scheduling must be performed on-line, or "on the fly." In this paper, we present a framework based on stochastic reinforcement learning, a technique commonly used to solve optimization problems in a simple and efficient way. Reinforcement learning reduces the dynamic scheduling problem to that of learning a stochastic approximation of an unknown average error surface. The main advantage of the proposed approach is that no prior information about the parallel processor system under consideration is required. The learning system develops an association between the best action (schedule) and the current state of the environment (parallel system). The performance of reinforcement learning is demonstrated by solving several dynamic scheduling problems, and the conditions under which reinforcement learning can be used to solve the dynamic scheduling problem efficiently are highlighted.
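
The abstract only outlines the approach; the paper itself gives the actual formulation. Purely as an illustrative sketch, and not the authors' algorithm, the Python fragment below shows one way an associative, stochastic reinforcement learner could map the observed state of a parallel system (its processor loads) to a scheduling action, using a linear reward-inaction style probability update in the spirit of the learning automata surveyed in [22] and [23]. The state encoding, reward signal, and parameter values are assumptions made for the sake of the example.

import random

class ReinforcementScheduler:
    # Learns a mapping from a discretized system state to a processor
    # choice using a linear reward-inaction (L_R-I) probability update.
    def __init__(self, num_procs, learning_rate=0.1):
        self.num_procs = num_procs
        self.lr = learning_rate
        self.policy = {}  # one action-probability vector per observed state

    def _state(self, loads):
        # Assumed state encoding: each processor load quantized to three levels.
        return tuple(min(int(load // 10), 2) for load in loads)

    def choose(self, loads):
        # Sample a processor according to the current action probabilities.
        s = self._state(loads)
        probs = self.policy.setdefault(s, [1.0 / self.num_procs] * self.num_procs)
        action = random.choices(range(self.num_procs), weights=probs)[0]
        return action, s

    def reinforce(self, state, action, reward):
        # reward in [0, 1]; success shifts probability toward the chosen action,
        # failure (reward = 0) leaves the probabilities unchanged.
        probs = self.policy[state]
        for a in range(self.num_procs):
            if a == action:
                probs[a] += self.lr * reward * (1.0 - probs[a])
            else:
                probs[a] -= self.lr * reward * probs[a]

# Toy on-line loop: tasks arrive dynamically and the learner is rewarded
# for keeping processor loads balanced (an assumed reward signal).
if __name__ == "__main__":
    sched = ReinforcementScheduler(num_procs=4)
    loads = [0.0] * 4
    for _ in range(1000):
        task_cost = random.uniform(1.0, 5.0)          # dynamically arriving task
        action, state = sched.choose(loads)
        loads[action] += task_cost
        reward = 1.0 if max(loads) - min(loads) < 5.0 else 0.0
        sched.reinforce(state, action, reward)
        loads = [max(0.0, l - 2.0) for l in loads]    # processors drain work between arrivals
    print("final loads:", [round(l, 1) for l in loads])

The point the sketch tries to convey is the one made in the abstract: the learner needs no prior model of the parallel system, only a scalar reinforcement signal tied to the quality of each scheduling decision; the choice of that signal and of the state encoding is what determines how well such a scheduler performs.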

[1] V. Balachandran, J.W. McCredie, and V.I. Mikhail, "Models of the Job Allocation Problem in Computer Networks," Digest of Papers COMPCON Fall 76, pp. 211-214, 1976.
[2] A.G. Barto, R.S. Sutton, and P.S. Brouwer, "Associative Search Network: A Reinforcement Learning Associative Memory," Biological Cybernetics, vol. 40, pp. 201-211, 1981.
[3] A.G. Barto, R.S. Sutton, and C.W. Anderson, "Neuronlike Adaptive Elements That Can Solve Difficult Control Problems," IEEE Trans. Systems, Man, and Cybernetics, vol. 13, no. 5, pp. 834-846, 1983.
[4] A.G. Barto, "Learning by Statistical Cooperation of Self-Interested Neuron-Like Computing Elements," Human Neurobiology, vol. 4, pp. 229-256, 1985.
[5] S. Bataineh and B. Al-Asir, “An Efficient Scheduling Algorithm for Divisible and Indivisible Tasks in Loosely Coupled Multiprocessor Systems,” Software Eng. J., vol. 9, no. 1, pp. 13-18, 1994.
[6] Y. Chow and W.H. Kohler, "Models for Dynamic Load Balancing in a Heterogeneous Multiple Processor System," IEEE Trans. Computers, vol. 28, no. 5, pp. 354-361, May 1979.
[7] W.W. Chu, L.J. Holloway, M.T. Lan, and K. Efe, "Task Allocation in Distributed Data Processing," Computer, vol. 13, no. 11, pp. 57-69, Nov. 1980.
[8] P.R. Cohen and E.A. Feigenbaum, The Handbook of Artificial Intelligence, vol. 3. Los Altos, Calif.: Kaufmann, 1982.
[9] J.E. Dayhoff, Neural Network Architectures. New York: Van Nostrand Reinhold, 1990.
[10] H. El-Rewini, T.G. Lewis, and H.H. Ali, Task Scheduling in Parallel and Distributed Systems. Prentice Hall, 1994.
[11] D.J. Evans and H.Y.Y. Sanossian, "Parallel Simulation of Artificial Neural Networks," Parallel Computing: Paradigms and Applications, A.Y. Zomaya, ed., pp. 578-611. London: Int'l Thomson Computer Press, 1996.
[12] S. Haykin, Neural Networks: A Comprehensive Foundation. New York: Macmillan College Press, 1994.
[13] B.J. Hellstrom and L.N. Kanal, "Asymmetric Mean-Field Neural Networks for Multiprocessor Scheduling," Neural Networks, vol. 5, pp. 671-686, 1992.
[14] J.J. Hopfield and D.W. Tank, "Neural Computation of Decisions in Optimization Problems," Biological Cybernetics, vol. 52, pp. 141-152, 1985.
[15] E.S. Hou, N. Ansari, and H. Ren, "A Genetic Algorithm for Multiprocessor Scheduling," IEEE Trans. Parallel and Distributed Systems, vol. 5, no. 2, pp. 113-120, 1994.
[16] H. Kasahara and S. Narita, "Practical Multiprocessing Scheduling Algorithms for Efficient Parallel Processing," IEEE Trans. Computers, vol. 33, no. 11, pp. 1023-1029, Nov. 1984.
[17] L. Ljung, "Analysis of Recursive Stochastic Algorithms," IEEE Trans. Automatic Control, vol. 22, no. 4, pp. 551-575, 1977.
[18] B.S. Macey and A.Y. Zomaya, "A Comparison of List Scheduling Heuristics for Communication Intensive Task Graphs," J. Cybernetics and Systems, vol. 28, pp. 535-546, 1997.
[19] P.M. Mills, A.Y. Zomaya, and M.O. Tadé, Neuro-Adaptive Process Control: A Practical Approach. New York: Wiley, 1996.
[20] A.K. Mok and M.L. Dertouzos, "Multiprocessor Scheduling in a Hard Real-Time Environment," Proc. Seventh Texas Conf. Computer Systems, Nov. 1978.
[21] T.M. Nabhan and A.Y. Zomaya, "A Parallel Simulated Annealing Algorithm with Low Communication Overhead," IEEE Trans. Parallel and Distributed Systems, vol. 6, no. 12, pp. 1226-1233, Dec. 1995.
[22] K.S. Narendra and M.A.L. Thathachar, "Learning Automata—A Survey," IEEE Trans. Systems, Man, and Cybernetics, vol. 4, pp. 323-334, 1974.
[23] K.S. Narendra and S. Lakshmivarahan, "Learning Automata—A Critique," J. Cybernetics and Information Science, vol. 1, pp. 53-65, 1977.
[24] D.B. Parker, "Learning Logic," Invention Report S81-64, File 1, Office of Technology Licensing, Stanford Univ., 1982.
[25] D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning Internal Representations by Error Propagation," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations, D.E. Rumelhart and J.L. McClelland et al., eds., chapter 8, pp. 318-362. Cambridge, Mass.: MIT Press, 1986.
[26] S. Salleh and A.Y. Zomaya, "Using Fuzzy Logic for Task Scheduling in Multiprocessor Systems," Proc. Eighth ISCA Conf. Parallel and Distributed Computing Systems (PDCS-95), Orlando, Fla., pp. 45-51, Sept. 21-23, 1995.
[27] P.J. Werbos, "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences," PhD thesis, Harvard Univ., 1974.
[28] B. Widrow, N.K. Gupta, and S. Maitra, "Punish/Reward Learning with a Critic in Adaptive Threshold Systems," IEEE Trans. Systems, Man, and Cybernetics, vol. 3, pp. 455-465, 1973.
[29] T. Yang and A. Gerasoulis, “DSC: Scheduling Parallel Tasks on an Unbounded Number of Processors,” IEEE Trans. Parallel and Distributed Systems, vol. 5, pp. 951-967, 1994.
[30] A.Y. Zomaya, "Reinforcement Learning for the Adaptive Control of Non-Linear Systems," IEEE Trans. Systems, Man, and Cybernetics, vol. 24, no. 2, pp. 357-363, 1994.
[31] A.Y. Zomaya, "Parallel Processing For Real-Time Simulation: A Case Study," IEEE Parallel and Distributed Technology, pp. 49-56, June 1996.
[32] Parallel and Distributed Computing Handbook, A.Y. Zomaya, ed., New York: McGraw-Hill, 1996.
[33] A.Y. Zomaya, M. Clements, and S. Olariu, "Reinforcement-Learning Techniques for Scheduling in Parallel Computing Environments," Technical Report 96-PCRL-01, Parallel Computing Research Laboratory, Dept. of Electrical and Electronic Eng., Univ. of Western Australia, 1996.

Index Terms:
Neural networks, parallel processing, randomization, reinforcement learning, scheduling, task allocation.
Citation:
Albert Y. Zomaya, Matthew Clements, Stephan Olariu, "A Framework for Reinforcement-Based Scheduling in Parallel Processor Systems," IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 3, pp. 249-260, March 1998, doi:10.1109/71.674317