Issue No. 07 - July 2008 (vol. 57), pp. 865-875
The success of different computing models, performance analysis techniques, and load balancing algorithms depends on processor availability information, because there is a strong relationship between a process's response time and the processor time available for its execution. Predicting the processor availability for a new process or task in a computer system is therefore a basic problem that arises in many important contexts. Unfortunately, making such predictions is difficult because of the dynamic nature of current computer systems and their workloads, which can vary drastically over a short interval of time. This paper presents two new availability prediction models. The first, the SPAP (Static Process Assignment Prediction) model, predicts the CPU availability for a new task on a computer system given information about the tasks in its run queue. The second, the DYPAP (DYnamic Process Assignment Prediction) model, improves on SPAP by making these predictions from real-time measurements provided by a monitoring tool, without any information about the tasks in the run queue. Furthermore, an implementation of this monitoring tool for Linux workstations is presented.
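As a rough illustration of the kind of measurement such a monitoring tool relies on (this is not the authors' DYPAP model, only a hedged sketch): on Linux, CPU availability over an interval can be approximated from two samples of the aggregate `cpu` line in `/proc/stat` by taking the fraction of elapsed jiffies spent idle. All function names below are illustrative.

```python
# Minimal sketch: estimating CPU availability on a Linux workstation
# from two snapshots of the aggregate "cpu" line in /proc/stat.
# This illustrates the measurement idea only; the paper's DYPAP model
# builds its predictions on top of such real-time monitor readings.

def cpu_times(stat_line):
    """Parse the 'cpu' line of /proc/stat into (idle, total) jiffies.
    Field 4 is idle time; field 5 (iowait), when present, is also
    counted as idle here, since the CPU is not doing useful work."""
    fields = [int(v) for v in stat_line.split()[1:]]
    idle = fields[3] + (fields[4] if len(fields) > 4 else 0)
    return idle, sum(fields)

def availability(sample_a, sample_b):
    """Fraction of CPU time left idle between two samples: a rough
    stand-in for the CPU availability a new task could claim."""
    idle_a, total_a = cpu_times(sample_a)
    idle_b, total_b = cpu_times(sample_b)
    return (idle_b - idle_a) / (total_b - total_a)

if __name__ == "__main__":
    # Two hypothetical snapshots of the "cpu" line, taken some interval apart
    a = "cpu 1000 0 500 8000 100 0 0 0"
    b = "cpu 1200 0 600 8600 100 0 0 0"
    print(round(availability(a, b), 2))  # 600 idle jiffies of 900 elapsed
```

In a live monitor the two samples would come from reading `/proc/stat` (or a wrapper such as the LibGTop library cited by the paper) at fixed intervals; the resulting availability series is the raw input a predictor can then smooth or extrapolate.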
Modeling and prediction, Monitors, Performance measures
Marta Beltrán, Antonio Guzmán, Jose L. Bosque, "A New CPU Availability Prediction Model for Time-Shared Systems", IEEE Transactions on Computers, vol.57, no. 7, pp. 865-875, July 2008, doi:10.1109/TC.2008.24
