Clock Trees: Logical Clocks for Programs with Nested Parallelism
October 1997 (vol. 23 no. 10)
pp. 646-658

Abstract—A vector clock is a valuable tool for maintaining run-time concurrency information in parallel programs. This paper presents a new method for modifying vector clocks to make them suitable for programs with nested fork-join parallelism, in which the number of tasks varies at run time. The resulting kind of clock is called a clock tree, after its tree structure. The clock tree method compares favorably with two other timestamping methods for variable parallelism: task identifier reuse and task recycling. The worst-case space requirement of clock trees equals the best case of the latter two methods, and the average size of a clock tree is much smaller than the size of a vector under task recycling. Furthermore, the algorithm for maintaining clock trees does not require a shared data structure and thus avoids the serialization bottleneck from which task recycling suffers.
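For readers unfamiliar with the baseline the abstract builds on, the following is a minimal sketch of ordinary fixed-size vector clocks with fork-join semantics, not the paper's clock-tree algorithm. All names (`fork`, `join`, `tick`, `happened_before`) are illustrative; the fixed one-slot-per-task vector is exactly the structure that clock trees generalize for programs whose task count varies at run time.

```python
# Minimal vector-clock sketch (illustrative; not the clock-tree method).
# Each task carries a vector of logical times, one component per task.

def fork(parent):
    """A forked child starts with a copy of the parent's clock."""
    return list(parent)

def join(a, b):
    """A join point merges two clocks by component-wise maximum."""
    return [max(x, y) for x, y in zip(a, b)]

def tick(clock, tid):
    """Advance the local component after an event of task tid."""
    c = list(clock)
    c[tid] += 1
    return c

def happened_before(a, b):
    """a -> b iff a <= b component-wise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Two tasks forked from an initial clock [0, 0]:
t0 = tick(fork([0, 0]), 0)   # task 0's event: [1, 0]
t1 = tick(fork([0, 0]), 1)   # task 1's event: [0, 1]
merged = join(t0, t1)        # clock at the join: [1, 1]

assert happened_before(t0, merged)
assert not happened_before(t0, t1)   # the two events are concurrent
```

The limitation this sketch exposes is that the vector's length must equal the number of tasks, which is awkward when nesting creates and destroys tasks dynamically; the paper's clock trees replace the flat vector with a tree that mirrors the fork-join nesting.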

References:
[1] M. Ahamad et al., "Causal Memory: Definitions, Implementation and Programming," Distributed Computing, vol. 9, no. 1, pp. 37-49, 1995.
[2] M. Ahuja, T. Carlson, and A. Gahlot, "Passive-Space and Time View: Vector Clocks for Achieving Higher Performance, Program Correction and Distributed Computing," IEEE Trans. Software Eng., vol. 19, no. 9, 1993.
[3] C. Amza, A.L. Cox, S. Dwarkadas, P. Keleher, H. Lu, R. Rajamony, W. Yu, and W. Zwaenepoel, "TreadMarks: Shared Memory Computing on Networks of Workstations," Computer, vol. 29, no. 2, Feb. 1996.
[4] K. Audenaert and L. Levrouw, "Space Efficient Data Race Detection for Parallel Programs with Series-Parallel Task Graphs," Proc. 3rd Euromicro Workshop Parallel and Distributed Processing, pp. 508-515, 1995.
[5] K. Birman, "The Process Group Approach to Reliable Distributed Computing," Comm. ACM, vol. 36, no. 12, pp. 37-53, 1993.
[6] B. Charron-Bost, "Concerning the Size of Logical Clocks in Distributed Systems," Information Processing Letters, vol. 39, pp. 11-16, 1991.
[7] R. Cooper and K. Marzullo, "Consistent Detection of Global Predicates," Proc. ACM/ONR Workshop Parallel and Distributed Debugging, pp. 163-173, ACM Press, New York, 1991.
[8] A. Dinning and E. Schonberg, "An Empirical Comparison of Monitoring Algorithms for Access Anomaly Detection," Proc. Second ACM SIGPLAN Symp. Principles and Practice of Parallel Programming, 1990.
[9] A. Dinning and E. Schonberg, "An Evaluation of Monitoring Algorithms for Access Anomaly Detection," Technical Report Ultracomputer Note #163, New York Univ., 1989.
[10] C.J. Fidge, "Partial Orders for Parallel Debugging," ACM SIGPLAN Notices, pp. 183-194, Jan. 1989.
[11] C.J. Fidge, "Logical Time in Distributed Computing Systems," Computer, pp. 28-33, Aug. 1991.
[12] J. Fowler and W. Zwaenepoel, "Causal Distributed Breakpoints," Proc. 10th Int'l Conf. Distributed Computing Systems, pp. 134-141, 1990.
[13] D. Haban and W. Weigel, "Global Events and Global Breakpoints in Distributed Systems," Proc. 21st Hawaii Int'l Conf. Systems Sciences, pp. 166-175, 1989.
[14] L. Lamport, "Time, Clocks, and the Ordering of Events in a Distributed System," Comm. ACM, vol. 21, no. 7, pp. 558-565, July 1978.
[15] J.M. Mellor-Crummey, "On-the-Fly Detection of Data Races for Programs with Nested Fork-Join Parallelism," Proc. Supercomputing Debugging Workshop '91, pp. 24-33, 1991.
[16] Y-K. Jun and K. Koh, "On-the-Fly Detection of Access Anomalies in Nested Parallel Loops," Proc. ACM/ONR Workshop Parallel and Distributed Debugging, pp. 107-117, May 1993.
[17] L. Levrouw, K. Audenaert, and J. Van Campenhout, "Execution Replay with Compact Logs for Shared-Memory Programs," Applications in Parallel and Distributed Computing, C. Girault, ed., pp. 125-134, IFIP, 1994.
[18] L. Levrouw and K. Audenaert, "Minimizing the Log Size for Execution Replay of Shared-Memory Programs," Parallel Processing: CONPAR 94-VAPP VI, Lecture Notes in Computer Science 854, B. Buchberger and J. Volkert, eds., pp. 76-87, 1994.
[19] F. Mattern, "Virtual Time and Global States of Distributed Systems," Parallel and Distributed Algorithms, M. Cosnard et al. eds., pp. 215-226, North-Holland, 1989.
[20] F. Mattern, "Algorithms for Distributed Termination Detection," Distributed Computing, vol. 2, pp. 161-175, 1987.
[21] R.H. Möhring, "Algorithmic Aspects of Comparability Graphs and Interval Graphs," Graphs and Order (NATO ASI C147), I. Rival, ed., pp. 41-101, D. Reidel, 1985.
[22] R. Netzer and B. Miller, "Detecting Data Races in Parallel Program Executions," Advances in Languages and Compilers for Parallel Processing, Nicolau, Gelernter, Gross, and Padua, eds., pp. 109-130, MIT Press, 1990.
[23] R. Netzer and B. Miller, "Optimal Tracing and Replay for Debugging Message-Passing Parallel Programs," Proc. Supercomputing '92, pp. 502-511, 1992.
[24] O. Ore, Theory of Graphs, AMS Colloquium Publications, vol. 38, Am. Math. Soc., Providence, 1962.
[25] E. Schonberg, "On-the-Fly Detection of Access Anomalies," Proc. ACM/SIGPLAN '89 Conf. Programming Language Design and Implementation, pp. 285-297, 1989.
[26] R. Schwarz and F. Mattern, "Detecting Causal Relationships in Distributed Computations: In Search of the Holy Grail," Distributed Computing, vol. 7, pp. 149-174, 1994.
[27] M. Singhal and A. Kshemkalyani, "An Efficient Implementation of Vector Clocks," Information Processing Letters, vol. 43, pp. 47-52, 1992.
[28] J. Valdes, R. Tarjan, and E. Lawler, "The Recognition of Series Parallel Digraphs," SIAM J. Computing, vol. 11, no. 2, pp. 298-313, 1982.
[29] R. van Renesse, "Causal Controversy at Le Mont St.-Michel," ACM Operating Systems Review, vol. 27, no. 2, pp. 44-53, 1993.

Index Terms:
Logical time, vector clocks, Lamport clocks, nested fork-join parallelism, event labeling.
Citation:
Koenraad Audenaert, "Clock Trees: Logical Clocks for Programs with Nested Parallelism," IEEE Transactions on Software Engineering, vol. 23, no. 10, pp. 646-658, Oct. 1997, doi:10.1109/32.637147