S. Ha and E. A. Lee, "Compile-Time Scheduling and Assignment of Data-Flow Program Graphs with Data-Dependent Iteration," IEEE Transactions on Computers, vol. 40, no. 11, pp. 1225-1238, November 1991.
Four strategies for scheduling data-flow graphs onto parallel processors are classified: (1) fully dynamic, (2) static-assignment, (3) self-timed, and (4) fully static. Scheduling techniques valid for strategies (2), (3), and (4) are proposed, with a focus on data-flow graphs representing data-dependent iteration. Assuming a known probability mass function for the number of cycles in the data-dependent iteration, it is shown how a compile-time decision about assignment and/or ordering, as well as timing, can be made. The criterion is to minimize the expected total idle time caused by the iteration; in certain cases, this also minimizes the expected makespan of the schedule. It is further shown how to determine the number of processors that should be assigned to the data-dependent iteration. The method is illustrated with a practical programming example.
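The core idea of the compile-time decision can be sketched as a small optimization: given a probability mass function over the iteration count, choose the number of iterations to provision statically so that the expected idle time is minimized. The cost model below (a pad cost when the loop finishes early, a larger overrun cost when it runs long) and the function names are illustrative assumptions, a simplified stand-in for the paper's idle-time analysis rather than its exact formulation.

```python
def expected_idle(x, pmf, pad_cost=1.0, overrun_cost=2.0):
    """Expected idle time if x iterations are provisioned at compile time.

    pmf: dict mapping iteration count k -> probability p(k).
    If the actual count k <= x, the unused slots idle (pad cost per slot);
    if k > x, the extra iterations stall other processors (overrun cost).
    This asymmetric linear model is an assumption for illustration.
    """
    return sum(p * (pad_cost * (x - k) if k <= x else overrun_cost * (k - x))
               for k, p in pmf.items())

def best_static_count(pmf, **costs):
    """Pick the provisioned count x that minimizes expected idle time."""
    candidates = range(min(pmf), max(pmf) + 1)
    return min(candidates, key=lambda x: expected_idle(x, pmf, **costs))

# Example: a skewed pmf over 1..5 iterations of the data-dependent loop.
pmf = {1: 0.4, 2: 0.3, 3: 0.15, 4: 0.1, 5: 0.05}
x = best_static_count(pmf)  # -> 2 under the default costs
```

Note that the minimizer depends on the ratio of the two costs, which mirrors the paper's point that the best compile-time choice is a property of the iteration-count distribution, not of any single run.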