Randomized Algorithms: A System-Level, Poly-Time Analysis of Robust Computation
July 2002 (vol. 51 no. 7)
pp. 740-749

This paper provides a methodology for analyzing the performance degradation of a computation affected by perturbations. By relaxing the assumptions made in the related literature, the suggested methodology provides design guidelines for the subsequent implementation of complex computations in physical devices. Implementation issues, such as finite-precision representation, fluctuations of the production parameters, and aging effects, can be studied directly at system level, independently of any technological aspect and quantization technique. Only the behavioral description of the computational flow, which is assumed to be Lebesgue measurable, and the architecture to be investigated are needed. The suggested analysis is based on the recent theory of Randomized Algorithms, which transforms the computationally intractable problem of robustness investigation into a poly-time algorithm by resorting to probability.
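The sketch below illustrates, under stated assumptions, the kind of randomized analysis the abstract describes: draw i.i.d. samples of the perturbation, check whether each perturbed computation exceeds a tolerated loss, and bound the number of samples with the Chernoff bound so that the estimated degradation probability is accurate to within epsilon with confidence at least 1 - delta. This is a minimal illustration, not the paper's code; the names (loss, sample_perturbation, epsilon, delta) and the example linear computation are assumptions introduced here.

import math
import random

def chernoff_samples(epsilon: float, delta: float) -> int:
    """Chernoff bound: N >= ln(2/delta) / (2*epsilon^2) samples suffice to
    estimate a probability within +/- epsilon with confidence >= 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def estimate_degradation_probability(loss, sample_perturbation,
                                     tolerated_loss, epsilon=0.02, delta=0.01):
    """Randomized (poly-time) estimate of Pr{ loss(perturbation) > tolerated_loss }."""
    n = chernoff_samples(epsilon, delta)
    hits = sum(loss(sample_perturbation()) > tolerated_loss for _ in range(n))
    return hits / n

# Illustrative use: a small linear computation y = sum(w_i * x_i) whose
# coefficients are perturbed (e.g., by finite-precision effects or parameter drift).
if __name__ == "__main__":
    weights = [0.8, -0.3, 0.5]
    x = [1.0, 2.0, -1.0]
    nominal = sum(w * xi for w, xi in zip(weights, x))

    def sample_perturbation(scale=0.05):
        # One possible perturbation model: uniform perturbation of each coefficient.
        return [random.uniform(-scale, scale) for _ in weights]

    def loss(dw):
        perturbed = sum((w + d) * xi for w, d, xi in zip(weights, dw, x))
        return abs(perturbed - nominal)

    p_hat = estimate_degradation_probability(loss, sample_perturbation,
                                             tolerated_loss=0.1)
    print(f"Estimated probability of exceeding the tolerated loss: {p_hat:.3f}")

Note that the required sample size depends only on epsilon and delta, not on the number of perturbed parameters, which is what keeps this style of robustness analysis polynomial-time at system level.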

Index Terms:
Embedded system design, finite precision error analysis, randomized algorithms, sensitivity analysis, system level design.
Citation:
Cesare Alippi, "Randomized Algorithms: A System-Level, Poly-Time Analysis of Robust Computation," IEEE Transactions on Computers, vol. 51, no. 7, pp. 740-749, July 2002, doi:10.1109/TC.2002.1017694