Anomaly Detection in Embedded Systems
February 2002 (vol. 51, no. 2)
pp. 108-120

By employing fault tolerance, embedded systems can withstand both intentional and unintentional faults. Many fault-tolerance mechanisms are invoked only after a fault has been detected by whatever fault-detection mechanism is in place; hence, the process of fault detection must itself be dependable if the system is to be fault tolerant. Many faults are detectable only indirectly, through performance disorders that manifest as anomalies in monitored system or sensor data. Anomaly detection, therefore, is often the primary means of providing early indications of faults. As with any other kind of detector, one seeks full coverage of the detection space with the anomaly detector being used. Even if the coverage of a particular anomaly detector falls short of 100 percent, detectors can be composed to effect broader coverage once their respective sweet spots and blind regions are known. This paper provides a framework and a fault-injection methodology for mapping an anomaly detector's effective operating space, and shows that two detectors, each designed to detect the same phenomenon, may not perform similarly, even when the event to be detected is unequivocally anomalous and should be detected by either detector. Both synthetic and real-world data are used.
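The idea of composing detectors with complementary sweet spots can be illustrated with a minimal sketch. The code below is purely an assumption-laden toy, not the paper's actual methodology: it pairs a z-score detector (which catches amplitude anomalies) with an unseen-bigram detector (which catches sequential anomalies, in the spirit of the sequence-based detectors of references [19] and [20]), and OR-composes them so that an event is flagged if either detector fires. All function names and thresholds are illustrative.

```python
# Illustrative sketch (not from the paper): composing two anomaly
# detectors so their combined coverage is the union of their
# individual sweet spots.

def zscore_detector(train, threshold=3.0):
    """Flags values far from the training mean: amplitude anomalies."""
    n = len(train)
    mean = sum(train) / n
    std = (sum((x - mean) ** 2 for x in train) / n) ** 0.5 or 1.0
    return lambda x: abs(x - mean) / std > threshold

def bigram_detector(train):
    """Flags transitions never seen in training: sequential anomalies."""
    seen = set(zip(train, train[1:]))
    return lambda prev, x: (prev, x) not in seen

def composed(train, stream):
    """OR-composition: an event is anomalous if either detector fires."""
    d1 = zscore_detector(train)
    d2 = bigram_detector(train)
    flags, prev = [], train[-1]
    for x in stream:
        flags.append(d1(x) or d2(prev, x))
        prev = x
    return flags

train = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]
# 50 is an amplitude anomaly (z-score fires); the (50, 2) and (2, 2)
# transitions are sequential anomalies (bigram fires); (2, 1) is normal.
print(composed(train, [1, 50, 2, 2, 1]))
# → [False, True, True, True, False]
```

Note that neither detector alone flags all three anomalous events here: the z-score detector is blind to the repeated `(2, 2)` transition, and the bigram detector would miss an out-of-range value arriving via a previously seen transition. This is the sense in which composition broadens coverage once each detector's blind regions are known.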

[1] S. Cass, “Little Linuxes,” IEEE Spectrum, vol. 38, no. 3, pp. 23-25, Mar. 2001.
[2] P.A. Lee and T. Anderson, Fault Tolerance: Principles and Practice, second ed. Vienna, Austria: Springer-Verlag, 1990.
[3] B. Randell, “System Structure for Software Fault Tolerance,” IEEE Trans. Software Eng., vol. 1, no. 2, pp. 220-232, June 1975.
[4] R.A. Maxion and F.E. Feather, “A Case Study of Ethernet Anomalies in a Distributed Computing Environment,” IEEE Trans. Reliability, vol. 39, no. 4, pp. 433-443, Oct. 1990.
[5] M.M. Tsao, “Trend Analysis and Fault Prediction,” PhD thesis, Computer Science Dept., Carnegie Mellon Univ., Pittsburgh, PA, May 1983.
[6] R.A. Maxion and D.P. Siewiorek, “Symptom Based Diagnosis,” Proc. IEEE Int'l Conf. Computer Design (ICCD-85), pp. 294-297, Oct. 1985.
[7] M.F. Buckley, “Computer Event Monitoring and Analysis,” PhD thesis, Dept. of Electrical and Computer Eng., Carnegie Mellon Univ., Pittsburgh, PA, May 1992.
[8] T.F. Lunt, “A Survey of Intrusion Detection Techniques,” Computers & Security, vol. 12, no. 4, pp. 405-418, June 1993.
[9] S.S. Stevens, “On the Theory of Scales of Measurement,” Science, vol. 103, no. 2684, pp. 677-680, June 1946.
[10] B.A. Schroeder, “On-Line Monitoring: A Tutorial,” Computer, vol. 28, no. 6, pp. 72-78, June 1995.
[11] T.F. Arnold, “The Concept of Coverage and Its Effect on the Reliability Model of a Repairable System,” IEEE Trans. Computers, vol. 22, no. 3, pp. 251-254, Mar. 1973.
[12] D. Siewiorek and R. Swarz, Reliable Computer Systems: Design and Evaluation. Digital Press, 1992.
[13] M. Hsueh, T. Tsai, and R. Iyer, “Fault Injection Techniques and Tools,” Computer, pp. 75-82, Apr. 1997.
[14] J.A. Clark and D.K. Pradhan, “Fault Injection: A Method for Validating Computer-System Dependability,” Computer, pp. 47-56, June 1995.
[15] D.R. Cox and H.D. Miller, The Theory of Stochastic Processes. New York: Wiley, 1965.
[16] J.D. Hamilton, Time Series Analysis. Princeton, N.J.: Princeton Univ. Press, 1994.
[17] S. Jha, K.M.C. Tan, and R.A. Maxion, “Markov Chains, Classifiers, and Intrusion Detection,” Proc. 14th IEEE Computer Security Foundations Workshop, pp. 206-219, June 2001.
[18] S. Forrest, S.A. Hofmeyr, and A. Somayaji, “Computer Immunology,” Comm. ACM, vol. 40, no. 10, pp. 88-96, Oct. 1997.
[19] S. Forrest, S.A. Hofmeyr, A. Somayaji, and T.A. Longstaff, “A Sense of Self for Unix Processes,” Proc. 1996 IEEE Symp. Security and Privacy, pp. 120-128, May 1996.
[20] S. Hofmeyr, S. Forrest, and A. Somayaji, “Intrusion Detection Using Sequences of System Calls,” J. Computer Security, vol. 6, no. 3, pp. 151-180, 1998.
[21] C. Warrender, S. Forrest, and B. Pearlmutter, “Detecting Intrusions Using System Calls: Alternative Data Models,” Proc. 1999 IEEE Symp. Security and Privacy, pp. 133-145, May 1999.
[22] S. Forrest, A.S. Perelson, L. Allen, and R. Cherukuri, “Self-Nonself Discrimination in a Computer,” Proc. IEEE Symp. Research in Security and Privacy, pp. 202-212, May 1994.
[23] Sun Microsystems, “SunSHIELD Basic Security Module Guide,” Technical Report 805-2635-10, Sun Microsystems, Inc., Palo Alto, CA, Oct. 1998.
[24] R.A. Maxion and K.M.C. Tan, “Benchmarking Anomaly-Based Detection Systems,” Proc. Int'l Conf. Dependable Systems and Networks, pp. 623-630, June 2000.
[25] K.M.C. Tan, “Defining the Operational Limits of Anomaly Detection (working title),” PhD thesis, Dept. Computer Science, Melbourne Univ., Melbourne, Victoria, Australia, forthcoming.

Index Terms:
Anomaly, anomaly detection, coverage, dependability.
R.A. Maxion, K.M.C. Tan, "Anomaly Detection in Embedded Systems," IEEE Transactions on Computers, vol. 51, no. 2, pp. 108-120, Feb. 2002, doi:10.1109/12.980003