A Method of Learning Implication Networks from Empirical Data: Algorithm and Monte-Carlo Simulation-Based Validation
November-December 1997 (vol. 9, no. 6)
pp. 990-1004

Abstract—This paper describes an algorithmic means for inducing implication networks from empirical data samples. The induced network enables efficient inferences about the values of network nodes when certain observations are made. The induction method is approximate in nature, as probabilistic network requirements are relaxed in the construction of dependence relationships based on statistical testing. To examine the effectiveness and validity of the induction method, several Monte-Carlo simulations were conducted in which theoretical Bayesian networks were used to generate empirical data samples; some of these samples were used to induce implication relations, whereas others were used to verify the results of evidential reasoning with the induced networks. The values of nodes in the implication networks were predicted by applying a modified version of the Dempster-Shafer belief-updating scheme. The predictions were then compared with those generated by Pearl's stochastic simulation method [21], a probabilistic reasoning method that operates directly on the theoretical Bayesian networks. The comparisons consistently show that predictions based on the induced networks are comparable to those generated by Pearl's method when reasoning in a variety of uncertain knowledge domains, namely those simulated with the presumed theoretical probabilistic networks of different topologies. Moreover, the validation experiments reveal that this comparable performance of the implication-network-based reasoning method is achieved at a much lower computational cost than Pearl's stochastic simulation method; specifically, in all our experiments, the ratio between the actual CPU time required by our method and that required by Pearl's is approximately 1:100.
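To make the induction step concrete, the following is a minimal Python sketch of how pairwise implication relations might be screened from binary data samples using a one-sided proportion test. It illustrates only the general idea of relaxed, statistically tested dependence relationships and is not the paper's exact algorithm: the function name induce_implications, the threshold p_min, and the fixed 0.05 significance level are assumptions introduced for this example, and the weighting and Dempster-Shafer belief-updating steps described in the abstract are omitted.

import math
from itertools import permutations

def induce_implications(samples, p_min=0.9):
    """Screen pairwise implication relations A => B from binary samples.

    samples : list of dicts mapping node name -> 0/1 observation
    p_min   : minimum conditional probability P(B=1 | A=1) to accept A => B

    Returns a list of (A, B, observed P(B=1 | A=1)) tuples whose observed
    proportion significantly exceeds p_min (one-sided test at the 0.05 level).
    """
    nodes = sorted(samples[0].keys())
    z_crit = 1.645  # one-sided critical value at the 0.05 level
    implications = []
    for a, b in permutations(nodes, 2):
        n_a = sum(1 for s in samples if s[a] == 1)                 # count of A = 1
        n_ab = sum(1 for s in samples if s[a] == 1 and s[b] == 1)  # count of A = 1 and B = 1
        if n_a == 0:
            continue
        p_hat = n_ab / n_a
        # Normal-approximation test of H0: P(B=1 | A=1) <= p_min.
        se = math.sqrt(p_min * (1.0 - p_min) / n_a)
        if se > 0 and (p_hat - p_min) / se > z_crit:
            implications.append((a, b, p_hat))
    return implications

With data expressed as a list of dictionaries of 0/1 node values, a call such as induce_implications(samples, p_min=0.8) returns the ordered pairs (A, B) whose observed conditional proportion significantly exceeds the chosen threshold; such pairs would then be candidates for edges in an induced implication network.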

[1] B.G. Buchanan and E.H. Shortliffe, Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, Mass.: Addison-Wesley, 1984.
[2] E. Charniak, “Bayesian Networks without Tears,” AI Magazine, pp. 50-63, 1991.
[3] H.L. Chin and G.F. Cooper, "Bayesian Belief Network Inference Using Simulation," L.N. Kanal, T.S. Levitt, and J.F. Lemmer, eds., Uncertainty in Artificial Intelligence, pp. 129-147. North-Holland: Elsevier Science, 1989.
[4] G.F. Cooper, "NESTOR: A Computer-Based Medical Diagnostic Aid that Integrates Causal and Probabilistic Knowledge," PhD thesis, Computer Science Dept., Stanford Univ., 1984.
[5] G.F. Cooper and E. Herskovits, “A Bayesian Method for the Induction of Probabilistic Networks from Data,” Machine Learning, vol. 9, pp. 309–347, 1992.
[6] A.P. Dempster, "A Generalization of Bayesian Inference," J. Royal Statistical Soc., vol. 30, pp. 205-247, 1968.
[7] M.C. Desmarais, L. Giroux, and S. Larochelle, "Fondements Methodologiques et Empiriques d'un Systeme Consultant Actif pour l'Edition de Texte: Le Projet Edcoach," Technologies de l'information et Societe, vol. 4, no. 1, pp. 61-74, 1992.
[8] M.C. Desmarais, L. Giroux, S. Larochelle, and S. Leclerc, "Assessing the Structure of Knowledge in a Procedural Domain," Proc. Cognitive Science Soc., pp. 475-481, Montreal, Aug. 14-17, 1988.
[9] M.C. Desmarais and J. Liu, "Knowledge Assessment Based on the Dempster-Shafer Belief Propagation Theory," Technical Report CRIM-92/09-06, Centre de Recherche Informatique de Montreal, 1992.
[10] M.C. Desmarais and J. Liu, "Exploring the Applications of User-Expertise Assessment for Intelligent Interfaces," Proc. Int'l CHI '93: Bridges Between Worlds, Amsterdam, 1993.
[11] M.C. Desmarais, A. Maluf, and J. Liu, "User-Expertise Modeling with Empirically Derived Probabilistic Implication Networks," Int'l J. User Modeling and User-Adapted Interaction, vol. 5, nos. 3-4, pp. 283-315, 1996.
[12] J. Gebhardt and R. Kruse, "Reasoning and Learning in Probabilistic and Possibilistic Networks: An Overview," N. Lavrac and S. Wrobel, eds., Machine Learning: ECML-95, Proc. Eighth European Conf. Machine Learning, pp. 3-16. Springer-Verlag, 1995.
[13] D. Geiger, "An Entropy-Based Learning Algorithm of Bayesian Conditional Trees," D. Dubois, M.P. Wellman, B. D'Ambrosio, and P. Smets, eds., Uncertainty in Artificial Intelligence, pp. 92-97. San Mateo, Calif.: Morgan Kaufmann, 1992.
[14] D. Geiger, A. Paz, and J. Pearl, "Learning Simple Causal Structures," Int'l J. Intelligent Systems, vol. 8, pp. 231-247, 1993.
[15] D. Heckerman, Probabilistic Similarity Networks, ACM Doctoral Dissertation Award Series. Cambridge, Mass.: MIT Press, 1991.
[16] M. Henrion, "Propagating Uncertainty in Bayesian Networks by Probabilistic Logic Sampling," J.F. Lemmer and L.N. Kanal, eds., Uncertainty in Artificial Intelligence, pp. 149-163. North-Holland: Elsevier Science, 1988.
[17] D.K. Hildebrand, J.D. Laing, and H. Rosenthal, Prediction Analysis of Cross Classifications. New York: John Wiley & Sons, 1977.
[18] H.E. Kyburg, "Bayesian and Non-Bayesian Evidential Updating," Artificial Intelligence, vol. 31, pp. 271-293, 1987.
[19] K.G. Olesen, S.L. Lauritzen, and F.V. Jensen, "aHUGiN: A System Creating Adaptive Causal Probabilistic Networks," D. Dubois, M.P. Wellman, B. D'Ambrosio, and P. Smets, eds., Uncertainty in Artificial Intelligence, pp. 223-229. San Mateo, Calif.: Morgan Kaufmann, 1992.
[20] J. Pearl, "A Constraint-Propagation Approach to Probabilistic Reasoning," L.N. Kanal and J.F. Lemmer, eds., Uncertainty in Artificial Intelligence, pp. 357-369. North-Holland: Elsevier Science, 1986.
[21] J. Pearl, Probabilistic Reasoning in Intelligent Systems. San Mateo, Calif.: Morgan Kaufmann, 1988.
[22] I. Pitas, E. Milios, and A.N. Venetsanopoulos, "A Minimum Entropy Approach to Rule Learning from Examples," IEEE Trans. Systems, Man, and Cybernetics, vol. 22, no. 4, pp. 621-635, 1992.
[23] R.D. Shachter and M. Peot, "Simulation Approaches to General Probabilistic Inference on Belief Networks," M. Henrion, R.D. Shachter, L. Kanal, and J.F. Lemmer, eds., Uncertainty in Artificial Intelligence 5, pp. 221-231. North-Holland, Amsterdam, 1990.
[24] G. Shafer, A Mathematical Theory of Evidence. Princeton, N.J.: Princeton Univ. Press, 1976.
[25] B.P. Wise and M. Henrion, "A Framework for Comparing Uncertain Inference Systems to Probability," L.N. Kanal and J.F. Lemmer, eds., Uncertainty in Artificial Intelligence, pp. 69-83. North-Holland: Elsevier Science, 1986.

Index Terms:
Belief-network induction, probabilistic reasoning, learning algorithms, evidential reasoning, implication networks, implication-network induction, knowledge engineering, Monte-Carlo simulation, empirical validation.
Citation:
Jiming Liu, Michel C. Desmarais, "A Method of Learning Implication Networks from Empirical Data: Algorithm and Monte-Carlo Simulation-Based Validation," IEEE Transactions on Knowledge and Data Engineering, vol. 9, no. 6, pp. 990-1004, Nov.-Dec. 1997, doi:10.1109/69.649321