
Christian W. Omlin and C. Lee Giles, "Rule Revision With Recurrent Neural Networks," IEEE Transactions on Knowledge and Data Engineering, vol. 8, no. 1, pp. 183-188, Feb. 1996.
Abstract: Recurrent neural networks readily process, recognize, and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision.
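The encoding the abstract describes can be illustrated with a minimal sketch. This is not the authors' code; it assumes a second-order recurrent network in the style cited in the paper, where the state update has the form S_i(t+1) = g(sum over j,k of W[i,j,k] * S_j(t) * I_k(t)), and each symbol of a string is presented as a one-hot input vector at one time step, turning the string into a temporal sequence. All names (`one_hot`, `run`, the 4-neuron state size) are illustrative choices, not from the paper.

```python
import numpy as np

def one_hot(symbol, alphabet="01"):
    """Encode one alphabet symbol as a one-hot input vector I(t)."""
    v = np.zeros(len(alphabet))
    v[alphabet.index(symbol)] = 1.0
    return v

def run(W, s0, string, alphabet="01"):
    """Feed a string symbol-by-symbol through a second-order recurrent net.

    State update: S(t+1) = sigmoid(sum_{j,k} W[i,j,k] * S[j](t) * I[k](t)).
    Returns the final activation of neuron 0, a designated 'accept' unit.
    """
    g = lambda x: 1.0 / (1.0 + np.exp(-x))  # sigmoid discriminant
    S = s0.copy()
    for ch in string:
        I = one_hot(ch, alphabet)
        S = g(np.einsum("ijk,j,k->i", W, S, I))
    return S[0]

# Untrained network with random weights, just to show the mechanics.
rng = np.random.default_rng(0)
n_states, n_inputs = 4, 2
W = rng.normal(size=(n_states, n_states, n_inputs))
s0 = np.zeros(n_states)
s0[0] = 1.0  # start in a fixed initial state
score = run(W, s0, "0110")
print(0.0 < score < 1.0)  # sigmoid keeps activations strictly in (0, 1)
```

After training on labeled strings of a regular language, thresholding the accept neuron's final activation classifies strings, and the clustered state-space trajectories are what the rule-extraction algorithms mentioned above recover as a deterministic finite-state automaton.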