Quantitative Evaluation of Systems, International Conference on (2011)
Sept. 5, 2011 to Sept. 8, 2011
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/QEST.2011.21
Obtaining accurate system models for verification is a hard and time-consuming process, which industry sees as a hindrance to adopting otherwise powerful model-driven development techniques and tools. In this paper we pursue an alternative approach in which an accurate high-level model is automatically constructed from observations of a given black-box embedded system. We adapt algorithms for learning finite probabilistic automata from observed system behaviors. We prove that, in the limit of large sample sizes, the learned model is an accurate representation of the data-generating system. In particular, in the large-sample limit, the learned model and the original system define the same probabilities for linear temporal logic (LTL) properties. Thus, we can perform probabilistic LTL (PLTL) model checking on the learned model to infer properties of the system. We report experiments in which models are learned from system observations at different levels of abstraction. The results show that the learned models provide very good approximations of relevant properties of the original system.
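To illustrate the kind of learning the abstract describes, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of the standard first step in ALERGIA-style learning of probabilistic automata: building a frequency prefix-tree acceptor (FPTA) from observed traces, from which empirical transition probabilities can be read off before compatible states are merged. All function names here are illustrative.

```python
from collections import defaultdict

def build_fpta(traces):
    """Build a frequency prefix-tree acceptor from observed traces.

    Each state is a prefix of an observed trace (represented as a tuple);
    counts[state][symbol] records how often `symbol` was observed after
    that prefix. Merging compatible states would come afterwards in an
    ALERGIA-style learner.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        state = ()  # root state: the empty prefix
        for symbol in trace:
            counts[state][symbol] += 1
            state = state + (symbol,)
    return counts

def transition_probs(counts, state):
    """Empirical transition distribution out of a given state."""
    total = sum(counts[state].values())
    return {sym: c / total for sym, c in counts[state].items()}

# Example: four observed traces from a hypothetical black-box system.
traces = [("a", "b"), ("a", "c"), ("a", "b"), ("b",)]
fpta = build_fpta(traces)
print(transition_probs(fpta, ()))      # {'a': 0.75, 'b': 0.25}
print(transition_probs(fpta, ("a",)))  # {'b': 0.666..., 'c': 0.333...}
```

In the large-sample limit these empirical distributions converge to the true transition probabilities of the data-generating system, which is the intuition behind the convergence result stated in the abstract.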
Model Checking, Learning, Probabilistic Automata, Probabilistic Linear Time Temporal Logic
Manfred Jaeger, Hua Mao, Kim G. Larsen, Brian Nielsen, Thomas D. Nielsen, Yingke Chen, "Learning Probabilistic Automata for Model Checking," Quantitative Evaluation of Systems, International Conference on, vol. 00, pp. 111-120, 2011, doi:10.1109/QEST.2011.21