Issue No. 10, October 1990 (vol. 39)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/12.59855
<p>M.D. Beaudry (1978) proposed a simple method of computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented: the method is generalized to semi-Markov reward processes, and the restriction requiring that zero reward be associated only with absorbing states is removed. The algorithm proceeds by replacing each zero-reward nonabsorbing state with a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.</p>
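The probabilistic-switch construction mentioned in the abstract is the same matrix operation used to eliminate vanishing states from a GSPN reachability graph. A minimal NumPy sketch, assuming a discrete-time embedded chain with transition matrix P partitioned into tangible (nonzero-reward) and vanishing (zero-reward, nonabsorbing) states; the function name, example chain, and state labels are illustrative, not taken from the paper:

```python
import numpy as np

def eliminate_vanishing(P, vanishing):
    """Reduce a row-stochastic matrix P by removing vanishing states.

    P         : (n, n) transition matrix of the embedded chain
    vanishing : boolean mask, True for zero-reward nonabsorbing states

    Returns the transition matrix over the remaining (tangible) states:
        P' = P_TT + P_TV (I - P_VV)^{-1} P_VT
    where (I - P_VV)^{-1} sums the probabilities of all excursions
    through vanishing states -- the "probabilistic switch".
    """
    vanishing = np.asarray(vanishing, dtype=bool)
    keep = ~vanishing
    P_TT = P[np.ix_(keep, keep)]
    P_TV = P[np.ix_(keep, vanishing)]
    P_VV = P[np.ix_(vanishing, vanishing)]
    P_VT = P[np.ix_(vanishing, keep)]
    # Solve (I - P_VV) X = P_VT rather than inverting explicitly.
    n_v = P_VV.shape[0]
    switch = np.linalg.solve(np.eye(n_v) - P_VV, P_VT)
    return P_TT + P_TV @ switch

# Illustrative 3-state chain: state 1 carries zero reward and is
# eliminated; state 2 is absorbing.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
P_red = eliminate_vanishing(P, [False, True, False])
```

The reduced matrix remains row-stochastic: each row of P' redistributes the probability mass that formerly flowed through the eliminated states directly onto their eventual tangible destinations.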
performability analysis; vanishing states elimination; fast transient states elimination; semi-Markov reward processes; zero reward nonabsorbing states; probabilistic switch; reachability graph; stochastic Petri net; decomposition; stiff Markov chains; Markov processes; performance evaluation.
G. Ciardo, R.A. Marie, B. Sericola, K.S. Trivedi, "Performability Analysis Using Semi-Markov Reward Processes", IEEE Transactions on Computers, vol. 39, no. 10, pp. 1251-1264, October 1990, doi:10.1109/12.59855