On the Efficient Allocation of Resources for Hypothesis Evaluation: A Statistical Approach
July 1995 (vol. 17, no. 7)
pp. 652-665

Abstract—This paper considers the decision-making problem of selecting a strategy from a set of alternatives on the basis of incomplete information (e.g., a finite number of observations). At any time the system can adopt a particular strategy or decide to gather additional information at some cost. Balancing the expected utility of the new information against the cost of acquiring the information is the central problem we address. In our approach, the cost and utility of applying a particular strategy to a given problem are represented as random variables from a parametric distribution. By observing the performance of each strategy on a randomly selected sample of problems, we can use parameter estimation techniques to infer statistical models of performance on the general population of problems. These models can then be used to estimate: 1) the utility and cost of acquiring additional information and 2) the desirability of selecting a particular strategy from a set of choices. Empirical results are presented that demonstrate the effectiveness of the hypothesis evaluation techniques for tuning system parameters in a NASA antenna scheduling application.
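
The abstract describes the sampling-versus-committing trade-off only at a high level. As a rough illustration (not the authors' COMPOSER-style procedure from the paper), the Python sketch below estimates each strategy's mean utility from a growing sample of observations and stops gathering data when a crude proxy for the value of further information (the uncertainty in the estimated advantage of the current best strategy) falls below the per-observation cost. The function names, strategy labels, cost value, and normal-approximation stopping rule are all illustrative assumptions.

```python
# Minimal sketch, assuming utilities are noisy scalar observations and that
# uncertainty and sampling cost are measured in the same (utility) units.
import math
import random
import statistics

def standard_error(samples):
    """Standard error of the sample mean (normal approximation)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

def evaluate(strategies, observe, sample_cost, initial=5, max_samples=200):
    """Sequentially sample strategy utilities until the estimated benefit of
    further sampling no longer exceeds its cost, then return the best strategy.

    strategies  -- list of strategy identifiers
    observe     -- observe(s) returns one noisy utility observation for s
    sample_cost -- cost charged per additional observation of a strategy
    """
    data = {s: [observe(s) for _ in range(initial)] for s in strategies}
    while True:
        means = {s: statistics.mean(v) for s, v in data.items()}
        best = max(means, key=means.get)
        # Crude proxy for the value of more information: the width of the
        # uncertainty band around the estimated advantage of `best`.
        uncertainty = max(
            standard_error(data[best]) + standard_error(data[s])
            for s in strategies if s != best
        )
        if uncertainty <= sample_cost or len(data[best]) >= max_samples:
            return best, means
        for s in strategies:  # gather one more observation of each strategy
            data[s].append(observe(s))

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical strategies and true utilities, used only to generate data.
    true_utility = {"relax-constraints": 1.0, "greedy": 0.7}
    noisy = lambda s: random.gauss(true_utility[s], 0.5)
    best, estimates = evaluate(list(true_utility), noisy, sample_cost=0.05)
    print("selected:", best, "estimated utilities:", estimates)
```

A real procedure of the kind the paper studies would replace the ad hoc stopping test with a statistical guarantee (e.g., a bound on the probability of selecting a strategy whose expected utility is not within a tolerance of the best), and would fold the cost of sampling directly into the expected-value calculation.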

[1] J. Gratch and G. DeJong, “COMPOSER: A probabilistic solution to the utility problem in speedup learning,” Proc. Nat’l Conf. Artificial Intelligence, pp. 235-240, 1992.
[2] R. Greiner and I. Jurisica, “A statistical approach to solving the EBL utility problem,” Proc. Nat’l Conf. Artificial Intelligence, pp. 241-248, 1992.
[3] U. Fayyad and K. Irani, “The attribute selection problem in decision tree generation,” Proc. 10th Nat’l Conf. Artificial Intelligence, pp. 104-110, 1990.
[4] R. Musick, J. Catlett, and S. Russell, “Decision theoretic subsampling for induction on large databases,” Proc. 10th Int’l Conf. Machine Learning, pp. 212-219, 1993.
[5] A. Moore and M. Lee, “Efficient algorithms for minimizing cross validation error,” Proc. 11th Int’l Conf. Machine Learning, pp. 190-198, 1994.
[6] L.G. Valiant, “A Theory of the Learnable,” Comm. ACM, vol. 27, no. 11, pp. 1134-1142, Nov. 1984.
[7] S. Russell and E. Wefald, “On optimal game tree search using rational meta-reasoning,” Proc. 11th Int’l Joint Conf. Artificial Intelligence, pp. 334-340, 1989.
[8] S. Russell and E. Wefald, Do the Right Thing: Studies in Limited Rationality. Cambridge, Mass.: MIT Press, 1991.
[9] M. Iwamoto, “A planner with quality goal and its speed-up learning for optimization problem,” Proc. Second Int’l Conf. AI Planning Systems, pp. 281-286, 1994.
[10] S. Chien and J. Gratch, “Producing satisfying solutions to scheduling problems: An iterative constraint relaxation approach,” Proc. Second Int’l Conf. AI Planning Systems, pp. 213-218, 1994.
[11] M.A. Perez and J. Carbonell, “Control knowledge to improve plan quality,” Proc. Second Int’l Conf. AI Planning Systems, pp. 323-328, 1994.
[12] T.J. Santner and A.C. Tamhane, “Designing experiments for selecting a normal population with a large mean and a small variance,” Design of Experiments: Ranking and Selection, T.J. Santner and A.C. Tamhane, eds., Marcel Dekker, 1984.
[13] R.V. Hogg and A.T. Craig, Introduction to Mathematical Statistics. New York: Macmillan Publishing, 1978.
[14] E. Kreyszig, Introductory Mathematical Statistics: Principles and Methods. New York: John Wiley and Sons, 1970.
[15] H. Buringer, H. Martin, and K. Schriever, Nonparametric Sequential Selection Procedures. Boston, Mass.: Birkhauser, 1980.
[16] W. Yang and B. Nelson, “Using common random numbers and control variates in multiple-comparison procedures,” Operations Research, vol. 39, no. 4, pp. 583-591, 1991.
[17] O. Maron and A. Moore, “Hoeffding races: Accelerating model selection search for classification and function approximation,” Advances in Neural Information Processing Systems 6. San Francisco: Morgan Kaufmann, 1994.
[18] R.E. Bechhofer, “A single-sample multiple decision procedure for ranking means of normal populations with known variances,” Annals of Mathematical Statistics, vol. 25, no. 1, pp. 16-39, 1954.
[19] E. Paulson, “A sequential procedure for selecting the population with the largest mean from k normal populations,” Annals of Mathematical Statistics, vol. 35, pp. 174-180, 1964.
[20] J. Gratch, “COMPOSER: A decision-theoretic approach to adaptive problem-solving,” Technical Report UIUCDCS-R-93-1806, Dept. of Computer Science, Univ. of Illinois, Urbana, Ill., May 1993.
[21] Turnbull and Weiss, “A class of sequential procedures for k-sample problems concerning normal means with unknown unequal variances,” Design of Experiments: Ranking and Selection, T.J. Santner and A.C. Tamhane, eds., Marcel Dekker, 1984.
[22] R.M. Haseeb, Modern Statistical Selection. Columbus, Ohio: American Sciences Press, 1985.
[23] J. Gratch, S. Chien, and G. DeJong, “Learning search control knowledge for deep space network scheduling,” Proc. 10th Int’l Conf. Machine Learning, pp. 135-142, 1993.
[24] J. Gratch, S. Chien, and G. DeJong, “Improving learning performance through rational resource allocation,” Proc. 12th Nat’l Conf. Artificial Intelligence, pp. 576-581, 1994.

Index Terms:
Machine learning, the utility problem, planning and scheduling, parameter estimation, adaptive problem-solving.
Citation:
Steve Chien, Jonathan Gratch, Michael Burl, "On the Efficient Allocation of Resources for Hypothesis Evaluation: A Statistical Approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 7, pp. 652-665, July 1995, doi:10.1109/34.391408