Issue No. 3, March 1999 (vol. 21)
pp. 193-201
ABSTRACT
<p><b>Abstract</b>—As computing power has grown, the trend in experimental design has been away from techniques requiring little computation and toward techniques that provide better, more general results at the cost of additional computation. This paper continues that trend by presenting three new methods for designing experiments. A summary of previous work in experimental design is provided and used to show how the new methods generalize previous criteria and provide a more accurate analysis than prior methods. The first method generates experimental designs by maximizing the uncertainty of the experiment's result, while the remaining two minimize an approximation of the variance of a function of the parameters; the third uses a computationally expensive discrete approximation to determine the variance. The methods are tested and compared using the logistic model and a Bayesian classifier. The results show that, at the expense of greater computation, it is possible to generate experimental designs that are more effective at reducing the uncertainty of the Bayesian classifier's decision boundary.</p>
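The paper itself is not reproduced on this page, but the idea behind the first method described in the abstract — sequentially choosing the next experiment to maximize the uncertainty of its outcome — can be sketched in miniature. The sketch below assumes a one-dimensional logistic model and approximates the Bayesian posterior with a crude weighted-sample (importance-reweighting) scheme; all function names, the candidate grid, and the true parameters are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Crude posterior for a 1-D logistic model p(y=1 | x, w) = sigmoid(w0 + w1*x):
# parameter samples drawn from the prior, with weights updated by likelihood.
samples = rng.normal(0.0, 2.0, size=(2000, 2))
weights = np.full(len(samples), 1.0 / len(samples))

def update(x, y):
    """Reweight posterior samples by the likelihood of observation (x, y)."""
    global weights
    p = sigmoid(samples[:, 0] + samples[:, 1] * x)
    lik = p if y == 1 else 1.0 - p
    weights = weights * lik
    weights = weights / weights.sum()

def predictive(x):
    """Posterior-mean probability that y = 1 at design point x."""
    return float(np.sum(weights * sigmoid(samples[:, 0] + samples[:, 1] * x)))

def next_design(candidates):
    """Pick the candidate x whose predicted outcome is most uncertain,
    i.e., maximizes the entropy of the Bernoulli predictive distribution."""
    def entropy(p):
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)
    return max(candidates, key=lambda x: entropy(predictive(x)))

# Simulate a few sequential experiments; responses come from a hypothetical
# true model with (w0, w1) = (0, 3).
candidates = np.linspace(-3.0, 3.0, 61)
for _ in range(10):
    x = next_design(candidates)
    y = int(rng.random() < sigmoid(3.0 * x))
    update(x, y)
```

Because the predictive entropy of a Bernoulli outcome peaks at probability 0.5, this criterion steers experiments toward the classifier's current decision boundary, which is consistent with the abstract's claim that uncertainty-driven designs reduce boundary uncertainty.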
INDEX TERMS
Machine learning; Bayesian classifiers; experimental design.
CITATION
Robert Davis and Armand Prieditis, "Designing Optimal Sequential Experiments for a Bayesian Classifier," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 3, pp. 193-201, March 1999, doi:10.1109/34.754585