IEEE Transactions on Computers, October 1969 (vol. 18, no. 10), pp. 918-923
This paper discusses the optimization and implementation of recognition networks built by interconnecting a standard network element to form a classification network. The standard element has a nonlinear transfer function whose inputs may be weighted by selected resistors. It is assumed that a training set of samples to be accepted or rejected is available, but that neither the a priori probabilities nor the probability density functions of the measurements describing the samples are known. The discriminant functions are formed from a given topology with unknown sets of weighting resistors assigned to the elements that constitute the classification network. Computer optimization uses a hill-climbing technique that maximizes a function related to the miss rate and false alarm rate but requires neither an estimate nor an exact description of the sample probability space. A particular advantage is the one-to-one correspondence between the results of the optimization program and the physical realization of the optimal recognition network. Disadvantages are that an optimum can be found only with respect to a given topology and that the optimization algorithm may terminate prematurely on a local maximum.
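The training procedure the abstract describes — hill climbing over the weights of a nonlinear threshold element, guided by a criterion built from miss rate and false alarm rate on a labeled training set — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact formulation: the objective here (minimizing the sum of the two error rates, equivalent to maximizing its negative), the coordinate-wise perturbation, and the acceptance rule are all assumptions, and a single hard-threshold element stands in for the full classification network.

```python
import random

def classify(weights, x):
    # One "standard element": weighted sum of inputs through a hard nonlinearity.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def cost(weights, samples, labels):
    # Illustrative objective: miss rate plus false alarm rate on the training set.
    # (The paper maximizes a related function; minimizing this is equivalent in spirit.)
    misses = sum(1 for x, y in zip(samples, labels)
                 if y == 1 and classify(weights, x) == 0)
    false_alarms = sum(1 for x, y in zip(samples, labels)
                       if y == 0 and classify(weights, x) == 1)
    n_pos = max(1, sum(labels))
    n_neg = max(1, len(labels) - sum(labels))
    return misses / n_pos + false_alarms / n_neg

def hill_climb(samples, labels, dim, steps=2000, seed=0):
    # Simple hill climbing: perturb one weight at a time and keep the change
    # only if the cost does not worsen. Like the paper's algorithm, this can
    # terminate on a local optimum for the given topology.
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best = cost(w, samples, labels)
    for _ in range(steps):
        i = rng.randrange(dim)
        old = w[i]
        w[i] += rng.uniform(-0.2, 0.2)
        c = cost(w, samples, labels)
        if c <= best:
            best = c
        else:
            w[i] = old  # reject the perturbation
    return w, best
```

Note that no probability densities or a priori probabilities enter the procedure: the objective is evaluated directly on the training samples, and the final weight vector maps one-to-one onto the resistor values of a physical realization.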
Index Terms:
Discriminant functions, false alarm rate, hill-climbing techniques, learning machines, miss rate, optimization, pattern recognition.
H. Drucker, "Computer Optimization of Recognition Networks," IEEE Transactions on Computers, vol. 18, no. 10, pp. 918-923, Oct. 1969, doi:10.1109/T-C.1969.222547