Generalization Ability of Folding Networks
Barbara Hammer
IEEE Transactions on Knowledge and Data Engineering, vol. 13, no. 2, pp. 196-206, March/April 2001
Abstract—The information-theoretic learnability of folding networks, a highly successful approach for dealing with tree-structured inputs, is examined. We find bounds on the VC, pseudo-, and fat-shattering dimensions of folding networks with various activation functions. As a consequence, valid generalization of folding networks can be guaranteed. However, distribution-independent bounds on the generalization error cannot exist in principle. We propose two approaches that take the specific distribution into account and allow us to derive explicit bounds on the deviation of the empirical error from the real error of a learning algorithm: the first approach requires the probability of large trees to be limited a priori, and the second approach deals with situations where the maximum input height in a concrete learning example is restricted.
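To make the object of study concrete: a folding network processes a labeled tree by applying one shared encoding map recursively at every node, so the whole tree is folded into a single fixed-size vector from which the output is read off. The following minimal Python sketch illustrates this idea; the dimensions, random weights, and function names are illustrative assumptions, not the paper's formal construction.

```python
# Minimal sketch of a folding network on binary trees (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
LABEL_DIM, STATE_DIM = 2, 4  # hypothetical sizes

# One shared affine map, applied at every node of the input tree.
W = 0.1 * rng.standard_normal((STATE_DIM, LABEL_DIM + 2 * STATE_DIM))
b = np.zeros(STATE_DIM)
v = 0.1 * rng.standard_normal(STATE_DIM)  # output weights at the root

def encode(tree):
    """Fold a tree (label, left, right) into a STATE_DIM vector.

    Empty subtrees are None and map to the zero state (the initial context).
    """
    if tree is None:
        return np.zeros(STATE_DIM)
    label, left, right = tree
    x = np.concatenate([label, encode(left), encode(right)])
    return np.tanh(W @ x + b)  # sigmoidal activation

def classify(tree):
    """Binary decision read off the folded root representation."""
    return float(v @ encode(tree) > 0.0)

# Usage: a tree of height 2; the input height is exactly the quantity
# that the paper's second approach restricts.
leaf = (np.array([1.0, 0.0]), None, None)
tree = (np.array([0.5, -0.5]), leaf, (np.array([0.0, 1.0]), None, None))
print(classify(tree))
```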
Index Terms: Recurrent neural networks, folding networks, computational learning theory, VC dimension, UCED property, luckiness function.
Citation: Barbara Hammer, "Generalization Ability of Folding Networks," IEEE Transactions on Knowledge and Data Engineering, vol. 13, no. 2, pp. 196-206, March/April 2001, doi:10.1109/69.917560.