

Horne and Hush (1996):

In this more recent paper (http://www.dlsi.ua.es/~mlf/nnafmc/papers/horne96bounds.pdf), Horne and Hush improve the bounds obtained by Alon et al. (1991) by using a different DTRNN architecture: instead of a single-layer first-order feedforward neural network for the next-state function, they use a lower-triangular network, that is, a network in which a unit labeled $i$ receives inputs only from units with lower labels ($j<i$). Lower-triangular networks include layered feedforward networks as a special case. If no restrictions are imposed on the weights, the lower and upper bounds match and the number of units is $\Theta(\vert Q\vert^{1/2})$, improving on the bound obtained by Alon et al. (1991). When weights and thresholds are restricted to the set $\{-1,1\}$, the lower and upper bounds again coincide, though at a slightly higher $\Theta((\vert Q\vert\log \vert Q\vert)^{1/2})$. When limits are imposed on the fan-in, the result of Alon et al. (1991) is recovered: $\Theta(\vert Q\vert)$.
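The lower-triangular structure can be sketched in a few lines of Python. This is not Horne and Hush's construction, only an illustration of the architectural constraint: because unit $i$ depends only on units $j<i$, the whole network can be evaluated in a single pass in label order (a layered feedforward network is the special case where the labels are grouped into consecutive layers). The function name and data layout below are assumptions made for this sketch.

```python
def eval_lower_triangular(weights, thresholds, inputs):
    """Evaluate a lower-triangular threshold network (illustrative sketch).

    weights[i][j]  -- weight from unit j to unit i (defined only for j < i)
    thresholds[i]  -- threshold of unit i (ignored for clamped input units)
    inputs         -- values clamped onto units 0 .. len(inputs)-1
    """
    values = list(inputs)          # input units come first in label order
    for i in range(len(inputs), len(thresholds)):
        # unit i sees only units with lower labels, so one pass suffices
        s = sum(weights.get(i, {}).get(j, 0) * values[j] for j in range(i))
        values.append(1 if s >= thresholds[i] else 0)   # threshold (step) unit
    return values

# Tiny example: unit 2 fires iff both inputs are 1 (an AND gate, threshold 2)
weights = {2: {0: 1, 1: 1}}
thresholds = [0, 0, 2]
print(eval_lower_triangular(weights, thresholds, [1, 1]))  # [1, 1, 1]
print(eval_lower_triangular(weights, thresholds, [1, 0]))  # [1, 0, 0]
```

The single forward pass is possible precisely because the weight matrix is lower triangular; a general (cyclic) network would instead require iterating the units to a fixed point or unrolling over time.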



Debian User 2002-01-21