Model-free distributed learning
- 1 March 1990
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 1 (1), 58-70
- https://doi.org/10.1109/72.80205
Abstract
Model-free learning for synchronous and asynchronous quasi-static networks is presented. The network weights are continuously perturbed, while the time-varying performance index is measured and correlated with the perturbation signals; the correlation output determines the changes in the weights. The perturbation may be either via noise sources or orthogonal signals. The invariance to detailed network structure mitigates large variability between supposedly identical networks as well as implementation defects. This local, regular, and completely distributed mechanism requires no central control and involves only a few global signals. Thus, it allows for integrated, on-chip learning in large analog and optical networks.
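The perturb-measure-correlate loop from the abstract can be illustrated with a minimal numerical sketch. This is not the paper's implementation: it assumes a simple quadratic performance index (mean squared error of a linear unit), synchronous updates, and ±1 perturbation signals; all variable names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(w, X, y):
    # performance index J(w): mean squared error of a linear unit
    return np.mean((X @ w - y) ** 2)

# toy data (illustrative only, not from the paper)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
sigma = 1e-3  # perturbation amplitude
eta = 0.1     # learning rate

for _ in range(3000):
    # perturb every weight simultaneously with an independent +/-1 signal
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    # measure the change in the performance index under perturbation
    dJ = performance(w + sigma * delta, X, y) - performance(w, X, y)
    # correlating the measured change with each weight's own perturbation
    # signal yields an unbiased gradient estimate; the correlation output
    # determines the weight change, with no model of the network needed
    w -= eta * (dJ / sigma) * delta
```

Note that the update for each weight uses only that weight's local perturbation signal and one globally broadcast scalar (dJ), which is the sense in which the mechanism is local and distributed.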