Dynamics of iterated-map neural networks
- 1 July 1989
- Research article
- Published by American Physical Society (APS) in Physical Review A
- Vol. 40 (1), 501-504
- https://doi.org/10.1103/physreva.40.501
Abstract
We analyze a discrete-time neural network with continuous state variables updated in parallel. We show that for symmetric connections, the only attractors are fixed points and period-two limit cycles. We also present a global stability criterion which guarantees only fixed-point attractors by placing limits on the gain (maximum slope) of the sigmoid nonlinearity. The iterated-map network has the same fixed points as a continuous-time analog electronic neural network and converges to an attractor after a small number of iterations of the map.
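To illustrate the kind of dynamics the abstract describes, the following is a minimal sketch of a parallel-update iterated-map network. It assumes a specific (hypothetical) form of the update rule, x(t+1) = tanh(gain · W x(t)), with a symmetric connection matrix W; the particular weights, gain value, and tolerances are illustrative choices, not taken from the paper.

```python
import numpy as np

def iterate_map(W, x0, gain, n_steps=200):
    """Iterate the parallel update x(t+1) = tanh(gain * W @ x(t)) and return the trajectory."""
    x = x0.copy()
    trajectory = [x.copy()]
    for _ in range(n_steps):
        x = np.tanh(gain * (W @ x))  # all units updated simultaneously (parallel update)
        trajectory.append(x.copy())
    return np.array(trajectory)

# Symmetric random connections (W = W^T); parameters are hypothetical.
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2.0
x0 = rng.uniform(-0.1, 0.1, size=n)

traj = iterate_map(W, x0, gain=2.0)

# For symmetric W, the claimed attractors are fixed points or period-two cycles,
# so late in the run x(t) should match either x(t-1) or x(t-2).
is_fixed_point = np.allclose(traj[-1], traj[-2], atol=1e-6)
is_period_two = np.allclose(traj[-1], traj[-3], atol=1e-6)
print("fixed point:", is_fixed_point, " period-two cycle:", is_period_two)
```

Lowering the gain (making the sigmoid shallower) in this sketch is the knob the abstract's stability criterion constrains: with a sufficiently small maximum slope, only fixed-point attractors should be observed.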