A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints
- 1 January 2000
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 11 (6), 1251-1262
- https://doi.org/10.1109/72.883412
Abstract
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special case that the recurrent neural network can solve. The proposed recurrent neural network has the following characteristics.
1) It is regular in the sense that any optimum of the objective function subject to the bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, the recurrent neural network is also complete in the sense that the set of constrained optima coincides with the set of equilibria of the network.
2) It is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and converges to the set of equilibria of the network for any initial point in the feasible region.
3) It has an attractivity property in the sense that its trajectory eventually converges to the feasible region from any initial state, even one outside the bounded feasible region.
4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters.
Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
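The abstract does not reproduce the network dynamics. A common projection-type model for bound-constrained minimization, which is one plausible reading of the description above rather than necessarily the authors' exact formulation, drives the state toward a box-projected gradient step: dx/dt = λ(P(x − α∇f(x)) − x), where P clips each component to its bounds, so equilibria satisfy the constrained optimality condition x = P(x − α∇f(x)). The sketch below simulates such dynamics with forward-Euler integration on a strictly convex quadratic with box constraints; the parameters α, λ, the step size, and the example problem are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo, hi):
    """Project x componentwise onto the box [lo, hi] (the bound constraints)."""
    return np.clip(x, lo, hi)

def simulate_projection_network(grad, lo, hi, x0, alpha=0.5, lam=1.0,
                                dt=1e-2, steps=20000, tol=1e-10):
    """Forward-Euler simulation of dx/dt = lam * (P(x - alpha*grad(x)) - x).

    At an equilibrium, x = P(x - alpha*grad(x)), the standard optimality
    condition for minimizing f over the box [lo, hi].
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        dx = lam * (project_box(x - alpha * grad(x), lo, hi) - x)
        x = x + dt * dx
        if np.linalg.norm(dx) < tol:
            break
    return x

if __name__ == "__main__":
    # Strictly convex quadratic f(x) = 0.5 x'Qx + c'x with box constraints,
    # mirroring the quadratic special case discussed in the abstract.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    c = np.array([-1.0, -2.0])
    grad = lambda x: Q @ x + c
    lo, hi = np.array([0.0, 0.0]), np.array([0.5, 0.5])

    # Start outside the feasible box to illustrate the attractivity property.
    x_star = simulate_projection_network(grad, lo, hi, x0=np.array([2.0, -1.0]))
    print("approximate constrained minimizer:", x_star)
```

Starting from an infeasible initial state, the trajectory first converges to the feasible box and then settles at the constrained minimizer, matching properties 2)-3) claimed in the abstract for the convex case.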