Neural networks, error-correcting codes, and polynomials over the binary n-cube
- 1 September 1989
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Information Theory
- Vol. 35 (5), 976-987
- https://doi.org/10.1109/18.42215
Abstract
Several ways of relating the concept of error-correcting codes to the concept of neural networks are presented. Performing maximum-likelihood decoding in a linear block error-correcting code is shown to be equivalent to finding a global maximum of the energy function of a certain neural network. Given a linear block code, a neural network can be constructed in such a way that every codeword corresponds to a local maximum. The connection between maximization of polynomials over the n-cube and error-correcting codes is also investigated; the results suggest that decoding techniques can be a useful tool for solving such maximization problems. The results are generalized to both nonbinary and nonlinear codes.
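The equivalence stated in the abstract can be illustrated on a toy code. This is a hedged sketch, not the paper's construction: in the ±1 representation each codeword coordinate becomes a monomial in the information variables, so the correlation between the received word and a codeword is a polynomial over {-1,+1}^k, and its global maximum identifies the maximum-likelihood codeword (correlation equals n minus twice the Hamming distance). The [3,2] code, generator matrix, and function names below are illustrative choices.

```python
from itertools import product

# Toy [3,2] even-weight code (illustrative, not from the paper):
# codeword = u @ G over GF(2).
G = [[1, 0, 1],
     [0, 1, 1]]
k, n = 2, 3

def encode(u):
    return tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))

def to_pm1(bits):
    # map bit 0 -> +1, bit 1 -> -1
    return tuple(1 - 2 * b for b in bits)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def ml_decode_by_distance(r_bits):
    # classical ML decoding on a BSC: nearest codeword in Hamming distance
    return min((encode(u) for u in product((0, 1), repeat=k)),
               key=lambda c: hamming(c, r_bits))

def decode_by_polynomial(r_bits):
    # In the +/-1 domain, coordinate j of a codeword is the monomial
    # prod_{i: G[i][j]=1} x_i, so correlation with the received word is a
    # polynomial over {-1,+1}^k; maximizing it gives the ML codeword.
    r = to_pm1(r_bits)

    def monomial(x, j):
        m = 1
        for i in range(k):
            if G[i][j]:
                m *= x[i]
        return m

    best = max(product((-1, 1), repeat=k),
               key=lambda x: sum(r[j] * monomial(x, j) for j in range(n)))
    u = tuple((1 - xi) // 2 for xi in best)  # map +1 -> 0, -1 -> 1
    return encode(u)

# Both decoders reach the same minimum distance on every received word
# (they may pick different codewords only when there is a tie).
for r in product((0, 1), repeat=n):
    assert hamming(ml_decode_by_distance(r), r) == \
           hamming(decode_by_polynomial(r), r)
```

For example, receiving (1, 0, 1), which is itself a codeword, both decoders return it: the polynomial -x0 + x1 - x0*x1 attains its maximum 3 at (x0, x1) = (-1, +1), i.e. information bits (1, 0).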