Default generalisation in connectionist networks

Abstract
A potential problem for connectionist accounts of inflectional morphology is the need to learn a “default” inflection (Prasada & Pinker, 1993). The early connectionist work of Rumelhart and McClelland (1986) might be interpreted as suggesting that a network can learn to treat a given inflection as the “elsewhere” case only if it applies to a much larger class of items than any other inflection. This claim is true of Rumelhart and McClelland's (1986) model, which was a two-layer network subject to the computational limitations on networks of that class (Minsky & Papert, 1969). However, it does not generalise to current models, which have more sophisticated architectures and learning algorithms available to them. In this paper, we explain the basis of the distinction, and demonstrate that given more appropriate architectural assumptions, connectionist models are perfectly capable of learning a default category and generalising as required, even in the absence of superior type frequency.
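As an illustration of the architectural point at issue (not one of the simulations reported in the paper), the sketch below trains a small network with a hidden layer on a toy mapping in which the “default” class is simply the complement of two feature-defined irregular clusters. Because that complement is not linearly separable, a two-layer (input-to-output) network of the kind discussed by Minsky and Papert (1969) cannot represent it, whereas a network with a hidden layer can learn it and extend it to held-out patterns. All task details here (8-bit patterns, the clustering rule, network sizes) are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def inflection(x):
    # Irregular "clusters" occupy the two extremes of the feature space;
    # every other pattern takes the default inflection (label 0).
    return 1 if (x.sum() <= 2 or x.sum() >= 6) else 0

# Enumerate all 8-bit patterns; hold some out to test generalisation.
patterns = np.array([[(i >> b) & 1 for b in range(8)] for i in range(256)], float)
labels = np.array([inflection(p) for p in patterns], float)
order = rng.permutation(256)
train, test = order[:180], order[180:]

# One hidden layer of 8 logistic units, trained by per-pattern backpropagation.
W1 = rng.normal(0.0, 0.5, (8, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0.0, 0.5, 8);      b2 = 0.0           # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.2

for epoch in range(500):
    for i in rng.permutation(train):
        x, y = patterns[i], labels[i]
        h = sigmoid(x @ W1 + b1)                  # hidden activations
        out = sigmoid(h @ W2 + b2)                # network output
        err = out - y                             # cross-entropy gradient at the output
        dh = err * W2 * h * (1.0 - h)             # backpropagated hidden error
        W2 -= lr * err * h;         b2 -= lr * err
        W1 -= lr * np.outer(x, dh); b1 -= lr * dh

# Novel patterns, most of them "elsewhere" items, should receive the default.
h = sigmoid(patterns[test] @ W1 + b1)
pred = (sigmoid(h @ W2 + b2) > 0.5).astype(int)
print("generalisation accuracy on held-out patterns:",
      round((pred == labels[test]).mean(), 3))
```

The design choice mirrors the abstract's argument: nothing about the default items themselves defines the category; membership is determined only by failure to belong to an irregular cluster, and the hidden layer is what allows the network to carve out that "elsewhere" region and apply it to unseen items.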