For part I, see ibid., vol. EX-2, no. 4, pp. 10-11 (1987). The learning ability of neural networks is examined, along with their ability to generalize and to abstract ideal forms from an imperfect training set. Their potential for multiprocessing is also considered. A brief history of neural network research is followed by a discussion of network architectures and a presentation of several specific architectures and learning techniques. The Cauchy machine, which represents a possible solution to the local-minima problem encountered with virtually every other neural network training algorithm, is described. The outlook for neural nets is briefly considered.