On the design of gradient algorithms for digitally implemented adaptive filters

Abstract
The effect of digital implementation on the gradient (steepest descent) algorithm commonly used in the mean-square adaptive equalization of pulse-amplitude modulated data signals is considered. It is shown that digitally implemented adaptive gradient algorithms can exhibit effects which are significantly different from those encountered in analog (infinite precision) algorithms. This is illustrated by considering the often-quoted result of stochastic approximation that to achieve the optimum rate of convergence in an adaptive algorithm the step size should be proportional to 1/n, where n is the number of iterations. On closer examination one finds that this result applies only when n is large and is relevant only for analog algorithms. It is shown that as the number of iterations becomes large one should not continually decrease the step size in a digital gradient algorithm. This result is a manifestation of the quantization inherent in any digitally implemented system. A surprising result is that these effects produce a digital residual mean-square error that is minimized by making the step size as large as possible. Since the analog residual error is minimized by taking small step sizes, the optimum step-size sequence reflects a compromise between these competing goals. The performance of a time-varying gain sequence suggested by stochastic approximation is contrasted with the performance of a constant step-size sequence. It is shown that in a digital environment the latter sequence is capable of attaining a smaller residual error.
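The following is a minimal, hypothetical sketch (not taken from the paper) of the phenomenon the abstract describes: when an LMS-style gradient equalizer stores its coefficients with finite precision, a step size that decays like 1/n eventually produces updates smaller than one quantization level, so adaptation stalls, whereas a suitably chosen constant step size keeps adapting. The channel model, tap count, quantization step, and step-size values below are all illustrative assumptions.

```python
import numpy as np

def quantize(x, step):
    """Round to the nearest multiple of `step` (models fixed-point coefficient storage)."""
    return np.round(x / step) * step

def run_lms(step_schedule, n_iters=5000, n_taps=11, q_step=2**-10, noise_std=0.01, seed=0):
    """Adapt a quantized-coefficient LMS equalizer; return the squared-error history.
    All modeling choices here (channel, delay, signaling) are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    channel = np.array([0.1, 1.0, 0.2])        # assumed channel with mild intersymbol interference
    symbols = rng.choice([-1.0, 1.0], size=n_iters + len(channel))
    w = np.zeros(n_taps)                        # equalizer taps, kept quantized
    x_hist = np.zeros(n_taps)                   # tap-delay line of received samples
    delay = (n_taps + len(channel) - 2) // 2    # rough decision delay
    errs = np.empty(n_iters)
    for n in range(n_iters):
        # Received sample: channel output plus additive noise.
        rx = channel @ symbols[n:n + len(channel)][::-1] + noise_std * rng.standard_normal()
        x_hist = np.roll(x_hist, 1)
        x_hist[0] = rx
        y = w @ x_hist                          # equalizer output
        d = symbols[n - delay] if n >= delay else 0.0   # training (reference) symbol
        e = d - y
        mu = step_schedule(n)
        # Gradient update, then re-quantize the stored coefficients:
        # once mu*e*x falls below half a quantization step, the update rounds to zero.
        w = quantize(w + mu * e * x_hist, q_step)
        errs[n] = e * e
    return errs

# Decreasing (stochastic-approximation style) step size vs. a constant step size.
err_decay = run_lms(lambda n: 0.5 / (n + 1))
err_const = run_lms(lambda n: 0.05)

print("final MSE, mu = 0.5/(n+1):", err_decay[-500:].mean())
print("final MSE, mu = 0.05     :", err_const[-500:].mean())
```

Under these assumed parameters the 1/n schedule typically stalls once its updates drop below the quantization level, leaving a larger residual error than the constant step size, which is the qualitative behavior the abstract reports for digital implementations.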
