Abstract
A predictive vector quantizer (PVQ) is a vector extension of a predictive quantizer. It consists of two parts: a conventional memoryless vector quantizer (VQ) and a vector predictor. Two gradient algorithms for designing a PVQ are developed in this paper: the steepest descent (SD) algorithm and the stochastic gradient (SG) algorithm. Both have the property of jointly improving the quantizer and the predictor in the sense of minimizing the distortion as measured by the average mean-squared error. The two design approaches differ in the update period and the step size used in each iteration to update the codebook and predictor. The SG algorithm updates once for each input training vector and uses a small step size, while the SD algorithm updates only once per long period, possibly one pass over the entire training sequence, and uses a relatively large step size. Code designs and tests are simulated for both Gauss-Markov sources and sampled speech waveforms, and the results are compared to codes designed using techniques that attempt to optimize only the quantizer for the predictor and not vice versa.
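The following is a minimal sketch of the SG design loop described above, not the paper's implementation: after each training vector, the selected residual codeword and the linear predictor matrix each take one small gradient step on the squared reconstruction error. All names (`design_pvq_sg`, `step_cb`, `step_pred`) and the single-matrix first-order predictor are illustrative assumptions.

```python
import numpy as np

def design_pvq_sg(train, codebook, A, step_cb=0.01, step_pred=0.001):
    """Stochastic-gradient PVQ design sketch (hypothetical API).

    train    : (N, k) array of training vectors
    codebook : (M, k) array of residual codewords, updated in place
    A        : (k, k) linear predictor matrix, updated in place
    """
    x_prev = np.zeros(train.shape[1])        # previous reconstruction
    for x in train:
        x_hat = A @ x_prev                   # vector prediction
        e = x - x_hat                        # prediction residual
        dists = ((codebook - e) ** 2).sum(axis=1)
        i = dists.argmin()                   # nearest-neighbor codeword
        err = e - codebook[i]                # overall reconstruction error
        # SG step on the codebook: move the chosen codeword toward the residual,
        # since d||e - c_i||^2 / dc_i = -2 (e - c_i).
        codebook[i] += step_cb * err
        # SG step on the predictor: d||x - A x_prev - c_i||^2 / dA = -2 err x_prev^T.
        A += step_pred * np.outer(err, x_prev)
        x_prev = x_hat + codebook[i]         # closed-loop reconstruction
    return codebook, A
```

The SD variant would instead accumulate these gradients over a long period (e.g., a full pass through the training sequence) and apply one larger update, trading per-sample adaptation for a smoother estimate of the gradient.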