Lip modeling for visual speech recognition

Abstract
In this paper, we describe an algorithm for modeling the shape of the mouth and extracting meaningful dimensions for use by automatic lipreading systems. One advantage of this technique lies in the ability to normalize the model to compensate for scale and rotation. An error function is defined that relates the model to the image, and minimization of the error yields the best-fit model. This is similar to deformable templates, but we attempt to perform the minimization in closed form. Visual-only recognition was performed with features extracted from the model, and the recognition system achieved 85% accuracy on a two-word discrimination task.
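The abstract gives no implementation details, so the following is only a rough sketch of the general idea it describes: a parametric mouth-shape model fit to image-derived points by a closed-form error minimization, with normalization for scale and rotation. The parabolic lip parameterization, the normalization steps, the synthetic edge points, and the width/height features are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def normalize(points):
    """Translate to the centroid, scale to unit RMS radius, and rotate so the
    principal axis is horizontal (compensating for translation, scale, and
    rotation). `points` is an (N, 2) array of lip-edge coordinates (assumed)."""
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    centered = centered / scale
    # Principal axis from the eigen-decomposition of the 2x2 covariance matrix.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    major = vecs[:, -1]                      # eigenvector of the largest eigenvalue
    angle = np.arctan2(major[1], major[0])
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])
    return centered @ R.T

def fit_parabola(points):
    """Closed-form least-squares fit of y = a*x^2 + b*x + c to lip-edge points.
    The squared vertical error is linear in (a, b, c), so the best-fit
    parameters follow directly from the normal equations -- no iteration."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x ** 2, x, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params  # (a, b, c)

# Hypothetical noisy edge points for the upper and lower lip contours.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
upper = np.column_stack([x,  0.3 * (1 - x ** 2) + 0.02 * rng.standard_normal(40)])
lower = np.column_stack([x, -0.5 * (1 - x ** 2) + 0.02 * rng.standard_normal(40)])

mouth = normalize(np.vstack([upper, lower]))
a_up, b_up, c_up = fit_parabola(mouth[:40])
a_lo, b_lo, c_lo = fit_parabola(mouth[40:])

# Example shape features one might feed to a recognizer: mouth width and the
# vertical lip gap at the mouth centre (x = 0, where y = c for each parabola).
width = mouth[:, 0].max() - mouth[:, 0].min()
height = c_up - c_lo
print(f"width={width:.3f}, height={height:.3f}")
```

Because the model is linear in its parameters here, the fit is obtained in a single closed-form solve rather than by iterative template deformation, which is the distinction the abstract draws with deformable templates.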