Abstract
Spectrographic analysis, though a useful tool for research, has not provided a basis for 'visible speech' as originally hoped. On the other hand, lip reading provides a successful mode of visible speech, at least for some people. Consideration of models for speech perception suggests that the ear and brain transform the acoustic signal into a representation of articulator motion prior to its perception as speech, and that this representation may be supplemented by information provided by lip reading. An experiment has been conducted in which the formant frequencies F1 and F2, measured by a Speech Analyser, are represented in the height and width respectively of a loop simulating a mouth. In addition, /s/ and /f/ are represented by simulated teeth and mouth shape respectively. Preliminary tests suggest that a lip reader can quickly learn to recognise a small vocabulary presented in this way. It is hoped that opportunity will arise for more extensive testing as a method of communication or as a feedback device for teaching purposes. There may also be applications for entertainment, particularly as it is easy to curve the mouth to express pleasure or displeasure. An animated cartoon film that could be read by lip readers might have possibilities both for educating and entertaining the deaf, or there might even be wider applications for entertainment.
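The abstract's central mapping (F1 to the height of the mouth loop, F2 to its width) can be sketched as follows. This is a minimal illustration only: the formant ranges, the linear scaling, and the display units are assumptions for the sketch, not details taken from the original apparatus.

```python
def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] to [out_lo, out_hi], clamping to the range."""
    value = max(lo, min(hi, value))
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)


def mouth_shape(f1_hz, f2_hz):
    """Return (height, width) of the simulated mouth loop in arbitrary display units.

    F1 drives the loop height and F2 the loop width, as in the abstract.
    The input ranges (roughly the vowel formant space) and output sizes
    are illustrative assumptions.
    """
    height = scale(f1_hz, 250.0, 850.0, 1.0, 10.0)
    width = scale(f2_hz, 600.0, 2500.0, 2.0, 12.0)
    return height, width
```

A high-F1, low-F2 vowel such as /a/ would thus draw a tall, relatively narrow loop, while a low-F1, high-F2 vowel such as /i/ would draw a short, wide one, in rough correspondence with visible lip posture.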
