Automated Lip-Synch and Speech Synthesis for Character Animation
J. P. Lewis and F. I. Parke,
``Automated Lip-Synch and Speech Synthesis for Character Animation,''
Proceedings of ACM CHI+GI '87 Conference on Human Factors in Computing
Systems and Graphics Interface,
1987, pp. 143-147.
Abstract
An automated method of synchronizing facial animation to recorded speech
is described. In this method, a common speech synthesis method (linear
prediction) is adapted to provide simple and accurate phoneme recognition.
The recognized phonemes are then associated with mouth positions to provide
keyframes for computer animation of speech using a parametric model of the
human face.
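As a rough illustration of the recognition step the abstract describes, a frame of speech can be reduced to linear-prediction (LPC) coefficients and classified by its distance to stored per-phoneme reference coefficients. This is only a sketch of the general technique: the phoneme labels, model order, template values, and Euclidean distance metric below are assumptions, not details taken from the paper.

```python
import numpy as np

def lpc_coeffs(frame, order=10):
    """Estimate LPC coefficients of one speech frame via the
    autocorrelation method and Levinson-Durbin recursion."""
    n = len(frame)
    # Autocorrelation at lags 0..order
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12  # guard against silent (all-zero) frames
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err  # reflection coefficient
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a

def recognize(frame, templates, order=10):
    """Classify a frame as the phoneme whose reference LPC
    coefficient vector is nearest (illustrative metric)."""
    a = lpc_coeffs(frame, order)
    return min(templates, key=lambda ph: np.linalg.norm(a - templates[ph]))

# Hypothetical reference templates (order-1 models for brevity)
templates = {
    "AA": np.array([1.0, -0.9]),
    "IY": np.array([1.0, 0.5]),
}
frame = 0.9 ** np.arange(200)  # toy signal resembling an AR(1) decay
print(recognize(frame, templates, order=1))
```

In practice a distance measure suited to LPC models (e.g. a spectral or likelihood-ratio distortion) would be preferable to plain Euclidean distance on the coefficients; the structure of the classifier is the same.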
The linear prediction software, once implemented, can also be used for
speech resynthesis. The synthesis retains intelligibility and natural
speech rhythm while achieving a "synthetic realism" consistent with
computer animation. Speech synthesis also enables certain useful
manipulations for the purpose of computer character animation.
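The second step the abstract outlines, turning a recognized phoneme stream into keyframes for a parametric face model, amounts to a table lookup from phonemes to mouth-shape parameters. The phoneme labels, parameter names, and values below are purely illustrative, not the mapping used in the paper.

```python
# Illustrative table: phoneme -> (jaw_open, mouth_width) parameters
MOUTH_SHAPES = {
    "AA":  (0.9, 0.5),  # open vowel, as in "father"
    "IY":  (0.2, 0.9),  # spread vowel, as in "see"
    "UW":  (0.3, 0.1),  # rounded vowel, as in "who"
    "M":   (0.0, 0.5),  # closed lips
    "SIL": (0.0, 0.4),  # silence / rest position
}

def phonemes_to_keyframes(phoneme_track):
    """Convert a list of (time_sec, phoneme) pairs into keyframes
    for a parametric mouth model; unknown phonemes fall back to rest."""
    keyframes = []
    for t, ph in phoneme_track:
        jaw, width = MOUTH_SHAPES.get(ph, MOUTH_SHAPES["SIL"])
        keyframes.append({"time": t, "jaw_open": jaw, "mouth_width": width})
    return keyframes

track = [(0.00, "SIL"), (0.10, "M"), (0.18, "AA"), (0.35, "IY")]
for kf in phonemes_to_keyframes(track):
    print(kf)
```

Because the phonemes carry timestamps from the recorded speech, the resulting keyframes inherit the original speech rhythm, which is what keeps the animation synchronized.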