Tuesday, May 22, 2007

COMPUTER FACIAL ANIMATION

Computer facial animation is primarily an area of computer graphics that encapsulates models and techniques for generating and animating images of the human head and face. Because of its subject matter and output, it is also related to many other scientific and artistic fields, from psychology to traditional animation. The importance of the human face in verbal and non-verbal communication, together with advances in computer graphics hardware and software, has generated considerable scientific, technological, and artistic interest in computer facial animation.

The pioneering work on facial animation was done by Frederic I. Parke in the 1970s. Renewed interest in the topic in the mid-1980s included Keith Waters's muscle-based approach to modelling facial expression. Parke and Waters later wrote the definitive text on computer facial animation.

Talking 3D synthetic faces are now used in many applications involving human-computer interaction. Lip synchronization for these faces is still mostly done by hand by computer animators, and the existing work on automated lip-synchronized facial animation is largely driven by text input. An alternative approach generates lip-synchronized animation directly from speech: the speaker's recorded voice is analyzed, classified into lip-shape classes using a training set, and mapped onto the 3D model. Lip animation is driven by the facial muscles and the jaw, with the muscles modelled directly on the face model. For more realistic results, the facial tissue is modelled as well, taking into account the interactions between the epidermis, the subcutaneous layer, and the bone. With this approach, natural-looking facial animation can be achieved in real time on a personal computer.
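
To make the speech-driven pipeline more concrete, here is a minimal sketch of the two central steps: classifying per-frame audio features into lip-shape (viseme) classes from a training set, and converting the resulting class sequence into smoothed jaw and lip activation curves. It is not the system described above; the class names, the muscle channels, and the nearest-centroid classifier are all illustrative assumptions.

```python
import numpy as np

# Hypothetical viseme classes and their jaw/lip activation targets
# (names and values are illustrative, not taken from any specific system).
VISEME_TARGETS = {
    "closed": {"jaw_open": 0.0, "lip_round": 0.0, "lip_stretch": 0.0},
    "ah":     {"jaw_open": 0.8, "lip_round": 0.1, "lip_stretch": 0.2},
    "oo":     {"jaw_open": 0.3, "lip_round": 0.9, "lip_stretch": 0.0},
    "ee":     {"jaw_open": 0.2, "lip_round": 0.0, "lip_stretch": 0.8},
}

class VisemeClassifier:
    """Nearest-centroid classifier over per-frame audio feature vectors."""

    def fit(self, features, labels):
        labels = np.array(labels)
        self.classes_ = sorted(set(labels))
        # One centroid per viseme class, averaged over its training frames.
        self.centroids_ = np.array(
            [features[labels == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, features):
        # Distance of every frame to every class centroid; pick the nearest.
        d = np.linalg.norm(
            features[:, None, :] - self.centroids_[None, :, :], axis=2
        )
        return [self.classes_[i] for i in d.argmin(axis=1)]

def frames_to_muscle_curves(viseme_sequence, smoothing=0.6):
    """Turn a per-frame viseme sequence into smoothed activation curves."""
    channels = next(iter(VISEME_TARGETS.values())).keys()
    curves = {name: [] for name in channels}
    state = {name: 0.0 for name in channels}
    for v in viseme_sequence:
        target = VISEME_TARGETS[v]
        for name in curves:
            # Exponential smoothing stands in for muscle/tissue inertia.
            state[name] = smoothing * state[name] + (1.0 - smoothing) * target[name]
            curves[name].append(state[name])
    return curves

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy training set: each viseme gets 20 frames of 13-dim features
    # (stand-ins for real spectral features such as MFCCs).
    labels = [v for v in VISEME_TARGETS for _ in range(20)]
    centers = {v: rng.normal(size=13) for v in VISEME_TARGETS}
    feats = np.array([centers[v] + 0.1 * rng.normal(size=13) for v in labels])

    clf = VisemeClassifier().fit(feats, labels)
    test = np.array([centers["ah"] + 0.1 * rng.normal(size=13) for _ in range(10)])
    curves = frames_to_muscle_curves(clf.predict(test))
    print({k: [round(x, 2) for x in curves[k][:5]] for k in curves})
```

In a full system the activation curves would drive the muscle and jaw parameters of the face model each frame; the exponential smoothing here is only a crude stand-in for the tissue dynamics mentioned above.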
