Animating every subtle facial action by hand is an extremely tedious
task and requires the skill of a good animator.
Automatic lip synchronization is incorporated into animation systems
either through a hierarchical approach or by adding speech parameters.
A correspondence is established between each speech unit
and a basic lip shape. Some coarticulation effects are also taken
into account.
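The speech-unit-to-lip-shape correspondence can be sketched as a lookup table of mouth parameters, with a crude linear blend between neighbouring units to approximate coarticulation. All phoneme names, parameters, and weights below are illustrative assumptions, not a specific published system.

```python
# Hypothetical sketch: each speech unit maps to a basic lip shape
# (a small set of mouth parameters); neighbours are blended to
# approximate coarticulation. All values are illustrative.

LIP_SHAPES = {
    "AA": {"jaw_open": 0.8, "lip_round": 0.1},  # as in "father"
    "OO": {"jaw_open": 0.3, "lip_round": 0.9},  # as in "boot"
    "MM": {"jaw_open": 0.0, "lip_round": 0.2},  # bilabial closure
}

def coarticulate(phonemes, blend=0.25):
    """Blend each phoneme's lip shape with its neighbours'
    shapes to mimic coarticulation (a crude linear mix)."""
    frames = []
    for i, ph in enumerate(phonemes):
        shape = dict(LIP_SHAPES[ph])
        for j in (i - 1, i + 1):
            if 0 <= j < len(phonemes):
                for key, val in LIP_SHAPES[phonemes[j]].items():
                    shape[key] = (1 - blend) * shape[key] + blend * val
        frames.append(shape)
    return frames

frames = coarticulate(["MM", "AA", "OO"])
```

A real system would interpolate these key shapes over time and use a richer coarticulation model, but the table-plus-blend structure is the core idea.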
Three philosophical approaches have been taken to produce automatic
facial animation: rules, analysis, and performance.
- Rule-based approach:
Based on linguistic and psychological studies, a rule-based approach
generates facial animation automatically. A set of rules
describes the link between intonation, emotion, and facial
expressions. Multi-layer structures allow one to synchronize
effects (lip movements, conversational signals, emotions,
and so on).
- Analysis-based approach:
This technique consists of extracting information from a live-video sequence
and giving it as input to the animation
system . Such information
corresponds to muscle contractions or determination of FACS Action
Units. Automatic extraction of facial parameters is difficult due
to the subtlety and complexity of facial deformations and muscle correlations,
but it is a subject of much study in the computer vision community, as
reported by the Planning Workshop on Facial Expression Understanding.
- Performance-based approach:
By tracking various points on a live actor's face and texture-mapping
images onto an underlying polygon model, real-time facial animation
can be synthesized at little computational cost and without explicit analysis
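The performance-based idea above can be sketched minimally: displacements of tracked facial markers drive nearby vertices of the polygon model through a distance-weighted average. The 2D setting, linear falloff, and all coordinates are simplifying assumptions for illustration.

```python
# Hypothetical sketch: tracked marker displacements deform a polygon
# model's vertices via a distance-weighted average with linear falloff.
# Coordinates, weights, and the 2D setting are illustrative.

import math

def deform(vertices, markers, displaced, radius=1.0):
    """Move each mesh vertex by a weighted average of the tracked
    markers' displacements; weight falls off linearly with distance."""
    out = []
    for vx, vy in vertices:
        dx = dy = wsum = 0.0
        for (mx, my), (nx, ny) in zip(markers, displaced):
            d = math.hypot(vx - mx, vy - my)
            w = max(0.0, 1.0 - d / radius)  # linear falloff to zero
            dx += w * (nx - mx)
            dy += w * (ny - my)
            wsum += w
        if wsum > 0:
            out.append((vx + dx / wsum, vy + dy / wsum))
        else:
            out.append((vx, vy))  # out of range of all markers
    return out

# A vertex coincident with a marker follows its displacement exactly.
verts = deform([(0.0, 0.0)], [(0.0, 0.0)], [(0.1, 0.0)])
```

Because each frame is a handful of additions and multiplications per vertex, this kind of direct marker-driven deformation runs in real time, which is precisely the appeal of the performance-based approach.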