A significant component of facial animation concerns the development of controls that support the widest possible range of facial expression and conformation. Ideally, it should be possible to build a single framework for facial animation control in which the handles are intuitive and natural to use. This section therefore describes some of the fundamental problems in achieving this goal.
Though not usually done, the development of facial animation could be viewed as two independent activities: the development of control schemes, and the development of control implementation techniques. The control schemes may be viewed as control parameterizations, in which case animation becomes the process of specifying and controlling parameter set values as functions over time.
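The view of animation as "specifying and controlling parameter set values as functions over time" can be made concrete with a small sketch. This is an illustrative assumption, not any particular system's scheme: parameter names such as `jaw_open` and `brow_raise` are invented here, and linear keyframe interpolation stands in for whatever time functions a real system might use.

```python
# Hypothetical sketch: a control parameterization in which animation is just
# the evaluation of each parameter as a function of time. Parameter names
# and keyframe values are illustrative only.

def lerp(a, b, t):
    """Linear interpolation between keyframe values a and b, t in [0, 1]."""
    return a + (b - a) * t

class ParameterTrack:
    """One control parameter defined by a list of (time, value) keyframes."""
    def __init__(self, keys):
        self.keys = sorted(keys)

    def value_at(self, t):
        keys = self.keys
        if t <= keys[0][0]:
            return keys[0][1]          # clamp before the first key
        if t >= keys[-1][0]:
            return keys[-1][1]         # clamp after the last key
        for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
            if t0 <= t <= t1:
                return lerp(v0, v1, (t - t0) / (t1 - t0))

# An animation is then a parameter set sampled frame by frame over time.
tracks = {
    "jaw_open":   ParameterTrack([(0.0, 0.0), (1.0, 0.8)]),
    "brow_raise": ParameterTrack([(0.0, 0.2), (1.0, 0.2)]),
}

frame = {name: track.value_at(0.5) for name, track in tracks.items()}
```

Under this view, the control scheme is the choice of parameters and their meanings; the implementation technique is whatever turns a sampled `frame` into actual facial geometry.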
However, the animator is usually not interested in the specifics of an algorithmic implementation. From the animator's point of view, three key issues are relevant: (1) What are the control parameters? (2) How are these parameters manipulated? (3) Are the control parameters adequate and appropriate? From the animation system implementor's point of view, the three key issues are: (1) Which algorithms should be used to accomplish the facial animation? (2) What control parameters should be accessible to the user? (3) How should these control parameters be provided?
To date, most of the work in facial animation has concentrated on specific implementation techniques; little work has been done to establish optimal control and interface functionality. Consequently, the control features provided by each implementation have been shaped by the characteristics of that particular implementation, rather than by an attempt to fulfill a well-understood set of functionality and interface goals. This is characteristic of the field's infancy; questions concerning useful, optimal, and complete control parameterizations therefore remain mostly unanswered.
It seems important to provide a hierarchy of control. One way to think about this is as a set of ``nested'' black boxes, where each black box has a set of external control handles implemented by an internal mechanism. The ``black box'' at one level is the ``mechanism'' for the next level of abstraction. Two interesting questions follow: what are the different levels of control, and what are the mappings between them?
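The nested black-box idea can be sketched as a chain of mappings, each level exposing a few handles and implementing them by driving the handles of the level below. Everything here is an illustrative assumption: the level names, the muscle-like parameters, and the numeric mappings are invented for the example and do not come from any real system.

```python
# Hypothetical sketch of ``nested'' black boxes. Each function is one level's
# mechanism: it maps that level's external handles onto the handles of the
# level below it. All names and coefficients are illustrative.

def expression_level(smile):
    """High-level handle (a single 'smile' amount) -> muscle-like parameters."""
    return {"zygomatic_major": 0.9 * smile,
            "orbicularis_oris": 0.3 * smile}

def muscle_level(params):
    """Muscle-like parameters -> low-level geometric displacements (toy)."""
    return {"mouth_corner_x": 0.02 * params["zygomatic_major"],
            "lip_pucker":     0.01 * params["orbicularis_oris"]}

# The expression level's mechanism is, from the next level down, just a set
# of handles: composing the levels maps one high-level handle to geometry.
displacements = muscle_level(expression_level(smile=1.0))
```

The open questions in the text then become: how many such levels are useful, and what form should each mapping take?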
There are two major categories of control: expression control and conformation control. Expression control is, of course, concerned with changes of facial expression. Conformation control is used to specify a particular individual's face from the universe of possible faces. In the ideal case, these two control categories should be orthogonal: conformation should be independent of expression, and expression independent of conformation.
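One simple way to picture this orthogonality is an additive model, sketched below under invented assumptions: each face point is a neutral base position plus a conformation offset (which individual) plus an expression offset (what the face is doing), so either contribution can change without affecting the other. Real systems need not be additive; this is only a minimal illustration of independence.

```python
# Hypothetical sketch of orthogonal control: conformation and expression
# contribute independent offsets to a neutral base position. The offsets
# and their values are illustrative only.

def face_vertex(base, conformation_offset, expression_offset):
    """Combine independent conformation and expression contributions."""
    return tuple(b + c + e for b, c, e in
                 zip(base, conformation_offset, expression_offset))

base     = (0.0, 0.0, 0.0)
wide_jaw = (0.05, 0.0, 0.0)    # conformation: a particular individual
smile    = (0.01, 0.02, 0.0)   # expression: the same for any individual

# The same expression applied to two different conformations:
v_wide   = face_vertex(base, wide_jaw, smile)
v_narrow = face_vertex(base, (0.0, 0.0, 0.0), smile)
```

When the two categories are orthogonal in this sense, an expression authored once can be reused across individuals, and an individual can be re-sculpted without re-authoring expressions.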