The nature of the model will, to a certain extent, dictate the format in which signals are passed from the control module to the model to generate animation. For certain types of control, it may also be crucial that the model generate response patterns that simulate sensory feedback (this would be required, for example, if one were to attempt a physically realistic simulation of chewing that took the texture of the food into account).
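As a concrete illustration, the sketch below shows one way a model could return simulated sensory feedback alongside its response to a control signal. It is a minimal Python sketch under assumed names; the chewing model, its feedback record, and the hardness rule are all hypothetical:

\begin{verbatim}
# Hypothetical bidirectional control/model interface: the control
# module sends a jaw signal, the model returns sensory feedback.

class ChewFeedback:
    """Simulated sensory feedback returned to the control module."""
    def __init__(self, jaw_resistance, food_contact):
        self.jaw_resistance = jaw_resistance  # simulated load on the jaw
        self.food_contact = food_contact      # True once teeth meet food

class ChewingModel:
    def __init__(self, food_hardness):
        self.food_hardness = food_hardness
        self.jaw_closure = 0.0  # 0 = fully open, 1 = fully closed

    def apply(self, jaw_signal):
        """Advance one step under the control signal and return the
        feedback the controller needs to adapt its next command."""
        self.jaw_closure = min(1.0, self.jaw_closure + jaw_signal)
        contact = self.jaw_closure > 0.5
        resistance = self.food_hardness if contact else 0.0
        return ChewFeedback(resistance, contact)
\end{verbatim}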
A very general and natural way to think of the relationship between model and control is to mimic the human nervous system. For muscle control, this would require an input variable for each motor neuron innervating each muscle. The value of each motor-neuron variable at a given instant would specify how much stimulation the corresponding muscle receives. Aside from the fact that this scheme leads to a very large number of control variables, there is also the issue that muscles respond to stimulation in a highly non-linear way (for example, several motor neurons for a single muscle fire in sequence to maintain a constant force, and fatigue causes the force output for a given level of stimulation to vary over time).
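To make this non-linearity concrete, the following sketch pairs an assumed saturating recruitment curve with a simple fatigue factor; both functional forms are illustrative, not taken from any particular muscle model:

\begin{verbatim}
import math

def muscle_force(stimulation, fatigue, f_max=1.0):
    """Hypothetical nonlinear response of one muscle to motor-neuron
    stimulation: a saturating recruitment curve scaled down by
    accumulated fatigue."""
    recruitment = 1.0 - math.exp(-3.0 * stimulation)  # saturates near 1
    return f_max * recruitment * (1.0 - fatigue)

# The same stimulation level produces a declining force as fatigue grows:
for fatigue in (0.0, 0.2, 0.4):
    print(muscle_force(0.5, fatigue))
\end{verbatim}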
A higher-level control method that has been tried in practice is to abstract beyond the level of motor neurons and deal directly with the muscle forces. In several facial modeling systems, the control interface to the model is expressed as a time-varying force value for each muscle in the model. This method may still provide too many control variables for easy animation, and a control package may provide mechanisms by which the animator can create macro behaviors that drive several muscle forces at once. From the point of view of the model, it may be important to restrict the class of acceptable muscle-force inputs: muscles can only pull, not push, and only up to certain physical limits. There may also be constraints in the nervous system that dictate which muscles must operate as groups and which may be articulated independently. Some of this grouping information is encoded in the FACS system, and thus parts of the FACS system may be considered part of the modeling domain as well as the control domain.
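A sketch of such an interface appears below. The clamp in the muscle's force setter enforces the pull-only constraint and the physical force limit, while the macro class stands in for an animator-defined grouping such as a FACS-style action unit; all names and values are assumptions for illustration:

\begin{verbatim}
class Muscle:
    def __init__(self, name, max_force):
        self.name = name
        self.max_force = max_force
        self.force = 0.0

    def set_force(self, f):
        # Muscles only pull, and only up to a physical limit, so the
        # model clamps whatever force the controller requests.
        self.force = min(max(f, 0.0), self.max_force)

class MacroBehavior:
    """Animator-defined macro driving several muscle forces at once."""
    def __init__(self, weighted_muscles):
        self.weighted_muscles = weighted_muscles  # (muscle, weight) pairs

    def set_level(self, level):
        for muscle, weight in self.weighted_muscles:
            muscle.set_force(level * weight)

# One variable drives a whole muscle group, e.g. a brow raise:
left = Muscle("frontalis_left", 10.0)
right = Muscle("frontalis_right", 10.0)
brow_raise = MacroBehavior([(left, 1.0), (right, 1.0)])
brow_raise.set_level(4.0)
\end{verbatim}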
Beyond even the muscle control level, some implementations of a facial model may include very high-level hooks for describing expression directly. These may take the form of broad parameters that modify several regions of the face with a single variable (such as Parke's expression parameters), or they may abstract the facial model all the way to a state-based control (as in an image-based user interface agent that shows different pictures of a face for ``listening'', ``thinking'', or ``sleeping'' modes).
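The state-based extreme is simple to make concrete: the sketch below (with hypothetical state names and image paths) reduces the entire facial model to a lookup from discrete states to pictures:

\begin{verbatim}
# Hypothetical image-based agent: the "model" is a table of pictures.

FACE_IMAGES = {
    "listening": "faces/listening.png",
    "thinking":  "faces/thinking.png",
    "sleeping":  "faces/sleeping.png",
}

class FaceAgent:
    def __init__(self):
        self.state = "listening"

    def set_state(self, state):
        if state not in FACE_IMAGES:
            raise ValueError("unknown state: " + state)
        self.state = state

    def current_image(self):
        return FACE_IMAGES[self.state]
\end{verbatim}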
In human facial expressions, the element of time plays an important role in communication. Time should therefore also be available to the control module as a parameter for planning the sequence of commands sent to the model. Depending on the complexity of the facial model, commands may have a delayed impact on the shape of the face (for example, the muscles in the model may have a maximum contraction speed). For certain applications, the exact timing of certain events may be a critical goal (e.g. to synchronize with speech or music). One possible solution to this problem is to provide hooks in the model through which the control module can query the expected result of a particular control signal without actually invoking the control. A second possible solution is to express the control signal in terms of the desired result and to build enough intelligence into the model to determine how that result will be achieved.
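The first solution can be sketched as a side-effect-free prediction hook alongside the stateful command: the control module can test, against the muscle's maximum contraction speed, whether a target can be reached in time before committing. The names and the simple rate-limiting rule here are assumptions for illustration:

\begin{verbatim}
class TimedMuscle:
    def __init__(self, max_contraction_speed):
        self.max_speed = max_contraction_speed  # contraction units/second
        self.contraction = 0.0

    def predict(self, target, dt):
        """Return the contraction that WOULD result from commanding
        `target` for `dt` seconds, without changing any state."""
        step = max(-self.max_speed * dt,
                   min(self.max_speed * dt, target - self.contraction))
        return self.contraction + step

    def command(self, target, dt):
        """Actually apply the command."""
        self.contraction = self.predict(target, dt)

# Query before invoking: can the lips close within one frame?
lip = TimedMuscle(max_contraction_speed=2.0)
print(lip.predict(1.0, 1.0 / 30.0))  # about 0.067 -- not yet closed
\end{verbatim}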