The study of faces divided into two main categories, though not without overlap. The first was analysis, which included speech recognition and lip reading (both with and without sound), psychology, plastic surgery, and anesthesiology. The last item stems from evidence that EMG traces of facial muscles indicate levels of pain and consciousness even in patients who have been given anesthetics and muscle relaxants. The second category was synthesis, covering augmented speech and dialog synthesis, synthetic actors, cartoon animation, and virtual presence. Synthetic faces allowed much more freedom, including cartoon images, cats, Martians, caricatures, and distortions, in addition to realistic natural faces. One synthetic face was even reduced to lips alone, for speech studies.
Much of both the analysis and the synthesis work was motivated by an interest in communication. Data were presented suggesting that noisy speech is significantly easier to understand when either a real or a synthetic speaker is visible. Human-computer interfaces may be able to exploit this fact by adding face synthesis to voice synthesis and recognition interfaces. It is not yet possible, however, to predict how users will respond to talking heads.