Geometric feature-based matching uses a database containing a model of each face: the size and position of the eyes, mouth, and head outline, and the relationships among these features. For each image, the required inter-feature distances are computed, and the goal is a one-to-one correspondence between the stimulus (the face to be recognized) and a stored representation (a face in the database). Features extracted from vertical gradients are useful for detecting the top of the head, the eyes, the base of the nose, and the mouth; horizontal gradients help locate the left and right boundaries of the face and nose. A feature vector is computed for each face, and recognition is then performed with a nearest-neighbor classifier.
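The matching step can be sketched as follows. This is a minimal illustration, not any specific system's implementation: the feature names, the four-dimensional vectors, and all numeric values are invented for the example, and Euclidean distance is assumed as the nearest-neighbor metric.

```python
import numpy as np

# Hypothetical database: one geometric feature vector per enrolled face.
# Entries might be inter-feature distances in pixels (eye separation,
# eye-to-mouth distance, nose length, head width); values are illustrative.
database = {
    "alice": np.array([62.0, 48.5, 30.2, 110.0]),
    "bob":   np.array([58.3, 52.1, 33.7, 118.4]),
    "carol": np.array([65.1, 45.0, 28.9, 105.6]),
}

def recognize(probe, db):
    """Return the identity whose stored feature vector is nearest
    to the probe vector in Euclidean distance."""
    return min(db, key=lambda name: np.linalg.norm(db[name] - probe))

# Feature vector measured on the stimulus face.
probe = np.array([61.5, 48.0, 30.5, 109.2])
print(recognize(probe, database))  # prints "alice"
```

In practice the feature dimensions would be normalized (e.g. by inter-ocular distance) so that the comparison is invariant to image scale.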
Much work has gone into automating the extraction of facial features. One method for combining the curves obtained from edge detectors is a multiresolution approach: knowledge of the approximate positions of features at a coarse resolution guides the search at finer resolutions.
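The coarse-to-fine idea can be illustrated with a small sketch. Assumptions are loud here: the "feature" is simply the image row with the strongest vertical-gradient response (a crude stand-in for, say, the eye line), the pyramid is built by 2x2 block averaging, and the window size is arbitrary; a real system would track 2-D feature curves, not a single row.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def strongest_row(img):
    """Row with the largest total vertical-gradient magnitude."""
    grad = np.abs(np.diff(img, axis=0)).sum(axis=1)
    return int(np.argmax(grad))

def coarse_to_fine(img, levels=3, window=2):
    """Find the feature row at the coarsest pyramid level, then refine
    the estimate within a small window at each finer level."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    # Coarsest level: unconstrained search over all rows.
    row = strongest_row(pyramid[-1])
    # Finer levels: search only near the up-scaled previous estimate.
    for level in reversed(pyramid[:-1]):
        row *= 2
        lo = max(row - window, 0)
        hi = min(row + window + 1, level.shape[0] - 1)
        row = lo + strongest_row(level[lo:hi + 1])
    return row

# Synthetic image with a sharp horizontal edge between rows 9 and 10.
img = np.zeros((16, 16))
img[10:] = 1.0
print(coarse_to_fine(img))  # prints 9
```

The payoff is cost: the full-resolution image is searched only inside a narrow band around the coarse estimate, rather than exhaustively.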