
Correspondence via co-embedding prototypes and video segments

Computing a correspondence relationship satisfying transitive closure is in general a difficult problem. We therefore restrict our search to an important sub-family of correspondence functions for which it can be computed efficiently. Let $x \in \mathbb{R}^{n_s+n_p}$ denote the co-embedding of the prototypes and video segments, i.e. a one-dimensional coordinate for each of the $n_s$ video segments and $n_p$ prototypes. The correspondence function $C_x$ satisfies $\delta$-transitive closure with $C_x(p_i, s_j) = 1 \Leftrightarrow |x_{p_i} - x_{s_j}| \le \delta$. By restricting our search for correspondence to functions of the form $C_x$, we can rewrite the optimization problem in (1):

\[ x^* = \arg\min_x \frac{\sum_{i,j} M_{ij}\,(x_{p_i} - x_{s_j})^2}{\sigma^2(x)} \qquad\qquad (2) \]

where $\sigma(x)$ is the standard deviation of the vector $x$ (dividing by it removes the arbitrary scaling factor in the embedding and also prevents the trivial solution $x = 0$).
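As a concrete illustration of this objective, here is a minimal numeric sketch (the function name, the toy co-occurrence matrix `M`, and the embedding values are ours, not from the paper): it computes the co-occurrence-weighted squared distances between prototype and segment coordinates, normalized by the variance of the full embedding.

```python
import numpy as np

def coembedding_objective(M, x_seg, x_proto):
    """Objective of (2): co-occurrence-weighted squared distance between
    prototype and segment coordinates, divided by the variance of the
    whole embedding (which removes the arbitrary scale)."""
    x = np.concatenate([x_seg, x_proto])
    # diffs[i, j] = x_proto[i] - x_seg[j]
    diffs = x_proto[:, None] - x_seg[None, :]
    return np.sum(M * diffs**2) / np.var(x)

# Toy example: 2 prototypes, 3 segments (hypothetical co-occurrences).
M = np.array([[1.0, 1.0, 0.0],       # prototype 0 co-occurs with segments 0, 1
              [0.0, 0.0, 1.0]])      # prototype 1 co-occurs with segment 2
aligned    = coembedding_objective(M, np.array([0., 0., 1.]), np.array([0., 1.]))
misaligned = coembedding_objective(M, np.array([0., 0., 1.]), np.array([1., 0.]))
print(aligned, misaligned)  # the aligned placement achieves a lower cost
```

Placements that put co-occurring prototype-segment pairs at the same coordinate drive the numerator to zero, which is exactly the alignment that (2) rewards.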

The co-embedding optimization problem in (2) can be visualized as finding a placement (embedding) coordinate for each prototype and video segment in a low-dimensional space. We can imagine the co-occurrence relationships as springs that pull together (or apart) the prototypes and video segments, the nodes of figure 6(a), so that co-occurring elements become maximally aligned, as shown in figure 6(b). Maximal alignment in this case means that corresponding nodes are placed close to each other (on the x axis). In order to arrange the prototypes in the 1-D space, we need to know the positions of the video segments, and vice versa. Note that this chicken-and-egg structure of our unusual event detection is inherent in both the correspondence and the embedding problem. Fortunately, we can break the deadlock in an optimal and computationally efficient way.

Figure 6: Co-embedding of the email-word example. The top row of nodes represents emails, the bottom row words; the edges represent the co-occurrence between them. (a) Random embedding. (b) Optimal co-embedding using our method. The springs pull corresponding elements into alignment.

To compute the optimal co-embedding, we define a weighted graph $G = (V, E)$. We take the prototypes and video segments as nodes, $V = S \cup P$. The edges of this graph consist of edges between prototypes and video segments, which represent the co-occurrence relationship ($M_{ij}$), and edges between the prototype nodes, which represent the similarity between the prototypes, $A_{ij} = \operatorname{sim}(p_i, p_j)$. The edge weight matrix $W$ is defined as:

\[ W = \begin{bmatrix} 0 & M^T \\ M & \kappa A \end{bmatrix} \qquad\qquad (3) \]

where $\kappa$ is a weighting factor. When $\kappa = 0$, minimization of (2) is equivalent to minimization of
\[ x^* = \arg\min_x \frac{\sum_{i,j} W_{ij}\,(x_i - x_j)^2}{\sigma^2(x)} \qquad\qquad (4) \]
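The block structure of the weight matrix in (3) is easy to assemble in code. The sketch below assumes $M$ is the $n_p \times n_s$ prototype-segment co-occurrence matrix and $A$ the $n_p \times n_p$ prototype-similarity matrix, with segment nodes ordered before prototype nodes; the function name is ours.

```python
import numpy as np

def build_weight_matrix(M, A, kappa):
    """Assemble the (n_s + n_p) x (n_s + n_p) edge-weight matrix of (3).
    Segment-segment edges are zero; prototype-segment edges carry the
    co-occurrence M; prototype-prototype edges carry kappa * A."""
    n_p, n_s = M.shape
    W = np.zeros((n_s + n_p, n_s + n_p))
    W[:n_s, n_s:] = M.T          # segment-to-prototype block
    W[n_s:, :n_s] = M            # prototype-to-segment block (keeps W symmetric)
    W[n_s:, n_s:] = kappa * A    # prototype-to-prototype block
    return W
```

Setting `kappa = 0` leaves only prototype-segment edges, so the generic pairwise objective (4) over this graph reduces to the co-occurrence objective (2).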

Expanding the numerator of (4), we get $\sum_{i,j} W_{ij}\,(x_i - x_j)^2 = 2\,x^T (D - W)\, x$, where $D$ is a diagonal matrix with $D_{ii} = \sum_j W_{ij}$. Using the fact that

\[ \sigma^2(x) = \sum_i d_i\,(x_i - \bar{x})^2, \qquad \bar{x} = \sum_i d_i x_i, \qquad\qquad (5) \]

where $d_i$ is an estimate of the prior occurrence likelihood of each prototype or video segment, which can be estimated by its occurrence frequency: $d_i = D_{ii} / \sum_k D_{kk}$, with $\sum_i d_i = 1$. Centering the embedding around zero (i.e. $\sum_i d_i x_i = 0$), we get $\sigma^2(x) = \sum_i d_i x_i^2 = x^T D\, x / \sum_k D_{kk}$.
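This identity is straightforward to check numerically. The sketch below (all values illustrative) builds a random symmetric weight matrix, centers an embedding so that $\sum_i d_i x_i = 0$, and confirms that the $d$-weighted variance equals $x^T D x / \sum_k D_{kk}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n))
W = (W + W.T) / 2                    # symmetric edge weights
D = np.diag(W.sum(axis=1))           # degree matrix, D_ii = sum_j W_ij
d = np.diag(D) / np.trace(D)         # prior occurrence likelihoods, sum to 1

x = rng.standard_normal(n)
x = x - d @ x                        # center so that sum_i d_i * x_i = 0

sigma2 = d @ (x**2)                  # d-weighted variance of the centered x
quad = x @ D @ x / np.trace(D)       # x^T D x / sum_k D_kk
print(abs(sigma2 - quad))            # agrees up to floating-point rounding
```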

Putting all of this together (the constant factors $2$ and $\sum_k D_{kk}$ do not affect the minimizer), we can rewrite (4) as

\[ x^* = \arg\min_x \frac{x^T (D - W)\, x}{x^T D\, x} \qquad\qquad (6) \]

The global energy minimum of this function is achieved by the eigenvector corresponding to the second smallest eigenvalue of
\[ (D - W)\, x = \lambda D\, x \qquad\qquad (7) \]

Note that the first $n_s$ elements of the vector $x$ contain the coordinates of the video segment nodes, and the next $n_p$ elements contain the coordinates of the prototypes. Incidentally, this is the same eigenvector used in the different context of image segmentation (Normalized Cuts, [12]).
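Putting the pieces together, here is a minimal numpy sketch of the whole procedure (function and variable names are ours; the toy data is illustrative). It forms $W$ and $D$, solves the generalized eigensystem (7) via the equivalent symmetric problem $D^{-1/2}(D-W)D^{-1/2}y = \lambda y$ with $x = D^{-1/2}y$, and takes the eigenvector of the second smallest eigenvalue.

```python
import numpy as np

def coembed(M, A, kappa=0.2):
    """Co-embed n_s segments and n_p prototypes on the real line by
    solving (D - W) x = lambda * D x and taking the eigenvector of the
    second smallest eigenvalue.  Returns (x_segments, x_prototypes)."""
    n_p, n_s = M.shape
    n = n_s + n_p
    W = np.zeros((n, n))
    W[:n_s, n_s:] = M.T
    W[n_s:, :n_s] = M
    W[n_s:, n_s:] = kappa * A
    D = W.sum(axis=1)                      # degrees (must all be positive)
    # Whitened form: D^{-1/2} (D - W) D^{-1/2} y = lambda y,  x = D^{-1/2} y,
    # so a plain symmetric eigensolver suffices.
    d_inv_sqrt = 1.0 / np.sqrt(D)
    L = np.diag(D) - W
    L_sym = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)     # eigenvalues in ascending order
    x = d_inv_sqrt * vecs[:, 1]            # second smallest eigenvalue's eigenvector
    return x[:n_s], x[n_s:]

# Toy example: prototype 0 co-occurs with segments 0 and 1; prototype 1
# with segment 2; the prototypes are weakly similar to each other.
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
A = np.array([[0.0, 0.5],
              [0.5, 0.0]])
x_seg, x_proto = coembed(M, A)
# Co-occurring pairs land near each other on the embedding line:
print(abs(x_proto[0] - x_seg[0]) < abs(x_proto[0] - x_seg[2]))
```

Reading off correspondence is then just a matter of thresholding coordinate distances with $\delta$, as in the definition of $C_x$ above.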

Mirko Visontai 2004-05-13