Bidirectional Warping of Active Appearance Model

Ali Mollahosseini and Mohammad H. Mahoor
Department of Electrical and Computer Engineering
University of Denver, Denver, CO 80210
ali.mollahosseini@du.edu and mmahoor@du.edu
Abstract

Active Appearance Model (AAM) is a commonly used method for facial image analysis with applications in face identification and facial expression recognition. This paper proposes a new approach based on image alignment for AAM fitting called bidirectional warping. Previous approaches warp either the input image or the appearance template. We propose to warp both the input image, using incremental update by an affine transformation, and the appearance template, using an inverse compositional approach. Our experimental results on Multi-PIE face database show that the bidirectional approach outperforms state-of-the-art inverse compositional fitting approaches in extracting landmark points of faces with shape and pose variations.

1 Introduction

Facial landmark point extraction is a key step in facial image representation and analysis. The Active Appearance Model (AAM) proposed by Cootes et al. [2] is a powerful object description method that is commonly used for facial landmark point extraction [2, 9], facial action unit extraction [8], and medical image segmentation and analysis [3]. The idea behind AAM is to represent a visual object (e.g. a facial image) using a linear model of shape and texture (appearance) eigenvectors obtained from a set of manually labeled training images. The model is then used to represent an instance of the object in a novel image. This process is often called AAM fitting.

AAM fitting is a non-linear optimization problem. Different optimization approaches have been proposed to find the best model parameters that result in minimum error between the synthesized appearance models obtained from the AAM and the input image. In general, due to variation of camera view angle, resolution and focal distance, facial images have different scaling, rotation, and translations. In order to remove global shape variations, all shapes are normalized and the modeling is only concerned with local shape deformation. Therefore, it is necessary to combine a global shape transformation with the normalized AAM. The global shape transformation is often a 2D similarity transformation. Finding optimal parameters of the global transformation improves the accuracy of fitting in representing novel facial images with different shape and pose variations.

Traditionally, stochastic gradient descent or iteratively incremental additive techniques are used to update the AAM parameters to fit onto novel images [2]. The fitting problem can also be viewed as finding a model instance similar to the given facial image, and therefore it can be considered an image alignment problem. Baker and Matthews [1] have categorized these approaches into four classes: Forwards Additive, Forwards Compositional, Inverse Additive, and Inverse Compositional. They proposed the Projecting Out (PO) technique, which is arguably one of the fastest algorithms for AAM fitting [9]. They also proposed the Simultaneously Inverse Compositional (SIC) method, which can better handle images of subjects not included in the training set, at the price of losing speed [4].

In the literature, there are some works [6, 10] on image alignment for applications, such as motion estimation [6], that take advantage of the gradients of both the template and target images. These approaches are called bidirectional image alignment. Bidirectional approaches work better than unidirectional image alignment approaches [10]. In this paper, we reformulate AAM fitting using a bidirectional image alignment scheme.

In our approach, we minimize the error between a warped image and the appearance template by iteratively solving a non-linear least squares problem. The warping is a piecewise affine warp of a normalized AAM followed by a global transformation. In each iteration, the shape parameters are optimized based on the trained appearance template using the Inverse Compositional Algorithm (ICA) [1], and the global transformation is found based on the gradient of the input image using an incremental update. We call this approach bidirectional warping. Moreover, we utilize an affine transformation instead of a 2D similarity to increase the generality of the global shape transformation, and apply a fitting constraint to prevent the algorithm from resulting in non-face shapes. We show that the proposed bidirectional approach can be applied to the PO and SIC fitting methods. We study the performance of the proposed bidirectional PO and SIC methods in extracting facial landmark points, and examine and compare the effect of the proposed affine transformation and the fitting constraint on both the bidirectional and the original PO and SIC fitting methods.

The rest of this paper is organized as follows. Section 2 briefly introduces AAM algorithms and particularly reviews image alignment-based AAM fitting. Section 3 describes the bidirectional warping method. Experimental results are given in Section 4, and Section 5 concludes the paper.

2 Background

AAM consists of a shape component and an appearance component obtained from a set of annotated landmark points in training images. Let's assume we are given a training facial image set with annotated shapes defined as $s = (x_1, y_1, x_2, y_2, \ldots, x_v, y_v)^T$. The training images are first normalized and aligned using iterative Procrustes analysis [3]. This step removes variations due to a chosen global shape normalization transformation so that the resulting model can efficiently consider local and non-rigid shape deformation. We then can combine the resulting AAM with a global transformation. Afterwards, Principal Component Analysis (PCA) is applied to the set of normalized training shapes and a shape model is defined as:

s = s_0 + \sum_{i=1}^{n} p_i s_i \qquad (1)

where the base shape $s_0$ is the mean shape and the vectors $s_i$ are the eigenvectors corresponding to the $n$ largest eigenvalues. Then, all the training images are normalized by warping them into the base shape $s_0$, using a piecewise affine warp, and the appearance model is defined as:

A(x) = A_0(x) + \sum_{i=1}^{m} \lambda_i A_i(x), \quad \forall x \in s_0 \qquad (2)

where $A_0$ is the mean appearance and the vectors $A_i$ are the eigenvectors corresponding to the $m$ largest eigenvalues.
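Both linear models in Equations (1) and (2) can be built with standard PCA. The sketch below is an illustrative NumPy implementation (not the authors' Matlab code); it retains enough eigenvectors to explain a fixed fraction of the variance, as is common in AAM implementations. The function and parameter names are our own.

```python
import numpy as np

def build_pca_model(samples, var_kept=0.95):
    """Build a linear model (mean + eigenvectors) from row-stacked samples.

    samples: (k, d) array, one training shape (or appearance) per row.
    Returns the mean, the retained eigenvectors (as rows), and eigenvalues.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # PCA via SVD of the centered data matrix.
    _, sing, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = sing ** 2 / (len(samples) - 1)
    # Keep enough components to explain `var_kept` of the total variance.
    ratio = np.cumsum(eigvals) / eigvals.sum()
    n = int(np.searchsorted(ratio, var_kept)) + 1
    return mean, vt[:n], eigvals[:n]

def synthesize(mean, basis, params):
    """Instance of the linear model: mean + sum_i params_i * basis_i."""
    return mean + params @ basis
```

The same routine serves for both the shape model (Equation (1)) and the appearance model (Equation (2)); only the input samples differ.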

The goal of fitting is to find a model instance that can efficiently describe the object (e.g. face) in a given image. Thus, it can be considered an image alignment problem. In other words, we want to find the model instance $M(W(x; p))$ that is as similar as possible to the input image $I(x)$.

In general, facial images have different scaling, rotation, and translations. Therefore, it is necessary to combine a global shape transformation with the normalized AAM. If we denote the global shape transformation by $N(x; q)$, we want to minimize the error between the template $A_0(x)$ and $I(N(W(x; p); q))$. Considering the global shape transformation, the objective of the fitting process is to find p and q that minimize the error image:

\sum_{x \in s_0} \big[ A_0(x) - I(N(W(x; p); q)) \big]^2 \qquad (3)

which is a non-linear least squares problem. We can have different definitions for the global transformation $N(x; q)$. In [9], a set of 2D similarity transformations is defined as a subset of the piecewise affine warps. Assuming the base mesh $s_0 = (x_1^0, y_1^0, \ldots, x_v^0, y_v^0)^T$, we choose $s_1^* = s_0$, $s_2^* = (-y_1^0, x_1^0, \ldots, -y_v^0, x_v^0)^T$, $s_3^* = (1, 0, \ldots, 1, 0)^T$, and $s_4^* = (0, 1, \ldots, 0, 1)^T$; the global transformation is then $N(s; q) = s + \sum_{i=1}^{4} q_i s_i^*$. This representation of $N(x; q)$ is similar to $W(x; p)$, and therefore similar analysis on the shape parameters p can be applied to q. If we assume that the two sets of shape vectors $s_i$ and $s_i^*$ are orthogonal to each other, we can add the four 2D similarity vectors to the beginning of the AAM shape vectors [9] and model any given shape as $s = s_0 + \sum_{i=1}^{4} q_i s_i^* + \sum_{i=1}^{n} p_i s_i$. In practice, $s_i$ and $s_i^*$ are not quite orthogonal to each other. This can either be ignored when the magnitude of $q$ is small, or, preferably, the complete set of $s_i$ and $s_i^*$ can be orthonormalized.
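As an illustration, the four similarity basis vectors can be assembled from the base mesh as follows. This is a sketch under our own assumptions: the mesh is stored as an interleaved vector (x1, y1, ..., xv, yv), and the function name is hypothetical.

```python
import numpy as np

def similarity_basis(s0):
    """Four basis vectors spanning 2D similarity transforms of base mesh s0.

    s0 is interleaved (x1, y1, ..., xv, yv). A similarity transform
    x' = (1+a)x - b*y + tx, y' = b*x + (1+a)y + ty is then
    s0 + a*s1 + b*s2 + tx*s3 + ty*s4.
    """
    x, y = s0[0::2], s0[1::2]
    s1 = np.empty_like(s0); s1[0::2], s1[1::2] = x, y    # scaling
    s2 = np.empty_like(s0); s2[0::2], s2[1::2] = -y, x   # rotation
    s3 = np.zeros_like(s0); s3[0::2] = 1.0               # x-translation
    s4 = np.zeros_like(s0); s4[1::2] = 1.0               # y-translation
    return np.stack([s1, s2, s3, s4])
```

In a full implementation these four vectors would be prepended to the PCA shape vectors and the combined set orthonormalized, as discussed above.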

In [1], Baker et al. relate AAM to the Lucas-Kanade algorithm. They proposed the Inverse Compositional Algorithm (ICA), in which they find shape variation on the template and compose the inverse of that with the current shape. Therefore, many computationally expensive tasks are precomputed.

In [9], appearance variation is considered in the fitting by finding shape parameters in a linear subspace where the appearance variation is ignored and then “projected out” to the full space with respect to the appearance eigenvectors. The method is more generic compared with the ICA, but the fitting is not accurate when applied to subjects that are not similar to subjects in the training set. The “projecting out” approach is called PO in the rest of this paper.

In [4], the Simultaneously Inverse Compositional (SIC) method is introduced, which is more generic. In this method, the fitting procedure minimizes the error between $I(N(W(x; p); q))$ and $A_0(x) + \sum_{i=1}^{m} \lambda_i A_i(x)$, where $A_i$ are the appearance eigenvectors corresponding to the $m$ largest appearance eigenvalues, and $\lambda_i$ are the appearance parameters that are found simultaneously with the shape parameters p. As the appearance parameters are optimized in each iteration, both the steepest descent images and the Hessian matrix must be calculated in each iteration, and therefore the method is slower. In [4], the PO is compared with the SIC, and the SIC is reported to be more accurate in modeling unseen subjects.

3 Bidirectional Warping for AAM Fitting

In this paper, we optimize the global transformation's parameters q based on the gradient of the warped input image, using an incremental update, and the shape parameters p based on the gradient of the template $A_0$, using the inverse compositional approach. If we assume p and q are known, reversing the role of $W$ in the template term and computing the incremental global warp $N$ with respect to the warped input image, we can solve Equation (3) iteratively as:

\sum_x \big[ A_0(W(x; \Delta p)) - I(N(W(x; p); q + \Delta q)) \big]^2 \qquad (4)

Then, to update the warping parameters, we use $W(x; p) \leftarrow W(x; p) \circ W(x; \Delta p)^{-1}$ and $q \leftarrow q + \Delta q$. Assuming $W(x; 0)$ and $N(x; 0)$ are identity warps, a first-order Taylor series expansion of Equation (4) on $\Delta p$ and $\Delta q$ gives:

\sum_x \Big[ A_0(W(x; 0)) + \nabla A_0 \frac{\partial W}{\partial p} \Delta p - I(N(W(x; p); q)) - \nabla I \frac{\partial N}{\partial q} \Delta q \Big]^2 \qquad (5)

where $\nabla I$ is the image gradient, and $\partial W / \partial p$ and $\partial N / \partial q$ are the Jacobians of the warps evaluated at $(x; 0)$ and at the current q, respectively. By taking the derivative of Equation (5), neglecting second-order terms and optimizing for $\Delta p$ and $\Delta q$, we obtain:

\Delta p = H_p^{-1} \sum_x SD_p^T(x) \, E(x) \qquad (6a)
\Delta q = -H_q^{-1} \sum_x SD_q^T(x) \, E(x) \qquad (6b)

where $E(x) = I(N(W(x; p); q)) - A_0(x)$ is the error image, $SD_p(x) = \nabla A_0 \, \partial W / \partial p$, $SD_q(x) = \nabla I \, \partial N / \partial q$, and

H_p = \sum_x SD_p^T(x) \, SD_p(x) \qquad (7a)
H_q = \sum_x SD_q^T(x) \, SD_q(x) \qquad (7b)

As $SD_p$ is evaluated at $(x; 0)$, $H_p$ can be precomputed and saved in memory, while $SD_q$ depends on the current shape and the warped input image gradient, and therefore it must be computed in each iteration. Algorithm 1 shows the steps of the bidirectional warping for the inverse compositional algorithm. We call this approach Bi-ICA in the rest of this paper.

Pre-compute:
  (3) Evaluate the gradient $\nabla A_0$ of the template
  (4) Evaluate the Jacobian $\partial W / \partial p$ at $(x; 0)$
  (5) Compute the steepest descent images $SD_p = \nabla A_0 \, \partial W / \partial p$
  (6) Compute the Hessian matrix $H_p$ using Equation (7a)
Iterate:
  (1) Warp $I$ with $W(x; p)$ and $N(x; q)$ to compute $I(N(W(x; p); q))$
  (2) Compute the error image $E(x) = I(N(W(x; p); q)) - A_0(x)$
  (7) Evaluate the gradient $\nabla I$ of the warped image
  (8) Evaluate the Jacobian $\partial N / \partial q$ at the current q
  (9) Compute the steepest descent images $SD_q = \nabla I \, \partial N / \partial q$
  (10) Compute the Hessian matrix $H_q$ using Equation (7b)
  (11) Compute $\Delta p$ and $\Delta q$ using Equations (6a) and (6b)
  (12) Update $W(x; p) \leftarrow W(x; p) \circ W(x; \Delta p)^{-1}$ and $q \leftarrow q + \Delta q$
Algorithm 1 The Bidirectional Warping Algorithm
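The parameter updates of Equations (6a) and (7a) are ordinary Gauss-Newton computations. A minimal sketch, assuming the steepest descent images are flattened over pixels and stacked as columns (for the $\Delta q$ update of Equation (6b), the negated result would be used); the function name is our own:

```python
import numpy as np

def gauss_newton_update(sd, error):
    """Parameter update of Equations (6a) and (7a).

    sd:    (N, n) steepest-descent images, one column per parameter
           (each column is grad(A0) * dW/dp_i flattened over the N pixels).
    error: (N,) error image I(N(W(x; p); q)) - A0(x), flattened.
    Returns delta = H^{-1} SD^T error, with Hessian H = SD^T SD.
    """
    hessian = sd.T @ sd                            # Equation (7a)
    return np.linalg.solve(hessian, sd.T @ error)  # Equation (6a)
```

Since $SD_p$ is constant, `hessian` would be factored once outside the fitting loop in practice, while the $SD_q$ branch must rebuild it every iteration.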

The “projecting out” technique can be applied to the bidirectional warping, i.e. instead of $SD_p$ in Equations (6a) and (7a), the steepest descent images are calculated as:

SD_{po}(x) = SD_p(x) - \sum_{j=1}^{m} \Big[ \sum_x A_j(x) \, SD_p(x) \Big] A_j(x) \qquad (8)

Similar to the PO, $SD_{po}$ can be precomputed, but the dot product of the modified steepest descent images with the error image must be computed in each iteration. The bidirectional warping of the PO is called Bi-PO in the rest of this paper.
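Equation (8) is an orthogonal projection of each steepest descent image onto the complement of the appearance subspace. A small sketch, assuming the appearance eigenvectors are stored as orthonormal rows (the layout and name are our assumptions):

```python
import numpy as np

def project_out(sd, appearance_basis):
    """Project the appearance subspace out of the steepest-descent images.

    sd:               (N, n) steepest-descent images (one column per parameter).
    appearance_basis: (m, N) orthonormal appearance eigenvectors A_j (rows).
    Returns SD_po with each column orthogonal to every A_j (Equation (8)).
    """
    coeffs = appearance_basis @ sd          # (m, n) dot products sum_x A_j SD
    return sd - appearance_basis.T @ coeffs
```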

To have a more generic fitting, we can optimize the shape parameters in the full space of the appearance vectors. In this case, we need to optimize the appearance parameters as well as the shape parameters, as in the SIC method. The algorithm operates by iteratively minimizing:

\sum_x \Big[ A_0(W(x; \Delta p)) + \sum_{i=1}^{m} (\lambda_i + \Delta \lambda_i) A_i(W(x; \Delta p)) - I(N(W(x; p); q)) \Big]^2 \qquad (9)

simultaneously with respect to $\Delta p$ and $\Delta \lambda = (\Delta \lambda_1, \ldots, \Delta \lambda_m)^T$. Then we update the warp $W(x; p) \leftarrow W(x; p) \circ W(x; \Delta p)^{-1}$ and the appearance parameters $\lambda \leftarrow \lambda + \Delta \lambda$.

We define the concatenation of the shape and appearance parameters $r = (p^T, \lambda^T)^T$, and the steepest-descent images as:

SD_{sic}(x) = \Big( \nabla A \frac{\partial W}{\partial p_1}, \ldots, \nabla A \frac{\partial W}{\partial p_n}, A_1(x), \ldots, A_m(x) \Big) \qquad (10)

where $\nabla A = \nabla A_0 + \sum_{i=1}^{m} \lambda_i \nabla A_i$. We can then compute the parameter update as:

\Delta r = H_{sic}^{-1} \sum_x SD_{sic}^T(x) \, E(x) \qquad (11)

where $H_{sic} = \sum_x SD_{sic}^T(x) \, SD_{sic}(x)$.

To find the parameters of the global transformation q, we use the incremental update $q \leftarrow q + \Delta q$, where $\Delta q$ is computed as in Equation (6b). This approach is called Bi-SIC in the rest of this paper. In this case, both $SD_{sic}$ and $SD_q$ are calculated in each iteration. The extra computational load of Bi-SIC in comparison with SIC is calculating the gradient of the warped image and $\Delta q$ in each iteration.

In addition to the introduced bidirectional approach, we also propose two modifications to AAM fitting as follows:

1- Affine Transformation: Image alignment techniques for AAM fitting usually consider a set of 2D similarity transforms for the global transformation. Affine transformation can improve the performance of the Active Shape Model for facial feature extraction [7]. In this paper, we apply an affine transformation with six degrees of freedom for AAM fitting. Assuming the base mesh is $s_0 = (x_1^0, y_1^0, \ldots, x_v^0, y_v^0)^T$, we choose $s_1^* = (x_1^0, 0, \ldots, x_v^0, 0)^T$, $s_2^* = (y_1^0, 0, \ldots, y_v^0, 0)^T$, $s_3^* = (1, 0, \ldots, 1, 0)^T$, $s_4^* = (0, x_1^0, \ldots, 0, x_v^0)^T$, $s_5^* = (0, y_1^0, \ldots, 0, y_v^0)^T$, and $s_6^* = (0, 1, \ldots, 0, 1)^T$. The global affine transformation is defined as $N(s; q) = s + \sum_{i=1}^{6} q_i s_i^*$. This transformation has more degrees of freedom and therefore results in better modeling of the shape variation.
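Analogously to the similarity case, the six affine basis vectors can be assembled as follows; a sketch under our assumption of an interleaved (x1, y1, ..., xv, yv) mesh vector and a hypothetical function name:

```python
import numpy as np

def affine_basis(s0):
    """Six basis vectors spanning 2D affine transforms of base mesh s0:
    x' = (1+a)x + b*y + tx,  y' = c*x + (1+d)y + ty.
    s0 is interleaved (x1, y1, ..., xv, yv).
    """
    x, y = s0[0::2], s0[1::2]
    basis = np.zeros((6, s0.size))
    basis[0, 0::2] = x     # a: scales x by x-coordinates
    basis[1, 0::2] = y     # b: shears x by y-coordinates
    basis[2, 0::2] = 1.0   # tx: x-translation
    basis[3, 1::2] = x     # c: shears y by x-coordinates
    basis[4, 1::2] = y     # d: scales y by y-coordinates
    basis[5, 1::2] = 1.0   # ty: y-translation
    return basis
```

The two extra modes relative to the similarity transform are exactly the independent shears/scales that let the model absorb moderate out-of-plane pose changes.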

2- Fitting Constraint: The introduced approaches for AAM fitting still suffer from a lack of generality for unseen faces. In addition, the result can differ significantly from the trained shapes. One idea is to apply constraints during the fitting iterations. Defining a good constraint is not easy because of the complexity of the face shape, the huge variation of the appearance due to different subjects, illuminations and expressions, and the existence of non-face areas (e.g. glasses). In this paper, we apply a simple constraint from Active Shape Models (ASM) [3]: the shape parameters p are updated only within the range $|p_i| \le 3\sqrt{\lambda_i}$, where $\lambda_i$ are the eigenvalues of the trained shape model. This constraint forces the algorithm to produce shapes similar to the trained shapes, with a limited degree of freedom, and therefore prevents it from resulting in non-face shapes.
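This constraint amounts to clamping each shape parameter to within three standard deviations of its training mode. For instance (an illustrative sketch; names are our own):

```python
import numpy as np

def constrain_shape_params(p, eigvals, k=3.0):
    """Clamp each shape parameter so that |p_i| <= k * sqrt(lambda_i).

    p:       (n,) shape parameters after an update step.
    eigvals: (n,) eigenvalues of the trained shape model.
    """
    limit = k * np.sqrt(eigvals)
    return np.clip(p, -limit, limit)
```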

4 Experimental Results

We implemented the PO [9], the SIC [4], and our proposed Bi-PO and Bi-SIC methods on the Matlab platform. We also used the affine transformation for the global transformation instead of the 2D similarity, applied the introduced constraint to the PO, SIC, Bi-PO, and Bi-SIC methods, and called the resulting variants PO-AC, SIC-AC, Bi-PO-AC, and Bi-SIC-AC, respectively.

We applied the aforementioned methods to the CMU Multi-PIE face dataset [5]. The CMU Multi-PIE database contains more than 750,000 images of 337 people. Subjects were imaged under 15 viewpoints and 19 illumination conditions. The image resolution is 640×480, where the distance between the centers of the eyes is approximately 80 pixels. Certain poses of a subset are annotated with 68 facial landmark points. We selected a subset from the dataset containing 100 different subjects with the frontal head pose and the same illumination. We also selected 50 images of left and right head poses that have 68 facial landmark points. Figure 1 shows images of sample subjects in frontal, left and right poses.

Figure 1: Some sample images of frontal, left and right poses from Multi-PIE dataset [5].

To initialize the shape model in AAM fitting, we selected two outer eye corners and the chin point (3 points) from the ground truth landmarks and perturbed them randomly by 5 pixels. Then we used the average shape obtained from training subjects as the initial shape and transformed it using similarity transformation obtained by those three perturbed points. Figure 2(a) shows the initial shape for a sample image.
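One way to recover the initializing similarity transform from the three point correspondences is a linear least-squares fit; a sketch, where the (a, b, tx, ty) parameterization x' = a·x − b·y + tx, y' = b·x + a·y + ty and the function names are our own assumptions:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity (a, b, tx, ty) mapping src points to dst:
    x' = a*x - b*y + tx,  y' = b*x + a*y + ty.
    src, dst: (k, 2) arrays of corresponding points, k >= 2.
    """
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return params

def apply_similarity(params, pts):
    """Apply the (a, b, tx, ty) similarity to (k, 2) points."""
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a * x - b * y + tx, b * x + a * y + ty], axis=1)
```

Applying the fitted transform to the mean shape then yields the initial shape of Figure 2(a).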

(a) initial shape
(b) fitted shape
Figure 2: Initial and fitted shapes of a sample image.

We tested the performance of the PO, SIC, PO-AC, SIC-AC, Bi-PO, Bi-SIC, Bi-PO-AC, and Bi-SIC-AC methods when the number of images in the training sets varied using 10-fold cross validation. Particularly, we selected 10, 20, 30, 40, 50, 60, 70, 80 and 90 images randomly from the frontal subset and trained separate AAMs. For testing the generalization performance of the fitting methods, we fitted the trained models onto 10 images that are not included in the training sets and repeated this experiment 10 times for different test images. For comparing the fitting performance, we calculated the Root Mean Square Error (RMSE). The value of RMSE shows the distance between the fitted and the actual shape. Naturally, the smaller the RMSE, the better the fitting.
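With landmarks stored as (v, 2) coordinate arrays, the RMSE between the fitted and the ground-truth shape can be computed as below (an illustrative helper; the name is our own):

```python
import numpy as np

def shape_rmse(fitted, truth):
    """Root mean square distance (in pixels) between two landmark sets.

    fitted, truth: (v, 2) arrays of landmark coordinates.
    """
    return float(np.sqrt(np.mean(np.sum((fitted - truth) ** 2, axis=1))))
```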

In our first experiment, we examined the effect of using the affine transformation and the constraint on both the PO and SIC methods as well as the introduced bidirectional warping. Figure 3 shows the fitting RMSE of the PO, Bi-PO, PO-AC, and Bi-PO-AC on the frontal subset. Figure 4 shows the fitting RMSE of the SIC, Bi-SIC, SIC-AC, and Bi-SIC-AC on the frontal subset. In both experiments, using the affine transformation and the constraint improved the fitting performance. The constraint keeps the shape similar to the trained shapes (i.e. face-like) during the fitting process and prevents the algorithm from resulting in non-face shapes. In addition, the affine transformation gives the algorithm more degrees of freedom, and therefore it fits better on unseen samples. It is also shown that bidirectional warping has better fitting performance than unidirectional warping. Bi-PO and Bi-SIC have comparable fitting performance, and both fit better than the original unidirectional algorithms.

Figure 3: RMSE of fitting for variation of PO.
Figure 4: RMSE of fitting for variation of SIC.

There is no standard or established choice for the convergence criterion. In this paper, we visually inspected a number of results in the RMSE range of 0-20 and confirmed that those with RMSE less than 5 pixels appear successfully fitted. Figure 2(b) shows a sample fitted image with an RMSE of 4.02.

Figure 5 shows the percentage of fitted shapes for the frontal subset using PO, Bi-PO, PO-AC, and Bi-PO-AC. Figure 6 shows the percentage of fitted shapes for the frontal subset using SIC, Bi-SIC, SIC-AC and Bi-SIC-AC. As shown, the bidirectional warping has better performance than the unidirectional method. Also, applying the constraint and the affine transformation results in better modeling of unseen images and more convergence for both the PO and SIC. It should be mentioned that the percentage of fitted shapes depends on the threshold value, but empirically the algorithms keep roughly the same relative performance over a reasonable range of threshold values.

Figure 5: Percentage of fitted images for variation of PO.
Figure 6: Percentage of fitted images for variation of SIC.

In another experiment, we tested the generalization performance of our proposed approach for different poses. We trained an AAM with 120 images (40 images of each frontal, left and right subsets). To test the generality of the fitting, we fitted the trained model onto the 10 other subjects from each pose. We repeated this experiment five times and averaged the fitting results of the SIC, Bi-SIC, SIC-AC, and Bi-SIC-AC. Initial shape was again the warped average shape obtained from training subjects. Table 1 shows the average RMSE of fitting for frontal, left and right poses. Similarly, we defined a threshold of RMSE less than 5 pixels as the fitted shape. Table 2 shows the percentage of fitted shapes for frontal, left and right pose subsets. Similar to the previous experiment, using affine transformation and applying constraints on SIC improve the fitting performance. The introduced bidirectional approach also improves the SIC performance significantly, especially when we have pose variations.

         SIC    SIC-AC  Bi-SIC  Bi-SIC-AC
left     6.99   8.60    8.43    8.76
right    4.07   3.40    4.02    3.37
frontal  3.78   3.46    4.02    3.38
Table 1: RMSE of fitting on the frontal, left and right poses.
         SIC    SIC-AC  Bi-SIC  Bi-SIC-AC
left     72     76      62      72
right    80     88      80      90
frontal  86     90      84      96
Table 2: Percentage of fitted shapes on the frontal, left and right poses.

Computational Complexity: The bidirectional method introduces extra computation in every iteration of fitting. If we assume $n$ is the number of warp parameters, $N$ is the number of pixels, and $m$ is the number of top appearance eigenvectors, the complexity of the PO and SIC methods per iteration is $O(nN + n^2)$ and $O((n + m)^2 N)$, respectively [1]. In the bidirectional approach, we have $k$ parameters for the chosen global transformation, and in every iteration we need to compute: the gradient of the warped image (step 7) with complexity $O(N)$; the Jacobian (step 8) with complexity $O(kN)$; the steepest descent images (step 9) with complexity $O(kN)$; the Hessian matrix and its inverse (step 10) with complexity $O(k^2 N + k^3)$; and $\Delta q$ (step 11) with complexity $O(kN + k^2)$.

The complexity overhead of the bidirectional approach is therefore $O(k^2 N + k^3)$ per iteration. The numbers $n$ and $m$ depend on the size of the training set and the model dimensionalities. In most AAM implementations, the dimensionalities of the shape and appearance models are chosen by retaining a fixed percentage (typically 95%) of the variance in the eigenvalues [4]. In our experimental results, the values of $n$ and $m$ vary with the size of the training set. For the affine transformation, $k$ is 6. Hence, the complexity of Bi-PO is at least two times greater than that of PO, and the complexity of Bi-SIC is greater than that of SIC. However, this is based on the assumption of the same constant factor for all steps.

We implemented all algorithms in Matlab on a Windows platform and executed them on a PC with an Intel Core Duo 3.00 GHz CPU and 4 GB of RAM. All implementations have the same termination condition: the algorithm terminates if the shape does not change, or after at most 50 iterations. In practice, the implemented PO and SIC methods take 3 and 8 seconds per frame, while the Bi-PO and Bi-SIC-AC methods take 20 and 27 seconds, respectively.

5 Discussion and Conclusions

In summary, unlike previous image alignment approaches for AAM fitting that warp either the input image (e.g. Lucas-Kanade method) or the appearance template (e.g. inverse compositional algorithm), we warp both the input image for the global transformation and the template for the shape parameters in the fitting process. Warping both the input image and the appearance template causes the AAM to consider more appearance variations, and therefore it can fit better on images with different poses and appearances. We showed that the introduced bidirectional approach can be applied on the “projected out” and the “simultaneously inverse compositional” approaches for AAM fitting. We also proposed using affine transformation with six degrees of freedom instead of 2D similarity and applying a simple constraint to prevent the fitting algorithm from resulting in shapes far from face geometry.

We tested the performance of the proposed approach on the Multi-PIE dataset. We compared the accuracy of our proposed fitting approach with the PO and SIC methods. First, we trained the AAM with different numbers of training images and tested the fitting accuracy on unseen images. In another experiment, we compared the accuracy of fitting on images with different poses. Our experimental results showed that warping both the image and the template makes the AAM fitting more generic. In addition, applying the affine transformation gives the algorithm more degrees of freedom to model new face instances, and the proposed constraint in the fitting iterations prevents non-face shapes. In conclusion, our method is promising for modeling and tracking facial images of unseen subjects (i.e. a generic model), and also when the accuracy of AAM fitting has priority over the execution speed.

References

  • [1] S. Baker and I. Matthews. Lucas-kanade 20 years on: A unifying framework. International Journal of Computer Vision, 56(3):221–255, 2004.
  • [2] T. Cootes, G. Edwards, and C. Taylor. Active appearance models. Computer Vision—ECCV’98, pages 484–498, 1998.
  • [3] T. Cootes, C. Taylor, et al. Statistical models of appearance for computer vision. World Wide Web Publication, February, 2001.
  • [4] R. Gross, I. Matthews, and S. Baker. Generic vs. person specific active appearance models. Image Vision Comput., 23(12):1080–1093, 2005.
  • [5] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. Multi-PIE. In Automatic Face Gesture Recognition, 2008. FG ’08. 8th IEEE International Conference on, pages 1–8, 2008.
  • [6] Y. Keller and A. Averbuch. Fast motion estimation using bidirectional gradient methods. Image Processing, IEEE Transactions on, 13(8):1042–1054, 2004.
  • [7] M. H. Mahoor, M. Abdel-Mottaleb, A.-N. Ansari, et al. Improved active shape model for facial feature extraction in color images. Journal of multimedia, 1(4):21–28, 2006.
  • [8] M. H. Mahoor, S. Cadavid, D. S. Messinger, and J. F. Cohn. A framework for automated measurement of the intensity of non-posed facial action units. In Computer Vision and Pattern Recognition Workshops, 2009. CVPR Workshops 2009, pages 74–80. IEEE, 2009.
  • [9] I. Matthews and S. Baker. Active appearance models revisited. International Journal of Computer Vision, 60(2):135–164, 2004.
  • [10] R. Mégret, J. Authesserre, and Y. Berthoumieu. Bidirectional Composition on Lie Groups for Gradient-Based Image Alignment. Image Processing, IEEE Transactions on, 19(9):2369–2381, 2010.