Convolutional neural network architecture for geometric matching

Ignacio Rocco   Relja Arandjelović    Josef Sivic
DI ENS     INRIA    CIIRC
Département d’informatique de l’ENS, École normale supérieure, CNRS, PSL Research University, 75005 Paris, France.
Czech Institute of Informatics, Robotics and Cybernetics at the Czech Technical University in Prague.
Now at DeepMind.
Abstract

We address the problem of determining correspondences between two images in agreement with a geometric model, such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching, and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation, and that our matching layer significantly improves generalization to images never seen during training. Finally, we show that the same model can perform both instance-level and category-level matching, giving state-of-the-art results on the challenging Proposal Flow dataset.

1 Introduction

Estimating correspondences between images is one of the fundamental problems in computer vision [forsyth2002computer, hartley2003multiple], with applications ranging from large-scale 3D reconstruction [agarwal2009building] to image manipulation [HaCohen11] and semantic segmentation [rubinstein2013unsupervised]. Traditionally, correspondences consistent with a geometric model, such as epipolar geometry or a planar affine transformation, are computed by detecting and matching local features (such as SIFT [lowe2004distinctive] or HOG [Dalal05, ham2016]), followed by pruning incorrect matches using local geometric constraints [schmid1997local, Sivic03] and robust estimation of a global geometric transformation using algorithms such as RANSAC [Fischler81] or the Hough transform [Lamdan88, Leibe08, lowe2004distinctive]. This approach works well in many cases but fails in situations that exhibit (i) large changes of depicted appearance due to, e.g., intra-class variation [ham2016], or (ii) large changes of scene layout or non-rigid deformations that require complex geometric models with many parameters, which are hard to estimate in a manner robust to outliers.

In this work we build on the traditional approach and develop a convolutional neural network (CNN) architecture that mimics the standard matching process. First, we replace the standard local features with powerful trainable convolutional neural network features [Krizhevsky12, Simonyan15], which allows us to handle large changes of appearance between the matched images. Second, we develop trainable matching and transformation estimation layers that can cope with noisy and incorrect matches in a robust way, mimicking the good practices in feature matching such as the second nearest neighbor test [lowe2004distinctive], neighborhood consensus [schmid1997local, Sivic03] and Hough transform-like estimation [Lamdan88, Leibe08, lowe2004distinctive].

The outcome is a convolutional neural network architecture trainable for the end task of geometric matching, which can handle large appearance changes, and is therefore suitable for both instance-level and category-level matching problems.

Figure 1: Our trained geometry estimation network automatically aligns two images with substantial appearance differences. It is able to estimate large deformable transformations robustly in the presence of clutter.

2 Related work

The classical approach for finding correspondences involves identifying interest points and computing local descriptors around these points [harris1988combined, schmid1997local, lowe1999object, mikolajczyk2002affine, lowe2004distinctive, Berg05, bay2006surf]. While this approach performs relatively well for instance-level matching, the feature detectors and descriptors lack the generalization ability for category-level matching.

Recently, convolutional neural networks have been used to learn powerful feature descriptors which are more robust to appearance changes than the classical descriptors [jahrer2008learned, simo2015discriminative, han2015matchnet, zagoruyko2015learning, balntas2016pn]. However, these works still divide the image into a set of local patches and extract a descriptor individually from each patch. Extracted descriptors are then compared with an appropriate distance measure [jahrer2008learned, simo2015discriminative, balntas2016pn], by directly outputting a similarity score [han2015matchnet, zagoruyko2015learning], or even by directly outputting a binary matching/non-matching decision [altwaijry2016learning].

In this work, we take a different approach, treating the image as a whole instead of as a set of patches. Our approach has the advantage of capturing the interactions between different parts of the image to a greater extent, which is not possible when the image is divided into a set of local regions.

Also related are network architectures for estimating inter-frame motion in video [weinzaepfel2013deepflow, fischer2015flownet, thewlis16fully-trainable] or instance-level homography estimation [detone2016deep]; however, their goal is very different from ours, as they target high-precision correspondence with very limited appearance variation and background clutter. Closer to our work is the network architecture of [kanazawa2016warpnet], which, however, tackles a different problem of fine-grained category-level matching (different species of birds) with limited background clutter and small translations and scale changes, as their objects are largely centered in the image. In addition, their architecture is based on a different matching layer, which we show does not perform as well as the matching layer used in our work.

Some works, such as [Berg05, liu2011sift, Duchenne11, Kim13, long2014convnets, ham2016], have addressed the hard problem of category-level matching, but rely on traditional non-trainable optimization for matching [Berg05, liu2011sift, Duchenne11, Kim13, long2014convnets], or guide the matching using object proposals [ham2016]. In contrast, our approach is fully trainable in an end-to-end manner and does not require any optimization procedure at evaluation time, nor guidance by object proposals.

Others [learned-miller2006, shokrollahi2015unsupervised, Zhou15] have addressed the problems of instance- and category-level correspondence by performing joint image alignment. However, these methods differ from ours as they: (i) require class labels; (ii) do not use CNN features; (iii) jointly align a large set of images, while we align image pairs; and (iv) do not use a trainable CNN architecture for alignment as we do.

3 Architecture for geometric matching

In this section, we introduce a new convolutional neural network architecture for estimating parameters of a geometric transformation between two input images. The architecture is designed to mimic the classical computer vision pipeline (e.g. [philbin2007object]), while using differentiable modules so that it is trainable end-to-end for the geometry estimation task. The classical approach consists of the following stages: (i) local descriptors (e.g. SIFT) are extracted from both input images, (ii) the descriptors are matched across images to form a set of tentative correspondences, which are then used to (iii) robustly estimate the parameters of the geometric model using RANSAC or Hough voting.

Figure 2: Diagram of the proposed architecture. Images $I_A$ and $I_B$ are passed through feature extraction networks with tied parameters, followed by a matching network which matches the descriptors. The output of the matching network is passed through a regression network which outputs the parameters of the geometric transformation.

Our architecture, illustrated in Fig. 2, mimics this process by: (i) passing input images $I_A$ and $I_B$ through a siamese architecture consisting of convolutional layers, thus extracting feature maps $f_A$ and $f_B$ which are analogous to dense local descriptors, (ii) matching the feature maps (“descriptors”) across images into a tentative correspondence map $f_{AB}$, followed by (iii) a regression network which directly outputs the parameters of the geometric model, $\hat{\theta}$, in a robust manner. The inputs to the network are the two images, and the outputs are the parameters of the chosen geometric model, e.g. a 6-D vector for an affine transformation.

In the following, we describe each of the three stages in detail.

3.1 Feature extraction

The first stage of the pipeline is feature extraction, for which we use a standard CNN architecture. A CNN without fully connected layers takes an input image and produces a feature map $f \in \mathbb{R}^{h \times w \times d}$, which can be interpreted as an $h \times w$ dense spatial grid of $d$-dimensional local descriptors. A similar interpretation has been used previously in instance retrieval [Azizpour14, Babenko15, Gong14, arandjelovic2015netvlad], demonstrating the high discriminative power of CNN-based descriptors. Thus, for feature extraction we use the VGG-16 network [Simonyan15], cropped at the pool4 layer (before the ReLU unit), followed by per-feature L2-normalization. We use a pre-trained model, originally trained on ImageNet [deng2009imagenet] for the task of image classification. As shown in Fig. 2, the feature extraction network is duplicated and arranged in a siamese configuration such that the two input images are passed through two identical networks which share parameters.
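
For illustration only (our implementation uses MatConvNet), this stage could be sketched in PyTorch as follows; the torchvision layer index used to truncate VGG-16 after pool4 and the 240 × 240 placeholder input size are assumptions of the sketch, not prescribed values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Truncate after the fourth max-pooling layer; index 24 is an assumption
        # about where pool4 sits in torchvision's layer ordering.
        self.backbone = nn.Sequential(*list(vgg.features.children())[:24])
        for p in self.backbone.parameters():
            p.requires_grad = False   # pre-trained ImageNet features, kept fixed in this sketch

    def forward(self, image):
        f = self.backbone(image)                # (B, 512, h, w): dense grid of local descriptors
        return F.normalize(f, p=2, dim=1)       # per-feature L2-normalization

extractor = FeatureExtractor().eval()           # the SAME module processes both images (tied weights)
image_a = torch.randn(1, 3, 240, 240)           # placeholder inputs
image_b = torch.randn(1, 3, 240, 240)
f_a, f_b = extractor(image_a), extractor(image_b)
```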

3.2 Matching network

The image features produced by the feature extraction networks should be combined into a single tensor as input to the regressor network to estimate the geometric transformation. We first describe the classical approach for generating tentative correspondences, and then present our matching layer which mimics this process.

Tentative matches in classical geometry estimation.

Classical methods start by computing similarities between all pairs of descriptors across the two images. From this point on, the original descriptors are discarded, as all the necessary information for geometry estimation is contained in the pairwise descriptor similarities and their spatial locations. Secondly, the pairs are pruned, either by thresholding the similarity values or, more commonly, by only keeping the matches which involve the nearest (most similar) neighbors. Furthermore, the second nearest neighbor test [lowe2004distinctive] prunes the matches further by requiring that a match is significantly stronger than the second-best match involving the same descriptor, which is very effective at discarding ambiguous matches.
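
For reference, the second nearest neighbor test can be sketched in a few lines of NumPy; the 0.8 ratio threshold and the assumption of L2-normalized descriptors are conventional choices, and this snippet is not part of the proposed trainable pipeline:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Return (index_in_A, index_in_B) pairs passing the second nearest neighbor (ratio) test."""
    # Pairwise Euclidean distances via ||a - b||^2 = 2 - 2 a.b for unit-norm descriptors.
    dists = np.sqrt(np.maximum(2.0 - 2.0 * desc_a @ desc_b.T, 0.0))
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    # Keep a match only if it is clearly stronger than the runner-up for the same descriptor.
    keep = dists[rows, best] < ratio * dists[rows, second]
    return list(zip(rows[keep], best[keep]))
```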

Matching layer.

Our matching layer applies a similar procedure. Analogously to the classical approach, only descriptor similarities and their spatial locations should be considered for geometry estimation, and not the original descriptors themselves.

To achieve this, we propose to use a correlation layer followed by normalization. Firstly, all pairs of similarities between descriptors are computed in the correlation layer. Secondly, similarity scores are processed and normalized such that ambiguous matches are strongly down-weighted.

Figure 3: Correlation map computation with CNN features. The correlation map $c_{AB}$ contains all pairwise similarities between individual features of $f_A$ and $f_B$. At a particular spatial location $(i, j)$, the correlation map output contains all the similarities between $f_B(i, j)$ and all features of $f_A$.

In more detail, given L2-normalized dense feature maps $f_A, f_B \in \mathbb{R}^{h \times w \times d}$, the correlation map $c_{AB} \in \mathbb{R}^{h \times w \times (h \times w)}$ outputted by the correlation layer contains at each position the scalar product of a pair of individual descriptors, one from $f_A$ and one from $f_B$, as detailed in Eq. (1):

$$c_{AB}(i, j, k) = f_B(i, j)^{\top} f_A(i_k, j_k) \qquad (1)$$

where $(i, j)$ and $(i_k, j_k)$ indicate the individual feature positions in the $h \times w$ dense feature maps, and $k = h(j_k - 1) + i_k$ is an auxiliary indexing variable for $(i_k, j_k)$.

A diagram of the correlation layer is presented in Fig. 3. Note that at a particular position $(i, j)$, the correlation map $c_{AB}$ contains the similarities between $f_B$ at that position and all the features of $f_A$.

As is done in classical methods for tentative correspondence estimation, it is important to postprocess the pairwise similarity scores to remove ambiguous matches. To this end, we apply a channel-wise normalization of the correlation map at each spatial location to produce the final tentative correspondence map $f_{AB}$. The normalization is performed by ReLU, to zero out negative correlations, followed by L2-normalization, which has two desirable effects. First, let us consider the case when a descriptor of $f_B$ correlates well with only a single feature in $f_A$. In this case, the normalization will amplify the score of the match, akin to nearest neighbor matching in classical geometry estimation. Second, in the case of the descriptor matching multiple features in $f_A$ due to the existence of clutter or repetitive patterns, matching scores will be down-weighted, similarly to the second nearest neighbor test [lowe2004distinctive]. Note that both the correlation and the normalization operations are differentiable with respect to the input descriptors, which facilitates backpropagation, thus enabling end-to-end learning.
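
A minimal sketch of this matching layer, continuing the PyTorch notation from the feature extraction sketch, could look as follows; the row-major flattening of image-A positions into the channel dimension is a convention of this sketch rather than something prescribed above:

```python
import torch
import torch.nn.functional as F

def matching_layer(f_a, f_b):
    """Correlation layer followed by ReLU and channel-wise L2 normalization."""
    b, d, h, w = f_a.shape
    fa = f_a.view(b, d, h * w)                            # (b, d, hw_A)
    fb = f_b.view(b, d, h * w).transpose(1, 2)            # (b, hw_B, d)
    corr = torch.bmm(fb, fa)                              # corr[b, pos_B, pos_A] = f_B . f_A
    corr = corr.view(b, h, w, h * w).permute(0, 3, 1, 2)  # (b, hw_A, h_B, w_B)
    corr = F.relu(corr)                                   # zero out negative correlations
    return F.normalize(corr, p=2, dim=1)                  # L2-normalize over the hw_A channels

f_ab = matching_layer(f_a, f_b)   # tentative correspondence map, here (1, 225, 15, 15)
```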

Discussion.

The first step of our matching layer, namely the correlation layer, is somewhat similar to layers used in DeepMatching [weinzaepfel2013deepflow] and FlowNet [fischer2015flownet]. However, DeepMatching [weinzaepfel2013deepflow] operates directly on raw RGB patches and no part of its architecture is trainable. FlowNet [fischer2015flownet] uses a spatially constrained correlation layer such that similarities are only computed in a restricted spatial neighborhood, thus limiting the range of geometric transformations that can be captured. This is acceptable for their task of learning to estimate optical flow, but is inappropriate for the larger transformations that we consider in this work. Furthermore, neither of these methods performs score normalization, which we find to be crucial in dealing with cluttered scenes.

Previous works have used other matching layers to combine descriptors across images, namely simple concatenation of descriptors along the channel dimension [detone2016deep] or subtraction [kanazawa2016warpnet]. However, these approaches suffer from two problems. First, as the following layers are typically convolutional, these methods also struggle to handle large transformations, as they are unable to detect long-range matches. Second, when concatenating or subtracting descriptors, instead of computing pairwise descriptor similarities as is commonly done in classical geometry estimation and mimicked by the correlation layer, the raw image content is passed on directly. To further illustrate why this can be problematic, consider two pairs of images that are related by the same geometric transformation – the concatenation and subtraction strategies will produce different outputs for the two cases, making it hard for the regressor to deduce the geometric transformation. In contrast, the correlation layer is likely to produce similar correlation maps for the two cases, regardless of the image content, thus simplifying the problem for the regressor. In line with this intuition, in Sec. LABEL:sec:generalization we show that the concatenation and subtraction methods indeed have difficulties generalizing beyond the training set, while our correlation layer achieves generalization, yielding superior results.

3.3 Regression network

The normalized correlation map $f_{AB}$ is passed through a regression network which directly estimates parameters of the geometric transformation relating the two input images. In classical geometry estimation, this step consists of robustly estimating the transformation from the list of tentative correspondences. Local geometric constraints are often used to further prune the list of tentative matches [schmid1997local, Sivic03] by only retaining matches which are consistent with other matches in their spatial neighborhood. Final geometry estimation is done by RANSAC [Fischler81] or Hough voting [Lamdan88, Leibe08, lowe2004distinctive].

We again mimic the classical approach using a neural network, where we stack two blocks of convolutional layers, followed by batch normalization [ioffe2015batch] and the ReLU non-linearity, and add a final fully connected layer which regresses to the parameters of the transformation, as shown in Fig. 4. The intuition behind this architecture is that the estimation is performed in a bottom-up manner somewhat like Hough voting, where early convolutional layers vote for candidate transformations, and these are then processed by the later layers to aggregate the votes. The first convolutional layers can also enforce local neighborhood consensus [schmid1997local, Sivic03] by learning filters which only fire if nearby descriptors in image A are matched to nearby descriptors in image B, and we show qualitative evidence in Sec. LABEL:sec:whatislearned that this indeed does happen.

Figure 4: Architecture of the regression network. It is composed of two convolutional layers without padding and stride equal to 1, followed by batch normalization and ReLU, and a final fully connected layer which regresses to the transformation parameters.
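
The following sketch illustrates such a regression network, applied to the correspondence map f_ab from the matching-layer sketch; the channel counts and kernel sizes below are assumptions for illustration and not necessarily the exact values used in the network of Fig. 4:

```python
import torch
import torch.nn as nn

class RegressionNet(nn.Module):
    def __init__(self, in_channels=225, num_params=6, feat_hw=15):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=7, padding=0),  # valid convolution, stride 1
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 64, kernel_size=5, padding=0),
            nn.BatchNorm2d(64), nn.ReLU(),
        )
        out_hw = feat_hw - 7 + 1 - 5 + 1                   # spatial size after both valid convolutions
        self.fc = nn.Linear(64 * out_hw * out_hw, num_params)

    def forward(self, corr):
        x = self.conv(corr)
        return self.fc(x.flatten(1))                       # e.g. a 6-D vector for an affine model

theta_hat = RegressionNet()(f_ab)                          # (1, 6) affine parameter estimate
```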

Discussion.

A potential alternative to a convolutional regression network is to use fully connected layers. However, as the input correlation map size is quadratic in the number of image features, such a network would be hard to train due to the large number of parameters that would need to be learned, and it would not be scalable, as it would occupy too much memory and be too slow to use. It should be noted that even though the layers in our architecture are convolutional, the regressor can learn to estimate large transformations. This is because one spatial location in the correlation map contains similarity scores between the corresponding feature in image B and all the features in image A (cf. Eq. (1)), and not just a local neighborhood as in [fischer2015flownet].

3.4 Hierarchy of transformations

Another commonly used approach when estimating image to image transformations is to start by estimating a simple transformation and then progressively increase the model complexity, refining the estimates along the way [lowe1999object, Berg05, philbin2007object]. The motivation behind this method is that estimating a very complex transformation could be hard and computationally inefficient in the presence of clutter, so a robust and fast rough estimate of a simpler transformation can be used as a starting point, also regularizing the subsequent estimation of the more complex transformation.

We follow the same good practice and start by estimating an affine transformation, a 6-degree-of-freedom linear transformation capable of modeling translation, rotation, non-isotropic scaling and shear. The estimated affine transformation is then used to align image B to image A using an image resampling layer [Jaderberg15]. The aligned images are then passed through a second geometry estimation network which estimates the 18 parameters of a thin-plate spline transformation. The final estimate of the geometric transformation is obtained by composing the two transformations, which is itself a thin-plate spline. The process is illustrated in Fig. 5.

Figure 5: Estimating progressively more complex geometric transformations. Images A and B are passed through a network which estimates an affine transformation with parameters $\hat{\theta}_{\mathrm{Aff}}$ (see Fig. 2). Image A is then warped using this transformation to roughly align with B, and passed along with B through a second network which estimates a thin-plate spline (TPS) transformation that refines the alignment.
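
A hedged sketch of this warping step, using PyTorch's generic affine_grid/grid_sample sampler as a stand-in for the resampling layer of [Jaderberg15] and continuing the notation of the earlier sketches, is given below; the 2 × 3 parameter layout and coordinate convention are choices of the sketch:

```python
import torch
import torch.nn.functional as F

def warp_affine(image, theta_aff):
    """Differentiably warp `image` with a (B, 6) affine parameter vector."""
    # Depending on the chosen convention, theta may parametrize the forward or the inverse mapping.
    grid = F.affine_grid(theta_aff.view(-1, 2, 3), list(image.shape), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)

image_a_warped = warp_affine(image_a, theta_hat)
# The second stage receives (image_a_warped, image_b) and estimates a TPS refinement;
# composing the two estimates gives the final thin-plate spline transformation.
```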

4 Training

In order to train the parameters of our geometric matching CNN, it is necessary to design the appropriate loss function, and to use suitable training data. We address these two important points next.

4.1 Loss function

We assume a fully supervised setting, where the training data consists of pairs of images and the desired outputs in the form of the parameters $\theta_{GT}$ of the ground-truth geometric transformation. The loss function $\mathcal{L}$ is designed to compare the estimated transformation $\hat{\theta}$ with the ground-truth transformation $\theta_{GT}$ and, more importantly, to compute the gradient of the loss function with respect to the estimate $\hat{\theta}$. This gradient is then used in a standard manner to learn the network parameters which minimize the loss function by using backpropagation and stochastic gradient descent.

It is desired for the loss to be general and not specific to a particular type of geometric model, so that it can be used for estimating affine, homography, thin-plate spline or any other geometric transformation. Furthermore, the loss should be independent of the parametrization of the transformation and thus should not directly operate on the parameter values themselves. We address all these design constraints by measuring the loss on an imaginary grid of points which is being deformed by the transformation. Namely, we construct a grid of points in image A, transform it using the ground truth and neural network estimated transformations $\mathcal{T}_{\theta_{GT}}$ and $\mathcal{T}_{\hat{\theta}}$ with parameters $\theta_{GT}$ and $\hat{\theta}$, respectively, and measure the discrepancy between the two transformed grids by summing the squared distances between the corresponding grid points:

$$\mathcal{L}(\hat{\theta}, \theta_{GT}) = \frac{1}{N} \sum_{i=1}^{N} d\big(\mathcal{T}_{\hat{\theta}}(g_i), \mathcal{T}_{\theta_{GT}}(g_i)\big)^2 \qquad (2)$$

where $\mathcal{G} = \{g_i\} = \{(x_i, y_i)\}$ is the uniform grid used and $N = |\mathcal{G}|$. We define the grid so that each coordinate $x_i, y_i$ is taken from a uniform partition of $[-1, 1]$ into equally spaced steps. Note that we construct the coordinate system such that the center of the image is at $(0, 0)$ and that the width and height of the image are equal to 2, i.e. the bottom left and top right corners have coordinates $(-1, -1)$ and $(1, 1)$, respectively.

The gradient of the loss function with respect to the transformation parameters, needed to perform backpropagation in order to learn the network weights, can be computed easily if the location of the transformed grid points is differentiable with respect to $\hat{\theta}$. This is commonly the case; for example, when $\mathcal{T}_{\hat{\theta}}$ is an affine transformation, $\mathcal{T}_{\hat{\theta}}(g)$ is linear in the parameters and therefore the loss can be differentiated in a straightforward manner.
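
For the affine case, the loss of Eq. (2) can be sketched as follows; the 20 × 20 grid size and the parametrization of the affine transformation as a flattened 2 × 3 matrix are illustrative assumptions of the sketch:

```python
import torch

def affine_transform_points(theta, points):
    """Apply a (B, 6) affine transformation (read as a 2x3 matrix) to (N, 2) points."""
    A = theta.view(-1, 2, 3)
    homog = torch.cat([points, torch.ones(len(points), 1)], dim=1)    # (N, 3) homogeneous coordinates
    return torch.einsum('bij,nj->bni', A, homog)                       # (B, N, 2) transformed points

def grid_loss(theta_hat, theta_gt, grid_size=20):
    xs = torch.linspace(-1.0, 1.0, grid_size)
    grid = torch.stack(torch.meshgrid(xs, xs, indexing='ij'), dim=-1).reshape(-1, 2)
    p_hat = affine_transform_points(theta_hat, grid)
    p_gt = affine_transform_points(theta_gt, grid)
    return ((p_hat - p_gt) ** 2).sum(dim=-1).mean()    # mean squared grid-point distance

identity = torch.tensor([[1., 0., 0., 0., 1., 0.]])    # ground truth = identity, for illustration only
loss = grid_loss(theta_hat, identity)
```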

4.2 Training from synthetic transformations

Our training procedure requires fully supervised training data consisting of image pairs and a known geometric relation. Training CNNs usually requires a lot of data, and no public datasets exist that contain many image pairs annotated with their geometric transformation. Therefore, we opt for training from synthetically generated data, which gives us the flexibility to gather as many training examples as needed, for any 2-D geometric transformation of interest. We generate each training pair $(I_A, I_B)$ by sampling $I_A$ from a public image dataset, and generating $I_B$ by applying a random transformation $\mathcal{T}_{\theta_{GT}}$ to $I_A$. More precisely, $I_A$ is created from the central crop of the original image, while $I_B$ is created by transforming the original image with added symmetrical padding in order to avoid border artifacts; the procedure is shown in Fig. 6.

Figure 6: Synthetic image generation. Symmetric padding is added to the original image to enlarge the sampling region, its central crop is used as image A, and image B is created by performing a randomly sampled transformation $\mathcal{T}_{\theta_{GT}}$.
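
A simplified sketch of this generation procedure is given below; the padding size, crop size, and the range of the random affine perturbation are placeholders rather than the values used to build the actual training sets:

```python
import numpy as np
import torch
import torch.nn.functional as F

def make_training_pair(image, crop=240, pad=60):
    """image: (H, W, 3) array. Returns (image A, image B, ground-truth parameters)."""
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode='symmetric')
    t = torch.from_numpy(padded).float().permute(2, 0, 1).unsqueeze(0)   # (1, 3, H', W')

    # Randomly sampled ground-truth affine parameters (a small perturbation of the identity).
    theta_gt = torch.tensor([[1., 0., 0., 0., 1., 0.]]) + 0.25 * (torch.rand(1, 6) - 0.5)

    grid = F.affine_grid(theta_gt.view(1, 2, 3), list(t.shape), align_corners=False)
    warped = F.grid_sample(t, grid, align_corners=False)

    def center_crop(x):
        _, _, h, w = x.shape
        top, left = (h - crop) // 2, (w - crop) // 2
        return x[:, :, top:top + crop, left:left + crop]

    return center_crop(t), center_crop(warped), theta_gt
```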

5 Experimental results

In this section we describe our datasets, give implementation details, and compare our method to baselines and the state-of-the-art. We also provide further insights into the components of our architecture.

5.1 Evaluation dataset and performance measure

Quantitative evaluation of our method is performed on the Proposal Flow dataset of Ham et al. [ham2016]. The dataset contains 900 image pairs depicting different instances of the same class, such as ducks and cars, but with large intra-class variations, e.g. the cars are often of a different make, or the ducks can be of different subspecies. Furthermore, the images contain significant background clutter, as can be seen in Fig. LABEL:fig:qualitative. The task is to predict the locations of predefined keypoints from image A in image B. We do so by estimating a geometric transformation that warps image A into image B, and applying the same transformation to the keypoint locations. We follow the standard evaluation metric used for this benchmark, i.e. the average probability of correct keypoint (PCK) [Yang13], which is the proportion of keypoints that are correctly matched. A keypoint is considered to be matched correctly if its predicted location is within a distance of $\alpha \cdot \max(h, w)$ of the target keypoint position, where $\alpha = 0.1$ and $h$ and $w$ are the height and width of the object bounding box, respectively.
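
For concreteness, the PCK measure can be sketched as follows (a straightforward reimplementation of the definition above, not the official evaluation code):

```python
import numpy as np

def pck(pred_kp, gt_kp, bbox_h, bbox_w, alpha=0.1):
    """pred_kp, gt_kp: (K, 2) arrays of (x, y) keypoint locations in image B."""
    threshold = alpha * max(bbox_h, bbox_w)          # tolerance relative to the object bounding box
    dists = np.linalg.norm(pred_kp - gt_kp, axis=1)
    return float((dists <= threshold).mean())        # proportion of correctly matched keypoints
```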

5.2 Training data

Two different training datasets for the affine and thin-plate spline stages, dubbed StreetView-synth-aff and StreetView-synth-tps respectively, were generated by applying synthetic transformations to images from the Tokyo Time Machine dataset [arandjelovic2015netvlad] which contains Google Street View images of Tokyo.

Each synthetically generated dataset contains 40k images, divided into 20k for training and 20k for validation. The ground truth transformation parameters were sampled independently from reasonable ranges; e.g. for the affine transformation the relative scale change is sampled from a bounded range, while for the thin-plate spline we randomly jitter a 3 × 3 grid of control points by independently translating each point by up to one quarter of the image size in all directions.

In addition, a second training dataset for the affine stage was generated, created from the training set of Pascal VOC 2011 [pascal-voc-2011] which we dubbed Pascal-synth-aff. In Sec. LABEL:sec:generalization, we compare the performance of networks trained with StreetView-synth-aff and Pascal-synth-aff and demonstrate the generalization capabilities of our approach.

5.3 Implementation details

We use the MatConvNet library [vedaldi15matconvnet] and train the networks with stochastic gradient descent, with momentum 0.9, no weight decay and a batch size of 16. There is no need for jittering, as instead of data augmentation we can simply generate more synthetic training data. Input images are resized to a fixed resolution, and the resulting feature maps are passed into the matching layer. The affine and thin-plate spline stages are trained independently with the StreetView-synth-aff and StreetView-synth-tps datasets, respectively. Both stages are trained until convergence, which typically occurs after 10 epochs and takes 12 hours on a single GPU. Our final method for estimating affine transformations uses an ensemble of two networks that independently regress the parameters, which are then averaged to produce the final affine estimate. The two networks were trained on different ranges of affine transformations. As in Fig. 5, the estimated affine transformation is used to warp image A and pass it together with image B to a second network which estimates the thin-plate spline transformation. All training and evaluation code, as well as our trained networks, are available online at [website].

5.4 Comparison to state-of-the-art

We compare our method against SIFT Flow [liu2011sift], Graph-matching kernels (GMK) [Duchenne11], Deformable spatial pyramid matching (DSP) [Kim13], DeepFlow [revaud2015deepmatching], and all three variants of Proposal Flow (NAM, PHM, LOM) [ham2016]. As shown in Tab. LABEL:tab:pck, our method outperforms all others and sets the new state-of-the-art on this dataset. The best competing methods are based on Proposal Flow and make use of object proposals, which enables them to guide the matching towards regions of the images that contain objects. Their performance varies significantly with the choice of the object proposal method, illustrating the importance of this guided matching. In contrast, our method does not use any guiding, yet it still manages to outperform even the best Proposal Flow and object proposal combination.

Furthermore, we also compare to affine transformations estimated with RANSAC using the same descriptors as our method (VGG-16 pool4). The parameters of this baseline have been tuned extensively to obtain the best result by adjusting the thresholds for the second nearest neighbor test and by pruning proposal transformations which are outside of the range of likely transformations. Our affine estimator outperforms the RANSAC baseline on this task with 49% (ours) compared to 47% (RANSAC).

Methods PCK (%)
DeepFlow [revaud2015deepmatching] 20
GMK [Duchenne11] 27
SIFT Flow [liu2011sift] 38
DSP [Kim13] 29
Proposal Flow NAM [ham2016] 53
Proposal Flow PHM [ham2016] 55
Proposal Flow LOM [ham2016] 56
RANSAC with our features (affine) 47
Ours (affine) 49
Ours (affine + thin-plate spline) 56
Ours (affine ensemble + thin-plate spline) 57