End-to-End Integration of a
Convolutional Network, Deformable Parts Model
and Non-Maximum Suppression
Deformable Parts Models and Convolutional Networks each have achieved notable performance in object detection. Yet these two approaches find their strengths in complementary areas: DPMs are well-versed in object composition, modeling fine-grained spatial relationships between parts; likewise, ConvNets are adept at producing powerful image features, having been discriminatively trained directly on the pixels. In this paper, we propose a new model that combines these two approaches, obtaining the advantages of each. We train this model using a new structured loss function that considers all bounding boxes within an image, rather than isolated object instances. This enables the non-maximal suppression (NMS) operation, previously treated as a separate post-processing stage, to be integrated into the model, allowing for discriminative training of our combined ConvNet + DPM + NMS model in end-to-end fashion. We evaluate our system on the PASCAL VOC 2007 and 2011 datasets, achieving competitive results on both benchmarks.
1 Introduction

Object detection has been addressed using a variety of approaches, including sliding-window Deformable Parts Models [5, 19, 6], region proposal with classification [7, 16], and location regression with deep learning [13, 14]. Each of these methods has its own advantages, yet they are by no means mutually exclusive. In particular, structured parts models capture the composition of individual objects from component parts, yet often use rudimentary features like HoG that throw away much of the discriminative information in the image. By contrast, deep learning approaches [9, 18, 13], based on Convolutional Networks, extract strong image features, but do not explicitly model object composition. Instead, they rely on pooling and large fully connected layers to combine information from spatially disparate regions; these operations can throw away useful fine-grained spatial relationships important for detection.
In this paper, we propose a framework (shown in Fig. 1) that combines these two approaches, fusing together structured learning and deep learning to obtain the advantages of each. We use a DPM for detection, but replace the HoG features with features learned by a convolutional network. This allows the use of complex image features, but still preserves the spatial relationships between object parts during inference.
An often overlooked aspect of many detection systems is the non-maximal suppression stage, used to winnow the multiple high-scoring bounding boxes around an object instance down to a single detection. Typically, this is a post-processing operation applied to the set of bounding boxes produced by the object detector. As such, it is not part of the loss function used to train the model, and any parameters must be tuned by hand. However, as demonstrated by Parikh and Zitnick , NMS can be a major performance bottleneck (see Fig. 2). We introduce a new type of image-level loss function for training that takes into consideration all bounding boxes within an image. This differs from the losses used in existing frameworks, which consider single cropped object instances. Our new loss function enables the NMS operation to be trained as part of the model, jointly with the ConvNet and DPM components.
2 Related Work
Most closely related is the concurrent work of Girshick et al. , who also combine a DPM with ConvNet features in a model called DeepPyramid DPM (DP-DPM). Their work, however, is limited to integrating fixed pretrained ConvNet features with a DPM. We independently corroborate the conclusion that using ConvNet features in place of HoG greatly boosts the performance of DPMs. Furthermore, we show how using a post-NMS online training loss improves response ordering and addresses errors from the NMS stage. We also perform joint end-to-end training of the entire system.
The basic building blocks of our model architecture come from the DPMs of Felzenszwalb et al.  and Zhu et al. , and the ConvNet of Krizhevsky et al. . We make crucial modifications in their integration that enable the resulting model to achieve competitive object detection performance. In particular, we develop ways to transfer the ConvNet from the classification to the detection setting, and change the learning procedure to enable joint training of all parts.
The first system to combine structured learning with a ConvNet was that of LeCun et al. , who train a ConvNet to classify individual digits, then discriminatively train it on handwritten strings of digits. Very recently, Tompson et al.  trained a model for human pose estimation that feeds body part location estimates into a convolutional network, in effect integrating an MRF-like model with a ConvNet. Their system, however, requires annotated body part locations and is applied to pose estimation, whereas our system does not require annotated parts and is applied to object detection.
Some recent works have applied ConvNets to object detection directly: Sermanet et al.  train a network to regress on object bounding box coordinates at different strides and scales, then merge predicted boxes across the image. Szegedy et al.  regress to a bitmap with the bounding box target, which they then apply to strided windows. Both of these approaches directly regress to the bounding box from the convolutional network features, potentially ignoring many important spatial relationships. By contrast, we use the ConvNet features as input to a DPM. In this way, we can include a model of the spatial relationships between object parts.
In the R-CNN model, Girshick et al.  take a different approach in their use of ConvNets. Instead of integrating a location regressor into the network, they produce candidate region proposals with a separate mechanism, then use the ConvNet to classify each region. However, this explicitly resizes each region to the classifier's fixed field of view, introducing significant distortions to the input, and requires the entire network stack to be recomputed for each region. By contrast, our integration runs the features in a convolutional bottom-up fashion over the whole image, preserving the true aspect ratios and requiring only one computational pass.
End-to-end training of a multi-class detector and post-processing has also been discussed by Desai et al. . Their approach reformulates NMS as a contextual relationship between locations. They replace NMS, which removes duplicate detections, with a greedy search that adds detection results using an object class-pairs context model. Whereas their system handles interactions between different types of objects, our system integrates NMS in a way that creates an ordering of results both across different classes and within the same class across different views. In addition, we further integrate this into a full end-to-end system including ConvNet feature generation.
3 Model Architecture
The architecture of our model is shown in Fig. 3. For a given input image , we first construct an image pyramid (a) with five intervals over one octave (we use as many octaves as required to make the smallest dimension 48 pixels in size). We apply the ConvNet (b) at each scale to generate feature maps . These are then passed to the DPM (c) for each class; as we describe in Section 3.2, the DPM may also be formulated as a series of neural network layers. At training time, the loss is computed using the final detection output obtained after NMS (d), and this is then back-propagated end-to-end through the entire system, including NMS, DPM and ConvNet.
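As a concrete illustration, the pyramid construction above can be sketched as follows; the function name, interface and loop structure are our own, not the authors' code, but the defaults follow the stated five intervals per octave and 48-pixel lower bound.

```python
def pyramid_scales(height, width, intervals=5, min_dim=48):
    """Scale factors for the input pyramid: `intervals` scales per
    octave (successive scales differ by 2**(-1/intervals)), descending
    until the smaller image dimension would fall below `min_dim`."""
    scales = []
    i = 0
    while min(height, width) * 2.0 ** (-i / intervals) >= min_dim:
        scales.append(2.0 ** (-i / intervals))
        i += 1
    return scales
```

For a 480x640 image this yields scales from 1.0 down through each fifth-of-an-octave step until 48 pixels is reached.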
3.1 Convolutional Network
We generate appearance features using the first five layers of a Convolutional Network pre-trained for the ImageNet classification task. We first train an eight-layer classification model, composed of five convolutional feature extraction layers plus three fully-connected classification layers (with 4096 - 4096 - 1000 output units, and dropout applied to the two hidden layers). We use the basic model from , which trains the network using random 224x224 crops from the center 256x256 region of each training image, rescaled so the shortest side has length 256; this model achieves a top-5 error rate of 18.1% on the ILSVRC2012 validation set, voting with 2 flips and 5 translations. After this network has been trained, we discard the three fully-connected layers, replacing them with the DPM. The five convolutional layers are then used to extract appearance features.
Note that for detection, we apply the convolutional layers to images of arbitrary size (as opposed to ConvNet training, which uses fixed-size inputs). Each layer of the network is applied in a bottom-up fashion over the entire spatial extent of the image, so that the total computation performed is still proportional to the image size. This stands in contrast to , who apply the ConvNet with a fixed input size to different image regions, and is more similar to .
Applying the ImageNet classification model to PASCAL detection poses two scale-related problems that must be addressed. The first is that there is a total of 16x subsampling between the input and the fifth layer; that is, each pixel in  corresponds to 16 pixels of input. This is insufficient for detection, as it effectively constrains detected bounding boxes to lie on a 16-pixel grid. The second is that the ImageNet classifier was trained on objects that are fairly large, taking up much of the 224x224 image area. By contrast, many target objects in PASCAL are significantly smaller.
To address these, we simply apply the first convolution layer with a stride of 1 instead of 4 when combining with the DPM (however, we also perform 2x2 pooling after the top ConvNet layer due to speed issues in training, making the net resolution increase only a factor of 2). This addresses both scale issues simultaneously. The feature resolution is automatically increased by the elimination of the stride. Moreover, the scale of objects presented to the network at layers 2 and above is increased by a factor of 4, better aligning the PASCAL objects to the ImageNet expected size. This is because when the second layer is applied to the output of the stride-1 maps, its field of view covers a 4x smaller image region than with stride 4, effectively increasing the apparent size of input objects.
Note that changing the stride of the first layer is effectively the same as upsampling the input image, but preserves resolution in the convolutional filters (if the filters were downsampled, these would be equivalent operations; however we found this to work well without changing the filter size, as they are already just 11x11).
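The resolution arithmetic above can be made concrete with a small sketch (pure size bookkeeping; the 4x factor attributed to the remaining layers follows from the original 16x total with a stride-4 first layer):

```python
def total_subsampling(conv1_stride, extra_pool=1, rest_of_net=4):
    """Input-to-feature-map subsampling. The unmodified network has 16x
    total subsampling with a stride-4 first layer, so the remaining
    layers contribute 4x (`rest_of_net`); `extra_pool` models the 2x2
    pooling added after the top ConvNet layer."""
    return conv1_stride * rest_of_net * extra_pool

original = total_subsampling(conv1_stride=4)                 # 16x (Section 3.1)
modified = total_subsampling(conv1_stride=1, extra_pool=2)   # 8x (Section 3.3)
```

The modified pipeline thus places detections on an 8-pixel grid rather than a 16-pixel one, consistent with the back-projection arithmetic in Section 3.3.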
3.2 Deformable Parts Model
3.2.1 Part Responses
The first step in the DPM formulation is to convolve the appearance features with the root and parts filters, producing appearance responses. Each object view has both a root filter and nine parts filters; the parts are arranged on a 3x3 grid relative to the root, as illustrated in Fig. 4. (This is similar to , who find this works as well as the more complex placements used by ). Note that the number of root and parts filters is the same for all classes, but the size of each root and part may vary between classes and views.
Given appearance filters for each class and view , and filters for each part , the appearance scores are:
Part responses are then fed to the deformation layer.
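A minimal numpy sketch of this step, using toy filter sizes (a 6x6 root and nine 3x3 parts on 8-channel features; all names and sizes here are illustrative, not the paper's actual dimensions):

```python
import numpy as np

def filter_response(features, filt):
    """Appearance score of one filter at every valid placement:
    dense cross-correlation of a (H, W, C) feature map with an
    (h, w, C) filter."""
    H, W, C = features.shape
    h, w, _ = filt.shape
    out = np.empty((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(features[y:y + h, x:x + w] * filt)
    return out

# toy example: one root filter and a 3x3 grid of nine part filters
rng = np.random.default_rng(0)
feats = rng.standard_normal((20, 20, 8))     # ConvNet features at one scale
root = rng.standard_normal((6, 6, 8))
parts = [rng.standard_normal((3, 3, 8)) for _ in range(9)]

root_score = filter_response(feats, root)
part_scores = [filter_response(feats, p) for p in parts]
```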
3.2.2 Deformation Layer
The deformation layer finds the optimal part locations, accounting for both appearance and a deformation cost that models the spatial relation of the part to the root. Given appearance scores , part location  relative to the root, and deformation parameters  for each part, the deformed part responses are the following (input variables omitted):
where is the part response map shifted by spatial offset , and is the shape deformation feature. are the deformation weights.
Note the maximum in Eqn. 3 is taken independently at each output spatial location: i.e. for each output location, we find the max over possible deformations . In practice, searching globally is unnecessary, and we constrain to search over a window where is the spatial size of the part (in feature space). During training, we save the optimal at each output location found during forward-propagation to use during back-propagation.
The deformation layer extends standard max-pooling over with (i) a shift offset accounting for the part location, and (ii) deformation cost . Setting both of these to zero would result in standard max-pooling.
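A direct (unoptimized) sketch of the deformation layer written from the description above; the quadratic deformation cost and constrained search window follow the text, while the interface is our own. As noted, setting the deformation weights to zero recovers plain max-pooling:

```python
import numpy as np

def deformation_layer(part_score, w, radius):
    """At each output location, max over displacements (dy, dx) within
    a square window of the part response minus a quadratic deformation
    cost w . (dx, dy, dx^2, dy^2). Returns the deformed response and
    the argmax displacements, which are saved for back-propagation.
    `part_score` is assumed already shifted by the part's anchor."""
    H, W = part_score.shape
    out = np.full((H, W), -np.inf)
    best = np.zeros((H, W, 2), dtype=int)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = w[0] * dx + w[1] * dy + w[2] * dx * dx + w[3] * dy * dy
            for y in range(H):
                for x in range(W):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        v = part_score[yy, xx] - cost
                        if v > out[y, x]:
                            out[y, x] = v
                            best[y, x] = (dy, dx)
    return out, best

# with zero deformation weights this reduces to plain max-pooling
m = np.array([[0., 1., 0.],
              [2., 0., 0.],
              [0., 0., 3.]])
pooled, best_dxdy = deformation_layer(m, w=np.zeros(4), radius=1)
```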
3.2.3 AND/OR Layer
Combining the scores of root, parts and object views is done using an AND-like accumulation over parts to form a score for each view , followed by an OR-like maximum over views to form the final object score :
is then the final score map for class at scale , given the image as shown in Fig. 5(left).
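The AND/OR combination is a sum over parts followed by a pointwise max over views; a minimal numpy sketch (the interface is assumed, and any per-view bias term is omitted):

```python
import numpy as np

def and_or_layer(root_scores, part_scores_per_view):
    """AND-like accumulation: sum each view's root and deformed part
    responses. OR-like combination: pointwise max over views."""
    view_scores = [root + sum(parts)
                   for root, parts in zip(root_scores, part_scores_per_view)]
    return np.maximum.reduce(view_scores)

# two toy views over a 2x2 score map
roots = [np.zeros((2, 2)), np.ones((2, 2))]
parts = [[np.full((2, 2), 2.0)], [np.zeros((2, 2))]]
final = and_or_layer(roots, parts)   # view scores are 2.0 and 1.0 everywhere
```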
(Fig. 5, right: aligning a bounding box from the DPM prediction back through the convolutional network; see Section 3.3.)
3.3 Bounding Box Prediction
After obtaining activation maps for each class at each scale of input, we trace the activation locations back to their corresponding bounding boxes in input space. Detection locations in are first projected back to boxes in , using the root and parts filter sizes and inferred parts offsets. These are then projected back into input space through the convolutional network. As shown in Fig. 5(right), each pixel in has a field of view of 35 pixels in the input, and moving by 1 pixel in moves by 8 pixels in the input (due to the 8x subsampling of the convolutional model). Each bounding box is obtained by choosing the input region that lies between the field of view centers of the box’s boundary. This means that 17 pixels on all sides of the input field of view are treated as context, and the bounding box is aligned to the interior region.
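The coordinate mapping above can be written out explicitly using the stated 8x subsampling and 35-pixel field of view (the function name and signature are our own sketch):

```python
def feature_box_to_input(x0, y0, x1, y1, stride=8, fov=35):
    """Project a box in layer-5 feature coordinates back to input
    pixels. Feature pixel i has a fov-wide field of view spanning input
    pixels [i*stride, i*stride + fov - 1], centered at
    i*stride + (fov - 1)//2; taking the centers of the boundary pixels
    leaves (fov - 1)//2 = 17 pixels of context outside the box on
    every side."""
    c = (fov - 1) // 2
    return (x0 * stride + c, y0 * stride + c,
            x1 * stride + c, y1 * stride + c)
```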
3.4 Non-Maximal Suppression (NMS)
The procedure above generates a list of label assignments for the image, where is a bounding box, and and are its associated class label and network response score, i.e. is equal to at the output location corresponding to box . is the set of all possible bounding boxes in the search.
The final detection result is a subset of this list, obtained by applying a modified version of non-maximal suppression derived from . If we label location as object type , some neighbors of might also have received high scores, where the neighbors of are defined as . However, should not be labeled as , to avoid duplicate detections. Applying this, we get a subset of as the final detection result; usually .
When calculating , we use a symmetric form when the bounding boxes are for different classes, but an asymmetric form when the boxes are both of the same class. For different-class boxes, , and threshold . For same-class boxes, e.g. boxes of different views or locations, and .
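A greedy sketch of this modified NMS; the symmetric/asymmetric split between different-class and same-class pairs follows the text, but the overlap definitions and threshold values are placeholders, since the paper's exact values are not reproduced in this text:

```python
def box_area(b):
    x0, y0, x1, y1 = b
    return max(0.0, x1 - x0) * max(0.0, y1 - y0)

def box_inter(a, b):
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return box_area((x0, y0, x1, y1))

def nms(detections, thr_same=0.5, thr_diff=0.5):
    """Greedy suppression over (box, class, score) triples, highest
    score first. Different-class pairs use symmetric intersection-over-
    union; same-class pairs use the asymmetric fraction of the
    candidate box covered by an already-kept box. Thresholds here are
    assumptions."""
    keep = []
    for box, cls, score in sorted(detections, key=lambda d: -d[2]):
        suppressed = False
        for kbox, kcls, _ in keep:
            inter = box_inter(box, kbox)
            if cls != kcls:
                union = box_area(box) + box_area(kbox) - inter
                suppressed = inter / (union + 1e-9) > thr_diff
            else:
                suppressed = inter / (box_area(box) + 1e-9) > thr_same
            if suppressed:
                break
        if not suppressed:
            keep.append((box, cls, score))
    return keep
```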
4 Final Prediction Loss
Our second main contribution is the use of a final-prediction loss that takes into account the NMS step used in inference. In contrast to bootstrapping with a hard negative pool, such as in , we consider each image individually when determining positive and negative examples, accounting for NMS and the views present in the image itself. Consider the example in Fig. LABEL:fig:person: a person detector may fire on three object views: red, green, and blue. The blue (largest in this example) is closest to the ground truth, while green and red are incorrect predictions. However, we cannot simply add the green or red boxes to a set of negative examples, since they are indeed present in other images as occluded people. This leads to a situation where the red view has a higher inference score than blue or green, i.e. and , because red is never labeled as negative in the bootstrapping process. After NMS, the blue response will be suppressed by red, causing an NMS error. Such an error can only be avoided when we have a global view of each image: if , then we would have a correct final prediction.
4.2 Loss Function
Recall that the NMS stage produces a set of assignments predicted by the model from the set of all possible assignments. We compose the loss using two terms, and . The first, , measures the cost incurred by the assignment currently predicted by the model, while measures the cost incurred by an assignment close to the ground truth. The current prediction cost is:
where , i.e. a squared hinge error ( is an indicator function that equals 1 iff the condition holds). is the set of all bounding boxes predicted to be in the background (): with . and are the set of positive predicted labels and the set of background labels, respectively.
The second term in the loss, , measures the cost incurred by the ground truth bounding boxes under the model. Let the ground truth bounding box set be . We construct a constrained inference assignment close to by choosing for each the box , where the is taken over all ; that is, the box with highest response out of those with sufficient overlap with the ground truth. ( in our experiments.) Similarly to before, the cost is:
Thus we measure two costs: that of the current model prediction, and that of an assignment close to the ground truth. The final discriminative training loss is the difference between these two:
Note this loss is always at least 0, because the constrained assignment always has cost at least as large as the unconstrained one; it equals 0 when , i.e. when we produce detection results which are consistent with the ground truth .
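A heavily hedged sketch of the two costs and their difference. The squared-hinge placement and the margin of 1 are our reconstruction from the description above (positives should score high, background low), not the paper's verbatim equations:

```python
def sq_hinge(z):
    """Squared hinge: max(0, z)**2."""
    return max(0.0, z) ** 2

def assignment_cost(pos_scores, bg_scores, margin=1.0):
    """Cost of one assignment: each selected (positive) box should
    score above +margin and each background box below -margin;
    violations pay a squared hinge. The margin value is an assumption."""
    return (sum(sq_hinge(margin - s) for s in pos_scores)
            + sum(sq_hinge(margin + s) for s in bg_scores))

def final_prediction_loss(predicted, gt_constrained):
    """Cost of the ground-truth-constrained assignment minus the cost
    of the model's own post-NMS prediction; zero when they coincide.
    Each argument is a (pos_scores, bg_scores) pair."""
    return assignment_cost(*gt_constrained) - assignment_cost(*predicted)
```

Minimizing this difference pushes the cost of the ground-truth-consistent assignment down while pushing the cost of an erroneous prediction up, which is what re-orders the responses before NMS.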
4.3 Interpretation and Effect on NMS Ordering
As mentioned earlier, a key benefit to training on the final predictions as we describe is that our loss accounts for the NMS inference step. In our example in Fig. LABEL:fig:person, if the response , then and . Thus will decrease and increase . This ensures the responses are in an appropriate order when NMS is applied. Once , the mistake will be fixed.
The term in the loss is akin to an online version of hard negative mining, ensuring that the background is not detected as a positive example.
4.4 Soft Positive Assignments
When training jointly with the ConvNet, it is insufficient to measure the cost using only single positive instances, as the network can easily overfit to the individual examples. We address this using soft positive assignments in place of hard assignments; that is, we replace the definition of in Eqn. 6 used above with one using a weighted average of neighbors for each box in the assignment list:
where , and similarly for .
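One possible reading of the soft assignment as code; the overlap weighting and the 0.7 threshold are placeholders sketched from the text, not the paper's exact equation:

```python
def soft_response(box, candidates, overlap_fn, min_overlap=0.7):
    """Soft positive assignment: replace the response of `box` with an
    overlap-weighted average over candidate (box, score) pairs that
    overlap it sufficiently. Weighting scheme and threshold are
    assumptions."""
    weighted = total = 0.0
    for cand, score in candidates:
        w = overlap_fn(box, cand)
        if w >= min_overlap:
            weighted += w * score
            total += w
    return weighted / total if total > 0 else 0.0
```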
Note that a similar strategy has been tried before in the case of HoG features, but was not found to be beneficial . By contrast, we found this to be important for integrating the ConvNet. We believe this is because the ConvNet has many more parameters and can easily overfit, whereas HoG is more constrained.
5 Training

Our model is trained in online fashion with SGD: each image is forward-propagated through the model and the resulting error is then backpropagated to update the parameters. During the fprop, the positions of the parts in the DPM are computed and then used for the subsequent bprop. Training in standard DPM models  differs in two respects: (i) a fixed negative set is mined periodically (we have no such set, instead processing each image in turn) and (ii) part positions on this negative set are fixed for many subsequent parameter updates.
We first pretrain the DPM root and parts filters without any deformation, using a fixed set of 20K random negative examples for each class. Note that during this stage, the ConvNet weights are fixed to their initialization from ImageNet. Following this, we perform end-to-end joint training of the entire system, including ConvNet, DPM and NMS (via the final prediction loss). During joint training, we use inferred part locations in the deformation layer.
The joint training phase is outlined in Algorithm 1. For each training image sample, we build an image pyramid, and fprop each scale of the pyramid through the ConvNet and DPM to generate the assignment list . Note is represented using the output response maps. We then apply NMS to get the final assignments , as well as construct the ground-truth constrained assignments . Using the final prediction loss from Eqn. 9, we find the gradient and backpropagate through the network to update the model weights. We repeat this for 15 epochs through the training set with a learning rate , then another 15 using .
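The per-image loop of the joint training phase can be sketched as follows. Every component is passed in as a callable, since the actual interfaces are not specified in this text; this is a structural sketch of Algorithm 1 as described, not the authors' implementation:

```python
def joint_train_epoch(images, ground_truth, convnet, dpm, nms,
                      constrain, loss_grad, apply_update):
    """One epoch of joint training: fprop each image through the
    ConvNet and DPM, apply NMS, build the ground-truth-constrained
    assignment, and take an SGD step on the final-prediction loss."""
    for img, gt in zip(images, ground_truth):
        responses = dpm(convnet(img))        # fprop: pyramid features -> DPM maps
        y_pred = nms(responses)              # final assignments after NMS
        y_gt = constrain(responses, gt)      # assignment close to ground truth
        apply_update(loss_grad(y_pred, y_gt))  # SGD step on the loss gradient
```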
At test time, we simply forward-propagate the input pyramid through the network (ConvNet and DPM) and apply NMS.
6 Experiments

We apply our model to the PASCAL VOC 2007 and VOC 2011/2012 object detection tasks . Table 1 shows how each component in our system improves performance on the PASCAL 2007 dataset. Our baseline implementation of HoG DPM with bootstrap training achieves 30.7 mAP. Swapping HoG for a fixed pretrained ConvNet results in a large 32% relative performance gain, to 40.8 mAP, corroborating the finding of  that such features greatly improve performance. On top of this, training using our online post-NMS procedure substantially improves performance to 43.3 mAP, and jointly training all components (ConvNet + DPM + NMS) further improves it to 46.5 mAP.
In addition, we can train different models to produce detections for each class, or train all classes at once using a single model with shared ConvNet feature extractor (but different DPM components). Training all classes together further boosts performance to 46.9% mAP. Note that this allows the post-NMS loss to account for objects of different classes as well as locations and views within classes, and also makes inference faster due to the shared features. We call this model “conv-dpm+FT-all”, and the separate-class set of models “conv-dpm+FT”.
Comparisons with other systems are shown in Tables 2 (VOC 2007) and 3 (VOC 2011/2012). For VOC 2007 (Table 2), our results are very competitive, beating all other methods except the latest version of R-CNN trained on (“R-CNN(v4)FT ”). Notably, we outperform the DP-DPM method ( vs. our ), due to our integrated joint training and online NMS loss. In addition, our final model achieves comparable performance to R-CNN  with a similar feature extractor using features ( vs. ). Recent versions of R-CNN achieve better performance using a more complex network which includes fully connected layers (); extending our model to use deeper networks may also provide similar gains from better feature representations.
Finally, we provide examples of detections from our model in Figures 7, 8 and 9. Detection results are shown in green or red, with ground truth bounding boxes in blue. Figure 9 illustrates how training with our new loss function helps the model fix problems for both inter-class and intra-class NMS. Our loss allows the larger view of the train to be selected in , rather than the more limited view that appears in more images. However, the gains are not limited to selecting larger views: in , we see a cat correctly selected at a smaller scale. Finally, there are also examples of inter-class correction in , e.g. “train” being selected over “bus”.
We have described an object detection system that integrates a Convolutional Network, Deformable Parts model and NMS loss in an end-to-end fashion. This fuses together aspects from both structured learning and deep learning: object structures are modeled by a composition of parts and views, while discriminative features are leveraged for appearance comparisons. Our evaluations show that our model achieves competitive performance on PASCAL VOC 2007 and 2011 datasets, and achieves substantial gains from integrating both ConvNet features as well as NMS, and training all parts jointly.
References

-  Y. Chen, L. Zhu, and A. L. Yuille. Active mask hierarchies for object detection. In ECCV 2010, volume 6315 of Lecture Notes in Computer Science, pages 43–56. Springer, 2010.
-  N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
-  C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. International Journal of Computer Vision, 2011.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results.
-  P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell., 32(9):1627–1645, Sept. 2010.
-  R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524, 2013.
-  R. B. Girshick, F. N. Iandola, T. Darrell, and J. Malik. Deformable part models are convolutional neural networks. CoRR, abs/1409.5403, 2014.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov. 1998.
-  D. Parikh and C. L. Zitnick. Human-debugging of machines. In NIPS WCSSWC, 2011.
-  X. Ren and D. Ramanan. Histograms of sparse codes for object detection. In CVPR, pages 3246–3253, 2013.
-  P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. CoRR, abs/1312.6229, 2013.
-  C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. NIPS, 2013.
-  J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. CoRR, abs/1406.2984, 2014.
-  J. Uijlings, K. Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 104(2):154–171, 2013.
-  X. Wang, M. Yang, S. Zhu, and Y. Lin. Regionlets for generic object detection. In ICCV, 2013.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013.
-  L. L. Zhu, Y. Chen, A. Yuille, and W. Freeman. Latent hierarchical structural learning for object detection. In CVPR, pages 1062–1069, 2010.