Window-Object Relationship Guided Representation Learning for Generic Object Detections


Abstract

In existing works that learn representation for object detection, the relationship between a candidate window and the ground truth bounding box of an object is simplified by thresholding their overlap. This paper shows the information loss in this simplification and picks up the relative location/size information discarded by thresholding. We propose a representation learning pipeline that uses the relationship as supervision for improving the learned representation in object detection. Such relationship is not limited to objects of the target category, but also includes surrounding objects of other categories. We show that image regions with multiple contexts and multiple rotations are effective in capturing such relationship during the representation learning process and in handling the semantic and visual variation caused by different window-object configurations. Experimental results show that the representation learned by our approach can improve the object detection accuracy by 6.4% in mean average precision (mAP) on ILSVRC2014 [15]. On the challenging ILSVRC2014 test dataset [15], 48.6% mAP is achieved by our single model and it is the best among published results. On PASCAL VOC, it outperforms the state-of-the-art result of Fast RCNN [6] by 3.3% in absolute mAP.

1 Introduction

Object detection is the task of finding the bounding boxes of objects from images. It is challenging due to variations in illumination, texture, color, size, aspect ratio, deformation, background clutter, and occlusion. In order to handle these variations, good features for robustly representing the discriminative information of objects are critical. Initially, researchers employed manually designed features [12]. Recent works [9] have demonstrated the power of learning features with deep neural networks from large-scale data. It advances the state-of-the-art of object detection substantially [7].

Figure 1:  Examples of candidate windows for detecting persons in an image. (a) Image with a man riding a motorbike. The yellow rectangle denotes the ground truth bounding box of a person. The red rectangles denote several candidate windows whose overlaps with the ground truth are larger than 0.5. Existing methods assign all these candidate windows to class person when learning feature representation, despite the large variation of visual cues and different semantic regions covered. (b) and (e) are candidate windows containing the upper body or the legs of the person. (c) and (f) are candidate windows with a smaller or larger size than the ground truth. (d) and (g) are candidate windows containing the left/right body of a person.

Representation learning for object detection was previously treated as a multi-class problem [7], in which a candidate window is classified as containing an object of a given category or background, decided by thresholding the overlap between the candidate window and the ground truth bounding box.

In this paper, we show that representation learning for object detection is beyond a multi-class problem. The relationship between the candidate window and the ground truth bounding box of the object, called the window-object relationship in this paper, provides rich information to guide representation learning for object detection. However, such information is lost in existing representation learning frameworks, which largely simplify the window-object relationship by thresholding the overlap. Some examples of person detection are shown in Figure 1. The candidate windows in Figure 1(b)-(g) may contain the upper body (b) or the legs (e) of a person, may have a smaller (c) or larger (f) size than the ground truth, and may contain the left (d) or right (g) body of the person. They are all labeled as the same class “person” in existing representation learning frameworks for object detection, because their overlaps with the ground truth bounding box are all above 0.5. However, their visual content and semantic meanings differ significantly. If the deep neural network is required to classify all these candidate windows into the same class, the model easily gets confused, and it becomes difficult to learn representation capturing semantically meaningful visual patterns, since the supervision is weak. Such ambiguity can be resolved by using the window-object relationship as supervision during training, which well reflects all types of variations mentioned above. Being aware of these variations in supervision, it is easier for the model to disentangle these variation factors in the learned representations.

The contributions of this work are summarized below. First, we propose a representation learning pipeline that uses the window-object relationship as supervision, so that the learned features are more sensitive to the locations and sizes of objects. By distinguishing and predicting window-object relationships, the learned representation captures more semantically meaningful visual patterns of candidate windows on objects. Experimental results show that the representation learned by our approach improves the mAP of object detection by 6.4% on ILSVRC2014.

Second, two objective functions are designed to encode the window-object relationship. Since the window-object relationship is complex, our experiments show that directly predicting the relative translation and scale variation in a similar way as bounding box regression does not improve representation learning. Instead, under each object category, we cluster candidate windows into subclasses according to the window-object relationship. Both the visual cues and the window-object relationships of candidate windows in the same subclass have less variation. Given the cropped image region of a candidate window as input, the deep neural network predicts the subclass as well as the relative translation and scale variation under that subclass during representation learning. Different subclasses employ different regressors to estimate the relative translation and scale variation.

Figure 2: Examples of candidate windows for detecting persons in an image. The neighborhood is separated into three regions, and labels indicating other neighboring objects, i.e., the motorbike in the dashed rectangle, are utilized to help the feature representation learning process. Best viewed in color.

Third, the idea is also extended to model the relationship between a candidate window and objects of other classes in its neighborhood, given the cropped image region of the candidate window. An illustration is shown in Figure 2. The learned feature representation can make such prediction because it captures the pose information indicating the existence of neighboring objects from the cropped image region (e.g. the pose of the person in Figure 2 indicates that he rides on a motorbike) and the image region may include parts of neighboring objects. All these disturbing factors explained in Figures 1 and 2 are nonlinearly coupled in the image region and deteriorate the detection accuracy. With the window-object relationship as supervision, they are disentangled in the learned feature representation and can be better removed in the later fine-tuning stage or by an SVM classifier.

Fourth, we show that the window-object relationship can be better modeled by taking image regions with multiple contexts and multiple rotations as input, which include multiple types of contextual information. This is different from commonly used multi-scale deep models, which take the same image region at different resolutions as input. Compared with the baseline, the multi-context and multi-rotation input improves the mAP by 2.2% on ILSVRC2014. By adding the supervision of the window-object relationship on top of the multi-context and multi-rotation input, the mAP is further improved by 4.2% on ILSVRC2014.

2 Related Work

RCNN [7] is a widely used object detection pipeline based on CNN [17]. It first pre-trains the representation by classifying 1.2 million ImageNet images into 1000 categories and then fine-tunes it by classifying object detection bounding boxes on the target detection dataset. Later works improved RCNN by proposing better CNN structures [20]. Ouyang et al. [14] improved pre-training by classifying the bounding boxes of ImageNet images instead of the whole images. All these works posed representation learning as a multi-class problem without exploring the window-object relationship.

A group of works tried to solve detection with regression [21]. Given the whole image as input, Szegedy et al. [21] used a DNN to regress the binary masks of an object bounding box and its subboxes. Szegedy et al. [19] used a CNN to directly predict the coordinates of object bounding boxes. AttentionNet [23] initially treated the whole image as a bounding box, and iteratively refined it. They quantized the ways of adjusting the bounding box into several directions, and made a decision at each step. Since the locations and sizes of objects in images have large variations, direct prediction is challenging. Although some promising results were obtained on PASCAL VOC, these works have not reported state-of-the-art results on ImageNet yet, which includes a much larger number of object categories and test images. AttentionNet requires training separate networks for different categories and is not scalable. It only reported the result of one category (i.e. “human”) on PASCAL VOC and its average precision is lower than ours, while our learned representation is shared by a large number of categories. Different from these approaches, we explore the window-object relationship to improve representation learning, while our test pipeline is similar to RCNN. Moreover, we observe that directly predicting the locations and sizes of candidate windows does not improve representation learning, since the window-object relationship is complex. Supervision needs to be carefully designed.

In RCNN, bounding box regression was used as the last step to refine the locations of candidate windows. However, it was not used to learn feature representation. The recently proposed Fast RCNN [6] jointly predicted object categories and locations of candidate windows as multi-task learning. A mAP improvement was observed on the PASCAL VOC 2007 dataset. However, this multi-task learning brings only a small mAP improvement on ILSVRC2014.

In this paper, multi-context and multi-rotation input is used. The related work [5] cropped multiple subregions as the input of CNN. Besides enriching the representation, our motivation for employing multi-context and multi-rotation input is to make the CNN less confused about the relationship between candidate windows and objects. Details will be given in the multi-context and multi-rotation part of Section 3.3.

Figure 3: Object detection at the test stage. 1) For a given candidate window, images cropped with 2.1) different sizes \lambda and 2.2) rotation degrees r are warped into the same size. 2.3) For the cropped image of a given rotation degree and scale (r, \lambda), a CNN f(r, \lambda, *) is used for extracting features. 2.4) Features for multiple scales and rotations are concatenated and 3) used for classification.

3 Method

In order to provide readers with a clear picture of the whole framework, we first explain the object detection pipeline at the test stage. The major contributions come from representation learning, whose details are provided in Sections 3.2-3.5.

3.1 Object detection at the testing stage

As in Figure 3, the object detection pipeline is as follows:

  1. Selective search in [18] is adopted to obtain candidate windows.

  2. A candidate window is used to extract features as follows:

    1. For a candidate window with size (W, H) and center (x, y), crop images with sizes (\lambda W, \lambda H) and center (x, y). The cropped images and the candidate window share the same center location (x, y). \lambda is the scale of a contextual region. The choice of the scale set is detailed in Section 3.3.

    2. Rotate the cropped image by r degrees and pad it with surrounding context to obtain the image I(r, \lambda).

    3. The cropped images with different sizes and rotations are warped into the same size and treated as the input of CNNs for extracting their features, i.e. h(r, \lambda) = f(r, \lambda, I(r, \lambda)), where f(r, \lambda, *) denotes the CNN for extracting features from I(r, \lambda) and h(r, \lambda) denotes the vector of features extracted for rotation r and scale \lambda. For a candidate window, there are six cropped images corresponding to six settings of (r, \lambda) in our experiment. The structure of the CNN is chosen as GoogLeNet [20] for each setting of (r, \lambda), and there are six branches of GoogLeNets for the six settings. The learned parameters of the six branches are different.

    4. The extracted features are then concatenated into a single vector h = concat({h(r, \lambda)}), where concat is the operation for concatenating features into a vector.

  3. Extracted features are used by binary SVMs to classify each candidate window. The score of each SVM measures the confidence that the candidate window contains a specific object class.

The steps are similar to RCNN [7] except for the multi-context and multi-rotation input. Our major novelty comes from how the feature extractor in Step 2.3 of Figure 3 is trained.
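The test-stage steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the CNN branches are stubbed with random linear projections, and the exact set of (r, \lambda) settings, the input side length and the feature dimension are assumptions chosen for readability.

```python
import numpy as np

# Sketch of the test-stage feature extraction in Figure 3. The six
# (rotation, scale) settings below are an illustrative assumption; the paper
# uses six GoogLeNet branches with much larger inputs and feature dimensions.
SETTINGS = [(0, 0.8), (0, 1.2), (0, 1.8), (0, 2.7), (45, 1.2), (90, 1.2)]
SIDE, FEAT_DIM = 32, 16  # toy sizes for the sketch

def context_box(cx, cy, w, h, lam):
    """Crop box of size (lam*w, lam*h) sharing the candidate window's center."""
    return (cx - lam * w / 2, cy - lam * h / 2, cx + lam * w / 2, cy + lam * h / 2)

class StubCNN:
    """Stand-in for one branch f(r, lambda, *)."""
    def __init__(self, seed):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((SIDE * SIDE * 3, FEAT_DIM))

    def features(self, warped):
        return warped.reshape(-1) @ self.proj

def extract_features(image, window, branches):
    cx, cy, w, h = window
    feats = []
    for (r, lam), cnn in zip(SETTINGS, branches):
        box = context_box(cx, cy, w, h, lam)
        # Cropping `box` from `image`, rotating by r degrees and warping to a
        # fixed size is stubbed out here with a blank input:
        warped = np.zeros((SIDE, SIDE, 3))
        feats.append(cnn.features(warped))
    return np.concatenate(feats)  # the vector fed to the per-class SVMs
```

Each branch has its own parameters, matching the statement that the learned parameters of the branches differ; the concatenated vector is what the per-class SVMs consume in Step 3.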

3.2 Representation learning pipeline

Our proposed pipeline is as follows and shown in Figure 5.

  1. Pretrain CNN using the ImageNet 1000-class classification and localization data.

  2. Use the CNN trained in the previous step for initialization. Train the CNN by estimating the window-object relationship. Details are given in Section 3.3.

  3. Use the CNN trained in the previous step for initialization. Train the CNN by estimating the window-multi-object relationship. Details are given in Section 3.4.

  4. Use the CNN trained in the previous step for initialization. Train the CNN for the (N+1)-class classification problem, where N is the number of object classes, plus 1 for background. N = 20 for PASCAL VOC and N = 200 for ILSVRC2014. Details are given in Section 3.5.

Since the pipeline above is used for learning feature representation, the network structures are the same for all the training steps above, except for the output layer. The responses of the last CNN layer before the output layer are treated as the feature representation.
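The staged schedule can be sketched as a simple driver loop: each stage warm-starts from the previous stage's parameters and only the output layer (and loss) changes. The train_* callables below are placeholders for illustration, not the paper's training code.

```python
# Minimal sketch of the four-stage training pipeline of Section 3.2: the same
# network structure is trained at every stage, initialized from the previous
# stage's result. Stage bodies are placeholders.
def run_pipeline(initial_params, stages):
    params = initial_params
    completed = []
    for name, train_fn in stages:
        params = train_fn(params)  # warm-start from the previous stage
        completed.append(name)
    return params, completed

STAGES = [
    ("1: 1000-class pretraining",       lambda p: p + ["pretrain"]),
    ("2: window-object relationship",   lambda p: p + ["win-obj"]),
    ("3: window-multi-object relation", lambda p: p + ["win-multi-obj"]),
    ("4: (N+1)-class fine-tuning",      lambda p: p + ["finetune"]),
]
```

The point of the chain is that every stage except the last exists only to shape the representation; at test time only the final feature extractor (and SVMs) remain.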

3.3 Learning the window-object relationship

Window-object relationship label preparation

Figure 4: Examples of window-object relationship clusters obtained. Yellow rectangles denote ground truth. Red rectangles denote window-object relationship clusters (only showing the average of the clustered candidate windows). Best viewed in color.

The idea is to have the CNN distinguish candidate windows containing different parts of the same object or having different sizes. For example, a candidate window containing the upper body of a person and another one containing the legs were classified as the same category (i.e. “person”) in existing works, but are considered different configurations of the window-object relationship in our approach.

Figure 5: Overview of the representation learning pipeline. Best viewed in color.

To distinguish candidate windows of the same object class, we cluster training samples in each class into subsets with similar relative locations. Denote by (x_i, y_i) the center and by (w_i, h_i) the size of the i-th candidate window at the training stage. Denote the center and size of its ground-truth bounding box by (x'_i, y'_i) and (w'_i, h'_i). Candidate windows at the training stage are from selective search [18] and ground truth bounding boxes. The relative location and size between the candidate window and the ground-truth bounding box (normalized by the size of the candidate window) are:

l_i = [ (x'_i - x_i)/w_i,  (y'_i - y_i)/h_i,  w'_i/w_i,  h'_i/h_i ].    (1)

The relative location and size above are used for describing the window-object relationship. With the features l_i, affinity propagation (AP) [4] is used to group candidate windows with similar window-object relationship into clusters. Denote the cluster label for the i-th candidate window by c_i. Figure 4 shows some clustering results, where each cluster corresponds to a specific visual pattern and relative location setting. In window-object relationship prediction, the labels for a given candidate window are the relative location and size l_i and the cluster label c_i.
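The label preparation can be sketched in a few lines, with scikit-learn's AffinityPropagation as a stand-in for the AP algorithm of [4]. The toy boxes and the exact form of the relative-location feature are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation  # stand-in for AP in [4]

# Compute the relative location/size feature of Eq. (1) for each candidate
# window, then cluster the features. Boxes are (center_x, center_y, w, h).
def relative_location(window, gt):
    x, y, w, h = window
    xg, yg, wg, hg = gt
    # Location offset normalized by the window size, plus relative size.
    return np.array([(xg - x) / w, (yg - y) / h, wg / w, hg / h])

# Toy candidate windows around one ground-truth box (shifted up/down/left/right).
gt = (50.0, 50.0, 40.0, 80.0)
windows = [(50, 30, 40, 40), (50, 75, 40, 40), (35, 50, 25, 80), (65, 50, 25, 80)]
feats = np.stack([relative_location(win, gt) for win in windows])

ap = AffinityPropagation(random_state=0).fit(feats)
cluster_labels = ap.labels_  # c_i: one subclass label per candidate window
```

Each resulting cluster gathers windows with a similar spatial configuration relative to the object, which is exactly the supervision signal used in the next step.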

Loss function of window-object relationship

With the CNN parameters obtained from the pretraining step (Step 1 in Section 3.2) as initialization, we continue to train the CNN by predicting the window-object relationship. The CNN’s 1000-way classification layer from pretraining is replaced by two fully connected (fc) layers. The layer that predicts the location and size for cluster c, denoted by \hat{l}_i^c, is called the location prediction layer. The other layer, which predicts the cluster label c_i for the i-th candidate window, is called the cluster prediction layer. Both layers use the last feature extraction layer of the CNN as input. Denote the number of clusters by K. The output dimension of the location prediction layer is 4K and the output dimension of the cluster prediction layer is K. Softmax is used for the cluster prediction layer. The following loss on the window-object relationship is used:

L = L_cls(c_i, p_i) + L_loc(l_i, \hat{l}_i^{c_i}),    (2)

where L_cls is the loss on predicting the window-object relationship cluster (p_i denotes the predicted cluster probabilities) and L_loc is the loss on predicting the relative location and size. With the loss in (Equation 2), the CNN is required to distinguish different window-object relationship clusters and to recognize where the actual relative location is. For the i-th candidate window, the CNN outputs K location prediction vectors, i.e. \hat{l}_i^c for c = 1, ..., K, so that the CNN learns the location prediction for different clusters separately. For the i-th candidate window, only the output for cluster c_i is used for supervising the location prediction. Therefore, different window-object relationship clusters have their own parameters, learned separately by the CNN, to predict location. For example, the location bias for the cluster with window-above-object relationship can be different from the bias for the cluster with window-below-object relationship. Since the relative locations in the same cluster have less variation, this divide-and-conquer strategy makes prediction easier. With the loss function defined, the CNN parameters are learned by back-propagation and stochastic gradient descent.
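The per-sample loss can be sketched as follows. This is an illustrative NumPy version, assuming softmax cross-entropy for the cluster term and squared error for the location term (the exact location loss is not specified here); only the regressor of the ground-truth cluster receives supervision.

```python
import numpy as np

# Sketch of the window-object relationship loss for one candidate window:
# cross-entropy on the cluster label plus a regression loss on the relative
# location, supervising only the regressor of the true cluster c_true.
def window_object_loss(cluster_logits, loc_preds, c_true, l_true):
    # cluster_logits: (K,); loc_preds: (K, 4), one regressor per cluster
    log_probs = cluster_logits - np.log(np.sum(np.exp(cluster_logits)))
    cls_loss = -log_probs[c_true]
    # Location outputs of the other K-1 clusters receive no supervision.
    loc_loss = np.sum((loc_preds[c_true] - l_true) ** 2)
    return cls_loss + loc_loss
```

A wrong location output for an unused cluster c != c_i leaves the loss unchanged, which is the divide-and-conquer behavior described above.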

Multi-context and multi-rotation

When the location and size of a candidate window are different from those of the ground truth bounding box, the candidate window only has partial visual content of the object. The limited view makes it difficult for the CNN to figure out the visual difference between object classes. For example, it is hard to tell whether an object is an iPod or a monitor if one can only see the screen, but it becomes much easier if the whole object and its contextual region are provided, as shown in Figure 6 (top row). When occlusion happens, the ground truth bounding boxes may contain different amounts of object parts and thus have different sizes. Without a region larger than the ground truth as input, it is confusing for the CNN to decide the bounding box size. In Figure 6 (bottom row), the ground truth box for a standing unoccluded person should cover more parts of the human body than the one with legs occluded. When the image region cropped from a candidate window only covers the upper body of this person, it is difficult to predict whether the person’s legs are occluded or not. When predicting the relative location between the candidate window and the ground truth, the CNN should output a smaller box if occluded, but a larger box otherwise. The CNN can handle this difficulty when the input contains a larger region than the ground truth. On the other hand, if the region is much larger than the object, the resolution of the object may not be high enough after normalizing the cropped image region to a standard size as the input of the CNN.

Figure 6: It is hard to tell the object class (top row) or where the ground truth bounding box is (bottom row) if only the image region within the candidate window is provided. Best viewed in color.

To handle the problems above, we use multiple scales of contextual regions as the input for the CNN. The feature learning procedure still focuses on predicting the window-object relationship. We use four scales for cropping images, 0.8, 1.2, 1.8 and 2.7, which are approximately linear in log scale. 1.2 is the only scale chosen in [7] and is set as the default value in many existing works. In the supplementary material, we prove that the cropped image with scale 2.7 is sufficient to cover most of the ground-truth region when the overlap between the window and the object is greater than 0.5. Even if the overlap between the candidate window and ground truth bounding box is 0.37, the cropped image with scale 2.7 can cover more than 50% of the ground truth region. 1.8 is obtained by linear interpolation between 1.2 and 2.7 in log scale. 0.8 is chosen because some candidate windows can be larger than the ground truth bounding box, as shown by the first image in Figure 4. A cropped image with a smaller scale can help these windows fit the actual scale of the object.
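The effect of enlarging the crop can be illustrated numerically. The boxes below are hypothetical examples, not the worst-case configuration from the supplementary material; they simply show how much more of the ground truth a scale-2.7 crop sees.

```python
# Fraction of the ground-truth area covered by a context crop. Boxes are
# corner coordinates (x0, y0, x1, y1); the example boxes are illustrative.
def scaled_crop(box, lam):
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = (x1 - x0) * lam / 2, (y1 - y0) * lam / 2
    return (cx - hw, cy - hh, cx + hw, cy + hh)

def coverage(gt, crop):
    """Fraction of the ground-truth box area that lies inside `crop`."""
    ix0, iy0 = max(gt[0], crop[0]), max(gt[1], crop[1])
    ix1, iy1 = min(gt[2], crop[2]), min(gt[3], crop[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    return inter / ((gt[2] - gt[0]) * (gt[3] - gt[1]))

window = (40, 40, 100, 100)  # candidate window covering part of the object
gt = (20, 20, 110, 120)      # ground truth extends beyond the window
```

For this pair, the raw window sees only 40% of the ground-truth area, while the scale-2.7 crop covers all of it.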

Object rotation results in drastic appearance variation. Rotation is adopted together with multiple scales for the cropped images to make the network more robust to appearance variations.

Since relative location labels are used in this training step, we choose not to merge training samples with different scales and rotations; instead, we adopt multiple networks, with each network trained on the samples of one scale-rotation setting. All these networks share the same structure but have different parameters.

3.4 Window-multi-object relationship prediction

The previous training step does not consider the coexistence of multiple object instances in the same image, which happens frequently and forms layout configurations. In the example in Figure 5, the person has a helmet on his head and a rugby ball in his arms. To further enrich the feature representation, we extend the training procedure by predicting the window-multi-object relationship.

The window-multi-object relationship can be formulated as answering three basic questions: whether other instances exist in the neighborhood, where they are, and what they are. We start with the relative location and size defined in (Equation 1) to describe the pairwise relationship between a candidate window and multiple objects. The relative locations for all ground truth bounding boxes are used as features to obtain clusters, which describe the window-multi-object layout. Given a candidate window, its surrounding ground truth objects are assigned to their closest clusters, and labels are obtained, with each label standing for what kind of object appears in the corresponding cluster. The CNN has one classification layer per cluster, and each layer is a multi-class classifier for its corresponding configuration of window-object relationship. Therefore, the CNN is required to predict the probability of which object exists in each location cluster.

We keep the loss function discussed in Section 3.3 and add cross-entropy loss terms for the window-multi-object labels. The weights of all losses are set to 1. In this step, three kinds of labels are applied to each training sample: 1) the window-object cluster label, 2) the relative location between the window and the object, and 3) the labels representing the window-multi-object relationship.
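One way the window-multi-object labels might be assembled is sketched below. The cluster centers, the feature form and the one-hot target layout are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Assign each neighboring ground-truth object to its nearest location cluster
# and record the object's class there; untouched clusters keep the "no object"
# label (column 0). This yields one multi-class target per location cluster.
def multi_object_labels(rel_feats, classes, centers, n_classes):
    """rel_feats: (M, d) relative locations of M neighbor objects;
    classes: M class ids in [0, n_classes); centers: (C, d) cluster centers."""
    C = centers.shape[0]
    labels = np.zeros((C, n_classes + 1))
    labels[:, 0] = 1.0  # default: no neighbor object in this location cluster
    for f, cls in zip(rel_feats, classes):
        c = int(np.argmin(np.linalg.norm(centers - f, axis=1)))
        labels[c] = 0.0
        labels[c, cls + 1] = 1.0
    return labels
```

Each row of the returned table is the target for one of the per-cluster classification layers mentioned above.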

3.5 Fine-tuning for (N+1)-class classification

Since the ultimate goal is to detect N classes of objects, we use the CNN obtained in the previous step as initialization and continue to train it for the (N+1)-class classification problem. The cross-entropy loss function is used. As discussed in Section 3.3, several scales and rotation degrees are adopted, and the networks for different rotations or scales are jointly learned. The features extracted from the CNNs for all scales and rotations are concatenated into a vector of features for the (N+1)-class classification problem. Once features are learned, we fix the CNN parameters and learn 200 class-specific linear SVMs for object detection as in [7]. Note that although multiple concepts, such as the window-object relationship, clusters of locations and sizes, other object instances and the window-multi-object relationship, are introduced in the training stage, their goal is to improve the learning of the feature representation in Figure 3, and none of them appears at test time.
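The final detection step, one binary linear SVM per class on top of the frozen features, can be sketched with scikit-learn's LinearSVC as a stand-in for the SVMs of [7]. The random features below are toy data, not real CNN responses.

```python
import numpy as np
from sklearn.svm import LinearSVC  # stand-in for the class-specific SVMs

# With the CNN frozen, one binary linear SVM per class scores the concatenated
# features of each candidate window; label 0 denotes background.
rng = np.random.default_rng(0)
n_classes, feat_dim = 3, 8
feats = rng.standard_normal((60, feat_dim))
labels = rng.integers(0, n_classes + 1, size=60)

svms = [LinearSVC(C=1.0).fit(feats, (labels == c).astype(int))
        for c in range(1, n_classes + 1)]

# Detection scores of the first window for every class: a higher decision
# value means higher confidence that the window contains that class.
scores = np.array([svm.decision_function(feats[:1])[0] for svm in svms])
```

In the real pipeline the per-class scores would be followed by the usual non-maximum suppression and, as noted in Section 4.3, bounding box regression.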

4 Experimental results

4.1 Experimental setting

The implementation of our framework adopts GoogLeNet [20] as the CNN structure. The 1000-class pretraining is based on the ILSVRC2014 classification and localization dataset. The learned representation is evaluated on the two datasets below. Most of the component analysis of our training pipeline is conducted on ILSVRC2014, since it is much larger in scale and contains more object categories. The overall results and comparisons with the state-of-the-art are reported on both datasets.

The ILSVRC2014 object detection dataset

contains 200 object categories and is split into three subsets: train, validation and test. The validation subset is split into val1 and val2 as in [7], and we follow the same setting. In training step 4, we use both the train and val1 subsets, but in training steps 2 and 3 we use only the val1 subset, because many positive samples are not labeled in the train subset, which may bring label noise into the window-object relationship and window-multi-object relationship supervision.

The PASCAL VOC2007 dataset

contains 20 object categories. Following the most commonly used protocol in [7], we fine-tune the network on the trainval set and evaluate the performance on the test set.

4.2 Component analysis on the training pipeline

Comparison with baselines

In order to evaluate the effectiveness of our major innovations, several baselines are compared on ILSVRC2014 and the results are summarized in Table 1. (1) RCNN with GoogLeNet as the CNN structure. It is equivalent to removing steps 2) and 3) from our training pipeline and only taking the single context of scale 1.2 without rotation as input at test time. (2) Since our method concatenates features from seven GoogLeNets, one may question whether the improvement comes from model averaging. This baseline randomly initializes seven GoogLeNets, concatenates their features to train SVMs, and follows the RCNN pipeline. (3) Taking multi-context and multi-rotation input (i.e. using the test pipeline in Figure 3) without supervision of window-object relationship (i.e. removing steps 2) and 3) from our training pipeline). (4) Our pipeline with a single context of scale 1.2 and without rotation. (5) Our pipeline excluding the window-multi-object relationship (i.e. step 3)) in training. (6) Our complete pipeline.

The results show that increasing model complexity by model averaging brings marginal improvement. Multi-context and multi-rotation input improves the RCNN baseline by 2.2% and adding supervision of the window-object relationship to it obtains a further gain of 4.2% in mAP, which is significant. The window-multi-object relationship contributes a 0.5% gain in mAP. The gain of adding supervision of window-object relationship to the multi-context and multi-rotation input is larger than that of adding it to a single-context input. This indicates that multi-context and multi-rotation input helps the CNN better predict the window-object relationship.

Table 1: Comparison with several baselines on ILSVRC2014 val2. Descriptions of methods (1)-(6) are in Section 4.2.
(1) (2) (3) (4) (5) (6)
mean AP (%) 39.9 41.7 42.1 42.9 45.8 46.3
median AP (%) 39.7 41.0 42.5 42.0 45.7 45.9

Table 2: Effectiveness of clustering window-object relationship for learning the CNN on ILSVRC2014 val2. Step 1+4 corresponds to the training pipeline of RCNN. Step 1+2+4 w/o cluster corresponds to learning the window-object relationship without clustering. Step 1+2+4 w/ cluster corresponds to our approach with clustering of the window-object relationship.
Approach Step 1+4 Step 1+2+4 (w/o cluster) Step 1+2+4 (w/ cluster)
Mean AP (%) 39.9 40.1 41.1
Median AP (%) 39.7 39.9 41.9
Table 3: Influence of using multiple scales on ILSVRC2014 val2. Shared denotes the approach with network parameters shared across the four scales (0.8+1.2+1.8+2.7).
Scale 1.2 1.2+0.8 1.2+1.8 1.2+2.7 0.8+1.2+1.8 0.8+1.2+1.8+2.7 shared
Mean AP (%) 41.1 42.1 43.6 44.2 44.7 45.5 42.1
Median AP (%) 41.9 40.9 43.1 44.1 44.8 45.9 42.5
Table 4: Influence of using multiple scales and multiple rotations on ILSVRC2014 val2. Anti-clockwise rotation is used.
scale 1.2 1.2 0.8+1.2+1.8+2.7 0.8+1.2+1.8+2.7
rotation degree 0 0+45+90 0 0+45+90
val2 meanAP (%) 41.1 43.1 45.5 45.8
val2 median AP (%) 41.9 42.2 45.9 45.7
Table 5: Object detection mAP (%) on ILSVRC2014 for top-ranked participants in ILSVRC2014 with single model (sgl) and averaged models (avg).
approach Flair RCNN Berkeley Vision UvA-Euvision DeepInsight DeepID-Net GoogleNet ours
val2(sgl) n/a 31.0 33.4 n/a 40.1 38.5 38.8 49.1
val2(avg) n/a n/a n/a n/a 42.0 40.9 44.5 n/a
test(sgl) n/a 31.4 34.5 35.4 40.2 37.7 38.0 48.6
test(avg) 22.6 n/a n/a n/a 40.5 40.7 43.9 n/a

Clustering window-object relationship

Window-object relationship is clustered in our approach as introduced in Section 3.3. Its effectiveness is evaluated in this section. Training steps 1, 2 and 4 are used. The cropped image has only one setting of rotation and scale, i.e. no rotation and scale 1.2, which is the standard setting used in [7]. If only steps 1 and 4 are used, this corresponds to the RCNN baseline. If window-object relationship clustering is not used, the relative location and object class labels are used for learning features, which is the scheme in Fast RCNN [6]; the mAP improvement is then 0.2%. With clustering, the mAP improvement is 1.2%. Step 2 is thus much less effective without clustering. Without clustering, a single regressor is learned for each class; relative locations and sizes cannot be accurately predicted and the learned features are less effective. Under each cluster, the configurations of locations and sizes are much simplified.

Investigation on using multiple scales

Based on the training pipeline using steps 1+2+4 with window-object relationship clustering, Table 3 shows the influence of using multiple scales. The network with four scales has 45.5% mAP, a 4.4% improvement over the single scale 1.2. Based on the scale 1.2, the extra scales ranked by mAP improvement in descending order are 2.7, 1.8 and 0.8. This matches intuition: a larger contextual region is more helpful in eliminating visual similarity between candidate boxes of different categories. More scales provide better performance, which shows that the feature representations learned at different scales are complementary to each other.

To assess the effectiveness of employing multiple contextual scales for feature learning, we also run a configuration in which the network parameters for all four scales are shared and fixed to those trained at scale 1.2. With one shared network learned from scale 1.2, the use of multiple contextual scales simply adds more visual cues; with different networks trained for different scales, multiple contextual scales help feature learning through predicting the window-object relationship, which is our motivation. Compared with the networks with shared parameters, the networks with distinct parameters for different scales obtain mAP improvement. This shows that the use of multiple contextual scales helps to learn better features.
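The multi-scale context crops discussed above can be sketched as follows: a candidate box is enlarged by a scale factor (e.g. 1.2) around its center before cropping, so that larger factors include more surrounding context. Zero-padding at image borders is an assumption here; the paper's exact cropping scheme may differ:

```python
import numpy as np

def context_crop(image, box, scale):
    """Crop a region centered on `box`, enlarged by `scale`
    (scale > 1 includes surrounding context), zero-padding
    where the enlarged region leaves the image.
    image: HxW(xC) array; box: (x, y, w, h)."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * scale, h * scale
    x0, y0 = int(round(cx - nw / 2)), int(round(cy - nh / 2))
    x1, y1 = int(round(cx + nw / 2)), int(round(cy + nh / 2))
    H, W = image.shape[:2]
    out = np.zeros((y1 - y0, x1 - x0) + image.shape[2:], dtype=image.dtype)
    # copy the part of the enlarged region that lies inside the image
    sx0, sy0 = max(x0, 0), max(y0, 0)
    sx1, sy1 = min(x1, W), min(y1, H)
    out[sy0 - y0:sy1 - y0, sx0 - x0:sx1 - x0] = image[sy0:sy1, sx0:sx1]
    return out
```

Each scale's crop would then be resized to the network input size, with one network (or one set of parameters) per scale in the non-shared configuration.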

Investigation on rotation

Table 4 shows the experimental results on using multiple rotation degrees and scales. It demonstrates that, with the help of rotation, mAP improves by for a single scale and by for multiple scales.
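A rotation of the cropped region can be sketched as below. This is a minimal nearest-neighbor implementation assuming rotation about the crop center with zero fill; the specific angles used in training are not taken from the paper:

```python
import numpy as np

def rotate(image, degrees):
    """Rotate an HxW image about its center by `degrees` (clockwise),
    with nearest-neighbor sampling and zero fill outside the image."""
    th = np.deg2rad(degrees)
    H, W = image.shape[:2]
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    # inverse map: for each output pixel, find its source location
    sx = np.cos(th) * (xs - cx) + np.sin(th) * (ys - cy) + cx
    sy = -np.sin(th) * (xs - cx) + np.cos(th) * (ys - cy) + cy
    sxi, syi = np.rint(sx).astype(int), np.rint(sy).astype(int)
    valid = (sxi >= 0) & (sxi < W) & (syi >= 0) & (syi < H)
    out = np.zeros_like(image)
    out[ys[valid], xs[valid]] = image[syi[valid], sxi[valid]]
    return out
```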

4.3Overall results

Ouyang et al. [14] showed that pre-training the CNN with bounding boxes of objects instead of whole images in step ? could improve detection accuracy significantly. It is also well known that using bounding box regression [7] to refine the locations of candidate windows in the last step of the detection pipeline is effective. In order to compete with the state-of-the-art, we incorporate these two existing techniques into our framework to boost performance in the final evaluation.
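The bounding-box regression of [7] predicts, for each candidate window, normalized center offsets and log scale changes toward the ground truth; a minimal sketch with (x, y, w, h) boxes:

```python
import numpy as np

def bbox_transform(proposal, target):
    """R-CNN-style regression targets [7]: normalized center offsets
    and log width/height changes from a proposal to the ground truth."""
    px, py, pw, ph = proposal
    gx, gy, gw, gh = target
    return np.array([((gx + gw / 2) - (px + pw / 2)) / pw,
                     ((gy + gh / 2) - (py + ph / 2)) / ph,
                     np.log(gw / pw), np.log(gh / ph)])

def bbox_apply(proposal, t):
    """Invert the transform: refine a proposal with predicted targets."""
    px, py, pw, ph = proposal
    cx = px + pw / 2 + t[0] * pw
    cy = py + ph / 2 + t[1] * ph
    w, h = pw * np.exp(t[2]), ph * np.exp(t[3])
    return np.array([cx - w / 2, cy - h / 2, w, h])
```

At test time, a regressor trained on these targets is applied once to each scored window to produce the refined detection box.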

Table 5 summarizes the top-ranked results on the val2 and test datasets from the ILSVRC2014 object detection challenge and demonstrates the effectiveness of our training pipeline. Flair [22] was the winner of ILSVRC2013. GoogleNet, DeepID-Net, DeepInsight, UvA-Euvision, and Berkeley Vision were the top-ranked participants of ILSVRC2014, and GoogleNet was the winner.

Table ? reports the results on PASCAL VOC. Since the state-of-the-art approach Fast RCNN (FRCN) [6] reported performance for models trained on both VOC07 trainval and VOC12 trainval, we evaluate our approach with the same training strategy. It achieves a significant improvement over the state-of-the-art, and it also outperforms approaches that directly predict bounding box locations from images.

5Conclusion

This paper proposes a training pipeline that uses the window-object relationship to improve representation learning. To help the CNN estimate these relationships, multiple scales of contextual information and multiple rotations are utilized. Extensive component-wise experimental evaluation on the ILSVRC14 object detection dataset validates the improvement from the proposed training pipeline. Our approach outperforms the state-of-the-art on both the ILSVRC14 and PASCAL VOC07 datasets.

References

  1. Return of the devil in the details: Delving deep into convolutional nets.
    K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. In BMVC, 2014.
  2. Histograms of oriented gradients for human detection.
    N. Dalal and B. Triggs. In CVPR, 2005.
  3. Decaf: A deep convolutional activation feature for generic visual recognition.
    J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. In ICML, pages 647–655, 2014.
  4. Clustering by passing messages between data points.
    B. J. Frey and D. Dueck. Science, 315(5814):972–976, 2007.
  5. Object detection via a multi-region & semantic segmentation-aware cnn model.
    S. Gidaris and N. Komodakis. arXiv preprint arXiv:1505.01749, 2015.
  6. Fast R-CNN.
    R. Girshick. arXiv preprint arXiv:1504.08083, 2015.
  7. Rich feature hierarchies for accurate object detection and semantic segmentation.
    R. Girshick, J. Donahue, T. Darrell, and J. Malik. In CVPR, 2014.
  8. Spatial pyramid pooling in deep convolutional networks for visual recognition.
    K. He, X. Zhang, S. Ren, and J. Sun. In ECCV. 2014.
  9. Imagenet classification with deep convolutional neural networks.
    A. Krizhevsky, I. Sutskever, and G. Hinton. In NIPS, 2012.
  10. Building high-level features using large scale unsupervised learning.
    Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng. In ICML, 2012.
  11. Network in network.
    M. Lin, Q. Chen, and S. Yan. ICLR, 2014.
  12. Distinctive image features from scale-invariant keypoints.
    D. Lowe. IJCV, 60(2):91–110, 2004.
  13. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns.
    T. Ojala, M. Pietikainen, and T. Maenpaa. IEEE Trans. PAMI, 24(7):971–987, 2002.
  14. Deepid-net: multi-stage and deformable deep convolutional neural networks for object detection.
    W. Ouyang, P. Luo, X. Zeng, S. Qiu, Y. Tian, H. Li, S. Yang, Z. Wang, Y. Xiong, C. Qian, et al. arXiv preprint arXiv:1409.3505, 2014.
  15. Imagenet large scale visual recognition challenge.
    O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. IJCV, 2015.
  16. Overfeat: Integrated recognition, localization and detection using convolutional networks.
    P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. arXiv preprint arXiv:1312.6229, 2013.
  17. Very deep convolutional networks for large-scale image recognition.
    K. Simonyan and A. Zisserman. arXiv preprint arXiv:1409.1556, 2014.
  18. Segmentation as selective search for object recognition.
    A. Smeulders, T. Gevers, N. Sebe, and C. Snoek. In ICCV, 2011.
  19. Object detection using deep neural networks, 2015.
    C. Szegedy, D. Erhan, and A. T. Toshev. US Patent 20,150,170,002.
  20. Going deeper with convolutions.
    C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. arXiv preprint arXiv:1409.4842, 2014.
  21. Deep neural networks for object detection.
    C. Szegedy, A. Toshev, and D. Erhan. In NIPS, 2013.
  22. Fisher and vlad with flair.
    K. E. A. van de Sande, C. G. M. Snoek, and A. W. M. Smeulders. In CVPR, 2014.
  23. Attentionnet: Aggregating weak directions for accurate object detection.
    D. Yoo, S. Park, J.-Y. Lee, A. Paek, and I. S. Kweon. arXiv preprint arXiv:1506.07704, 2015.
  24. Generic object detection with dense neural patterns and regionlets.
    W. Y. Zou, X. Wang, M. Sun, and Y. Lin. BMVC, 2014.