Cross-domain Object Detection through Coarse-to-Fine Feature Adaptation


Abstract

Recent years have witnessed great progress in deep learning based object detection. However, due to the domain shift problem, applying off-the-shelf detectors to an unseen domain leads to significant performance drop. To address such an issue, this paper proposes a novel coarse-to-fine feature adaptation approach to cross-domain object detection. At the coarse-grained stage, different from the rough image-level or instance-level feature alignment used in the literature, foreground regions are extracted by adopting the attention mechanism, and aligned according to their marginal distributions via multi-layer adversarial learning in the common feature space. At the fine-grained stage, we conduct conditional distribution alignment of foregrounds by minimizing the distance of global prototypes with the same category but from different domains. Thanks to this coarse-to-fine feature adaptation, domain knowledge in foreground regions can be effectively transferred. Extensive experiments are carried out in various cross-domain detection scenarios. The results are state-of-the-art, which demonstrate the broad applicability and effectiveness of the proposed approach.


1 Introduction

In the past few years, Convolutional Neural Network (CNN) based methods have significantly improved the accuracy of a wide range of computer vision tasks [21, 38, 49]. These remarkable gains often rely on large-scale benchmarks, such as ImageNet [11] and MS COCO [35]. Due to a phenomenon known as domain shift or dataset bias [59], current CNN models suffer from performance degradation when they are directly applied to novel scenes. In practice, we are able to alleviate such an impact by building a task-specific dataset that covers sufficiently diverse samples. Unfortunately, it is rather expensive and time-consuming to annotate a large number of high-quality ground truths.

To address this dilemma, one promising way is to introduce Unsupervised Domain Adaptation (UDA) to transfer essential knowledge from an off-the-shelf labeled domain (referred to as the source domain) to a related unseen but unlabeled one (the target domain) [44]. Recently, UDA methods have been greatly advanced by deep learning techniques, and they mostly focus on generating domain-invariant deep representations by reducing cross-domain discrepancy (\eg, Maximum Mean Discrepancy [20] or $\mathcal{H}$-divergence [1]), which have proved very competent at image classification [40, 15, 52, 12, 27] and semantic segmentation [23, 42, 64, 7]. Compared to these tasks, object detection is more complex, as it is required to locate and classify all instances of different objects within images; therefore, how to effectively adapt a detector is indeed a challenging issue.

Figure 1: Illustration of the proposed coarse-to-fine feature adaptation approach. It consists of two components, \ie, Attention-based Region Transfer (ART) and Prototype-based Semantic Alignment (PSA). The ART module figures out foregrounds from entire images in different domains and then aligns the marginal distributions on them. Further, the PSA module builds the prototype for each category to achieve semantic alignment. (Best viewed in color.)

In the literature, there are many solutions tackling this problem, including Semi-Supervised Learning (SSL) based [4], pixel-level adaptation based [31, 25, 50], and feature-level adaptation based [8, 22, 68, 51, 74] methods. The SSL based method reduces the domain gap through consistency regularization in a teacher-student scheme. However, the teacher does not always convey more meaningful knowledge than the student [28], and the detector thus tends to accumulate errors, leading to deteriorated detection performance. Pixel-level based methods first conduct style transfer [73] to synthesize a target-like intermediate domain, aiming to limit visual shift, and then train detectors in a supervised manner. Nevertheless, it remains difficult to guarantee the quality of generated images, especially in some extreme cases, which may hurt the adapted results. Alternatively, feature-level adaptation based methods mitigate the domain shift by aligning the features across domains. Such methods are more convenient to apply and deliver competitive scores, making them the dominant line of work in the community.

In this category, domain adaptive Faster R-CNN [8] is a pioneer. It incorporates both image-level and instance-level feature adaptation into the detection model. In [51], strong-weak feature adaptation is performed at the image level. This method mainly makes use of the focal loss to transfer hard-to-classify examples, since the knowledge in them is supposed to be more intrinsic for both domains. Although they deliver promising performance, image-level or instance-level feature adaptation is not accurate enough, since objects of interest are locally distributed with diverse shapes. [74] introduces K-means clustering to mine transferable regions and optimize the adaptation quality. While attractive, this method highly depends on the pre-defined cluster number and the size of the grouped regions, which is not flexible, particularly for real-world applications. Furthermore, in the object detection task, there are generally multiple types of objects, and each has its own sample distribution. But these methods do not take such information into account and regard the distributions of different objects as a whole for adaptation, thereby leaving space for improvement.

In this paper, we present a coarse-to-fine feature adaptation framework for cross-domain object detection. The main idea is shown in Figure 1. Firstly, considering that foregrounds between different domains share more common features compared to backgrounds [30], we propose an Attention-based Region Transfer (ART) module to highlight the importance of foregrounds, which works in a class-agnostic coarse way. We extract foreground objects of interest by leveraging the attention mechanism on high-level features, and underline them during feature distribution alignment. Through multi-layer adversarial learning, effective domain confusion can be performed with the complex detection model. Secondly, category information of objects can further refine the preceding feature adaptation, and in this case it is necessary to distinguish different kinds of foreground objects. Meanwhile, there is no guarantee that foregrounds of source and target images in the same batch have consistent categories, probably incurring object mismatches in some mini-batches and making semantic alignment in UDA rather tough. Consequently, we propose a Prototype-based Semantic Alignment (PSA) module to build the global prototype for each category across domains. The prototypes are adaptively updated at each iteration, thus suppressing the negative influence of false pseudo-labels and class mismatches.

In summary, the contributions are three-fold as follows:

  • A new coarse-to-fine adaptation approach is designed for cross-domain two-stage object detection, which progressively and accurately aligns deep features.

  • Two adaptation modules, \ie, Attention-based Region Transfer (ART) and Prototype-based Semantic Alignment (PSA), are proposed to learn domain knowledge in foreground regions with category information.

  • Extensive experiments are carried out in three major benchmarks in terms of some typical scenarios, and the results are state-of-the-art, demonstrating the effectiveness of the proposed approach.

2 Related Work

Object Detection.

Object detection is a fundamental step in computer vision and has received increasing attention over the past decades. Most traditional methods [63, 10, 13] depend on handcrafted features and sophisticated pipelines. In the era of deep learning, object detectors can be mainly split into one-stage detectors [48, 37, 34, 36] and two-stage ones [17, 18, 49, 33]. However, these generic detectors do not address the domain shift problem that hurts detection performance in real-world scenes.

Domain Adaptation.

Domain adaptation [2, 1] aims to boost performance in the target domain by leveraging common knowledge from the source domain, and it has been widely studied in many visual tasks [67, 12, 72, 41, 6, 14]. With the advent of CNNs, many solutions reduce domain shift by learning domain-invariant features. Methods along this line can be divided into two streams: criterion-based [61, 39, 57] and adversarial learning-based [15, 60, 3, 46]. The former aligns the domain distributions by minimizing some statistical distance between deep features, and the latter introduces a domain classifier to construct minimax optimization with the feature extractor. Despite their great success, the majority of them can only handle relatively simple tasks, such as image classification.

Figure 2: Overview of the proposed feature adaptation framework. We address the problem of domain shift on foreground regions by a coarse-to-fine scheme with the ART and PSA modules. First, we utilize the attention map learned from the RPN module to localize foregrounds. Combined with multiple domain classifiers, the ART module puts more emphasis on aligning feature distributions of foreground regions, which achieves a coarse-grained adaptation in a category-agnostic way. Second, the PSA module makes use of ground truth labels (for source) and pseudo labels (for target) to maintain global prototypes for each category, and delivers fine-grained adaptation on foreground regions in a category-aware manner.

Cross-domain Object Detection.

A number of traditional studies [66, 62, 43, 70] focus on adapting a specific model (\eg, for pedestrian or vehicle detection) across domains. Later, [47] proposes the adaptive R-CNN by subspace alignment [19]. More recently, the methods can be mainly grouped into four categories: (1) Feature-level based: [8] presents domain adaptive Faster R-CNN to alleviate image-level and instance-level shifts, and [22, 68] extend this idea to multi-layer feature adaptation. [51] exploits strong-weak alignment components to attend strong matching in local features and weak matching in global features. [74] mines discriminative regions that contain objects of interest and aligns their features across domains. (2) SSL based: [4] integrates object relations into the measure of consistency cost with the mean teacher [58] model. (3) Pixel-level based: [25, 50] employ CycleGAN to translate the source domain to the target-like style. [31] uses domain diversification and multi-domain invariant representation learning to address the imperfect translation and source-biased problem. (4) Others: [29] establishes a robust learning framework that formulates the cross-domain detection problem as training with noisy labels. [30] introduces weak self-training and adversarial background score regularization for domain adaptive one-stage object detection. [71] minimizes the Wasserstein distance to improve the stability of adaptation. [54] explores a gradient detach based stacked complementary loss to adapt detectors.

As mentioned, feature-level adaptation is the main branch in cross-domain object detection, and its performance is currently limited by inaccurate feature alignment. The proposed method concentrates on two-stage detectors and substantially improves the quality of feature alignment by a coarse-to-fine scheme, where the ART module learns the adapted importance of foreground areas and the PSA module encodes the distribution property of each class.

3 Method

3.1 Problem Formulation

In the task of cross-domain object detection, we are given a labeled source domain $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$, where $x_i^s$ and $y_i^s$ denote the $i$-th image and its corresponding labels, \ie, the coordinates of the bounding boxes and their associated categories, respectively. In addition, we have access to an unlabeled target domain $\mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}$. We assume that the source and target samples are drawn from different distributions (\ie, $P_s \neq P_t$) but the categories are exactly the same. The goal is to improve the detection performance in $\mathcal{D}_t$ using the knowledge in $\mathcal{D}_s$.

3.2 Framework Overview

As shown in Figure 2, we introduce a feature adaptation framework for cross-domain object detection, which contains a detection network and two adaptation modules.

Detection Network.

We select the reputed and powerful Faster R-CNN [49] model as our base detector. Faster R-CNN is a two-stage detector that consists of three major components: 1) a backbone network that extracts image features, 2) a Region Proposal Network (RPN) that simultaneously predicts object bounds and objectness scores, and 3) a Region-of-Interest (RoI) head, including a bounding box regressor and a classifier for further refinement. The overall loss function of Faster R-CNN is defined as:

$\mathcal{L}_{det} = \mathcal{L}_{rpn} + \mathcal{L}_{reg} + \mathcal{L}_{cls}$    (1)

where $\mathcal{L}_{rpn}$, $\mathcal{L}_{reg}$ and $\mathcal{L}_{cls}$ are the loss functions for the RPN, the RoI-based regressor and the classifier, respectively.

Adaptation Modules.

Different from most of the existing studies which typically reduce domain shift in the entire feature space, we propose to conduct feature alignment on foregrounds that are supposed to share more common properties across domains. Meanwhile, in contrast to current methods that regard the samples of all objects as a whole, we argue that the category information contributes to this task and thus highlight the distribution of each category to further refine feature alignment. To this end, we design two adaptation modules, \ie, Attention-based Region Transfer (ART) and Prototype-based Semantic Alignment (PSA), to fulfill a coarse-to-fine knowledge transfer in foregrounds.

3.3 Attention-based Region Transfer

The ART module is designed to raise more attention to align the distributions across two domains within the regions of foregrounds. It is composed of two parts: the domain classifiers and the attention mechanism.

To align the feature distributions across domains, we integrate multiple domain classifiers $\{D_k\}$ into the last three convolution blocks of the backbone network $F$, where a two-player minimax game is constructed. Specifically, the domain classifiers try to distinguish which domain the features come from, while the backbone network aims to confuse the classifiers. In practice, $F$ and $D_k$ are connected by the Gradient Reverse Layer (GRL) [15], which reverses the gradients that flow through $F$. When the training process converges, $F$ tends to extract domain-invariant feature representations. Formally, the objective of adversarial learning in the $k$-th convolution block can be written as:

$\max_{\theta_F} \min_{\theta_{D_k}} \mathcal{L}_{adv}^{k} = -\sum_{u,v} \Big[ d \log D_k\big(F_k(x)\big)^{(u,v)} + (1-d) \log \Big(1 - D_k\big(F_k(x)\big)^{(u,v)}\Big) \Big]$    (2)

where $\theta_F$ and $\theta_{D_k}$ are the parameters of $F$ and $D_k$ respectively, $d$ is the domain label ($d=1$ for source images and $d=0$ for target images), and $D_k(F_k(x))^{(u,v)}$ stands for the probability of the feature in location $(u,v)$ coming from the source domain.
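For concreteness, the gradient-reversal connection between the backbone and a per-block domain classifier can be sketched as below in PyTorch. This is a minimal illustration under our own assumptions, not the authors' released code; the names (`grad_reverse`, `DomainClassifier`) and the classifier architecture are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class _GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return _GradReverse.apply(x, lambd)

class DomainClassifier(nn.Module):
    """Pixel-wise domain classifier D_k: predicts the probability that each location comes from the source domain."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 256, kernel_size=1)
        self.conv2 = nn.Conv2d(256, 1, kernel_size=1)

    def forward(self, feat, lambd=1.0):
        x = grad_reverse(feat, lambd)        # reversed gradients flow back into the backbone F
        x = F.relu(self.conv1(x))
        return torch.sigmoid(self.conv2(x))  # shape: (N, 1, H_k, W_k)
```

The per-location probabilities produced by such a classifier would feed the binary cross-entropy terms of Eq. (2), with source locations labeled 1 and target locations labeled 0.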

Recall that the detection task is required to localize and classify objects, and RoIs are usually more important than backgrounds. However, the domain classifiers align all the spatial locations of the whole image without focus, which probably degrades adaptation performance. To deal with this problem, we propose an attention mechanism to achieve foreground-aware distribution alignment. As mentioned in [49], the RPN in Faster R-CNN serves as the attention to tell the detection model where to look, and we naturally utilize the high-level feature in RPN to generate the attention map, as shown in Figure 3. To be specific, given an image from an arbitrary domain, we denote $F_{rpn} \in \mathbb{R}^{H \times W \times C}$ as the output feature map of the convolutional layer in the RPN module, where $H \times W$ and $C$ are the spatial dimensions and the number of channels of the feature map, respectively. Then, we construct a spatial attention map by averaging activation values across the channel dimension. Further, we filter out (set to zero) those values that are less than a given threshold, which are more likely to belong to background regions. The attention map is formulated as:

$A' = \sigma\Big(\frac{1}{C} \sum_{c=1}^{C} F_{rpn}^{c}\Big)$    (3)
$M = \mathbb{I}\big(A' > T\big)$    (4)
$A = A' \odot M$    (5)

where $A'$ stands for the attention map before filtering, $\sigma$ is the sigmoid function and $\mathbb{I}$ is the indicator function. $F_{rpn}^{c}$ represents the $c$-th channel of the feature map, and $\odot$ denotes element-wise multiplication. The threshold $T$ is set to the mean value of $A'$.
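A minimal sketch of how such an attention map could be computed from an RPN feature map is given below (PyTorch); the tensor layout (N, C, H, W) and the function name `rpn_attention` are our assumptions rather than details from the paper.

```python
import torch

def rpn_attention(rpn_feat: torch.Tensor) -> torch.Tensor:
    """Compute the filtered spatial attention map A from an RPN feature map of shape (N, C, H, W)."""
    a_prime = torch.sigmoid(rpn_feat.mean(dim=1, keepdim=True))   # Eq. (3): channel-wise average + sigmoid
    threshold = a_prime.mean(dim=(2, 3), keepdim=True)            # T: mean value of A' per image
    mask = (a_prime > threshold).float()                          # Eq. (4): indicator of likely foreground
    return a_prime * mask                                         # Eq. (5): element-wise filtering
```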

Figure 3: Illustration of the attention mechanism. We first extract the feature map from the RPN module. Then, we construct a spatial attention map by averaging values across the channel dimension. At last, filtering is applied to suppress the noise.

As the size of the attention map is not compatible with the features in different convolution blocks, we adopt bilinear interpolation to perform up-sampling, thus producing the corresponding attention maps. Since the attention map is not always accurate, a foreground region mistaken for background would have its attention weight set to zero and could not contribute to adaptation. Inspired by the residual attention network in [65], we add a skip connection to the attention map to enhance its performance.

The total objective of the ART module is defined as:

$\mathcal{L}_{ART} = \sum_{k=1}^{3} \sum_{u,v} \Big(1 + U_k(A)^{(u,v)}\Big) \, \mathcal{L}_{adv}^{k,(u,v)}$    (6)

where $U_k(\cdot)$ is the up-sampling operation matching the attention map to the resolution of the $k$-th convolution block, $\mathcal{L}_{adv}^{k,(u,v)}$ stands for the adversarial loss on pixel $(u,v)$ in the $k$-th convolution block, and the constant 1 comes from the skip connection. Combining adversarial learning with the attention mechanism, the ART module aligns the feature distributions of foreground regions that are more transferable for the detection task.
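Putting the attention map and the per-location adversarial losses together, the ART objective of Eq. (6) could be assembled roughly as follows. This is a hedged sketch: `adv_loss_maps` is a hypothetical list of per-location loss maps from the three domain classifiers, and the bilinear up-sampling and the +1 skip connection follow the description above.

```python
import torch
import torch.nn.functional as F

def art_loss(adv_loss_maps, attention):
    """adv_loss_maps: list of per-location adversarial losses, each of shape (N, 1, H_k, W_k).
    attention: filtered RPN attention map A of shape (N, 1, H_a, W_a)."""
    total = 0.0
    for loss_map in adv_loss_maps:
        # Up-sample A to the spatial size of the k-th block and add the skip connection (+1),
        # so that mis-filtered foreground regions still receive a non-zero weight.
        weight = 1.0 + F.interpolate(attention, size=loss_map.shape[-2:],
                                     mode="bilinear", align_corners=False)
        total = total + (weight * loss_map).sum()
    return total
```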

3.4 Prototype-based Semantic Alignment

Since the attention map from RPN carries no information about classification, the ART module aligns the feature distributions of foregrounds in a category-agnostic way. To achieve class-aware semantic alignment, a straightforward method is to train domain classifiers for each category. Nevertheless, there are two main disadvantages: (1) training multiple class-specific classifiers is inefficient; (2) false pseudo-labels (\eg, backgrounds or misclassified foregrounds) occurred in the target domain may hurt the performance of semantic alignment.

Inspired by the prototype-based methods in few-shot learning [56] and cross-domain image classification [69, 5, 45], we propose the PSA module to handle the above problems. Instead of directly training classifiers, PSA tries to minimize the distance between the pair of prototypes with the same category across domains, thus maintaining the semantic consistency in the feature space. Formally, the prototypes can be defined as:

$P_s^{c} = \frac{1}{|R_s^{c}|} \sum_{r \in R_s^{c}} f(r)$    (7)
$P_t^{c} = \frac{1}{|R_t^{c}|} \sum_{r \in R_t^{c}} f(r)$    (8)

where $P_s^{c}$ and $P_t^{c}$ represent the prototypes of the $c$-th category in the source and target domain respectively, and $f(r)$ denotes the feature of foreground region $r$ after the second fully-connected (FC) layer in the RoI head. We use the ground truth to extract the foreground regions $R_s^{c}$ in the source domain. Due to the absence of target annotations, we employ the category predictions provided by the RoI head as the pseudo labels to obtain $R_t^{c}$ in the target domain. $|R_s^{c}|$ and $|R_t^{c}|$ indicate the numbers of regions.
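The per-category prototypes of Eqs. (7) and (8) amount to class-wise means of RoI features; a simple sketch is given below, where `feats` (RoI features after the second FC layer) and `labels` (ground truth or pseudo labels) are assumed inputs and the zero-vector fallback for empty classes is our own convention.

```python
import torch

def compute_prototypes(feats: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """feats: (R, D) foreground RoI features; labels: (R,) category indices in [0, num_classes).
    Returns a (num_classes, D) tensor of prototypes (zero vector if a class has no region)."""
    protos = feats.new_zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)   # Eq. (7)/(8): mean feature of class-c regions
    return protos
```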

The benefits of prototypes are two-fold: (1) the prototypes have no extra trainable parameters and can be calculated in linear time; (2) the negative influence of false pseudo-labels can be suppressed by the correct ones, whose number is much larger, when generating the prototypes. It should be noted that the prototypes above are built over all samples. In the training process, the size of each mini-batch is usually small (\eg, 1 or 2) for the detection task, and the foreground objects of source and target images in the same batch may have inconsistent categories, making categorical alignment impractical for all classes in that batch. For example, two images (one for each domain) are randomly selected for training, but Car only appears in the source image. As a consequence, we cannot align the prototypes of Car across domains in this batch.

Input: Labeled source domain $\mathcal{D}_s$.
   Unlabeled target domain $\mathcal{D}_t$.
   Batch size $B$. Category number $C$.
Output: An adaptive detector $G$.
1 Calculate the initial global prototypes $GP_s^{c}(0)$ and $GP_t^{c}(0)$ using the detector pretrained on $\mathcal{D}_s$
2 for $k \leftarrow 1$ to max_iter do
3       $(x^s, y^s) \leftarrow$ Sample($\mathcal{D}_s$, $B$)
4       $x^t \leftarrow$ Sample($\mathcal{D}_t$, $B$)
5       Supervised Learning:
6       Calculate $\mathcal{L}_{det}$ according to Eq. (1)
7       Coarse-grained Adaptation:
8       Calculate the attention maps $A^s$ and $A^t$ by Eq. (5)
9       Calculate $\mathcal{L}_{ART}$ by Eq. (6)
10       Fine-grained Adaptation:
11       Predict pseudo labels for $x^t$ with the RoI head
12       for $c \leftarrow 1$ to $C$ do
13             Calculate $P_s^{c}(k)$ and $P_t^{c}(k)$ by Eq. (7) and (8)
14             Update $GP_s^{c}(k)$ and $GP_t^{c}(k)$ by Eq. (10)
15       end for
16       Calculate $\mathcal{L}_{PSA}$ according to Eq. (11)
17       Optimize the detection model by Eq. (12)
18 end for
Algorithm 1 The coarse-to-fine feature adaptation framework for cross-domain object detection.

To tackle the problem, we dynamically maintain global prototypes, which are adaptively updated by local prototypes at each mini-batch as follows:

$\alpha = \cos\big(GP^{c}(k-1),\, P^{c}(k)\big)$    (9)
$GP^{c}(k) = (1 - \alpha)\, GP^{c}(k-1) + \alpha\, P^{c}(k)$    (10)

where $\cos(\cdot,\cdot)$ denotes the cosine similarity, $P^{c}(k)$ represents the local prototype of the $c$-th category at the $k$-th iteration (computed within the current mini-batch), and $GP^{c}(k)$ is the corresponding global prototype; the update is applied to the source and target prototypes separately. It is worth noting that we calculate the initial global prototypes by Eq. (7) (for source) and Eq. (8) (for target) based on the model pretrained on the labeled source domain.
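A sketch of the adaptive global-prototype update is shown below; the weighting direction (down-weighting local prototypes that deviate from the global one) follows the reconstruction of Eqs. (9)-(10) above, and the clamping and the guard for classes absent from the mini-batch are our own additions.

```python
import torch
import torch.nn.functional as F

def update_global_prototypes(global_protos: torch.Tensor, local_protos: torch.Tensor) -> torch.Tensor:
    """global_protos, local_protos: (num_classes, D). Classes absent from the mini-batch
    are expected to have a zero local prototype and are left unchanged."""
    alpha = F.cosine_similarity(global_protos, local_protos, dim=1).clamp(min=0.0)  # Eq. (9)
    alpha = alpha.unsqueeze(1)                                                      # (num_classes, 1)
    has_local = (local_protos.abs().sum(dim=1, keepdim=True) > 0).float()
    alpha = alpha * has_local                  # skip classes with no regions in this batch
    return (1.0 - alpha) * global_protos + alpha * local_protos                     # Eq. (10)
```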

We do not directly align the local prototypes, but minimize the distance between the source global prototypes and the target global prototypes to achieve semantic alignment. The objective of the PSA module at the $k$-th iteration can be formulated as follows:

$\mathcal{L}_{PSA}(k) = \sum_{c=1}^{C} \big\| GP_s^{c}(k) - GP_t^{c}(k) \big\|^{2}$    (11)
Cityscapes → FoggyCityscapes
Method Arch. Bus Bicycle Car Motor Person Rider Train Truck mAP mAP* Gain
MTOR [4] R 38.6 35.6 44.0 28.3 30.6 41.4 40.6 21.9 35.1 26.9 8.2
RLDA [29] I 45.3 36.0 49.2 26.9 35.1 42.2 27.0 30.0 36.5 31.9 4.6

DAF [8] V 35.3 27.1 40.5 20.0 25.0 31.0 20.2 22.1 27.6 18.8 8.8
SCDA [74] V 39.0 33.6 48.5 28.0 33.5 38.0 23.3 26.5 33.8 26.2 7.6
MAF [22] V 39.9 33.9 43.9 29.2 28.2 39.5 33.3 23.8 34.0 18.8 15.2
SWDA [51] V 36.2 35.3 43.5 30.0 29.9 42.3 32.6 24.5 34.3 20.3 14.0
DD-MRL [31] V 38.4 32.2 44.3 28.4 30.8 40.5 34.5 27.2 34.6 17.9 16.7
MDA [68] V 41.8 36.5 44.8 30.5 33.2 44.2 28.7 28.2 36.0 22.8 13.2
PDA [25] V 44.1 35.9 54.4 29.1 36.0 45.5 25.8 24.3 36.9 19.6 17.3
Source Only V 25.0 26.8 30.6 15.5 24.1 29.4 4.6 10.6 20.8 - -
3DC (Baseline) V 37.9 37.1 51.6 33.1 32.9 45.6 27.9 28.6 36.8 20.8 16.0
Ours w/o ART V 41.6 35.4 51.5 36.9 33.5 45.2 26.6 28.2 37.4 20.8 16.6
Ours w/o PSA V 45.2 37.3 51.8 33.3 33.9 46.7 25.5 29.6 37.9 20.8 17.1
Ours V 43.2 37.4 52.1 34.7 34.0 46.9 29.9 30.8 38.6 20.8 17.8
Oracle V 49.5 37.0 52.7 36.0 36.1 47.1 56.0 32.1 43.3 - -
Table 1: Results (%) of different methods in the Normal-to-Foggy adaptation scenario. “V”, “R” and “I” represent the VGG16, ResNet50 and Inception-v2 backbones respectively. “Source Only” denotes the Faster R-CNN model trained on the source domain only. “3DC” stands for the Faster R-CNN model integrated with three domain classifiers, which is our baseline method. “Oracle” indicates the model trained on the labeled target domain. mAP* shows the result of “Source Only” for each method, and Gain displays its improvement after adaptation. The best results are bolded and the second best results are underlined among the methods with the VGG16 backbone.

3.5 Network Optimization

The training procedure of our proposed framework integrates three major components, as shown in Algorithm 1.

  1. Supervised Learning. The supervised detection loss $\mathcal{L}_{det}$ is only applied to the labeled source domain $\mathcal{D}_s$.

  2. Coarse-grained Adaptation. We utilize the attention mechanism to extract the foregrounds in images. Then, we focus on aligning the feature distributions of those regions by optimizing $\mathcal{L}_{ART}$.

  3. Fine-grained Adaptation. At first, pseudo labels are predicted in the target domain. We further update the global prototypes for each category adaptively. Finally, semantic alignment on foreground objects is achieved by optimizing $\mathcal{L}_{PSA}$.

With the terms aforementioned, the overall objective is:

$\mathcal{L} = \mathcal{L}_{det} + \lambda_1 \mathcal{L}_{ART} + \lambda_2 \mathcal{L}_{PSA}$    (12)

where $\lambda_1$ and $\lambda_2$ denote the trade-off factors for the ART module and the PSA module, respectively.
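A hedged sketch of how the three terms could be combined in a training step is given below; `det_loss`, `art_loss_value`, and the prototype tensors are assumed to come from the preceding components, the squared-distance form of the PSA term follows the reconstruction of Eq. (11), and the default trade-off values are taken from the implementation details in Section 4.2.

```python
import torch

def total_loss(det_loss: torch.Tensor,
               art_loss_value: torch.Tensor,
               src_global_protos: torch.Tensor,
               tgt_global_protos: torch.Tensor,
               lambda1: float = 1.0,
               lambda2: float = 0.01) -> torch.Tensor:
    """Eq. (12): supervised detection loss + coarse-grained ART term + fine-grained PSA term."""
    psa = ((src_global_protos - tgt_global_protos) ** 2).sum(dim=1).sum()   # Eq. (11)
    return det_loss + lambda1 * art_loss_value + lambda2 * psa
```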

4 Experiments

4.1 Datasets and Scenarios

Datasets. Four datasets are used in evaluation. (1) Cityscapes [9] is a benchmark for semantic urban scene understanding. It contains 2,975 training images and 500 validation images with pixel-level annotations. Since it is not designed for the detection task, we follow [8] to use the tightest rectangle of an instance segmentation mask as the ground truth bounding box. (2) FoggyCityscapes [53] derives from Cityscapes by adding synthetic fog to the original images. Thus, the train/val split and annotations are the same as those in Cityscapes. (3) SIM10k [26] is a synthetic dataset containing 10,000 images, which is rendered from the video game Grand Theft Auto V (GTA5). (4) KITTI [16] is another popular dataset for autonomous driving. It consists of 7,481 labeled images for training.

Scenarios. Following [8], we evaluate the framework under three adaptation scenarios as follows:

(1) Normal-to-Foggy (Cityscapes → FoggyCityscapes). It aims to perform adaptation across different weather conditions. During the training phase, we use the training set of Cityscapes and FoggyCityscapes as the source and target domain respectively. Results are reported on the validation set of FoggyCityscapes.

(2) Synthetic-to-Real (SIM10k → Cityscapes). Synthetic images offer an alternative to alleviate the data annotation problem. To adapt the synthetic scenes to the real one, we utilize the entire SIM10k dataset as the source domain and the training set of Cityscapes as the target domain. Since only Car is annotated in both domains, we report the performance of Car on the validation set of Cityscapes.

(3) Cross-Camera (Cityscapes → KITTI). Images captured by different devices or setups also incur the domain shift problem. To simulate this adaptation, we use the training set of Cityscapes as the source domain and the training set of KITTI as the target domain. Since the classification standards of categories in the two domains are different, we follow [68] to classify {Car, Van} as Car, {Pedestrian, Person sitting} as Person, Tram as Train, and Cyclist as Rider in KITTI. The results are reported on the training set of KITTI, which is the same as in [8, 68].

4.2 Implementation Details

In all experiments, we adopt Faster R-CNN with the VGG16 [55] backbone pre-trained on ImageNet [11]. We resize the shorter side of all images to 600 pixels. The batch size is set to 2, \ie, one image per domain. The detector is trained with SGD for 50k iterations, after which the learning rate is decayed for another 20k iterations. Domain classifiers are trained with the Adam optimizer [32]. The trade-off factor $\lambda_1$ is set to 1.0. Since prototypes in the target domain are unreliable at the beginning, the PSA module is employed after 50k iterations, with $\lambda_2$ set to 0.01. We report mAP with an IoU threshold of 0.5 for evaluation.
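The two-optimizer setup described above (SGD for the detector, Adam for the domain classifiers) could be wired up roughly as follows. This is a sketch under our own assumptions: the modules are placeholders and the learning rates and decay factor are illustrative values, not the paper's exact settings.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the Faster R-CNN detector and the three domain classifiers.
detector = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
domain_classifiers = nn.ModuleList([nn.Conv2d(64, 1, 1) for _ in range(3)])

# Learning rates below are illustrative placeholders, not the paper's exact values.
optimizer_det = torch.optim.SGD(
    [p for p in detector.parameters() if p.requires_grad],
    lr=1e-3, momentum=0.9, weight_decay=5e-4)
optimizer_dom = torch.optim.Adam(domain_classifiers.parameters(), lr=1e-5)

# Step decay after 50k iterations, as described above; the decay factor is also a placeholder.
scheduler_det = torch.optim.lr_scheduler.MultiStepLR(optimizer_det, milestones=[50000], gamma=0.1)
```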

SIM10k → Cityscapes
Method Arch. AP on Car AP* Gain
RLDA [29] I 42.6 31.1 11.5
MTOR [4] R 46.6 39.4 7.2

DAF [8] V 39.0 30.1 8.9
MAF [22] V 41.1 30.1 11.0
SWDA [51] V 42.3 34.6 7.7
MDA [68] V 42.8 34.3 8.5
SCDA [74] V 43.0 34.0 9.0

Source Only V 35.0 - -
3DC (Baseline) V 42.3 35.0 7.3
Ours w/o ART V 42.7 35.0 7.7
Ours w/o PSA V 43.4 35.0 8.4
Ours V 43.8 35.0 8.8

Oracle V 59.9 - -
Table 2: Results (%) of the Synthetic-to-Real adaptation scenario.

4.3 Results

We conduct extensive experiments and make comparisons to the state-of-the-art cross-domain object detection methods, including (1) Semi-Supervised Learning: MTOR [4], (2) Robust Learning: RLDA [29], (3) Feature-level adaptation: DAF [8], SCDA [74], MAF [22], SWDA [51] and MDA [68], and (4) Pixel-level adaptation + Feature-level adaptation: DD-MRL [31] and PDA [25]. Moreover, we also provide ablation studies to validate the effectiveness of each module. Our baseline method is referred to as 3DC, which is the Faster R-CNN model integrated with three domain classifiers. We alternately remove the ART and PSA modules from the entire framework and report the performance. Note that removing ART means we only remove the attention map, while the domain classifiers are still kept.

Normal-to-Foggy.

As shown in Table 1, we achieve an mAP of 38.6% on the weather transfer task, which is the best result among all the counterparts. Since detection performance before adaptation is different for each method, we point out that “Gain” is also a key criterion for fair comparison, which has been ignored by previous work. In particular, we achieve a remarkable increase of +17.8% over the source only model. Among all the feature-level adaptation methods, we improve the mAP by +2.6% compared to MDA [68]. Although we do not leverage extra pixel-level adaptation, our method still outperforms the previous state-of-the-art PDA [25] by +1.7%. Besides, with the help of coarse-to-fine feature adaptation on foregrounds, the proposed method brings improvements over the 3DC model in all categories, which shows that feature alignment on foregrounds can boost performance. Additionally, we find that the proposed method is comparable to or even better than the oracle model in several categories. It suggests that performance similar to that of supervised learning can be achieved if knowledge is effectively transferred across domains.

Synthetic-to-Real.

Table 2 displays the results on the Synthetic-to-Real task. We obtain an average precision of 43.8% on Car, a slight gain of +0.8% over SCDA [74]. The reason is that knowledge transfer is much easier for a single category, and many other methods can also adapt well. Further, one may wonder why the PSA module is still effective for single-category adaptation; we argue that it serves as another attention mechanism that focuses on foreground regions, which conveys some complementary clues to the ART module in this case.

Cross-Camera.

In Table 3, we illustrate the performance comparison on the cross-camera task. The proposed method reaches an mAP of 41.0% with a gain of +7.6% over the non-adaptive model. Since the scenes are similar across domains and Car samples dominate both datasets, the score on Car is already high for the source only model. Compared with DAF [8] and MDA [68], our method reduces the influence of negative transfer in Car detection. Meanwhile, our method also outperforms the baseline model (3DC) in the rest of the categories.

Cityscapes → KITTI
Method Arch. Person Rider Car Truck Train mAP mAP* Gain
DAF [8] V 40.9 16.1 70.3 23.6 21.2 34.4 34.0 0.4
MDA [68] V 53.3 24.5 72.2 28.7 25.3 40.7 34.0 6.7
Source Only V 48.1 23.2 74.3 12.2 9.2 33.4 - -
3DC (Baseline) V 45.8 27.0 73.9 26.4 18.4 38.3 33.4 4.9
Ours w/o ART V 50.2 27.3 73.2 29.5 17.1 39.5 33.4 6.1
Ours w/o PSA V 50.5 27.8 73.3 26.8 20.5 39.8 33.4 6.4
Ours V 50.4 29.7 73.6 29.7 21.6 41.0 33.4 7.6
Oracle V 71.1 86.6 88.4 90.7 90.1 85.4 - -
Table 3: Results (%) of the Cross-Camera adaptation scenario.
Figure 4: Qualitative results on the Normal-to-Foggy adaptation scenario. (a)-(c): The detection results of the Source Only model, SWDA and the proposed method. (d): Visualization of the corresponding attention maps (best viewed by zooming in).

4.4 Further Analysis

Feature distribution discrepancy of foregrounds.

The theoretical result in [2] suggests that the $\mathcal{A}$-distance can be used as a metric of domain discrepancy. In practice, we calculate the Proxy $\mathcal{A}$-distance to approximate it, which is defined as $d_{\mathcal{A}} = 2(1 - 2\epsilon)$, where $\epsilon$ is the generalization error of a binary classifier (a linear SVM in our experiments) that tries to distinguish which domain the input features come from. Figure 5 displays the distances for each category on the Normal-to-Foggy task with the features of ground truth foregrounds extracted from the models of Source Only, SWDA and Ours. Compared with the non-adaptive model, SWDA and Ours reduce the distances in all the categories by large margins, which demonstrates the necessity of domain adaptation. Besides, since we explicitly optimize the prototypes of each category by PSA, we achieve a smaller feature distribution discrepancy of foregrounds than the others do.
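The Proxy $\mathcal{A}$-distance computation described above can be sketched with scikit-learn as follows; the 50/50 train/test split and the SVM hyperparameters are our assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def proxy_a_distance(src_feats: np.ndarray, tgt_feats: np.ndarray) -> float:
    """Estimate d_A = 2 * (1 - 2 * eps), where eps is the test error of a linear SVM
    trained to separate source features (label 0) from target features (label 1)."""
    X = np.concatenate([src_feats, tgt_feats], axis=0)
    y = np.concatenate([np.zeros(len(src_feats)), np.ones(len(tgt_feats))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
    clf = LinearSVC(C=1.0, max_iter=10000).fit(X_tr, y_tr)
    eps = 1.0 - clf.score(X_te, y_te)    # generalization error of the domain classifier
    return 2.0 * (1.0 - 2.0 * eps)
```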

Figure 5: Feature distribution discrepancy of foregrounds.

Error analysis of highest confident detections.

To further validate the effect of the proposed framework for cross-domain object detection, we analyze the errors of the Source Only, SWDA and Ours models caused by the highest confident detections on the Normal-to-Foggy task. We follow [24, 8, 4] to categorize the detections into three error types: 1) Correct (IoU with GT ≥ 0.5), 2) Mislocalization (0.3 ≤ IoU with GT < 0.5), and 3) Background (IoU with GT < 0.3). For each category, we select the top-$K$ predictions to analyze the error types, where $K$ is the number of ground truths in this category. We report the mean percentage of each type across all categories in Figure 6. We can see that the Source Only model tends to take most backgrounds as false positives (green color). Compared with SWDA, we improve the percentage of correct detections (blue color) from 39.3% to 43.0% and reduce the other error types simultaneously. The results indicate that the proposed framework can effectively increase true positives and reduce false positives, resulting in better detection performance.
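For reference, the categorization of the top-scoring detections into the three error types could be implemented as below; the per-detection IoU values are assumed to be precomputed against the best-matching ground-truth box of the same class.

```python
def classify_detection_errors(ious):
    """ious: IoU of each top-K detection with its best-matching ground truth of the same class.
    Returns counts of the three error types used in the analysis."""
    counts = {"correct": 0, "mislocalization": 0, "background": 0}
    for iou in ious:
        if iou >= 0.5:
            counts["correct"] += 1            # Correct: IoU >= 0.5
        elif iou >= 0.3:
            counts["mislocalization"] += 1    # Mislocalization: 0.3 <= IoU < 0.5
        else:
            counts["background"] += 1         # Background: IoU < 0.3
    return counts
```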

Qualitative results.

Figure 4 shows some qualitative results. Due to the domain shift problem, the Source Only model only detects some salient objects, as shown in (a). From (b) to (c), we can observe that the proposed method not only increases true positives (detects more cars in the first and second rows), but also reduces false positives (discards persons in the third row), which is consistent with the previous analysis. Further, we visualize the attention maps generated from the ART module. Despite some noise, the attention maps locate the foreground regions well, which is beneficial to knowledge transfer across domains.

Figure 6: Error analysis of highest confident detections.

5 Conclusion

In this paper, we present a novel coarse-to-fine feature adaptation approach to address the issue of cross-domain object detection. The proposed framework achieves this goal with two delicately designed modules, \ie, ART and PSA. The former highlights the importance of the foreground regions figured out by the attention mechanism in a category-agnostic way, and aligns their feature distributions across domains. The latter takes advantage of prototypes to perform fine-grained adaptation of foregrounds at the semantic level. Comprehensive experiments are conducted on various adaptation scenarios and state-of-the-art results are reached, demonstrating the effectiveness of the proposed approach.

Acknowledgment.

This work is funded by the National Key Research and Development Plan of China under Grant 2018AAA0102301, the Research Program of State Key Laboratory of Software Development Environment (SKLSDE-2019ZX-03), and the Fundamental Research Funds for the Central Universities.

References

  1. S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira and J. W. Vaughan (2010) A theory of learning from different domains. Machine Learning 79 (1-2), pp. 151–175. Cited by: §1, §2.
  2. S. Ben-David, J. Blitzer, K. Crammer and F. Pereira (2007) Analysis of representations for domain adaptation. In NeurIPS, Cited by: §2, §4.4.
  3. K. Bousmalis, N. Silberman, D. Dohan, D. Erhan and D. Krishnan (2017) Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, Cited by: §2.
  4. Q. Cai, Y. Pan, C. Ngo, X. Tian, L. Duan and T. Yao (2019) Exploring object relation in mean teacher for cross-domain detection. In CVPR, Cited by: §1, §2, Table 1, §4.3, §4.4, Table 2.
  5. C. Chen, W. Xie, W. Huang, Y. Rong, X. Ding, Y. Huang, T. Xu and J. Huang (2019) Progressive feature alignment for unsupervised domain adaptation. In CVPR, Cited by: §3.4.
  6. M. Chen, Z. Kira, G. AlRegib, J. Yoo, R. Chen and J. Zheng (2019) Temporal attentive alignment for large-scale video domain adaptation. In ICCV, Cited by: §2.
  7. M. Chen, H. Xue and D. Cai (2019) Domain adaptation for semantic segmentation with maximum squares loss. In ICCV, Cited by: §1.
  8. Y. Chen, W. Li, C. Sakaridis, D. Dai and L. Van Gool (2018) Domain adaptive faster r-cnn for object detection in the wild. In CVPR, Cited by: §1, §1, §2, Table 1, §4.1, §4.1, §4.1, §4.3, §4.3, §4.4, Table 2, Table 3.
  9. M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In CVPR, Cited by: §4.1.
  10. N. Dalal and B. Triggs (2005) Histograms of oriented gradients for human detection. In CVPR, Cited by: §2.
  11. J. Deng, W. Dong, R. Socher, L. Li, K. Li and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR, Cited by: §1, §4.2.
  12. Z. Ding, S. Li, M. Shao and Y. Fu (2018) Graph adaptive knowledge transfer for unsupervised domain adaptation. In ECCV, Cited by: §1, §2.
  13. P. Felzenszwalb, D. McAllester and D. Ramanan (2008) A discriminatively trained, multiscale, deformable part model. In CVPR, Cited by: §2.
  14. Y. Fu, Y. Wei, G. Wang, Y. Zhou, H. Shi and T. S. Huang (2019) Self-similarity grouping: a simple unsupervised cross domain adaptation approach for person re-identification. In ICCV, Cited by: §2.
  15. Y. Ganin and V. Lempitsky (2015) Unsupervised domain adaptation by backpropagation. In ICML, Cited by: §1, §2, §3.3.
  16. A. Geiger, P. Lenz and R. Urtasun (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, Cited by: §4.1.
  17. R. Girshick, J. Donahue, T. Darrell and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, Cited by: §2.
  18. R. Girshick (2015) Fast r-cnn. In ICCV, Cited by: §2.
  19. R. Gopalan, R. Li and R. Chellappa (2011) Domain adaptation for object recognition: an unsupervised approach. In ICCV, Cited by: §2.
  20. A. Gretton, K. Borgwardt, M. J. Rasch, B. Scholkopf and A. J. Smola (2008) A kernel method for the two-sample problem. In NeurIPS, Cited by: §1.
  21. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1.
  22. Z. He and L. Zhang (2019) Multi-adversarial faster-rcnn for unrestricted object detection. In ICCV, Cited by: §1, §2, Table 1, §4.3, Table 2.
  23. J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros and T. Darrell (2018) Cycada: cycle-consistent adversarial domain adaptation. In ICML, Cited by: §1.
  24. D. Hoiem, Y. Chodpathumwan and Q. Dai (2012) Diagnosing error in object detectors. In ECCV, Cited by: §4.4.
  25. H. Hsu, W. Hung, H. Tseng, C. Yao, Y. Tsai, M. Singh and M. Yang (2019) Progressive domain adaptation for object detection. In CVPR Workshops, Cited by: §1, §2, Table 1, §4.3, §4.3.
  26. M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, K. Rosaen and R. Vasudevan (2017) Driving in the matrix: can virtual worlds replace human-generated annotations for real world tasks?. In ICRA, Cited by: §4.1.
  27. G. Kang, L. Jiang, Y. Yang and A. G. Hauptmann (2019) Contrastive adaptation network for unsupervised domain adaptation. In CVPR, Cited by: §1.
  28. Z. Ke, D. Wang, Q. Yan, J. Ren and R. W.H. Lau (2019) Dual student: breaking the limits of the teacher in semi-supervised learning. In ICCV, Cited by: §1.
  29. M. Khodabandeh, A. Vahdat, M. Ranjbar and W. G. Macready (2019) A robust learning approach to domain adaptive object detection. In ICCV, Cited by: §2, Table 1, §4.3, Table 2.
  30. S. Kim, J. Choi, T. Kim and C. Kim (2019) Self-training and adversarial background regularization for unsupervised domain adaptive one-stage object detection. In ICCV, Cited by: §1, §2.
  31. T. Kim, M. Jeong, S. Kim, S. Choi and C. Kim (2019) Diversify and match: a domain adaptive representation learning paradigm for object detection. In CVPR, Cited by: §1, §2, Table 1, §4.3.
  32. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.2.
  33. T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan and S. Belongie (2017) Feature pyramid networks for object detection. In CVPR, Cited by: §2.
  34. T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár (2017) Focal loss for dense object detection. In ICCV, Cited by: §2.
  35. T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár and C. L. Zitnick (2014) Microsoft coco: common objects in context. In ECCV, Cited by: §1.
  36. S. Liu, D. Huang and Y. Wang (2018) Receptive field block net for accurate and fast object detection. In ECCV, Cited by: §2.
  37. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu and A. C. Berg (2016) SSD: single shot multibox detector. In ECCV, Cited by: §2.
  38. J. Long, E. Shelhamer and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR, Cited by: §1.
  39. M. Long, Y. Cao, J. Wang and M. I. Jordan (2015) Learning transferable features with deep adaptation networks. In ICML, Cited by: §2.
  40. M. Long, J. Wang, G. Ding, J. Sun and P. S. Yu (2014) Transfer feature learning with joint distribution adaptation. In ICCV, Cited by: §1.
  41. Y. Luo, P. Liu, T. Guan, J. Yu and Y. Yang (2019) Significance-aware information bottleneck for domain adaptive semantic segmentation. In ICCV, Cited by: §2.
  42. Y. Luo, L. Zheng, T. Guan, J. Yu and Y. Yang (2019) Taking a closer look at domain shift: category-level adversaries for semantics consistent domain adaptation. In CVPR, Cited by: §1.
  43. F. Mirrashed, V. I. Morariu, B. Siddiquie, R. S. Feris and L. S. Davis (2013) Domain adaptive object detection. In WACV, Cited by: §2.
  44. S. J. Pan and Q. Yang (2010) A survey on transfer learning. TKDE 22 (10), pp. 1345–1359. Cited by: §1.
  45. Y. Pan, T. Yao, Y. Li, Y. Wang, C. Ngo and T. Mei (2019) Transferrable prototypical networks for unsupervised domain adaptation. In CVPR, Cited by: §3.4.
  46. Z. Pei, Z. Cao, M. Long and J. Wang (2018) Multi-adversarial domain adaptation. In AAAI, Cited by: §2.
  47. A. Raj, V. Namboodiri and T. Tuytelaars (2015) Subspace alignment based domain adaptation for rcnn detector. In BMVC, Cited by: §2.
  48. J. Redmon, S. Divvala, R. Girshick and A. Farhadi (2016) You only look once: unified, real-time object detection. In CVPR, Cited by: §2.
  49. S. Ren, K. He, R. Girshick and J. Sun (2017) Faster r-cnn: towards real-time object detection with region proposal networks. T-PAMI 39 (6), pp. 1137–1149. Cited by: §1, §2, §3.2, §3.3.
  50. A. L. Rodriguez and K. Mikolajczyk (2019) Domain adaptation for object detection via style consistency. In BMVC, Cited by: §1, §2.
  51. K. Saito, Y. Ushiku, T. Harada and K. Saenko (2019) Strong-weak distribution alignment for adaptive object detection. In CVPR, Cited by: §1, §1, §2, Table 1, §4.3, Table 2.
  52. K. Saito, Y. Ushiku and T. Harada (2017) Asymmetric tri-training for unsupervised domain adaptation. In ICML, Cited by: §1.
  53. C. Sakaridis, D. Dai and L. Van Gool (2018) Semantic foggy scene understanding with synthetic data. IJCV 126 (9), pp. 973–992. Cited by: §4.1.
  54. Z. Shen, H. Maheshwari, W. Yao and M. Savvides (2019) SCL: towards accurate domain adaptive object detection via gradient detach based stacked complementary losses. arXiv preprint arXiv:1911.02559. Cited by: §2.
  55. K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §4.2.
  56. J. Snell, K. Swersky and R. Zemel (2017) Prototypical networks for few-shot learning. In NeurIPS, Cited by: §3.4.
  57. B. Sun and K. Saenko (2016) Deep coral: correlation alignment for deep domain adaptation. In ECCV, Cited by: §2.
  58. A. Tarvainen and H. Valpola (2017) Mean teachers are better role models: weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, Cited by: §2.
  59. A. Torralba and A. A. Efros (2011) Unbiased look at dataset bias. In CVPR, Cited by: §1.
  60. E. Tzeng, J. Hoffman, K. Saenko and T. Darrell (2017) Adversarial discriminative domain adaptation. In CVPR, Cited by: §2.
  61. E. Tzeng, J. Hoffman, N. Zhang, K. Saenko and T. Darrell (2014) Deep domain confusion: maximizing for domain invariance. arXiv preprint arXiv:1412.3474. Cited by: §2.
  62. D. Vázquez, A. M. López and D. Ponsa (2012) Unsupervised domain adaptation of virtual and real worlds for pedestrian detection. In ICPR, Cited by: §2.
  63. P. Viola and M. Jones (2001) Rapid object detection using a boosted cascade of simple features. In CVPR, Cited by: §2.
  64. T. Vu, H. Jain, M. Bucher, M. Cord and P. Perez (2019) ADVENT: adversarial entropy minimization for domain adaptation in semantic segmentation. In CVPR, Cited by: §1.
  65. F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang and X. Tang (2017) Residual attention network for image classification. In CVPR, Cited by: §3.3.
  66. M. Wang and X. Wang (2011) Automatic adaptation of a generic pedestrian detector to a specific traffic scene. In CVPR, Cited by: §2.
  67. Y. Xia, D. Huang and Y. Wang (2017) Detecting smiles of young children via deep transfer learning. In ICCV Workshops, Cited by: §2.
  68. R. Xie, F. Yu, J. Wang, Y. Wang and L. Zhang (2019) Multi-level domain adaptive learning for cross-domain detection. In ICCV Workshops, Cited by: §1, §2, Table 1, §4.1, §4.3, §4.3, §4.3, Table 2, Table 3.
  69. S. Xie, Z. Zheng, L. Chen and C. Chen (2018) Learning semantic representations for unsupervised domain adaptation. In ICML, Cited by: §3.4.
  70. J. Xu, S. Ramos, D. Vázquez and A. M. López (2014) Domain adaptation of deformable part-based models. T-PAMI 36 (12), pp. 2367–2380. Cited by: §2.
  71. P. Xu, P. Gurram, G. Whipps and R. Chellappa (2019) Wasserstein distance based domain adaptation for object detection. arXiv preprint arXiv:1909.08675. Cited by: §2.
  72. S. Zhao, H. Fu, M. Gong and D. Tao (2019) Geometry-aware symmetric domain adaptation for monocular depth estimation. In CVPR, Cited by: §2.
  73. J. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, Cited by: §1.
  74. X. Zhu, J. Pang, C. Yang, J. Shi and D. Lin (2019) Adapting object detectors via selective cross-domain alignment. In CVPR, Cited by: §1, §1, §2, Table 1, §4.3, §4.3, Table 2.