Progressive Feature Polishing Network for Salient Object Detection

Bo Wang,1,2 Quan Chen,2 Min Zhou,2 Zhiqiang Zhang,2 Xiaogang Jin,1 Kun Gai2
1Zhejiang University,2Alibaba Group
wangbo060@zju.edu.cn,{chenquan.cq,yunqi.zm,zhang.zhiqiang,jingshi.gk}@alibaba-inc.com,jin@cad.zju.edu.cn
Abstract

Feature matters for salient object detection. Existing methods mainly focus on designing sophisticated structures to incorporate multi-level features and filter out cluttered ones. We present the Progressive Feature Polishing Network (PFPN), a simple yet effective framework that progressively polishes the multi-level features to make them more accurate and representative. By employing multiple Feature Polishing Modules (FPMs) in a recurrent manner, our approach detects salient objects with fine details without any post-processing. An FPM updates the features of each level in parallel by directly incorporating all higher level context information. Moreover, it preserves the dimensions and hierarchical structure of the feature maps, which makes it flexible enough to be integrated with any CNN-based model. Empirical experiments show that our results improve monotonically with an increasing number of FPMs. Without bells and whistles, PFPN significantly outperforms state-of-the-art methods on five benchmark datasets under various evaluation metrics.


Introduction

Salient object detection, which aims to extract the most attractive regions in an image, is widely used in computer vision tasks, including video compression [10], visual tracking [3], and image retrieval [5].

Benefiting from the hierarchical structure of CNNs, deep models can extract multi-level features that contain both low-level local details and high-level global semantics. To make use of detailed and semantic information, a straightforward integration of the multi-level context information, such as concatenation or element-wise addition of features from different levels, can be applied. However, as the features can be cluttered and inaccurate at some levels, such simple feature integrations tend to yield suboptimal results. Therefore, most recent progress focuses on designing a sophisticated integration of these multi-level features. We point out the drawbacks of current methods, which are threefold. First, many methods [40, 19] employ a U-Net [24] like structure in which information flows from high levels to low levels during feature aggregation, while BMPM [37] uses bidirectional message passing between consecutive levels to incorporate semantic concepts and fine details. However, these integrations, performed indirectly among multi-level features, may be deficient because of the incurred long-term dependency problem [2]. Second, other works [42, 37, 12] recursively refine the predicted results in a deep-to-shallow manner to supplement details. However, the predicted saliency maps have already lost rich information, so the capability of such refinement is limited. Third, although valuable human priors can be introduced by designing sophisticated structures to incorporate multi-level features, this process can be complicated and the resulting structures may lack generality.

Figure 1: Illustration of results with progressively polished features. (a) Original images. (f) Ground truth. (b)-(e) Saliency maps predicted by PFPN with 0, 1, 2 and 3 FPMs, respectively.

To make full use of semantic and detailed information, we present the novel Progressive Feature Polishing Network (PFPN) for salient object detection, which is simple and tidy, yet effective. First, PFPN adopts a recurrent manner to progressively polish the features at every level in parallel. As the polishing proceeds, cluttered information is gradually dropped and the multi-level features are rectified. Since this parallel structure keeps the feature levels of the backbone, common decoder structures can be easily applied. In each feature polishing step, the features at each level are updated by directly fusing the features of all deeper levels. Therefore, high level semantic information can be integrated directly into all low level features, avoiding the long-term dependency problem. In summary, progressive feature polishing greatly improves the multi-level representations, and even with the simplest concatenation-based feature fusion, PFPN detects salient objects accurately. Our contributions are as follows:

We propose a novel multi-level representation refinement method for salient object detection, as well as a simple and tidy framework PFPN to progressively polish the features in a recurrent manner.

For each polishing step, we propose the FPM to refine the representations, which preserves the dimensions and hierarchical structure of the feature maps. It integrates high level semantic information directly to all low level features to avoid the long-term dependency problem.

Empirical evaluations show that our proposed method significantly outperforms state-of-the-art methods on five benchmark datasets under various evaluation metrics.

Related Work

During the past decades, salient object detection has experienced continuous innovation. In earlier years, saliency prediction methods [13, 22] mainly focus on heuristic saliency priors and low-level handcrafted features, such as center prior, boundary prior, and color contrast.

In recent years, deep convolutional networks have achieved impressive results in various computer vision tasks and have also been introduced to salient object detection. Early attempts at deep saliency models include Li [15], which exploits multi-scale CNN contextual features to predict the saliency of each image segment, and Zhao [41], which utilizes both local and global context to score each superpixel. While these methods achieve obvious improvements over handcrafted ones, assigning a single saliency score to an entire image patch discards spatial information and results in low prediction resolution. To solve this problem, many methods based on the Fully Convolutional Network [21] have been proposed to generate pixel-wise saliency. Roughly speaking, these methods can be categorized into two lines.

Figure 2: Overview of the proposed Progressive Feature Polishing Network (PFPN). PFPN is a deep fully convolutional network composed of four kinds of modules: the Backbone, two Transition Modules (TM), a series of Feature Polishing Modules (FPM) and a Fusion Module (FM). An implementation with ResNet-101 [11] as the backbone and two FPMs is illustrated. For an input image of size 256x256, the multi-level features are first extracted by the backbone and transformed to the same dimension by TM1. Then the features are progressively polished by passing through the two FPMs. Finally, they are upsampled to the same size by TM2 and concatenated to locate the salient objects in FM.

Feature Integration

Although multi-level features extracted by CNNs contain rich information about both high level semantics and low level details, the reduced spatial feature resolution and the likely inaccuracy at some feature levels make designing sophisticated feature integration structures an active line of work. Lin [18] adopts RefineNet to gradually merge high-level and low-level backbone features in a bottom-up manner. Wang [30] proposes to better localize salient objects by exploiting contextual information with an attentive mechanism. Zhuge [42] employs a structure that embeds prior information to generate attentive features and filter out cluttered information. Different from the above methods, which design sophisticated structures for information fusion, we use a simple structure to polish the multi-level features recurrently and in parallel. Meanwhile, the multi-level structure is kept, so the polished features can be used by common decoder modules. Zhang [37] uses bidirectional message passing between consecutive levels to incorporate semantic concepts and fine details. However, passing information only between adjacent feature levels incurs the long-term dependency problem. Our method directly aggregates the features of all higher levels at each polishing step, so high level information is fused into lower level features sufficiently over multiple steps.

Refinement on saliency map

Another line focuses on progressively refining the predicted saliency map by rectifying previous errors. DHSNet [20] first learns a coarse global prediction and then progressively refines the details of the saliency map by integrating local context features. Wang [28] proposes to recurrently apply an encoder-decoder structure to the previously predicted saliency map to perform refinement. DSS [12] adopts short connections to progressively refine saliency maps. CCNN [26] cascades a local saliency refiner to refine the details of the initially predicted saliency map. However, since the predicted results suffer severe information loss compared with the original representations, such refinement can be deficient. Different from these methods, our approach progressively improves the multi-level representations in a recurrent manner instead of attempting to rectify the predicted results. Besides, most previous refinements are performed in a deep-to-shallow manner, where each step exploits only the features specific to that step. In contrast, our method polishes the representations at every level with multi-level context information at each step. Moreover, many methods utilize an extra refinement module, either as a part of the model or as a post-process, to further recover the details of the predicted results, such as DenseCRF [12, 19], BRN [30] and GFRN [42]. In contrast, our method delivers superior performance without such modules.

Approach

In this section, we first describe the architecture overview of the proposed Progressive Feature Polishing Network (PFPN). Then we detail the structure of the Feature Polishing Module (FPM) and the design of feature fusion module. Finally we present some implementation details.

Overview of PFPN

In this work, we propose the Progressive Feature Polishing Network (PFPN) for salient object detection. An overview of this architecture is shown in Fig. 2. Our model consists of four kinds of modules: the Backbone, two Transition Modules (TM), a series of Feature Polishing Modules (FPM), and a Fusion Module (FM).

The input image is first fed into the backbone network to extract multi-scale features. The choice of backbone is flexible; ResNet-101 [11] is used in this paper to be consistent with previous work [42], and results of a VGG-16 [25] version are also reported in the experiments. Specifically, the ResNet-101 [11] network can be grouped into five blocks by a series of downsampling operations with a stride of 2. The outputs of these blocks are used as the multi-level feature maps: Conv-1, Res-2, Res-3, Res-4, Res-5. To reduce feature dimensions and keep the implementation tidy, these feature maps are passed through the first transition module (TM1 in Fig. 2), in which the features at each level are transformed in parallel to the same number of channels (256 in our implementation) by 1x1 convolutions. After obtaining the multi-level feature maps with the same dimension, a series of Feature Polishing Modules (FPMs) are applied successively to improve them progressively. Fig. 2 shows an example with two FPMs. In each FPM, high level features are directly introduced to all low level features to improve them, which is efficient and notably reduces information loss compared with indirect ways. The inputs and outputs of an FPM have the same dimensions and all FPMs share the same network structure. We use different parameters for each FPM in the expectation that they learn to focus on increasingly refined details. Experiments show that the model with two FPMs outperforms the state of the art while running at a fast 20 fps, and that the accuracy of saliency predictions converges with three FPMs, which bring only marginal improvements over two. Then we exploit the second transition module (TM2 in Fig. 2), which consists of a bilinear upsampling followed by a 1x1 convolution, to interpolate all features to the original input resolution and reduce their dimension to 32. At last, a fusion module (FM) is used to integrate the multi-scale features and obtain the final saliency map. Owing to the more accurate representations after the FPMs, the FM is implemented with a simple concatenation strategy. Our network is trained in an end-to-end manner.
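As a rough illustration of this dataflow, the shapes at each stage can be traced in a few lines of plain Python. This is a sketch under our own naming, not the authors' code; the channel widths 256 and 32 are the ones stated above, and the backbone strides follow the standard ResNet layout.

```python
# Sketch (our own naming, not the authors' code) of the tensor shapes
# flowing through PFPN for a 256x256 input with a ResNet-101 backbone.

def pfpn_shapes(input_size=256, levels=5, tm1_channels=256, tm2_channels=32):
    # Backbone: Conv-1, Res-2, ..., Res-5, each downsampling by a stride of 2.
    sizes = [input_size // (2 ** (i + 1)) for i in range(levels)]
    # TM1: parallel 1x1 convs unify every level to the same channel count.
    after_tm1 = [(tm1_channels, s, s) for s in sizes]
    # FPMs preserve both dimensions and hierarchy, so any number of them
    # can be stacked here without changing these shapes.
    # TM2: bilinear upsampling to input resolution + 1x1 conv to 32 channels.
    after_tm2 = [(tm2_channels, input_size, input_size) for _ in sizes]
    # FM: concatenation along channels before the fusion convolutions.
    fm_in_channels = tm2_channels * levels
    return after_tm1, after_tm2, fm_in_channels
```

Tracing the shapes this way makes it clear why the FM can be a simple concatenation: after TM2, all five levels have identical spatial size and only 32 channels each.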

Feature Polishing Module

The Feature Polishing Module (FPM) plays a core role in our proposed PFPN. FPM is a simple yet effective module that can be incorporated with any deep convolutional backbones to polish the feature representation. It keeps the multi-level structure of the representations generated by CNNs, such as the backbone or preceding FPM, and learns to update them with residual connections.

Given the multi-level feature maps, an FPM generates polished feature maps with the same sizes. As shown in Fig. 2, an FPM consists of parallel FPM blocks, each of which corresponds to a separate feature level. Specifically, a series of short connections [12] from the deeper side to the shallower side are adopted. As a result, higher level features with global information are injected directly into lower ones to help better discover the salient regions. Taking FPM1-3 in Fig. 2 as an example, all features of Res-3, Res-4 and Res-5 are utilized through short connections to update the features of Res-3. An FPM also takes advantage of residual connections [11] so that it can update the features and gradually filter out cluttered information, as illustrated by the connection surrounding each FPM block in Fig. 2.


Figure 3: Illustration of the detailed implementation of an FPM block with a residual connection, formally formulated by Eq. 1. This is an example for the third feature level, i.e. FPM1-3 and FPM2-3 in Fig. 2.

The implementation of the FPM block at level i (out of N levels) is formally formulated as Eq. 1:

f_i' = f_i + Conv_{1x1}( Concat[ U(h(f_i)), U(h(f_{i+1})), ..., U(h(f_N)) ] )    (1)

It takes in N - i + 1 feature maps, i.e. f_i, ..., f_N. For each feature map f_j, we first apply a convolutional layer with a 3x3 kernel followed by batch normalization and a ReLU non-linearity, denoted h, to capture context knowledge, and interpolate it bilinearly, denoted U, to the size of f_i. These features are then combined by concatenation along channels and fused by a 1x1 convolutional layer to reduce the dimension, yielding the residual term. Finally, this residual is added element-wise to the original feature map f_i to compute the polished f_i'. An example of this procedure for the third level is illustrated in Fig. 3.
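To make the residual update concrete, here is a toy, single-channel sketch of one FPM block in plain Python. The learned 3x3 conv + BN + ReLU and the 1x1 fusion conv are replaced by identity and averaging placeholders, and bilinear interpolation by nearest-neighbour, so this mirrors only the dataflow, not the learned behavior.

```python
# Toy sketch of one FPM block. Feature maps are plain 2-D grids (one
# channel); the learned layers are replaced by placeholders for clarity.

def upsample_nearest(fmap, out_h, out_w):
    """Nearest-neighbour stand-in for the paper's bilinear upsampling."""
    in_h, in_w = len(fmap), len(fmap[0])
    return [[fmap[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

def fpm_block(features, i):
    """Update level-i features with all deeper levels plus a residual add.

    features: list of 2-D grids ordered shallow -> deep (decreasing size).
    The 3x3 conv + BN + ReLU and the 1x1 fusion conv are approximated by
    an element-wise average of the upsampled maps (placeholder, not learned).
    """
    h, w = len(features[i]), len(features[i][0])
    upsampled = [upsample_nearest(f, h, w) for f in features[i:]]
    k = len(upsampled)
    residual = [[sum(u[r][c] for u in upsampled) / k for c in range(w)]
                for r in range(h)]
    # Residual update: f_i' = f_i + R(f_i, ..., f_N).
    return [[features[i][r][c] + residual[r][c] for c in range(w)]
            for r in range(h)]
```

Note that the block at the deepest level degenerates to a plain residual refinement of its own features, since there are no deeper levels to aggregate.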

Fusion Module

We use the Fusion Module (FM) to finally integrate the multi-level features and detect salient objects. Thanks to the refined features, the FM can be quite simple. As illustrated in Fig. 2, the multi-level features from TM2 are first concatenated and then fed into two successive convolutional layers with 3x3 kernels. At last, a 1x1 convolutional layer followed by a sigmoid function is applied to obtain the final saliency map.

Implementation Details

We use the cross-entropy loss between the final predicted saliency map and the ground truth to train our model end-to-end. Following previous works [12, 19, 42], side outputs are also employed to calculate auxiliary losses. In detail, 1x1 convolutional layers are applied to the multi-level feature maps before the Fusion Module to obtain a series of intermediate results. The total loss is as follows:

L = w_0 * l(P, G) + sum_{i=1}^{N} w_i * l(P_i, G)    (2)

where l is the cross-entropy loss, P is the final result of our model, P_i denotes the i-th intermediate result, and G represents the ground truth. The weights w_i are set empirically to bias towards the final result.
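A minimal sketch of this loss in plain Python, with hypothetical weights; the paper only states that the weights are set empirically to favour the final result.

```python
import math

def bce(pred, gt, eps=1e-7):
    """Pixel-wise binary cross-entropy between a predicted map and ground truth.
    pred: flat list of probabilities in [0, 1]; gt: flat binary list."""
    n = len(pred)
    return -sum(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps)
                for p, g in zip(pred, gt)) / n

def total_loss(final_pred, side_preds, gt, final_w=1.0, side_w=0.5):
    """Weighted sum of the final loss and auxiliary side-output losses.
    The weights final_w and side_w are illustrative placeholders."""
    loss = final_w * bce(final_pred, gt)
    loss += sum(side_w * bce(s, gt) for s in side_preds)
    return loss
```

During training, each side output would be upsampled to the ground-truth resolution before entering the loss; the sketch assumes maps already flattened to the same size.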

We implement our method with the PyTorch [1] framework. The last average pooling layer and the fully connected layer of the pre-trained ResNet-101 [11] are removed. We initialize the backbone layers with weights pre-trained on the ImageNet classification task and randomly initialize the remaining layers. Following the official source code of PiCA [19] and FQN [16], we freeze the BatchNorm statistics of the backbone.

Method ECSSD HKU-IS DUT-O DUTS-TE PASCAL-S
MAE max F mean F S MAE max F mean F S MAE max F mean F S MAE max F mean F S MAE max F mean F S
VGG [25]
RFCN [28] 0.107 0.890 0.811 0.852 0.079 0.892 0.805 0.859 0.111 0.742 0.656 0.764 0.091 0.784 0.728 0.794 0.118 0.837 0.785 0.804
DHS [20] 0.059 0.907 0.885 0.883 0.053 0.890 0.867 0.869 - - - - 0.067 0.807 0.777 0.817 0.094 0.842 0.829 0.802
RAS [4] 0.056 0.921 0.900 0.893 0.045 0.913 0.887 0.887 0.062 0.786 0.762 0.813 0.060 0.831 0.803 0.838 0.104 0.837 0.829 0.785
Amulet [39] 0.059 0.915 0.882 0.893 0.052 0.895 0.856 0.883 0.098 0.742 0.693 0.780 0.085 0.778 0.731 0.804 0.098 0.837 0.838 0.822
DSS [12] 0.052 0.916 0.911 0.882 0.041 0.910 0.904 0.879 0.066 0.771 0.764 0.787 0.057 0.825 0.814 0.824 0.096 0.852 0.849 0.791
PiCA [19] 0.047 0.931 0.899 0.913 0.042 0.921 0.883 0.905 0.068 0.794 0.756 0.820 0.054 0.851 0.809 0.858 0.088 0.880 0.854 0.842
BMPM [37] 0.045 0.929 0.900 0.911 0.039 0.921 0.888 0.905 0.064 0.774 0.744 0.808 0.049 0.851 0.814 0.861 0.074 0.862 0.855 0.834
AFN [8] 0.042 0.935 0.915 0.914 0.036 0.923 0.899 0.905 0.057 0.797 0.776 0.826 0.046 0.862 0.834 0.866 0.076 0.879 0.866 0.841
CPD [33] 0.040 0.936 0.923 0.910 0.033 0.924 0.903 0.904 0.057 0.794 0.780 0.817 0.043 0.864 0.846 0.866 0.074 0.877 0.868 0.832
MLMS [32] 0.044 0.928 0.900 0.911 0.039 0.921 0.888 0.906 0.063 0.774 0.745 0.808 0.048 0.846 0.815 0.861 0.079 0.877 0.857 0.836
ICTBI [31] 0.041 0.921 - - 0.040 0.919 - - 0.060 0.770 - - 0.050 0.830 - - 0.073 0.840 - -
ours 0.040 0.938 0.915 0.916 0.035 0.928 0.902 0.909 0.063 0.777 0.753 0.805 0.042 0.868 0.836 0.864 0.071 0.891 0.866 0.834
ResNet [11]
SRM [29] 0.054 0.917 0.896 0.895 0.046 0.906 0.881 0.886 0.069 0.769 0.744 0.797 0.059 0.827 0.796 0.836 0.085 0.847 0.847 0.830
PiCA [19] 0.047 0.935 0.901 0.918 0.043 0.919 0.880 0.904 0.065 0.803 0.762 0.829 0.051 0.860 0.816 0.868 0.077 0.881 0.851 0.845
DGRL [30] 0.041 0.922 0.912 0.902 0.036 0.910 0.899 0.894 0.062 0.774 0.765 0.805 0.050 0.829 0.820 0.842 0.072 0.872 0.854 0.831
CAPS [38] - - - - 0.057 0.882 0.865 0.852 - - - - 0.060 0.821 0.802 0.819 0.078 0.866 0.860 0.826
BAS [23] 0.037 0.942 0.927 0.916 0.032 0.928 0.911 0.908 0.056 0.805 0.790 0.835 0.047 0.855 0.842 0.865 0.084 0.872 0.861 0.824
ICTBI [31] 0.040 0.926 - - 0.038 0.920 - - 0.059 0.780 - - 0.048 0.836 - - 0.072 0.848 - -
CPD [33] 0.037 0.939 0.924 0.918 0.034 0.925 0.904 0.905 0.056 0.797 0.780 0.824 0.043 0.865 0.844 0.869 0.078 0.876 0.865 0.835
DEF [42] 0.036 - 0.915 - 0.033 - 0.907 - 0.062 - 0.769 - 0.045 - 0.821 - 0.070 - 0.826 -
ours 0.033 0.949 0.926 0.932 0.030 0.939 0.912 0.921 0.053 0.820 0.794 0.842 0.037 0.888 0.858 0.887 0.068 0.892 0.873 0.851
Table 1: Quantitative comparisons with different methods on 5 datasets with MAE (smaller is better), max/mean F-measure score (larger is better) and S-measure (larger is better). The best three results are shown in red, blue and green. The results of our method with two FPMs, based on both ResNet-101 [11] and VGG-16 [25], are reported.
Figure 4: PR curves with different thresholds of our method and other state-of-the-art methods on five benchmark datasets (ECSSD, HKU-IS, DUT-O, DUTS-TE, PASCAL-S).

Experiments

Datasets and metrics

We conduct experiments on five well-known benchmark datasets: ECSSD, HKU-IS, PASCAL-S, DUT-OMRON (DUT-O) and DUTS. ECSSD [35] consists of 1,000 images containing salient objects with complex structures at multiple scales. HKU-IS [15] consists of 4,447 images, most of which are chosen to contain multiple disconnected salient objects. PASCAL-S [17] includes 850 natural images, selected from the PASCAL VOC 2010 segmentation challenge and pixel-wise annotated. DUT-O [36] is a challenging dataset of 5,168 high-quality images, each containing one or more salient objects in fairly complex scenes. DUTS [27] is a large-scale dataset of 15,572 images selected from the ImageNet DET [6] and SUN [34] datasets; it is split into 10,553 images for training and 5,019 for testing. We evaluate the performance of different salient object detection algorithms with four main metrics: precision-recall (PR) curves, F-measure, mean absolute error (MAE) and S-measure [7]. By binarizing the predicted saliency map with thresholds in [0, 255], a sequence of precision and recall pairs is calculated for each image of the dataset. The PR curve plots the average precision and recall over the dataset at different thresholds. The F-measure is calculated as a weighted combination of Precision and Recall with the following formulation:

F_beta = ((1 + beta^2) * Precision * Recall) / (beta^2 * Precision + Recall)    (3)

where beta^2 is usually set to 0.3 to emphasize Precision more than Recall, as suggested in [36].
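The MAE and F-measure above can be sketched in a few lines of plain Python; the F-measure takes binary inputs, as obtained after thresholding a saliency map.

```python
def f_measure(pred_bin, gt, beta2=0.3):
    """F-measure with beta^2 = 0.3 (precision-biased), on flat binary lists."""
    tp = sum(p and g for p, g in zip(pred_bin, gt))  # true positives
    pp = sum(pred_bin)   # predicted positives
    gp = sum(gt)         # ground-truth positives
    if pp == 0 or gp == 0 or tp == 0:
        return 0.0
    precision, recall = tp / pp, tp / gp
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

def mae(pred, gt):
    """Mean absolute error between a continuous saliency map and binary gt."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)
```

Sweeping the binarization threshold over [0, 255] and recording (precision, recall) pairs at each value yields the PR curve described above.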

Training and Testing

Following conventional practice [19, 37], our proposed model is trained on the training set of the DUTS dataset. We also perform data augmentation similar to [19] during training to mitigate over-fitting. Specifically, each image is first resized to 300x300 and a 256x256 patch is then randomly cropped from it; random horizontal flipping is also applied. We use the Adam optimizer to train our model without evaluation until the training loss converges. The initial learning rate is set to 1e-4 and the overall training procedure takes about 16,000 iterations. For testing, the images are scaled to 256x256 and fed into the network, and the predicted saliency maps are bilinearly interpolated back to the size of the original image.
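The augmentation pipeline above can be sketched as follows, in plain Python on a single-channel grid; nearest-neighbour resizing stands in for proper image resampling, so this shows only the geometry of the transform.

```python
import random

def augment(image, resize=300, crop=256):
    """Augmentation sketch: resize to 300x300 (nearest-neighbour stand-in),
    random 256x256 crop, random horizontal flip. image: 2-D grid."""
    h, w = len(image), len(image[0])
    resized = [[image[r * h // resize][c * w // resize]
                for c in range(resize)] for r in range(resize)]
    top = random.randint(0, resize - crop)    # crop offsets, inclusive bounds
    left = random.randint(0, resize - crop)
    patch = [row[left:left + crop] for row in resized[top:top + crop]]
    if random.random() < 0.5:                 # horizontal flip with p = 0.5
        patch = [row[::-1] for row in patch]
    return patch
```

The same ground-truth mask would be cropped and flipped with identical parameters so that image and label stay aligned.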

Figure 5: Visual comparison with different methods in various scenarios. Columns from left to right: Input, GT, ours, BAS, CPD, DGRL, BMPM, Amulet, PiCA-R, DSS, DHS, RFCN.

Comparison with the state-of-the-art

We compare our proposed model with 16 state-of-the-art methods. For a fair comparison, the metrics of these 16 methods are obtained from a public leaderboard [9] or their original papers, and we evaluate our method in the same way as [9]. We report the results of our model with ResNet-101 [11] as the backbone and two FPMs unless otherwise mentioned. The saliency maps used for visual comparisons are provided by the authors.

Quantitative Evaluation. The quantitative performance of all methods can be found in Table 1 and Fig. 4. Table 1 shows the comparisons of MAE and F-measure. Note that the max F-measure is adopted by almost all methods except DEF [42], which only reports the mean F-measure; we report the MAE, both F-measures and the S-measure of our method for a direct comparison. Our ResNet-based model achieves the best results and consistently outperforms all other methods on all five datasets under different measurements, demonstrating the effectiveness of our proposed model. Moreover, our VGG-based model also ranks at the top among VGG-based methods, confirming that the proposed feature polishing is effective and compatible with different backbone structures. In Fig. 4, we compare the PR curves and F-measure curves of different approaches on the five datasets. The PR curves of our method outperform the others by a significant margin, and the F-measure curves of our method lie consistently above those of the other methods, which verifies the robustness of our method.

Visual Comparison. Fig. 5 shows example results of our model along with six other state-of-the-art methods for visual comparison. We observe that our method gives superior results in complex backgrounds (rows 1-2) and low-contrast scenes (rows 3-4), and that it recovers meticulous details (rows 5-6; note the suspension cables of the Golden Gate Bridge and the legs of the mantis). This comparison shows that our method handles these challenges robustly and produces better saliency maps.

Ablation Study

Backbone. VGG-16 [25] is a backbone commonly used by previous works [37, 19]. To demonstrate the capability of our proposed method to cooperate with different backbones, we describe how it is applied to the multi-level features computed by VGG-16. The adaptation is straightforward. VGG-16 contains 13 convolutional layers and 2 fully connected layers, along with 5 max-pooling layers which split the network into 6 blocks. The 2 fully connected layers are first transformed into convolutional layers, and the 6 blocks then generate outputs with decreasing spatial resolutions, i.e. 256, 128, 64, 32, 16 and 8, for an input image of the fixed size 256x256. These multi-level feature maps are fed into PFPN as described in the Approach section to obtain the saliency map. Table 1 shows the comparisons with other VGG-based state-of-the-art methods and Table 2 shows the evaluations with various numbers of FPMs. Our method based on VGG-16 also shows excellent performance, which confirms that it is effective for feature refining and generalizes to different backbones.
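The stated block resolutions follow directly from the pooling layout and can be verified with a one-line sketch: the first block keeps full resolution and each of the five max-pools halves it.

```python
def vgg16_level_sizes(input_size=256, num_pools=5):
    """Spatial sizes of the 6 VGG-16 blocks for a square input: full
    resolution before the first pool, then halved by each max-pool."""
    return [input_size // (2 ** i) for i in range(num_pools + 1)]
```

This yields one extra (finer) level compared with the five ResNet levels, which PFPN handles unchanged since the FPM is agnostic to the number of levels.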

Settings ECSSD DUTS-TE
MAE max F mean F S MAE max F mean F S
PFPN (0 FPM) 0.048 0.928 0.894 0.911 0.052 0.851 0.811 0.862
PFPN (1 FPM) 0.036 0.946 0.921 0.928 0.040 0.884 0.848 0.883
PFPN (2 FPM)‡ 0.041 0.944 0.914 0.924 0.043 0.876 0.840 0.884
PFPN (2 FPM) 0.033 0.949 0.926 0.932 0.037 0.888 0.858 0.887
PFPN (3 FPM) 0.032 0.950 0.929 0.932 0.037 0.888 0.862 0.889
PFPN-V (0 FPM) 0.057 0.911 0.883 0.890 0.058 0.825 0.793 0.837
PFPN-V (1 FPM) 0.045 0.931 0.905 0.908 0.046 0.853 0.825 0.862
PFPN-V (2 FPM) 0.040 0.938 0.915 0.916 0.042 0.868 0.836 0.864
PFPN-V (3 FPM) 0.040 0.939 0.915 0.920 0.043 0.868 0.839 0.873
Table 2: Ablation evaluations of PFPN with different numbers of FPMs. The number in parentheses denotes how many FPMs are used. PFPN-V denotes the models with VGG [25] as the backbone. PFPN (2 FPM)‡ denotes the variant in which the two FPMs share the same weights. Full metrics are given in the supplementary materials.

Feature Polishing Module. To confirm the effectiveness of the proposed FPM, we conduct an ablation evaluation by varying the number of FPMs employed. The results with 0 to 3 FPMs on ECSSD and DUTS-TE are shown in Table 2. With 0 FPMs, the two transition modules are directly connected; otherwise, the FPM is applied the corresponding number of times between the two transition modules, as illustrated in Fig. 2. Other settings, including the loss and the training strategy, are kept the same across these evaluations. For the ResNet-based models, the FPM significantly boosts performance over the plain baseline with no FPM, and performance increases gradually with more FPMs: both the 1-FPM and 2-FPM variants bring progressive improvements. With 3 FPMs, the accuracy converges and the improvement is marginal. Similar phenomena can be observed with the VGG-based PFPN. This supports our argument that multiple FPMs progressively polish the representations and thereby improve the final results. We suppose the accuracy converges due to the limited scale of the current datasets. We also evaluate a variant in which the two FPMs share the same weights (PFPN (2 FPM)‡). Compared with PFPN (0 FPM), it still improves greatly; however, it performs worse than both PFPN (1 FPM) and PFPN (2 FPM). Although a shared FPM can still refine the multi-level features, separate weights allow each FPM to learn refinements suited to its stage.

Settings DUTS-TE
MAE max F mean F S
DSS [12] 0.056 0.825 0.814 0.824
PiCA [19] 0.051 0.860 0.816 0.868
PiCA+crf 0.041 0.866 0.855 0.862
PFPN 0.037 0.888 0.858 0.887
PFPN+crf 0.037 0.871 0.866 0.858
Table 3: Quantitative comparison of the models with or without the dense conditional random field (DenseCRF) as a post-process. Full metrics are given in the supplementary materials.

DenseCRF. The densely connected conditional random field (DenseCRF [14]) is widely used by many methods [12, 19] as a post-process to refine the predicted results. We investigate its effect on our method; the results are listed in Table 3. DSS [12] reports results with DenseCRF, while both results with and without DenseCRF are reported for PiCA [19] and our method. Previous works clearly benefit from the long-range pixel similarity prior brought by DenseCRF; nevertheless, even without DenseCRF post-processing, our method performs better than other models with it. Moreover, DenseCRF does not benefit our method: it only improves the MAE on a few datasets while decreasing the F-measure on all of them. This indicates that our method already sufficiently captures the information about the salient objects from the data, so that the heuristic prior fails to provide further help.

Visualization of feature polishing

In this section, we present an intuitive understanding of the feature polishing procedure. Since directly visualizing the intermediate features is not straightforward, we instead compare the results of our model with different numbers of FPMs. Several example saliency maps are illustrated in Fig. 1 and Fig. 6. The quality of the predicted saliency maps improves monotonically with the number of FPMs, which is consistent with the quantitative results in Table 2. Specifically, the model without any FPM can already roughly detect the salient objects in the images, benefiting from the rich semantic information of the multi-level feature maps. As more FPMs are employed, more details are recovered and cluttered results are eliminated.

Figure 6: Saliency maps predicted by our proposed PFPN with various numbers of FPMs. (a) Original images. (f) Ground truth. (b)-(e) Saliency maps predicted by PFPN with 0, 1, 2 and 3 FPMs, respectively.

Conclusion

We have presented a novel Progressive Feature Polishing Network for salient object detection. PFPN focuses on improving the multi-level representations by progressively polishing the features in a recurrent manner. For each polishing step, a Feature Polishing Module is designed to directly integrate high level semantic concepts to all lower level features, which reduces information loss. Although the overall structure of PFPN is quite simple and tidy, empirical evaluations show that our method significantly outperforms 16 state-of-the-art methods on five benchmark datasets under various evaluation metrics.

Acknowledgement

This work was supported by Alibaba Group through Alibaba Innovative Research Program. Xiaogang Jin is supported by the Key Research and Development Program of Zhejiang Province (No. 2018C03055) and the National Natural Science Foundation of China (Grant Nos. 61972344, 61732015).

References

  • [1] A. Paszke, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. In Proceedings of Neural Information Processing Systems, Cited by: Implementation Details.
  • [2] Y. Bengio, P. Simard, P. Frasconi, et al. (1994) Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks 5 (2), pp. 157–166. Cited by: Introduction.
  • [3] A. Borji, S. Frintrop, D. N. Sihite, and L. Itti (2012) Adaptive object tracking by learning background context. In 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 23–30. Cited by: Introduction.
  • [4] S. Chen, X. Tan, B. Wang, and X. Hu (2018) Reverse attention for salient object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 234–250. Cited by: Table 1.
  • [5] M. Cheng, Q. Hou, S. Zhang, and P. L. Rosin (2017) Intelligent visual media processing: when graphics meets vision. Journal of Computer Science and Technology 32 (1), pp. 110–121. Cited by: Introduction.
  • [6] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: Datasets and metrics.
  • [7] D. Fan, M. Cheng, Y. Liu, T. Li, and A. Borji (2017) Structure-measure: a new way to evaluate foreground maps. In IEEE International Conference on Computer Vision, Cited by: Datasets and metrics.
  • [8] M. Feng, H. Lu, and E. Ding (2019) Attentive feedback network for boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: Table 1.
  • [9] M. Feng (2018) Evaluation toolbox for salient object detection.. https://github.com/ArcherFMY/sal_eval_toolbox. Cited by: Comparison with the state-of-the-art.
  • [10] C. Guo and L. Zhang (2010) A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression. IEEE transactions on image processing 19 (1), pp. 185–198. Cited by: Introduction.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: Figure 2, Overview of PFPN, Feature Polishing Module, Implementation Details, Table 1, Comparison with the state-of-the-art.
  • [12] Q. Hou, M. Cheng, X. Hu, A. Borji, Z. Tu, and P. H. Torr (2017) Deeply supervised salient object detection with short connections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3203–3212. Cited by: Introduction, Refinement on saliency map, Feature Polishing Module, Implementation Details, Table 1, Ablation Study, Table 3.
  • [13] L. Itti, C. Koch, and E. Niebur (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (11), pp. 1254–1259. Cited by: Related Work.
  • [14] P. Krähenbühl and V. Koltun (2011) Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems, pp. 109–117. Cited by: Ablation Study.
  • [15] G. Li and Y. Yu (2015) Visual saliency based on multiscale deep features. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5455–5463. Cited by: Related Work, Datasets and metrics.
  • [16] R. Li, Y. Wang, F. Liang, H. Qin, J. Yan, and R. Fan (2019) Fully quantized network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: Implementation Details.
  • [17] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille (2014) The secrets of salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 280–287. Cited by: Datasets and metrics.
  • [18] G. Lin, A. Milan, C. Shen, and I. Reid (2017) Refinenet: multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1925–1934. Cited by: Feature Integration.
  • [19] N. Liu, J. Han, and M. Yang (2018) Picanet: learning pixel-wise contextual attention for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3089–3098. Cited by: Introduction, Refinement on saliency map, Implementation Details, Implementation Details, Table 1, Training and Testing, Ablation Study, Ablation Study, Table 3.
  • [20] N. Liu and J. Han (2016) Dhsnet: deep hierarchical saliency network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 678–686. Cited by: Refinement on saliency map, Table 1.
  • [21] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440. Cited by: Related Work.
  • [22] D. Parkhurst, K. Law, and E. Niebur (2002) Modeling the role of salience in the allocation of overt visual attention. Vision research 42 (1), pp. 107–123. Cited by: Related Work.
  • [23] X. Qin, Z. Zhang, C. Huang, C. Gao, M. Dehghan, and M. Jagersand (2019) BASNet: boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7479–7489. Cited by: Table 1.
  • [24] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: Introduction.
  • [25] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: Overview of PFPN, Table 1, Ablation Study, Table 2.
  • [26] Y. Tang and X. Wu (2019) Salient object detection using cascaded convolutional neural networks and adversarial learning. IEEE Transactions on Multimedia. Cited by: Refinement on saliency map.
  • [27] L. Wang, H. Lu, Y. Wang, M. Feng, D. Wang, B. Yin, and X. Ruan (2017) Learning to detect salient objects with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 136–145. Cited by: Datasets and metrics.
  • [28] L. Wang, L. Wang, H. Lu, P. Zhang, and X. Ruan (2016) Saliency detection with recurrent fully convolutional networks. In European conference on computer vision, pp. 825–841. Cited by: Refinement on saliency map, Table 1.
  • [29] T. Wang, A. Borji, L. Zhang, P. Zhang, and H. Lu (2017) A stagewise refinement model for detecting salient objects in images. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4019–4028. Cited by: Table 1.
  • [30] T. Wang, L. Zhang, S. Wang, H. Lu, G. Yang, X. Ruan, and A. Borji (2018) Detect globally, refine locally: a novel approach to saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3127–3135. Cited by: Feature Integration, Refinement on saliency map, Table 1.
  • [31] W. Wang, J. Shen, M. Cheng, and L. Shao (2019) An iterative and cooperative top-down and bottom-up inference network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5968–5977. Cited by: Table 1.
  • [32] R. Wu, M. Feng, W. Guan, D. Wang, H. Lu, and E. Ding (2019) A mutual learning method for salient object detection with intertwined multi-supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8150–8159. Cited by: Table 1.
  • [33] Z. Wu, L. Su, and Q. Huang (2019) Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916. Cited by: Table 1.
  • [34] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba (2010) Sun database: large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3485–3492. Cited by: Datasets and metrics.
  • [35] Q. Yan, L. Xu, J. Shi, and J. Jia (2013) Hierarchical saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1155–1162. Cited by: Datasets and metrics.
  • [36] C. Yang, L. Zhang, H. Lu, X. Ruan, and M. Yang (2013) Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3166–3173. Cited by: Datasets and metrics.
  • [37] L. Zhang, J. Dai, H. Lu, Y. He, and G. Wang (2018) A bi-directional message passing model for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1741–1750. Cited by: Introduction, Feature Integration, Table 1, Training and Testing, Ablation Study.
  • [38] L. Zhang, J. Zhang, Z. Lin, H. Lu, and Y. He (2019) CapSal: leveraging captioning to boost semantics for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6024–6033. Cited by: Table 1.
  • [39] P. Zhang, D. Wang, H. Lu, H. Wang, and X. Ruan (2017) Amulet: aggregating multi-level convolutional features for salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 202–211. Cited by: Table 1.
  • [40] X. Zhang, T. Wang, J. Qi, H. Lu, and G. Wang (2018) Progressive attention guided recurrent network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 714–722. Cited by: Introduction.
  • [41] R. Zhao, W. Ouyang, H. Li, and X. Wang (2015) Saliency detection by multi-context deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1265–1274. Cited by: Related Work.
  • [42] Y. Zhuge, Y. Zeng, and H. Lu (2019) Deep embedding features for salient object detection. In Thirty-Third AAAI Conference on Artificial Intelligence, Cited by: Introduction, Feature Integration, Refinement on saliency map, Overview of PFPN, Implementation Details, Table 1, Comparison with the state-of-the-art.