A Deep Learning Based Fast Image Saliency Detection Algorithm


Hengyue Pan
York University
4700 Keele Street, Toronto, Ontario, CA
panhy@cse.yorku.ca
   Hui Jiang
York University
4700 Keele Street, Toronto, Ontario, CA
hj@cse.yorku.ca
Abstract

In this paper, we propose a fast deep learning method for object saliency detection using convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify the input image based on the pixel-wise gradients, so as to reduce a pre-defined cost function that measures the class-specific objectness while clamping the class-irrelevant outputs to preserve the image background. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. We further apply SLIC superpixels and LAB-color-based low-level saliency features to smooth and refine the gradients. Our method is computationally efficient and much faster than other deep learning based saliency methods. Experimental results on two benchmark tasks, namely Pascal VOC 2012 and MSRA10k, show that our proposed method generates high-quality saliency maps, at least comparable with those of many slow and complicated deep learning methods. Compared with purely low-level methods, our approach excels at handling many difficult images that contain complex backgrounds, highly variable salient objects, multiple objects, and/or very small salient objects.

1 Introduction

In the past few years, deep convolutional neural networks (DCNNs) [13] have achieved state-of-the-art performance in many computer vision tasks, starting from image recognition [12, 22, 21] and object localization [18], and more recently extending to object detection and semantic image segmentation [9, 11]. These successes are largely attributed to the capacity of large-scale DCNNs to learn effectively end-to-end from large amounts of labelled images in a supervised mode.

In this paper, we consider applying popular deep learning techniques to another computer vision problem, namely object saliency detection. Saliency detection attempts to locate the objects of greatest interest in an image, i.e., the regions to which humans are likely to pay the most attention [16]. The main goal of saliency detection is to compute a saliency map that topographically represents the level of saliency for visual attention [24]. For each pixel in an image, the saliency map indicates how likely that pixel is to belong to the salient objects [5]. Computing such saliency maps has recently attracted a great amount of research interest [4]. The computed saliency maps have been shown to be beneficial to various vision tasks, such as image segmentation [6], object recognition and visual tracking. Saliency detection has been extensively studied in computer vision, and a variety of methods have been proposed to generate saliency maps for images. Under the assumption that the salient objects are likely the parts that significantly differ from their surroundings, most existing methods use low-level image features to detect salient regions based on criteria related to color contrast, rarity and symmetry of image patches [6, 16, 17, 5, 8]. In some cases, global topological cues may be leveraged to refine the perceptual saliency maps [10, 24, 15]. In these methods, saliency is normally measured using different mathematical models, including decision theoretic models, Bayesian models, information theoretic models, graphical models, and spectral analysis models [4].

Different from the previous low-level methods, we propose a novel deep learning method for object saliency detection based on the powerful DCNNs. As shown in [12, 22, 21], relying on a pre-trained classification DCNN, we can achieve fairly high accuracy in object category recognition for many real-world images. Even though DCNNs can recognize what kinds of objects are contained in an image, it is not straightforward for them to precisely locate the recognized objects in the image. In [18, 9, 11], rather complicated and time-consuming post-processing stages are needed to detect and locate the objects for semantic image segmentation. In [25], two DCNNs are applied to generate superpixel-based global and local saliency features, which are then combined into the final saliency maps.

In this work, we propose a much simpler and more computationally efficient method to generate a class-specific object saliency map directly from the classification DCNN model. In our approach, we use a gradient descent (GD) method to iteratively modify each input image based on the refined pixel-wise gradients, so as to reduce a pre-defined cost function that measures the class-specific objectness while clamping the class-irrelevant outputs to preserve the image background. The gradients with respect to all image pixels can be efficiently computed using the back-propagation algorithm for DCNNs. After the back-propagation procedure, the discrepancy between the modified image and the original one is taken as the raw saliency map for this image. The raw saliency maps are smoothed using SLIC [1] superpixel maps and refined using low-level saliency features. Since we only need to run a very small number of GD iterations in the saliency detection, our method is extremely computationally efficient (the average processing time for one image on one GPU is around 0.45 seconds).

Experimental results on two databases, namely Pascal VOC 2012 [7] and MSRA10k [3], have shown that our proposed method can generate high-quality saliency maps, at least comparable with those of many slow and complicated deep learning methods. On the other hand, compared with traditional low-level methods, our approach excels on many difficult images, containing complex backgrounds, highly variable salient objects, multiple objects, and/or very small objects.

2 Related Work

In the literature, most previous saliency detection methods adopt the well-known bottom-up strategy [6, 16, 17, 5]. They rely on local image features derived from patches, detecting contrast, rarity and symmetry to identify the salient objects in an image. Meanwhile, some other methods have been proposed to take global information or prior knowledge into account to screen the local features. For example, in [24], a boolean map is created to represent global topological cues in an image, which in turn is used to guide the generation of saliency maps. In [15], the visual saliency algorithm considers prior information and local features simultaneously in a probabilistic model. The algorithm defines task-related components as the prior information to help the feature selection procedure. In [6], a region contrast based image saliency method is proposed to generate saliency maps, in which global contrast differences are evaluated as the main saliency features. In [8], SLIC superpixels are used as the unit to generate global contrast based saliency maps, and an average ground truth prior is introduced to eliminate some false positives. That work also takes color distribution information into account to further refine the saliency maps. Traditional saliency detection methods normally work well for images containing simple dominant foreground objects in homogeneous backgrounds. However, they are usually not robust enough to handle images containing complex scenes [14], such as relatively small objects in heterogeneous backgrounds.

Recently, some deep learning techniques have been proposed for image saliency detection and semantic image segmentation [18, 9, 11, 25]. These methods typically use DCNNs to examine a large number of region proposals from other algorithms, and use the features generated by DCNNs along with other post-stage classifiers to localize the target objects. More recently, an increasing number of methods directly generate pixel-wise saliency maps or segmentations [11]. For example, in [25], two DCNNs are applied to model the global context and local context of each superpixel in the input images, and the two levels of context are finally combined to generate the pixel-wise multi-context saliency maps.

In this paper, instead of directly generating high-level semantic saliency maps from DCNNs, we propose to use DCNNs to generate middle-level saliency maps in a very efficient way, which may be fed to other traditional computer vision algorithms for various vision tasks, such as semantic segmentation, video tracking, etc. The work in [19] is the most relevant to ours. In [19], the authors borrow the idea of explanation vectors in [2] to generate a static pixel-wise gradient vector of the network learning objective function, and use it as a saliency map. In our work, an iterative gradient descent method is proposed to generate more reliable and robust saliency maps. More importantly, we introduce a new cost function for the back-propagation and apply SLIC superpixel maps and low-level saliency features to refine the gradients for better saliency performance.

The rest of this paper is organized as follows: Section 3 describes our proposed saliency detection algorithm; Section 4 presents experimental results on two databases and comparisons with state-of-the-art methods; finally, Section 5 concludes the paper.

Figure 1: The proposed method to generate the object-specific saliency maps directly from DCNNs.

3 Our Approach for Object Saliency Detection

In this section, we present the main idea of our DCNN based saliency detection method, and discuss how to smooth and refine the raw saliency maps for better performance.

3.1 Backpropagating and partially clamping DCNNs to generate raw saliency maps

As is well known, DCNNs can automatically learn all sorts of features from a large amount of labelled images, and a well-trained DCNN can achieve very good classification accuracy in recognizing objects in images. In this work, based on the idea of explanation vectors in [2], we argue that classification DCNNs themselves may have learned enough features and information to generate good object saliency for images. Extending a preliminary study in [19], we explore a novel method to generate the saliency maps directly from DCNNs. The key idea of our approach is shown in Figure 1. After an input image is recognized by a DCNN as containing one particular object, if we can modify the input image in such a way that the DCNN no longer recognizes the object from it, while attempting to maintain the image background as much as possible, the discrepancy between the modified image and the original one may serve as a good saliency map for the recognized object. In this paper, we propose to use a gradient descent (GD) method to iteratively modify the input image based on the pixel-wise gradients to reduce a cost function formulated in the output layer of the DCNN. The proposed cost function is defined to measure the class-specific objectness, and is reduced under the constraint that all class-irrelevant DCNN outputs are clamped to their original values. The image is modified using the gradients computed by applying the back-propagation procedure all the way to the input layer. In this way, the underlying object may be erased from the image while the irrelevant background is largely retained.

First of all, we simply train a regular DCNN for image classification. After the DCNN is learned, we may apply our saliency detection method to generate the class-specific object saliency map. For each input image $x$, we first use the pre-trained classification DCNN to generate its class label, denoted as $l$, as in a normal classification step. Meanwhile, we obtain the DCNN outputs prior to the final softmax layer, denoted as $o_1(x), o_2(x), \ldots, o_N(x)$. Apparently, $o_l(x)$ achieves the maximum value among them (since the image is recognized as class $l$). Here, we assume that the DCNN output $o_l(x)$ is mainly relevant to the underlying object in the image, while the remaining DCNN outputs are more relevant to the image background excluding the underlying object. Under this assumption, we propose a procedure to modify the image to reduce the $l$-th output of the DCNN as much as possible, and meanwhile clamp the other outputs to their original values $o_j(x^{(0)})$, where $x^{(0)}$ denotes the original image. Therefore, we attempt to modify $x$ to reduce the corresponding largest DCNN output, i.e. $o_l(x)$, subject to the constraint that all remaining DCNN outputs are clamped to their initial values:

$$\min_{x} \; o_l(x) \quad \text{subject to} \quad o_j(x) = o_j(x^{(0)}), \;\; \forall\, j \neq l.$$

Next, we propose to cast the above constraints as penalty terms to construct the following cost function:

$$C(x) = o_l(x) + \lambda \sum_{j \neq l} \bigl( o_j(x) - o_j(x^{(0)}) \bigr)^2 \qquad (1)$$

where $\lambda$ is a hyperparameter to balance the contribution of the constraints. In this way, we have converted the original constrained optimization problem into an unconstrained one, which can be easily minimized by gradient descent (GD) methods.

Obviously, this cost function is constructed based on the assumption that the recognized $l$-th output of the DCNN, i.e. $o_l(x)$, corresponds to the foreground area in the input image, while the remaining outputs of the DCNN are more relevant to the image background. Therefore, if we modify the image to reduce the above cost function, the underlying object (belonging to class $l$) will hopefully be removed as a consequence of the fact that $o_l(x)$ is significantly reduced, while the background remains largely unchanged because the rest of the DCNN outputs are clamped in this procedure. In this paper, we propose to use an iterative GD procedure to modify $x$ as follows:

$$x^{(t+1)} = x^{(t)} - \epsilon \cdot \Bigl\lfloor \frac{\partial C(x)}{\partial x} \Bigr\rfloor_{x = x^{(t)}} \qquad (2)$$

where $\epsilon$ is the learning rate, and $\lfloor \cdot \rfloor$ indicates that we floor all negative gradients (i.e., set them to zero) in the GD updates. We have observed in our experiments that the cost function can be significantly reduced by running only a small number of updates (typically 5-10 iterations) for each image, which guarantees the efficiency of the proposed method.

We can easily compute the above gradients using the standard back-propagation algorithm. Based on the cost function in Eq. (1), we can derive the error signals in the output layer, $e_j$ ($j = 1, \ldots, N$), as follows:

$$e_j = \frac{\partial C(x)}{\partial o_j} = \begin{cases} 1 & j = l \\ 2\lambda \bigl( o_j(x) - o_j(x^{(0)}) \bigr) & j \neq l \end{cases} \qquad (3)$$

These error signals are back-propagated all the way to the input layer to derive the above gradient, $\partial C(x) / \partial x$, for saliency detection.
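The update loop of Eqs. (1)-(3) can be sketched as follows, with a single linear layer standing in for the DCNN so that the back-propagation step is explicit. The function name, the toy model, and the hyperparameter values (`lam`, `lr`, `n_iters`) are illustrative assumptions, not the exact implementation of this paper:

```python
import numpy as np

def saliency_gd(x0, W, l, lam=0.1, lr=0.05, n_iters=10):
    """Iteratively modify input x to reduce the cost of Eq.(1):
    C(x) = o_l(x) + lam * sum_{j != l} (o_j(x) - o_j(x0))^2,
    where o(x) = W @ x stands in for the pre-softmax DCNN outputs."""
    o0 = W @ x0                       # clamped reference outputs o_j(x^(0))
    x = x0.copy()
    costs = []
    for _ in range(n_iters):
        o = W @ x
        diff = o - o0
        diff[l] = 0.0                 # the l-th output is not clamped
        cost = o[l] + lam * np.sum(diff ** 2)
        costs.append(float(cost))
        # Error signals of Eq.(3): e_l = 1, e_j = 2*lam*(o_j - o_j^(0))
        e = 2.0 * lam * diff
        e[l] = 1.0
        # Back-propagate through the single linear layer: dC/dx = W^T e
        grad = W.T @ e
        grad = np.maximum(grad, 0.0)  # floor negative gradients (Eq.(2))
        x = x - lr * grad
    return x, costs
```

For a real DCNN, the `W.T @ e` line is replaced by the network's full back-propagation pass to the input layer.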

At the end of the gradient descent updates, the raw object saliency map is computed as the difference between the modified image and the original one, i.e. $\Delta = \lvert x^{(T)} - x^{(0)} \rvert$, where $T$ is the number of GD iterations. For colour images, we average the differences over the RGB channels to obtain a pixel-wise raw saliency map, which is then normalized to be of unit norm. After that, we can apply a simple threshold to filter out weak signals (which in most cases correspond to background) from the raw saliency maps (see the second column in Figure 2).
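A minimal numpy sketch of this raw-map computation follows; the 10% relative cutoff is a hypothetical choice for illustration, since the exact threshold value is not specified here:

```python
import numpy as np

def raw_saliency(x_orig, x_mod, threshold=0.1):
    """Raw saliency map: per-pixel difference between the original and the
    modified image, averaged over RGB, normalized to unit norm, then weakly
    thresholded to suppress background signals."""
    diff = np.abs(x_orig.astype(np.float64) - x_mod.astype(np.float64))
    sal = diff.mean(axis=2)                  # average over the RGB channels
    norm = np.linalg.norm(sal)
    if norm > 0:
        sal = sal / norm                     # normalize to unit norm
    sal[sal < threshold * sal.max()] = 0.0   # filter out weak signals
    return sal
```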

Figure 2: From left to right: original images, raw saliency maps, smoothed saliency maps and refined saliency maps

3.2 SLIC based saliency map smoothing

In practice, we have found that the continuity of the above raw saliency maps is still not good enough in many cases. The main reason is that the DCNN outputs are not totally independent, and their correlation is not considered in the above procedure. Roughly speaking, we have observed that most of the strong signals in the gradients are located in the salient region. However, from Figure 2 we can see that some problems may remain, such as background noise, blurred edges or small holes in the foreground. In order to further smooth the saliency maps, we use SLIC superpixels [1] to impose a continuity constraint that all image pixels located in the same superpixel receive the same saliency value. More specifically, we first generate the superpixel maps of all test images (in our experiments, each test image is split into SLIC superpixels with a fixed compactness factor). If the $i$-th pixel in an image belongs to the $k$-th superpixel $S_k$, then its smoothed saliency value $\bar{s}_i$ can be calculated as Eq. (4) shows:

$$\bar{s}_i = \frac{1}{|S_k|} \sum_{j \in S_k} s_j \qquad (4)$$

where $|S_k|$ is the number of pixels in $S_k$, $s_j$ denotes the raw saliency value of pixel $j$, and we use $\bar{M}$ to denote the smoothed saliency map. Obviously, compared with the raw saliency map $M$, $\bar{M}$ may fill holes in the salient regions, sharpen object edges, and significantly reduce isolated background noise (see the third column in Figure 2).
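Given a SLIC label map, the per-superpixel averaging of Eq. (4) reduces to a pair of bincounts, as in this numpy sketch (the function name is assumed for illustration):

```python
import numpy as np

def smooth_with_superpixels(sal, labels):
    """Eq.(4): assign every pixel the mean saliency of its superpixel.
    `labels` is an integer superpixel map (e.g. from SLIC) with the same
    shape as the raw saliency map `sal`."""
    flat_lab = labels.ravel()
    flat_sal = sal.ravel()
    sums = np.bincount(flat_lab, weights=flat_sal)
    counts = np.bincount(flat_lab)
    means = sums / counts          # mean saliency per superpixel
    return means[labels]           # broadcast back onto the pixel grid
```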

3.3 Refining saliency maps using low-level features

In Section 3.2, we have generated the smoothed saliency maps, which can provide much better performance than the original raw saliency maps. On top of that, we propose to introduce some constraints based on low-level features to further improve the quality of the saliency maps.

Based on the main idea of [8], we can generate low-level saliency features for each test image. First, we apply the SLIC superpixel generation method in [1] to generate superpixel maps for the test images. Next, for each superpixel $S_i$ in an image, we calculate its color feature $c_i$ by averaging the LAB color values over all of its pixels, and use the color feature to calculate its global color contrast as follows:

$$U_i = \sum_{j \neq i} \lVert c_i - c_j \rVert_2 \qquad (5)$$

where $\lVert \cdot \rVert_2$ denotes the Euclidean distance. Following [8], we can further smooth the global color contrast maps and calculate the color distribution maps to obtain the raw low-level saliency map, denoted as $M_{low}$. Moreover, $M_{low}$ is applied to refine the smoothed saliency map $\bar{M}$ generated in the last step. Here, we normalize $M_{low}$ between $\beta$ and $1$, where $0 < \beta < 1$. The reason for using $\beta$ is that the low-level features contain many errors, which may suppress correct saliency values in the foreground of some images. By using $\beta$ as the lower bound, we can prevent this refining procedure from removing correct saliency regions in $\bar{M}$. The refined saliency map $M_f$ can be generated as:

$$M_f = \bar{M} \odot M_{low} \qquad (6)$$

where $\odot$ denotes element-wise multiplication. At the end, we may further filter out weak signals in $M_f$ and re-normalize it (see the fourth column in Figure 2). The entire algorithm to generate the final saliency maps is shown in Algorithm 1.
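The contrast and refinement steps of Eqs. (5) and (6) can be sketched per superpixel as follows; the lower bound `beta=0.3` and the omission of [8]'s spatial weighting and color-distribution term are simplifying assumptions for illustration:

```python
import numpy as np

def low_level_refine(sal_smooth, lab_colors, beta=0.3):
    """Sketch of Eqs.(5)-(6). `lab_colors` holds the mean LAB color of each
    superpixel (shape: n_superpixels x 3); `sal_smooth` holds one smoothed
    saliency value per superpixel."""
    # Eq.(5): contrast of superpixel i = sum of Euclidean LAB distances
    dists = np.linalg.norm(
        lab_colors[:, None, :] - lab_colors[None, :, :], axis=2)
    contrast = dists.sum(axis=1)
    # Rescale the low-level map to [beta, 1] so it cannot zero out regions
    span = max(contrast.max() - contrast.min(), 1e-12)
    c = (contrast - contrast.min()) / span
    m_low = beta + (1.0 - beta) * c
    # Eq.(6): element-wise product with the smoothed saliency map
    return sal_smooth * m_low
```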

  Input: an input image $x^{(0)}$, the DCNN, the SLIC superpixel map $S$, and the low-level saliency map $M_{low}$;
  Use the DCNN to recognize the object label of $x^{(0)}$ as $l$;
  $x \leftarrow x^{(0)}$;
  for each epoch $t = 1$ to $T$ do
     forward pass: compute the cost function $C(x)$ in Eq. (1);
     backward pass: back-propagate to the input layer to compute the gradient $\partial C(x) / \partial x$;
     $x \leftarrow x - \epsilon \cdot \lfloor \partial C(x) / \partial x \rfloor$;
  end for
  Average over RGB: $\Delta \leftarrow$ channel-wise mean of $\lvert x - x^{(0)} \rvert$;
  Prune noise with a threshold $\theta$: set entries of $\Delta$ below $\theta$ to zero;
  Normalize: $M \leftarrow \Delta / \lVert \Delta \rVert$;
  Smoothing: use $S$ to smooth $M$ into $\bar{M}$ as in Eq. (4);
  Refine: $M_f \leftarrow \bar{M} \odot M_{low}$;
  Prune noise and normalize again;
  Output: the refined saliency map $M_f$;
Algorithm 1 DCNN based Object Saliency Detection

4 Experiments

We select two benchmark databases to evaluate the performance of the proposed object saliency detection method, namely Pascal VOC 2012 [7] and MSRA10k [3]. For Pascal VOC 2012, we use the validation images of its segmentation task as the test set, while for MSRA10k we directly use all images for testing. Both databases provide pixel-wise segmentation maps (ground truth), so we can easily measure the performance of different saliency algorithms. Here we compare our approach with three existing methods: i) the Region Contrast saliency method and the SaliencyCut segmentation method in [6], one of the most popular bottom-up image saliency detection methods in the literature, which has achieved state-of-the-art image saliency and segmentation performance on many tasks; ii) the DCNN based image saliency detection method proposed in [19], which, similar to our approach, uses DCNNs and the back-propagation algorithm to generate saliency maps; iii) the multi-context deep learning based saliency method proposed by Zhao et al. [25], which uses two DCNNs to compute global and local context respectively and combines the two to generate the final multi-context saliency maps; it is one of the state-of-the-art deep learning based image saliency algorithms. In our experiments, we use precision-recall curves (PR-curves) against the ground truth as one metric to evaluate the performance of saliency detection.

As in [6], for each saliency map we vary the cutoff threshold over its full range to generate precision and recall pairs, which are used to plot a PR-curve. Besides, we also use the $F_\beta$ measure to evaluate the performance of saliency detection, which is calculated from the precision and recall values with a non-negative weight parameter $\beta$ as follows [5]:

$$F_\beta = \frac{(1 + \beta^2) \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\beta^2 \cdot \mathrm{Precision} + \mathrm{Recall}} \qquad (7)$$

In this paper, we follow [6] and set $\beta^2 = 0.3$ to emphasize the importance of precision. We derive a sequence of $F_\beta$ values along the PR-curve for each saliency map, and the largest one is selected as the performance measure (see [5]).
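Eq. (7) and the max-over-the-curve selection can be written directly; the function names below are assumed for illustration, with $\beta^2 = 0.3$ following [6]:

```python
import numpy as np

def f_measure(precision, recall, beta2=0.3):
    """Eq.(7): F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    p = np.asarray(precision, dtype=np.float64)
    r = np.asarray(recall, dtype=np.float64)
    # Guard against P = R = 0 with a tiny denominator floor
    return (1.0 + beta2) * p * r / np.maximum(beta2 * p + r, 1e-12)

def best_f_on_curve(precisions, recalls, beta2=0.3):
    """The largest F_beta along a PR-curve is taken as the score."""
    return float(f_measure(precisions, recalls, beta2).max())
```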

4.1 Databases

The Pascal VOC 2012 database [7] is a classical image database that can be used for several vision tasks, including image classification and saliency detection. The database contains a training set and a validation set labelled with 20 object categories. However, only the validation images that include pixel-wise ground truth are used to evaluate performance in our image saliency task. Therefore, to expand the training set and improve the classification performance of the DCNN, we merge the original training set with the remaining validation images without ground truth to form a new training set. For images that are labelled with more than one class of objects, we use the area of the labelled objects to measure their importance, and use the class of the largest object as the image label in our DCNN training process.

Unfortunately, the Pascal training set is still relatively small for DCNN training. Therefore, we start from a DCNN pre-trained on the ImageNet database, which contains 13 convolutional layers and 3 fully connected layers (we use the network imagenet-vgg-verydeep-16 [20]). We only use the above-mentioned training data to fine-tune this DCNN with MatConvNet [23]. Here we considered three fine-tuning strategies: 1) update the parameters of all hidden layers with the same learning rate; 2) update all hidden layers, but apply a larger learning rate to the last layer, which corresponds to the output of the DCNN; 3) only update the last layer, keeping the other parameters unchanged. We list the top-1 and top-5 classification error rates to measure the performance of the fine-tuning methods. Based on these results, the DCNN fine-tuned with method 1 is used to recognize the test sets of the two selected tasks.

            Method 1   Method 2   Method 3
Top-1 Err     18.0%      20.4%      19.1%
Top-5 Err     1.74%      2.08%      1.79%
Table 1: The classification error rates of the three fine-tuning methods on the Pascal VOC 2012 test set.

The classification errors on the test set imply that the training sample size of Pascal VOC 2012 is still not large enough to train deep convolutional networks well. However, as we will see, the proposed algorithm still yields good saliency detection performance. With more training data, we may expect even better saliency results.

MSRA10k [3] is another widely-used image saliency database, constructed from the Microsoft MSRA saliency database [16]. MSRA10k selects 10,000 images from MSRA and provides pixel-wise salient-object annotations instead of bounding boxes, which makes it suitable for our task. However, MSRA10k does not include a corresponding training set or class labels for its images. Therefore, for MSRA10k, we directly use the DCNN imagenet-vgg-verydeep-16 [20] (without any fine-tuning) in our algorithm.

4.2 Saliency Results

In this part we provide saliency detection results on the two selected databases. In the following, PR-curves, $F_\beta$ values and some sample images are used to compare the different methods.

4.2.1 Efficiency

We first consider the speed of our saliency method. We do not take the DCNN training time into account because, for all experiments on one database, we only need to train the DCNN once. We can even directly use a well-trained ImageNet classification DCNN for our method without any fine-tuning, and the saliency results are still good. Our computing platform includes an Intel Xeon E5-1650 CPU (6 cores), 64 GB of memory and an Nvidia GeForce TITAN X GPU (12 GB of memory). The per-image processing times of the different algorithms are listed in Table 2.

                 RC [6]   Method in [19]   Deep Saliency [25]   Our Method
Execution time    1.92s        0.22s              4.38s             0.45s
Table 2: The time for processing one image with different saliency methods.

From Table 2 we can see that our method is much faster than [6] and [25]. Due to the introduction of SLIC superpixels and low-level features, our method is slower than [19]. However, as shown in the next part, the proposed method performs much better than [19].

4.2.2 Pascal VOC 2012

For object saliency detection, we first plot the PR-curves of the different methods, shown in Figure 3. From the PR-curves, we can see that our proposed saliency detection method significantly outperforms the region contrast method in [6] and the DCNN based saliency method in [19]. The proposed method also yields performance comparable to the method in [25].

Figure 3: The PR-curves of different saliency methods on the Pascal VOC 2012 test set.

Figure 4 shows the $F_\beta$ values of the different saliency methods, from which we can see that the proposed saliency detection method gives better $F_\beta$ values than [6] and [19], and values similar to [25]. However, compared with [25], our method is much faster. Finally, in Figure 7 (rows 1 to 5), we provide some examples of the saliency detection results on the Pascal VOC 2012 validation set. From these examples we can see that the region contrast algorithm does not work well when the input images have complex backgrounds or contain highly variable salient objects, a problem that is fairly common among most bottom-up saliency and segmentation algorithms. On the other hand, we can also see that, with the help of SLIC superpixels and low-level features, our method provides performance comparable to [25].

Figure 4: The $F_\beta$ values of different saliency methods on the Pascal VOC 2012 test set.

4.2.3 MSRA10k

Similarly, we also use PR-curves and $F_\beta$ values to evaluate the saliency performance on the MSRA10k database. From Figure 5, we can see that the proposed method is significantly better than [19], and also performs slightly better than [6]. As shown in Figure 6, our method also gives better $F_\beta$ values than [6] and [19].

From Figures 5 and 6, we can see that our method performs slightly worse than [25] on the MSRA10k dataset. The main reason is that we directly use a mismatched DCNN trained on the ImageNet dataset; we cannot fine-tune the model for this database due to the lack of class labels in MSRA10k. Even so, as shown in the figures, the gap between the two methods is very small.

In Figure 7, we also select several MSRA10k images to show the saliency results (Row 6 to 10).

Figure 5: The PR-curves of different saliency methods on MSRA10k dataset.
Figure 6: The $F_\beta$ values of different saliency methods on the MSRA10k dataset.
(A) Original (B) Ground Truth (C) RC [6] (D) Method in [19] (E) Deep Saliency in [25] (F) Our Raw Saliency Maps (G) Our Smoothed Saliency Maps (H) Our Refined Saliency Maps
Figure 7: Saliency Results of Pascal VOC 2012 (Row 1 to 5) and MSRA10k (Row 6 to 10). (A) original images, (B) ground truth, (C) Region Contrast saliency maps [6], (D) DCNN based saliency maps by using [19], (E) multi-context deep saliency method [25], (F) our raw saliency maps, (G) our smoothed saliency maps, (H) our refined saliency maps.

5 Conclusion

In this paper, we have proposed a novel DCNN-based method for object saliency detection. The method first trains a regular DCNN for image classification. After that, for each test image, we first recognize the image class label, and then use the pre-trained DCNN to generate a saliency map. Specifically, we attempt to reduce a cost function defined to measure the class-specific objectness of each image, back-propagate the corresponding error signal all the way to the input layer, and use the gradient with respect to the input to revise the input image. After several iterations, the difference between the original input image and the revised image is calculated as a raw saliency map. The raw saliency maps are then smoothed and refined using SLIC superpixels and low-level saliency features. We have evaluated our method on two benchmark tasks, namely Pascal VOC 2012 [7] and MSRA10k [3]. Experimental results have shown that our proposed method can generate high-quality saliency maps in a relatively short time (nearly 10 times faster than the state-of-the-art DCNN based method in [25]), clearly outperforming many other existing methods. Compared with many low-level feature methods, our DCNN-based approach excels on many difficult images containing complex backgrounds, highly variable salient objects, multiple objects, and very small objects.

References

  • [1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. Slic superpixels compared to state-of-the-art superpixel methods. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 34(11):2274–2282, 2012.
  • [2] D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, and K.-R. Mueller. How to explain individual classification decisions. Journal of Machine Learning Research, 11:1803–1831, 2010.
  • [3] A. Borji, M.-M. Cheng, H. Jiang, and J. Li. Salient object detection: A survey. ArXiv e-prints, 2014.
  • [4] A. Borji and L. Itti. State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 35(1):185–207, 2013.
  • [5] A. Borji, D. N. Sihite, and L. Itti. Salient object detection: A benchmark. In ECCV, pages 414–429. Springer, 2012.
  • [6] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu. Global contrast based salient region detection. In Computer Vision and Pattern Recognition (CVPR), pages 409–416. IEEE, 2011.
  • [7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, June 2010.
  • [8] K. Fu, C. Gong, J. Yang, Y. Zhou, and I. Y.-H. Gu. Superpixel based color contrast and color distribution driven salient object detection. Signal Processing: Image Communication, 28(10):1448–1463, 2013.
  • [9] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR. 2014.
  • [10] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In Advances in neural information processing systems (NIPS), pages 545–552, 2006.
  • [11] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In ECCV, pages 297–312. 2014.
  • [12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105. 2012.
  • [13] Y. LeCun and Y. Bengio. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361, 1995.
  • [14] J. Li, Y. Tian, and T. Huang. Visual saliency with statistical priors. International journal of computer vision, 107(3):239–253, 2014.
  • [15] J. Li, Y. Tian, T. Huang, and W. Gao. Probabilistic multi-task learning for visual saliency estimation in video. International journal of computer vision, 90(2):150–165, 2010.
  • [16] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H.-Y. Shum. Learning to detect a salient object. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 33(2):353–367, 2011.
  • [17] N. Riche, M. Mancas, B. Gosselin, and T. Dutoit. Rare: A new bottom-up saliency model. In IEEE International Conference on Image Processing (ICIP), pages 641–644, 2012.
  • [18] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations (ICLR). 2014.
  • [19] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2014.
  • [20] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR). 2015.
  • [22] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
  • [23] A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. CoRR, abs/1412.4564, 2014.
  • [24] J. Zhang and S. Sclaroff. Saliency detection: A boolean map approach. In ICCV. 2013.
  • [25] R. Zhao, W. Ouyang, H. Li, and X. Wang. Saliency detection by multi-context deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1265–1274, 2015.