Deep, Dense, and Low-Rank Gaussian Conditional Random Fields

Siddhartha Chandra Iasonas Kokkinos
siddhartha.chandra@inria.fr i.kokkinos@cs.ucl.ac.uk
INRIA GALEN & Centrale Supélec Paris, France                     University College London, U.K.
Abstract

In this work we introduce a fully-connected graph structure in the Deep Gaussian Conditional Random Field (G-CRF) model. For this we express the pairwise interactions between pixels as the inner-products of low-dimensional embeddings, delivered by a new subnetwork of a deep architecture. We efficiently minimize the resulting energy by solving the low-rank linear system with conjugate gradients, and derive an analytic expression for the gradient of our embeddings which allows us to train them end-to-end with backpropagation.

We demonstrate the merit of our approach by achieving state of the art results on three challenging Computer Vision benchmarks, namely semantic segmentation, human parts segmentation, and saliency estimation. Our implementation is fully GPU based, built on top of the Caffe library, and will be made publicly available.

1 Introduction

Figure 1: A brief overview of our proposed fully-connected model for dense labeling tasks. Each patch in the input image denotes a node in our fully-connected graph structure. While the unary terms (B) are obtained from the output of a fully convolutional network, our framework expresses the pairwise terms (A) as dot products of pixel embeddings. These embeddings are also obtained from the output of a parallel stream of the fully convolutional network. We solve a system of linear equations, (A + λI)x = B, to infer the prediction x. Our approach can be used to learn task-specific unary and pairwise terms for dense labeling tasks such as semantic segmentation, part segmentation and saliency estimation.

Structured prediction combined with deep learning has delivered striking results on a variety of computer vision benchmarks [2, 4, 6, 7, 30, 32, 37]. Rather than leaving all of the modeling task to a generic deep network, the explicit treatment of interactions between image regions in a principled manner has been advocated in [1, 7, 17, 31, 37]. Several of the earlier approaches [4, 6] exploited the Dense-CRF framework [17], which uses a pre-determined parametric pairwise interaction term among output variables, effectively refining object boundaries while compensating for the effects of spatial downsampling within the network. Other, more recent approaches [23, 24, 25, 30, 32, 37] showed that structured prediction could be done in an end-to-end manner for both sparsely-connected [30] and fully-connected [23, 25, 32, 37] graphical structures. These approaches typically took a mean-field approximation to the original CRF, as in [17], and implemented mean-field inference for a fixed number of iterations via back-propagation-through-time. Even though some of these approaches allowed for end-to-end training of fully-connected graphical models, they all relied on approximate inference.

Figure 2: A detailed schematic representation of our fully convolutional neural network with the dense-G-CRF module. Our base network for generating the unary terms is deeplab-v2-resnet-101. This network has three parallel branches of the resnet-101 network operating at different image scales. The final unary scores are obtained by upsampling the network responses for the low resolution branches to the original resolution, and taking an element-wise maximum of the responses at the three resolutions. Our pairwise terms are generated by a parallel pairwise residual network branch (resnet-pw) which has layers conv-1 to res4a of the resnet-101 network. The resnet-pw network outputs the pixel embeddings of our formulation. The unary terms and pairwise embeddings are then fed to our fully connected G-CRF module (dense-G-CRF). This module outputs the prediction x by solving the inference equation (A + λI)x = B. Additionally, the dense-G-CRF module computes the error gradients for the unary terms B and the pairwise embeddings, and back-propagates them through the respective streams of the network during the training phase.

More recently, drawing inspiration from [13, 29], the deep G-CRF model [2] alleviated the need for approximate inference. In particular, given an image I the G-CRF model defines a joint posterior distribution through a multivariate Gaussian density:

p(x | I) ∝ exp( -(1/2) x^T A(I) x + B(I)^T x ),

where A(I), B(I) are the canonical parameters of the data-dependent Gaussian density. Dropping the dependence on I for simplicity, we note that A can be seen as the inverse covariance, or precision, matrix, and for a positive-definite matrix A, inference involves solving the system of linear equations Ax = B. The deep G-CRF model showed that the parameters of the Gaussian density could be learnt in an end-to-end manner through a CNN, and exact inference could be made efficiently by using the conjugate gradients [28] algorithm. While the G-CRF model allowed for efficient and exact inference, in practice it can only capture interactions in small (e.g. 4- or 8-connected) neighbourhoods, which result in a sparse precision matrix. The authors take advantage of this by using sparse linear algebra optimizations to provide a fast implementation. While this speed is preferable, the model loses some of its richness by ignoring long-range interactions.

To recapitulate, on one hand we have approaches that employ approximate inference for potentially fully-connected graphical structures, and on the other we have an approach that allows exact inference but only for sparsely-connected graphical structures.

In this work, we propose a strategy that enables efficient and exact inference for fully-connected graphical models, by extending the G-CRF model of [2] to fully-connected graphs.

The extension to fully-connected graphs is technically challenging because of the non-sparse precision matrix it involves. With L = 21 labels (PASCAL VOC benchmark) and a network with a spatial downsampling factor of 8 (as in [4, 5]), the number of pairwise terms grows quadratically with the number of pixel-label variables, and is prohibitively large for typical image sizes due to both memory and computational requirements. To overcome this challenge, we advocate using a low-rank precision matrix. In particular, we propose composing the precision matrix via dot-products of low-dimensional pixel embeddings, i.e. expressing the pairwise matrix as the Gram matrix of an embedding matrix whose columns are the per-variable embeddings.

This drastically reduces the memory footprint. Rather than explicitly obtaining the precision matrix, as in the original G-CRF work, we now only need to discover and store in memory low-dimensional pixel embeddings. While maintaining a fully-connected graph structure, we can tune the embedding dimension at will, so as to meet the available resources and/or accuracy standards. Furthermore, the embedding dimensionality can be used to control the computational complexity of solving the resulting linear system with conjugate gradients, since the latter operates on the low-dimensional embedding matrix rather than on the full precision matrix.

Figure 1 shows the overview of our approach. The input image first passes through a fully convolutional network, which results in spatial downsampling by a factor of 8. Each patch in the input image is represented by a node of the graph, indicated by the dotted lines in the figure. The network generates the unary terms and the pixel embeddings. The pairwise terms between any two nodes are expressed as the dot-product of their respective embeddings, yielding the precision matrix A. Given the unary terms B and the pairwise terms A, we infer the prediction x by solving (A + λI)x = B. Solving this system of linear equations results in coupling among all the node variables. This approach is loss-agnostic, works with arbitrary differentiable losses, and has enabled us to learn task-specific networks for a variety of tasks, such as saliency estimation, human parts segmentation, and semantic segmentation. Finally, our implementation is efficient, fully GPU based, and built on top of the Caffe library [14].

We first give a brief review of the G-CRF model in Sec. 2, then provide a detailed description of our approach in Sec. 3, and finally demonstrate the merit of our approach on three challenging tasks, namely, semantic segmentation (Sec. 4.1), human parts segmentation (Sec. 4.2), and saliency estimation (Sec. 4.3).

2 Deep Gaussian Conditional Random Fields

We briefly describe the deep G-CRF formulation of [2] for completeness, following its notation.

We consider an image I containing P pixels, where each pixel can take one of L labels. The predictions are represented as a real-valued vector x ∈ R^{PL}, giving a score for every pixel-label combination. The unary and pairwise terms are respectively denoted by B ∈ R^{PL} and A ∈ R^{PL×PL}. Dropping the probabilistic formulation provided in the introduction, we view the G-CRF as a structured layer [12] that minimizes a quadratic energy E(x):

E(x) = (1/2) x^T (A + λ I) x - B^T x        (1)

where a positive constant λ is added to the diagonal entries of A to make A + λI positive definite. Given A such that A + λI is positive definite and any B, E(x) has a unique global minimum at

(A + λ I) x = B        (2)

Thus the “exact” inference for x that gives the minimum energy involves solving a system of linear equations.
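To make this concrete, the following toy NumPy sketch (illustrative sizes and values, not the authors' implementation) verifies that solving the linear system of Eq. 2 minimizes the quadratic energy of Eq. 1:

```python
import numpy as np

# Toy G-CRF inference: with a positive-definite precision matrix A + lam*I,
# the minimizer of E(x) = 0.5 x^T (A + lam I) x - B^T x solves (A + lam I) x = B.
rng = np.random.default_rng(0)
N, lam = 20, 0.5
M = rng.standard_normal((N, N))
A = M @ M.T                      # symmetric positive semi-definite pairwise term
B = rng.standard_normal(N)       # unary term

x = np.linalg.solve(A + lam * np.eye(N), B)   # inference, Eq. (2)

# The gradient of the energy vanishes at the solution.
grad_E = (A + lam * np.eye(N)) @ x - B
assert np.allclose(grad_E, 0.0, atol=1e-8)
```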

The authors of [2] propose learning the model parameters A and B via end-to-end network training. To achieve this, they design a G-CRF layer, or deep-learning module, which (a) collects the unary and pairwise terms during the forward pass and outputs the predictions, and (b) collects the gradients of the prediction in the backward pass and uses them to compute the gradients of the model parameters, which it then back-propagates through the network.

In particular, considering that the G-CRF layer obtains the gradient of the loss L with respect to its output x, namely ∂L/∂x, the authors of [2] show that the gradients of the unary terms can be obtained by solving a new system of linear equations:

(A + λ I) (∂L/∂B) = ∂L/∂x        (3)

while the gradients of the pairwise terms are given by:

∂L/∂A = - (∂L/∂B) ⊗ x^T        (4)

where ⊗ denotes the Kronecker product operator; since ∂L/∂B and x are vectors, Eq. 4 is simply the negative outer product of ∂L/∂B and x.

3 Low-Dimensional G-CRF Pixel Embeddings

Having formally introduced the G-CRF model, we now establish the fully-connected G-CRF through low-dimensional embeddings. We use the same mathematical notation as in the previous section.

For brevity, we denote the number of variables in our formulation by N = P·L, where P is the number of pixels and L the number of labels. We also denote by d the dimensionality of the feature embedding space, where d ≪ N. We denote by Â ∈ R^{d×N} the matrix of pixel embeddings for all variables in our formulation, whose i-th column â_i is the embedding of the i-th variable. We begin by first defining the pairwise terms, or the precision matrix, in terms of the pixel embeddings as:

A = Â^T Â        (5)

The matrix A gives the pairwise terms for every pair of pixels and labels in the label set, i.e. A_{ij} = ⟨â_i, â_j⟩ for every pair of variables i, j.

Since A is composed via dot-products of pixel embeddings, it is symmetric and positive semi-definite by design. However, A is low-rank, with rank at most d, and is not strictly positive definite. Therefore, to make it strictly positive definite, we add a positive constant λ to its diagonal elements; indeed, x^T (Â^T Â + λI) x = ||Âx||^2 + λ||x||^2 > 0 for any x ≠ 0. We note that A + λI is now guaranteed to be positive definite for any λ > 0, unlike the case of the sparse G-CRF, where λ had to be set by hand in [2].

Following this definition of A, the energy function that we wish to minimize is now given by:

E(x) = (1/2) x^T (Â^T Â + λ I) x - B^T x        (6)

Given that the precision matrix Â^T Â + λI is strictly positive definite, E(x) has a unique global minimum at

(Â^T Â + λ I) x = B        (7)

Thus, inference involves solving a system of linear equations. We take advantage of the positive definiteness of Â^T Â + λI to use the fast conjugate gradients method [28] for solving the system of linear equations iteratively. The computational complexity of this method can be controlled by d, since each iteration only needs matrix-vector products with Â and Â^T, costing O(dN) operations, rather than products with the full N × N matrix Â^T Â, costing O(N^2) - resulting in very efficient solvers.
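As an illustration of how the embedding dimension d controls the cost, the following is a minimal NumPy sketch of conjugate-gradient inference that never forms the precision matrix explicitly; the function name, sizes and values are illustrative, and this is a toy CPU sketch rather than the authors' GPU implementation. Here A_hat plays the role of Â:

```python
import numpy as np

def dense_gcrf_infer(A_hat, B, lam=1e-3, max_iter=200, tol=1e-8):
    """Solve (A_hat^T A_hat + lam*I) x = B with conjugate gradients.

    A_hat : (d, N) matrix of pixel embeddings (toy input).
    B     : (N,)   vector of unary terms.
    The precision matrix is never formed explicitly; every CG iteration
    only needs products with A_hat and A_hat^T, i.e. O(dN) work.
    """
    def matvec(v):
        return A_hat.T @ (A_hat @ v) + lam * v

    x = np.zeros_like(B)
    r = B - matvec(x)          # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy usage: 50 variables, 8-dimensional embeddings.
rng = np.random.default_rng(0)
A_hat = rng.standard_normal((8, 50))
B = rng.standard_normal(50)
x = dense_gcrf_infer(A_hat, B, lam=0.1)
# Check against the explicit dense solve.
x_ref = np.linalg.solve(A_hat.T @ A_hat + 0.1 * np.eye(50), B)
assert np.allclose(x, x_ref, atol=1e-5)
```

Each call to matvec costs O(dN) operations and memory, which is what keeps fully-connected inference tractable for small d.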

Gradients of the dense G-CRF parameters

We now turn to learning the model parameters via end-to-end network training. To achieve this we require the derivatives of the overall loss L with respect to the model parameters, namely B and Â. As described in Eq. 7, we have an analytical closed-form relationship between our model parameters B, Â, and the prediction x. Therefore, by applying the chain rule of differentiation, we can analytically express the gradients of the model parameters in terms of the gradients of the prediction. The gradients of the prediction are delivered by the neural network layer on top of our dense-G-CRF module through back-propagation.

The gradients of the unary terms are straightforward to obtain by substituting Eq. 5 in Eq. 3 as:

(Â^T Â + λ I) (∂L/∂B) = ∂L/∂x        (8)

We thus obtain the gradients of the unary terms by solving a system of linear equations.

Turning to the gradients of the pixel embeddings, ∂L/∂Â, we use the chain rule of differentiation as follows:

∂L/∂vec(Â) = (∂vec(Â^T Â)/∂vec(Â))^T · ∂L/∂vec(Â^T Â)        (9)

We know the expression for ∂L/∂(Â^T Â) from Eq. 4, but to obtain the expression for ∂vec(Â^T Â)/∂vec(Â) we need to follow some more tedious steps, which however lead to a simple solution.

As in [9], we define a permutation matrix T_{d,N} of size dN × dN as follows:

T_{d,N} vec(M) = vec(M^T),  for any M ∈ R^{d×N}        (10)

where vec is the vectorization operator that vectorizes a matrix by stacking its columns. When premultiplied with another matrix, T_{d,N} rearranges the ordering of the rows of that matrix, while when postmultiplied with another matrix, it rearranges its columns. Using this matrix, we can form the following expression [9]:

∂vec(Â^T Â)/∂vec(Â) = (I_N ⊗ Â^T) + (Â^T ⊗ I_N) T_{d,N}        (11)

where I_N is the N × N identity matrix. Substituting Eq. 4 and Eq. 11 into Eq. 9, we obtain

∂L/∂vec(Â) = - [(I_N ⊗ Â^T) + (Â^T ⊗ I_N) T_{d,N}]^T vec( (∂L/∂B) x^T )        (12)

Thus, the gradients of Â can be obtained analytically from the closed-form expression in Eq. 12. Despite its apparently complex form, this final expression is particularly simple to implement.

We now give an intuitive interpretation of this expression. As shown in Eq. 9, the gradients of Â come from the product of two terms. The first term involves ∂L/∂(Â^T Â), i.e. the gradient with respect to the pairwise matrix. As seen in Eq. 2, the pairwise and unary terms are linearly related via x; thus, the gradients of the pairwise terms depend linearly on the gradients of the unary terms B, scaled by the coefficients in x. The second term involves ∂vec(Â^T Â)/∂vec(Â). This amounts to taking the derivative of a quadratic form of Â, i.e. Â^T Â, with respect to Â. Thus the second term is linear in Â, albeit with repetitions and permutations to satisfy the rules of matrix multiplication. We use this insight to very efficiently implement the computation of the gradients during the backward pass, without computing any Kronecker products or multiplications with T_{d,N} explicitly.
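Working through Eq. 12 with standard vec/Kronecker identities, the gradient collapses, under our reading of Eqs. 4, 9 and 11, to the compact form ∂L/∂Â = -Â((∂L/∂B)x^T + x(∂L/∂B)^T), which only needs two outer products and one small matrix product. The toy NumPy sketch below (illustrative sizes; not the authors' GPU code) checks this compact form against the explicit expression of Eq. 12, with A_hat playing the role of Â:

```python
import numpy as np

# Toy backward pass for the embedding matrix A_hat (illustrative sizes).
rng = np.random.default_rng(2)
d, N, lam = 3, 5, 0.1
A_hat = rng.standard_normal((d, N))
B = rng.standard_normal(N)                   # unary terms
dL_dx = rng.standard_normal(N)               # gradient delivered by the layer above

P = A_hat.T @ A_hat + lam * np.eye(N)        # precision matrix
x = np.linalg.solve(P, B)                    # forward pass, Eq. (7)
g_B = np.linalg.solve(P, dL_dx)              # unary gradients, Eq. (8)

# Compact form: no Kronecker products or permutation matrices are materialized.
dL_dAhat = -A_hat @ (np.outer(g_B, x) + np.outer(x, g_B))

# Reference: the explicit Kronecker-product expression of Eq. (12).
vec = lambda M: M.reshape(-1, order="F")     # column-stacking vectorization
T = np.zeros((d * N, d * N))
for i in range(d):
    for j in range(N):
        T[j + i * N, i + j * d] = 1.0        # T vec(M) = vec(M.T), Eq. (10)
J = np.kron(np.eye(N), A_hat.T) + np.kron(A_hat.T, np.eye(N)) @ T
dL_dvecA = -J.T @ vec(np.outer(g_B, x))      # Eq. (12)

assert np.allclose(vec(dL_dAhat), dL_dvecA, atol=1e-8)
```

The compact form never materializes an N^2-sized object, which is consistent with the efficient backward pass described above.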

Implementation and Efficiency

Our approach is implemented as a layer in the Caffe deep learning library [14], and we use the conjugate gradients algorithm for solving the systems of linear equations. The complexity of our method is directly proportional to the size of the matrix of pixel embeddings Â ∈ R^{d×N}, and thus depends on the embedding dimension, the number of pixels, and the number of labels. Our implementation is efficient, and exploits the fast linear algebra routines of the CUDA cuBLAS library. For these timing comparisons, we use a GTX-1080 GPU. Our inference procedure takes s on average for the semantic segmentation task (21 labels) for an image patch of size pixels downsampled by a factor of 8, and for an embedding dimension of . This is an order of magnitude faster than the approximate dense CRF mean-field inference which takes s on average. The sparse G-CRF and the Potts-type sparse G-CRF from [2] take s and s respectively for the same input size. Thus, our dense inference procedure comes at negligible extra cost compared to the sparse G-CRF. The average inference times for the same sized input, and dimensional embeddings, for the human parts segmentation (7 labels) and the saliency estimation task (2 labels) are s and s respectively. We intend to make our implementation publicly available.

4 Experiments and Results

In this section, we describe our experimental setup and results.

Base network. Our base network is deeplab-v2 resnet-101 [5], which is a three-branch multi-resolution network based on the resnet-101 network [11]. It processes the input image at three resolutions, with scaling factors of 1, 0.75 and 0.5, and then combines the network responses by upsampling them to the original image resolution and taking an element-wise maximum of the responses at the three resolutions. It also uses atrous spatial pyramid pooling (ASPP), i.e. multiple parallel filters with different dilation parameters, to better exploit multi-scale features. While this gives a slight performance boost for semantic segmentation, it does not help performance on the other tasks, namely human parts segmentation and saliency estimation; therefore, we do not use ASPP for these two tasks. The training regime uses random horizontal flipping and random scaling of the input image for data augmentation.

Fully-Connected G-CRF network. Our fully-connected G-CRF (dense-G-CRF) network is shown in Figure 2. The dense-G-CRF network uses the base network to provide unaries, and a piece of the resnet-101 network, in parallel to the base network, to construct the pixel embeddings for the pairwise terms. Our validation experiments on the image segmentation benchmark in Sec. 4.1 indicate that chopping the pairwise branch, referred to as resnet-pw, at the res4a layer yields the best performance, and we use the same resnet-pw for the other benchmarks. The resnet-pw branch processes the image at the original resolution. We use a two-phase piece-wise training strategy: we first train the unary network without the pairwise stream, and then train the pairwise stream of the network in a second phase, while keeping the unary stream fixed. This strategy gives us a smaller training loss than training both unary and pairwise streams at the same time. Each training phase uses K iterations with a batch size as in [5]. The initial learning rate for the unary stream is fixed to , while for the second phase we set it to . We use a polynomially decaying learning rate with power as in [5]. Training each network takes around days on a GTX-1080 GPU with 8 GB of RAM.

4.1 Semantic Segmentation

The semantic image segmentation task involves classifying each pixel in the image as belonging to one of a set of candidate classes. In this section, we use our dense labeling framework on this challenging task. We begin with a description of the dataset, followed by a brief description of our baselines, before discussing our results.

Dataset. We use our approach on the PASCAL VOC 2012 image segmentation benchmark. This dataset contains 1,464 training and 1,449 validation images with manually annotated per-pixel labels for twenty foreground object classes and one background class. We also use the additional per-pixel segmentation annotations provided by [10], obtaining 10,582 training images in total. This benchmark has a withheld test set containing 1,456 unannotated images. While the test set images are publicly available, the ground-truth is not, and the evaluation happens on an online server. The evaluation criterion is the pixel intersection-over-union (IOU) metric, averaged across the classes.
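For reference, a minimal sketch of the per-class IoU computation is given below; in practice the intersections and unions are accumulated over the entire evaluation set before dividing, and the class count and ignore label follow the usual PASCAL VOC convention (both are assumptions of this sketch rather than details stated in the text):

```python
import numpy as np

def mean_iou(pred, gt, num_classes=21, ignore_label=255):
    """Mean of per-class intersection-over-union for integer label maps."""
    valid = gt != ignore_label                # void pixels are excluded
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                         # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```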

Ablation Studies. We first delve into ablation studies on the validation set. For these experiments, we train on the 10,582 training images and evaluate on the 1,449 validation images. We study the effect of varying the depth of the pairwise network stream by chopping the resnet-101 at three lengths, indicated by the standard resnet layer names. Additionally, we also study the effect of varying the size of the pixel embeddings for the pairwise terms. These results are reported in Table 1. We notice that the best results are obtained at an embedding dimension of , and the results improve as we increase the depth of resnet-pw. We do not increase the depth of resnet-pw beyond res4a due to memory constraints. The improvement over the base network is in mean IoU.

Base network [5]
dense-G-CRF Embedding Dimension
resnet-pw size 64 128 256 512
res2a
res3a
res4a 77.05
Table 1: Ablation study- mean Intersection Over Union (IOU) accuracy on PASCAL VOC 2012 validation set. We compare the performance of our method against that of the base network, and study the effect of varying the depth of the pairwise stream network, and the size of pixel embeddings.

Performance on test set. We now compare our approach with the base network [5], the base network with the sparse deep G-CRF from [2], as well as other leading approaches on this benchmark. In these experiments, we train with the augmented training and validation sets, and evaluate performance on the test set. We use our best configuration from Table 1 (resnet-pw chopped at res4a).

Baselines. The mainstream approach on this task is to use fully convolutional networks [4, 5, 26] trained with the softmax cross-entropy loss. For this task, we compare our approach with the state-of-the-art methods on this benchmark. The baselines include (a) the multi-scale deeplab network [4], which introduced the hole algorithm in conjunction with dense CRF post-processing [17] and used weak supervision to exploit additional training samples with bounding box annotations, (b) the CRF as RNN network [37], which first proposed that dense CRF parameters could be learnt alongside the unary features in an end-to-end manner, (c) the Deeplab+Boundary network [15], which exploits an edge detection network to boost the performance of the deeplab network, (d) the Adelaide Context network [23] and (e) the deep parsing network [25], both of which learn pairwise terms using variants of mean-field inference, (f) our base network, i.e. the deeplab-v2 network [5], and (g) the G-CRF model [2], which combines the deeplab-v2 network with 4-connected Potts-type pairwise terms.

We report the results in Table 2, observing an improvement of 0.3% in mean IoU over the sparse deep G-CRF approach. Using dense-CRF post-processing boosts the performance of all methods consistently. Qualitative improvements are shown in Fig. 3.

Method mean IoU
Deeplab Cross-Joint [6] 73.9
CRFRNN [37] 74.7
Deeplab Multi-Scale + CRF [15] 74.8
Adelaide Context [23] 77.8
Deep Parsing Network [25] 77.4
Deeplab V2 + CRF [5] 79.7
Deeplab-V2 G-CRF Potts [2] 79.5
Deeplab-V2 G-CRF + CRF Potts [2] 80.2
dense-G-CRF (Ours) 79.8
dense-G-CRF + CRF (Ours) 80.4
Table 2: Semantic segmentation - mean Intersection Over Union (IOU) accuracy on PASCAL VOC 2012 test.
Figure 3: Qualitative Results of the Semantic Segmentation Task on the PASCAL VOC 2012 image segmentation validation dataset. The first column (a) shows the input image, the second column (b) shows the segmentation ground truth, the third column (c) shows the output of the unary (base) network, and the fourth column (d) shows the dense-G-CRF output. It can be seen that the pairwise terms from the pixel embeddings enforce pairwise consistencies by gathering evidence from other image regions, thus yielding better performance. Further, the dense-G-CRF network better captures finer details, and better connects parts of the same object separated by ambiguous image regions.

4.2 Human Parts Segmentation

The human parts segmentation task involves semantic segmentation of human body parts. This is another challenging dense labeling task because of the huge variation in human pose across scenarios.

Dataset. We use the PASCAL Person Parts dataset introduced in [8]. This dataset is a subset of the PASCAL VOC 2010 dataset, with human part segmentations annotated by Chen et al. [8], and has humans in a large variety of poses and scales. The dataset contains detailed part segmentations for every person. As in [21], we merge the annotations to obtain six person part classes, namely the head, torso, upper arms, lower arms, upper legs, and lower legs; additionally, we have a seventh background class. This dataset has 1,716 training images and 1,817 testing images. The evaluation criterion is the pixel intersection-over-union (IOU) metric, averaged across the classes.

Baselines. As in the case of semantic segmentation, the state-of-the-art approaches for human parts segmentation also use fully convolutional networks, sometimes additionally exploiting Long Short-Term Memory (LSTM) units [21, 22]. For this task, we compare our approach to the following methods: (a) the deeplab attention-to-scale network [3], which proposes to weigh the features at multiple image scales with deep supervision, (b) the Auto Zoom network [35], which adaptively resizes predicted object instances and parts to the desired scale for refinement of parsed objects, (c) the Local-Global LSTM network [22], which combines local and global cues via LSTM units, (d) the Graph LSTM network [21], which proposes a sophisticated super-pixel based graph construction approach to generate sequences that are then memorized by LSTM units, (e) the base network with and without dense CRF post-processing, and (f) the sparse G-CRF Potts model.

We report the results in Table 3. For our dense-G-CRF method, we use an embedding dimension of . Please note that while the previous state-of-the-art approach, deeplab-v2, achieves 64.94% mean IoU with dense-CRF post-processing, we outperform it by 1.16% mean IoU without using dense-CRF post-processing. Additionally, we outperform the Deeplab-V2 G-CRF Potts baseline from [2] by 0.89%. We show qualitative results in Fig. 4.

Method mean IoU
Attention [3] 56.39
Auto Zoom [35] 57.54
LG-LSTM  [22] 57.97
Graph LSTM  [21] 60.16
Deeplab-V2  [5] 64.40
Deeplab-V2-CRF [5] 64.94
Deeplab-V2 G-CRF Potts [2] 65.21
dense-G-CRF (Ours)  [5] 66.10
Table 3: Part segmentation - mean Intersection-Over-Union accuracy on the PASCAL Parts dataset of [8].
Figure 4: Qualitative Results of the Part Segmentation Task on the PASCAL Parts validation dataset. The first column (a) shows the input image, the second column (b) shows the part ground truth, the third column (c) shows the output of the unary (base) network, and the fourth column (d) shows the dense-G-CRF output. Our method captures the boundaries better than the base network.

4.3 Saliency Estimation

The saliency estimation problem aims to discover the most interesting regions of an image. It is also posed as a dense labeling task, where the objective is to assign to each pixel a score proportional to the magnitude of attention that an observer would give to that pixel. This task is very hard to evaluate, since the magnitude of attention is a subjective quantity. Standard benchmarks work around this difficulty by having the dataset annotated by a set of annotators and then averaging their annotations to estimate a consensus. These scores are normalized and thresholded to give a binary classification to each pixel, indicating whether the image region is interesting or not.

Dataset. We use a training protocol similar to [16] for this task. More specifically, we use the MSRA-10K saliency dataset introduced in [33] for training, and evaluate our performance on the PASCAL-S dataset introduced in [20], as well as the HKU-IS dataset from [18]. The MSRA-10K dataset contains 10,000 images with annotated pixel-wise segmentation masks for salient objects. The PASCAL-S saliency dataset contains pixel-wise saliency estimates in the range [0, 1] for 850 images. As suggested in [20], we threshold the saliency values at 0.5 to obtain the binary masks. The HKU-IS dataset has 4,447 images, and is known for low-contrast and multiple salient objects in each image. The evaluation criterion is the maximal F-measure, as in [16, 20].
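For reference, a minimal sketch of the maximal F-measure is given below: a threshold is swept over the predicted saliency map and the best F-score is kept. The weighting beta2 = 0.3 is the convention commonly used in the saliency literature and is an assumption here, since the text does not state the value:

```python
import numpy as np

def maximal_f_measure(sal, gt, beta2=0.3, num_thresholds=256):
    """Maximal F-measure of a saliency map `sal` (floats in [0, 1])
    against a binary ground-truth mask `gt`."""
    gt = gt.astype(bool)
    best = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds):
        mask = sal >= t
        tp = np.sum(mask & gt)
        precision = tp / max(mask.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        if precision + recall > 0:
            f = (1 + beta2) * precision * recall / (beta2 * precision + recall)
            best = max(best, f)
    return best
```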

Baselines. Our baselines for the saliency estimation task include (a) BSCA [27], which is a classical approach based on a dynamic evolution model called cellular automata and Bayesian inference, (b) the Local Estimation and Global Search (LEGS) framework [34], which is a deep learning based approach that combines local and global cues, (c) the multi-context network [36], which captures both global and local context in the input image, (d) the multiscale deep features network [18], which proposes a multi-resolution network architecture and a refinement strategy, and aggregates saliency maps at different levels, (e) the deep contrast learning network [19], which proposes a network structure that better exploits object boundaries to improve saliency estimation and additionally uses a fully connected CRF model, (f) the Ubernet architecture [16], which demonstrates that sharing parameters across mutually symbiotic tasks can help improve their overall performance, (g) our base network, i.e. deeplab-v2, and (h) the sparse G-CRF Potts model alongside the base network.

The results are tabulated in Table 4. Our approach yields a significant 2.9 point improvement in maximal F-score on PASCAL-S over the previous state-of-the-art Ubernet approach. For our dense-G-CRF method, we use an embedding dimension of . We show some qualitative results in Fig. 5.

Method PASCAL-S HKU-IS
BSCA [27] 0.666 0.723
LEGS [34] 0.752 0.770
MC [36] 0.740 0.798
MDF [18] 0.764 0.861
FCN [19] 0.793 0.867
DCL [19] 0.815 0.892
DCL + CRF [19] 0.822 0.904
Ubernet 1-Task [16] 0.835 -
Deeplab-v2 [5] 0.859 0.916
Deeplab-V2 G-CRF Potts [2] 0.861 0.914
dense-G-CRF (Ours) 0.864 0.919
Table 4: Saliency estimation results: we report the Maximal F-measure (MF) on the PASCAL Saliency dataset of [20], and the HKU-IS dataset of [18].

5 Conclusions and Future Work

In this work we propose a fully-connected G-CRF model for end-to-end training of deep architectures. Our model expresses the pairwise interactions in the G-CRF model as a low-rank matrix composed via dot-products of low-dimensional pixel embeddings. This allows us to perform exact inference in fully connected models, while using efficient and low-memory conjugate gradient solvers that exploit the low-rank nature of the resulting system. Our implementation is fully GPU based, and implemented using the Caffe library. Our experimental evaluation indicates consistent improvements over the state of the art approaches on three challenging public benchmarks for semantic segmentation, human parts segmentation and saliency estimation. In the future, we intend to exploit this framework on other dense labeling tasks, as well as regression tasks, such as depth estimation, image denoising and normal estimation, which can be naturally handled by our model thanks to its continuous nature.

Figure 5: Qualitative Results of the Visual Saliency Estimation Task on the PASCAL-S dataset. The first column (a) shows the input image, the second column (b) shows the saliency ground truth, the third column (c) shows the output of the unary (base) network, and the fourth column (d) shows the dense-G-CRF output. It can be seen that the pairwise terms help improve the base network performance significantly.

References

  • [1] N. D. F. Campbell, K. Subr, and J. Kautz. Fully-connected CRFs with non-parametric pairwise potentials. In CVPR, 2013.
  • [2] S. Chandra and I. Kokkinos. Fast, exact and multi-scale inference for semantic image segmentation with deep gaussian crfs. In ECCV, 2016.
  • [3] L. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille. Attention to scale: Scale-aware semantic image segmentation. CVPR, 2016.
  • [4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062, 2014.
  • [5] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv:1606.00915, 2016.
  • [6] L.-C. Chen, G. Papandreou, K. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. ICCV, 2015.
  • [7] L.-C. Chen, A. G. Schwing, A. L. Yuille, and R. Urtasun. Learning Deep Structured Models. In ICML, 2015.
  • [8] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, 2014.
  • [9] P. L. Fackler. Notes on matrix calculus. http://www4.ncsu.edu/~pfackler/MatCalc.pdf.
  • [10] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [12] C. Ionescu, O. Vantzos, and C. Sminchisescu. Training deep networks with structured layers by matrix backpropagation. In ICCV, 2015.
  • [13] J. Jancsary, S. Nowozin, T. Sharp, and C. Rother. Regression tree fields - an efficient, non-parametric approach to image labeling problems. In CVPR, 2012.
  • [14] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
  • [15] I. Kokkinos. Pushing the Boundaries of Boundary Detection using Deep Learning. In ICLR, 2016.
  • [16] I. Kokkinos. Ubernet: A ‘universal’ cnn for the joint treatment of low-, mid-, and high- level vision problems. In POCV workshop, 2016.
  • [17] P. Krähenbühl and V. Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In NIPS, 2011.
  • [18] G. Li and Y. Yu. Visual saliency based on multiscale deep features. In CVPR, 2015.
  • [19] G. Li and Y. Yu. Deep contrast learning for salient object detection. In CVPR, 2016.
  • [20] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille. The secrets of salient object segmentation. In CVPR, 2014.
  • [21] X. Liang, X. Shen, J. Feng, L. Liang, and S. Yan. Semantic object parsing with graph LSTM. In ECCV, 2016.
  • [22] X. Liang, X. Shen, D. Xiang, J. Feng, L. Lin, and S. Yan. Semantic object parsing with local-global long short-term memory. In CVPR, 2016.
  • [23] G. Lin, C. Shen, I. D. Reid, and A. van den Hengel. Efficient piecewise training of deep structured models for semantic segmentation. CVPR, 2016.
  • [24] F. Liu, C. Shen, and G. Lin. Deep convolutional neural fields for depth estimation from a single image. In CVPR, 2015.
  • [25] Z. Liu, X. Li, P. Luo, C.-C. Loy, and X. Tang. Semantic image segmentation via deep parsing network. In CVPR, pages 1377–1385, 2015.
  • [26] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
  • [27] Y. Qin, H. Lu, Y. Xu, and H. Wang. Saliency detection via cellular automata. In CVPR, 2015.
  • [28] J. R. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain. https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf.
  • [29] M. F. Tappen, C. Liu, E. H. Adelson, and W. T. Freeman. Learning gaussian conditional random fields for low-level vision. In CVPR, 2007.
  • [30] R. Vemulapalli, O. Tuzel, M.-Y. Liu, and R. Chellapa. Gaussian conditional random field network for semantic segmentation. In CVPR, June 2016.
  • [31] V. Vineet, J. Warrell, P. Sturgess, and P. H. Torr. Improved initialization and gaussian mixture pairwise terms for dense random fields with mean-field inference. 2013.
  • [32] T.-H. Vu, A. Osokin, and I. Laptev. Context-aware cnns for person head detection. In ICCV, pages 2893–2901, 2015.
  • [33] K. Wang, L. Lin, J. Lu, C. Li, and K. Shi. PISA: pixelwise image saliency by aggregating complementary appearance contrast measures with edge-preserving coherence. 2015.
  • [34] L. Wang, H. Lu, X. Ruan, and M. Yang. Deep networks for saliency detection via local estimation and global search. In CVPR, 2015.
  • [35] F. Xia, P. Wang, L. Chen, and A. L. Yuille. Zoom better to see clearer: Human part segmentation with auto zoom net. In ECCV, 2016.
  • [36] R. Zhao, W. Ouyang, H. Li, and X. Wang. Saliency detection by multi-context deep learning. In CVPR, 2015.
  • [37] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015.