# Supervised Convolutional Sparse Coding

###### Abstract

Convolutional Sparse Coding (CSC) is a well-established image representation model, especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that supervised convolutional learning yields two key advantages: first, we learn more semantically relevant filters in the dictionary, and second, we achieve improved image reconstruction on unseen data.

## 1 Introduction

Convolutional Sparse Coding (CSC) is a rich image representation model inspired by the response of neurons to stimuli within their receptive fields in human vision. It has been applied to various computer vision tasks such as image and video processing [1, 2, 3, 4, 5], computational imaging [6], line drawings [7], and tracking [8], as well as the design of deep learning architectures [9].

CSC is a special type of sparse dictionary learning (DL) algorithm that uses the convolution operator, unlike traditional DL, which uses regular linear combinations. This results in diverse translation-invariant patches while maintaining the latent structure of the underlying signal. In its formulation, CSC inherently provides sparse maps that correspond to each dictionary element. These sparse maps fire at locations where the dictionary element is prevalent.

Since traditional dictionary learning produces dictionary elements that focus on image reconstruction (a generative task), the resulting elements might not necessarily have significant semantic value, i.e. they might not be discriminative of any semantic class (e.g. objects). In this context, there has been successful work [10, 11, 12] that studies traditional dictionary learning from a supervised point of view. In this case, more semantically meaningful dictionaries are generated instead of purely reconstructive ones.

Inspired by these supervised dictionary learning approaches, we model the CSC problem as a supervised convolutional learning and coding task. We show in this paper that our framework leads to two important advantages. First, we can learn dictionary elements that are more semantically meaningful. Figure 1 shows an example of how the semantics of window dictionary elements improve as the weight of the regularizer in our Supervised CSC framework is increased. While the semantics of patches are partially judged by visual inspection, we propose to additionally evaluate patch semantics by comparing the classification performance of a simple classifier trained using our supervised dictionary elements against one trained using a traditional unsupervised dictionary. This shows that our dictionary is more discriminative and therefore carries more semantic information. Second, we can improve reconstruction quality on unseen data, because the learned dictionary elements are semantically meaningful and thus less prone to over-fitting. The surprising result of our work is that by adding a regularizer, we not only achieve a trade-off between two objectives in training, but in fact improve both objectives simultaneously on unseen data.

Some reformulations and extensions to traditional CSC have emerged recently [13, 14, 15, 16]. However, there exists no prior work that handles the CSC problem from a supervised approach in which the convolutional dictionary learning incorporates groundtruth annotations of a target class. In this work, we are the first to jointly learn convolutional dictionaries that are both reconstructive and discriminative by adding a logistic discrimination loss to the CSC objective. We solve the resulting optimization in the Fourier domain using coordinate descent, since the overall objective is not jointly convex, but it is convex for each of the variables separately. We compare our approach to a baseline approach in which traditional CSC is cascaded with a classifier trained on its learned dictionary.

## 2 Related Work

In this section, we review research related to the CSC problem and supervised dictionary learning, since our work is most closely related to these two areas in computer vision.

Convolutional Sparse Coding. CSC has many applications and quite a few methods have been proposed for solving the optimization problems in CSC.
The seminal work of [17] proposes Deconvolutional Networks, a learning framework based on convolutional decomposition of images under a sparsity constraint. Unlike previous work in sparse image decomposition [18, 19, 20, 10] that builds hierarchical representations of an image on a patch level, Deconvolutional Networks perform a sparse decomposition over whole images. This strategy significantly reduces the redundancy among filters compared with those obtained by the patch-based approaches. Kavukcuoglu et al. [21]
propose a convolutional extension to the coordinate descent sparse coding algorithm [22] to represent images using convolutional dictionaries for object recognition tasks. Following this path, Yang et al. [23] propose a supervised translation-invariant sparse coding approach.
To efficiently solve the optimization problems in CSC, most existing approaches transform the problem into the frequency domain. Bristow et al. [24] propose a quad-decomposition of the original objective into convex subproblems and they exploit the Alternating Direction Method of Multipliers (ADMM) approach to solve the convolution subproblems in the Fourier domain. In their follow-up work [25], a number of optimization methods for solving convolution problems and their applications are discussed. In the work of [26], the authors further exploit the separability of convolution across bands in the frequency domain. Their gain in efficiency is due to computing a partial vector (instead of a full vector). To further improve efficiency, Heide et al. [27] transform the original constrained problem into an unconstrained problem by encoding the constraints in the objective. The new objective function is then further split into two subproblems that are easier to optimize separately. They also devise a more flexible solution by adding a diagonal matrix to the objective function to handle the boundary artifacts. Recent work [15, 16] has also reformulated the CSC problem by extending its applicability to higher dimensions [13] and to large scale data [14]. We adopt the optimization strategy used in [27] as it is the state of the art unsupervised CSC solver.

Supervised Dictionary Learning. Supervised dictionary learning (SDL) has received great attention in both the computer vision and image processing communities. SDL methods exploit sparse signal decompositions to learn representative dictionaries whose elements can be used to discriminate certain semantic classes from each other. The algorithms vary in the procedure of supervision and the elements involved in the learning task. In the most simplistic approach to SDL, multiple dictionaries are computed, one per class [28], and then combined into one [29]. A more robust approach involves jointly learning dictionary elements and classifier parameters to obtain discriminative dictionaries [10, 11, 30]. Zhang et al. [31] follow a similar approach where they learn discriminative projections along with the dictionary, thus learning the dictionary in a projected space. More recent work [32] formulates SDL as a multimodal task-driven approach using sparsity models under a joint sparsity prior. Yankelevsky et al. [33] use two graph-based regularizations to encourage the dictionary atoms to preserve their feature similarities.
In [34], the authors structure the learned dictionaries by cross-label suppression with group regularization, which increases the computational efficiency without sacrificing classification accuracy. In this work, we are the first to embed such supervision into the CSC model, leading to convolutional dictionary elements that are more semantically relevant to the classes they are trained to discriminate.

## 3 Convolutional Sparse Coding

In this section, we present the mathematical formulation of the CSC problem and discuss state-of-the-art optimization methods to compute an approximate solution. There are multiple slightly different, but similar formulations for the CSC problem. We follow the formulation provided by Heide et al. [27] in which boundary handling is augmented in the reconstructive term and a stationary solution is guaranteed by a coordinate descent approach.

### 3.1 Unsupervised CSC Model

The CSC problem can be expressed in the following form:

$$\min_{\mathbf{d},\mathbf{z}} \;\; \frac{1}{2}\left\|\mathbf{x}-\mathbf{M}\sum_{k=1}^{K}\mathbf{d}_k*\mathbf{z}_k\right\|_2^2+\beta\sum_{k=1}^{K}\|\mathbf{z}_k\|_1 \quad \text{subject to} \quad \|\mathbf{d}_k\|_2^2\leq 1 \quad \forall k \tag{1}$$

where $\mathbf{d}_k$ are the vectorized 2D patches representing dictionary elements, $\mathbf{z}_k$ are the vectorized sparse maps corresponding to each of the dictionary elements, and $\mathbf{M}$ is a binary diagonal matrix for boundary handling (see Figure 2). The data term reconstructs the image $\mathbf{x}$ using a sum of convolutions of the dictionary elements with the sparse maps, and $\beta$ controls the tradeoff between the sparsity of the feature maps and the reconstruction error. The inequality constraint on the dictionary elements assumes Laplacian distributed coefficients, which ensures solving the problem at a proper scale, since a larger norm of $\mathbf{d}_k$ would scale down the value of the corresponding $\mathbf{z}_k$ respectively. The above equation shows the objective function on a single image; it can be easily extended to multiple images where, for each image, sparse maps are inferred, whereas all the images share the same dictionary elements.
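To make the objective concrete, the data and sparsity terms of Eq. 1 can be evaluated in a few lines of NumPy/SciPy. The function name and array layout below are our own illustrative choices, not the authors' code:

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_objective(x, dicts, maps, mask, beta):
    """Evaluate the CSC objective of Eq. 1 (illustrative helper).

    x     : (H, W) image
    dicts : list of (m, m) filters d_k
    maps  : list of (H, W) sparse maps z_k
    mask  : (H, W) binary boundary mask M
    beta  : sparsity tradeoff coefficient
    """
    # Data term: image minus the sum of convolutions d_k * z_k,
    # masked by M to ignore boundary pixels.
    recon = sum(fftconvolve(z, d, mode="same") for d, z in zip(dicts, maps))
    data_term = 0.5 * np.sum((mask * (x - recon)) ** 2)
    # Sparsity term: l1 norm of all sparse maps.
    sparsity = beta * sum(np.abs(z).sum() for z in maps)
    return data_term + sparsity
```

In practice the solver never evaluates this objective directly; it is useful mainly for monitoring convergence across coordinate descent iterations.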

### 3.2 CSC Subproblems

The objective in Eq. 1 is not jointly convex. However, solving it for a group of variables while keeping the others fixed leads to two convex subproblems, which we refer to as the coding subproblem and the dictionary learning subproblem. For ease of notation, we represent the convolution operations by multiplication of Toeplitz matrices with the corresponding variables.

#### 3.2.1 Learning Subproblem

We learn the dictionary elements for a fixed set of sparse feature maps as shown in Eq. 2.

$$\min_{\mathbf{d}} \;\; \frac{1}{2}\left\|\mathbf{x}-\mathbf{M}\mathbf{Z}\mathbf{S}\mathbf{d}\right\|_2^2 \quad \text{subject to} \quad \|\mathbf{d}_k\|_2^2\leq 1 \quad \forall k \tag{2}$$

Here, $\mathbf{Z}=[\mathbf{Z}_1,\dots,\mathbf{Z}_K]$ is of size $D\times DK$ and is a concatenation of the sparse convolution matrices, $\mathbf{d}=[\mathbf{d}_1^\top,\dots,\mathbf{d}_K^\top]^\top$ is a concatenation of the dictionary elements, and $\mathbf{S}$ pads the filters to allow larger spatial support.

#### 3.2.2 Coding Subproblem

We infer the sparse maps for a fixed set of dictionary elements as shown in Eq. 3.

$$\min_{\mathbf{z}} \;\; \frac{1}{2}\left\|\mathbf{x}-\mathbf{M}\mathbf{D}\mathbf{z}\right\|_2^2+\beta\|\mathbf{z}\|_1 \tag{3}$$

Similar to above, $\mathbf{D}=[\mathbf{D}_1,\dots,\mathbf{D}_K]$ is of size $D\times DK$ and is a concatenation of the convolution matrices of the dictionary elements, and $\mathbf{z}=[\mathbf{z}_1^\top,\dots,\mathbf{z}_K^\top]^\top$ is a concatenation of the vectorized sparse maps.

### 3.3 CSC Optimization

Finding an efficient solution to the CSC problem is a challenging task due to its high computational complexity and the non-convexity of its objective function. Seminal advances [24, 26, 27] in CSC have demonstrated computational speed-up by solving the problem efficiently in the Fourier domain, where the convolution operator is transformed to element-wise multiplication. As such, the optimization is modeled as a biconvex problem with two convex subproblems that are solved iteratively and combined to form a coordinate descent solution. Despite the performance boost attained by solving the CSC optimization problem in the Fourier domain, the problem is still deemed computationally heavy due to the dominating cost of solving large linear systems. More recent work [27, 26, 35] makes use of the block-diagonal structure of the matrices involved and solves the linear systems in a parallel fashion. To efficiently solve the subproblems, they are reformulated in [27] as a sum of functions that are simple to optimize individually as such:

$$\min_{\mathbf{u}} \; \sum_{i} f_i(\mathbf{K}_i\mathbf{u})$$

where each $f_i$ is a simple convex function and each $\mathbf{K}_i$ is a linear operator. For the coding subproblem, $\mathbf{u}=\mathbf{z}$ with $f_1(\mathbf{v})=\frac{1}{2}\|\mathbf{x}-\mathbf{v}\|_2^2$, $\mathbf{K}_1=\mathbf{M}\mathbf{D}$, and $f_2(\mathbf{v})=\beta\|\mathbf{v}\|_1$, $\mathbf{K}_2=\mathbf{I}$. For the learning subproblem, $\mathbf{u}=\mathbf{d}$ with $f_1$ as above, $\mathbf{K}_1=\mathbf{M}\mathbf{Z}\mathbf{S}$, and $f_2$ the indicator function of the constraint set $\{\mathbf{d}:\|\mathbf{d}_k\|_2^2\leq 1 \;\forall k\}$, $\mathbf{K}_2=\mathbf{I}$.

The subproblems shown above can be remapped to a general form that can be solved using existing optimization algorithms. ADMM in general solves equations of the general form shown below.

$$\min_{\mathbf{u},\mathbf{y}} \; f(\mathbf{u})+g(\mathbf{y}) \quad \text{subject to} \quad \mathbf{K}\mathbf{u}=\mathbf{y} \tag{4}$$

We can map this general form to the coding and learning CSC subproblems by setting $f(\mathbf{u})=0$ and $g(\mathbf{y})=\sum_i f_i(\mathbf{y}_i)$, with $\mathbf{K}$ the vertical concatenation of the operators $\mathbf{K}_i$, where $\mathbf{u}=\mathbf{z}$ for the coding subproblem and $\mathbf{u}=\mathbf{d}$ for the learning subproblem.

Solving the subproblems using scaled ADMM leaves us with a minimization that is separable in all the $\mathbf{y}_i$, as shown in Algorithm 1.
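Each separable minimization reduces to a proximal step. As a sketch (using the standard forms of these operators, not code from the paper), the $\ell_1$ term yields element-wise soft thresholding and the norm constraint yields a projection onto the unit $\ell_2$ ball:

```python
import numpy as np

def prox_l1(v, tau):
    # Proximal operator of tau * ||.||_1: element-wise soft thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_unit_ball(v):
    # Proximal operator of the indicator of {d : ||d||_2 <= 1}:
    # projection onto the unit l2 ball.
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n
```

For example, `prox_l1(np.array([2.0, -0.5]), 1.0)` shrinks the large coefficient and zeroes the small one, which is exactly the mechanism that produces sparse maps.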

The second line of the algorithm involves a large linear system that can be efficiently solved in the Fourier domain. The circulant convolution matrices $\mathbf{D}$ and $\mathbf{Z}$ become diagonal in Fourier space, and thus inverting them can be done in parallel by making use of the Woodbury formula for inverting block-diagonal matrices. The technical report [26] gives more details on this solution. The third line consists of proximal operators for each of the $f_i$, which are simple to derive and well known in the literature. Complexity and convergence details of this algorithm are given in [27].
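The Fourier-domain speed-up rests on the convolution theorem: circular convolution becomes element-wise multiplication of spectra, which is what diagonalizes the circulant matrices. A small numerical check (illustrative only, with an arbitrary 8×8 signal):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 8))   # sparse map (dense here, for the check)
d = rng.standard_normal((8, 8))   # filter, zero-padded to the image size

# Fourier route: convolution is element-wise multiplication of spectra.
conv_fft = np.real(np.fft.ifft2(np.fft.fft2(d) * np.fft.fft2(z)))

# Spatial route: explicit circular 2D convolution.
conv_spatial = np.zeros_like(z)
for i in range(8):
    for j in range(8):
        for p in range(8):
            for q in range(8):
                conv_spatial[i, j] += d[p, q] * z[(i - p) % 8, (j - q) % 8]

assert np.allclose(conv_fft, conv_spatial)
```

The same identity, applied per frequency, is what turns the large convolutional linear system into many small independent systems.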

In the next section, we show how to extend the formulation above to obtain a mathematical formulation for supervised CSC and we discuss the optimization algorithm used to compute an approximate solution.

## 4 Supervised Convolutional Sparse Coding

In this work, given the input images, we consider that each pixel may belong to any of a number of different classes coming from a variety of groundtruth annotations, including segmentation masks or bounding boxes. We now seek to derive a convolutional sparse representation that not only reconstructs the image, but also resembles the classification available in the annotation (see Figure 2). In this sense, the learned dictionary elements are guided to become supervised and more representative of the available classes.

### 4.1 Supervised CSC Model

The SCSC problem can be expressed in the following form:

$$\min_{\mathbf{d},\mathbf{z},\mathbf{w},b} \; \frac{1}{2}\left\|\mathbf{x}-\mathbf{M}\sum_{k=1}^{K}\mathbf{d}_k*\mathbf{z}_k\right\|_2^2+\beta\sum_{k=1}^{K}\|\mathbf{z}_k\|_1+\theta\,\mathcal{L}(\mathbf{y},\mathbf{z},\mathbf{w},b)+\frac{\kappa}{2}\|\mathbf{w}\|_2^2 \quad \text{s.t.} \quad \|\mathbf{d}_k\|_2^2\leq 1 \;\; \forall k \tag{5}$$

Here, we take $\mathcal{L}$ to be the logistic regression loss: $\mathcal{L}(\mathbf{y},\mathbf{z},\mathbf{w},b)=\frac{1}{N}\sum_{j=1}^{N}\log\left(1+e^{-y_j(\mathbf{w}^\top\mathbf{z}^{(j)}+b)}\right)$, where $\mathbf{y}$ constitutes the associated pixelwise groundtruth labels $y_j\in\{-1,+1\}$, $(\mathbf{w},b)$ parameterizes the classification model with the linear classification coefficients $\mathbf{w}$ and bias $b$, $\mathbf{z}^{(j)}$ gathers the sparse map values at groundtruth pixel $j$, and $N$ is the number of selected groundtruth pixels. $\kappa$ is the regularization parameter that prevents overfitting and $\theta$ is the tradeoff parameter between reconstruction and classification. The above formulation shows the objective for the case of two classes; it can be easily extended to multiple classes in a one vs. all framework. This is similar to the approach of [10] for supervised dictionary learning.
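As an illustration, the supervised regularizer can be evaluated on features gathered from the sparse maps. The helper below is a sketch with assumed names (`Z` for the per-pixel feature matrix), not the authors' implementation:

```python
import numpy as np

def logistic_term(Z, w, b, y, theta):
    """Supervised regularizer theta * L of Eq. 5 (sketch; names assumed).

    Z     : (N, K) feature matrix; row j holds the K sparse-map values
            at selected groundtruth pixel j
    w, b  : linear classification coefficients and bias
    y     : (N,) labels in {-1, +1}
    theta : reconstruction/classification tradeoff parameter
    """
    scores = Z @ w + b
    # log(1 + exp(-y*s)) computed stably via logaddexp
    return (theta / len(y)) * np.sum(np.logaddexp(0.0, -y * scores))
```

With an untrained classifier (`w = 0`, `b = 0`), every pixel contributes `log 2`, so the term equals `theta * log 2`, which is a handy sanity check.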

### 4.2 Supervised CSC Subproblems.

The addition of a logistic regression loss function as a regularizer to the CSC problem gives rise to three subproblems with an additional subproblem for optimizing the weights of the linear classification model. We will refer to the third subproblem as the classification subproblem. As in unsupervised CSC, the overall objective is not convex, but it is convex when holding all variables except one fixed.

Learning Subproblem. We start with the learning subproblem, where we optimize the dictionary elements given the sparse codes and the classifier parameters. Here, we end up with a subproblem that is exactly the same as in unsupervised CSC, shown in Equation 2, since the supervision regularizer is independent of the dictionary.

Supervised Coding Subproblem. In the coding subproblem, we infer the sparse maps given the dictionary and the classifier parameters as shown in Equation 6.

$$\min_{\mathbf{z}} \; \frac{1}{2}\left\|\mathbf{x}-\mathbf{M}\mathbf{D}\mathbf{z}\right\|_2^2+\beta\|\mathbf{z}\|_1+\frac{\theta}{N}\sum_{j=1}^{N}\log\left(1+e^{-y_j(\mathbf{w}^\top\mathbf{A}_j\mathbf{z}+b)}\right) \tag{6}$$

Here, $\mathbf{A}_j$ corresponds to the $j$-th block of rows of the matrix $\mathbf{A}$ that is a concatenation of diagonal matrices, each one representing the support for the features taken from the sparse maps; $\mathbf{A}_j\mathbf{z}$ thus gathers the sparse-map values at pixel $j$. The sparse maps here are fed into the classification term, which serves as an additional regularizer to the problem. Unlike the unsupervised coding subproblem in Section 3.2.2, the reconstruction is not the only factor involved in computing the sparse maps. A good sparse map in the supervised coding task would trade off some reconstruction to become more discriminative.

Classification Subproblem. The classification subproblem is a regular logistic regression problem as shown in Equation 7. It can be solved using gradient descent.

$$\min_{\mathbf{w},b} \; \frac{\theta}{N}\sum_{j=1}^{N}\log\left(1+e^{-y_j(\mathbf{w}^\top\mathbf{A}_j\mathbf{z}+b)}\right)+\frac{\kappa}{2}\|\mathbf{w}\|_2^2 \tag{7}$$
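Since Equation 7 is a standard regularized logistic regression, plain gradient descent suffices. The sketch below is our own minimal implementation with assumed step size and iteration count; for simplicity the tradeoff weight $\theta$ is folded into the relative scaling of the loss and the regularizer:

```python
import numpy as np

def fit_classifier(Z, y, kappa, lr=0.1, iters=500):
    """Gradient descent for the classification subproblem (Eq. 7 sketch).

    Minimizes mean_j log(1 + exp(-y_j (Z w + b)_j)) + kappa/2 ||w||^2.
    Z: (N, K) pixel features, y: (N,) labels in {-1, +1}.
    """
    N, K = Z.shape
    w, b = np.zeros(K), 0.0
    for _ in range(iters):
        margins = -y * (Z @ w + b)
        sig = 1.0 / (1.0 + np.exp(-margins))   # sigma(-y * score)
        grad_s = -(y * sig) / N                # d(mean loss) / d(score)
        w -= lr * (Z.T @ grad_s + kappa * w)   # chain rule + l2 penalty
        b -= lr * grad_s.sum()
    return w, b
```

On linearly separable toy features, the learned sign of `Z @ w + b` recovers the labels after a few hundred iterations.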

### 4.3 Supervised CSC Optimization

We will follow the approach presented in [27] for solving the supervised CSC problem. We will represent the learning and coding subproblems as a sum of simple convex functions. With this approach, the learning subproblem can be solved exactly, similar to unsupervised CSC, as shown in Section 3.3.

The coding subproblem has an additional function holding the logistic function fitting term, and thus can be reformulated as such:

$$\min_{\mathbf{z}} \; f_1(\mathbf{M}\mathbf{D}\mathbf{z})+f_2(\mathbf{z})+f_3(\mathbf{K}_3\mathbf{z}) \tag{8}$$

where $f_1$ and $f_2$ are as in Section 3.3, $f_3(\mathbf{v})=\frac{\theta}{N}\sum_{j}\log\left(1+e^{-y_j(v_j+b)}\right)$, and the $j$-th row of $\mathbf{K}_3$ computes the classifier score $\mathbf{w}^\top\mathbf{A}_j\mathbf{z}$.

This formulation allows us to follow the same solution by casting the subproblem into the general form of ADMM shown in Equation 4, with a minor modification of the variables $\mathbf{K}$ and $\mathbf{y}$.

The solution of the coding subproblem follows the steps presented in Algorithm 1. The second line involves solving a quadratic least squares problem with the solution:

$$\mathbf{z}=\left(\mathbf{K}^\top\mathbf{K}\right)^{-1}\mathbf{K}^\top\left(\mathbf{y}-\boldsymbol{\lambda}\right) \tag{9}$$

where $\boldsymbol{\lambda}$ denotes the scaled dual variables of ADMM.

The inverse can be computed efficiently in the Fourier domain since, similar to $\mathbf{D}$, $\mathbf{K}_3$ is a concatenation of diagonal matrices, and one can find a variable reordering that makes $\mathbf{K}^\top\mathbf{K}$ block diagonal, thus making the inversion parallelizable over the blocks. Using the Woodbury formula, the inverse of each block can be computed as:

$$\left(\mathbf{I}+\mathbf{U}\mathbf{U}^{H}\right)^{-1}=\mathbf{I}-\mathbf{U}\left(\mathbf{I}+\mathbf{U}^{H}\mathbf{U}\right)^{-1}\mathbf{U}^{H} \tag{10}$$

where the identity term is the same over all the blocks and the small inner matrix $\mathbf{I}+\mathbf{U}^{H}\mathbf{U}$ can be inverted cheaply beforehand.
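The per-block inverse thus relies on the Woodbury identity for a low-rank update of the identity. A quick numerical verification, with an assumed block size and a rank-2 update standing in for the dictionary and classifier terms:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 16                            # block size (number of filters; assumed)
U = rng.standard_normal((K, 2))   # rank-2 update: stand-in for the
                                  # per-block dictionary/classifier columns

# Woodbury identity for the low-rank update I + U U^T:
# (I + U U^T)^{-1} = I - U (I + U^T U)^{-1} U^T
direct = np.linalg.inv(np.eye(K) + U @ U.T)
woodbury = np.eye(K) - U @ np.linalg.inv(np.eye(2) + U.T @ U) @ U.T

assert np.allclose(direct, woodbury)
```

The payoff is that the inner inverse is only 2×2 here, regardless of the block size, so each block costs far less than a full dense inversion.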

In addition, the third line of the algorithm for supervised CSC includes an additional proximal operator for the logistic loss function. Setting its gradient to zero amounts to finding the intersection of an exponential with a line. Unfortunately, no closed-form solution to this operator exists in the literature, but it can be computed efficiently using Newton's method.
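A sketch of this Newton iteration, derived by setting the gradient of $\frac{1}{2}(u-v)^2+\tau\log(1+e^{-yu})$ to zero (our own derivation, with labels assumed in $\{-1,+1\}$):

```python
import numpy as np

def prox_logistic(v, y, tau, iters=20):
    """Proximal operator of u -> tau*log(1+exp(-y*u)), solved per element
    with Newton's method (sketch; no closed form exists).
    """
    u = np.array(v, dtype=float)
    for _ in range(iters):
        s = 1.0 / (1.0 + np.exp(y * u))        # sigma(-y*u)
        grad = (u - v) - tau * y * s           # gradient of the prox objective
        hess = 1.0 + tau * s * (1.0 - s)       # uses y^2 = 1 for labels +-1
        u -= grad / hess
    return u
```

Because the objective is strongly convex and smooth in each scalar coordinate, a handful of Newton steps already drives the gradient to machine precision.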

## 5 Experimental Evaluation

In this section, we show results for applying supervised CSC (denoted as SCSC), presented in Algorithm 2, compared with unsupervised CSC as a baseline approach using the implementation of [27]. We describe implementation details, the quantitative and qualitative evaluation of the discriminability of the learned dictionaries, the reconstructive quality of the discriminative dictionaries, and inpainting results.

### 5.1 Implementation Details

We tested our algorithm on different datasets: Graz [36], made of 50 images with groundtruth labels for the window class; ecp [37], made of 104 images, using window and balcony labels; label-me [38], using a random subset of 50 images with the tree and building classes; and coco [39], using a random subset of 50 images with the cat class. In most experiments, we used 10 images for training on the Graz, label-me, and coco datasets and 25 images for training on ecp.

For Graz and ecp, since we need the sparse maps to fire at the centers of windows and thus learn dictionary elements that are similar to window patches, we apply a preprocessing step where we restrict the positive samples to the centers of the windows, as shown in Figure 4-(d). For coco and label-me, due to the high variation in scale and appearance of the groundtruth segmentations in these datasets, we do not apply a preprocessing step and instead seek dictionary elements that are representative of sub-patches of the entire groundtruth segmentation. As shown in Figure 4-(d), there exists class imbalance because background pixels are dominant. Thus, we make sure to sample a similar number of positive and negative samples to make the classification less affected by this imbalance.
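The balanced sampling step can be sketched as follows (function and argument names are ours, not the authors'):

```python
import numpy as np

def balanced_sample(labels, rng=None):
    """Sample equally many positive and negative pixel indices to counter
    the background-dominated class imbalance (sketch).

    labels: flat array with +1 for the target class, -1 for background.
    """
    rng = rng or np.random.default_rng(0)
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == -1)
    n = min(len(pos), len(neg))          # equal counts from both classes
    return np.concatenate([rng.choice(pos, n, replace=False),
                           rng.choice(neg, n, replace=False)])
```

The returned indices select the groundtruth pixels that enter the logistic term, so the classifier sees a roughly 50/50 class mix regardless of how dominant the background is.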

The scale at which the dictionaries are learned greatly affects the quality of the learning and classification. In our experiments, we learn dictionaries at a fixed patch size, since objects in our datasets are generally at this scale or smaller, and we fix the sparsity coefficient $\beta$ and the logistic regression regularization parameter $\kappa$ across experiments. In the following experiments, we use the average precision (AP) score of each class to evaluate the classification accuracy and the peak signal-to-noise ratio (PSNR) to evaluate the quality of reconstruction.
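PSNR, used throughout the evaluation, is computed from the mean squared error against the reference image; a minimal sketch, with the peak value assumed to be 255 for 8-bit images:

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means better reconstruction.
    mse = np.mean((reference.astype(float) - reconstruction.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A perfect reconstruction gives infinite PSNR, while an error as large as the peak value itself gives 0 dB, bracketing the 17-26 dB range reported in the tables below.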

### 5.2 Discriminative Dictionary Results

We first show the learned filters produced by our Supervised CSC optimization algorithm. Starting from random initial filters, SCSC proceeds iteratively, transforming random pixels into more meaningful filters that reflect the structure of the corresponding class. To demonstrate how the supervised dictionaries compare to unsupervised ones, we show in Figure 3 learned filters from the Graz dataset. By visual inspection, the supervised dictionaries show patches that are more representative of the classification task at hand. Although the two problems are initialized from the same random patches, supervised CSC converges to dictionary elements that resemble the window class on which we perform the supervision (see elements highlighted in red). The accompanying video in the supplementary material also shows the progression of the learned filters during the optimization.

In addition, to verify that the learned dictionaries have semantic meaning, we show classification results on two example images. Please note that our goal is not to achieve state of the art image classification results, but only to show that the learned filters have semantic meaning, i.e. are more discriminative.
Figure 4-(b) and (c) show the label predictions (estimated by the learned logistic regression model trained on the sparse maps as features) corresponding to the unsupervised and supervised CSC approaches, respectively. We do not directly use the classifier parameters inferred from the SCSC dictionary learning phase. Instead, a simple logistic regression classifier is retrained on the sparse maps inferred from unsupervised vs. supervised dictionaries and used to generate the predictions for the image pixels. As shown in the figure, SCSC class predictions are much more distinctive than those of unsupervised CSC. Background pixels get suppressed and the structure of the windows becomes more prevalent when using supervised dictionaries.

We also show how the learned dictionaries boost classification performance on unseen test sets. We first vary the classification coefficient $\theta$ to show the effect of giving a higher weight to the classification loss. Figure 5-(a) shows the AP score as $\theta$ increases for the Graz dataset. In addition, Table 1 shows the AP score for the window and balcony classes in the ecp dataset. Increasing $\theta$ generally increases the classification accuracy until it reaches a point where the AP drops. The drop happens because the reconstruction term becomes negligible for higher values of $\theta$, which results in over-fitting of the dictionary elements to the training data without taking their appearance into account. We also show how the AP changes with a varying number of filters and training images. Figures 5-(b) and 5-(c) show that as the number of filters and images increases, the classification precision of SCSC is higher than that of CSC.

### 5.3 Reconstruction Results

In the following experiments, we validate that learning supervised dictionaries also improves the reconstruction quality on unseen images. The reconstruction quality is evaluated by the PSNR score, where a higher score indicates a better reconstruction of the original image.

Figure 5-(d) and the last row in Table 1 show the positive effect of supervised dictionary learning on the quality of the reconstruction: for some values of $\theta$, the supervised approach achieves significantly higher PSNR values on the Graz dataset and slightly improved values on the ecp dataset. When the dictionary elements are discriminative enough, they can generalize to different variations of the object class instead of overfitting to those appearing in the training set, which is what happens in unsupervised CSC.

We also show how the PSNR changes when the number of filters and the number of training images vary. The number of filters corresponds to the number of classifier features, and thus increasing it gives a richer representation of the pixels. Figure 5-(d) shows that as the number of filters increases, the reconstruction quality increases for both CSC and SCSC, with SCSC giving better reconstructions.

To further verify the effect of reconstruction improvement for supervised CSC, we test our approach on classes from well known segmentation datasets. Figure 6 shows that using supervised dictionaries, the reconstruction quality of images generally increases. Steering the dictionary elements to semantically meaningful representations allows learning filters that better reconstruct other instances of images within the supervised class.

Table 1: Window AP, balcony AP, and PSNR on the ecp dataset for varying $\theta$ ($\theta = 0$ corresponds to unsupervised CSC).

| $\theta$ | 0 (CSC) | 0.2 | 0.5 | 1 | 2 | 5 | 10 | 100 |
|---|---|---|---|---|---|---|---|---|
| Window AP | 21.8 | 21.1 | 21.7 | 21.3 | 22.6 | 24.1 | 24.0 | 15.8 |
| Balcony AP | 4.6 | 7.2 | 7.3 | 8.2 | 11.7 | 14.1 | 13.3 | 7.0 |
| PSNR | 26.1 | 26.1 | 26.1 | 26.2 | 26.1 | 26.4 | 26.3 | 26.1 |

### 5.4 Image Inpainting Results

The inclusion of the matrix $\mathbf{M}$ in the CSC optimization allows reconstructing incomplete images, as discussed in [27]. Thus, we validate the performance of inpainting unseen images from the same dataset that we used for supervised learning vs. images from a different dataset. For inpainting, we randomly set a fraction of the pixel values to zero and code the image with incomplete data. Table 2 shows the reconstruction quality when comparing the reconstructed image with its original complete image. As shown, the inpainting results using supervised dictionaries generally lead to a better reconstruction quality. This is more evident on instances of images within the same supervised class, where the PSNR value shows a higher boost, compared to instances from another class, where CSC and SCSC have comparable performance. This shows that the learned supervised dictionaries have semantic value that helps better reconstruct general image instances from the supervised class, unlike traditional CSC, which learns dictionaries that overfit the training data.
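The corruption step can be sketched as follows; the exact fraction of dropped pixels is elided in the text above, so `frac` below is left as a free parameter:

```python
import numpy as np

def corrupt(image, frac, rng=None):
    """Zero out a random fraction of pixels and return the incomplete
    image together with its binary mask M (sketch; names are ours).

    frac: fraction of pixels to drop (the paper's exact value is elided).
    """
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(image.shape) >= frac).astype(float)
    return image * mask, mask
```

The returned mask plays the role of the diagonal of $\mathbf{M}$ in the data term, so the coding step only penalizes reconstruction error at observed pixels.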

Table 2: Inpainting PSNR for ten unseen test images from the same dataset as the supervised training (top) and from a different dataset (bottom).

| Image | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| CSC (same dataset) | 22.7 | 24.1 | 20.6 | 21.0 | 19.8 | 23.3 | 21.7 | 24.7 | 21.5 | 24.0 |
| SCSC (same dataset) | 23.5 | 25.1 | 22.6 | 21.5 | 17.7 | 24.7 | 24.0 | 24.2 | 22.2 | 24.2 |
| CSC (other dataset) | 21.8 | 20.5 | 19.6 | 18.0 | 18.3 | 21.0 | 19.5 | 18.6 | 19.5 | 17.7 |
| SCSC (other dataset) | 21.8 | 20.5 | 19.8 | 18.2 | 18.4 | 21.2 | 19.5 | 18.7 | 19.6 | 17.8 |

The above results verify that supervised convolutional sparse coding gives rise to shift-invariant dictionaries that are not only very effective for reconstructing images but also more semantically relevant than those trained in an unsupervised manner. While it is expected that our approach improves classification performance, it is somewhat surprising that we can also improve the quality of the reconstruction. We attribute this to the fact that making the filters semantically more meaningful implicitly steers the optimization in the right direction to improve reconstruction performance. Embedding semantics into the dictionary makes it generalize better to unseen images in which objects appear with appearance variations beyond what is seen in the training data. This is opposed to unsupervised CSC, which tends to overfit to the appearance of objects in its training set.

## 6 Conclusion and Future Work

In this work, we proposed a model for supervised convolutional sparse coding (SCSC) in which the CSC problem is solved jointly, learning reconstructive dictionary elements while targeting a classification model. Results on multiple datasets showed more semantically meaningful, i.e. discriminative, filters when compared to regular CSC, while at the same time improving the reconstruction quality. We believe it is a very surprising and interesting result that our more discriminative filters are also better suited for reconstruction. In the future, we will extend the model to learn dictionary elements up to a transformation, which allows scaling the dictionary elements during the learning phase to handle objects of varied sizes and aspect ratios. This should make the model more robust to variation in the scale and size of the convolutional filters.

## References

- [1] Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing 15(12) (2006) 3736–3745
- [2] Aharon, M., Elad, M., Bruckstein, A.M.: On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them. Linear algebra and its applications 416(1) (2006) 48–67
- [3] Couzinie-Devy, F., Mairal, J., Bach, F., Ponce, J.: Dictionary learning for deblurring and digital zoom. arXiv preprint arXiv:1110.0957 (2011)
- [4] Yang, J., Wright, J., Huang, T., Ma, Y.: Image super-resolution as sparse representation of raw image patches. In: Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, IEEE (2008) 1–8
- [5] Gu, S., Zuo, W., Xie, Q., Meng, D., Feng, X., Zhang, L.: Convolutional sparse coding for image super-resolution. ICCV (2015)
- [6] Heide, F., Xiao, L., Kolb, A., Hullin, M.B., Heidrich, W.: Imaging in scattering media using correlation image sensors and sparse convolutional coding. Optics express (2014)
- [7] Shaheen, S., Affara, L., Ghanem, B.: Constrained convolutional sparse coding for parametric based reconstruction of line drawings. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 4424–4432
- [8] Zhang, T., Bibi, A., Ghanem, B.: In Defense of Sparse Tracking: Circulant Sparse Tracker. CVPR (2016)
- [9] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012)
- [10] Mairal, J., Ponce, J., Sapiro, G., Zisserman, A., Bach, F.R.: Supervised dictionary learning. In: Advances in neural information processing systems. (2009) 1033–1040
- [11] Mairal, J., Bach, F., Ponce, J., Sapiro, G., Zisserman, A.: Discriminative learned dictionaries for local image analysis. In: Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, IEEE (2008) 1–8
- [12] Jiang, Z., Lin, Z., Davis, L.S.: Learning a discriminative dictionary for sparse coding via label consistent k-svd. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, IEEE (2011) 1697–1704
- [13] Bibi, A., Ghanem, B.: High order tensor formulation for convolutional sparse coding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 1772–1780
- [14] Choudhury, B., Swanson, R., Heide, F., Wetzstein, G., Heidrich, W.: Consensus convolutional sparse coding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 4280–4288
- [15] Wang, Y., Yao, Q., Kwok, J.T., Ni, L.M.: Online convolutional sparse coding. CoRR abs/1706.06972 (2017)
- [16] Wohlberg, B.: Boundary handling for convolutional sparse representations. In: Image Processing (ICIP), 2016 IEEE International Conference on, IEEE (2016) 1833–1837
- [17] Zeiler, M.D., Krishnan, D., Taylor, G.W., Fergus, R.: Deconvolutional networks. In: CVPR. (2010)
- [18] Olshausen, B.A., Field, D.J.: Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research 37(23) (1997) 3311–3325
- [19] Lee, H., Battle, A., Raina, R., Ng, A.Y.: Efficient sparse coding algorithms. In: Advances in neural information processing systems. (2006) 801–808
- [20] Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online dictionary learning for sparse coding. In: Proceedings of the 26th annual international conference on machine learning, ACM (2009) 689–696
- [21] Kavukcuoglu, K., Sermanet, P., Boureau, Y.L., Gregor, K., Mathieu, M., Cun, Y.L.: Learning convolutional feature hierarchies for visual recognition. In: Advances in neural information processing systems. (2010) 1090–1098
- [22] Li, Y., Osher, S.: Coordinate descent optimization for ℓ1 minimization with application to compressed sensing; a greedy algorithm. Inverse Probl. Imaging 3(3) (2009) 487–503
- [23] Yang, J., Yu, K., Huang, T.: Supervised translation-invariant sparse coding. In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, IEEE (2010) 3517–3524
- [24] Bristow, H., Eriksson, A., Lucey, S.: Fast Convolutional Sparse Coding. CVPR (2013)
- [25] Bristow, H., Lucey, S.: Optimization Methods for Convolutional Sparse Coding. arXiv Prepr. arXiv1406.2407v1 (2014)
- [26] Kong, B., Fowlkes, C.C.: Fast Convolutional Sparse Coding. Tech. Rep. UCI (2014)
- [27] Heide, F., Heidrich, W., Wetzstein, G.: Fast and Flexible Convolutional Sparse Coding. CVPR (2015)
- [28] Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE transactions on pattern analysis and machine intelligence 31(2) (2009) 210–227
- [29] Varma, M., Zisserman, A.: A statistical approach to texture classification from single images. International Journal of Computer Vision 62(1) (2005) 61–81
- [30] Zhang, Q., Li, B.: Discriminative k-svd for dictionary learning in face recognition. In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, IEEE (2010) 2691–2698
- [31] Zhang, H., Zhang, Y., Huang, T.S.: Simultaneous discriminative projection and dictionary learning for sparse representation based classification. Pattern Recognition 46(1) (2013) 346–354
- [32] Bahrampour, S., Nasrabadi, N.M., Ray, A., Jenkins, W.K.: Multimodal task-driven dictionary learning for image classification. IEEE Transactions on Image Processing 25(1) (2016) 24–38
- [33] Yankelevsky, Y., Elad, M.: Structure-aware classification using supervised dictionary learning. In: Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, IEEE (2017) 4421–4425
- [34] Wang, X., Gu, Y.: Cross-label suppression: A discriminative and fast dictionary learning with group regularization. IEEE Transactions on Image Processing (2017)
- [35] Wohlberg, B.: Efficient convolutional sparse coding. In: Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, IEEE (2014) 7173–7177
- [36] Riemenschneider, H., Krispel, U., Thaller, W., Donoser, M., Havemann, S., Fellner, D., Bischof, H.: Irregular lattices for complex shape grammar facade parsing. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, IEEE (2012) 1640–1647
- [37] Teboul, O.: Ecole centrale paris facades database. URL: http://vision.mas.ecp.fr/Personnel/teboul/data.php (2010)
- [38] Russell, B.C., Torralba, A., Murphy, K.P., Freeman, W.T.: Labelme: a database and web-based tool for image annotation. International journal of computer vision 77(1-3) (2008) 157–173
- [39] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision, Springer (2014) 740–755