A Generative Model for Deep Convolutional Learning

Yunchen Pu, Xin Yuan and Lawrence Carin
Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
{yunchen.pu,xin.yuan,lcarin}@duke.edu
Abstract

A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.


1 Introduction

We develop a deep generative statistical model, which starts at the highest-level features and maps these through a sequence of layers, until ultimately mapping to the data plane (e.g., an image). The feature at a given layer is mapped via a multinomial distribution to one feature in a block of features at the layer below (all other features in that block are set to zero). This is analogous to the method in Lee et al. (2009), in the sense of imposing that there is at most one non-zero activation within a pooling block. We use bottom-up pretraining, in which we initially learn the parameters of each layer sequentially, one layer at a time, from bottom to top, based on the features at the layer below. In the refinement phase, all model parameters are then learned jointly, top-down. Each consecutive pair of layers in the model is locally conjugate in a statistical sense, so learning the model parameters may be readily performed with sampling or variational methods.

2 Modeling Framework

Assume $N$ gray-scale images $\{X^{(n)}\}_{n=1}^{N}$; the images are analyzed jointly to learn the convolutional dictionary $\{D^{(k)}\}_{k=1}^{K}$. Specifically consider the model

$X^{(n)} = \sum_{k=1}^{K} D^{(k)} * \big(W^{(n,k)} \odot Z^{(n,k)}\big) + E^{(n)}$    (1)

where $*$ is the convolution operator, $\odot$ denotes the Hadamard (element-wise) product, the elements of $Z^{(n,k)}$ are in $\{0,1\}$, the elements of $W^{(n,k)}$ are real, and $E^{(n)}$ represents the residual. $Z^{(n,k)}$ indicates which shifted version of $D^{(k)}$ is used to represent $X^{(n)}$.
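To make the notation concrete, the following is a minimal sketch of the single-layer synthesis in (1) using NumPy/SciPy; the function name, array shapes, and the use of "full" 2D convolution are illustrative assumptions rather than part of the model specification.

```python
import numpy as np
from scipy.signal import convolve2d

def synthesize_image(D, W, Z, sigma=0.0, rng=None):
    """Sketch of (1): X = sum_k D[k] * (W[k] (Hadamard) Z[k]) + E, with '*' a 2D convolution.

    D : (K, d, d)  convolutional dictionary elements
    W : (K, m, m)  real-valued weights
    Z : (K, m, m)  binary activations (which shifts of D[k] are used)
    """
    rng = np.random.default_rng() if rng is None else rng
    m, d = W.shape[1], D.shape[1]
    X = np.zeros((m + d - 1, m + d - 1))
    for k in range(D.shape[0]):
        S_k = W[k] * Z[k]                          # Hadamard product
        X += convolve2d(D[k], S_k, mode="full")    # shift-and-add of dictionary element k
    return X + sigma * rng.standard_normal(X.shape)   # additive residual E

# Toy usage: 4 dictionary elements of size 5x5 and 24x24 activation maps give a
# 28x28 (MNIST-sized) image.
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 5, 5))
W = rng.standard_normal((4, 24, 24))
Z = (rng.random((4, 24, 24)) < 0.05).astype(float)     # sparse binary activations
print(synthesize_image(D, W, Z, sigma=0.01, rng=rng).shape)   # (28, 28)
```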

Assume an $L$-layer model, with layer $L$ the top layer, and layer 1 at the bottom, closest to the data (so $X^{(n,1)} = X^{(n)}$). In the pretraining stage, the output of layer $l$ is the input to layer $l+1$, after pooling. Layer $l$ has $K_l$ dictionary elements, and we have:

$X^{(n,l)} = \sum_{k_l=1}^{K_l} D^{(k_l,l)} * \big(W^{(n,k_l,l)} \odot Z^{(n,k_l,l)}\big) + E^{(n,l)}$    (2)
$X^{(n,k_l,l+1)} = \mathrm{pool}\big(S^{(n,k_l,l)}\big), \quad S^{(n,k_l,l)} = W^{(n,k_l,l)} \odot Z^{(n,k_l,l)}$    (3)

The expression $X^{(n,l+1)}$ may be viewed as a 3D entity, with its $k_l$-th plane defined by a "pooled" version of $S^{(n,k_l,l)}$.

The 2D activation map $S^{(n,k_l,l)}$ is partitioned into contiguous blocks (pooling blocks with respect to layer $l+1$ of the model); see the left part of Figure 1. Associated with each block of pixels in $S^{(n,k_l,l)}$ is one pixel at plane $k_l$ of $X^{(n,l+1)}$; the relative locations of the pixels in $X^{(n,l+1)}$ are the same as the relative locations of the blocks in $S^{(n,k_l,l)}$. Within each block of $S^{(n,k_l,l)}$, either all pixels are zero, or only one pixel is non-zero, with the position of that pixel selected stochastically via a multinomial distribution. Each pixel at plane $k_l$ of $X^{(n,l+1)}$ equals the largest-amplitude element in the associated block of $S^{(n,k_l,l)}$ (i.e., max pooling).
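As a concrete illustration of this pooling, the sketch below implements both directions for square, non-overlapping p x p blocks; the block size and the uniform position probabilities are illustrative assumptions (in the model, the within-block position probabilities are inferred rather than fixed).

```python
import numpy as np

def max_pool(S, p):
    """Each pooled pixel is the largest-amplitude element of its p x p block of S."""
    H, W = S.shape
    blocks = (S.reshape(H // p, p, W // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(H // p, W // p, p * p))
    idx = np.abs(blocks).argmax(axis=-1)
    return np.take_along_axis(blocks, idx[..., None], axis=-1)[..., 0]

def stochastic_unpool(X_pooled, p, probs=None, rng=None):
    """Place each pooled value at one multinomially drawn position of its block;
    all other positions in the block are zero."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = X_pooled.shape
    probs = np.full(p * p, 1.0 / (p * p)) if probs is None else probs
    S = np.zeros((h, w, p * p))
    for i in range(h):
        for j in range(w):
            pos = rng.choice(p * p, p=probs)       # multinomial draw of the position
            S[i, j, pos] = X_pooled[i, j]
    return (S.reshape(h, w, p, p)
             .transpose(0, 2, 1, 3)
             .reshape(h * p, w * p))

# Round trip on a toy map: unpooling followed by max pooling recovers the input.
rng = np.random.default_rng(0)
X_pooled = rng.standard_normal((3, 3))
S = stochastic_unpool(X_pooled, p=2, rng=rng)
print(np.allclose(max_pool(S, p=2), X_pooled))     # True
```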

Figure 1: Schematic of the proposed generative process. Left: bottom-up pretraining; right: top-down refinement. (Zoom in for best visualization; a larger version can be found in the Supplementary Material.)

The learning performed with the top-down generative model (right part of Figure 1) refines the parameters learned during pretraining; this pretrained initialization is key to the subsequent performance of the model.

In the refinement phase, we now proceed top down, inverting the pooling in (3) and then applying (2) at each layer. The generative process constitutes drawing $W^{(n,k_l,l)}$ and $Z^{(n,k_l,l)}$, and after convolution with the dictionary $X^{(n,l)}$ is manifested; the residual is now absent at all layers except layer 1, at which the fit to the data is performed. Each element of $X^{(n,l+1)}$ has an associated pooling block in $S^{(n,k_l,l)}$.
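A minimal sketch of this top-down generative pass is given below: starting from top-layer feature maps, each layer stochastically unpools its input and convolves with its dictionary, with the residual entering only at layer 1. The array shapes, the fixed block size p, the uniform within-block position probabilities, and the simplified per-channel bookkeeping of the 3D dictionary elements are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def unpool(x, p, rng):
    """Within each p x p block, one position (drawn here from a uniform multinomial)
    carries the pooled value; all other positions are zero."""
    h, w = x.shape
    S = np.zeros((h, w, p * p))
    pos = rng.integers(0, p * p, size=(h, w))
    np.put_along_axis(S, pos[..., None], x[..., None], axis=-1)
    return S.reshape(h, w, p, p).transpose(0, 2, 1, 3).reshape(h * p, w * p)

def generate(top_features, dictionaries, p=2, sigma=0.01, rng=None):
    """top_features : array (K_L, h, w) of top-layer feature maps.
    dictionaries  : list ordered from layer L down to layer 1; dictionaries[i] has
                    shape (K_l, K_{l-1}, d, d), with K_0 = 1 for gray-scale images."""
    rng = np.random.default_rng() if rng is None else rng
    X = top_features
    for D in dictionaries:                                # layer L, L-1, ..., 1
        K_l, K_below = D.shape[0], D.shape[1]
        S = [unpool(X[k], p, rng) for k in range(K_l)]    # stochastic unpooling
        X = np.stack([sum(convolve2d(D[k, c], S[k], mode="full") for k in range(K_l))
                      for c in range(K_below)])           # planes of the layer below
    return X[0] + sigma * rng.standard_normal(X[0].shape)   # residual only at layer 1

# Toy usage: a 2-layer model with K_2 = 3 and K_1 = 4 dictionary elements.
rng = np.random.default_rng(0)
dicts = [rng.standard_normal((3, 4, 3, 3)),    # layer-2 dictionary (depth K_1 = 4)
         rng.standard_normal((4, 1, 5, 5))]    # layer-1 dictionary (depth 1)
print(generate(rng.standard_normal((3, 4, 4)), dicts, rng=rng).shape)   # (24, 24)
```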

3 Experimental Results

We apply our model to the MNIST and Caltech 101 datasets.

MNIST Dataset

Methods                                     Test error
Ciresan et al. (2011)                       0.35%
MCDNN (Ciresan et al., 2012)                0.23%
SPCNN (Zeiler & Fergus, 2013)               0.47%
                                            0.89%
Ours, 2-layer model + 1-layer features      0.42%
Table 1: Classification error on the MNIST data.

Table 1 summarizes the classification results of our model, compared with related results, on the MNIST data. The second (top) layer features corresponding to the refined dictionary are sent to a nonlinear support vector machine (SVM) (Chang & Lin, 2011) with a Gaussian kernel, in a one-vs-all multi-class classifier, with classifier parameters tuned via 5-fold cross-validation (no tuning is performed on the deep feature learning).
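As a rough sketch of this classification stage (not the feature extraction itself), the snippet below trains a one-vs-all Gaussian-kernel SVM with 5-fold cross-validation using scikit-learn, whose SVC wraps LIBSVM; the parameter grid and the variable names train_feats / train_labels are placeholders.

```python
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import GridSearchCV

def train_classifier(train_feats, train_labels):
    """train_feats: (n_samples, n_features) vectorized top-layer features."""
    grid = {"estimator__C": [1, 10, 100],
            "estimator__gamma": ["scale", 1e-2, 1e-3]}        # illustrative grid
    ovr = OneVsRestClassifier(SVC(kernel="rbf"))               # Gaussian kernel, one-vs-all
    clf = GridSearchCV(ovr, grid, cv=5)                        # 5-fold cross-validation
    clf.fit(train_feats, train_labels)
    return clf

# Usage (features and labels assumed given):
# clf = train_classifier(train_feats, train_labels)
# test_error = 1.0 - clf.score(test_feats, test_labels)
```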

Caltech 101 Dataset

# Training Images per Category              15        30
DN (Zeiler et al., 2010)                    58.6%     66.9%
CBDN (Lee et al., 2009)                     57.7%     65.4%
HBP (Chen et al., 2013)                     58%       65.7%
ScSPM (Yang et al., 2009)                   67%       73.2%
P-FV (Seidenari et al., 2014)               71.47%    80.13%
R-KSVD (Li et al., 2013)                    79%       83%
Convnet (Zeiler & Fergus, 2014)             83.8%     86.5%
Ours, 2-layer model + 1-layer features      70.02%    80.31%
Ours, 3-layer model + 1-layer features      75.24%    82.78%
Table 2: Classification accuracy on Caltech 101.

We next consider the Caltech 101 dataset. Following the setup in Yang et al. (2009), we select 15 and 30 images per category for training and test on the rest. The features of the test images are inferred based on the top-layer dictionaries and sent to a multi-class SVM; we again use a Gaussian-kernel nonlinear SVM with parameters tuned via cross-validation. Our results and related results are summarized in Table 2.

4 Conclusions

A deep generative convolutional dictionary-learning model has been developed within a Bayesian setting. The proposed framework enjoys efficient bottom-up and top-down probabilistic inference. A probabilistic pooling module has been integrated into the model, a key component to developing a principled top-down generative model, with efficient learning and inference. Extensive experimental results demonstrate the efficacy of the model to learn multi-layered features from images.

References


  • Chang & Lin (2011) Chang, C.-C. and Lin, C.-J. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011.
  • Chen et al. (2013) Chen, B., Polatkan, G., Sapiro, G., Blei, D., Dunson, D., and Carin, L. Deep learning with hierarchical convolutional factor analysis. IEEE T-PAMI, 2013.
  • Ciresan et al. (2012) Ciresan, D., Meier, U., and Schmidhuber, J. Multi-column deep neural networks for image classification. In CVPR, 2012.
  • Ciresan et al. (2011) Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., and Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI, 2011.
  • Lee et al. (2009) Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML, 2009.
  • Li et al. (2013) Li, Q., Zhang, H., Guo, J., Bhanu, B., and An, L. Reference-based scheme combined with K-SVD for scene image categorization. IEEE Signal Processing Letters, 2013.
  • Seidenari et al. (2014) Seidenari, L., Serra, G., Bagdanov, A., and Del Bimbo, A. Local pyramidal descriptors for image recognition. IEEE T-PAMI, 2014.
  • Yang et al. (2009) Yang, J., Yu, K., Gong, Y., and Huang, T. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, 2009.
  • Zeiler & Fergus (2013) Zeiler, M. and Fergus, R. Stochastic pooling for regularization of deep convolutional neural networks. ICLR, 2013.
  • Zeiler & Fergus (2014) Zeiler, M. and Fergus, R. Visualizing and understanding convolutional networks. ECCV, 2014.
  • Zeiler et al. (2010) Zeiler, M., Krishnan, D., Taylor, G., and Fergus, R. Deconvolutional networks. In CVPR, 2010.