CRAM: Clued Recurrent Attention Model

Minki Chung, Sungzoon Cho
Data Mining Center, Department of Industrial Engineering, Seoul National University, Seoul, Republic of Korea
Email: minki.chung@dm.snu.ac.kr, zoon@snu.ac.kr
Abstract

To overcome the poor scalability of convolutional neural networks, the recurrent attention model (RAM) selectively chooses what and where to look in an image. By directing the model how to look at the image, RAM can be even more effective, since a given clue narrows down the scope of possible focus regions. From this perspective, this work proposes the clued recurrent attention model (CRAM), which adds a clue, or constraint, to RAM for better problem solving. CRAM follows an encoder-decoder framework: the encoder combines a recurrent attention model with a spatial transformer network, while the decoder varies depending on the task. To verify its performance, CRAM is applied to two computer vision tasks. One is image classification, where the clue is a binary saliency map indicating the approximate location of the object. The other is inpainting, where the clue is a binary mask indicating the occluded region. In both tasks, CRAM shows better performance than existing methods, demonstrating a successful extension of RAM.

Clue, Recurrent Attention Model, Visual Attention, Encoder-Decoder, Classification, Inpainting

I Introduction

The adoption of convolutional neural networks (CNNs) [1] has brought huge success on many computer vision tasks such as classification and segmentation. One limitation of CNNs is their poor scalability with increasing input image size in terms of computational efficiency. With limited time and resources, it is necessary to be smart about selecting where, what, and how to look at an image. Facing a bird-specific fine-grained classification task, for example, it does not help much to pay attention to non-bird image parts such as trees and sky. Rather, one should focus on regions that play decisive roles in classification, such as the beak or wings. If a machine can learn to pay attention to those regions, it will achieve better performance with lower energy usage.

In this context, the Recurrent Attention Model (RAM) [2] introduced a visual attention method for the fine-grained classification task. By sequentially choosing where and what to look, RAM achieved better performance with lower memory usage. Moreover, the attention mechanism addressed the criticism that deep learning models are black boxes, by enabling interpretation of the results. Still, there is room for improvement. In addition to where and what to look, if one can give a clue on how to look, a task-specific hint, learning could be more intuitive and efficient. From this insight, we propose a novel architecture, the Clued Recurrent Attention Model (CRAM), which incorporates a problem-solving-oriented clue into RAM. These clues, or constraints, guide the model toward faster convergence and better performance.

For evaluation, we perform experiments on two computer vision tasks: classification and inpainting. In the classification task, the clue is given as the binary saliency map of the image, which indicates the rough location of the object. In the inpainting task, the clue is given as the binary mask that indicates the location of the occluded region. The code is implemented in TensorFlow 1.6.0 and available at https://github.com/brekkanegg/cram.

In summary, the contributions of this work are as follows:

  1. We propose the novel clued recurrent attention model (CRAM), which incorporates a clue into RAM for more efficient problem solving.

  2. We define clues for the classification and inpainting tasks respectively, chosen to be easy to interpret and obtain.

  3. We evaluate CRAM on classification and inpainting, showing that it is a powerful extension of RAM.

II Related Work

II-A Recurrent Attention Model (RAM)

RAM [2] first proposed a recurrent neural network (RNN) [3] based attention model inspired by the human visual system. When a human is confronted with an image too large to be taken in at a glance, he processes it part by part depending on his interest. By selectively choosing what and where to look, RAM showed higher performance while reducing computation and memory usage. However, since RAM attends to image regions using a sampling method, it has the fatal weakness of requiring REINFORCE rather than back-propagation for optimization. Following RAM, the Deep Recurrent Attention Model (DRAM) [4] presented an advanced architecture for multiple object recognition, and the Deep Recurrent Attentive Writer (DRAW) [5] introduced a sequential image generation method without using REINFORCE.

The spatial transformer network (STN) [6] first proposed a parametric spatial attention module for the object classification task. This model includes a localization network that outputs the parameters for selecting the region to attend to in the input image. Recently, the Recurrent Attentional Convolutional-Deconvolutional Network (RACDNN) [7] combined the strengths of RAM and STN for the saliency detection task. By replacing RAM's locating module with an STN, RACDNN can sequentially select where to attend on the image while still using back-propagation for optimization. This paper mainly adopts the RACDNN network with some technical twists to effectively insert the clue, which acts as a supervisor for problem solving.

III CRAM

The architecture of CRAM is based on an encoder-decoder structure. The encoder is similar to RACDNN [7], with a modified spatial transformer network [6] and an inserted clue. While the encoder is identical regardless of the task, the decoder differs depending on whether the given task is classification or inpainting. Figure 1 shows the overall architecture of CRAM.

Fig. 1: Overall architecture of CRAM. Note that the image and clue differ depending on the task (bottom left and bottom right).

III-A Encoder

The encoder is composed of four subnetworks: a context network, a spatial transformer network, a glimpse network, and a core recurrent neural network. The overall architecture of the encoder is shown in Figure 2. Following the flow of information, we describe each network in turn.

Fig. 2: Architecture of the CRAM encoder. Note that the image is for the inpainting task, where the clue is given as a binary mask that indicates the occluded region.

Context Network: The context network is the first part of the encoder. It receives the image and clue as inputs and outputs the initial state tuple, which is the first input to the second layer of the core recurrent neural network, as shown in Figure 2. Using the downsampled image and downsampled clue, the context network provides a reasonable starting point for choosing which image region to concentrate on. The downsampled image and clue are each processed with a CNN followed by an MLP.

(1)
(2)

where the pair of outputs forms the first state tuple of the second-layer core RNN.

Spatial Transformer Network: The spatial transformer network (STN) selects the region to attend to, considering the given task and clue [6]. Different from the original STN, CRAM uses a modified STN that receives the image, the clue, and the output of the second layer of the core RNN as inputs, and outputs a glimpse patch. From now on, a glimpse patch denotes the attended image region, cropped and zoomed in. The STN is composed of two parts. One is the localization part, which calculates the transformation matrix with a CNN and an MLP. The other is the transformer part, which zooms into the image using the transformation matrix and obtains the glimpse. The affine transformation matrix with isotropic scaling and translation is given in Equation 3.

$A = \begin{bmatrix} s & 0 & t_x \\ 0 & s & t_y \end{bmatrix}$   (3)

where $s$, $t_x$, and $t_y$ are the scaling, horizontal translation, and vertical translation parameters, respectively.

The overall process of the STN is shown in Figure 3.

Fig. 3: Architecture of the STN. The STN consists of a localization part, which calculates the transformation parameters, and a transformer part, which obtains the glimpse.

In equations, the process of the STN is as follows:

(4)

where the step index of the core RNN ranges from 1 to the total number of glimpses. The input to the localization part is obtained by the equation below:

(5)

where the operator in Equation 5 denotes concatenation.
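As an illustration, the transformer part's glimpse extraction can be sketched in plain NumPy. The function names and the nearest-neighbour sampling are our own simplifications (a real STN uses differentiable bilinear sampling); only the mapping defined by the matrix of Equation 3 is taken from the paper.

```python
import numpy as np

def affine_matrix(s, tx, ty):
    """Isotropic-scale-plus-translation matrix A of Equation 3."""
    return np.array([[s, 0.0, tx],
                     [0.0, s, ty]])

def glimpse(image, s, tx, ty, out_size):
    """Extract an out_size x out_size glimpse patch from a grayscale
    image (H x W) by nearest-neighbour sampling on the grid produced
    by A. Coordinates are normalised to [-1, 1] as in the STN paper."""
    H, W = image.shape
    A = affine_matrix(s, tx, ty)
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_size),
                         np.linspace(-1, 1, out_size), indexing="ij")
    # target grid in homogeneous coordinates -> source coordinates
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(out_size ** 2)])
    src = A @ grid                      # 2 x N source coordinates
    # map [-1, 1] back to pixel indices and clip to the image bounds
    px = np.clip(((src[0] + 1) * 0.5 * (W - 1)).round().astype(int), 0, W - 1)
    py = np.clip(((src[1] + 1) * 0.5 * (H - 1)).round().astype(int), 0, H - 1)
    return image[py, px].reshape(out_size, out_size)
```

With $s = 1$ and zero translation the glimpse reproduces the full image; shrinking $s$ zooms into a sub-region, which is exactly how the localization parameters steer attention.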

Glimpse Network: The glimpse network is a non-linear function that receives the current glimpse patch and the attended-region information as inputs and outputs the current glimpse vector. The glimpse vector is later used as the input to the first layer of the core RNN. It is obtained by a multiplicative interaction between the extracted features of the glimpse patch and the attention parameters, a method first proposed by [8]. As in the other subnetworks, a CNN and an MLP are used for feature extraction.

(6)

where the product is an element-wise vector multiplication.
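The combination rule of Equation 6 can be sketched as follows. The feature vectors themselves would come from the CNN on the glimpse patch and the MLP on the attention parameters described above, which are not reproduced here; only the "what times where" interaction of [8] is shown.

```python
import numpy as np

def glimpse_vector(what_features, where_features):
    """Combine the 'what' features (from the glimpse-patch CNN) and the
    'where' features (from the attention-parameter MLP) by element-wise
    multiplication, following Larochelle and Hinton [8]."""
    what = np.asarray(what_features, dtype=float)
    where = np.asarray(where_features, dtype=float)
    assert what.shape == where.shape, "feature vectors must have equal length"
    return what * where
```

This is why the glimpse patch's CNN output is projected to the same length as the parameter MLP output: the element-wise product requires matching dimensions.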

Core Recurrent Neural Network: The recurrent neural network is the core structure of CRAM; it aggregates the information extracted from the stepwise glimpses and calculates the encoded vector z. Iterating for a set number of RNN steps (the total number of glimpses), the core RNN receives the glimpse vector at its first layer. The output of the second layer is in turn used by the localization part of the spatial transformer network, as in Equation 5.

(7)
(8)

III-B Decoder

III-B1 Classification

As in a general image classification approach, the encoded vector z is passed through an MLP, which outputs the probability of each class. The decoder for classification is shown in Figure 4.

Fig. 4: Architecture of CRAM decoder for image classification.

III-B2 Inpainting

Utilizing the architecture of DCGAN [9], the contaminated image is completed starting from the encoded vector z produced by the encoder. To ensure the quality of the completed image, we adopt the generative adversarial network (GAN) [10] framework at both local and global scales [11]. Here the decoder works as the generator, while local and global discriminators evaluate its plausibility at the local and global scale, respectively. The decoder for inpainting is shown in Figure 5.

Fig. 5: Architecture of CRAM decoder and discriminators for image inpainting.

IV Training

The loss function of CRAM can be divided into two parts: an encoder-related loss and a decoder-related loss. The encoder-related loss constrains the glimpse patches to be consistent with the clue. For the classification task, where the clue is the object saliency, it is favorable if the glimpse patches cover as much of the salient part as possible. For the inpainting task, there should be a supervisor that urges the glimpse patches to contain the occluded region, since the neighborhood of the occlusion is the most relevant part for completion. The encoder loss satisfying the above condition for both classification and inpainting is as follows:

(9)

where the trained spatial transformer network of Equation 4 produces the attended region from the transformation parameters of Equation 3 at each step of the core RNN. The decoder loss, which differs depending on the given task, is dealt with separately below. Note that the clue is a binary image for both the classification and inpainting tasks.
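Since the exact form of Equation 9 did not survive extraction, the following is only one plausible reading of the stated goal: penalize the fraction of clue pixels that the union of the attended regions fails to cover. Both the function and its normalization are our assumptions, not the paper's definition.

```python
import numpy as np

def encoder_coverage_loss(clue, glimpse_masks):
    """Hypothetical coverage penalty: the fraction of clue pixels
    (clue == 1) left uncovered by the union of the binary masks of
    the regions attended across all core-RNN steps."""
    clue = np.asarray(clue, dtype=bool)
    covered = np.zeros_like(clue)
    for mask in glimpse_masks:          # one binary mask per glimpse step
        covered |= np.asarray(mask, dtype=bool)
    missed = clue & ~covered
    return missed.sum() / max(clue.sum(), 1)
```

A loss of 0 means every clue pixel was visited by some glimpse; a loss of 1 means the attention missed the clue entirely.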

Since the decoder loss differs depending on whether the problem is classification or completion, the remaining losses are explained separately for the two tasks.

IV-A Classification

The decoder-related loss for the image classification task is the cross-entropy loss, as in general classification approaches. The total loss for image classification then becomes:

(10)
(11)

where the clue is a binary image taking the value 1 for the salient part and 0 otherwise, and the cross-entropy is computed between the predicted class label vector and the ground-truth class label vector.

IV-B Inpainting

The decoder-related loss for image inpainting consists of a reconstruction loss and a GAN loss. The reconstruction loss makes the completion more stable, while the GAN loss enables a higher quality of restoration. For the reconstruction loss, an L1 loss restricted to the contaminated region of the input is used:

(12)

where z is the encoded vector from the encoder, the clue is the binary image taking the value 1 for the occluded region and 0 otherwise, G is the generator (decoder), and the target is the original image before contamination.
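A minimal sketch of the masked L1 loss of Equation 12, assuming the error is averaged over the occluded pixels (the normalization is our assumption, as the exact equation body was lost):

```python
import numpy as np

def reconstruction_loss(completed, original, clue):
    """L1 distance between the decoder output and the original image,
    counted only where clue == 1 (the occluded region) and averaged
    over the occluded pixels."""
    mask = np.asarray(clue, dtype=float)
    diff = np.abs(np.asarray(completed, dtype=float)
                  - np.asarray(original, dtype=float))
    return float((mask * diff).sum() / max(mask.sum(), 1.0))
```

Masking matters: errors on the known pixels are ignored, so the generator is only penalized for the region it is actually asked to fill in.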

Since there are two discriminators, local and global, the GAN loss is the summation of the local GAN loss and the global GAN loss.

(13)

The GAN losses for the local and global scales are defined as follows:

(14)
(15)

Combining Equations 9, 12, and 13, the total loss for image inpainting becomes:

(16)
(17)

where the coefficients are weighting hyperparameters and the GAN loss is the summation of the local and global GAN losses.

V Implementation Details

V-A Classification

In order to obtain the clue, a saliency map, we use a convolutional-deconvolutional network (CNN-DecNN) [12], as shown in Figure 6. The CNN-DecNN is pre-trained on the MSRA10k [13] dataset, which is by far the largest publicly available saliency detection dataset, containing 10,000 annotated saliency images. This CNN-DecNN is trained with Adam [14] with default settings. At training and inference time, the rough saliency (clue) is obtained from the pre-trained CNN-DecNN.

Fig. 6: CNN-DecNN to obtain the rough saliency of an image. This rough saliency is the clue for the classification task.

As mentioned earlier, the encoder consists of four subnetworks: the context network, spatial transformer network, glimpse network, and core RNN. The image and clue are downsampled by a factor of 4 and used as inputs to the context network. Each passes through a 3-layer CNN (3 x 3 kernel, 1 x 1 stride, same zero padding), with each layer followed by a max-pooling layer (3 x 3 kernel, 2 x 2 stride, same zero padding), and outputs a vector. These vectors are concatenated and passed through a 2-layer MLP, which outputs the initial state for the second layer of the core RNN. The localization part of the spatial transformer network consists of a CNN and an MLP. The image and clue inputs pass through a 3-layer CNN (5 x 5 kernel, 2 x 2 stride, same zero padding), and a 2-layer MLP is applied to the second-layer core RNN output. The output vectors of the CNN and MLP are concatenated and passed through another 2-layer MLP that outputs the parameters of the affine transformation. The glimpse network receives the glimpse patch and those transformation parameters as inputs. A 1-layer MLP is applied to the parameters, while the glimpse patch passes through a 3-layer CNN and a 1-layer MLP to match the vector length of the former. The glimpse vector is obtained by element-wise multiplication of the two output vectors. The core RNN is composed of 2 layers of Long Short-Term Memory (LSTM) units [15], chosen for their ability to learn long-range dependencies and their stable learning dynamics. The decoder is quite simple, made up of only a 3-layer MLP. The number of CNN filters, the dimensions of the MLPs, the dimensions of the core RNN, and the number of core RNN steps vary depending on the size of the image. All CNN and MLP layers except the last include batch normalization [16] and ELU activation [17]. We used the Adam optimizer [14] with learning rate 1e-4.
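For concreteness, the spatial size through the context network's conv/pool stack can be computed as below. The assumption that 'same'-padded stride-2 pooling rounds the output size up follows TensorFlow's 'SAME' padding convention; the helper name is ours.

```python
import math

def context_cnn_output_size(side):
    """Spatial side length after the context network's three blocks:
    each 3x3 conv (stride 1, 'same' padding) preserves the size and
    each 3x3 max-pool (stride 2, 'same' padding) halves it, rounding up."""
    for _ in range(3):
        side = math.ceil(side / 2)  # 'SAME'-padded stride-2 pooling
    return side
```

For example, a 32 x 32 SVHN image downsampled 4x to 8 x 8 leaves a 1 x 1 feature map after the three pooling stages, so the flattened vector length equals the filter count of the last conv layer.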

V-B Inpainting

The encoder settings are identical to the image classification case. The decoder (generator) consists of fractionally-strided CNN layers (3 x 3 kernel, 1/2 stride) applied until the original image size is recovered. Both the local and global discriminators are CNN-based and extract features from the image to judge the genuineness of the input. The local discriminator is composed of a 4-layer CNN (5 x 5 kernel, 2 x 2 stride, same zero padding) and a 2-layer MLP. The global discriminator consists of a 3-layer CNN (5 x 5 kernel, 2 x 2 stride, same zero padding) and a 2-layer MLP. A sigmoid function is applied to the last outputs of the local and global discriminators to ensure the output values lie between 0 and 1. All CNN, fractionally-strided CNN, and MLP layers except the last include batch normalization and ELU activation. As in the classification settings, the number of CNN filters, the number of fractionally-strided CNN filters, the dimensions of the MLPs, the dimensions of the core RNN, and the number of core RNN steps vary depending on the size of the image.

VI Experiment

VI-A Image Classification

Work in progress.

VI-B Image Inpainting

VI-B1 Dataset

The Street View House Numbers (SVHN) dataset [18] is a real-world image dataset for object recognition obtained from house numbers in Google Street View images. The SVHN dataset contains 73,257 training digits and 26,032 testing digits of size 32 x 32 in RGB.

VI-B2 Result

Figure 7 shows the results of inpainting on the SVHN dataset, where 6.25% of the image pixels, at the center, are occluded. Even though the result is not excellent, it is enough to show the potential and scalability of CRAM. With a better generative model, it is expected to show better performance.

Fig. 7: Experiment results on SVHN. From left to right: ground truth, input contaminated image, image generated by the CRAM decoder, and finally the completed image, in which only the missing region is replaced with generated pixels.
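The final compositing step described in the caption, replacing only the missing region with generated pixels, amounts to a mask-weighted blend. A sketch with hypothetical names:

```python
import numpy as np

def composite(original, generated, clue):
    """Keep the known pixels of the contaminated input and paste the
    decoder's output only where clue == 1 (the occluded region)."""
    mask = np.asarray(clue, dtype=float)
    return (mask * np.asarray(generated, dtype=float)
            + (1.0 - mask) * np.asarray(original, dtype=float))
```

This guarantees the known pixels are reproduced exactly, so any visible artifacts are confined to the generated region.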

VII Conclusion

Work in progress.

References

  • [1] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, 1989, pp. 541–551.
  • [2] V. Mnih, N. Heess, A. Graves et al., “Recurrent models of visual attention,” in Advances in neural information processing systems, 2014, pp. 2204–2212.
  • [3] T. Mikolov, M. Karafiát, L. Burget, J. Černockỳ, and S. Khudanpur, “Recurrent neural network based language model,” in Eleventh Annual Conference of the International Speech Communication Association, 2010.
  • [4] J. Ba, V. Mnih, and K. Kavukcuoglu, “Multiple object recognition with visual attention,” arXiv preprint arXiv:1412.7755, 2014.
  • [5] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra, “Draw: A recurrent neural network for image generation,” arXiv preprint arXiv:1502.04623, 2015.
  • [6] M. Jaderberg, K. Simonyan, A. Zisserman et al., “Spatial transformer networks,” in Advances in neural information processing systems, 2015, pp. 2017–2025.
  • [7] J. Kuen, Z. Wang, and G. Wang, “Recurrent attentional networks for saliency detection,” arXiv preprint arXiv:1604.03227, 2016.
  • [8] H. Larochelle and G. E. Hinton, “Learning to combine foveal glimpses with a third-order boltzmann machine,” in Advances in neural information processing systems, 2010, pp. 1243–1251.
  • [9] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [11] S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Globally and locally consistent image completion,” ACM Transactions on Graphics (TOG), vol. 36, no. 4, 2017, p. 107.
  • [12] H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1520–1528.
  • [13] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. Torr, and S.-M. Hu, “Global contrast based salient region detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, 2015, pp. 569–582.
  • [14] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [15] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, 1997, pp. 1735–1780.
  • [16] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
  • [17] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (elus),” arXiv preprint arXiv:1511.07289, 2015.
  • [18] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS workshop on deep learning and unsupervised feature learning, vol. 2011, no. 2, 2011, p. 5.