CRAM: Clued Recurrent Attention Model
To overcome the poor scalability of convolutional neural networks, the recurrent attention model (RAM) selectively chooses what and where to look in an image. By directing the recurrent attention model on how to look at an image, RAM can be even more effective, since a given clue narrows down the scope of possible focus regions. From this perspective, this work proposes the clued recurrent attention model (CRAM), which adds a clue, or constraint, to RAM for better problem solving. CRAM follows the encoder-decoder framework: the encoder combines a recurrent attention model with a spatial transformer network, and the decoder varies depending on the task. To verify its performance, CRAM tackles two computer vision tasks. One is image classification, where the clue is given as a binary image saliency map that indicates the approximate location of the object. The other is inpainting, where the clue is given as a binary mask that indicates the occluded region. In both tasks, CRAM shows better performance than existing methods, demonstrating a successful extension of RAM.
The adoption of convolutional neural networks (CNNs) has brought huge success on many computer vision tasks such as classification and segmentation. One limitation of CNNs is their poor scalability with increasing input image size in terms of computational efficiency. With limited time and resources, it is necessary to be smart about selecting where, what, and how to look at an image. Facing a bird-specific fine-grained classification task, for example, it does not help much to pay attention to non-bird image parts such as trees and sky. Rather, one should focus on regions that play a decisive role in classification, such as the beak or wings. If a machine can learn how to pay attention to those regions, it will achieve better performance with lower energy usage.
In this context, the Recurrent Attention Model (RAM) introduced a visual attention method for the fine-grained classification task. By sequentially choosing where and what to look at, RAM achieved better performance with lower memory usage. Moreover, the attention mechanism addressed a vulnerable point of deep learning models, namely that they are black boxes, by enabling interpretation of the results. But there is still room for improvement in RAM. In addition to where and what to look at, if one can give some clue on how to look, i.e., a task-specific hint, learning could be more intuitive and efficient. From this insight, we propose a novel architecture, the Clued Recurrent Attention Model (CRAM), which inserts a problem-solving-oriented clue into RAM. These clues, or constraints, give directions to the machine for faster convergence and better performance.
For evaluation, we perform experiments on two computer vision tasks: classification and inpainting. In the classification task, the clue is given as the binary saliency map of the image, which indicates the rough location of the object. In the inpainting task, the clue is given as the binary mask that indicates the location of the occluded region. Code is implemented in TensorFlow version 1.6.0 and available at https://github.com/brekkanegg/cram.
In summary, the contributions of this work are as follows:
We propose a novel model, the clued recurrent attention model (CRAM), which inserts a clue into RAM for more efficient problem solving.
We define clues for the classification and inpainting tasks, respectively, that are easy to interpret and obtain.
We evaluate CRAM on classification and inpainting tasks, showing that it is a powerful extension of RAM.
II Related Work
II-A Recurrent Attention Model (RAM)
RAM first proposed a recurrent neural network (RNN)-based attention model inspired by the human visual system. When a human is confronted with a large image that is too big to be seen at a glance, he processes the image part by part depending on his interest. By selectively choosing what and where to look, RAM showed higher performance while reducing computation and memory usage. However, since RAM attends to image regions using a sampling method, it has the fatal weakness of relying on REINFORCE rather than back-propagation for optimization. Following RAM, the Deep Recurrent Attention Model (DRAM) showed an advanced architecture for multiple object recognition, and the Deep Recurrent Attentive Writer (DRAW) introduced a sequential image generation method without using REINFORCE.
The spatial transformer network (STN) first proposed a parametric spatial attention module for the object classification task. This model includes a localization network that outputs the parameters for selecting the region to attend to in the input image. Recently, the Recurrent Attentional Convolutional-Deconvolutional Network (RACDNN) combined the strengths of both RAM and STN for the saliency detection task. By replacing RAM's locating module with an STN, RACDNN can sequentially select where to attend in the image while still using back-propagation for optimization. This paper mainly adopts the RACDNN architecture with some technical twists to effectively insert the clue, which acts as a supervisor for problem solving.
The architecture of CRAM is based on the encoder-decoder structure. The encoder is similar to RACDNN, with a modified spatial transformer network and an inserted clue. While the encoder is identical regardless of the task, the decoder differs depending on whether the given task is classification or inpainting. Figure 1 shows the overall architecture of CRAM.
The encoder is composed of four subnetworks: the context network, spatial transformer network, glimpse network, and core recurrent neural network. The overall architecture of the encoder is shown in Figure 2. Following the flow of information, we describe each network in turn.
Context Network: The context network is the first part of the encoder; it receives the image and clue as inputs and outputs the initial state tuple of the second core RNN layer, which serves as the first input to that layer, as shown in Figure 2. Using the downsampled image and downsampled clue, the context network provides a reasonable starting point for choosing the image region to concentrate on. The downsampled image and clue are each processed with a CNN followed by an MLP.
where the output pair (c0, h0) is the initial state tuple of the second core RNN layer.
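As a concrete illustration of the preprocessing step, the context network operates on 4x downsampled versions of the image and clue (the 4x factor is stated in the implementation details). A minimal pure-Python sketch of that downsampling, assuming simple average pooling (the paper does not specify the downsampling method):

```python
def downsample4(img):
    """4x average-pool downsample of a 2D grayscale image (list of row lists).

    Average pooling is an assumption for illustration; the paper only states
    that the image and clue are downsampled by a factor of 4.
    """
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h, 4):
        row = []
        for j in range(0, w, 4):
            # Mean over each non-overlapping 4x4 block.
            block = [img[i + di][j + dj] for di in range(4) for dj in range(4)]
            row.append(sum(block) / 16.0)
        out.append(row)
    return out
```

The downsampled image and clue would then each be fed through the CNN-plus-MLP pipeline described above.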
Spatial Transformer Network: The spatial transformer network (STN) selects the region to attend to, considering the given task and clue. Unlike the original STN, CRAM uses a modified STN that receives the image, the clue, and the output of the second core RNN layer as inputs and outputs a glimpse patch. From now on, a glimpse patch denotes the attended image region, cropped and zoomed in. The STN is composed of two parts. One is the localization part, which calculates the transformation matrix with a CNN and an MLP. The other is the transformer part, which zooms in on the image using the transformation matrix above and obtains the glimpse. The affine transformation matrix with isotropic scaling and translation is given as Equation 3.
where s, tx, and ty are the scaling, horizontal translation, and vertical translation parameters, respectively.
The overall process of the STN is shown in Figure 3.
In equations, the process of the STN is as follows:
where t is the step of the core RNN, ranging from 1 to the total glimpse number. The localization input is obtained by the equation below.
where concat denotes the concatenation operation.
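The affine transform of Equation 3 with isotropic scaling and translation can be sketched directly. The matrix below, and the mapping of a normalized glimpse coordinate back to source-image coordinates, follow the standard STN formulation; the grid-sampling step that actually crops the glimpse is omitted:

```python
def affine_matrix(s, tx, ty):
    """2x3 affine matrix with isotropic scaling s and translation (tx, ty),
    as produced by the STN localization part (Equation 3 in the text)."""
    return [[s, 0.0, tx],
            [0.0, s, ty]]

def map_point(A, x, y):
    """Map a normalized glimpse coordinate (x, y) back to the source image.

    The transformer part applies this mapping over a sampling grid; here we
    show a single point for clarity."""
    xs = A[0][0] * x + A[0][1] * y + A[0][2]
    ys = A[1][0] * x + A[1][1] * y + A[1][2]
    return xs, ys
```

With s < 1 the glimpse zooms in on a sub-region, and (tx, ty) shifts where that region is centered.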
Glimpse Network: The glimpse network is a non-linear function that receives the current glimpse patch and the attended region information as inputs and outputs the current step's glimpse vector. The glimpse vector is later used as the input of the first core RNN layer. It is obtained by a multiplicative interaction between the extracted features of the glimpse patch and the attended region information, a method of interaction first proposed in prior work on visual attention. As in the other networks mentioned, a CNN and an MLP are used for feature extraction.
where the product denotes element-wise vector multiplication.
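The multiplicative interaction above is simple enough to state exactly: the "what" features (from the glimpse patch) and the "where" features (from the attend-region parameters) are projected to the same length and multiplied element-wise. A minimal sketch:

```python
def glimpse_vector(what_feats, where_feats):
    """Multiplicative interaction between glimpse-patch features ('what')
    and attended-region features ('where').

    Both inputs are assumed to have already been projected to the same
    length, as the text describes (CNN+MLP for the patch, MLP for the
    region parameters)."""
    assert len(what_feats) == len(where_feats), "feature lengths must match"
    return [a * b for a, b in zip(what_feats, where_feats)]
```

The resulting vector is what the first core RNN layer consumes at each step.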
Core Recurrent Neural Network: The recurrent neural network is the core structure of CRAM; it aggregates the information extracted from the stepwise glimpses and computes the encoded vector z. Iterating for the set number of RNN steps (the total glimpse number), the core RNN receives the glimpse vector at its first layer. The output of the second layer is in turn used by the spatial transformer network's localization part, as in Equation 5.
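The encoder loop described above can be sketched abstractly. This is a structural illustration only: the `stn`, `glimpse_net`, `rnn1`, and `rnn2` callables are hypothetical stand-ins for the real subnetworks, and states are shown as plain values rather than LSTM state tuples:

```python
def run_encoder(image_feats, num_glimpses, stn, glimpse_net, rnn1, rnn2,
                state1, state2):
    """Structural sketch of the CRAM encoder loop.

    Each step: the STN attends using the layer-2 output, the glimpse network
    builds a glimpse vector, layer 1 consumes it, and layer 2 consumes
    layer 1's output (feeding the STN at the next step). The final layer-2
    state plays the role of the encoded vector z."""
    for _ in range(num_glimpses):
        patch, where = stn(image_feats, state2)   # attend via layer-2 output
        g = glimpse_net(patch, where)             # glimpse vector
        state1 = rnn1(state1, g)                  # first core RNN layer
        state2 = rnn2(state2, state1)             # second core RNN layer
    return state2  # encoded z
```

A usage example with trivial numeric stand-ins shows the information flow:

```python
z = run_encoder(
    image_feats=1, num_glimpses=2,
    stn=lambda feats, s2: (feats + s2, s2),
    glimpse_net=lambda p, w: p * 2,
    rnn1=lambda s, g: s + g,
    rnn2=lambda s, h: s + h,
    state1=0, state2=0,
)
```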
As in the general image classification approach, the encoded z is passed through an MLP, which outputs the probability of each class. The decoder for classification is shown in Figure 4.
Utilizing the architecture of DCGAN, the contaminated image is completed starting from the encoded z from the encoder. To ensure the quality of the completed image, we adopt the generative adversarial network (GAN) framework at both the local and global scale. Here, the decoder works as the generator, and local and global discriminators evaluate its plausibility at the local and global scale, respectively. The decoder for inpainting is shown in Figure 5.
The loss function of CRAM can be divided into two parts: the encoder-related loss and the decoder-related loss. The encoder-related loss constrains the glimpse patches to be consistent with the clue. For the classification task, where the clue is object saliency, it is favorable if the glimpse patches cover as much of the salient part as possible. For the inpainting case, there should be a supervisor that urges the glimpse patches to contain the occluded region, since the regions neighboring the occlusion are the most relevant parts for completion. The encoder loss that satisfies the above condition for both the classification and inpainting cases is as follows:
where the spatial transformer is the trained network of Equation 4 and the transformation parameters are obtained from Equation 3 at each step of the core RNN. The decoder loss, which differs depending on the given task, is dealt with separately below. Note that the clue is a binary image for both the classification and inpainting tasks.
Since the decoder loss differs depending on whether the problem is classification or completion, the remaining losses are explained separately for the two tasks.
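Since the extracted text loses the exact formula, here is one hedged way such a glimpse-clue consistency penalty could look: the loss is the fraction of clue pixels not covered by any glimpse region, which is zero when the glimpses fully cover the clue. This is an illustrative reconstruction under that assumption, not the paper's exact loss:

```python
def glimpse_clue_loss(clue, boxes):
    """Sketch of an encoder loss tying glimpses to the clue.

    clue  : binary 2D mask (1 = salient / occluded region).
    boxes : glimpse regions as (top, left, height, width) in pixel coords,
            a simplification of the affine-parameterized regions in the text.
    Returns the fraction of clue pixels missed by every glimpse."""
    h, w = len(clue), len(clue[0])
    covered = [[0] * w for _ in range(h)]
    for (t, l, bh, bw) in boxes:
        for i in range(max(t, 0), min(t + bh, h)):
            for j in range(max(l, 0), min(l + bw, w)):
                covered[i][j] = 1
    clue_total = sum(sum(row) for row in clue)
    missed = sum(clue[i][j] * (1 - covered[i][j])
                 for i in range(h) for j in range(w))
    return missed / max(clue_total, 1)
```

Minimizing this drives the STN to place glimpses over the clued region in both tasks.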
The decoder-related loss for image classification uses the cross-entropy loss, as in the general classification approach. The total loss for image classification then becomes:
where the clue is the binary image that takes the value 1 for the salient part and 0 otherwise, and the two class label vectors are the predicted and ground-truth labels, respectively.
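Putting the classification objective together: cross-entropy between the predicted and ground-truth label vectors, plus the encoder (clue) loss. The weighting factor `lam` below is an assumption for illustration; the paper's text does not give the combination weight:

```python
import math

def cross_entropy(pred, truth):
    """Cross-entropy between predicted class probabilities and a one-hot
    ground-truth label vector."""
    eps = 1e-12  # guard against log(0)
    return -sum(t * math.log(p + eps) for p, t in zip(pred, truth))

def total_classification_loss(pred, truth, encoder_loss, lam=1.0):
    """Sketch of the total classification loss: decoder cross-entropy plus
    a weighted encoder-related loss (lam is a hypothetical weight)."""
    return cross_entropy(pred, truth) + lam * encoder_loss
```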
The decoder-related loss for image inpainting consists of a reconstruction loss and a GAN loss.
The reconstruction loss makes completion more stable, and the GAN loss enables higher-quality restoration. For the reconstruction loss, an L1 loss over the contaminated region of the input is used:
where z is the encoded vector from the encoder, the clue is the binary image that takes the value 1 for the occluded region and 0 otherwise, G is the generator (or decoder), and the target is the original image before contamination.
Since there are two discriminators, the local-and-global-scale GAN loss is the sum of the local GAN loss and the global GAN loss.
The GAN losses for the local and global scales are defined as follows:
where the weighting hyperparameters balance the reconstruction and GAN terms, and the total inpainting loss is their weighted sum.
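Assuming the standard GAN objective (the extracted text loses the exact equations), the two-discriminator loss can be sketched as the sum of the usual local and global discriminator terms:

```python
import math

def gan_loss_d(real_score, fake_score):
    """Standard GAN discriminator loss: -log D(real) - log(1 - D(fake)).

    Scores are sigmoid outputs in (0, 1), as the implementation details
    describe for both discriminators."""
    eps = 1e-12  # numerical guard
    return -math.log(real_score + eps) - math.log(1.0 - fake_score + eps)

def total_gan_loss(local_real, local_fake, global_real, global_fake):
    """Sum of the local-scale and global-scale discriminator losses,
    matching the text's statement that the GAN loss is their summation."""
    return (gan_loss_d(local_real, local_fake)
            + gan_loss_d(global_real, global_fake))
```

A confident discriminator (real near 1, fake near 0) drives this loss toward zero.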
V Implementation Details
In order to obtain the clue, i.e., the saliency map, we use a convolutional-deconvolutional network (CNN-DecNN), as shown in Figure 6. The CNN-DecNN is pre-trained on the MSRA10k dataset, which is by far the largest publicly available saliency detection dataset, containing 10,000 annotated saliency images. This CNN-DecNN is trained with Adam under default learning settings. During both training and inference, the rough saliency (the clue) is obtained from the pre-trained CNN-DecNN.
As mentioned earlier, the encoder consists of four subnetworks: the context network, spatial transformer network, glimpse network, and core RNN. The image and clue are downsampled 4 times and used as inputs to the context network. Each passes through a 3-layer CNN (3 x 3 kernel size, 1 x 1 stride, same zero padding), each layer followed by a max-pooling layer (3 x 3 kernel size, 2 x 2 stride, same zero padding), and outputs a vector. These vectors are concatenated and passed through a 2-layer MLP, which outputs the initial state for the second layer of the core RNN. The localization part of the spatial transformer network consists of a CNN and an MLP. For the image and clue inputs, a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) is applied; a 2-layer MLP is applied to the second core RNN output. The output vectors of the CNN and MLP are concatenated and passed through another 2-layer MLP to produce the parameters of the affine transformation. The glimpse network receives the glimpse patch and the parameters above as inputs. A 1-layer MLP is applied to the parameters, while the glimpse patch passes through a 3-layer CNN and a 1-layer MLP to match the vector lengths. The glimpse vector is obtained by element-wise multiplication of the two output vectors. The core RNN is composed of 2 layers of Long Short-Term Memory (LSTM) units, chosen for their ability to learn long-range dependencies and their stable learning dynamics. The decoder is quite simple, made up of only a 3-layer MLP. The filter numbers of the CNNs, the dimensions of the MLPs and core RNN, and the number of core RNN steps vary depending on the size of the image. All CNN and MLP layers except the last include batch normalization and ELU activation. We used the Adam optimizer with learning rate 1e-4.
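The stride-2, same-padded convolutions described above shrink the spatial size by half per layer (rounding up). A small helper makes the resulting feature-map sizes concrete, e.g. for the 3-layer localization CNN:

```python
import math

def same_conv_out(size, stride):
    """Output spatial size of a 'same'-padded convolution: ceil(size / stride).
    The kernel size does not affect the output size under 'same' padding."""
    return math.ceil(size / stride)

def loc_cnn_out(size, layers=3, stride=2):
    """Spatial size after the 3-layer, stride-2 localization CNN from the
    implementation details (5x5 kernels, same zero padding)."""
    for _ in range(layers):
        size = same_conv_out(size, stride)
    return size
```

For a 32 x 32 input such as SVHN, three stride-2 layers give 32 -> 16 -> 8 -> 4.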
The encoder settings are identical to the image classification case. The decoder (or generator) consists of fractionally-strided CNN layers (3 x 3 kernel size, 1/2 stride) applied until the original image size is recovered. Both the local and global discriminators are CNN-based, extracting features from the image to judge the input's genuineness. The local discriminator is composed of a 4-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. The global discriminator consists of a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. A sigmoid function is applied to the last outputs of the local and global discriminators to ensure the output value lies between 0 and 1. All CNN, fractionally-strided CNN, and MLP layers except the last include batch normalization and ELU activation. As in the classification setting, the filter numbers of the CNNs and fractionally-strided CNNs, the dimensions of the MLPs and core RNN, and the number of core RNN steps vary depending on the size of the image.
VI-A Image Classification
Work in progress.
VI-B Image Inpainting
The Street View House Numbers (SVHN) dataset is a real-world image dataset for object recognition obtained from house numbers in Google Street View images. The SVHN dataset contains 73,257 training digits and 26,032 testing digits of size 32 x 32 in RGB color.
Figure 7 shows the result of inpainting on the SVHN dataset, where 6.25% of the image pixels at the center are occluded. Even though the result is not excellent, it is enough to show the potential and scalability of CRAM. With a better generative model, better performance is expected.
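For a 32 x 32 image, occluding 6.25% of the pixels at the center corresponds to an 8 x 8 square (64 of 1024 pixels). A small sketch constructs this binary clue mask, assuming a centered square occlusion as described:

```python
def center_mask(h, w, frac=0.0625):
    """Binary mask with a centered square occlusion covering `frac` of the
    pixels; 1 marks occluded pixels, 0 elsewhere (matching the inpainting
    clue convention in the text). Assumes a square region."""
    side = int((h * w * frac) ** 0.5)          # side length of the square
    top, left = (h - side) // 2, (w - side) // 2
    return [[1 if top <= i < top + side and left <= j < left + side else 0
             for j in range(w)]
            for i in range(h)]
```

For h = w = 32 and frac = 0.0625 this yields an 8 x 8 block of ones in the center.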
Work in progress.
-  Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, 1989, pp. 541–551.
-  V. Mnih, N. Heess, A. Graves et al., “Recurrent models of visual attention,” in Advances in neural information processing systems, 2014, pp. 2204–2212.
-  T. Mikolov, M. Karafiát, L. Burget, J. Černockỳ, and S. Khudanpur, “Recurrent neural network based language model,” in Eleventh Annual Conference of the International Speech Communication Association, 2010.
-  J. Ba, V. Mnih, and K. Kavukcuoglu, “Multiple object recognition with visual attention,” arXiv preprint arXiv:1412.7755, 2014.
-  K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra, “Draw: A recurrent neural network for image generation,” arXiv preprint arXiv:1502.04623, 2015.
-  M. Jaderberg, K. Simonyan, A. Zisserman et al., “Spatial transformer networks,” in Advances in neural information processing systems, 2015, pp. 2017–2025.
-  J. Kuen, Z. Wang, and G. Wang, “Recurrent attentional networks for saliency detection,” arXiv preprint arXiv:1604.03227, 2016.
-  H. Larochelle and G. E. Hinton, “Learning to combine foveal glimpses with a third-order boltzmann machine,” in Advances in neural information processing systems, 2010, pp. 1243–1251.
-  A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
-  S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Globally and locally consistent image completion,” ACM Transactions on Graphics (TOG), vol. 36, no. 4, 2017, p. 107.
-  H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1520–1528.
-  M.-M. Cheng, N. J. Mitra, X. Huang, P. H. Torr, and S.-M. Hu, “Global contrast based salient region detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, 2015, pp. 569–582.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, 1997, pp. 1735–1780.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
-  D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (elus),” arXiv preprint arXiv:1511.07289, 2015.
-  Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS workshop on deep learning and unsupervised feature learning, vol. 2011, no. 2, 2011, p. 5.