Object-Level Context Modeling For Scene Classification with Context-CNN

Syed Ashar Javed and Anil Kumar Nelakanti
IIIT Hyderabad, Amazon

Convolutional Neural Networks (CNNs) have been used extensively for computer vision tasks and produce rich feature representations for objects or parts of an image. But reasoning about scenes requires integrating these low-level feature representations with high-level semantic information. We propose a deep network architecture which models the semantic context of scenes by capturing object-level information. We use Long Short-Term Memory (LSTM) units in conjunction with object proposals to incorporate object-object and object-scene relationships in an end-to-end trainable manner. We evaluate our model on the LSUN dataset and achieve results comparable to the state of the art. We further present visualizations of the learned features and analyze the model through experiments that verify its ability to model context.

1 Introduction

The task of classifying a scene requires assimilation of complex, inter-connected information. With the great success of large convolutional networks, deep features have replaced low-level hand-crafted features. CNN-only models for scene classification [56, 40] show improved performance over methods using hand-engineered features and have been used to set baseline performance. But for challenging scenes, the holistic scene information is not distilled into the CNN model, as the layers are locally connected and do not make use of the high-order semantic context of the scene. Thus vanilla CNNs, by design, are not suitable for capturing contextual knowledge like the complex interaction of objects in a scene. Other, more sophisticated approaches from the recent literature involve either multiple networks with a large number of parameters trained for weeks, or models whose components are learned separately. This leads either to models with very high complexity or to models which incoherently fuse together information from different components, limiting the effectiveness of the training process.

Figure 1: Distinguishing between complex scenes with similar global attributes and similar objects requires contextual reasoning. A bedroom scene and a living room scene both contain pillows and table lamps which by themselves are non-discriminative objects for their scene category even if their spatial position is taken into consideration.

In this work, we propose the Context-CNN model, which encodes object-level context using object proposals and LSTM units on top of a CNN that extracts deep image features. This architecture attempts to bridge the semantic gap in scenes by modeling object-object and scene-object relationships within a single system. The intuition that the joint existence of a set of objects in a scene highly influences the final scene category has been regularly highlighted and verified empirically in various works in computer vision [24, 25, 29]. Additionally, the LSTM units are capable of modeling the relationships between the objects by taking in object bounding boxes at each time step. In effect, this makes the network learn the scene class probability distribution given that it has seen a specific set of objects through time. For example, as shown in Figure 1, seeing a bed after a pillow should hint towards a bedroom scene, while seeing a sofa after a pillow would suggest a living room scene. Note that even though the model captures object-level context, it does not need any labeled objects in the dataset: the objects are represented through their CNN features, and the dependencies between them are stored within the LSTM units without explicitly needing to know the class of each object.

Our model builds on earlier work, before deep learning took off, in which context was explicitly modeled in the form of semantic context (object co-occurrence), spatial context and scale context [38, 36]. But unlike these approaches, our model takes into account the semantic context of a set of objects instead of a pair, does not involve separate, hard-to-fuse terms for the classifier probability and the context probability, and is learned end-to-end. We benchmark the model on the LSUN dataset [52], which contains 10 million images across 10 categories. The Context-CNN model achieves an accuracy of 89.03% on the validation set, which makes it one of the top performing models on this dataset. We also compare our base network with some standard models and with variations of our model designed to verify, through control experiments, the various assumptions we make about our architecture. Additionally, we analyse the CNN and LSTM features and perform experiments to highlight the context modeling capacity and the discriminative capacity of the model. To summarise, the main contributions of this paper are:

  1. We propose a new model for scene classification, referred to as the Context-CNN model, which exploits the joint presence of a set of objects in the image to infer the scene category, thus modelling semantic context. We test our model on the LSUN dataset and it produces results comparable to the state of the art. Additionally, it requires only a small fraction of the LSUN dataset (only 200k out of the total 10 million images) for the training to converge.

  2. Unlike the previous methods that model context, our model is learned end-to-end. It also models the dependencies between multiple objects at the same time without requiring any object labels.

  3. We perform extensive experiments to demonstrate that the LSTM units used for capturing object-level information are responsible for improving the accuracy. We also analyse various layers of our model empirically and visually to understand the behavior of the network.

The rest of this paper is organized as follows. In the next section we survey related work in the literature. In Section 3 we describe our Context-CNN model with details of the architecture and network training. We present the experimental results in Section 4 with an analysis of the results through some control experiments. In Section 5 we analyse our model through visualisations and finally conclude the paper in Section 6.

2 Background & related work

Earlier work in image understanding involved numerous low-level image representations like image contours, high-contrast points, histograms of oriented gradients, etc. [44], which were carefully hand crafted and proved effective for tasks like object recognition. Other image-level features like GIST [34], which capture global image statistics and spatial information, were shown to be effective in recognising scenes. More recently, deep features derived from CNNs have fared extremely well on object recognition tasks, but these CNN architectures have not had the same success with scene classification. The cause of this under-performance is often attributed to the semantic gap between the (local or global) statistical information captured by low-level features and the semantic information required for making scene-level decisions. We briefly review some approaches for scene recognition, followed by past work on bridging this semantic gap in vision tasks, and finally a review of previous methods which use the CNN-LSTM model.

Scene classification. Both global scene descriptors like GIST [34] and spatial pyramids [22], and local, low-level features like SIFT [30], have been used in the past for scene classification. Other part-based models [35, 19] try to obtain mid-level information from deformable parts. Although image-level features capture the holistic information of the scene, and low- and mid-level features capture the object information in a scene, the above methods concentrate only on the image statistics and don’t attach any clear semantic meaning to a scene or its constituents.

Alternatively, some other methods build on an object-centric view of a scene, where a set of objects are the discriminative characteristics of the scene. [24] uses a scene representation built from pre-trained object detectors. [55] introduces a measure for object-class distance to generalise the idea of an object bank and uses it for classification. In the more recent literature, [49] uses discriminative clustering of the deep CNN features of scene patches to form meta objects which, pooled together at different scales, form the final scene representation. Various other CNN architectures for scene classification have been proposed in the last few years. The MOP-CNN model [11] pools deep CNN features at different scales for smaller image patches and obtains a VLAD descriptor for the entire image. [47] uses supervision from auxiliary branch classifiers to decide whether or not to increase the depth of the network. A similar technique of using an auxiliary supervision layer along with Fisher convolution vectors is used for learning the final feature representation in [13].

Context modeling. The task of utilizing context information for scene understanding has received a lot of attention. [42] builds contextual priors based on position, scale and object categories for learning priming of objects, while [43] uses an HMM model to incorporate global context. Co-occurrence of objects, regions and even labels is often used to constrain the learning for various tasks [17, 25, 33]. [18] uses the scene layout constraint to learn the topology of a scene and perform categorisation. [4] uses a graphical model to exploit co-occurrence, position, scale and global context, which together are used to identify out-of-context objects in a scene. Similar definitions of context are used in [38, 36, 6] to model semantic, geometric and scale context. [23] considers the unlabeled area around a labeled bounding box as contextually relevant and adaptively adjusts the granularity of the regions surrounding a bounding box. In the more recent vision literature involving deep learning, context has been used in a variety of tasks including pose estimation [51], event recognition [48], activity recognition [10] and object detection [14]. R*CNN [10] uses a primary bounding box for recognising the person involved in the activity and a secondary bounding box, chosen from multiple contending regions, which provides a contextual cue for identifying the activity. The scores from both boxes are learned together in an end-to-end manner through an R-CNN architecture. Similar to R*CNN, we too use object proposals to obtain object bounding boxes and extract context from them. [48] builds a deep hierarchical model which exploits context at three levels, namely the semantic level, the prior level and the feature level. This is one of the few models which tries to learn the interaction between the various contextual cues.
A multiple instance learning based approach with a VGG network is used in [14] to identify regions within an image which may be contextually relevant to the presence of a certain object. These selected regions are then used to reason about the category of the object.

CNN-LSTM models. CNNs have been very successful in learning discriminative features for vision problems, and recurrent neural networks have been shown to effectively model the dependencies between their inputs. Many recent architectures use a combination of CNN and LSTM to jointly learn the feature representations and their dependencies. Multi-modal tasks like image captioning [45, 32, 20] and visual question answering [1, 39, 8] use a CNN for the image features while the LSTM generates the language for the caption or the answer. Some recent approaches to scene labeling and semantic segmentation use a CNN-LSTM architecture [37, 3, 28, 27], as CNN-only architectures have large receptive fields which do not allow for finer pixel-level label assignment; LSTMs also incorporate dependencies between pixels and improve agreement among their labels. Tasks involving videos also employ LSTMs after extracting deep CNN features of individual frames [50, 7, 53], since the temporal structure of videos makes frames suitable inputs for LSTM units. But as some other very recent works show, even in the absence of temporal information, CNN-LSTM models can be used effectively to model relationships between image regions or object labels [2, 26, 46]. We borrow from these works and use a CNN-LSTM combination to model context.

Figure 2: Context-CNN model architecture

3 Context-CNN model

The goal of our model is to complement the deep CNN features with high-level semantic context from objects within a scene. The following sections provide the details of the Context-CNN model and its training procedure.

3.1 Model architecture

Our model (see Figure 2) uses a pre-trained VGG16 network [41] to extract CNN features, but other choices like AlexNet [21] or ResNet [15] would work just as well. Input images are resized to a fixed size, and the last convolutional layer produces the feature maps from which object regions are pooled. Bounding boxes are extracted using edge boxes [57], and the feature maps of these object boxes are passed through an RoI pooling layer [9] to generate a fixed-size vector per feature map. These object vectors are passed as input to two subsequent LSTM layers containing 1024 and 512 units respectively, which model the interaction between the object vectors. The outputs of all time steps are concatenated to build the final feature vector, which is fed into the dense layers and then through a softmax layer for prediction.
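To make the RoI pooling step concrete, here is a minimal NumPy sketch of max pooling an arbitrary-sized bounding box region of a feature map into a fixed-size grid. The actual model uses the RoI pooling layer of Fast R-CNN [9]; the function name, grid size and coordinate convention below are illustrative assumptions.

```python
import numpy as np

def roi_max_pool(feature_map, box, output_size=(7, 7)):
    """Max-pool an arbitrary-sized RoI into a fixed-size grid.

    feature_map: (C, H, W) array of conv features.
    box: (x1, y1, x2, y2) in feature-map coordinates, half-open.
    output_size: assumed pooled grid size (the paper's exact size
    is not reproduced here).
    Returns a (C, out_h, out_w) array.
    """
    C, H, W = feature_map.shape
    x1, y1, x2, y2 = box
    out_h, out_w = output_size
    roi = feature_map[:, y1:y2, x1:x2]          # crop the RoI
    rh, rw = roi.shape[1], roi.shape[2]
    pooled = np.empty((C, out_h, out_w), dtype=feature_map.dtype)
    for i in range(out_h):
        # integer bin edges; each bin covers at least one cell
        ys = (i * rh) // out_h
        ye = max(ys + 1, ((i + 1) * rh) // out_h)
        for j in range(out_w):
            xs = (j * rw) // out_w
            xe = max(xs + 1, ((j + 1) * rw) // out_w)
            pooled[:, i, j] = roi[:, ys:ye, xs:xe].max(axis=(1, 2))
    return pooled
```

Flattening the pooled grid then yields the fixed-size object vector that is fed to the LSTM at one time step.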

Extraction of deep scene features. The pre-trained VGG16 model computes the convolutional feature maps from which the object RoIs are extracted. The features are learned such that they exploit object-containing bounding boxes to gain greater discriminative power for scene classification. This makes the approach different from training a vanilla CNN on the complete image and then using a separately learned system to model the relationships between the object features.

Modeling of object-level context. Object proposals are obtained from edge boxes using the default parameters as mentioned in their paper  [57]. The bounding box features are fed into the LSTM in decreasing order of their confidence score with increasing time steps.

Let $x_t$ be the output of the RoI pooling layer for the bounding box fed at time step $t$, flattened into a fixed-size feature vector; this is the input to the LSTM at that time step. The definition of our LSTM unit follows that of [12]. Let the weights $\{(W_i, U_i, b_i), (W_f, U_f, b_f), (W_o, U_o, b_o), (W_c, U_c, b_c)\}$ parametrise the four gates of the LSTM unit, namely the input gate, the forget gate, the output gate and the memory cell gate respectively. With $\sigma$ denoting the sigmoid function and $\tanh$ denoting the hyperbolic tangent function, the mappings applied by the LSTM to its inputs at the four gates are

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i),$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f),$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),$$
$$g_t = \tanh(W_c x_t + U_c h_{t-1} + b_c),$$

where $i_t$, $f_t$ and $o_t$ are the values of the input, forget and output gates at time instance $t$, and $g_t$ is the intermediate value at the memory cell. Given the above gate outputs, the updated memory cell at $t$ is given by

$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$

and the output of the hidden unit is given by

$$h_t = o_t \odot \tanh(c_t).$$

A shortened functional form of the whole unit can be summarised as

$$(h_t, c_t) = \mathrm{LSTM}(x_t, h_{t-1}, c_{t-1}).$$
Thus, with each passing time step, the LSTM reads in an individual object feature vector and updates its memory. This memory helps the model capture scene context by relating objects occurring in that given scene and distinguishing it from other scenes. The discriminative capacity of the network improves as the LSTM receives more information with increasing time steps. LSTMs, or Recurrent Neural Networks in general, are typically used to capture recurrence relationships, as is common with sequence data like natural language or speech. It is interesting to note that LSTMs perform well at capturing even the co-occurrence relationships of the various objects appearing in the context of a scene, where there is no such recurrence.
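The gate equations can be sketched as a single NumPy time step. The weight layout (separate input and recurrent matrices per gate) follows the formulation of [12]; all names and dimensions below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step over one RoI-pooled object feature vector.

    x_t: (d,) input vector for the current bounding box.
    h_prev, c_prev: (n,) previous hidden state and memory cell.
    W, U, b: dicts keyed by gate name ('i', 'f', 'o', 'g') holding
    input weights (n, d), recurrent weights (n, n) and biases (n,).
    """
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])   # input gate
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])   # forget gate
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])   # output gate
    g = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])   # cell candidate
    c = f * c_prev + i * g                                 # memory update
    h = o * np.tanh(c)                                     # hidden output
    return h, c
```

Iterating this step over the object vectors, ordered by proposal confidence, accumulates the object-level context in `c` and `h`.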

3.2 Training details

The VGG16 CNN is initialised with weights trained on the ImageNet dataset [5], while the rest of the layers are initialised with the method suggested in [16]. Stochastic gradient descent with momentum was used to fine-tune the model on images with scene categories as targets. The learning rate is decayed from its initial value $\alpha_0$ by a decay factor $\gamma$ according to the standard inverse-decay policy,

$$\alpha_t = \frac{\alpha_0}{1 + \gamma t},$$

where $t$ denotes the number of iterations that have passed. The learning rate and decay factor are lowered later in training depending on the validation loss. No data augmentation or regularisation is applied, and the training is done on a single Nvidia Titan GPU.
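As a sketch, assuming the standard inverse-decay form for the schedule (the paper's exact base rate and decay factor are not reproduced here, so the values below are placeholders):

```python
def inverse_decay_lr(base_lr, gamma, iteration):
    """Inverse-decay schedule: lr_t = base_lr / (1 + gamma * t).

    base_lr and gamma are hypothetical placeholders; the schedule
    shape, not the specific values, is the point of this sketch.
    """
    return base_lr / (1.0 + gamma * iteration)
```

The rate falls monotonically with iterations, so early training takes large steps and later training refines with small ones.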

The number of object proposals used is fixed, and we use those with the highest confidence scores. Though the selection of this value could also be done heuristically, we limit our analysis to this fixed value, as the primary intention of this work is to highlight the use of CNN-LSTM to model context in scenes. Also, increasing the number of object boxes hinders the analysis of the role of the edge box algorithm in the pipeline: with a large number of bounding boxes, almost the entire feature map is covered, and it becomes difficult to evaluate the significance of edge boxes as an object proposal mechanism (see Section 4.3 for the corresponding experiment).

Figure 3: Model comparison: (a) is the base Context-CNN model. (b) shows the first variation, with the output of the LSTM coming only from the last time step. (c) shows the second variation, with the LSTM units replaced by dense units. (d) is a VGG16 network.

4 Experiments & results

We next describe the experiments and their results, comparing our model’s performance on the LSUN dataset with other state-of-the-art models. We also design specific experiments to evaluate and analyze the contribution of LSTMs in the network (Section 4.2) and of object proposals (Section 4.3).

4.1 Results on LSUN dataset

We train and test our model on the LSUN dataset. The best performing variant of our model achieves an accuracy of 89.03%, which is among the best results for this dataset. (Note that we test the accuracy on the validation set while the official challenge website reports results on the testing set.) Additionally, some models from the leaderboard (see Table 1) use large ensembles, fusing predictions from multiple architectures, while we use a single end-to-end trained model.

Method Accuracy (%)
Google 91.20
SJTU-ReadSense (ensemble) 90.43
Our model 89.03
TEG Rangers (ensemble) 88.70
Table 1: Evaluation on the LSUN dataset

To empirically verify the ability of our network to model context, we train variations of our model on the LSUN dataset as control experiments to compare against the base Context-CNN model. The details and results of these experiments are discussed in the following sections.

Figure 4: t-SNE visualisation (please view in colour): In (a), each data point is a CNN feature vector of a single bounding box obtained from the RoI pooling layer. (b), (c) and (d) show the output feature vectors from progressively later time steps of the LSTM. The two axes span the 2-d embedding plane, and the 10 scene classes are denoted by their respective colors (see Figure 5 for the names of all classes and their IDs). The plot clearly shows how the discriminative ability of the feature vectors of the object bounding boxes changes across the CNN and LSTM, and also across the various time steps of the LSTM.

4.2 Significance of LSTM

We test the base model against three other variations. For the first variation, we only feed the last LSTM time step into the next layer. Since the original model concatenates vectors from all time steps, this variation highlights the importance of the information obtained from the high-confidence objects fed in earlier time steps. Note that the difference in accuracy between these models is very small, as information from the object features of earlier time steps partially propagates into the later ones.

For the second variation, we replace the LSTM layers by dense layers after the VGG16 model. This setup also includes the RoI pooling layer. So in effect, only the object features are modeled, but through dense layers instead of the LSTMs. This model highlights the difference in performance between fully connected units versus LSTM units. The results show that the LSTM units are better at scene discrimination or more precisely, that the LSTM units model dependencies between the deep object features and produce a more discriminative final representation.

For the third variation, we train a simple VGG16 for benchmarking purposes. As expected, it achieves the lowest accuracy out of all the compared variations. We note that even though both VGG16 and Context-CNN share the same convolution layers, they differ in the subsequent layers. So our model outperforms a VGG16 network by 5.6% with 8 million fewer parameters.

Model Variation Accuracy (%)
Context-CNN base model (Figure 3.a) 89.03
Context-CNN with last time step (Figure 3.b) 87.34
Context-CNN with LSTM replaced (Figure 3.c) 85.47
VGG16 (Figure 3.d) 83.41
Table 2: Model comparison: role of the LSTM
Figure 5: Analysis through obscuration: The object bounding boxes are systematically blacked out one by one before passing the image through the model, and the resulting softmax distribution is compared to the one obtained with the complete image, as a measure of the significance of each bounding box. The blacked-out bounding box that most adversely affects the softmax activation of the correct class is shown for selected images from the LSUN dataset.

4.3 Significance of object proposals

To verify the hypothesis that the proposed network improves scene classification by modeling object-level context, we replace the object proposal method by a mechanism that generates adversarial random boxes. This mechanism generates random bounding boxes which have a similar average size to the original object boxes, but less than 10% overlap with any of them. An important modification made to the Context-CNN model is an increase in the size of the feature maps from which the RoIs are pooled. We use transposed convolution layers (also known as fractionally-strided convolutions) to upscale the feature maps for this experiment. This makes it easier to sample non-overlapping bounding boxes from the feature map and simplifies the analysis. Due to computational limitations, we reduce the number of LSTM and dense units (which, together with the upscaling, worsens the results in comparison to the base Context-CNN model), but keep them fixed across this experiment. We report the model with these two changes (the upscaling and the reduction in units) in Table 3. We train two such models, one with bounding box proposals from edge boxes and the other with the adversarial random-box generating system.
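The adversarial random-box mechanism can be sketched as rejection sampling. The overlap measure used here (intersection area over the candidate box's area) and all parameter values are assumptions for illustration.

```python
import random

def area(b):
    x1, y1, x2, y2 = b
    return max(0, x2 - x1) * max(0, y2 - y1)

def intersection(a, b):
    """Intersection area of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return (x2 - x1) * (y2 - y1) if x2 > x1 and y2 > y1 else 0

def sample_adversarial_box(obj_boxes, img_w, img_h, box_w, box_h,
                           max_overlap=0.10, max_tries=1000, rng=None):
    """Sample a random box of roughly the average proposal size whose
    overlap with every original object box stays below max_overlap.

    Returns None if no valid box is found within max_tries."""
    rng = rng or random.Random(0)
    for _ in range(max_tries):
        x = rng.randint(0, img_w - box_w)
        y = rng.randint(0, img_h - box_h)
        cand = (x, y, x + box_w, y + box_h)
        if all(intersection(cand, b) / area(cand) < max_overlap
               for b in obj_boxes):
            return cand
    return None
```

Upscaled feature maps make this rejection sampling far more likely to succeed, which is consistent with the modification described above.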

Model Variation Accuracy (%)
Context-CNN (modified) with edge boxes 81.56
Context-CNN (modified) with non-overlapping random boxes 48.73
Table 3: Model comparison: role of object proposals

The results clearly show the difference in performance between the two models. We posit that this large gap in accuracy arises out of the lack of object information in random bounding boxes. We note that a similar drop in performance with random boxes is also reported in R*CNN [10], a model which also relies on the presence of objects within bounding box proposals. The drop is much more severe in our case since we use adversarial random boxes which have almost no overlap with the original object bounding boxes. This experiment verifies the intuition that good object proposals are critical to the modeling of semantic context. These results also imply that the original Context-CNN model is indeed modeling object-level semantic context.

5 Analysis and visualisation

Experiments from the previous section quantitatively measure the contribution of the layers stacked on top of the convolution layers. We next give visualizations of the Context-CNN model’s feature space and of the semantically informative image parts that help discriminate between different scenes.

5.1 Comparison of CNN and LSTM features

We visualise feature vectors obtained from the CNN and compare them with features obtained from various time steps of the LSTM. We employ t-SNE [31] to embed the feature vectors obtained from the trained model into a 2-dimensional space. Figure 4 shows these embeddings. In Figure 4(a), each data point represents the CNN feature vector of an RoI-pooled object bounding box. In Figures 4(b), 4(c) and 4(d), each data point is the output from a progressively later LSTM time step. The category of the scene from which an object feature vector is taken is used as the category of the feature vector too. Since all features are extracted from a trained network and any particular object feature will encode information about the scene itself, this choice of category for each data point is loosely justified.

The data points in the visualisation can be interpreted as the CNN and LSTM object features. Since each time step takes as input an object feature and modifies its output based on the previous object features, it is expected that with increase in the number of time steps, the capacity of the LSTM features to discriminate among scene classes should also increase. This very intuition is verified visually here.
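A minimal sketch of producing such an embedding with scikit-learn's t-SNE, using random vectors as stand-ins for the CNN/LSTM object features (class count, dimensionality and perplexity below are illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Toy stand-ins for object feature vectors from 3 scene classes,
# 10 vectors per class, 64-dimensional.
feats = np.vstack([rng.normal(loc=c * 5.0, size=(10, 64))
                   for c in range(3)])
labels = np.repeat(np.arange(3), 10)   # scene class of each vector

# Embed into 2-d; perplexity must stay below the sample count.
emb = TSNE(n_components=2, perplexity=5, init='random',
           random_state=0).fit_transform(feats)
```

Scattering `emb` coloured by `labels` then gives a plot analogous to Figure 4; cleaner per-class clusters at later LSTM time steps would indicate growing discriminative ability.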

Figure 6: (a) Softmax confidence degradation heatmap: The average drop in softmax scores across the categories, against the position of the obscured bounding box with respect to the LSTM time step, is plotted as a heatmap (see Figure 5 for the names of all classes and their IDs). (b) Accuracy of the model as a function of the occluded bounding box position.

5.2 Response to obscuration

Occlusion of various parts of the image was used as a visualisation technique in [54] to understand which areas of the image contribute how much to the final classification score.

We take a similar approach to evaluate an object bounding box, defining a measure to quantify its significance for the scene classification task. The significance of a bounding box is measured by the reduction in the softmax score of the correct class when the bounding box is obscured and the corresponding object occluded. The most significant bounding box is the one that leads to the maximum reduction in the softmax score. The best performing Context-CNN model is used for this visualisation. Selected representative images of scenes from all categories are shown in Figure 5, and the corresponding observations are as follows:

  • Bounding boxes which cover a large area, usually, tend to cause the largest reduction in the softmax scores of the correct class when obscured.

  • Each scene category contains a small set of distinct characteristic objects that help discriminate its images from that of others e.g. bed for a bedroom, projector for a conference room, sofa for a living room and so on. Each object from within the characteristic set of a given scene category could, however, vary widely in appearance and pose.

  • It is sometimes the case with certain scenes that the most significant bounding box is small in size but contains some contextual information which can be exploited in the absence of other discriminatory features. e.g. a glass of water for a restaurant scene or a pencil and paper for a classroom scene.
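The obscuration measure described above can be sketched as follows, with a generic scoring function standing in for a forward pass through the trained model; the function and argument names are illustrative.

```python
import numpy as np

def box_significance(image, boxes, score_fn):
    """Rank bounding boxes by how much obscuring each one reduces
    the correct-class softmax score.

    image: (H, W, C) array; boxes: list of (x1, y1, x2, y2);
    score_fn: maps an image to the softmax score of the correct
    class (a stand-in for the trained model's forward pass).
    Returns (order of box indices, most significant first; drops).
    """
    base = score_fn(image)
    drops = []
    for (x1, y1, x2, y2) in boxes:
        occluded = image.copy()
        occluded[y1:y2, x1:x2, :] = 0          # black out the box
        drops.append(base - score_fn(occluded))
    order = np.argsort(drops)[::-1]            # largest drop first
    return order, drops
```

The box at `order[0]` is the most significant one for the image, i.e. the box shown blacked out in Figure 5.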

We also use obscuration to visualise how the order of feeding bounding boxes into the LSTM affects the final softmax scores. The drop in scores due to obscuring each bounding box for a given scene category is plotted in Figure 6.a. As expected, obscuration of the edge boxes fed in the first few time steps reduces the softmax score of the correct class by a greater amount than for the later time steps.

This can be attributed to the fact that the initial time steps take as input the bounding boxes with the highest 'objectness' scores as measured by the edge boxes algorithm. Additionally, some classes, like the tower and bridge scenes, show uniform degradation of confidence across LSTM time steps.

Figure 6.b plots classification accuracy against the time step at which the occluded bounding box was fed into the LSTM layer. As is apparent, occlusion of the first bounding box affects performance most severely, dropping the accuracy from 89.03% (the base Context-CNN model) to 63.3% (the base model with the bounding box at the first time step occluded).

6 Conclusion

In this paper, we propose a deep model for embedding high-level semantic context of scenes. We evaluate it on the task of scene classification on the LSUN dataset, producing results comparable with the best performing methods currently available in the literature. Additional experiments to understand the proposed model point to its effectiveness in modeling relationships between object-level patches of the scene. The results indicate that complex scenes which do not have any globally discriminative features need to rely on a principled way of jointly learning multi-level representations, and that objects or image patches are a good way to incorporate these features. The model we propose can also be adapted to other vision tasks to capture contextual information.


  • [1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433, 2015.
  • [2] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. arXiv preprint arXiv:1512.04143, 2015.
  • [3] W. Byeon, T. M. Breuel, F. Raue, and M. Liwicki. Scene labeling with lstm recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3547–3555, 2015.
  • [4] M. J. Choi, A. Torralba, and A. S. Willsky. Context models and out-of-context objects. Pattern Recognition Letters, 33(7):853–862, 2012.
  • [5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
  • [6] S. K. Divvala, D. Hoiem, J. H. Hays, A. A. Efros, and M. Hebert. An empirical study of context in object detection. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1271–1278. IEEE, 2009.
  • [7] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634, 2015.
  • [8] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016.
  • [9] R. Girshick. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440–1448, 2015.
  • [10] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with R*CNN. In Proceedings of the IEEE International Conference on Computer Vision, 2015.
  • [11] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In European Conference on Computer Vision, pages 392–407. Springer, 2014.
  • [12] A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE, 2013.
  • [13] S. Guo, W. Huang, and Y. Qiao. Locally-supervised deep hybrid model for scene recognition. arXiv preprint arXiv:1601.07576, 2016.
  • [14] S. Gupta, B. Hariharan, and J. Malik. Exploring person context and local scene context for object detection. arXiv preprint arXiv:1511.08177, 2015.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.
  • [17] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In European conference on computer vision, pages 30–43. Springer, 2008.
  • [18] H. Izadinia, F. Sadeghi, and A. Farhadi. Incorporating scene context and object layout into appearance modeling. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 232–239. IEEE, 2014.
  • [19] M. Juneja, A. Vedaldi, C. Jawahar, and A. Zisserman. Blocks that shout: Distinctive parts for scene classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 923–930, 2013.
  • [20] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
  • [21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [22] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 2, pages 2169–2178. IEEE, 2006.
  • [23] C. Li, D. Parikh, and T. Chen. Extracting adaptive contextual cues from unlabeled regions. In 2011 International Conference on Computer Vision, pages 511–518. IEEE, 2011.
  • [24] L.-J. Li, H. Su, L. Fei-Fei, and E. P. Xing. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In Advances in neural information processing systems, pages 1378–1386, 2010.
  • [25] X. Li and Y. Guo. An object co-occurrence assisted hierarchical model for scene understanding.
  • [26] M. Liang and X. Hu. Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3367–3375, 2015.
  • [27] X. Liang, X. Shen, J. Feng, L. Lin, and S. Yan. Semantic object parsing with Graph LSTM. arXiv preprint arXiv:1603.07063, 2016.
  • [28] X. Liang, X. Shen, D. Xiang, J. Feng, L. Lin, and S. Yan. Semantic object parsing with local-global long short-term memory. arXiv preprint arXiv:1511.04510, 2015.
  • [29] Y. Liao, S. Kodagoda, Y. Wang, L. Shi, and Y. Liu. Understand scene categories by objects: A semantic regularized scene classifier using convolutional neural networks. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 2318–2325. IEEE, 2016.
  • [30] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110, 2004.
  • [31] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
  • [32] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-RNN). arXiv preprint arXiv:1412.6632, 2014.
  • [33] T. Mensink, E. Gavves, and C. G. Snoek. COSTA: Co-occurrence statistics for zero-shot classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2441–2448, 2014.
  • [34] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International journal of computer vision, 42(3):145–175, 2001.
  • [35] M. Pandey and S. Lazebnik. Scene recognition and weakly supervised object localization with deformable part-based models. In 2011 International Conference on Computer Vision, pages 1307–1314. IEEE, 2011.
  • [36] D. Parikh, C. L. Zitnick, and T. Chen. From appearance to context-based recognition: Dense labeling in small images. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
  • [37] P. H. O. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. In International Conference on Machine Learning, 2014.
  • [38] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
  • [39] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. In Advances in Neural Information Processing Systems, pages 2953–2961, 2015.
  • [40] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 806–813, 2014.
  • [41] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
  • [42] A. Torralba. Contextual priming for object detection. International journal of computer vision, 53(2):169–191, 2003.
  • [43] A. Torralba, K. P. Murphy, W. T. Freeman, and M. A. Rubin. Context-based vision system for place and object recognition. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pages 273–280. IEEE, 2003.
  • [44] T. Tuytelaars and K. Mikolajczyk. Local invariant feature detectors: a survey. Foundations and trends® in computer graphics and vision, 3(3):177–280, 2008.
  • [45] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015.
  • [46] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu. CNN-RNN: A unified framework for multi-label image classification. arXiv preprint arXiv:1604.04573, 2016.
  • [47] L. Wang, C.-Y. Lee, Z. Tu, and S. Lazebnik. Training deeper convolutional networks with deep supervision. arXiv preprint arXiv:1505.02496, 2015.
  • [48] X. Wang and Q. Ji. Video event recognition with deep hierarchical context model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4418–4427, 2015.
  • [49] R. Wu, B. Wang, W. Wang, and Y. Yu. Harvesting discriminative meta objects with deep CNN features for scene classification. CoRR, abs/1510.01440, 2015.
  • [50] Z. Wu, Y.-G. Jiang, X. Wang, H. Ye, X. Xue, and J. Wang. Fusing multi-stream deep networks for video classification. arXiv preprint arXiv:1509.06086, 2015.
  • [51] B. Yao and L. Fei-Fei. Modeling mutual context of object and human pose in human-object interaction activities. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 17–24. IEEE, 2010.
  • [52] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
  • [53] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4694–4702, 2015.
  • [54] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.
  • [55] L. Zhang, X. Zhen, and L. Shao. Learning object-to-class kernels for scene classification. IEEE Transactions on image processing, 23(8):3241–3253, 2014.
  • [56] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Advances in neural information processing systems, pages 487–495, 2014.
  • [57] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In European Conference on Computer Vision, 2014.