A Discriminative Representation of Convolutional Features for Indoor Scene Recognition

Abstract

Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities. This paper presents a novel approach which exploits rich mid-level convolutional features to categorize indoor scenes. Traditionally used convolutional features preserve the global spatial structure, which is a desirable property for general object recognition. However, we argue that this structuredness is not much helpful when we have large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target dataset, but it also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale dataset of 1300 object categories which are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over previous state of the art approaches on five major scene classification datasets.

1 Introduction

This paper proposes a novel method which captures the discriminative aspects of an indoor scene to correctly predict its semantic category (e.g., bedroom, kitchen etc.). This categorization can greatly assist in context aware object and action recognition, object localization, robotic navigation and manipulation [49]. However, owing to the large variabilities between images of the same class and the confusing similarities between images of different classes, the automatic categorization of indoor scenes is a very challenging problem [34]. Consider, for example, the images shown in Figure 1. The images of the top row (Figure 1a) belong to the same class ‘bookstore’ and exhibit a large data variability in the form of object occlusions, cluttered regions, pose changes and varying appearances. The images in the bottom row (Figure 1b) are of three different classes and have large visual similarities. A high performance classification system should therefore be able to cope with the inherently challenging nature of indoor scenes.

Figure 1: ‘Where am I located indoors?’ We want to answer this question by assigning a semantic class label to a given color image. An indoor scene categorization framework must take into account high intra-class variability and should be able to tackle confusing inter-class similarities. This paper introduces a methodology to achieve these challenging requisites. (Example images from the MIT-67 dataset.)

To deal with the challenges of indoor scenes, previous works [6] propose to encode either local or global spatial and appearance information. In this paper, we argue that neither of these two representations provides the best answer to effectively handle indoor scenes. The global representations are unable to model the subtle details, and the low-level local representations cannot capture object-to-object relations and the global structures [34]. We therefore devise mid-level representations that carry the necessary intermediate level of detail. These mid-level representations neither ignore the local cues nor lose the important scene structure and object category relationships.

Our proposed mid-level representations are derived from densely and uniformly extracted image patches. In order to extract a rich feature representation from these patches, we use deep Convolutional Neural Networks (CNNs). CNNs provide excellent generic mid-level feature representations and have recently shown great promise for large-scale classification and detection tasks [30]. They however tend to preserve the global spatial structure of the images [52], which is not desirable when there are large intra-class variations e.g., in the case of indoor scene categorization (Figure 1). We therefore propose a method to discount this global spatial structure, while simultaneously retaining the intermediate scene structure which is necessary to model the mid-level scene elements. For this purpose, we encode the extracted mid-level representations in terms of their association with codebooks1 of Scene Representative Patches (SRPs). This enhances the robustness of the convolutional feature representations, while keeping intact their discriminative power.

It is interesting to note that some previous works hint towards the incorporation of ‘wide context’ [20] for scene categorization. Such high-level context-aware reasoning has been shown to improve the classification performance. However, in this work, we show that for the case of highly variable indoor scenes, mid-level context relationships prove to be the most decisive factor in classification. The intermediate level of scene detail helps in learning the subtle differences in the scene composition and its constituent objects. In contrast, global structure patterns can confuse the learning/classification algorithm due to the high inter-class similarities (Section 4.3).

As opposed to existing feature encoding schemes, we propose to form multiple codebooks of SRPs. We demonstrate that forming multiple smaller codebooks (instead of one large codebook) proves to be more efficient and produces a better performance (Section 4.4). Another key aspect of our feature encoding approach is the combination of supervised and unsupervised SRPs in our codebooks. The unsupervised SRPs are collected from the training data itself, while the supervised SRPs are extracted from a newly introduced dataset of ‘Object Categories in Indoor Scenes’ (OCIS). The supervised SRPs provide semantically meaningful information, while the unsupervised SRPs relate more to the discriminative aspects of the different scenes that are present in the target dataset. The efficacy of the proposed approach is demonstrated through extensive experiments on five challenging scene classification datasets. Our experimental results show that the proposed approach consistently achieves state of the art performance.

The major contributions of this paper are: (1) we propose a new mid-level feature representation for indoor scene categorization using large-scale deep neural nets (Section 3); (2) our feature description incorporates not only the discriminative patches of the target dataset but also general object categories that are semantically meaningful (Section 3.3); (3) we collect the first large-scale dataset of object categories that are commonly present in indoor scenes, containing more than 1300 indoor object classes (Section 4.1); (4) to improve the efficiency and performance of our approach, we propose to generate multiple smaller codebooks and a feasible feature encoding (Section 3.3); and (5) we introduce a novel method to encode feature associations using max-margin hyper-planes (Section 3.4).

2 Related Work

Based on the level of image description, existing scene classification techniques can be categorized into three types: 1) those which capture low-level appearance cues, 2) those which capture the high-level spatial structure of the scene, and 3) those which capture mid-level relationships. The techniques which capture low-level appearance cues [6] perform poorly on the majority of indoor scene types since they fail to incorporate the high-level spatial information. The techniques which model the human-perceptible global spatial envelope [28] also fail to cope with the high variability of indoor scenes. The main reason for the low performance of these approaches is their neglect of the fine-grained objects, which are important for the task of scene classification.

Considering the need to extract global features as well as the characteristics of the constituent objects, Quattoni et al. [34] and Pandey et al. [32] represented a scene as a combination of root nodes (which capture the global characteristics of the scene) and a set of regions of interest (which capture the local object characteristics). However, the manual or automatic identification of these regions of interest makes their approach indirect and thus complicates the scene classification task. Another example of an indirect approach to scene recognition is the one proposed by Gupta et al. [9], where the grouping, segmentation and labeling outcomes are combined to recognize scenes. Learned mid-level patches are employed for scene categorization by Juneja et al. [13], Doersch et al. [4] and Sun et al. [41]. However, these works require considerable effort to learn the distinctive primitives, including discriminative patch ranking and selection. In contrast, our mid-level representation does not require any learning. Instead, we uniformly and densely extract mid-level patches from the images and show that these perform best when combined with supervised object representations.

Deep Convolutional Neural Networks have recently shown great promise in large-scale visual recognition and classification [31]. Although CNN features have demonstrated their discriminative power for images with one or multiple instances of the same object, they do preserve the spatial structure of the image, which is not desirable when dealing with the variability of indoor scenes [10]. CNN architectures involve max-pooling operations to deal with the local spatial variability in the form of rotation and translation [15]. However, these operations are not sufficient to cope with the large-scale deformations of objects and parts that are commonly present in indoor scenes [10]. In this work, we propose a novel representation which is robust to variations in the spatial structure of indoor scenes. It represents an image in terms of the association of its mid-level patches with the codebooks of the SRPs.

Figure 2: Deep Un-structured Convolutional Activations: Given an input image, we extract from it dense mid-level patches, represent the extracted patches by their convolutional activations and encode them in terms of their association with the codebooks of Scene Representative Patches (SRPs). The designed codebooks have both supervised and unsupervised SRPs. The resulting associations are then pooled and the class belonging decisions are predicted using a linear classifier.

3 The Proposed Method

The block diagram of our proposed pipeline, called ‘Deep Un-structured Convolutional Activations (DUCA)’, is shown in Figure 2. Our proposed method first densely and uniformly extracts mid-level patches (Section 3.1), represents them by their convolutional activations (Section 3.2) and then encodes them in terms of their association with the codebooks of SRPs (Section 3.4), which are generated in supervised and unsupervised manners (Section 3.3). The detailed description of each component of the proposed pipeline is presented next.

3.1 Dense Patch Extraction

To deal with the high variability of indoor scenes, we propose to extract mid-level instead of global [28] or local [6] feature representations. Mid-level representations neither ignore object-level relationships and the discriminative appearance-based local cues (unlike the high-level global descriptors), nor the holistic shape and scene structure information (unlike the low-level local descriptors). For each image, we extract dense mid-level patches using a sliding window of fixed patch size and step size. In order to extract a reasonable number of patches, the smaller dimension of the image is re-scaled to an appropriate length (700 pixels in our case). Note that the idea of dense patch extraction is analogous to dense key-point extraction [27], which has shown very promising performance over well-designed key-point extraction methods in a number of tasks (e.g., action recognition [46]).
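
The following is a minimal sketch of this dense, grid-based patch extraction in Python (NumPy and Pillow). The patch_size and stride values are placeholders, since the exact window and step sizes are not stated in this text; only the 700-pixel rescaling of the shorter image side follows the paper.

```python
import numpy as np
from PIL import Image

def dense_patches(image, patch_size=128, stride=32, short_side=700):
    """Densely extract square mid-level patches on a uniform grid.

    patch_size and stride are illustrative placeholders; the shorter image
    side is first rescaled to short_side pixels (700 in the paper).
    """
    w, h = image.size
    scale = short_side / min(w, h)
    image = image.resize((int(round(w * scale)), int(round(h * scale))),
                         Image.BILINEAR)
    arr = np.asarray(image)
    H, W = arr.shape[:2]
    patches = []
    for y in range(0, H - patch_size + 1, stride):
        for x in range(0, W - patch_size + 1, stride):
            patches.append(arr[y:y + patch_size, x:x + patch_size])
    return patches
```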

Before the dense patch extraction, we augment the images of the dataset with their flipped, cropped and rotated versions to enhance the generalization of our feature representation. First, five cropped images (four from the corners and one from the center) are extracted from the original image. Each original image is also subjected to small clockwise (CW) and counter-clockwise (CCW) rotations, and the resulting images are included in the augmented set. The horizontally flipped versions of all these eight images (1 original + 5 cropped + 2 rotated) are also included. The proposed data augmentation results in a reasonable performance boost (see Section 4.4).
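
A possible implementation of this augmentation step is sketched below with Pillow. The crop fraction (crop_frac) and rotation angle (angle_deg) are assumptions, as their exact values are not given in this text; the overall structure (1 original + 5 crops + 2 rotations, then horizontal flips of all eight) follows the paper.

```python
from PIL import Image

def augment(image, crop_frac=0.9, angle_deg=10):
    """Original image, four corner crops, one centre crop, two small rotations,
    plus horizontal flips of all eight images (16 images in total)."""
    w, h = image.size
    cw, ch = int(w * crop_frac), int(h * crop_frac)
    crops = [
        image.crop((0, 0, cw, ch)),                        # top-left
        image.crop((w - cw, 0, w, ch)),                    # top-right
        image.crop((0, h - ch, cw, h)),                    # bottom-left
        image.crop((w - cw, h - ch, w, h)),                # bottom-right
        image.crop(((w - cw) // 2, (h - ch) // 2,
                    (w + cw) // 2, (h + ch) // 2)),        # centre
    ]
    rotations = [image.rotate(angle_deg),                  # counter-clockwise
                 image.rotate(-angle_deg)]                 # clockwise
    base = [image] + crops + rotations                     # 1 + 5 + 2 = 8
    flips = [im.transpose(Image.FLIP_LEFT_RIGHT) for im in base]
    return base + flips
```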

3.2 Convolutional Feature Representations

We need to map the raw image patches to a discriminative feature space where scene categories are easily separable. For this purpose, instead of using shallow or local feature representations, we use the convolutional activations from a trained deep CNN architecture. Learned representations based on CNNs have significantly outperformed hand-crafted representations in nearly all major computer vision tasks. Our CNN architecture is similar to ‘AlexNet’ [15] (trained on ILSVRC 2012) and consists of 5 convolutional and 3 fully-connected layers. The main difference compared to AlexNet is the dense connections between each pair of consecutive layers in our 8-layered network. The densely and uniformly extracted patches from the images are fed to the network’s input layer after mean normalization. The processed output is taken from an intermediate fully connected layer of the network. The resulting feature representation of each mid-level patch has a dimension of 4096.
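
As a rough illustration, the snippet below extracts 4096-dimensional activations from an intermediate fully-connected layer of a pre-trained AlexNet. Torchvision's ImageNet-trained AlexNet is used here as a stand-in for the paper's network (which has a different layer connectivity), and the exact truncation point is an assumption.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Stand-in network: torchvision AlexNet truncated at an intermediate FC layer.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
feature_net = torch.nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:5],   # stop at the second FC layer (4096-d)
)

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),                        # network input size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],      # ImageNet mean normalization
                std=[0.229, 0.224, 0.225]),
])

def patch_features(patches):
    """Map a list of HxWx3 uint8 patches to 4096-d convolutional activations."""
    batch = torch.stack([preprocess(p) for p in patches])
    with torch.no_grad():
        return feature_net(batch).numpy()        # (num_patches, 4096)
```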

Although CNN activations capture rich discriminative information, they are inherently highly structured. The main reason is the sequence of operations involved in the hierarchical layers of the CNN, which preserves the global spatial structure of the image. This constraining structure is a limitation when dealing with highly variable indoor scene images. To address this, we propose to encode our patches (represented by their convolutional activations) in an alternate feature space which turns out to be even more discriminative (Section 3.4). Specifically, an image is encoded in terms of the association of its extracted patches with the codebooks of the Scene Representative Patches (SRPs).

3.3 Scene Representative Patches (SRPs)

An indoor scene is a collection of several distinct objects and concepts. We are interested in extracting a set of image patches of these objects and concepts, which we call ‘Scene Representative Patches’ (SRPs). The SRPs can then be used as elements of a codebook to characterize any instance of an indoor scene. Examples of these patches for a bedroom scene include a bed, wardrobe, sofa or a table. Designing a comprehensive codebook of these patches is a very challenging task. There are two possible solutions: first, automatically learn to discover a number of discriminative patches from the training data; second, manually prepare an exhaustive vocabulary of all objects which can be present in indoor scenes. Both solutions are quite demanding: an exhaustive vocabulary must cover a very large number of possible objects, while automatic discovery requires object detection, localization or distinctive patch selection, which is in itself very challenging and computationally expensive.

In this work, we propose a novel approach to compile a comprehensive set of SRPs. Our proposed approach avoids the drawbacks of the above mentioned strategies and successfully combines their strengths i.e., it is computationally very efficient while being highly discriminative and semantically meaningful. Our set of SRPs has two main components, compiled in a supervised and an unsupervised manner. These components are described next.

Supervised SRPs

A codebook of supervised SRPs is generated from images of well-known object categories expected to be present in a particular indoor scene (e.g., a microwave in a kitchen, a chair in a classroom). The codebook contains human-understandable elements which carry well-defined semantic meanings (similar to attributes [5] or object banks [20]). In this regard, we introduce the first large-scale database of object categories in indoor scenes (Section 4.1). The introduced database includes an extensive set of indoor objects (more than 1300 categories). The codebook of supervised SRPs is generated from images of the database by extracting dense mid-level patches after re-sizing the smallest dimension of each image. The number of SRPs in the compiled codebook is equal to the number of object categories in the OCIS database: in the feature space, each SRP is a max-pooled version of the convolutional activations (Section 3.2) of all the mid-level patches extracted from that object category. The supervised codebook is then used in Section 3.4 to characterize a given scene image in terms of its constituent objects.
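
A minimal sketch of building the supervised codebook is given below, reusing the illustrative dense_patches and patch_features helpers from the earlier sketches: each OCIS category contributes one SRP obtained by element-wise max pooling over the activations of all its patches.

```python
import numpy as np

def supervised_codebook(ocis_images_by_category):
    """One SRP per OCIS object category: the element-wise maximum over the
    convolutional activations of all mid-level patches of that category."""
    srps = []
    for images in ocis_images_by_category:       # list of PIL images per category
        feats = np.vstack([patch_features(dense_patches(im)) for im in images])
        srps.append(feats.max(axis=0))           # max-pooled 4096-d SRP
    return np.vstack(srps)                       # (num_categories, 4096)
```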

Unsupervised SRPs

The codebook of unsupervised SRPs is generated from the patches extracted from the training data. First, we densely and uniformly extract patches from the training images by following the procedure described in Section 3.1. The SRPs could then be generated from these patches using any unsupervised clustering technique. However, in our case, we randomly sample the patches as our unsupervised SRPs. This is because we are dealing with a very large number of extracted patches, and unsupervised clustering can be computationally prohibitive. We demonstrate in our experiments (Sections 4.3 and 4.4) that random sampling does not cause any noticeable performance degradation, while achieving significant computational advantages.

Ideally, the codebook of SRPs should be all-inclusive and cover all discriminative aspects of indoor scenes. One might therefore expect a large number of SRPs in order to cover all the possible aspects of various scene categories. While this is indeed the case, feature encoding from a single large codebook would be computationally burdensome (Section 4.4). We therefore propose to generate multiple codebooks of relatively smaller sizes. The association vectors from each of these codebooks can then be concatenated to generate a high dimensional feature vector. This guarantees the incorporation of a large number of SRPs at a low computational cost. To this end, we generate three unsupervised codebooks of equal size, where the codebook size was selected empirically on a small validation set.
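
The sketch below shows one way to build the unsupervised codebooks by randomly sampling patch activations from the training set and splitting them into three equally sized codebooks. The codebook_size value is a placeholder, since the size selected on the validation set is not given in this text.

```python
import numpy as np

def unsupervised_codebooks(train_patch_feats, n_codebooks=3,
                           codebook_size=500, seed=0):
    """Randomly sample training-patch activations into several smaller codebooks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(train_patch_feats),
                     size=n_codebooks * codebook_size, replace=False)
    sampled = train_patch_feats[idx]             # (n_codebooks * codebook_size, 4096)
    return np.split(sampled, n_codebooks)        # list of (codebook_size, 4096) arrays
```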

The SRPs in the supervised codebook are semantically meaningful; however, they do not cover all possible aspects of the different scene categories. The unsupervised codebooks compensate for this shortcoming and complement the supervised codebook. The combination of the supervised and unsupervised codebooks therefore results in improved discrimination and accuracy (see Section 4.4).

3.4 Feature Encoding from SRPs

Given an RGB image, our task is to find its feature representation in terms of the previously generated codebooks of SRPs (Section 3.3). For this purpose, we first densely extract patches from the image using the procedure explained in Section 3.1. Next, the patches are represented by their convolutional activations as discussed in Section 3.2. The patches are then encoded in terms of their association with the SRPs of the codebooks. The following two strategies are devised for this purpose.

Sparse Linear Coding

Let $\mathbf{C} \in \mathbb{R}^{d \times k}$ be a codebook of $k$ SRPs (one SRP per column). A mid-level patch, represented by its convolutional activation vector $\mathbf{x} \in \mathbb{R}^{d}$, is sparsely reconstructed from the SRPs of the codebook using:

$$\boldsymbol{\alpha}^{*} = \arg\min_{\boldsymbol{\alpha}} \; \|\mathbf{x} - \mathbf{C}\boldsymbol{\alpha}\|_{2}^{2} + \lambda\|\boldsymbol{\alpha}\|_{1},$$

where $\lambda$ is the regularization constant. The sparse coefficient vector $\boldsymbol{\alpha}^{*}$ is then used as the final feature representation of the patch.
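
This sparse coding step can be prototyped with scikit-learn's SparseCoder, as sketched below. The codebook rows are the SRP activation vectors and the regularization value lam is an assumed placeholder.

```python
from sklearn.decomposition import SparseCoder

def sparse_codes(patch_feats, codebook, lam=1.0):
    """Sparse linear coding of patch activations against a codebook of SRPs,
    i.e. min_a ||x - C a||_2^2 + lam * ||a||_1 (one code per patch)."""
    coder = SparseCoder(dictionary=codebook,          # (n_srps, 4096), rows = SRPs
                        transform_algorithm='lasso_lars',
                        transform_alpha=lam)
    return coder.transform(patch_feats)               # (n_patches, n_srps)
```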

Proposed Classifier Similarity Metric Coding

We propose a new soft encoding method which uses the maximum-margin hyper-planes to measure feature associations. Given a codebook of $k$ SRPs, we train $k$ linear binary one-vs-all SVMs. Each SVM finds the maximum-margin hyperplane which optimally discriminates one SRP from all the others. Let $\mathbf{W} = [\mathbf{w}_{1}, \dots, \mathbf{w}_{k}]$ be the weight matrix formed by the learnt SVMs. A patch $\mathbf{x}$ can then be encoded in terms of the trained SVMs using $\boldsymbol{\alpha} = \mathbf{W}^{\top}\mathbf{x}$, i.e., by its responses to all the learnt hyper-planes. Since we have multiple codebooks (one supervised and three unsupervised), the patch is separately encoded from all of them. The final representation of the patch is then obtained by concatenating the encoded feature representations from all codebooks into a single feature vector.
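
The sketch below illustrates this idea: one linear one-vs-all SVM is trained per SRP of a codebook, and each patch is then encoded by its scores against all the resulting hyperplanes. The LinearSVC settings and the inclusion of the bias term are assumptions; the paper only states that max-margin hyperplanes are used to measure the associations.

```python
import numpy as np
from sklearn.svm import LinearSVC

def classifier_codebook(codebook):
    """Train one linear one-vs-all SVM per SRP and stack the hyperplane parameters."""
    W, b = [], []
    for i in range(len(codebook)):
        y = np.zeros(len(codebook))
        y[i] = 1                                   # this SRP vs. all the others
        svm = LinearSVC(C=1.0).fit(codebook, y)
        W.append(svm.coef_.ravel())
        b.append(svm.intercept_[0])
    return np.vstack(W), np.array(b)               # (n_srps, 4096), (n_srps,)

def encode(patch_feats, W, b):
    """Encode each patch by its score against every max-margin hyperplane."""
    return patch_feats @ W.T + b                   # (n_patches, n_srps)
```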

3.5 Classification

The encoded feature representations from all mid-level patches of the image are finally pooled to produce the overall feature representation of the image. Two commonly used pooling strategies (mean pooling and max pooling) are explored in our experiments (see Section 4.4). Finally, in order to perform classification, we use one-vs-one linear SVMs:

$$\min_{\mathbf{w}} \; \frac{1}{2}\|\mathbf{w}\|_{2}^{2} + C\sum_{i}\max\big(0,\, 1 - y_{i}\,\mathbf{w}^{\top}\mathbf{f}_{i}\big),$$

where $\mathbf{w}$ is the normal vector to the learned max-margin hyper-plane, $C$ is the regularization parameter and $y_{i} \in \{-1,+1\}$ is the binary class label of the feature vector $\mathbf{f}_{i}$.
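
A compact sketch of the final pooling and classification stage is shown below; max pooling is the default since it performed best in the ablation study, and scikit-learn's SVC with a linear kernel (which trains one-vs-one classifiers internally) stands in for the paper's SVM implementation.

```python
import numpy as np
from sklearn.svm import SVC

def image_descriptor(patch_codes_per_codebook, pooling='max'):
    """Pool the per-patch association vectors over each codebook and concatenate."""
    pooled = [codes.max(axis=0) if pooling == 'max' else codes.mean(axis=0)
              for codes in patch_codes_per_codebook]
    return np.concatenate(pooled)

# Final scene classification (X: image descriptors, y: scene labels):
# clf = SVC(kernel='linear').fit(X, y); predictions = clf.predict(X_test)
```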

4 Experiments and Evaluation

We evaluate our approach on three indoor scene classification datasets: the MIT-67 dataset, the 15 Category Scene dataset and the NYU indoor scene dataset. Confusing inter-class similarities and high within-class variabilities make these datasets very challenging. Specifically, MIT-67 is the largest dataset of indoor scene images, containing 67 classes. The images of many of these classes are very similar looking, e.g., inside-subway and inside-bus (see Figure 5 for confusing and challenging examples). Moreover, we also report results on two event and object classification datasets (the Graz-02 dataset and the 8-Sports event dataset) to demonstrate that the proposed technique is applicable to other related tasks. A detailed description of each of these datasets, followed by our experimental setups and the corresponding results, is presented in Section 4.2 and Section 4.3. First, we provide a description of our introduced OCIS dataset below.

Figure 3: A word cloud of the top 300 most frequently occurring classes in our introduced Object Categories in Indoor Scenes (OCIS) database.
Figure 4: Example images from the ‘Object Categories in Indoor Scenes’ dataset. This dataset contains a diverse set of object classes with different sizes and scales (e.g., Alcove and Melon). Each category includes a rich set of images with differences in appearance, shape, viewpoint and background.
Table 1: Mean accuracy on the MIT-67 Indoor Scenes Dataset. Comparisons with the previous state-of-the-art methods are also shown. Our approach performs best in comparison to techniques which use a single or multiple feature representations.
Method Accuracy (%) Method Accuracy (%)
ROI + GIST [CVPR’09] OTC [ECCV’14]
MM-Scene [NIPS’10] Discriminative Patches [ECCV’12]
SPM [CVPR’06] ISPR [CVPR’14]
Object Bank [NIPS’10] D-Parts [ICCV’13]
RBoW [CVPR’12] VC + VQ [CVPR’13]
Weakly Supervised DPM [ICCV’11] IFV [CVPR’13]
SPMSM [ECCV’12] MLRep [NIPS’13]
LPR-LIN [ECCV’12] CNN-MOP [ECCV’14]
BoP [CVPR’13] CNNaug-SVM [CVPRw’14]
Hybrid Parts + GIST + SP [ECCV’12] Proposed DUCA 71.8

4.1 A Dataset of Object Categories in Indoor Scenes

There is an exhaustive list of scene elements (including objects, structures and materials) that can be present in indoor scenes. Any information about these scene elements can prove crucial for the scene categorization task (and even beyond, e.g., for semantic labeling or attribute identification). However, to the best of our knowledge, there is no publicly available dataset of these indoor scene elements. In this paper, we introduce the first large-scale OCIS (Object Categories in Indoor Scenes) database. The database spans more than 1300 frequently occurring indoor object categories, each represented by a rich set of images. The database can potentially be used for fine-grained scene categorization, high-level scene understanding and attribute-based reasoning. In order to collect the data, a comprehensive list of indoor objects was manually chosen from the labelings provided with the MIT-67 [34] dataset. This taxonomy includes a diverse set of object classes ranging from a ‘house’ to a ‘handkerchief’. A word cloud of the top 300 most frequently occurring classes is shown in Figure 3. The images for each class were then collected using an online image search (Google API). Each image contains one or more instances of a specific object category. In order to illustrate the diverse intra-class variability of this database, we show some example images in Figure 4. Our in-house annotated database will be made freely available to the research community.

For the benchmark evaluation, we represent the images of the database by their convolutional features and feed them to a linear classifier (SVM). A fixed train-test split is defined for each class. The classification results are reported in terms of the Cumulative Match Curve (CMC); the rank-1 and rank-20 identification rates remain modest, which suggests that indoor object categorization is a very challenging task.

4.2 Evaluated Datasets

The performance of our proposed method is evaluated on the MIT-67 dataset, the 15 Category Scene dataset, the NYU Indoor Scene dataset, the Graz-02 dataset and the 8-Sports event dataset. Below, we present a brief description of each of these datasets, followed by an analysis of the achieved performance.

MIT-67 Dataset

It contains 15620 images of 67 indoor categories. For performance evaluation and comparison, we followed the standard evaluation protocol in [34], in which a subset of the data is used (100 images per class) with a train-test split of 80 training and 20 test images for each class.

15 Category Scene Dataset

It contains images of 15 urban and natural scene categories. The number of images in each category ranges from 200-400. For our experiments, we use the same evaluation setup as in [18], where 100 images per class are used for training and the rest for testing.

NYU v1 Indoor Scene Dataset

It consists of 7 indoor scene categories with a total of 2347 images. Following the standard experimental protocol [39], we used a train/test split for evaluation. Care has been taken while splitting the data to ensure minimal or no overlap of consecutive frames between the training and testing sets.

Inria Graz-02 Dataset

It consists of 1096 images belonging to 3 classes (bikes, cars and people) in the presence of heavy clutter, occlusions and pose variations. For performance evaluation, we used the protocol defined in [25]. Specifically, for each class, the first 150 odd-numbered images are used for training and the first 150 even-numbered images for testing.

UIUC 8-Sports Event Dataset

It contains 1574 images of 8 sports categories. Following the protocol defined in [19], we used 70 randomly sampled images for training and 60 for testing.

Table 2: Mean accuracy on the 15 Category Scene Dataset. Comparisons with the previous best techniques are also shown.
Method Accuracy(%) Method Accuracy (%)
GIST-color [IJCV’01] ISPR [CVPR’14]
RBoW [CVPR’12] VC + VQ [CVPR’13]
Classemes [ECCV’10] LMLF [CVPR’10]
Object Bank [NIPS’10] LPR-RBF [ECCV’12]
SPM [CVPR’06] Hybrid Parts + GIST + SP [ECCV’12]
SPMSM [ECCV’12] CENTRIST+LCC+Boosting [CVPR’11]
LCSR [CVPR’12] RSP [ECCV’12]
SP-pLSA [PAMI’08] IFV
CENTRIST [PAMI’11] LScSPM [CVPR’10]
HIK [ICCV’09]
OTC [ECCV’14] Proposed DUCA 94.5
Table 3: Mean accuracy on the UIUC 8-Sports Dataset.
Method Accuracy (%)
GIST-color [IJCV’01]
MM-Scene [NIPS’10]
Graphical Model [ICCV’07]
Object Bank [NIPS’10]
Object Attributes [ECCV’12]
CENTRIST [PAMI’11]
RSP [ECCV’12]
SPM [CVPR’06]
SPMSM [ECCV’12]
Classemes [ECCV’10]
HIK [ICCV’09]
LScSPM [CVPR’10]
LPR-RBF [ECCV’12]
Hybrid Parts + GIST + SP [ECCV’12]
LCSR [CVPR’12]
VC + VQ [CVPR’13]
IFV
ISPR [CVPR’14]

Proposed DUCA 98.7

4.3 Experimental Results

The quantitative results of the proposed method for the task of indoor scene categorization are presented in Tables 1, 2 and 4. The proposed method achieves the highest classification rate on all three datasets. Compared with the existing state of the art, a relative performance increment of 4.1%, 5.4% and 1.3% is achieved for the MIT-67, Scene-15 and NYU datasets respectively. Amongst the compared methods, the mid-level feature representation based methods [4] perform better than the others. Our proposed mid-level feature based method not only outperforms them in accuracy but is also computationally efficient (e.g., [4] takes weeks to train several part detectors). Furthermore, compared with existing methods, our proposed method uses a lower dimensional feature representation for classification (e.g., the Improved Fisher Vector (IFV) of Juneja et al. [13] and the MOP representation of Gong et al. [8] both have a considerably higher dimensionality).

In addition to indoor scene classification, we also evaluate our approach on other scene classification tasks where large variations and deformations are present. To this end, we report the classification results on the UIUC 8-Sports dataset and the Graz-02 dataset (see Tables 3 and 5). It is interesting to note that the Graz-02 dataset contains heavy clutter, pose and scale variations (e.g., for some ‘car’ images only 5% of the pixels are covered by the car in a scene). Our approach achieves high performances of 98.7% and 98.6% on the UIUC 8-Sports and Graz-02 datasets respectively, which are higher than the previous best reported methods on both datasets.

Table 4: Mean Accuracy for the NYU v1 Dataset.
Method Accuracy (%)
BoW-SIFT [ICCVw’11]
RGB-LLC [TC’13]
RGB-LLC-RPSL [TC’13]

Proposed DUCA

Table 5: Equal Error Rates (EER) for the Graz-02 dataset.
Cars People Bikes Overall
OLB [SCIA’05] 70.7 81.0 76.5 76.1
VQ [ICCV’07] 80.2 85.2 89.5 85.0
ERC-F [PAMI’08] 79.9 - 84.4 82.1
TSD-IB [BMVC’11] 87.5 85.3 91.2 88.0
TSD-k [BMVC’11] 84.8 87.3 90.7 87.6

Proposed DUCA 98.7 98.0 99.0 98.6
Figure 5: Example mistakes and limitations of our method. Most of the incorrect predictions are due to ambiguous cases. The actual and predicted class names are shown in blue and red respectively. (Best viewed in color)


Figure 6: The contribution of distinctive patches towards the correct class prediction of a scene is shown in the form of a heat map (red means more contribution). These examples show that our approach captures the discriminative properties of distinctive mid-level patches and uses them to predict the correct class. (Best viewed in color)
Figure 7: Confusion matrix for the MIT-67 dataset. It can be seen that the proposed method confuses similar looking classes with each other, e.g., library with bookstore, bedroom with living room and dental office with operating room. (Best viewed in color)

The class-wise classification accuracies for the MIT-67, UIUC 8-Sports, Scene-15 and NYU datasets are shown in the form of confusion matrices (Figure 7 for MIT-67). Note the very strong diagonal in all confusion matrices. The majority of the mistakes are made for closely related classes, e.g., coast-opencountry, croquet-bocce, bedroom-livingroom, dentaloffice-operatingroom and library-bookstore (Figure 7). We also show examples of misclassified images in Figure 5. The results show that classes with significant visual and semantic similarities are confused with each other, e.g., childrenroom-kindergarten and movietheatre-auditorium (Figure 5).

In order to visualize which patches contributed most towards a correct classification, we plot the heat map of the patch contribution scores in Figure 6. It turns out that the most distinctive patches, which carry valuable information, have a higher contribution towards the correct prediction of a scene class. Moreover, mid-level patches carry an intermediate level of scene details and contextual relationships between objects which help in the scene classification process.

4.4 Ablative Analysis

To analyze the effect of the different components of the proposed scheme on the final performance, we conduct an ablation study. Table 6 summarizes the results achieved on the MIT-67 Scenes dataset when different components were replaced or removed from the final framework.

It turns out that the supervised and unsupervised codebooks individually perform reasonably well; however, their combination gives state of the art performance. For the unsupervised codebooks, k-means clustering performs slightly better, however at the cost of a considerable amount of memory and processing time on the MIT-67 dataset. In contrast, random sampling of SRPs gives a comparable performance with a big boost in computational efficiency. Feature encoding from a single large codebook not only produces a lower performance, but also requires more computational time and memory; in our experiments, it takes almost twice the per-image encoding time of multiple smaller codebooks. The resulting features performed best when the max-pooling operation was applied to combine them.

Table 6: Ablative Analysis on MIT-67 Scene Dataset.
Variants of Our Approach Accuracy (%)
Supervised codebook
Unsupervised codebook
Supervised + Unsupervised
K-means clustering
Random sampling
Single large codebook
Multiple smaller codebooks
Sparse linear coding
Classifier similarity metric coding
Mean-pooling
Max-pooling
Original data
Data augmentation

5 Conclusion

This paper proposed a robust feature representation, based on discriminative mid-level convolutional activations, for highly variable indoor scenes. To suitably adapt the convolutional activations to indoor scenes, the paper proposed to break their inherently preserved global spatial structure by encoding them in terms of multiple codebooks. These codebooks are composed of distinctive patches and of semantically labeled elements. For the labeled elements, we introduced the first large-scale dataset of object categories in indoor scenes. Our approach achieves state-of-the-art performance on five very challenging datasets.

Acknowledgements

This research was supported by the SIRF and IPRS scholarships from the University of Western Australia (UWA) and the Australian Research Council (ARC) grants DP110102166, DP150100294 and DE120102960.

Footnotes

  1. A codebook is a collection of distinctive mid-level patches.

References

  1. A. Bosch, A. Zisserman, and X. Muoz, “Scene classification using a hybrid generative/discriminative approach,” Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 4, pp. 712–727, 2008.
  2. Y.-L. Boureau, F. Bach, Y. LeCun, and J. Ponce, “Learning mid-level features for recognition,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2010, pp. 2559–2566.
  3. K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: Delving deep into convolutional nets,” in British Machine Vision Conference, 2014.
  4. C. Doersch, A. Gupta, and A. A. Efros, “Mid-level visual element discovery as discriminative mode seeking,” in Advances in Neural Information Processing Systems, 2013, pp. 494–502.
  5. A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth, “Describing objects by their attributes,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 1778–1785.
  6. L. Fei-Fei and P. Perona, “A bayesian hierarchical model for learning natural scene categories,” in International Conference on Computer Vision and Pattern Recognition, vol. 2. IEEE, 2005, pp. 524–531.
  7. S. Gao, I. W. Tsang, L.-T. Chia, and P. Zhao, “Local features are not lonely–laplacian sparse coding for image classification,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2010, pp. 3555–3561.
  8. Y. Gong, L. Wang, R. Guo, and S. Lazebnik, “Multi-scale orderless pooling of deep convolutional activation features,” in European Conference on Computer Vision. Springer International Publishing, 2014, pp. 392–407.
  9. S. Gupta, P. Arbelaez, and J. Malik, “Perceptual organization and recognition of indoor scenes from rgb-d images,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2013, pp. 564–571.
  10. K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in European Conference on Computer Vision. Springer, 2014, pp. 346–361.
  11. Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” arXiv preprint arXiv:1408.5093, 2014.
  12. Y. Jiang, J. Yuan, and G. Yu, “Randomized spatial partition for scene recognition,” in European Conference on Computer Vision. Springer, 2012, pp. 730–743.
  13. M. Juneja, A. Vedaldi, C. Jawahar, and A. Zisserman, “Blocks that shout: Distinctive parts for scene classification,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2013, pp. 923–930.
  14. J. Krapac, J. Verbeek, F. Jurie et al., “Learning tree-structured descriptor quantizers for image categorization,” in British Machine Vision Conference, 2011.
  15. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  16. S. Kumar and M. Hebert, “A hierarchical field framework for unified context-based classification,” in International Conference on Computer Vision, vol. 2. IEEE, 2005, pp. 1284–1291.
  17. R. Kwitt, N. Vasconcelos, and N. Rasiwasia, “Scene recognition on the semantic manifold,” in European Conference on Computer Vision. Springer, 2012, pp. 359–372.
  18. S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” in International Conference on Computer Vision and Pattern Recognition, vol. 2. IEEE, 2006, pp. 2169–2178.
  19. L.-J. Li and L. Fei-Fei, “What, where and who? classifying events by scene and object recognition,” in International Conference on Computer Vision. IEEE, 2007, pp. 1–8.
  20. L.-J. Li, H. Su, L. Fei-Fei, and E. P. Xing, “Object bank: A high-level image representation for scene classification & semantic feature sparsification,” in Advances in Neural Information Processing Systems, 2010, pp. 1378–1386.
  21. L.-J. Li, H. Su, Y. Lim, and L. Fei-Fei, “Objects as attributes for scene classification,” in Trends and Topics in Computer Vision. Springer, 2012, pp. 57–69.
  22. Q. Li, J. Wu, and Z. Tu, “Harvesting mid-level visual concepts from large-scale internet images,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2013, pp. 851–858.
  23. D. Lin, C. Lu, R. Liao, and J. Jia, “Learning important spatial pooling regions for scene classification,” 2014.
  24. R. Margolin, L. Zelnik-Manor, and A. Tal, “Otc: A novel local descriptor for scene classification,” in European Conference on Computer Vision. Springer, 2014, pp. 377–391.
  25. M. Marszatek and C. Schmid, “Accurate object localization with shape masks,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2007, pp. 1–8.
  26. F. Moosmann, E. Nowak, and F. Jurie, “Randomized clustering forests for image classification,” Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 9, pp. 1632–1646, 2008.
  27. E. Nowak, F. Jurie, and B. Triggs, “Sampling strategies for bag-of-features image classification,” in European Conference on Computer Vision. Springer, 2006, pp. 490–503.
  28. A. Oliva and A. Torralba, “Modeling the shape of the scene: A holistic representation of the spatial envelope,” International Journal of Computer Vision, vol. 42, no. 3, pp. 145–175, 2001.
  29. A. Opelt and A. Pinz, “Object localization with boosting and weak supervision for generic object recognition,” in SCIA. Springer, 2005, pp. 862–871.
  30. M. Oquab, L. Bottou, I. Laptev, J. Sivic et al., “Learning and transferring mid-level image representations using convolutional neural networks,” 2014.
  31. W. Ouyang, P. Luo, X. Zeng, S. Qiu, Y. Tian, H. Li, S. Yang, Z. Wang, Y. Xiong, C. Qian et al., “Deepid-net: multi-stage and deformable deep convolutional neural networks for object detection,” arXiv preprint arXiv:1409.3505, 2014.
  32. M. Pandey and S. Lazebnik, “Scene recognition and weakly supervised object localization with deformable part-based models,” in International Conference on Computer Vision. IEEE, 2011, pp. 1307–1314.
  33. S. N. Parizi, J. G. Oberlin, and P. F. Felzenszwalb, “Reconfigurable models for scene recognition,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 2775–2782.
  34. A. Quattoni and A. Torralba, “Recognizing indoor scenes,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2009.
  35. A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “Cnn features off-the-shelf: an astounding baseline for recognition,” arXiv preprint arXiv:1403.6382, 2014.
  36. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” 2014.
  37. F. Sadeghi and M. F. Tappen, “Latent pyramidal regions for recognizing scenes,” in European Conference on Computer Vision. Springer, 2012, pp. 228–241.
  38. A. Shabou and H. LeBorgne, “Locality-constrained and spatially regularized coding for scene categorization,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 3618–3625.
  39. N. Silberman and R. Fergus, “Indoor scene segmentation using a structured light sensor,” in International Conference on Computer Vision Workshops, 2011.
  40. S. Singh, A. Gupta, and A. A. Efros, “Unsupervised discovery of mid-level discriminative patches,” in European Conference on Computer Vision. Springer, 2012, pp. 73–86.
  41. J. Sun and J. Ponce, “Learning discriminative part detectors for image classification and cosegmentation,” in International Conference on Computer Vision. IEEE, 2013, pp. 3400–3407.
  42. D. Tao, L. Jin, Z. Yang, and X. Li, “Rank preserving sparse learning for kinect based scene classification,” IEEE Transactions on Cybernetics, vol. 43, no. 5, p. 1406, 2013.
  43. L. Torresani, M. Szummer, and A. Fitzgibbon, “Efficient object category recognition using classemes,” in European Conference on Computer Vision. Springer, 2010, pp. 776–789.
  44. T. Tuytelaars and C. Schmid, “Vector quantizing feature space with a regular lattice,” in International Conference on Computer Vision. IEEE, 2007, pp. 1–8.
  45. A. Vedaldi and B. Fulkerson, “VLFeat: An open and portable library of computer vision algorithms,” http://www.vlfeat.org/, 2008.
  46. H. Wang, M. M. Ullah, A. Klaser, I. Laptev, C. Schmid et al., “Evaluation of local spatio-temporal features for action recognition,” in British Machine Vision Conference, 2009.
  47. J. Wu and J. M. Rehg, “Beyond the euclidean distance: Creating effective visual codebooks using the histogram intersection kernel,” in International Conference on Computer Vision. IEEE, 2009, pp. 630–637.
  48. J. Wu and J. M. Rehg, “Centrist: A visual descriptor for scene categorization,” Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1489–1501, 2011.
  49. J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, “Sun database: Large-scale scene recognition from abbey to zoo,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2010, pp. 3485–3492.
  50. J. Yao, S. Fidler, and R. Urtasun, “Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 702–709.
  51. J. Yuan, M. Yang, and Y. Wu, “Mining discriminative co-occurrence patterns for visual recognition,” in International Conference on Computer Vision and Pattern Recognition. IEEE, 2011, pp. 2777–2784.
  52. M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision. Springer, 2014, pp. 818–833.
  53. M. D. Zeiler, G. W. Taylor, and R. Fergus, “Adaptive deconvolutional networks for mid and high level feature learning,” in International Conference on Computer Vision. IEEE, 2011, pp. 2018–2025.
  54. Y. Zheng, Y.-G. Jiang, and X. Xue, “Learning hybrid part filters for scene recognition,” in European Conference on Computer Vision. Springer, 2012, pp. 172–185.
  55. J. Zhu, L.-J. Li, L. Fei-Fei, and E. P. Xing, “Large margin learning of upstream scene understanding models,” in Advances in Neural Information Processing Systems, 2010, pp. 2586–2594.