Visual descriptors for content-based retrieval of remote sensing images
In this paper we present an extensive evaluation of visual descriptors for the content-based retrieval of remote sensing (RS) images. The evaluation includes global hand-crafted, local hand-crafted, and Convolutional Neural Network (CNN) features coupled with four different content-based image retrieval schemes. We conducted all the experiments on two publicly available datasets: the 21-class UC Merced Land Use/Land Cover (LandUse) dataset and the 19-class High-resolution Satellite Scene dataset (SceneSat). The content of RS images can be quite heterogeneous, ranging from images containing fine-grained textures, to coarse-grained ones, to images containing objects. It is therefore not obvious which descriptor should be employed in this domain to describe images having such variability. Results demonstrate that CNN-based features perform better than both global and local hand-crafted features regardless of the retrieval scheme adopted. Features extracted from SatResNet-50, a residual CNN suitably fine-tuned on the RS domain, show much better performance than a residual CNN pre-trained on multimedia scene and object images. Features extracted from NetVLAD, a CNN that combines CNN and local features, work better than other CNN solutions on those images that contain fine-grained textures and objects.
Keywords: Content-Based Image Retrieval (CBIR); Visual Descriptors; Convolutional Neural Networks (CNNs); Relevance Feedback (RF); Active Learning (AL); Remote Sensing (RS)
The recent availability of a large amount of remote sensing (RS) images is boosting the design of systems for their management. A conventional RS image management system usually exploits high-level features, such as textual annotations and metadata, to index the images Datta et al. (2008). In recent years, researchers have focused their attention on systems that exploit low-level features extracted from images for their automatic indexing and retrieval Jain and Healey (1998). These systems are known as Content-Based Image Retrieval (CBIR) systems and have been demonstrated to be very useful in the RS domain Demir and Bruzzone (2015); Aptoula (2014); Ozkan et al. (2014); Yang and Newsam (2013); Zajić et al. (2007).
CBIR systems allow users to search and retrieve images that are similar to a given query image Smeulders et al. (2000); Datta et al. (2008). Usually their performance strongly depends on the effectiveness of the features exploited for representing the visual content of the images Smeulders et al. (2000). The content of RS images can be quite heterogeneous, ranging from images containing fine-grained textures, to coarse-grained ones, to images containing objects Yang and Newsam (2010); Dai and Yang (2011). It is therefore not obvious which descriptor should be employed in this domain to describe images having such variability.
In this paper we compare several visual descriptors in combination with four different retrieval schemes. Such descriptors can be grouped into two classes. The first class includes traditional global hand-crafted descriptors that were originally designed for image analysis, and local hand-crafted features that were originally designed for object recognition. The second class includes features that correspond to intermediate representations of Convolutional Neural Networks (CNNs) trained for generic object and/or scene and RS image recognition.
To reduce the influence of the retrieval scheme on the evaluation of the features, we investigated the features coupled with four different image retrieval schemes. The first one, which is also the simplest, is a basic image retrieval system that takes one image as input query and returns a list of images ordered by their degree of feature similarity. The second and the third ones, named pseudo and manual Relevance Feedback (RF), extend the basic approach by expanding the initial query. The pseudo-RF scheme uses the images most similar to the initial query to re-query the image database. The final result is obtained by combining the results of each single query. In the manual RF, the set of relevant images is suggested by the user, who evaluates the result of the initial query. The last scheme considered is named active-learning-based RF Demir and Bruzzone (2015). It exploits Support Vector Machines (SVMs) to classify relevant and non-relevant images on the basis of the user feedback.
For the sake of completeness, for the first three retrieval schemes we considered different measures of similarity, namely Euclidean, Cosine, Manhattan, and Chi-square distances, while for the active-learning-based RF scheme we considered the histogram intersection as similarity measure, as proposed by the original authors Demir and Bruzzone (2015).
We conducted all the experiments on two publicly available datasets: the 21-class UC Merced Land Use/Land Cover dataset Yang and Newsam (2010) (LandUse) and the 19-class High-resolution Satellite Scene dataset Dai and Yang (2011) (SceneSat). Evaluations exploit several computational measures in order to quantify the effectiveness of the features. To make the experiments replicable, we made publicly available all the computed visual descriptors as well as the scripts for evaluating all the image retrieval schemes.
The rest of the paper is organized as follows: Section 2 reviews the most relevant visual descriptors and retrieval schemes; Section 3 describes the data, visual descriptors, retrieval schemes evaluated and the experimental setup; Section 4 reports and analyzes the experimental results; finally, Section 5 presents our final considerations and discusses some new directions for our future research.
2 Background and Related Works
The Indexing module, also called feature extraction, computes the visual descriptors that characterize the image content. Given an image, these features are usually pre-computed and stored in a database of features;
The Retrieval module, given a query image, finds the images in the database that are most similar by comparing the corresponding visual descriptors.
The Visualization module shows the images that are most similar to a given query image ordered by the degree of similarity.
The Relevance Feedback module makes it possible to select relevant images from the subset of images returned after an initial query. This selection can be given manually by a user or automatically by the system.
A huge variety of features has been proposed in the literature for describing visual content. They are often divided into hand-crafted features and learned features. Hand-crafted descriptors are features extracted using a manually predefined algorithm based on expert knowledge. Learned descriptors are features extracted using Convolutional Neural Networks (CNNs).
Global hand-crafted features describe an image as a whole in terms of color, texture and shape distributions Mirmehdi, Xie, and Suri (2009). Some notable examples of global features are color histograms Novak, Shafer et al. (1992), spatial histograms Wang, Wu, and Yang (2010), Gabor filters Manjunath and Ma (1996), co-occurrence matrices Arvis et al. (2004); Haralick (1979), Local Binary Patterns (LBP) Ojala, Pietikäinen, and Mäenpää (2002), the Color and Edge Directivity Descriptor (CEDD) Chatzichristofis and Boutalis (2008), Histograms of Oriented Gradients (HOG) Junior et al. (2009), morphological operators like granulometries Bosilj et al. (2016); Aptoula (2014); Hanbury, Kandaswamy, and Adjeroh (2005), the Dual Tree Complex Wavelet Transform (DT-CWT) Bianconi et al. (2011); Barilla and Spann (2008) and GIST Oliva and Torralba (2001). Readers who wish to explore the subject further can refer to the following papers: Rui, Huang, and Chang (1999); Deselaers, Keysers, and Ney (2008); Liu and Yang (2013); Veltkamp, Burkhardt, and Kriegel (2013).
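As an illustration of how simple a global descriptor can be, the following sketch computes a concatenated per-channel RGB histogram; the bin count and the normalization are chosen for illustration and are not necessarily the exact variant used in the experiments:

```python
import numpy as np

def rgb_histogram(image, bins=256):
    """Global color descriptor: per-channel histograms, concatenated.

    `image` is an H x W x 3 uint8 array; the result is a single
    768-dimensional vector (256 bins per channel), L2-normalized.
    """
    channels = []
    for c in range(3):
        h, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        channels.append(h.astype(np.float64))
    v = np.concatenate(channels)
    return v / (np.linalg.norm(v) + 1e-12)

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
descriptor = rgb_histogram(img)
```

Global descriptors of this kind are computed once per image and stored in the database of features by the indexing module.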
Local hand-crafted descriptors like the Scale Invariant Feature Transform (SIFT) Lowe (2004); Bianco et al. (2015) provide a way to describe salient patches around properly chosen key points within the images. The dimension of the feature vector depends on the number of chosen key points in the image. A large number of key points can generate large feature vectors that can be difficult to handle in a large-scale image retrieval system. The most common approach to reduce the size of feature vectors is the Bag-of-Visual-Words (BoVW) Sivic and Zisserman (2003); Yang and Newsam (2010). This approach has shown excellent performance not only in image retrieval applications Deselaers, Keysers, and Ney (2008) but also in object recognition Grauman and Leibe (2010), image classification Csurka et al. (2004) and annotation Tsai (2012). The underlying idea is to quantize local descriptors into visual words by clustering. Words are then defined as the centers of the learned clusters and are representative of several similar local regions. Given an image, for each key point the corresponding local descriptor is mapped to the most similar visual word. The final feature vector of the image is the histogram of its visual words.
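The BoVW encoding described above can be sketched as follows; the codebook here is random data standing in for k-means centroids learned from local descriptors of an external image set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: in practice these 1024 visual words are k-means
# centroids learned from SIFT descriptors of an external image collection.
codebook = rng.normal(size=(1024, 128))

def bovw_histogram(local_descriptors, codebook):
    """Map each local descriptor to its nearest visual word and return
    the L2-normalized histogram of visual-word occurrences."""
    # squared Euclidean distances via ||x||^2 + ||c||^2 - 2 x.c
    d2 = ((local_descriptors ** 2).sum(1)[:, None]
          + (codebook ** 2).sum(1)[None, :]
          - 2.0 * local_descriptors @ codebook.T)
    words = d2.argmin(axis=1)                 # nearest word per key point
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

sift_like = rng.normal(size=(200, 128))       # stand-in for 200 SIFT descriptors
h = bovw_histogram(sift_like, codebook)
```

Whatever the number of key points, the image is thus summarized by a fixed-length vector whose size equals the codebook size.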
CNNs are a class of learnable architectures used in many domains such as image recognition, image annotation, image retrieval etc. Schmidhuber (2015). CNNs are usually composed of several layers of processing, each involving linear as well as non-linear operators, that are learned jointly, in an end-to-end manner, to solve a particular task. A typical CNN architecture for image classification consists of one or more convolutional layers followed by one or more fully connected layers. The result of the last fully connected layer is the CNN output. The number of output nodes is equal to the number of image classes Krizhevsky, Sutskever, and Hinton (2012).
A CNN that has been trained to solve a given task can also be adapted to solve a different task. In practice, very few people train an entire CNN from scratch, because it is relatively rare to have a dataset of sufficient size. Instead, it is common to take a CNN that is pre-trained on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories Deng et al. (2009)), and then use it either as an initialization or as a fixed feature extractor for the task of interest Razavian et al. (2014); Vedaldi and Lenc (2014). In the latter case, given an input image, the pre-trained CNN performs all the multilayered operations and the corresponding feature vector is the output of one of the fully connected layers Vedaldi and Lenc (2014). This use of CNNs has been demonstrated to be very effective in many pattern recognition applications Razavian et al. (2014).
A basic retrieval scheme takes as input the visual descriptor corresponding to the query image provided by the user and computes the similarity between this descriptor and all the visual descriptors in the database of features. As a result of the search, a ranked list of images is returned to the user. The list is ordered by degree of similarity, which can be calculated in several ways Smeulders et al. (2000): Euclidean distance (the most used), Cosine similarity, Manhattan distance, Chi-square distance, etc. Brinke, Squire, and Bigelow (2004).
2.3 Relevance Feedback
In some cases visual descriptors are not able to completely represent the semantic content of the image. Consequently, the result of a CBIR system might not be completely satisfactory. One way to improve the performance is to allow the user to better specify their information need by expanding the initial query with other relevant images Rui et al. (1998); Hong, Tian, and Huang (2000); Zhou and Huang (2003); Li and Allinson (2013). Once the result of the initial query is available, the feedback module makes it possible to automatically or manually select a subset of relevant images. In the case of automatic relevance feedback (pseudo-relevance feedback) Baeza-Yates, Ribeiro-Neto et al. (1999), the top images retrieved are considered relevant and used to expand the query. In the case of manual relevance feedback (explicit relevance feedback (RF)) Baeza-Yates, Ribeiro-Neto et al. (1999), it is the user who manually selects relevant images from the results of the initial query. In both cases, the relevance feedback process can be iterated several times to better capture the information need. Given the initial query image and the set of relevant images, however they are selected, the feature extraction module computes the corresponding visual descriptors and the corresponding queries are performed individually. The final set of images is then obtained by combining the ranked sets of images that are retrieved. There are several alternative ways in which the relevance feedback could be implemented to expand the initial query. Readers who wish to explore this topic further can refer to the following papers: Zhou and Huang (2003); Li and Allinson (2013); Rui and Huang (2001).
The performance of the system when relevance feedback is used strongly depends on the quality of the results achieved after the initial query. A system using effective features for indexing returns a high number of relevant images in the first ranking positions. This makes the pseudo-relevance feedback effective and, in the case of manual relevance feedback, it makes it easier for the user to select relevant images within the result set.
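A minimal sketch of the pseudo-RF expansion follows; the distance-averaging fusion rule is an assumption here, since several combination strategies for the individual rankings are possible:

```python
import numpy as np

def pseudo_rf_ranking(query, db, k=3):
    """Pseudo relevance feedback sketch: treat the top-k results of the
    initial query as relevant, re-query with each of them, and fuse the
    individual rankings by averaging distances (one possible rule)."""
    def dists(q):
        return np.linalg.norm(db - q, axis=1)

    initial = np.argsort(dists(query))[:k]        # pseudo-relevant images
    expanded = [query] + [db[i] for i in initial]
    combined = np.mean([dists(q) for q in expanded], axis=0)
    return np.argsort(combined)                   # best first

# Toy 2-D "descriptors": three images near the query, one far away.
db = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.0, 0.2]])
ranking = pseudo_rf_ranking(np.array([0.05, 0.0]), db, k=2)
```

Manual RF follows the same expansion logic, except that the set of relevant images is chosen by the user rather than taken from the top of the initial ranking.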
Although there are several examples of manual RF in the literature Thomee and Lew (2012); Ciocca and Schettini (1999); Ciocca, Gagliardi, and Schettini (2001), the human labeling task is extremely tedious and time consuming, so these schemes are not practical and efficient in a real scenario, especially when huge archives of images are considered. Apart from pseudo-RF, other alternatives to the manual RF approach are hybrid systems, such as systems based on supervised machine learning Demir and Bruzzone (2015); Pedronette, Calumby, and Torres (2015). This learning method aims at finding the most informative images in the archive that, when annotated and included in the set of relevant and irrelevant images (i.e., the training set), can significantly improve the retrieval performance Demir and Bruzzone (2015); Ferecatu and Boujemaa (2007). The Active-Learning-based RF scheme presented by Demir et al. Demir and Bruzzone (2015) is an example of a hybrid scheme. Given a query, the user selects a small number of relevant and not relevant images that are used as training examples to train a binary classifier based on Support Vector Machines. The system iteratively proposes images to the user, who assigns the relevance feedback. At each RF iteration the classifier is re-trained using a set of images composed of the initial images and the images from the relevance feedback provided by the user. After some RF iterations, the classifier is able to retrieve images that are similar to the query with a higher accuracy with respect to the initial query. At each RF iteration, the system suggests images to the user by following this strategy: 1) the system selects the most uncertain (i.e. ambiguous) images by taking the ones closest to the classifier hyperplane; 2) among these, the system selects the most diverse images from the highest density regions of the feature space.
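The core of one active-learning RF iteration, SVM training on user feedback plus uncertainty sampling, can be sketched as follows; a linear kernel replaces the histogram-intersection kernel of the original scheme for brevity, and the data are synthetic:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy feature database: two well-separated clusters stand in for
# relevant / non-relevant images (purely illustrative data).
db = np.vstack([rng.normal(0.0, 1.0, (50, 8)),     # "relevant"-like
                rng.normal(6.0, 1.0, (50, 8))])    # "not relevant"-like

# Simulated user feedback after the initial query.
labelled = np.array([0, 1, 2, 50, 51, 52])
labels = np.array([1, 1, 1, 0, 0, 0])              # 1 = relevant

clf = SVC(kernel="linear").fit(db[labelled], labels)

# Step 1: uncertainty sampling - images closest to the hyperplane are
# the most ambiguous ones and are shown to the user next.
margins = np.abs(clf.decision_function(db))
most_uncertain = np.argsort(margins)[:5]

# Final retrieval: rank all images by signed distance to the hyperplane,
# most confidently relevant first.
ranking = np.argsort(-clf.decision_function(db))
```

Step 2 of the strategy (diversity selection among the uncertain images) would further filter `most_uncertain`, and is omitted here.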
3 Methods and materials
Given an image database D composed of N images, the most relevant images of D to a given query are the images whose feature vectors have the smallest distances from the feature vector extracted from the query image. Let us consider q and x as the feature vectors extracted from the query image and from a generic image of D respectively. The distance between two vectors can be calculated by using several distance functions; here we considered: Euclidean, Cosine, Manhattan, and Chi-square.
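The four distance functions can be written compactly; the Chi-square variant shown is the symmetric histogram form, an assumption, and the small epsilon terms guard against empty bins and zero-norm vectors:

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(((x - y) ** 2).sum())

def manhattan(x, y):
    return np.abs(x - y).sum()

def cosine_distance(x, y):
    return 1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def chi_square(x, y):
    # Well suited to histogram-like descriptors.
    return 0.5 * (((x - y) ** 2) / (x + y + 1e-12)).sum()

q = np.array([0.5, 0.5, 0.0])
x = np.array([0.4, 0.4, 0.2])
scores = {f.__name__: f(q, x)
          for f in (euclidean, manhattan, cosine_distance, chi_square)}
```

Ranking the database then amounts to computing one of these distances between q and every stored descriptor and sorting the results in ascending order.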
In this work we evaluated the visual descriptors described in Sec. 3.1 coupled with the retrieval schemes described in Sec. 3.2. We conducted all the experiments on two publicly available datasets, described in Sec. 3.3, for which the ground truth is known.
3.1 Visual descriptors
In this work we compared visual descriptors for the content-based retrieval of remote sensing images. We considered a few representative descriptors selected from global hand-crafted, local hand-crafted, and Convolutional Neural Network approaches. In some cases we considered both color and gray-scale images. The gray-scale image is obtained from the RGB channels through the standard luma conversion L = 0.299R + 0.587G + 0.114B. All feature vectors have been normalized by dividing them by their L2-norm.
Global hand-crafted descriptors
256-dimensional gray-scale histogram (Hist L) Novak, Shafer et al. (1992);
512-dimensional Hue and Value marginal histogram obtained from the HSV color representation of the image (Hist H V) Novak, Shafer et al. (1992);
768-dimensional RGB and rgb marginal histograms (Hist RGB and Hist rgb) Pietikainen et al. (1996);
1536-dimensional spatial RGB histogram obtained from RGB histograms calculated in different parts of the image (Spatial Hist RGB) Novak, Shafer et al. (1992);
144-dimensional Color and Edge Directivity Descriptor (CEDD) features. This descriptor uses a fuzzy version of the five digital filters proposed by the MPEG-7 Edge Histogram Descriptor (EHD), forming 6 texture areas. CEDD uses 2 fuzzy systems that map the colors of the image into a 24-color custom palette;
512-dimensional Gist features obtained considering eight orientations and four scales for each channel (Gist RGB) Oliva and Torralba (2001);
264-dimensional opponent Gabor feature vector extracted as Gabor features from several inter/intra channel combinations: monochrome features extracted from each channel separately and opponent features extracted from couples of colors at different frequencies (Opp. Gabor RGB) Jain and Healey (1998);
580-dimensional Histogram of Oriented Gradients feature vector Junior et al. (2009). Nine histograms with nine bins are concatenated to obtain the final feature vector (HoG);
78-dimensional feature vector obtained calculating morphological operators (granulometries) at four angles and for each color channel (Granulometry) Hanbury, Kandaswamy, and Adjeroh (2005);
18-dimensional Local Binary Patterns (LBP) feature vector for each channel. We considered LBP applied to gray images and to color images represented in RGB Mäenpää and Pietikäinen (2004). We selected the LBP with a circular neighbourhood of radius 2 and 16 elements, and 18 uniform and rotation invariant patterns. We set and for the LandUse and SceneSat datasets respectively (LBP L and LBP RGB).
Local hand-crafted descriptors
SIFT: We considered four variants of the Bag of Visual Words (BoVW) representation of 128-dimensional Scale Invariant Feature Transform (SIFT) descriptors calculated on the gray-scale image. For each variant, we built a codebook of 1024 visual words by exploiting images from external sources.
The four variants are:
SIFT: 1024-dimensional BoVW of SIFT descriptors extracted from regions at key points chosen using the SIFT detector (SIFT);
Dense SIFT: 1024-dimensional BoVW of SIFT descriptors extracted from regions at key points chosen from a dense grid.
Dense SIFT (VLAD): 25600-dimensional vector of locally aggregated descriptors (VLAD) Cimpoi et al. (2014).
Dense SIFT (FV): 40960-dimensional Fisher vectors (FV) of locally aggregated descriptors Jégou et al. (2010).
LBP: We considered the Bag of Visual Words (BoVW) representation of Local Binary Patterns descriptor calculated on each channel of the RGB color space separately and then concatenated. LBP has been extracted from regions at given key points sampled from a dense grid every 16 pixels. We considered the LBP with a circular neighbourhood of radius 2 and 16 elements, and 18 uniform and rotation invariant patterns Cusano, Napoletano, and Schettini (2015). We set and for the LandUse and SceneSat respectively. Also in this case the codebook was built using an external dataset (Dense LBP RGB).
The CNN-based features have been obtained as the intermediate representations of deep convolutional neural networks originally trained for scene and object recognition. The networks are used to generate a visual descriptor by removing the final softmax nonlinearity and the last fully-connected layer. We selected the most representative CNN architectures in the state of the art Vedaldi and Lenc (2014); Szegedy et al. (2015); He et al. (2016); Arandjelovic et al. (2016), considering different accuracy/speed trade-offs. All the CNNs have been trained on the ILSVRC-2015 dataset Russakovsky et al. (2015) using the same protocol as in Krizhevsky, Sutskever, and Hinton (2012). In particular we considered 4096-, 2048-, 1024- and 128-dimensional feature vectors as follows Razavian et al. (2014); Marmanis et al. (2016):
BVLC AlexNet (BVLC AlexNet): this is the AlexNet trained on ILSVRC 2012 Krizhevsky, Sutskever, and Hinton (2012).
Medium CNN (Vgg M-2048-1024-128): three modifications of the Vgg M network with a lower-dimensional last fully-connected layer. In particular we used feature vectors of size 2048, 1024 and 128 Chatfield et al. (2014).
Vgg Very Deep 19 and 16 layers (Vgg VeryDeep 16 and 19): the configuration of these networks has been obtained by increasing the depth to 16 and 19 layers, which results in substantially deeper networks than the previous ones Simonyan and Zisserman (2014).
GoogLeNet Szegedy et al. (2015) is a 22-layer deep network architecture that has been designed to improve the utilization of the computing resources inside the network.
ResNet 50 is a Residual Network with 50 layers. The residual learning framework is designed to ease the training of networks that are substantially deeper than those used previously He et al. (2016).
ResNet 101 is a Residual Network made of 101 layers He et al. (2016).
ResNet 152 is a Residual Network made of 152 layers He et al. (2016).
Besides traditional CNN architectures, we evaluated NetVLAD Arandjelovic et al. (2016). This architecture is a combination of a Vgg VeryDeep 16 Simonyan and Zisserman (2014) and a VLAD layer Delhumeau et al. (2013). The network has been trained for place recognition using a subset of a large dataset of panoramic images depicting the same places from different viewpoints over time, taken from the Google Street View Time Machine Torii et al. (2013).
To further evaluate the power of CNN-based descriptors, we fine-tuned a CNN on the remote sensing domain. We chose ResNet-50, which represents a good trade-off between depth and performance. This CNN has been demonstrated to be very effective on the ILSVRC 2015 (ImageNet Large Scale Visual Recognition Challenge) validation set, with a top-1 recognition accuracy of about 80% He et al. (2016).
For the fine-tuning procedure we considered a very recent RS database Xia et al. (2017), named AID, which is an aerial image dataset collected from Google Earth imagery. This dataset is made up of the following 30 aerial scene types: airport, bare land, baseball field, beach, bridge, center, church, commercial, dense residential, desert, farmland, forest, industrial, meadow, medium residential, mountain, park, parking, playground, pond, port, railway station, resort, river, school, sparse residential, square, stadium, storage tanks and viaduct. The AID dataset contains 10000 images within 30 classes, with about 200 to 400 samples of size 600 × 600 pixels in each class.
We did not train ResNet-50 from scratch on AID because the number of images per class is not sufficient. Instead, we started from the ResNet-50 pre-trained on the ILSVRC 2015 image classification dataset Russakovsky et al. (2015). From the AID dataset we selected 20 images for each class for testing and used the rest for training. During the fine-tuning stage each image has been resized to a fixed resolution and a random square crop has been taken. We augmented the data with horizontal flipping. During the test stage we considered a single central crop from the resized image.
The ResNet-50 has been fine-tuned via stochastic gradient descent with mini-batches of 16 images. We set the initial learning rate to 0.001, with a learning rate update every 2K iterations. The network has been trained within the Caffe Jia et al. (2014) framework on a PC equipped with an NVIDIA Tesla K40 GPU. The classification accuracy of the resulting SatResNet-50 fine-tuned on the AID dataset is 96.34% for Top-1 and 99.34% for Top-5.
In the following experiments, SatResNet-50 is used as a feature extractor. The activations of the neurons in the fully connected layer are used as features for the retrieval of remote sensing images. The resulting feature vectors have 2048 components.
3.2 Retrieval schemes
We evaluated and compared three retrieval schemes exploiting different distance functions, namely Euclidean, Cosine, Manhattan, and Chi-square, and an active-learning-based RF scheme using the histogram intersection as distance measure. In particular, we considered:
A basic IR. This scheme takes a query as input and outputs a list of ranked similar images.
Pseudo-RF. This scheme considers the first k images returned after the initial query as relevant. We considered different values of k ranging between 1 and 10.
Manual RF. Since the ground truth is known, we simulated the human interaction by taking the first k actually relevant images from the result set obtained after the initial query. We evaluated performance at different values of k ranging between 1 and 10.
Active-Learning-based RF. We considered the Active-Learning-based RF scheme presented by Demir et al. Demir and Bruzzone (2015). This RF scheme requires interaction with the user, which we simulated by taking relevant and not relevant images from the ground truth.
3.3 Remote Sensing Datasets
The 21-Class Land Use/Land Cover Dataset (LandUse) is a dataset of 2100 images of 21 land-use classes selected from aerial orthoimagery with a pixel resolution of 30 cm Yang and Newsam (2010). The images were downloaded from the United States Geological Survey (USGS) National Map of several US regions.
The 19-Class Satellite Scene (SceneSat) dataset consists of 1005 images of 19 classes of satellite scenes collected from Google Earth.
Differences between LandUse and SceneSat
The datasets used for the evaluation are quite different in terms of image size and resolution. LandUse images are of size 256 × 256 pixels while SceneSat images are of size 600 × 600 pixels. Fig. 4 displays some images from the same category taken from the two datasets. It is quite evident that the images taken from the LandUse dataset are at a different zoom level with respect to the images taken from the SceneSat dataset. This means that objects in the LandUse dataset are more easily recognizable than the objects contained in the SceneSat dataset; see the samples of the harbour category in Fig. 4. The SceneSat images depict a larger land area than the LandUse images, which means that the SceneSat images have a more heterogeneous content than the LandUse images; see the samples from the harbour, residential area and parking categories reported in Fig. 2 and Fig. 3. Given these differences between the two datasets, we may expect the same visual descriptors to perform differently across datasets, see Sec. 4.
3.4 Retrieval measures
Image retrieval performance has been assessed by using three state-of-the-art measures: the Average Normalized Modified Retrieval Rank (ANMRR), Precision (Pr) and Recall (Re), and Mean Average Precision (MAP) Manning, Raghavan, and Schütze (2008); Manjunath et al. (2001). We also adopted the Equivalent Query Cost (EQC) to measure the cost of making a query independently of the computer architecture.
Average Normalized Modified Retrieval Rank (ANMRR)
The ANMRR measure is the MPEG-7 retrieval effectiveness measure commonly accepted by the CBIR community Manjunath et al. (2001) and largely used by recent works on content-based remote sensing image retrieval Ozkan et al. (2014); Aptoula (2014); Yang and Newsam (2013). This metric considers the number and rank of the relevant (ground-truth) items that appear in the top K(q) images retrieved, and it overcomes the problem related to queries with varying ground-truth set sizes. The ANMRR ranges from zero to one, with lower values indicating better retrieval performance, and is defined as follows:

ANMRR = (1/NQ) * sum_{q=1..NQ} NMRR(q),   with
NMRR(q) = (AVR(q) - 0.5[1 + NG(q)]) / (1.25 K(q) - 0.5[1 + NG(q)]),

where NQ indicates the number of queries performed, NG(q) is the size of the ground-truth set for query q, and K(q) is a constant penalty assigned to items with a higher rank; K(q) is commonly chosen to be 2 NG(q). AVR(q) is the Average Rank for a single query and is defined as

AVR(q) = (1/NG(q)) * sum_{k=1..NG(q)} Rank(k),

where R(k) is the rank at which the k-th ground-truth item is retrieved. Rank(k) is defined as:

Rank(k) = R(k) if R(k) <= K(q),   Rank(k) = 1.25 K(q) otherwise.
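The ANMRR computation can be sketched in a few lines, assuming the common choice K(q) = 2 NG(q):

```python
import numpy as np

def anmrr(rankings, ground_truth_sizes):
    """ANMRR over a set of queries.

    `rankings[q]` holds the (1-based) ranks at which the ground-truth
    items of query q were retrieved; K(q) is taken as 2 * NG(q).
    """
    nmrrs = []
    for ranks, ng in zip(rankings, ground_truth_sizes):
        k = 2 * ng
        penalized = [r if r <= k else 1.25 * k for r in ranks]
        avr = sum(penalized) / ng
        nmrr = (avr - 0.5 * (1 + ng)) / (1.25 * k - 0.5 * (1 + ng))
        nmrrs.append(nmrr)
    return float(np.mean(nmrrs))
```

A perfect retrieval (all ground-truth items in the first NG(q) positions) yields 0, while a query retrieving no ground-truth item within K(q) yields 1.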
Precision and Recall
Precision is the fraction of the retrieved images that are relevant to the query:

Pr = |{relevant images} ∩ {retrieved images}| / |{retrieved images}|.

It is often evaluated at a given cut-off rank k, considering only the k topmost results returned by the system. This measure is called precision at k, or Pr@k.
Recall is the fraction of the images relevant to the query that are successfully retrieved:

Re = |{relevant images} ∩ {retrieved images}| / |{relevant images}|.
In a ranked retrieval context, precision and recall values can be plotted to give the interpolated precision-recall curve Manning, Raghavan, and Schütze (2008). This curve is obtained by plotting the interpolated precision measured at the 11 recall levels of 0.0, 0.1, 0.2, ..., 1.0. The interpolated precision at a certain recall level r is defined as the highest precision found for any recall level r' >= r:

Pr_interp(r) = max over r' >= r of Pr(r').
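The 11-point interpolation can be sketched as follows (the precision/recall values are toy data for illustration):

```python
import numpy as np

def interpolated_precision(precisions, recalls, levels=np.linspace(0, 1, 11)):
    """Interpolated precision at fixed recall levels: for each level r,
    the highest precision observed at any recall r' >= r."""
    precisions = np.asarray(precisions, dtype=float)
    recalls = np.asarray(recalls, dtype=float)
    return np.array([
        precisions[recalls >= r].max() if np.any(recalls >= r) else 0.0
        for r in levels
    ])

# e.g. three relevant items found among five retrieved results
p = [1.0, 0.5, 0.67, 0.5, 0.6]
r = [0.33, 0.33, 0.67, 0.67, 1.0]
curve = interpolated_precision(p, r)
```

Averaging such curves over all queries gives the precision-recall plots reported in Sec. 4.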
Mean Average Precision (MAP)
Given a set of Q queries, Mean Average Precision is defined as

MAP = (1/Q) * sum_{q=1..Q} AveP(q),

where the average precision for each query is defined as

AveP = ( sum_{k=1..n} Pr(k) * rel(k) ) / (number of relevant images),

where k is the rank in the sequence of retrieved images, n is the number of retrieved images, Pr(k) is the precision at cut-off k in the list, and rel(k) is an indicator function equal to 1 if the item at rank k is a relevant image, and zero otherwise.
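A sketch of MAP from binary relevance lists follows; by default it normalizes by the number of relevant images actually retrieved, so the total number of relevant images should be passed via `n_relevant` when not all of them appear in the list:

```python
def average_precision(relevance, n_relevant=None):
    """AP for one query. `relevance` is the binary rel(k) list ordered
    by rank; `n_relevant` is the total number of relevant images."""
    hits, s = 0, 0.0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            s += hits / k            # Pr(k) at each relevant position
    total = n_relevant if n_relevant is not None else hits
    return s / total if total else 0.0

def mean_average_precision(relevance_lists):
    return sum(average_precision(r) for r in relevance_lists) / len(relevance_lists)
```

For example, the list [1, 0, 1] gives AP = (1/1 + 2/3) / 2.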
Equivalent Query Cost
Several previous works, such as Aptoula (2014), report a table comparing the computational time needed to execute a query when different indexing techniques are used. This comparison cannot be replicated because the computational time strictly depends on the computer architecture. To overcome this problem, we defined the Equivalent Query Cost (EQC), which measures the computational cost needed to execute a given query independently of the computer architecture. This measure is based on the fact that the calculation of the distance between two visual descriptors is linear in the number of components, and on the definition of a basic cost. The basic cost is defined as the amount of computational effort needed to execute a single query over the entire database when a visual descriptor of length n0 is used as indexing technique. The EQC of a generic visual descriptor of length n can be obtained as follows:

EQC = floor(n / n0),

where floor(·) stands for the integer part of the number, while n0 is set to 5, which corresponds to the length of the co-occurrence matrices, the shortest descriptor evaluated in the experiments presented in this work.
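Since EQC depends only on the descriptor length, it is easy to check against the descriptor sizes listed in Sec. 3.1; the values below match the EQC column of the result tables:

```python
def eqc(n, n0=5):
    """Equivalent Query Cost: integer part of n / n0, where n0 = 5 is the
    length of the shortest descriptor considered (co-occurrence matrices)."""
    return n // n0

costs = {
    "Hist. H V": eqc(512),          # 102
    "Opp. Gabor RGB": eqc(264),     # 52
    "Dense SIFT (FV)": eqc(40960),  # 8192
    "Vgg M 2048": eqc(2048),        # 409
}
```

Lower-dimensional descriptors such as Vgg M 128 are therefore two orders of magnitude cheaper to query than Fisher-vector encodings.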
4 Experimental results

Table 1. Results on the LandUse dataset (basic retrieval scheme, Euclidean distance).

| Descriptor | ANMRR | MAP | Pr@5 | Pr@10 | Pr@50 | Pr@100 | Pr@1000 | EQC |
|---|---|---|---|---|---|---|---|---|
| Hist. H V | 0.781 | 15.98 | 54.22 | 43.49 | 23.41 | 16.84 | 6.27 | 102 |
| Spatial Hist. RGB | 0.808 | 14.36 | 37.70 | 31.13 | 19.09 | 14.62 | 5.95 | 307 |
| Opp. Gabor RGB | 0.744 | 18.76 | 53.81 | 44.89 | 26.18 | 19.69 | 6.99 | 52 |
| Dense LBP RGB | 0.744 | 19.01 | 60.10 | 51.89 | 29.12 | 20.30 | 6.33 | 204 |
| Dense SIFT (VLAD) | 0.649 | 28.01 | 74.93 | 65.25 | 38.20 | 28.10 | 7.18 | 5120 |
| Dense SIFT (FV) | 0.639 | 29.18 | 75.34 | 66.28 | 39.09 | 28.54 | 7.88 | 8192 |
| Vgg M 2048 | 0.388 | 53.16 | 85.04 | 80.26 | 62.77 | 50.14 | 9.52 | 409 |
| Vgg M 1024 | 0.400 | 51.66 | 84.43 | 79.41 | 61.40 | 48.88 | 9.50 | 204 |
| Vgg M 128 | 0.498 | 40.94 | 73.82 | 68.30 | 50.67 | 39.92 | 9.18 | 25 |
| Vgg VeryDeep 16 | 0.394 | 52.46 | 83.91 | 78.34 | 61.38 | 49.78 | 9.60 | 819 |
| Vgg VeryDeep 19 | 0.398 | 51.95 | 82.84 | 77.60 | 60.69 | 49.16 | 9.63 | 819 |
Table 2. Results on the SceneSat dataset (basic retrieval scheme, Euclidean distance).

| Descriptor | ANMRR | MAP | Pr@5 | Pr@10 | Pr@50 | Pr@100 | Pr@1000 | EQC |
|---|---|---|---|---|---|---|---|---|
| Hist. H V | 0.704 | 23.23 | 43.98 | 37.29 | 23.10 | 17.05 | 5.21 | 102 |
| Spatial Hist. RGB | 0.720 | 22.21 | 38.85 | 33.36 | 21.81 | 16.30 | 5.21 | 307 |
| Opp. Gabor RGB | 0.638 | 28.08 | 48.14 | 42.48 | 28.61 | 21.01 | 5.20 | 52 |
| Dense LBP RGB | 0.660 | 24.81 | 51.12 | 44.29 | 26.55 | 19.67 | 5.21 | 204 |
| Dense SIFT (VLAD) | 0.552 | 35.89 | 71.30 | 62.78 | 36.19 | 25.03 | 5.20 | 5120 |
| Dense SIFT (FV) | 0.518 | 39.44 | 72.34 | 64.69 | 38.84 | 27.23 | 5.20 | 8192 |
| Vgg M 2048 | 0.431 | 47.14 | 71.08 | 67.52 | 47.33 | 31.83 | 5.21 | 409 |
| Vgg M 1024 | 0.443 | 45.86 | 70.51 | 66.61 | 46.05 | 31.23 | 5.21 | 204 |
| Vgg M 128 | 0.551 | 34.54 | 59.30 | 54.08 | 36.05 | 25.65 | 5.20 | 25 |
| Vgg VeryDeep 16 | 0.440 | 46.18 | 70.67 | 66.71 | 46.22 | 31.46 | 5.20 | 819 |
| Vgg VeryDeep 19 | 0.455 | 44.34 | 69.17 | 64.65 | 44.84 | 30.66 | 5.20 | 819 |
4.1 Feature evaluation using the basic retrieval scheme
In this section we compare the visual descriptors listed in Sec. 3.1 using the basic retrieval scheme. To keep the results concise, we report only the experiments performed with the Euclidean distance. Given an image dataset, we used each image in turn as the query image and evaluated the results according to the metrics discussed above. In the case of the LandUse dataset we performed 2100 queries, while in the case of the SceneSat dataset we evaluated 1005 queries in total.
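The basic retrieval scheme can be sketched in a few lines: rank the whole database by Euclidean distance to the query descriptor, then score the ranking. The following is a toy illustration with hypothetical two-dimensional descriptors (function names are ours, not the paper's):

```python
import numpy as np

def rank_database(query, database):
    """Rank database images by Euclidean distance to the query descriptor."""
    distances = np.linalg.norm(database - query, axis=1)
    return np.argsort(distances)  # closest first

def precision_at_k(ranking, labels, query_label, k):
    """Fraction of relevant images (same class as the query) in the top k."""
    return float(np.mean(labels[ranking[:k]] == query_label))

# toy example: 6 images, 2 classes; the first image is used as the query
feats = np.array([[0., 0.], [0.1, 0.], [0., 0.1],
                  [1., 1.], [1.1, 1.], [1., 1.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
ranking = rank_database(feats[0], feats[1:])  # query excluded from the database
print(precision_at_k(ranking, labels[1:], labels[0], 2))  # -> 1.0
```

In the actual experiments, each image of the dataset plays the role of the query in turn and the per-query scores are averaged.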
The results obtained on the LandUse dataset are shown in Table 1, while those obtained on the SceneSat dataset are shown in Table 2. On the LandUse dataset, the best results are obtained by the CNN-based descriptors, in particular by the ResNet architectures and by SatResNet-50, the fine-tuned ResNet-50. The global hand-crafted descriptors have the lowest performance, with the co-occurrence matrices being the worst. The local hand-crafted descriptors achieve better results than the global hand-crafted ones but worse than the CNN-based descriptors. In particular, compared with Bag of Dense SIFT and DT-CWT, SatResNet-50 achieves an ANMRR that is lower by about 50% and precision values that are higher by about 50%; the same behavior can be observed at all the remaining precision levels. Notably, only the SatResNet-50 descriptor is capable of retrieving about 65% of the existing images of each class. On the SceneSat dataset the picture is similar: the best results are obtained by the CNN-based descriptors, in particular the ResNet architectures and SatResNet-50; the global hand-crafted descriptors perform worst, with the co-occurrence matrices at the bottom, and the local hand-crafted descriptors lie in between. Compared with Bag of Dense SIFT (FV), SatResNet-50 achieves an ANMRR that is lower by about 60% and precision values that are higher by about 30-50%. Also in this case, only SatResNet-50 is capable of retrieving about 70% of the existing images of each class.
The first columns of Tables 6 and 7 show the best performing visual descriptor for each remote sensing image class. For both the LandUse and SceneSat datasets, the CNN-based descriptors are the best in the retrieval of all classes. SatResNet-50 performs better than the other CNN architectures on most classes, apart from some classes containing objects rotated and translated on the image plane, where NetVLAD demonstrated to perform better. Looking at Fig. 5 it is interesting to note that NetVLAD, which combines CNN features with local-feature aggregation, works better on object-based classes and, more importantly, that SatResNet-50 clearly outperforms ResNet-50, demonstrating that adapting the network to the remote sensing domain helped to handle the heterogeneous content of remote sensing images.
Fig. 6 plots the interpolated 11-point precision-recall curves achieved by a selection of visual descriptors. Also in these experiments, the CNN-based descriptors clearly outperform the other descriptors. It is interesting to note that SatResNet-50 outperforms ResNet-50, confirming that the domain adaptation has been very effective, especially on the SceneSat dataset. This is mostly due to the fact that both the AID and SceneSat datasets are made of images taken from Google Earth, so their content is more similar. In contrast, the LandUse dataset is made of pictures taken from an aerial device, so its content is quite different in terms of resolution, as already discussed in Section 3.3.1.
Concerning the computational cost, the Bag of Dense SIFT (FV) is the most costly solution, with the worst cost-benefit trade-off. Right after it, the Vgg M is the next most costly descriptor, being about 200 times more costly than the DT-CWT, which is the best performing among the global hand-crafted descriptors.
One may prefer a less precise but less costly retrieval strategy and thus opt for the DT-CWT. Among the CNN-based descriptors, however, the Vgg M 128 achieves better results than the DT-CWT on both datasets, while being only six times more costly. In conclusion, the Vgg M 128 descriptor offers the best cost-benefit trade-off.
4.2 Feature evaluation using the pseudo-RF retrieval scheme
In the case of pseudo RF, we used the top k images retrieved after the initial query for re-querying the system. The computational cost of such a system is k times higher than the cost of a basic system.
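A minimal sketch of the pseudo-RF scheme: the top-k images of the initial ranking are assumed relevant and used to re-query the database, one database scan per re-query (hence the k-fold cost). We fuse the re-query results by averaging distances; the paper does not prescribe this exact fusion, so it is an illustrative assumption:

```python
import numpy as np

def pseudo_relevance_feedback(query, database, k=5):
    """Re-query the database with each of the top-k images returned by the
    initial query and fuse the results by averaging the distances. Each
    re-query is one scan of the database, so the overall cost is roughly
    k times that of the basic retrieval scheme."""
    d0 = np.linalg.norm(database - query, axis=1)
    top_k = np.argsort(d0)[:k]  # pseudo-relevant: assumed relevant, no user input
    d = np.mean([np.linalg.norm(database - database[i], axis=1) for i in top_k],
                axis=0)
    return np.argsort(d)

# toy example: two well-separated classes of descriptors
feats = np.array([[0., 0.], [0.1, 0.], [0., 0.1],
                  [1., 1.], [1.1, 1.], [1., 1.1]])
ranking = pseudo_relevance_feedback(np.array([0., 0.]), feats, k=2)
print(ranking[:3])  # the three class-0 descriptors are ranked first
```

Manual RF differs only in that the k re-query images are the first actually relevant ones confirmed by the user, rather than the blind top-k.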
Results obtained with k = 5 are shown in Tables 3(a) and 3(b) for the LandUse and SceneSat datasets respectively. In both cases, the pseudo RF scheme improves over the basic retrieval system whatever visual descriptor is employed. The CNN-based and local hand-crafted descriptors, which obtained the highest precision at level 5 when used in a basic system, show the largest improvement.
Figures 7(a) and (b) show the difference in MAP between the pseudo RF scheme and the basic retrieval scheme when the Vgg visual descriptor is employed. The value of k ranges from 0 (which corresponds to the basic system) to 10. The improvement in performance at k = 5 is clearly visible on both the LandUse and the SceneSat datasets.
4.3 Feature evaluation using the manual-RF retrieval scheme
In manual RF, we used the first k actually relevant images retrieved after the initial query for re-querying the system. The computational cost of such a system is k times higher than the cost of a basic system. The first five relevant images appear, in the worst case (co-occurrence matrix), within the top 50 images, while in the best case (SatResNet-50) they appear within the top 6 or 7 images (cf. Table 1).
Results obtained with k = 5 are shown in Tables 4(a) and 4(b) for the LandUse and SceneSat datasets respectively. In both cases, the manual RF scheme improves over both the basic retrieval and the pseudo RF systems. The CNN-based and local hand-crafted descriptors, which obtained the highest precision at level 5 when used in a basic system, again show the largest improvement.
Figures 7(a) and (b) also show the difference in MAP between the manual RF scheme and the basic retrieval scheme when the Vgg visual descriptor is employed; the value of k ranges from 0 (which corresponds to the basic system) to 10. For both datasets the improvement in performance at k = 5 is clearly visible. Moreover, the manual RF scheme with k equal to 1 achieves the same performance as the pseudo RF with k equal to 2.
4.4 Feature evaluation using the active-learning-based-RF retrieval scheme
We considered the Active-Learning-based RF scheme as presented by Demir and Bruzzone (2015). As suggested by the original authors, we used the following parameters: 10 RF iterations; an initial training set made of 2 relevant and 3 not relevant images; a fixed number of ambiguous and of diverse images selected at each iteration; and the histogram intersection as measure of similarity between feature vectors. The histogram intersection distance is defined as follows:

d(p, q) = 1 − Σ_{i=1}^{n} min(p_i, q_i),

where p and q are the feature vectors of two generic images and n is the size of the feature vector.
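A minimal sketch of the histogram intersection distance (assuming L1-normalised feature vectors, so that identical vectors give distance 0; the function name is ours):

```python
import numpy as np

def hist_intersection_distance(p, q):
    """Histogram intersection turned into a distance: one minus the sum of
    the element-wise minima of the two (L1-normalised) feature vectors."""
    return 1.0 - float(np.minimum(p, q).sum())

a = np.array([0.5, 0.5, 0.0])
b = np.array([0.25, 0.25, 0.5])
print(hist_intersection_distance(a, a))  # identical histograms -> 0.0
print(hist_intersection_distance(a, b))  # -> 0.5
```

Unlike the Euclidean distance, this measure only rewards mass that the two histograms share, bin by bin, which is why it is a natural choice for histogram-like descriptors.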
Results are shown in Tables 5(a) and 5(b) for the LandUse and SceneSat datasets respectively. On the LandUse dataset, this RF scheme improves over the other retrieval schemes for all the visual descriptors; for the CNN-based descriptors the improvement is about 20%. Surprisingly, on the SceneSat dataset, the Active-Learning-based RF scheme improves performance only for the hand-crafted descriptors and the most recent CNN architectures, such as ResNet and NetVLAD. In the best case, NetVLAD, the improvement is about 80%. It is very interesting to note that, for both datasets, the best performing descriptor is NetVLAD. This is mostly due to the fact that its feature vector is less sparse than those extracted from the other CNN architectures; the degree of sparseness of the feature vectors makes the Support Vector Machine employed in the Active-Learning-based RF scheme more or less effective.
The fourth columns of Tables 6 and 7 show the best performing visual descriptor for each remote sensing image class. In the case of the LandUse dataset, the best performing visual descriptors are the CNN-based ones, while in the case of the SceneSat dataset the best performing are the local hand-crafted descriptors, apart from a few classes.
| categories | image | basic IR | pseudo RF | manual RF | act. learn. RF |
| --- | --- | --- | --- | --- | --- |
| agricultural | | Vgg M | ResNet-101 | Vgg VeryDeep 19 | NetVLAD |
| forest | | SatResNet-50 | Vgg M | Vgg M | NetVLAD |
| categories | image | basic IR | pseudo RF | manual RF | act. learn. RF |
| --- | --- | --- | --- | --- | --- |
| Desert | | SatResNet-50 | SatResNet-50 | SatResNet-50 | Opp. Gabor RGB |
| features | basic IR ANMRR | EQC | pseudo RF ANMRR | EQC | manual RF ANMRR | EQC | act. learn. RF ANMRR | EQC | avg. rank |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vgg M 2048 | 0.410 | 409 | 0.367 | 2045 | 0.327 | 2045 | 0.375 | 8180 | 8.87 |
| Vgg M 1024 | 0.422 | 204 | 0.378 | 1020 | 0.337 | 1020 | 0.380 | 4080 | 9.19 |
| Vgg M 128 | 0.525 | 25 | 0.494 | 125 | 0.435 | 125 | 0.463 | 500 | 10.81 |
| Vgg VeryDeep 19 | 0.427 | 819 | 0.394 | 4095 | 0.351 | 4095 | 0.307 | 16380 | 11.81 |
| Vgg VeryDeep 16 | 0.416 | 819 | 0.383 | 4095 | 0.345 | 4095 | 0.403 | 16380 | 12.02 |
| Opp. Gabor RGB | 0.692 | 52 | 0.698 | 260 | 0.671 | 260 | 0.553 | 1040 | 13.56 |
| Dense SIFT (FV) | 0.579 | 8192 | 0.582 | 40960 | 0.535 | 40960 | 0.498 | 163840 | 13.82 |
| Dense SIFT (VLAD) | 0.601 | 5120 | 0.603 | 25600 | 0.559 | 25600 | 0.384 | 102400 | 13.93 |
| Dense LBP RGB | 0.702 | 204 | 0.708 | 1020 | 0.676 | 1020 | 0.631 | 4080 | 14.10 |
| Hist. H V | 0.741 | 102 | 0.747 | 510 | 0.715 | 510 | 0.630 | 2040 | 15.55 |
| Spatial Hist. RGB | 0.763 | 307 | 0.783 | 1535 | 0.735 | 1535 | 0.621 | 6140 | 18.67 |
4.5 Average rank of visual descriptors across RS datasets
Table 8 shows the average rank of all the visual descriptors evaluated. The average rank, reported in the last column, is obtained by averaging the ranks achieved by each visual descriptor across datasets, retrieval schemes and measures (ANMRR, MAP, precision at the 5, 10, 50 and 100 levels, and EQC). For the sake of completeness, for each retrieval scheme we also display the average ANMRR across datasets and the EQC of each visual descriptor. From this table it is quite clear that, across datasets, the best performing visual descriptors are the CNN-based ones: the first 13 positions out of 38 are occupied by CNN-based descriptors. The global hand-crafted descriptor DT-CWT is in 14th position, mostly thanks to the very short length of its feature vector. After some other CNN-based descriptors we find the local hand-crafted descriptors which, despite their good performance, are penalized by the very large size of their feature vectors: in the case of Dense SIFT (FV) the size is 40960, that is 2048 times larger than the size of DT-CWT.
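The rank-averaging aggregation can be sketched as follows (a toy illustration; the function name and the two-descriptor example are ours). Each column of scores is ranked separately, flipping the sign for measures where higher is better, and the per-measure ranks are then averaged per descriptor:

```python
import numpy as np

def average_rank(scores, higher_is_better):
    """Rank descriptors on each measure (1 = best) and average the ranks.
    `scores` is (n_descriptors, n_measures); `higher_is_better` flags, per
    column, whether larger scores are better (e.g. MAP) or worse
    (e.g. ANMRR, EQC)."""
    scores = np.asarray(scores, dtype=float)
    ranks = np.empty_like(scores)
    for j, hib in enumerate(higher_is_better):
        col = -scores[:, j] if hib else scores[:, j]
        order = np.argsort(col)  # best descriptor first
        ranks[order, j] = np.arange(1, scores.shape[0] + 1)
    return ranks.mean(axis=1)

# two descriptors scored on ANMRR (lower is better) and MAP (higher is better)
print(average_rank([[0.4, 60.0], [0.8, 20.0]], [False, True]))  # -> [1. 2.]
```

Averaging ranks rather than raw scores lets measures with different scales (ANMRR in [0, 1], MAP in percent, EQC in arbitrary cost units) contribute equally to the final ordering.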
Looking at the EQC columns of each retrieval scheme in Table 8, it is quite evident that the use of Active-Learning-based RF is not always convenient. For instance, for the top 5 visual descriptors of the table, the Active-Learning-based RF achieves globally worse performance than pseudo-RF at a much higher EQC. In all the other cases, instead, the performance achieved with the Active-Learning-based RF is better than that of pseudo-RF.
Notwithstanding this, the employment of techniques to speed up the nearest-image search makes the AL-RF scheme not as computationally expensive as argued in the previous paragraph. Large amounts of data and high-dimensional feature vectors make the nearest-image search very slow, the main bottleneck being memory access. The employment of a compact representation of the feature vectors, such as hash codes Zhao et al. (2015) or polysemous codes Douze, Jégou, and Perronnin (2016), is likely to offer better efficiency than the use of full vectors, thus accelerating the image search. Readers who wish to explore the subject further can refer to Zhao et al. (2015); Lu, Liong, and Zhou (2017); Douze, Jégou, and Perronnin (2016).
4.6 Comparison with the state of the art
According to our results, one of the best performing visual descriptors is ResNet, and in particular SatResNet-50, while the best visual descriptor when the computational cost is taken into account is the Vgg M 128. We compared these descriptors, coupled with the four schemes described in Sec. 3.2, with some recent methods Bosilj et al. (2016); Aptoula (2014); Ozkan et al. (2014); Yang and Newsam (2013). All these works used the basic retrieval scheme and conducted their experiments on the LandUse dataset. Aptoula proposed several global morphological texture descriptors Bosilj et al. (2016); Aptoula (2014). Ozkan et al. (2014) used bag of visual words (BoVW) descriptors, the vector of locally aggregated descriptors (VLAD), and the product-quantized VLAD (VLAD-PQ). Yang and Newsam (2013) investigated the effects of a number of design parameters on the BoVW representation: saliency-based versus grid-based local feature extraction, the size of the visual codebook, the clustering algorithm used to create the codebook, and the dissimilarity measure used to compare the BoVW representations.
The results of the comparison are shown in Table 9. The Bag of Dense SIFT (VLAD) presented in Ozkan et al. (2014) achieves performance close to that of the CNN-based descriptors; this result was obtained with a codebook built using images from the LandUse dataset. Concerning the computational cost, the texture features Yang and Newsam (2013); Aptoula (2014) are cheaper than SatResNet-50 and Vgg M 128. In terms of trade-off between performance and computational cost, the Vgg M 128 descriptor achieves an ANMRR that is about 25% lower than the one achieved by the CCH+RIT+FPS+FPS descriptor used in Aptoula (2014), at a computational cost that is about 2 times higher.
| features | Hist. Inters. | Euclidean | Cosine | Manhattan | χ² | Length | Time (sec) | EQC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CCH RIT FPS FPS Aptoula (2014) | 0.609 | 0.640 | - | 0.589 | 0.575 | 62 | - | 12 |
| CCH Aptoula (2014) | 0.677 | 0.726 | - | 0.677 | 0.649 | 20 | 1.9 | 4 |
| RIT Aptoula (2014) | 0.751 | 0.769 | - | 0.751 | 0.757 | 20 | 2.3 | 4 |
| FPS Aptoula (2014) | 0.798 | 0.731 | - | 0.740 | 0.726 | 14 | 1.6 | 2 |
| FPS Aptoula (2014) | 0.853 | 0.805 | - | 0.790 | 0.783 | 8 | 1.6 | 1 |
| pLPS-aug Bosilj et al. (2016) | - | 0.472 | - | - | - | 12288 | - | 2458 |
| Texture Yang and Newsam (2013) | - | 0.630 | - | - | - | - | 40.4 | - |
| Local features Yang and Newsam (2013) | - | 0.591 | - | - | - | - | 193.3 | - |
| Dense SIFT (BoVW) Ozkan et al. (2014) | - | - | 0.540 | - | - | 1024 | 9.4 | 204 |
| Dense SIFT (VLAD) Ozkan et al. (2014) | - | - | 0.460 | - | - | 25600 | 129.3 | 5120 |
| B-IR Vgg M 128 | 0.544 | 0.488 | 0.488 | 0.493 | 0.488 | 128 | - | 25 |
| P-RF Vgg M 128 | 0.550 | 0.470 | 0.470 | 0.466 | 0.458 | 128 | - | 125 |
| M-RF Vgg M 128 | 0.497 | 0.422 | 0.422 | 0.416 | 0.410 | 128 | - | 125 |
| AL-RF Vgg M 128 | 0.333 | - | - | - | - | 128 | - | 500 |
In this work we presented an extensive evaluation of visual descriptors for the content-based retrieval of remote sensing images. We evaluated global hand-crafted, local hand-crafted and Convolutional Neural Network features coupled with four different content-based image retrieval (CBIR) schemes: a basic CBIR, a pseudo relevance feedback (RF), a manual RF and an active-learning-based RF. The experimentation has been conducted on two publicly available datasets that differ in terms of image size and resolution. The results demonstrated that:
CNN-based descriptors perform better, on average, than both global and local hand-crafted descriptors, whatever retrieval scheme is adopted and on both the datasets considered (see the summary Table 8);
The RS domain adaptation of ResNet-50 led to a notable improvement of performance with respect to CNNs pre-trained on multimedia scene and object images, demonstrating the importance of domain adaptation in the field of remote sensing;
Pseudo and manual relevance feedback schemes proved very effective only when coupled with a visual descriptor that performs well in a basic retrieval system, such as the CNN-based and local hand-crafted descriptors. This is quite evident in Figures 7(a) and (b);
Active-Learning-based RF proved very effective on average and is the best performing among the retrieval schemes. However, the computational cost required to perform one query is, on average, at least 4 times higher than with the other considered RF schemes and at least 20 times higher than with a basic retrieval scheme.
As future work, it would be interesting to investigate the efficiency of techniques that speed up the image search process by exploiting compact feature vector representations, such as hash codes or polysemous codes.
The author declares that he has no competing interests.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for doing part of the experiments included in this research.
The author is grateful to Prof. Raimondo Schettini for the valuable comments and stimulating discussions and he would like to thank the reviewers for their valuable comments and effort to improve the manuscript.
- Aptoula, Erchan. 2014. “Remote sensing image retrieval with global morphological texture descriptors.” Geoscience and Remote Sensing, IEEE Transactions on 52 (5): 3023–3034.
- Arandjelovic, Relja, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. 2016. “NetVLAD: CNN architecture for weakly supervised place recognition.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5297–5307.
- Arvis, Vincent, Christophe Debain, Michel Berducat, and Albert Benassi. 2004. “Generalization of the cooccurrence matrix for colour images: Application to colour texture.” Image Analysis & Stereology 23 (1).
- Baeza-Yates, Ricardo, Berthier Ribeiro-Neto, et al. 1999. Modern information retrieval. Vol. 463. ACM press New York.
- Barilla, M.E., and M. Spann. 2008. “Colour-based texture image classification using the complex wavelet transform.” In Electrical Engineering, Computing Science and Automatic Control, 2008. CCE 2008. 5th International Conference on, nov., 358 –363.
- Bianco, S, D Mazzini, DP Pau, and R Schettini. 2015. “Local detectors and compact descriptors for visual search: A quantitative comparison.” Digital Signal Processing 44: 1–13.
- Bianconi, F, and A. Fernández. 2007. “Evaluation of the effects of Gabor filter parameters on texture classification.” Pattern Recognition 40 (12): 3325 – 3335.
- Bianconi, F., R. Harvey, P. Southam, and A. Fernández. 2011. “Theoretical and experimental comparison of different approaches for color texture classification.” Journal of Electronic Imaging 20 (4).
- Bosilj, Petra, Erchan Aptoula, Sébastien Lefèvre, and Ewa Kijak. 2016. “Retrieval of Remote Sensing Images with Pattern Spectra Descriptors.” ISPRS International Journal of Geo-Information 5 (12): 228.
- Brinke, Walter, David McG Squire, and John Bigelow. 2004. “Similarity: measurement, ordering and betweenness.” In Knowledge-Based Intelligent Information and Engineering Systems, 996–1002. Springer.
- Chatfield, Ken, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. “Return of the devil in the details: Delving deep into convolutional nets.” arXiv preprint arXiv:1405.3531 .
- Chatzichristofis, Savvas A, and Yiannis S Boutalis. 2008. “CEDD: color and edge directivity descriptor: a compact descriptor for image indexing and retrieval.” In Computer Vision Systems, 312–322. Springer.
- Cimpoi, M., S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. 2014. “Describing Textures in the Wild.” In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, 3606–3613.
- Ciocca, Gianluigi, Isabella Gagliardi, and Raimondo Schettini. 2001. “Quicklook 2: An integrated multimedia system.” Journal of Visual Languages & Computing 12 (1): 81–103.
- Ciocca, Gianluigi, and Raimondo Schettini. 1999. “A relevance feedback mechanism for content-based image retrieval.” Information processing & management 35 (5): 605–632.
- Csurka, Gabriella, Christopher Dance, Lixin Fan, Jutta Willamowski, and Cédric Bray. 2004. “Visual categorization with bags of keypoints.” In Workshop on statistical learning in computer vision, ECCV, Vol. 1, 1–2. Prague.
- Cusano, Claudio, Paolo Napoletano, and Raimondo Schettini. 2015. “Remote Sensing Image Classification Exploiting Multiple Kernel Learning.” IEEE Geoscience and Remote Sensing Letters 12 (11): 2331–2335.
- Dai, Dengxin, and Wen Yang. 2011. “Satellite image classification via two-layer sparse coding with biased image representation.” Geoscience and Remote Sensing Letters 8 (1): 173–176.
- Datta, Ritendra, Dhiraj Joshi, Jia Li, and James Z Wang. 2008. “Image retrieval: Ideas, influences, and trends of the new age.” ACM Computing Surveys (CSUR) 40 (2): 5.
- Delhumeau, Jonathan, Philippe-Henri Gosselin, Hervé Jégou, and Patrick Pérez. 2013. “Revisiting the VLAD image representation.” In Proceedings of the 21st ACM international conference on Multimedia, 653–656. ACM.
- Demir, Begum, and Lorenzo Bruzzone. 2015. “A Novel Active Learning Method in Relevance Feedback for Content-Based Remote Sensing Image Retrieval.” Geoscience and Remote Sensing, IEEE Transactions on 53 (5): 2323–2334.
- Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. “Imagenet: A large-scale hierarchical image database.” In IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
- Deselaers, Thomas, Daniel Keysers, and Hermann Ney. 2008. “Features for image retrieval: an experimental comparison.” Information Retrieval 11 (2): 77–107.
- Douze, Matthijs, Hervé Jégou, and Florent Perronnin. 2016. “Polysemous codes.” In European Conference on Computer Vision, 785–801. Springer.
- Ferecatu, Marin, and Nozha Boujemaa. 2007. “Interactive remote-sensing image retrieval using active relevance feedback.” Geoscience and Remote Sensing, IEEE Transactions on 45 (4): 818–826.
- Grauman, Kristen, and Bastian Leibe. 2010. Visual object recognition. Morgan & Claypool Publishers.
- Hanbury, Allan, Umasankar Kandaswamy, and DonaldA. Adjeroh. 2005. “Illumination-Invariant Morphological Texture Classification.” In Mathematical Morphology: 40 Years On, edited by Christian Ronse, Laurent Najman, and Etienne Decencire, Vol. 30 of Computational Imaging and Vision, 377–386. Springer Netherlands.
- Haralick, R.M. 1979. “Statistical and structural approaches to texture.” Proc. of the IEEE 67 (5): 786–804.
- Hauta-Kasari, M., J. Parkkinen, T. Jaaskelainen, and R. Lenz. 1996. “Generalized co-occurrence matrix for multispectral texture analysis.” In Pattern Recognition, 1996., Proceedings of the 13th International Conference on, Vol. 2, aug, 785 –789 vol.2.
- He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep residual learning for image recognition.” In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
- Hong, Pengyu, Qi Tian, and Thomas S Huang. 2000. “Incorporate support vector machines to content-based image retrieval with relevance feedback.” In International Conference on Image Processing, Vol. 3, 750–753. IEEE.
- Jain, A., and G. Healey. 1998. “A multiscale representation including opponent color features for texture recognition.” Image Processing, IEEE Transactions on 7 (1): 124 –128.
- Jégou, Hervé, Matthijs Douze, Cordelia Schmid, and Patrick Pérez. 2010. “Aggregating local descriptors into a compact image representation.” In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3304–3311.
- Jia, Yangqing, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. “Caffe: Convolutional Architecture for Fast Feature Embedding.” arXiv preprint arXiv:1408.5093 .
- Junior, Oswaldo Ludwig, David Delgado, Valter Gonçalves, and Urbano Nunes. 2009. “Trainable classifier-fusion schemes: an application to pedestrian detection.” In Intelligent Transportation Systems.
- Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. 2012. “Imagenet classification with deep convolutional neural networks.” In Advances in neural information processing systems, 1097–1105.
- Li, Jing, and Nigel M Allinson. 2013. “Relevance feedback in content-based image retrieval: a survey.” In Handbook on Neural Information Processing, 433–469. Springer.
- Liu, Guang-Hai, and Jing-Yu Yang. 2013. “Content-based image retrieval using color difference histogram.” Pattern Recognition 46 (1): 188–198.
- Lowe, D.G. 2004. “Distinctive Image Features from Scale-Invariant Keypoints.” Int’l J. Computer Vision 60 (2): 91–110.
- Lu, Jiwen, Venice Erin Liong, and Jie Zhou. 2017. “Deep Hashing for Scalable Image Search.” IEEE Transactions on Image Processing 26 (5): 2352–2367.
- Mäenpää, T., and M. Pietikäinen. 2004. “Classification with color and texture: jointly or separately?” Pattern Recognition 37 (8): 1629–1640.
- Manjunath, Bangalore S, and Wei-Ying Ma. 1996. “Texture features for browsing and retrieval of image data.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 18 (8): 837–842.
- Manjunath, Bangalore S, J-R Ohm, Vinod V Vasudevan, and Akio Yamada. 2001. “Color and texture descriptors.” Circuits and Systems for Video Technology, IEEE Transactions on 11 (6): 703–715.
- Manning, Christopher D, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to information retrieval. Vol. 1. Cambridge university press Cambridge.
- Marmanis, D., M. Datcu, T. Esch, and U. Stilla. 2016. “Deep Learning Earth Observation Classification Using ImageNet Pretrained Networks.” IEEE Geoscience and Remote Sensing Letters 13 (1): 105–109.
- Mirmehdi, M., X. Xie, and J. Suri. 2009. Handbook of Texture Analysis. Imperial College Press.
- Novak, Carol L, Steven Shafer, et al. 1992. “Anatomy of a color histogram.” In Computer Vision and Pattern Recognition, 1992. Proceedings CVPR’92., 1992 IEEE Computer Society Conference on, 599–605. IEEE.
- Ojala, T., M. Pietikäinen, and T. Mäenpää. 2002. “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns.” IEEE Trans. Pattern Anal. Mach. Intell. 24 (7): 971–987.
- Oliva, A., and A. Torralba. 2001. “Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope.” Int’l J. Computer Vision 42 (3): 145–175.
- Ozkan, Savas, Tayfun Ates, Engin Tola, Medeni Soysal, and Ersin Esen. 2014. “Performance Analysis of State-of-the-Art Representation Methods for Geographical Image Retrieval and Categorization.” Geoscience and Remote Sensing Letters, IEEE 11 (11): 1996–2000.
- Pedronette, Daniel Carlos Guimarães, Rodrigo T Calumby, and Ricardo da S Torres. 2015. “A semi-supervised learning algorithm for relevance feedback and collaborative image retrieval.” EURASIP Journal on Image and Video Processing 2015 (1): 1–15.
- Pietikainen, M., S. Nieminen, E. Marszalec, and T. Ojala. 1996. “Accurate color discrimination with classification based on feature distributions.” In Pattern Recognition, 1996., Proceedings of the 13th International Conference on, Vol. 3, aug, 833 –838 vol.3.
- Razavian, Ali S, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. 2014. “CNN features off-the-shelf: an astounding baseline for recognition.” In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, 512–519.
- Rui, Yong, and Thomas S Huang. 2001. “Relevance feedback techniques in image retrieval.” In Principles of visual information retrieval, 219–258. Springer.
- Rui, Yong, Thomas S Huang, and Shih-Fu Chang. 1999. “Image retrieval: Current techniques, promising directions, and open issues.” Journal of visual communication and image representation 10 (1): 39–62.
- Rui, Yong, Thomas S Huang, Michael Ortega, and Sharad Mehrotra. 1998. “Relevance feedback: a power tool for interactive content-based image retrieval.” Circuits and Systems for Video Technology, IEEE Transactions on 8 (5): 644–655.
- Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. 2015. “ImageNet Large Scale Visual Recognition Challenge.” International Journal of Computer Vision (IJCV) 115 (3): 211–252.
- Schmidhuber, Jürgen. 2015. “Deep learning in neural networks: An overview.” Neural Networks 61: 85–117.
- Sermanet, Pierre, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. 2014. “OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks.” In International Conference on Learning Representations (ICLR 2014), April. CBLS.
- Simonyan, Karen, and Andrew Zisserman. 2014. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 .
- Sivic, Josef, and Andrew Zisserman. 2003. “Video Google: A text retrieval approach to object matching in videos.” In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, 1470–1477. IEEE.
- Smeulders, Arnold WM, Marcel Worring, Simone Santini, Amarnath Gupta, and Ramesh Jain. 2000. “Content-based image retrieval at the end of the early years.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 22 (12): 1349–1380.
- Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. “Going deeper with convolutions.” In Proceedings of the IEEE conference on computer vision and pattern recognition, 1–9.
- Thomee, Bart, and Michael S Lew. 2012. “Interactive search in image retrieval: a survey.” International Journal of Multimedia Information Retrieval 1 (2): 71–86.
- Torii, Akihiko, Josef Sivic, Tomas Pajdla, and Masatoshi Okutomi. 2013. “Visual place recognition with repetitive structures.” In Proceedings of the IEEE conference on computer vision and pattern recognition, 883–890.
- Tsai, Chih-Fong. 2012. “Bag-of-words representation in image annotation: A review.” ISRN Artificial Intelligence 2012.
- Vedaldi, A., and K. Lenc. 2014. “MatConvNet – Convolutional Neural Networks for MATLAB.” CoRR abs/1412.4564.
- Veltkamp, Remco, Hans Burkhardt, and Hans-Peter Kriegel. 2013. State-of-the-art in content-based image and video retrieval. Vol. 22. Springer Science & Business Media.
- Wang, Xiang-Yang, Jun-Feng Wu, and Hong-Ying Yang. 2010. “Robust image retrieval based on color histogram of local feature regions.” Multimedia Tools and Applications 49 (2): 323–345.
- Xia, Gui-Song, Jingwen Hu, Fan Hu, Baoguang Shi, Xiang Bai, Yanfei Zhong, Liangpei Zhang, and Xiaoqiang Lu. 2017. “AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification.” IEEE Transactions on Geoscience and Remote Sensing .
- Xia, Gui-Song, Wen Yang, Julie Delon, Yann Gousseau, Hong Sun, Henri Maître, et al. 2010. “Structural high-resolution satellite image indexing.” In ISPRS TC VII Symposium-100 Years ISPRS, Vol. 38, 298–303.
- Yang, Yi, and Shawn Newsam. 2010. “Bag-of-visual-words and spatial extensions for land-use classification.” In Proc. of the Int’l Conf. on Advances in Geographic Information Systems, 270–279.
- Yang, Yi, and Shawn Newsam. 2013. “Geographic image retrieval using local invariant features.” Geoscience and Remote Sensing, IEEE Transactions on 51 (2): 818–832.
- Zajić, Goran, Nenad Kojić, Vladan Radosavljević, Maja Rudinac, Stevan Rudinac, Nikola Reljin, Irini Reljin, and Branimir Reljin. 2007. “Accelerating of image retrieval in CBIR system with relevance feedback.” EURASIP Journal on Advances in Signal Processing 2007 (1): 1–13.
- Zeiler, Matthew D, and Rob Fergus. 2014. “Visualizing and understanding convolutional networks.” In Computer Vision–ECCV 2014, 818–833. Springer.
- Zhao, Fang, Yongzhen Huang, Liang Wang, and Tieniu Tan. 2015. “Deep semantic ranking based hashing for multi-label image retrieval.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1556–1564.
- Zhou, Xiang Sean, and Thomas S Huang. 2003. “Relevance feedback in image retrieval: A comprehensive review.” Multimedia systems 8 (6): 536–544.