CNN Feature boosted SeqSLAM for Real-Time Loop Closure Detection


Dongdong Bai, Chaoqun Wang, Bo Zhang, Xiaodong Yi, Xuejun Yang
College of Computer, National University of Defense Technology, Changsha 410073, China
State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha 410073, China

Loop closure detection (LCD) is an indispensable part of simultaneous localization and mapping systems (SLAM); it enables robots to produce a consistent map by recognizing previously visited places. When robots operate over extended periods, robustness to viewpoint and condition changes as well as satisfactory real-time performance become essential requirements for a practical LCD system.

This paper presents an approach that directly utilizes the outputs of the intermediate layers of a pre-trained convolutional neural network (CNN) as image descriptors. The matching location is determined by matching image sequences through a method called SeqCNNSLAM. The utility of SeqCNNSLAM is comprehensively evaluated in terms of viewpoint and condition invariance. Experiments show that SeqCNNSLAM outperforms state-of-the-art LCD systems, such as SeqSLAM and Change Removal, in most cases. To achieve real-time performance, an accelerated variant, A-SeqCNNSLAM, is established. This method exploits the locations of the matches of adjacent images to reduce the matching range of the current image. Results demonstrate that a 4-6 times speed-up is achieved with minimal accuracy degradation, and the method's runtime satisfies the real-time demand. To extend the applicability of A-SeqCNNSLAM to new environments, a method called O-SeqCNNSLAM is established for the online adjustment of its parameters.

Loop closure detection, CNN, SeqCNNSLAM, A-SeqCNNSLAM, O-SeqCNNSLAM.
journal: Computers & Graphics


1 Introduction

Large-scale navigation in a changing environment poses a significant challenge to robotics, because during this process, a robot inevitably encounters severe environmental changes. Such changes mainly fall into condition changes and viewpoint changes. Condition change is caused by the external environment, such as changes in illumination, weather, and even season; the appearance of a single area can differ greatly depending on these conditions. Meanwhile, robots subject to viewpoint change may view the same area from various perspectives as they move around.

Aside from robustness to viewpoint and condition change, real-time performance is another inevitable requirement for loop closure detection (LCD). Evidently, a robot should be able to determine without a high overhead whether its current location has been visited before. However, a robot is typically equipped with a computer whose computing power is close to that of a personal computer, and this computer usually runs other robotic applications simultaneously with the LCD algorithm. Therefore, the LCD algorithm should not have significant computing requirements.

Many researchers have successfully addressed viewpoint and condition changes. For example, bag of words (Sivic and Zisserman, 2003) was introduced into FAB-MAP (Cummins and Newman, 2008). Excellent performance was achieved with regard to viewpoint change, and the method has become one of the state-of-the-art approaches for LCD based on single-image matching. Recently, Milford et al. proposed a method of matching sequences of images called SeqSLAM (Milford and Wyeth, 2012) and achieved improved robustness against condition change.

The features used by these approaches are hand-crafted and designed by experts with domain-specific knowledge. However, robots may face various complex and uncertain environments during localization, so considering all the factors that affect the performance of LCD is difficult. For example, SeqSLAM is robust to condition change but shows poor robustness against viewpoint change.

Recently, convolutional neural networks (CNNs) have achieved great success (Krizhevsky et al., 2012; Chatfield et al., 2014), and much interest has been devoted to applying CNN features to robotic fields (Hou et al., 2015; Sünderhauf et al., 2015; Suenderhauf et al., 2015; Lowry et al., 2016). Hou et al. (2015) and Sünderhauf et al. (2015) were pioneers of this research. Instead of using hand-crafted features, they respectively analyzed the utility of each layer of two pre-trained CNN models with an identical architecture, Places-CNN (Zhou et al., 2014) and AlexNet (Krizhevsky et al., 2012) (the architecture is shown in Fig. 1). Their results showed that conv3 and pool5 are representative layers that demonstrate beneficial condition and viewpoint invariance, respectively.

However, CNN descriptors such as conv3 and pool5 exhibit strong robustness to either condition change or viewpoint change, but not to both; simultaneously demonstrating robustness to both remains an open problem. Furthermore, the dimensions of CNN descriptors are high, and the similarity between images is measured by the Euclidean distance between their features, so a CNN-based LCD algorithm requires a large amount of computation, which makes it difficult to meet the real-time demands of robots.

In this paper, we present a robust LCD method that fuses CNN features with a sequence matching method, and we optimize its real-time performance by reducing the image matching range. Our contributions are twofold:

  • First, we present SeqCNNSLAM to combine CNN descriptors and sequence matching so that the algorithm copes with viewpoint and condition change simultaneously. Comprehensive experiments demonstrate that SeqCNNSLAM exhibits more robust performance than several state-of-the-art methods, such as SeqSLAM (Milford and Wyeth, 2012) and Change Removal (Lowry and Milford, 2015).

  • Second, we reduce the computational complexity of finding matching images for the current image by limiting the matching range based on the sequential relationship between adjacent images. This approach, called A-SeqCNNSLAM, obtains a 4-6 times speed-up and still performs comparably to SeqCNNSLAM. Meanwhile, we provide an approach for the online adjustment of several parameters of A-SeqCNNSLAM. This approach, called O-SeqCNNSLAM, enhances the practical performance of the algorithm.

The paper proceeds as follows. Section II provides a brief background of LCD as well as the development and status quo of CNNs. The image descriptors, LCD performance metrics, and datasets used in the experiments are described in Section III. Section IV presents the implementation of the algorithms and the tests conducted on SeqCNNSLAM. In Section V, we introduce A-SeqCNNSLAM and O-SeqCNNSLAM in detail and comprehensively study their performance and runtime. Section VI presents the results, conclusions, and suggestions for future work.

2 Related Work

This section provides a brief introduction of LCD, CNN, and several of the latest studies on applying pre-trained CNN descriptors to LCD.

2.1 Loop Closure Detection

The capability to localize and generate consistent maps of dynamic environments is vital to long-term robot autonomy (Barfoot et al., 2013). LCD is a technique to determine the locations of loop closures in a mobile robot's trajectory, which is critical to building a consistent map of an environment by correcting the localization errors that accumulate over time. Given changes in the environment and the movement of the robot, images of the same place collected by a robot may present entirely different appearances because of condition and viewpoint change.

To cope with these challenges, researchers have designed many methods for various situations. Speeded-up robust features (SURF) (Bay et al., 2006) are a typical example. Since the bag-of-words model (Sivic and Zisserman, 2003) was proposed, SURF descriptors have been widely used in LCD. The bag-of-words model was first introduced into LCD by FAB-MAP (Cummins and Newman, 2008), which is based on the SURF descriptor. Owing to the invariance properties of SURF in generating bag-of-words descriptors, and the fact that the bag-of-words model ignores the geometric structure of the image it describes, FAB-MAP demonstrates robust performance under viewpoint change and has become a state-of-the-art algorithm based on single-image matching in LCD research.

Instead of searching for the single location most similar to the current image, Milford et al. proposed an approach that calculates the best candidate matching location based on a sequence of images. Their approach, coined SeqSLAM (Milford and Wyeth, 2012), achieves remarkable results in coping with condition change and even season change (Niko et al., 2013). Searching for matching sequences is deemed the core of SeqSLAM.

2.2 Convolutional Neural Networks

Since Krizhevsky et al. (2012) won the ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012), algorithms based on CNNs have dominated over traditional solutions that use hand-crafted features or operate on raw pixels in the computer vision and machine learning communities (Babenko et al., 2014; Wan et al., 2014). Much interest has been devoted to applying CNN features to robotic fields, such as visual navigation and SLAM (Hou et al., 2015; Sünderhauf et al., 2015; Suenderhauf et al., 2015; Lowry et al., 2016). The methods of using CNNs for robotic applications fall into two categories: training a CNN for a specific robotic application, or directly using the outputs of several layers of a pre-trained CNN for related robotic applications.

Directly training a CNN for a specific application is a complex process, not only because a large amount of data from the field is needed, but also because obtaining a CNN with remarkable performance demands considerable experience in adjusting the architecture and parameters of the network. That is, many tricks are involved in training a CNN, and this presents great difficulties in using CNNs for robotics.

Given the versatility and transferability of CNN features (Razavian et al., 2014), networks trained for a highly specific target task can be successfully used to solve different but related problems.

Hou et al. (2015) investigated the feasibility of the public pre-trained model Places-CNN (Zhou et al., 2014) as an image descriptor generator for visual LCD. Places-CNN was implemented with the open-source software Caffe (Jia et al., 2014) and trained on the scene-centric dataset Places (Zhou et al., 2014), which contains over 2.5 million images of 205 scene categories for place recognition. The researchers comprehensively analyzed the performance and characteristics of descriptors generated from each layer of the Places-CNN model and demonstrated that the conv3 layer presents some condition invariance, whereas the pool5 layer presents some viewpoint invariance.

Compared with hand-crafted features, CNN descriptors can easily deal with complex and changeable environments and do not require researchers to have much domain-specific knowledge.

Figure 1: Architecture of the Places-CNN/AlexNet model.

Layer      conv1   pool1  conv2   pool2  conv3  conv4  conv5  pool5  fc6   fc7   fc8
Dimension  290400  69984  186624  43264  64896  64896  43264  9216   4096  4096  1000
Table 1: Dimensions of each layer of the Places-CNN/AlexNet model.

2.3 Combination of SeqSLAM with CNN Descriptors

Given the advantages of SeqSLAM and CNN descriptors, Lowry et al. considered combining the sequence matching method with CNN descriptors to fuse their respective advantages and construct an LCD system that is robust to both viewpoint and condition changes. Their method is called Change Removal (Lowry and Milford, 2015). Change Removal involves two main processes. First, it removes a certain number of the earliest principal components of the images to discard information related to condition change. The remaining principal components are then used as input to a CNN to obtain a feature that is robust against viewpoint and condition changes. However, the performance of Change Removal depends largely on a dataset-specific parameter, namely, the number of principal components to be removed. Therefore, selecting this setting for unseen scenes and environments is difficult.

In this study, we present a new means to leverage both CNN descriptors and the sequence matching method. Preprocessing of the images is not needed; they are directly used as input to the CNN. Compared with Change Removal (Lowry and Milford, 2015), the proposed method is more general and does not depend on any dataset-specific parameters.

3 Preliminaries

3.1 Obtaining CNN Descriptors

For a pre-trained CNN model, the output vector of each layer can be regarded as an image descriptor $f$. A descriptor of each layer can be obtained by passing an image through the CNN. Before $f$ is used, it is normalized to a unit vector according to the following equation:

$\hat{f} = f / \|f\|_2$   (1)

where $f$ is a descriptor with $n$ dimensions and $\hat{f}$ is the normalized descriptor with an identical dimension as $f$.
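Concretely, this normalization is a plain L2 rescaling of the layer output. A minimal sketch in pure Python, with `descriptor` standing in for the flattened layer activations:

```python
import math

def l2_normalize(descriptor):
    """Scale a CNN layer output to unit Euclidean length, as in Eq. (1)."""
    norm = math.sqrt(sum(x * x for x in descriptor))
    if norm == 0.0:  # guard against an all-zero activation vector
        return list(descriptor)
    return [x / norm for x in descriptor]

# After normalization the descriptor lies on the unit sphere, so the
# Euclidean distance between any two descriptors is bounded in [0, 2].
f_hat = l2_normalize([3.0, 4.0])
```

Because all descriptors end up on the unit sphere, their pairwise Euclidean distances become directly comparable across images.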

Algorithm 1 shows the process of obtaining CNN descriptors.

Require: D: dataset for LCD containing N images; I_i: the i-th input image; GT(i): the ground truth of the matching image's serial number for I_i; normalize(·): normalization based on Eq. (1).

Ensure: C_i: conv3 descriptor of I_i; P_i: pool5 descriptor of I_i.

1: for i = 1 to N do
2:   Put I_i into Places-CNN to get the outputs of conv3 and pool5;
3:   C_i ← normalize(conv3 output of I_i);
4:   P_i ← normalize(pool5 output of I_i);
5: end for

Algorithm 1 Obtain CNN descriptors

3.2 Performance Measures

The performance of an LCD algorithm is typically evaluated according to precision, recall, and the precision-recall curve. Matches consistent with the ground truth are regarded as true positives (TP), matches inconsistent with the ground truth are false positives (FP), and matches erroneously discarded by the LCD algorithm are regarded as false negatives (FN). Precision is the proportion of TP matches among all matches reported by the LCD algorithm, and recall is the proportion of TP matches to the total number of actual matches in the ground truth, that is,

precision = TP / (TP + FP),    recall = TP / (TP + FN).
For LCD, the maximum recall at 100% precision is also an important performance indicator; it is widely used by many researchers to evaluate LCD algorithms and is also used in our subsequent experiments. Although this criterion may cause the algorithms to reject several correct matches in the ground truth, the cost of adopting an FP in the LCD system is extremely large and often results in an erroneous map in the SLAM system. For the subsequent test, we accepted a proposed match if it was within two frames of the ground truth.
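Under the two-frame tolerance just described, the metrics can be computed as follows. A sketch with hypothetical names: `proposed` maps query indices to matched indices (or `None` when the algorithm reports no match), and `ground_truth` holds the true matching indices:

```python
def precision_recall(proposed, ground_truth, tolerance=2):
    """Precision = TP/(TP+FP); recall = TP over all ground-truth matches.
    A proposed match counts as TP if it lies within `tolerance` frames
    of the ground truth."""
    tp = fp = 0
    for i, gt in ground_truth.items():
        match = proposed.get(i)
        if match is None:
            continue  # discarded by the algorithm: a false negative
        if abs(match - gt) <= tolerance:
            tp += 1
        else:
            fp += 1
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# One correct match (within tolerance), one wrong match, one rejection.
p, r = precision_recall({0: 1, 1: 50, 2: None}, {0: 0, 1: 1, 2: 2})
```

Sweeping a decision threshold over the sequence-difference scores and recording (precision, recall) pairs at each threshold yields the precision-recall curves used in the experiments below.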

3.3 Datasets used in the Evaluation

In the subsequent experiments, we used two datasets with different properties to evaluate the utility of the algorithms.

3.3.1 Nordland Dataset(winter-spring)

The Nordland dataset was produced by extracting still frames from the TV documentary "Nordlandsbanen-Minutt for Minutt" by the Norwegian Broadcasting Corporation. The documentary records the 728 km train ride in northern Norway from the same perspective at the front of a train in four different seasons; each journey lasts about 10 hours. The dataset has been used to evaluate the performance of OpenSeqSLAM (Niko et al., 2013), an open-source implementation of SeqSLAM, in coping with season change. Because the train follows the same path and the camera orientation is fixed, the dataset captured in the four seasons exhibits pure condition change. Fig. 2 gives an intuitive impression of the severe appearance change between seasons. As illustrated in the figure, the change from full snow cover in winter to lush green vegetation in spring is the most severe among all season pairs, such as spring to summer. Hence, we adopted the spring and winter recordings of the Nordland dataset for our subsequent experiments.

The TV documentary is recorded at 25 fps with a resolution of 1920*1080, and GPS information is recorded in conjunction with the video at 1 Hz. The videos and GPS information are publicly available online. The full HD recordings have been time synchronized; the position of the train in an arbitrary frame from one video corresponds to the frame with the same serial number in any of the other three videos.

Figure 2: Sample Nordland images from matching locations: winter (top) and spring (bottom).

For our experiments, we extracted image frames at a rate of 1 fps from the start of the videos up to the 1:30 h time stamp. We down-sampled the frames to 640*320 and excluded all images obtained inside tunnels or when the train was stopped. In total, we obtained 3476 images for each season.

3.3.2 Gardens Point Dataset

The Gardens Point dataset was recorded on a route through the Gardens Point Campus of Queensland University of Technology. The route was traversed three times: twice during the day and once at night. One day traversal (day-left) was conducted on the left-hand side of the path, and the other day traversal (day-right) and the night traversal (night-right) were conducted on the right-hand side of the path. Two hundred images were collected from each traversal, and images with the same name correspond to the same location across traversals. Thus, the dataset exhibits both condition and viewpoint changes, as illustrated in Fig. 3. The dataset is publicly available online.

Figure 3: Sample Gardens Point images from matching locations: day-right (left), day-left (middle) and night-right (right).

4 SeqCNNSLAM Loop Closure Detection

This section presents the comprehensive design of SeqCNNSLAM, which realizes a robust LCD algorithm by combining CNN descriptors and SeqSLAM (Milford and Wyeth, 2012).

SeqSLAM (Milford and Wyeth, 2012) has been described by Milford et al. in detail. For comparison with the proposed approach, its important ideas and algorithmic steps are summarized below. SeqSLAM mainly involves three steps. First, in the preprocessing step, incoming images are drastically down-sampled to, for example, 64*32 pixels, and divided into patches of 8*8 pixels. The pixel values are then normalized between 0 and 255. The image difference matrix is obtained by comparing all pairs of preprocessed images using the sum of absolute differences (SAD). Second, the difference matrix is locally contrast enhanced. Finally, when looking for a match to a query image, SeqSLAM performs a sweep through the contrast-enhanced difference matrix to find the best matching sequence of frames based on the sum of sequence differences. The process of calculating the sequence differences is illustrated in Algorithm 2.
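The first step can be sketched as follows. This is a simplified illustration that assumes the down-sampling and patch normalization have already been applied, and it omits the local contrast enhancement:

```python
import numpy as np

def sad_difference_matrix(images_a, images_b):
    """Difference matrix between two image sets via the sum of
    absolute differences (SAD), as in SeqSLAM's first step.
    Each image is assumed to be an already down-sampled,
    patch-normalized 2-D array (e.g. 64x32, values in [0, 255])."""
    n, m = len(images_a), len(images_b)
    D = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            D[i, j] = np.abs(images_a[i] - images_b[j]).sum()
    return D

# Comparing a set against itself: the diagonal is zero and the matrix
# is symmetric, since SAD is a symmetric dissimilarity.
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, (64, 32)).astype(float) for _ in range(3)]
D = sad_difference_matrix(imgs, imgs)
```

In SeqSLAM proper, each column of this matrix is subsequently contrast-enhanced within a local neighborhood before the sequence sweep.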

Require: ds: sequence length; i: current image's serial number; D: difference matrix with elements D(a, b); V: trajectory speed; j: middle image's serial number of a candidate matching sequence.

Ensure: S(i, j): the sum of the j-th sequence differences for the i-th image.

1: S(i, j) ← 0
2: for t = -ds/2 to ds/2 do
3:   S(i, j) ← S(i, j) + D(i + t, j + V·t)
4: end for
5: return S(i, j)

Algorithm 2 Cal-Seq-Dif(i, j, V)

SeqSLAM (Milford and Wyeth, 2012) has a large number of parameters that control the behavior of the algorithm and the quality of its results. The parameter ds is presumably the most influential one; it controls the length of the image sequences that are considered for matching. SeqSLAM is expected to perform better as ds increases, because longer sequences are more distinct and less likely to result in FP matches.

The remarkable performance of SeqSLAM (Milford and Wyeth, 2012) under condition change demands a relatively stable viewpoint across the different traversals of the environment, which is caused by directly using the sum of absolute differences to measure the difference between images. Nevertheless, sequence matching, the core of SeqSLAM, remains its significant contribution to LCD (Niko et al., 2013).

Several CNN layers have been shown to outperform hand-crafted descriptors in terms of condition and viewpoint invariance, so we developed SeqCNNSLAM to combine viewpoint-invariant CNN descriptors with the sequence matching core of SeqSLAM (Niko et al., 2013).

In our work, we also adopted the pre-trained Places-CNN as a descriptor generator. This CNN model is a multi-layer neural network that mainly consists of five convolutional layers, three max-pooling layers, and three fully connected layers. A max-pooling layer follows only the first, second, and fifth convolutional layers, not the third and fourth. The architecture is shown in Fig. 1. Given the remarkable performance of conv3 and pool5 (Hou et al., 2015), we selected these two layers as representatives to combine with SeqSLAM in our subsequent comparative studies. The resulting methods are named SeqCNNSLAM (conv3) and SeqCNNSLAM (pool5), corresponding to the conv3 and pool5 layers, respectively.

Unlike SeqSLAM (Milford and Wyeth, 2012), which mainly involves three steps, SeqCNNSLAM no longer applies the first two preprocessing steps. As illustrated in Algorithm 3 (we select only one ds and one trajectory speed V for illustration), SeqCNNSLAM uses the normalized outputs of the conv3 and pool5 layers directly as image descriptors. From steps 2 to 4, the difference matrix is obtained by directly calculating the Euclidean distance between the descriptors of each image pair. From steps 5 to 10, SeqCNNSLAM sweeps through the entire difference matrix to find the best matching sequence for the current frame. All experiments were based on the OpenSeqSLAM implementation (Niko et al., 2013). Default values were used for all parameters except the sequence length ds (see Table 2), which varied from 10 to 100. It should be noted that the first ds images cannot be matched, so the images' serial numbers should start from ds + 1. However, to keep the reader's focus on the algorithm, we ignore this detail and start from 1, which is the default setting for all subsequent algorithms.
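The whole pipeline can be sketched compactly. The following is a simplified illustration under stated assumptions (hypothetical function name, fixed trajectory speed V = 1, even `ds`, and the boundary images skipped rather than matched), not the full OpenSeqSLAM machinery:

```python
import numpy as np

def seq_cnn_slam(desc_query, desc_ref, ds=10):
    """For each query descriptor, return the reference index whose
    aligned sequence of ds+1 frames has the smallest summed distance.
    Descriptors are assumed to be L2-normalized row vectors."""
    Q, R = np.asarray(desc_query), np.asarray(desc_ref)
    # Difference matrix: Euclidean distance between CNN descriptors.
    D = np.linalg.norm(Q[:, None, :] - R[None, :, :], axis=2)
    half = ds // 2
    matches = {}
    for i in range(half, len(Q) - half):
        best_j, best_score = None, np.inf
        for j in range(half, len(R) - half):
            # Sum of differences along an aligned sequence (speed V = 1).
            score = sum(D[i + t, j + t] for t in range(-half, half + 1))
            if score < best_score:
                best_j, best_score = j, score
        matches[i] = best_j
    return matches

# Sanity check: matching a descriptor set against itself recovers the
# identity mapping, since the aligned sequence distance is zero there.
rng = np.random.default_rng(1)
ref = rng.normal(size=(30, 8))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
matches = seq_cnn_slam(ref, ref, ds=6)
```

The nested sweep makes the quadratic cost discussed in Section V explicit: every query image is scored against every reference image.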

Parameter   Value    Description
R_recent    10       Recent template range
V_min       0.8      Minimum trajectory speed
V_max       1.2      Maximum trajectory speed
ds          10-100   Sequence length
Table 2: OpenSeqSLAM Parameters
Figure 4: Performance of SeqSLAM, SeqCNNSLAM(conv3), and SeqCNNSLAM(pool5) on the Nordland dataset with only condition change (top row) as well as with both condition and viewpoint change by 4.2%, 8.3%, and 12.5% shift (second, third, and bottom rows). SeqSLAM (left column) and SeqCNNSLAM(conv3) (middle column) achieve comparable performance under condition change only (0% shift) and outperform SeqCNNSLAM(pool5) (right column). However, as the viewpoint shift increases, SeqCNNSLAM(pool5) presents more robust performance than SeqSLAM and SeqCNNSLAM(conv3), especially at the 12.5% shift.

To evaluate our LCD algorithm's actual performance, we thoroughly investigated SeqCNNSLAM's robustness against viewpoint and condition changes. To experiment on robustness against viewpoint change, we cropped the 640*320 down-sampled images of the spring and winter datasets to 480*320 pixels. For the winter images, we simulated viewpoint change by shifting the crop window by 4.2%, 8.3%, and 12.5% of the cropped width, i.e., by 20, 40, and 60 pixels; the resulting sets are denoted winter 4.2%, winter 8.3%, and winter 12.5%, respectively. Meanwhile, the cropped images of the spring dataset are called spring 0%, as illustrated in Fig. 5. For comparison with SeqSLAM, we also cropped its 64*32 down-sampled images to 48*32 pixels and shifted them by the same proportions, resulting in shifts of 2, 4, and 6 pixels.
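This cropping scheme can be reproduced as follows. A sketch assuming frames are NumPy arrays of shape height x width, with the shift expressed as a fraction of the crop width (so 12.5% of 480 px gives the 60 px shift mentioned above):

```python
import numpy as np

def shifted_crop(image, crop_width=480, shift_frac=0.0):
    """Cut a crop_width-wide window from a down-sampled frame,
    laterally offset by shift_frac of the crop width to simulate a
    viewpoint change (e.g. 0.042, 0.083, 0.125 for the winter sets)."""
    h, w = image.shape[:2]
    offset = int(round(crop_width * shift_frac))
    assert offset + crop_width <= w, "shift pushes the crop out of frame"
    return image[:, offset:offset + crop_width]

# A 320x640 synthetic frame: the 0% crop is the left window, and the
# 12.5% crop starts 60 pixels further right.
frame = np.arange(320 * 640).reshape(320, 640)
spring_0 = shifted_crop(frame, shift_frac=0.0)
winter_12 = shifted_crop(frame, shift_frac=0.125)
```

Because both crops have identical size, the descriptor pipeline sees images of the same resolution while their content is laterally displaced, which is exactly the simulated viewpoint change.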

Require: D: dataset for LCD containing N images; I_i: input images; GT(i): the ground truth of the matching image's serial number of I_i; F_i: CNN descriptor of I_i; ds: sequence length; D: difference matrix with elements D(a, b); dist(X, Y): Euclidean distance between X and Y; V: trajectory speed.

Ensure: M(i): the matching image's serial number of I_i determined by the LCD algorithm.

01: Initialize: set ds and V
02: for i = 1 to N, j = 1 to N:
03:   D(i, j) ← dist(F_i, F_j)
04: end for
05: for i = 1 to N:
06:   S_min ← ∞
07:   for j = 1 to N:
08:     S(i, j) ← Cal-Seq-Dif(i, j, V)
09:   end for
10:   M(i) ← arg min_j S(i, j)
11: end for

Algorithm 3 SeqCNNSLAM
Figure 5: Cropped image samples in Nordland. Each row of images is cropped from the same image. The left to right columns represent spring 0%, winter 4.2%, winter 8.3%, and winter 12.5%.

We conduct these experiments on the following set of datasets.

  • Nordland: Spring and winter datasets are used to form a pair with only condition change as the baseline of the Nordland dataset.

  • Nordland: Spring 0% is paired with winter 4.2%, winter 8.3%, and winter 12.5% to construct a set of datasets with both viewpoint and condition change.

  • Gardens Point: Day-left and night-right are used to form a pair with both condition change and viewpoint change.

For comparison, we also conducted several tests on SeqSLAM on the first two sets of datasets and on Change Removal on the last one.

Fig. 4 shows the resulting precision-recall curves on the Nordland dataset, and Table 3 shows the runtime of each method as ds varies.

Fig. 4(a) and Fig. 4(c) show that SeqSLAM (Milford and Wyeth, 2012) and SeqCNNSLAM (conv3) exhibit comparable performance against condition change only and present slightly better robustness than SeqCNNSLAM (pool5) when ds is set to the same value. However, as the viewpoint shift increases, SeqCNNSLAM (pool5) achieves overwhelming performance compared with SeqSLAM and SeqCNNSLAM (conv3), as illustrated in Fig. 4(d) to Fig. 4(l), especially at the 12.5% shift.

Fig. 6 shows the best performance of Change Removal (Lowry and Milford, 2015) (green line) on the day-left and night-right parts of the Gardens Point dataset. Evidently, SeqCNNSLAM(pool5) achieves better performance than Change Removal: when ds is set to the same value, SeqCNNSLAM(pool5) achieves higher recall while precision is maintained at 100%.

From these experiments, we conclude that SeqSLAM (Milford and Wyeth, 2012) and SeqCNNSLAM (conv3) are suitable for dealing with scenes of severe condition change but minor viewpoint change. When a scene contains severe condition and viewpoint change, SeqCNNSLAM (pool5) is the more sensible choice compared with the other methods.

Figure 6: Performance of SeqCNNSLAM and Change Removal in the Gardens Point dataset with both viewpoint and condition change.

5 Approaches to Realize Real-Time SeqCNNSLAM

Besides viewpoint and condition invariance, real-time performance is another important metric for LCD. For SeqCNNSLAM, the time to calculate the difference matrix is the key limiting factor for large-scale scenes, as its runtime is proportional to the square of the number of images in the dataset. In this section, we provide an efficient acceleration method for SeqCNNSLAM (A-SeqCNNSLAM) that reduces the number of candidate matching images for each image. Then, we present an online method to select the parameters of A-SeqCNNSLAM to enable its applicability to unseen scenes.

5.1 A-SeqCNNSLAM: Acceleration method of SeqCNNSLAM

SeqCNNSLAM (pool5) shows satisfactory performance when facing viewpoint and condition change simultaneously, but its runtime (illustrated in Table 3) is too long. The reason is that, for any image in the dataset, SeqCNNSLAM (pool5) performs a literal sweep through the entire difference matrix to find the best matching sequence, similar to SeqSLAM (Milford and Wyeth, 2012). To obtain the entire difference matrix, the computer needs to calculate the Euclidean distance between the CNN descriptors of every pair of images in the dataset. So if the dataset contains N images, the time complexity of obtaining the difference matrix is proportional to the square of N. Furthermore, the LCD algorithm must search N candidates to find the matching image for each image. Evidently, as the number of images increases, the overhead of SeqCNNSLAM(pool5) grows at a formidable rate. Hence, directly using SeqCNNSLAM(pool5) may not be suitable for large-scale scenes.

Algorithm            Shift (pixels)   ds=10      ds=20      ds=60      ds=80      ds=100
SeqCNNSLAM(conv3)    0                4111.034s  4131.820s  3988.761s  4035.855s  4051.425s
                     20               4161.219s  4155.817s  4155.309s  4163.036s  4160.891s
                     40               4153.573s  4159.118s  4163.875s  4164.591s  4164.014s
                     60               4160.501s  4154.367s  4162.638s  4157.301s  4162.374s
SeqCNNSLAM(pool5)    0                626.554s   641.893s   649.681s   653.731s   654.196s
                     20               640.814s   641.955s   644.552s   646.598s   647.785s
                     40               640.289s   641.947s   644.278s   645.876s   646.537s
                     60               640.013s   641.483s   644.072s   646.107s   646.798s
Table 3: Runtime of SeqCNNSLAM

The movement of a robot is continuous in both time and space; hence, adjacently collected images are highly similar. Therefore, we may infer that if an image A is matched with another image B, the images adjacent to A are also very likely to find their matching images in the neighborhood of image B.

Given this relationship between adjacent images, we greatly accelerate SeqCNNSLAM(pool5) by reducing the number of candidate matching images. For a certain image, we first choose K images as its reference images. Then, we select a collection of Num images around each reference image, where Num is denoted as the size of the matching range. In this way, we obtain at most K × Num candidate matching images, considering that the matching ranges of different reference images may overlap. These candidate matching images correspond to the images whose flag value is set in Algorithm 4.

To be more specific, for the i-th image, the reference images are the K images whose matching sequences have the shortest distances to the (i-1)-th image. By setting the j-th reference image as the middle point, as shown in Fig. 7, we choose images on both sides of the reference image to construct the j-th matching range of Num images. For example, when we set K = 2, as shown in Fig. 7, the locations of the candidate matching ranges for the current image depend only on the serial numbers of the two images most similar to the (i-1)-th image.

As illustrated in Algorithm 4, from steps 2 to 4, A-SeqCNNSLAM is initialized by calculating the first ds columns of the difference matrix and sweeping through these columns to determine the best matching sequence of the first image. Meanwhile, the first K images most similar to it are recorded, and these images are set as the middle images of the matching ranges of Num images for the next image by setting their flags to 1, as shown in steps 6 to 16, when i is equal to 1.

Require: D: dataset for LCD containing N images; I_i: input images; GT(i): the ground truth of the matching image's serial number of I_i; F_i: CNN descriptor of I_i; ds: sequence length; D: difference matrix with elements D(a, b); dist(X, Y): Euclidean distance between X and Y; K: the number of matching ranges for an image; j: serial number of a matching range; Num: the length of each matching range; sort(·): sort a vector in ascending order; flag(F_j): flag value of CNN descriptor F_j; V: trajectory speed.

Ensure: M(i): the matching image's serial number of I_i determined by the LCD algorithm.

01: Initialize: set ds, V, K, and Num; flag(F_j) ← 1 for all j
02: for i = 1 to N, j = 1 to ds:
03:   D(i, j) ← dist(F_i, F_j)
04: end for
05: for i = 1 to N:
06:   for each j with flag(F_j) = 1:
07:     S(i, j) ← Cal-Seq-Dif(i, j, V)
08:   end for
09:   M(i) ← arg min_j S(i, j)
10:   sort S(i, ·) and record the K reference images j_1, ..., j_K with the smallest sums
11:   reset all flags: flag(F_j) ← 0 for all j
12:   for k = 1 to K:
13:     for m = j_k − Num/2 to j_k + Num/2:
14:       flag(F_m) ← 1
15:     end for
16: end for

Algorithm 4 A-SeqCNNSLAM

As illustrated in steps 6 to 16 of Algorithm 4, when the image number i is greater than 1, the algorithm determines the matching image only from the matching ranges rather than from the entire dataset. Hence, for given values of K, Num, and ds, regardless of how many images the dataset contains, we only need to search at most K × Num candidates to find the matching image for an image. Besides, because of the robustness of SeqCNNSLAM (pool5), the reference images are likely to be in adjacent locations, which results in some overlap between the matching ranges, as in the instance shown in Fig. 8. Therefore, the actual number of candidate matching images for an image is likely to be less than K × Num. Thus, the accelerated algorithm has good scalability and can easily deal with large-scale scenes. Additionally, in order to reduce the accumulated error, we reinitialize the algorithm periodically by calculating the full ds columns again.
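The range-restriction step can be sketched as follows. This is a simplified illustration with a hypothetical helper name, where `prev_best` holds the K reference indices recorded for the previous image:

```python
def candidate_indices(prev_best, num, n_images):
    """Union of the matching ranges centred on the previous image's K
    reference matches: Num images around each reference index, clipped
    to the dataset bounds and deduplicated via a set, so overlapping
    ranges shrink the actual candidate count below K * Num."""
    half = num // 2
    candidates = set()
    for j in prev_best:
        lo, hi = max(0, j - half), min(n_images - 1, j + half)
        candidates.update(range(lo, hi + 1))
    return sorted(candidates)

# Two overlapping ranges: K = 2, Num = 7 yields fewer than 14 candidates.
cands = candidate_indices([100, 103], num=7, n_images=3476)
```

Only these indices are then scored with the sequence-difference sweep, so the per-image cost is bounded by K × Num instead of the dataset size.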

Figure 7: Example of candidate matching ranges for the i-th image; here K and Num are both set to 2.

To verify our method, we implement it and compare its performance with that of SeqCNNSLAM (pool5) for two typical values of ds (ds = 80 and 100). Satisfactory performance is achieved on the Nordland dataset. Fig.9 shows the results of these experiments, and Table 4 summarizes the runtime for different ds, K, and Num. As K and Num increase, the runtime of the algorithm increases, but only gradually.

ds    K \ Num      6         16         40
80       6      98.373s   103.865s   114.847s
        10     106.392s   111.421s   122.642s
        30     120.258s   129.351s   145.465s
100      6     130.137s   137.691s   147.377s
        10     139.198s   141.221s   155.016s
        30     153.255s   160.601s   183.555s
Table 4: Runtime of A-SeqCNNSLAM

The experiments consistently show that our accelerated method achieves comparable performance even when K and Num are set to small values. For instance, when ds = 100, K = 10, and Num = 6, A-SeqCNNSLAM (pool5) and SeqCNNSLAM (pool5) exhibit consistent performance, and the best matching image among 3476 candidates is identified within 40 ms on a standard desktop machine with an Intel i7 processor and 8 GB memory. This corresponds to a speed-up factor of 4.65 using a non-optimized Matlab implementation based on the OpenSeqSLAM implementation Niko et al. (2013). Table 4 summarizes the required time for the main algorithmic steps; A-SeqCNNSLAM (pool5) achieves significantly better real-time performance on large-scale maps.

Figure 8: An instance in which matching ranges exhibit some overlap
Figure 9: Performance of A-SeqCNNSLAM (pool5) on the Nordland dataset with changed condition and changed viewpoint by 12.5% shift, with variable ds, K, and Num. A-SeqCNNSLAM (pool5) achieves performance comparable to that of SeqCNNSLAM (pool5) when K and Num are set to small values, such as K = 10 and Num = 6.

5.2 O-SeqCNNSLAM: Online Learning Algorithm to Choose K for A-SeqCNNSLAM

Although we provide an efficient method to accelerate the LCD algorithm by reducing the number of candidate matching images through the parameters K and Num, these two parameters are highly dataset-specific and depend largely on the trajectory of the robot. Thus, they are difficult to apply to unseen scene data or different robot configurations, and A-SeqCNNSLAM is not yet practical on its own. The same dilemma of parameter selection also exists in Change Removal Lowry and Milford (2015) and SeqSLAM Milford and Wyeth (2012); Niko et al. (2013).

This section provides the O-SeqCNNSLAM method for online parameter selection that allows the A-SeqCNNSLAM algorithm to tune its parameters by observing the unknown environment directly.

Fig.9 shows that when Num is equal to or greater than 16, the performance of accelerated SeqCNNSLAM is only marginally affected by Num. Hence, we can set Num to a large value and provide a method to adjust parameter K online. The historical information of an image's matching location can be exploited for this purpose.

In A-SeqCNNSLAM, we provide K matching ranges for each image, but we find that the serial number of the matching range where the best matching image is located is often less than K. For instance, if an image A's matching image is located in its k-th matching range, k is usually less than K. With this clue, we provide an approach to adjust the value of K online based on the locations of the last few images' matching images. That is, for each image, we record the serial number of the matching range in which its matching image is located, and this serial number is regarded as its image matching label (IML), corresponding to step 5 of Algorithm 5. Then, for the current image, the value of K is set to the maximum of the IMLs of its last few images.
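The IML bookkeeping described above can be sketched as a small hypothetical helper: given the ranked range centers from the previous step, it returns the serial number of the first range that contains the best matching image:

```python
def image_matching_label(centers, Num, best):
    """Return the IML: the 1-based serial number of the first matching
    range (of width Num around each center) containing the best match,
    or None if the match lies outside every range."""
    for k, c in enumerate(centers, start=1):
        if c - Num // 2 <= best <= c + Num // 2:
            return k
    return None
```

K for the next image would then be set to the maximum IML recorded over the last few images.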

Require: D: dataset for LCD containing N images; I_i: the i-th input image; GT_i: the ground truth of the matching image's serial number of I_i; C_j: CNN descriptor of image j; ds: sequence length; K: the number of matching ranges for an image; K_init: the initial value of K; k: serial number of a matching range; Num: the length of each matching range.

Ensure: K: the online-adjusted number of matching ranges.

01 Initial: K = K_init;

02 for each of the last few processed images I_j
03     determine the matching image of I_j within its K matching ranges, as in Algorithm 4;
04     if the matching image is located in the k-th matching range
05         IML_j = k;  // image matching label
06         record IML_j for the current image;
07     end if
08 end for

09 calculate ChangeDegree(i) of the current image I_i by Eq.(4);
10 if ChangeDegree(i) > 0.9 and ChangeDegree(i) < 1.1
11     K = the maximum of the recorded IMLs;  // stable scene: exploit history
12     keep the recorded IMLs for the next image;
13 end if
14 if ChangeDegree(i) > 1.1 or ChangeDegree(i) < 0.9
15     K = K_init;  // drastic scene change: reinitialize
16     clear the recorded IMLs;
17 end if

Algorithm 5 Online Adjustment of K
ds        80          100
time    121.494s    151.932s
Table 5: Runtime of O-SeqCNNSLAM

However, when the scene changes drastically, the historical information of an image's matching location can no longer serve as the basis for setting K for the next batch of images. Thus, we define a metric called Change Degree, given in Eq.(4), to measure the scene change. The numerator of Eq.(4) is the sum of the Euclidean distances between the current image and its last 10 images, and the denominator is the sum of the Euclidean distances between the last image and its last 10 images.

\[ \mathrm{ChangeDegree}(i) = \frac{\sum_{j=1}^{10} d(C_i, C_{i-j})}{\sum_{j=1}^{10} d(C_{i-1}, C_{i-1-j})} \tag{4} \]
Given that most steps of O-SeqCNNSLAM are identical to those of A-SeqCNNSLAM (except for the online adjustment of K), only the concrete steps for adjusting K online are provided in Algorithm 5 to avoid duplication. First, the Change Degree of the current image is calculated. If the Change Degree is larger than 0.9 and smaller than 1.1, K is set to the maximum of the last images' IMLs in step 11. However, if the Change Degree is larger than 1.1 or smaller than 0.9, K is reinitialized with its initial value, as in step 15.
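Assuming d(·, ·) is the Euclidean distance between pool5 descriptors, Eq.(4) and the two-threshold rule of Algorithm 5 can be sketched as follows (helper names are hypothetical):

```python
import numpy as np

def change_degree(descs, i, window=10):
    """Change Degree of Eq.(4): the summed distances from image i to its
    last `window` images, over the same sum for image i - 1."""
    num = sum(np.linalg.norm(descs[i] - descs[i - j]) for j in range(1, window + 1))
    den = sum(np.linalg.norm(descs[i - 1] - descs[i - 1 - j]) for j in range(1, window + 1))
    return num / den

def update_K(descs, i, recent_imls, K_init):
    """Online K update: reuse the recent IMLs while the scene is stable
    (0.9 < ChangeDegree < 1.1), otherwise fall back to K_init."""
    cd = change_degree(descs, i)
    if 0.9 < cd < 1.1:
        return max(recent_imls)   # step 11: largest recent IML
    return K_init                 # step 15: reinitialize
```

A Change Degree near 1 means consecutive images differ from their recent history by about the same amount, so the matching location is drifting smoothly and the IML history remains informative.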

To compare with A-SeqCNNSLAM, we also test the utility of O-SeqCNNSLAM on the Nordland dataset with the 12.5% shift; ds is set to 80 as well as 100. Fig.10 and Table 5 show the resulting precision-recall curves and runtimes for O-SeqCNNSLAM when Num is set to 16 and the initial K is set to a large value of 30. Compared with A-SeqCNNSLAM, O-SeqCNNSLAM also delivers robust performance, and the runtimes of the two are close. O-SeqCNNSLAM is thus an effective option for actual scenes.

Figure 10: Performance of O-SeqCNNSLAM (pool5) on the Nordland dataset with changed conditions and changed viewpoint by 12.5% shift. O-SeqCNNSLAM (pool5) achieves performance comparable to that of A-SeqCNNSLAM (pool5).

6 Conclusions and Future Work

Thorough research was conducted on the utility of SeqCNNSLAM, a combination of CNN features (especially pool5) and a sequence matching method, for the task of LCD. We demonstrated that directly using the pool5 descriptor yields robust performance against combined viewpoint and condition change with the aid of SeqSLAM. A-SeqCNNSLAM was also presented to make large-scale SeqCNNSLAM feasible by reducing the matching range, and O-SeqCNNSLAM adds online adjustment of A-SeqCNNSLAM's parameter K, making it applicable to unseen places.

In subsequent work, we plan to apply the insights gained from this study to provide a complete method that adjusts all parameters of A-SeqCNNSLAM simultaneously based on its operating status, and to provide a new distance metric to replace the Euclidean distance to avoid the curse of dimensionality Beyer et al. (1999). Additionally, we will explore how to train a CNN specifically for LCD under combined viewpoint and condition change to improve LCD performance.


  • Sivic and Zisserman (2003) Sivic J, Zisserman A. Video Google: A text retrieval approach to object matching in videos. In: IEEE International Conference on Computer Vision. IEEE; 2003, p. 1470–7.
  • Cummins and Newman (2008) Cummins M, Newman P. FAB-MAP: Probabilistic localization and mapping in the space of appearance. Int J Rob Res 2008;27(6):647–65.
  • Milford and Wyeth (2012) Milford MJ, Wyeth GF. SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights. In: IEEE International Conference on Robotics and Automation. IEEE; 2012, p. 1643–9.
  • Krizhevsky et al. (2012) Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems. 2012, p. 1097–105.
  • Chatfield et al. (2014) Chatfield K, Simonyan K, Vedaldi A, Zisserman A. Return of the devil in the details: Delving deep into convolutional nets. In: British Machine Vision Conference. 2014.
  • Hou et al. (2015) Hou Y, Zhang H, Zhou S. Convolutional neural network-based image representation for visual loop closure detection. In: IEEE International Conference on Information and Automation. IEEE; 2015, p. 2238–45.
  • Sünderhauf et al. (2015) Sünderhauf N, Shirazi S, Dayoub F, Upcroft B, Milford M. On the performance of ConvNet features for place recognition. In: IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE; 2015, p. 4297–304.
  • Suenderhauf et al. (2015) Suenderhauf N, Shirazi S, Jacobson A, Dayoub F, Pepperell E, Upcroft B, et al. Place recognition with ConvNet landmarks: Viewpoint-robust, condition-robust, training-free. In: Robotics: Science and Systems. 2015.
  • Lowry et al. (2016) Lowry S, Sunderhauf N, Newman P, Leonard JJ, Cox DD, Corke P, et al. Visual place recognition: A survey. IEEE Trans Rob 2016;32(1):1–19.
  • Zhou et al. (2014) Zhou B, Lapedriza A, Xiao J, Torralba A, Oliva A. Learning deep features for scene recognition using places database. In: Advances in Neural Information Processing Systems. 2014, p. 487–95.
  • Lowry and Milford (2015) Lowry S, Milford MJ. Change removal: Robust online learning for changing appearance and changing viewpoint. In: IEEE International Conference on Robotics and Automation Workshops. 2015.
  • Barfoot et al. (2013) Barfoot TD, Kelly J, Sibley G. Special issue on long-term autonomy. Int J Rob Res 2013;(14):1609–10.
  • Bay et al. (2006) Bay H, Tuytelaars T, Van Gool L. SURF: Speeded up robust features. In: European Conference on Computer Vision. Springer; 2006, p. 404–17.
  • Niko et al. (2013) Sünderhauf N, Neubert P, Protzel P. Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons. In: IEEE International Conference on Robotics and Automation Workshops. IEEE; 2013.
  • Wan et al. (2014) Wan J, Wang D, Hoi SCH, Wu P, Zhu J, Zhang Y, et al. Deep learning for content-based image retrieval: A comprehensive study. In: ACM International Conference on Multimedia. ACM; 2014, p. 157–66.
  • Razavian et al. (2014) Razavian AS, Azizpour H, Sullivan J, Carlsson S. CNN features off-the-shelf: An astounding baseline for recognition. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2014, p. 512–9.
  • Jia et al. (2014) Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, et al. Caffe: Convolutional architecture for fast feature embedding. In: ACM International Conference on Multimedia. ACM; 2014, p. 675–8.
  • Beyer et al. (1999) Beyer KS, Goldstein J, Ramakrishnan R, Shaft U. When is "nearest neighbor" meaningful? In: Database Theory—ICDT'99. Springer; 1999, p. 217–35.