# Deep Self-Convolutional Activations Descriptor for Dense Cross-Modal Correspondence

## Abstract

We present a novel descriptor, called deep self-convolutional activations (DeSCA), designed for establishing dense correspondences between images taken under different imaging modalities, such as different spectral ranges or lighting conditions. Motivated by descriptors based on local self-similarity (LSS), we formulate a novel descriptor by leveraging LSS in a deep architecture, leading to better discriminative power and greater robustness to non-rigid image deformations than state-of-the-art cross-modality descriptors. DeSCA first computes self-convolutions over a local support window for randomly sampled patches, and then builds self-convolution activations by performing average pooling through a hierarchical formulation within a deep convolutional architecture. Finally, the feature responses on the self-convolution activations are encoded through spatial pyramid pooling in a circular configuration. In contrast to existing convolutional neural network (CNN) based descriptors, DeSCA is training-free (i.e., randomly sampled patches are utilized as the convolution kernels), is robust to cross-modal imaging, and can be densely computed in an efficient manner that significantly reduces computational redundancy. The state-of-the-art performance of DeSCA on challenging cases of cross-modal image pairs is demonstrated through extensive experiments.

## 1 Introduction

In many computer vision and computational photography applications, images captured under different imaging modalities are used to supplement the data provided in color images. Typical examples of other imaging modalities include near-infrared [1] and dark flash [4] photography. More broadly, photos taken under different imaging conditions, such as different exposure settings [5], blur levels [6], and illumination [8], can also be considered as cross-modal [9].

Establishing dense correspondences between cross-modal image pairs is essential for combining their disparate information. Although powerful global optimizers may help to improve the accuracy of correspondence estimation to some extent [11], they face inherent limitations without the help of suitable matching descriptors [13]. The most popular local descriptor is the scale-invariant feature transform (SIFT) [14], which provides relatively good matching performance when there are small photometric variations. However, conventional descriptors such as SIFT often fail to capture reliable matching evidence in cross-modal image pairs due to their different visual properties [9].

Recently, convolutional neural network (CNN) based features [15] have emerged as a robust alternative with high discriminative power. However, CNN-based descriptors cannot satisfactorily deal with severe cross-modality appearance differences, since they use convolutional kernels shared across images, which leads to inconsistent responses similar to conventional descriptors [19]. Furthermore, they do not scale well for dense correspondence estimation due to their high computational complexity. Though recent work [21] proposes an efficient method that extracts dense outputs through deep CNNs, it does not extract dense CNN features for all pixels individually. More seriously, such methods are usually designed to perform a specific task only, *e.g.*, semantic segmentation, not to provide a general-purpose descriptor like ours.

To address the problem of cross-modal appearance changes, feature descriptors have been proposed based on local self-similarity (LSS) [22], which is motivated by the notion that the geometric layout of local internal self-similarities is relatively insensitive to imaging properties. The state-of-the-art descriptor for cross-modal dense correspondence, called dense adaptive self-correlation (DASC) [10], makes use of LSS and has demonstrated high accuracy and speed on cross-modal image pairs. However, DASC suffers from two significant shortcomings. One is its limited discriminative power due to a limited set of patch sampling patterns used for modeling internal self-similarities. In fact, the matching performance of DASC may fall well short of CNN-based descriptors on images that share the same modality. The other major shortcoming is that the DASC descriptor does not provide the flexibility to deal with non-rigid deformations, which leads to lower robustness in matching.

In this paper, we introduce a novel descriptor, called deep self-convolutional activations (DeSCA), that overcomes the shortcomings of DASC while providing dense cross-modal correspondences. This work is motivated by the observation that local self-similarity can be formulated in a deep convolutional architecture to enhance discriminative power and gain robustness to non-rigid deformations. Unlike the DASC descriptor that selects patch pairs within a support window and calculates the self-similarity between them, we compute self-convolutional activations that more comprehensively encode the intrinsic structure by calculating the self-similarity between randomly selected patches and all of the patches within the support window. These self-convolutional responses are aggregated through spatial pyramid pooling in a circular configuration, which yields a representation less sensitive to non-rigid image deformations than the fixed patch selection strategy used in DASC. To further enhance the discriminative power and robustness, we build hierarchical self-convolutional layers resembling a deep architecture used in CNN, together with nonlinear and normalization layers. For efficient computation of DeSCA over densely sampled pixels, we calculate the self-convolutional activations through fast edge-aware filtering.

DeSCA resembles a CNN in its deep, multi-layer, and convolutional structure. In contrast to existing CNN-based descriptors, DeSCA requires no training data for learning convolutional kernels, since the convolutions are defined as the local self-similarity between pairs of image patches, which yields its robustness to cross-modal imaging. Fig. ? illustrates the robustness of DeSCA for image pairs across non-rigid deformations and illumination changes. In the experimental results, we show that DeSCA outperforms existing area-based and feature-based descriptors on various benchmarks.

## 2 Related Work

#### Feature Descriptors

Conventional gradient-based descriptors, such as SIFT [14] and DAISY [23], as well as intensity comparison-based binary descriptors, such as BRIEF [24], have shown limited performance in dense correspondence estimation between cross-modal image pairs. Besides these handcrafted features, several attempts have been made to derive features from large-scale datasets using machine learning algorithms [15]. A few of these methods use deep convolutional neural networks (CNNs) [26], which have revolutionized image-level classification, to learn discriminative descriptors for local patches. For designing explicit feature descriptors based on a CNN architecture, intermediate activations are extracted as the descriptor [15], and have been shown to be effective for this patch-level task. However, even though CNN-based descriptors encode a discriminative structure with a deep architecture, they have inherent limitations in cross-modal image correspondence because they are derived from convolutional layers using shared patches or volumes [19]. Furthermore, they cannot in practice provide dense descriptors in the image domain due to their prohibitively high computational complexity.

To estimate cross-modal correspondences, variants of the SIFT descriptor have been developed [27], but these gradient-based descriptors maintain an inherent limitation similar to SIFT in dealing with image gradients that vary differently between modalities. For illumination invariant correspondences, Wang *et al.* proposed the local intensity order pattern (LIOP) descriptor [28], but severe radiometric variations may often alter the relative order of pixel intensities. Simo-Serra *et al.* proposed the deformation and light invariant (DaLI) descriptor [29] to provide high resilience to non-rigid image transformations and illumination changes, but it cannot provide dense descriptors in the image domain due to its high computational time.

Shechtman and Irani introduced the LSS descriptor [22] for the purpose of template matching, and achieved impressive results in object detection and retrieval. By employing LSS, many approaches have tried to solve for cross-modal correspondences [30]. However, none of these approaches scale well to dense matching in cross-modal images due to low discriminative power and high complexity. Inspired by LSS, Kim *et al.* recently proposed the DASC descriptor to estimate cross-modal dense correspondences [10]. Though it can provide satisfactory performance, it is not able to handle non-rigid deformations and has limited discriminative power due to its fixed patch pooling scheme.

#### Area-Based Similarity Measures

A popular measure for registration of cross-modal medical images is mutual information (MI) [33], based on the entropy of the joint probability distribution function, but it provides reliable performance only for variations undergoing a global transformation [34]. Although cross-correlation based methods such as adaptive normalized cross-correlation (ANCC) [35] produce satisfactory results for locally linear variations, they are less effective against more substantial modality variations. Robust selective normalized cross-correlation (RSNCC) [9] was proposed for dense alignment between cross-modal images, but as an intensity-based measure it can still be sensitive to cross-modal variations. Recently, DeepMatching [36] was proposed to compute dense correspondences by employing a CNN-like hierarchical pooling scheme, but it is not designed to handle cross-modal matching.

## 3 Background

Let us define an image as $f_i$ for pixel $i \in \mathcal{I}$, where $\mathcal{I} \subset \mathbb{N}^2$ is a discrete image domain. Given the image $f_i$, a dense descriptor $\mathcal{D}_i \in \mathbb{R}^L$ with a feature dimension of $L$ is defined on a local support window $\mathcal{R}_i$ of size $M \times M$ centered at pixel $i$.

Unlike conventional descriptors, which rely on common visual properties across images such as color and gradient, LSS-based descriptors provide robustness to different imaging modalities since internal self-similarities are preserved across cross-modal image pairs [22]. As shown in Fig. ?(a), the LSS discretizes the correlation surface on a log-polar grid, generates a set of bins, and then stores the maximum correlation value of each bin. Formally, it generates an $L^{\mathrm{LSS}}$-dimensional feature vector $\mathcal{D}^{\mathrm{LSS}}_i = \bigcup_{l} d_i(l)$ for $l \in \{1, \dots, L^{\mathrm{LSS}}\}$, with $d_i(l)$ computed as

$$ d_i(l) = \max\nolimits_{j \in \mathcal{B}(l)} \mathcal{S}_i(j), \tag{1} $$

where the log-polar bins are defined as $\mathcal{B}(l) = \{ j \,|\, j \in \mathcal{R}_i,\ \rho_{k-1} < \| j - i \|_2 \le \rho_k,\ \theta_{m-1} < \angle(j - i) \le \theta_m \}$ with a log radius $\rho_k$ for $k \in \{1, \dots, N_\rho\}$ and a quantized angle $\theta_m$ for $m \in \{1, \dots, N_\theta\}$, with $\rho_0 = 0$ and $\theta_0 = 0$. $\mathcal{S}_i(j)$ is a correlation surface between a patch $\mathcal{F}_i$ centered at $i$ and a patch $\mathcal{F}_j$ centered at $j$, each of size $P \times P$, computed using the sum of squared differences. Each pair of $k$ and $m$ is associated with a unique index $l$. Though LSS provides robustness to modality variations, its significant computation does not scale well for estimating dense correspondences in cross-modal images.
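To make the log-polar binning concrete, here is a minimal NumPy sketch of an LSS-style descriptor for one pixel. It assumes a grayscale float image with the center far enough from the border; the function name, parameter values, and the exponential mapping from SSD to correlation are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def lss_descriptor(img, center, patch=5, window=41, n_rad=3, n_ang=12, sigma=0.25):
    """LSS sketch: SSD correlation surface + log-polar max pooling."""
    half_p, half_w = patch // 2, window // 2
    cy, cx = center
    ref = img[cy - half_p:cy + half_p + 1, cx - half_p:cx + half_p + 1]

    # Correlation surface over the support window (SSD mapped to similarity).
    ys, xs = np.mgrid[-half_w:half_w + 1, -half_w:half_w + 1]
    corr = np.zeros(ys.shape)
    for dy, dx in zip(ys.ravel(), xs.ravel()):
        y, x = cy + dy, cx + dx
        cand = img[y - half_p:y + half_p + 1, x - half_p:x + half_p + 1]
        corr[dy + half_w, dx + half_w] = np.exp(-np.sum((ref - cand) ** 2) / sigma)

    # Log-polar binning: keep the maximum correlation of each bin
    # (the trivial self-match at the center falls below the first radius).
    rad, ang = np.hypot(ys, xs), np.mod(np.arctan2(ys, xs), 2 * np.pi)
    r_edges = np.logspace(0, np.log10(half_w + 1), n_rad + 1)
    desc = np.zeros(n_rad * n_ang)
    for k in range(n_rad):
        for m in range(n_ang):
            mask = ((rad >= r_edges[k]) & (rad < r_edges[k + 1]) &
                    (ang >= 2 * np.pi * m / n_ang) & (ang < 2 * np.pi * (m + 1) / n_ang))
            if mask.any():
                desc[k * n_ang + m] = corr[mask].max()
    return desc
```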

Inspired by the LSS [22], the DASC [10] encodes the similarity between patch-wise receptive fields sampled from a log-polar circular point set $\Gamma_i$, as shown in Fig. ?(b). It is defined such that $\Gamma_i \subset \mathcal{R}_i$ has a higher density of points near the center pixel $i$, similar to DAISY [23]. The DASC is encoded with a set of similarities between patch pairs of sampling patterns selected from $\Gamma_i$ such that $\mathcal{D}^{\mathrm{DASC}}_i = \bigcup_l d_i(l)$ for $l \in \{1, \dots, L^{\mathrm{DASC}}\}$:

$$ d_i(l) = \exp\left( - \left( 1 - \mathcal{C}(\mathcal{F}_{s_{i,l}}, \mathcal{F}_{t_{i,l}}) \right) / \sigma_c \right), \tag{2} $$

where $s_{i,l}$ and $t_{i,l}$ are the selected sampling patterns from $\Gamma_i$ at pixel $i$. The patch-wise similarity is computed with an exponential function with a bandwidth of $\sigma_c$, which has been widely used for robust estimation [37]. $\mathcal{C}(\mathcal{F}_{s_{i,l}}, \mathcal{F}_{t_{i,l}})$ is computed using an adaptive self-correlation measure. While the DASC descriptor has shown satisfactory results for cross-modal dense correspondence [10], its randomized receptive field pooling has limited descriptive power and does not accommodate non-rigid deformations.

## 4 The DeSCA Descriptor

### 4.1 Motivation and Overview

Inspired by DASC [10], our DeSCA descriptor also measures an adaptive self-correlation between two patches. We, however, adopt a different strategy for selecting patch pairs, and build self-convolutional activations that more comprehensively encode self-similar structure to improve the discriminative power and the robustness to non-rigid image deformation (Section 4.2). Motivated by the deep architecture of CNN-based descriptors [19], we further build hierarchical self-convolution activations to enhance the robustness of the DeSCA descriptor (Section 4.4). Densely sampled descriptors are efficiently computed over an entire image using a method based on fast edge-aware filtering (Section 4.3). Fig. ?(c) illustrates the DeSCA descriptor, which incorporates a circular spatial pyramid pooling on hierarchical self-convolutional activations.

### 4.2 SiSCA: Single Self-Convolutional Activation

To simultaneously leverage the benefits of the self-similarity in DASC [10] and the deep convolutional architecture of CNNs while overcoming the limitations of each method, our approach builds self-convolutional activations. Unlike DASC [10], the feature response is obtained through circular spatial pyramid pooling. We start by describing a single-layer version of DeSCA, which we denote as SiSCA.

#### Self-Convolutions

To build a self-convolutional activation, we randomly select $N$ points from a log-polar circular point set $\Gamma_i$ defined within a local support window $\mathcal{R}_i$. We convolve the patch $\mathcal{F}_{s_l}$ centered at the $l$-th sampled point $s_l$ with all patches $\mathcal{F}_j$, which is defined for $l \in \{1, \dots, N\}$ and $j \in \mathcal{R}_i$ as in Fig. ?(b). Similar to DASC [10], the similarity between patch pairs is measured using an adaptive self-correlation, which is known to be effective in addressing cross-modality. With $i$ omitted for simplicity, $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$ is computed as follows:

$$ \mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j) = \frac{ \sum\nolimits_{s', j'} \omega_{s_l s'} \left( f_{s'} - \overline{f}_{s_l} \right) \left( f_{j'} - \overline{f}_{j} \right) }{ \sqrt{ \sum\nolimits_{s'} \omega_{s_l s'} \left( f_{s'} - \overline{f}_{s_l} \right)^2 } \sqrt{ \sum\nolimits_{j'} \omega_{s_l s'} \left( f_{j'} - \overline{f}_{j} \right)^2 } }, \tag{3} $$

for corresponding positions $s' \in \mathcal{F}_{s_l}$ and $j' \in \mathcal{F}_j$. $\overline{f}_{s_l}$ and $\overline{f}_{j}$ represent weighted averages of $f_{s'}$ and $f_{j'}$. Similar to DASC [10], the weight $\omega_{s_l s'}$ represents how similar two pixels $s_l$ and $s'$ are, and is normalized, *i.e.*, $\sum\nolimits_{s'} \omega_{s_l s'} = 1$. It may be defined using any form of edge-aware weighting [38].
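As a sanity check on (Equation 3), the following sketch evaluates the adaptive self-correlation (an edge-aware weighted NCC) for one patch pair. The weight construction is left abstract, and the epsilon guard is an implementation convenience, not part of the formulation.

```python
import numpy as np

def adaptive_self_correlation(patch_s, patch_j, weights, eps=1e-12):
    """Edge-aware weighted NCC between two patches, as in Eq. (3).

    `weights` should come from an edge-aware kernel around the reference
    patch (e.g. bilateral or guided-filter weights); here it only needs
    to be non-negative, and is renormalized to sum to 1.
    """
    w = weights / weights.sum()
    mu_s = np.sum(w * patch_s)                   # weighted averages
    mu_j = np.sum(w * patch_j)
    cov = np.sum(w * (patch_s - mu_s) * (patch_j - mu_j))
    var_s = np.sum(w * (patch_s - mu_s) ** 2)
    var_j = np.sum(w * (patch_j - mu_j) ** 2)
    return cov / (np.sqrt(var_s * var_j) + eps)
```

Evaluating this for a fixed random patch $\mathcal{F}_{s_l}$ against every patch $\mathcal{F}_j$ in the support window yields one self-convolutional surface.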

#### Circular Spatial Pyramid Pooling

To encode the feature responses on the self-convolutional surface, we propose a circular spatial pyramid pooling (C-SPP) scheme, which pools the responses within each hierarchical spatial bin, similar to spatial pyramid pooling (SPP) [20] but in a circular configuration. Note that many existing descriptors also adopt a circular pooling scheme thanks to its robustness stemming from a higher pixel density near the central pixel [22]. The C-SPP further encodes more structural information.

The circular pyramidal bins $\mathcal{B}(u)$ are defined from the log-polar circular bins $\mathcal{B}(l)$, where $u$ indexes all pyramidal levels $k \in \{1, \dots, K\}$ and all bins in each level, as in Fig. ?. The circular pyramidal bin at the top of the pyramid, *i.e.*, $k = 1$, encompasses all of the bins. At the second level, *i.e.*, $k = 2$, it is defined by dividing the full bin into quadrants. For lower pyramid levels, *i.e.*, $k > 2$, the circular pyramidal bins are defined differently according to whether $k$ is odd or even. For an odd $k$, the bins are defined by dividing bins in the upper level into two parts along the radius. For an even $k$, they are defined by dividing bins in the upper level into two parts with respect to the angle. The set of all circular pyramidal bins is denoted $\{\mathcal{B}(u) \,|\, u \in \{1, \dots, N_B\}\}$, where the number of circular spatial pyramid bins $N_B$ is determined by the number of pyramid levels $K$.

As illustrated in Fig. ?(c), the feature responses are finally max-pooled on the circular pyramidal bins $\mathcal{B}(u)$ of each self-convolutional surface $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$, yielding a feature response

$$ a_i(l, u) = \max\nolimits_{j \in \mathcal{B}(u)} \mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j). \tag{4} $$

This pooling is repeated for all $l \in \{1, \dots, N\}$, yielding accumulated activations $\mathcal{A}_i = \bigcup_v a_i(v)$, where $v$ indexes all pairs $(l, u)$ for $l \in \{1, \dots, N\}$ and $u \in \{1, \dots, N_B\}$.
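A sketch of how the circular pyramidal bins and the C-SPP max pooling could be realized follows. The exact split points along the radius and the angle (here the mean radius/angle of a bin) are illustrative assumptions; the text only specifies that odd levels split along the radius and even levels along the angle.

```python
import numpy as np

def circular_pyramid_bins(window=41, levels=3):
    """Boolean bin masks for C-SPP over a (window x window) surface."""
    half = window // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    rad, ang = np.hypot(ys, xs), np.mod(np.arctan2(ys, xs), 2 * np.pi)
    disk = rad <= half

    bins = [disk]                                            # level 1: whole disk
    prev = [disk & (ang >= q * np.pi / 2) & (ang < (q + 1) * np.pi / 2)
            for q in range(4)]                               # level 2: quadrants
    bins += prev
    for level in range(3, levels + 1):
        cur = []
        for b in prev:
            if level % 2:                                    # odd: radial split
                mid = rad[b].mean()
                cur += [b & (rad <= mid), b & (rad > mid)]
            else:                                            # even: angular split
                mid = ang[b].mean()
                cur += [b & (ang <= mid), b & (ang > mid)]
        bins += cur
        prev = cur
    return bins

def cspp_max_pool(surface, bins):
    """Eq. (4): max-pool one self-convolutional surface over every bin."""
    return np.array([surface[b].max() for b in bins])
```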

Interestingly, LSS [22] also uses the max pooling strategy to mitigate the effects of non-rigid image deformation. However, max pooling in the 2-D self-correlation surface of LSS [22] loses fine-scale matching details as reported in [10]. By contrast, DeSCA employs circular spatial pyramid pooling in the 3-D self-correlation surface that provides a more discriminative representation of self-similarities, thus maintaining fine-scale matching details as well as providing robustness to non-rigid image deformations.

#### Non-linear Gating and Normalization

The final feature responses are passed through a non-linear gating and normalization layer to mitigate the effects of outliers. With the accumulated activations $\mathcal{A}_i$, the single self-convolutional activation (SiSCA) descriptor $\mathcal{D}^{\mathrm{SiSCA}}_i = \bigcup_v d_i(v)$ is computed for $v \in \{1, \dots, N \cdot N_B\}$ through a non-linear gating layer:

$$ d_i(v) = \exp\left( - (1 - a_i(v)) / \sigma_c \right), \tag{5} $$

where $\sigma_c$ is a Gaussian kernel bandwidth. The size of the feature obtained from SiSCA thus becomes $N \cdot N_B$. Finally, $d_i(v)$ for each pixel $i$ is normalized with an L2 norm over all $v$.
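A few lines suffice for this step; the bandwidth value and the epsilon guard below are illustrative, not values from the paper.

```python
import numpy as np

def gate_and_normalize(activations, sigma=0.5):
    """Eq. (5) sketch: exponential gating, then L2 normalization."""
    d = np.exp(-(1.0 - np.asarray(activations)) / sigma)  # suppress outliers
    return d / (np.linalg.norm(d) + 1e-12)
```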

### 4.3 Efficient Computation for Dense Description

The most time-consuming part of DeSCA is in constructing the self-convolutional surfaces $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$ for $l \in \{1, \dots, N\}$ and $j \in \mathcal{R}_i$, where $N \cdot M^2$ evaluations of (Equation 3) are needed for each pixel $i$. Straightforward computation of a weighted summation using $\omega$ in (Equation 3) would require considerable processing, with a computational complexity of $O(I N M^2 P^2)$, where $I$ represents the image size (height $H$ times width $W$). To expedite processing, we utilize fast edge-aware filtering [38] and propose a pre-computation scheme for convolutional surfaces.

Similar to DASC [10], we compute $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$ efficiently by first rearranging the sampling patterns $(s_l, j)$ into reference-biased pairs $(i, i + \tau)$ with $\tau = j - s_l$. $\mathcal{C}(\mathcal{F}_i, \mathcal{F}_{i+\tau})$ can then be expressed as

$$ \mathcal{C}(\mathcal{F}_i, \mathcal{F}_{i+\tau}) = \frac{ \mathcal{G}_1 }{ \sqrt{\mathcal{G}_2} \sqrt{\mathcal{G}_3} }, \tag{6} $$

where $\mathcal{G}_1 = \mathcal{W}(f_i f_{i+\tau}) - \mathcal{W}(f_i)\mathcal{W}(f_{i+\tau})$, $\mathcal{G}_2 = \mathcal{W}(f_i^2) - \mathcal{W}(f_i)^2$, and $\mathcal{G}_3 = \mathcal{W}(f_{i+\tau}^2) - \mathcal{W}(f_{i+\tau})^2$, with $\mathcal{W}(\cdot)$ denoting an edge-aware weighted average. $\mathcal{W}$ can be efficiently computed using any form of fast edge-aware filter [38], with a complexity linear in the image size per sampling pattern. $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$ is then simply obtained from $\mathcal{C}(\mathcal{F}_i, \mathcal{F}_{i+\tau})$ by re-indexing the sampling patterns.
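The factored form of (Equation 6) turns each correlation map into a handful of filtering passes. A minimal sketch follows, using a SciPy box filter as a stand-in for the edge-aware weighted average $\mathcal{W}$ (the guided filter used in the paper would slot in the same way); the wrap-around of `np.roll` at image borders is a simplification.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correlation_map(img, tau, patch=5):
    """Eq. (6) sketch: C(F_i, F_{i+tau}) for every pixel i at once."""
    f = np.asarray(img, dtype=np.float64)
    f_t = np.roll(f, (-tau[0], -tau[1]), axis=(0, 1))   # f_{i+tau}

    W = lambda g: uniform_filter(g, size=patch)         # weighted average stand-in
    mu, mu_t = W(f), W(f_t)
    g1 = W(f * f_t) - mu * mu_t                         # covariance term
    g2 = W(f * f) - mu ** 2                             # variance of f
    g3 = W(f_t * f_t) - mu_t ** 2                       # variance of f shifted
    return g1 / (np.sqrt(np.maximum(g2 * g3, 0)) + 1e-12)
```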

Though we remove the computational dependency on the patch size $P$, $N \cdot M^2$ computations are still needed to obtain the self-convolutional activations, where many sampling pairs are repeated across neighboring pixels. To avoid such redundancy, we first compute the self-convolutional activation with a doubled local support window of size $2M \times 2M$. A doubled local support window is used because $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$ is computed with patches sampled within $\mathcal{R}_i$, and the minimum support window size needed to cover all samples within $\mathcal{R}_i$ is $2M$, as shown in Fig. ?(b). After the self-convolutional activation is computed once over the image domain, $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$ can be extracted through an index mapping process, where the indexes for $(s_l, j)$ are estimated from $(i, \tau)$.
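The re-indexing idea can be sketched as follows: cache one correlation map per unique offset $\tau = j - s_l$ (these offsets span the doubled window) and shift each map so that every pixel reads the entry for its own pair. Here `corr_map` is assumed to behave like the sketch above, and the wrap-around of `np.roll` is again a border simplification.

```python
import numpy as np

def dense_self_conv(corr_map, samples, window):
    """Index-mapping sketch for dense self-convolutional activations.

    `corr_map(tau)` returns C(F_p, F_{p+tau}) for all pixels p at once;
    `samples` are the N random points s_l, given as (dy, dx) offsets
    from the window center. Each unique offset is computed only once.
    """
    half = window // 2
    cache = {}
    activations = []
    for s in samples:                          # each random sample point s_l
        per_s = []
        for dy in range(-half, half + 1):      # each patch j in the window
            for dx in range(-half, half + 1):
                tau = (dy - s[0], dx - s[1])   # reference-biased offset j - s_l
                if tau not in cache:
                    cache[tau] = corr_map(tau)
                # pixel i needs the value stored at position i + s_l:
                per_s.append(np.roll(cache[tau], (-s[0], -s[1]), axis=(0, 1)))
        activations.append(np.stack(per_s))
    return np.stack(activations)               # shape: (N, M*M, H, W)
```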

*Algorithm 1. Computation of the DeSCA descriptor: determine the circular pyramidal point sets $\Gamma^{k'}$; compute the hierarchical activations $\mathcal{A}^{k'}$ by average pooling on $\Gamma^{k'}$; determine the circular pyramidal bins $\mathcal{B}(u)$; compute the SiSCA and DeSCA responses by applying C-SPP to each activation.*

### 4.4 DeSCA: Deep Self-Convolutional Activations

So far, we have discussed how to build the self-convolutional activation on a single level. In this section, we extend this idea by encoding self-similar structures at multiple levels, in a manner similar to the deep architectures widely adopted in CNNs [26]. DeSCA is defined similarly to SiSCA, except that an average pooling is executed before the C-SPP (see Fig. ?). With the self-convolutional activations, we perform the average pooling on circular pyramidal point sets.

In comparison to self-convolutions from just a single patch, the spatial aggregation of self-convolutional responses is clearly more robust, and it requires only marginal computational overhead over SiSCA. The strength of such hierarchical aggregation has also been shown in [36]. Compared to using only the activations of the last CNN layer, we use all intermediate activations from the hierarchical average pooling, which yields better cross-modal matching quality.

To build the hierarchical self-convolutional volume using average pooling, we first define the circular pyramidal point sets $\Gamma^{k'}$ from the log-polar circular point set $\Gamma$, where $k'$ associates all pyramidal levels and all points in each level. In the average pooling, the circular pyramidal bins used in C-SPP are re-used, such that $\Gamma^{k'} = \Gamma \cap \mathcal{B}(u)$; thus the number of point sets equals $N_B$. Deep self-convolutional activations are defined by aggregating $\mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j)$ for all patches $\mathcal{F}_{s_l}$ determined on each $\Gamma^{k'}$ such that

$$ \mathcal{A}^{k'}(j) = \frac{1}{N_{k'}} \sum\nolimits_{s_l \in \Gamma^{k'}} \mathcal{C}(\mathcal{F}_{s_l}, \mathcal{F}_j), \tag{7} $$

which is defined for all $j \in \mathcal{R}_i$, where $N_{k'}$ is the number of patches within $\Gamma^{k'}$. The hierarchical activations are sequentially aggregated using average pooling from the bottom to the top of the circular pyramidal point sets. After computing the hierarchical self-convolutional aggregations, similar to SiSCA, DeSCA applies the C-SPP, non-linear gating, and normalization layers presented in Section 4.2. The hierarchical self-convolutional activation is computed using the C-SPP such that

$$ a_i(k', u) = \max\nolimits_{j \in \mathcal{B}(u)} \mathcal{A}^{k'}(j). \tag{8} $$
Accumulated self-convolutional activations $\mathcal{A}_i = \bigcup_{v'} a_i(v')$ are built from $a_i(l, u)$ in (Equation 4) and $a_i(k', u)$ in (Equation 8), where $v'$ indexes all of these responses. Our DeSCA descriptor is then passed through a non-linear gating layer: $\mathcal{D}^{\mathrm{DeSCA}}_i = \bigcup_{v'} d_i(v')$ is built with $d_i(v') = \exp(-(1 - a_i(v')) / \sigma_c)$. Finally, $d_i(v')$ for each pixel $i$ is normalized with an L2 norm over all $v'$.
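Putting the pieces of this section together, a sketch of the hierarchical aggregation at one pixel: per-set average pooling of the self-convolutional surfaces (Equation 7) followed by C-SPP (Equation 8), gating, and normalization. The grouping of sample points into pyramidal point sets is passed in, since its exact construction follows the bin hierarchy described above; the bandwidth value is illustrative.

```python
import numpy as np

def desca_descriptor(surfaces, point_sets, bins, sigma=0.5):
    """Hierarchical DeSCA sketch for one pixel.

    `surfaces[l]` is the self-convolutional surface of sample point l,
    `point_sets[k']` lists the sample indices in pyramidal point set k',
    and `bins` are the C-SPP boolean masks.
    """
    responses = []
    for members in point_sets:                      # Eq. (7): average pooling
        avg = np.mean([surfaces[l] for l in members], axis=0)
        responses += [avg[b].max() for b in bins]   # Eq. (8): C-SPP
    a = np.asarray(responses)
    d = np.exp(-(1.0 - a) / sigma)                  # non-linear gating
    return d / (np.linalg.norm(d) + 1e-12)          # L2 normalization
```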

## 5 Experimental Results and Discussion

### 5.1 Experimental Settings

In our experiments, the DeSCA descriptor was implemented with fixed parameter settings for all datasets, namely the support window size $M$, the number of random samples $N$, and the number of pyramid levels $K$ chosen in Section 5.2. We chose the guided filter (GF) for edge-aware filtering in (Equation 6), with a fixed smoothness parameter. We implemented the DeSCA descriptor in C++ on an Intel Core i7-3770 CPU at 3.40 GHz. We will make our code publicly available. The DeSCA descriptor was compared to other state-of-the-art descriptors (SIFT [14], DAISY [23], BRIEF [24], LIOP [28], DaLI [29], LSS [22], and DASC [10]), as well as to area-based approaches (ANCC [35] and RSNCC [9]). Furthermore, to evaluate the performance gain from the deep architecture, we compared SiSCA and DeSCA.

### 5.2 Parameter Evaluation

The matching performance of DeSCA is exhibited in Fig. ? for varying parameter values, including the support window size $M$, the number of log-polar circular points, the number of random samples $N$, and the number of levels $K$ of the circular spatial pyramid. Note that the number of C-SPP bins $N_B$ is determined by $K$. In particular, Fig. ?(c), (d) demonstrate the effectiveness of the self-convolutional activations and the deep architecture of DeSCA. For a quantitative analysis, we measured the average bad-pixel error rate on the Middlebury benchmark [42]. With a larger support window $M$, the matching quality improves rapidly before saturating. The number of log-polar circular points influences the performance of circular pooling, which is found to plateau quickly. Using a larger number of random samples $N$ yields better performance since the descriptor encodes more information. The level of the circular spatial pyramid $K$ also affects the amount of encoding in DeSCA. Based on these experiments, we fixed these parameters in consideration of efficiency and robustness.

*Table 1. Average error rates on the cross-modal and cross-spectral benchmark [10] for RGB-NIR, flash-noflash, different-exposure, and blurred-sharp pairs, with columns grouped by optimization scheme.*

### 5.3 Middlebury Stereo Benchmark

We evaluated DeSCA on the Middlebury stereo benchmark [42], which contains illumination and exposure variations. In the experiments, the illumination (exposure) combination ‘1/3’ indicates that the two images were captured under the first and third illumination (exposure) conditions, respectively. For a quantitative evaluation, we measured the bad-pixel error rate in non-occluded areas of disparity maps [42].
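For reference, the bad-pixel error rate can be computed as below; the one-pixel threshold and the non-occlusion mask follow the common Middlebury protocol, as an assumption.

```python
import numpy as np

def bad_pixel_rate(disp, gt, valid, thresh=1.0):
    """Fraction of valid (non-occluded) pixels whose disparity error
    exceeds `thresh` (1 px is the common Middlebury setting)."""
    err = np.abs(disp - gt)[valid]
    return float(np.mean(err > thresh))
```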

Fig. ? shows the disparity maps estimated under severe illumination and exposure variations with winner-takes-all (WTA) optimization. Fig. ? displays the average bad-pixel error rates of disparity maps obtained under illumination or exposure variations, with graph-cut (GC) [43] and WTA optimization. Area-based approaches (ANCC [35] and RSNCC [9]) are sensitive to severe radiometric variations, especially when local variations occur frequently. Feature descriptor-based methods (SIFT [14], DAISY [23], BRIEF [24], LSS [22], and DASC [10]) perform better than the area-based approaches, but they also provide limited performance. Our DeSCA descriptor achieves the best results both quantitatively and qualitatively. Compared to the SiSCA descriptor, DeSCA yields markedly better performance, making the benefits of the deep architecture apparent.

*Table 2. Matching accuracy on the DaLI benchmark [29] under deformation (def.), illumination (illum.), and combined deformation/illumination (def./illum.) variations, averaged per method.*

### 5.4 Cross-modal and Cross-spectral Benchmark

We evaluated DeSCA on a cross-modal and cross-spectral benchmark [10] containing various kinds of image pairs, namely RGB-NIR, different exposures, flash-noflash, and blurred-sharp. Optimization for all descriptors and similarity measures was done using WTA and SIFT flow (SF) with hierarchical dual-layer belief propagation [11], for which the code is publicly available. Sparse ground truths for those images are used for error measurement as done in [10].

Fig. ? provides a qualitative comparison of the DeSCA descriptor to other state-of-the-art approaches. As already described in the literature [9], gradient-based approaches such as SIFT [14] and DAISY [23] have shown limited performance for RGB-NIR pairs where gradient reversals and inversions frequently appear. BRIEF [24] cannot deal with noisy regions and modality-based appearance differences since it is formulated on pixel differences only. Unlike these approaches, LSS [22] and DASC [10] consider local self-similarities, but LSS is lacking in discriminative power for dense matching. DASC also exhibits limited performance. Compared to those methods, the DeSCA displays better correspondence estimation. We also performed a quantitative evaluation with results listed in Table 1, which also clearly demonstrates the effectiveness of DeSCA.

### 5.5 DaLI Benchmark

We also evaluated DeSCA on a recent, publicly available dataset featuring challenging non-rigid deformations and very severe illumination changes [29]. Fig. ? presents dense correspondence estimates for this benchmark [29]. A quantitative evaluation is given in Table 2 using ground truth feature points sparsely extracted for each image, although DeSCA is designed to estimate dense correspondences. As expected, conventional gradient-based and intensity comparison-based feature descriptors, including SIFT [14], DAISY [23], and BRIEF [24], do not provide reliable correspondence performance. LSS [22] and DASC [10] exhibit relatively high performance for illumination changes, but are limited on non-rigid deformations. LIOP [28] provides robustness to radiometric variations, but is sensitive to non-rigid deformations. Although DaLI [29] provides robust correspondences, it requires considerable computation for dense matching. DeSCA offers greater discriminative power as well as more robustness to non-rigid deformations in comparison to the state-of-the-art cross-modality descriptors.

*Table 3. Computation time of SIFT, DAISY, LSS, DaLI, DASC, DeSCA\*, and DeSCA as a function of image size.*

### 5.6 Computational Speed

In Table 3, we compare the computational speed of DeSCA to a state-of-the-art local descriptor, namely DaLI [29], and to dense descriptors, namely DAISY [23], LSS [22], and DASC [10]. Even though DeSCA needs more computational time than some previous dense descriptors, it provides significantly improved matching performance, as described previously.

## 6 Conclusion

The deep self-convolutional activations (DeSCA) descriptor was proposed for establishing dense correspondences between images taken under different imaging modalities. Its high performance in comparison to state-of-the-art cross-modality descriptors can be attributed to its greater robustness to non-rigid deformations, owing to its effective pooling scheme, and more importantly to its heightened discriminative power, which comes from a more comprehensive representation of self-similar structure and its formulation in a deep architecture. DeSCA was validated through an extensive set of experiments covering a broad range of cross-modal differences. Thanks to its robustness to non-rigid deformations and its high discriminative power, DeSCA can potentially benefit object detection and semantic segmentation in future work.

### References

1. Brown, M., Süsstrunk, S.: Multispectral SIFT for scene category recognition. In: CVPR (2011)
2. Yan, Q., Shen, X., Xu, L., Zhuo, S.: Cross-field joint image restoration via scale map. In: ICCV (2013)
3. Hwang, S., Park, J., Kim, N., Choi, Y., Kweon, I.: Multispectral pedestrian detection: Benchmark dataset and baseline. In: CVPR (2015)
4. Krishnan, D., Fergus, R.: Dark flash photography. In: SIGGRAPH (2009)
5. Sen, P., Kalantari, N.K., Yaesoubi, M., Darabi, S., Goldman, D.B., Shechtman, E.: Robust patch-based HDR reconstruction of dynamic scenes. In: SIGGRAPH (2012)
6. HaCohen, Y., Shechtman, E., Lischinski, D.: Deblurring by example using dense correspondence. In: ICCV (2013)
7. Lee, H., Lee, K.: Dense 3D reconstruction from severely blurred images using a single moving camera. In: CVPR (2013)
8. Petschnigg, G., Agrawala, M., Hoppe, H.: Digital photography with flash and no-flash image pairs. In: SIGGRAPH (2004)
9. Shen, X., Xu, L., Zhang, Q., Jia, J.: Multi-modal and multi-spectral registration for natural images. In: ECCV (2014)
10. Kim, S., Min, D., Ham, B., Ryu, S., Do, M.N., Sohn, K.: DASC: Dense adaptive self-correlation descriptor for multi-modal and multi-spectral correspondence. In: CVPR (2015)
11. Liu, C., Yuen, J., Torralba, A.: SIFT flow: Dense correspondence across scenes and its applications. IEEE Trans. PAMI 33(5) (2011) 815–830
12. Kim, J., Liu, C., Sha, F., Grauman, K.: Deformable spatial pyramid matching for fast dense correspondences. In: CVPR (2013)
13. Pinggera, P., Breckon, T., Bischof, H.: On cross-spectral stereo matching using dense gradient features. In: BMVC (2012)
14. Lowe, D.: Distinctive image features from scale-invariant keypoints. IJCV 60(2) (2004) 91–110
15. Simonyan, K., Vedaldi, A., Zisserman, A.: Learning local feature descriptors using convex optimisation. IEEE Trans. PAMI 36(8) (2014) 1573–1585
16. Gong, Y., Wang, L., Guo, R., Lazebnik, S.: Multi-scale orderless pooling of deep convolutional activation features. In: ECCV (2014)
17. Fischer, P., Dosovitskiy, A., Brox, T.: Descriptor matching with convolutional neural networks: A comparison to SIFT. arXiv:1405.5769 (2014)
18. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: DeCAF: A deep convolutional activation feature for generic visual recognition. In: ICML (2014)
19. Simo-Serra, E., Trulls, E., Ferraz, L., Kokkinos, I., Fua, P., Moreno-Noguer, F.: Discriminative learning of deep convolutional feature point descriptors. In: ICCV (2015)
20. Dong, J., Soatto, S.: Domain-size pooling in local descriptors: DSP-SIFT. In: CVPR (2015)
21. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR (2015)
22. Shechtman, E., Irani, M.: Matching local self-similarities across images and videos. In: CVPR (2007)
23. Tola, E., Lepetit, V., Fua, P.: DAISY: An efficient dense descriptor applied to wide-baseline stereo. IEEE Trans. PAMI 32(5) (2010) 815–830
24. Calonder, M., Lepetit, V., Özuysal, M., Trzcinski, T., Strecha, C., Fua, P.: BRIEF: Computing a local binary descriptor very fast. IEEE Trans. PAMI 34(7) (2012) 1281–1298
25. Trzcinski, T., Christoudias, M., Lepetit, V.: Learning image descriptors with boosting. IEEE Trans. PAMI 37(3) (2015) 597–610
26. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS (2012)
27. Saleem, S., Sablatnig, R.: A robust SIFT descriptor for multispectral images. IEEE SPL 21(4) (2014) 400–403
28. Wang, Z., Fan, B., Wu, F.: Local intensity order pattern for feature description. In: ICCV (2011)
29. Simo-Serra, E., Torras, C., Moreno-Noguer, F.: DaLI: Deformation and light invariant descriptor. IJCV 115(2) (2015) 136–154
30. Heinrich, M.P., Jenkinson, M., Bhushan, M., Matin, T., Gleeson, F.V., Brady, S.M., Schnabel, J.A.: MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. MIA 16(3) (2012) 1423–1435
31. Torabi, A., Bilodeau, G.: Local self-similarity-based registration of human ROIs in pairs of stereo thermal-visible videos. PR 46(2) (2013) 578–589
32. Ye, Y., Shan, J.: A local descriptor based registration method for multispectral remote sensing images with non-linear intensity differences. JPRS 90(7) (2014) 83–95
33. Pluim, J., Maintz, J., Viergever, M.: Mutual information based registration of medical images: A survey. IEEE Trans. MI 22(8) (2003) 986–1004
34. Heo, Y., Lee, K., Lee, S.: Joint depth map and color consistency estimation for stereo images with different illuminations and cameras. IEEE Trans. PAMI 35(5) (2013) 1094–1106
35. Heo, Y., Lee, K., Lee, S.: Robust stereo matching using adaptive normalized cross-correlation. IEEE Trans. PAMI 33(4) (2011) 807–822
36. Weinzaepfel, P., Revaud, J., Harchaoui, Z., Schmid, C.: DeepFlow: Large displacement optical flow with deep matching. In: ICCV (2013)
37. Black, M.J., Sapiro, G., Marimont, D.H., Heeger, D.: Robust anisotropic diffusion. IEEE Trans. IP 7(3) (1998) 421–432
38. Gastal, E., Oliveira, M.: Domain transform for edge-aware image and video processing. In: SIGGRAPH (2011)
39. He, K., Sun, J., Tang, X.: Guided image filtering. IEEE Trans. PAMI 35(6) (2013) 1397–1409
40. Seidenari, L., Serra, G., Bagdanov, A.D., Bimbo, A.D.: Local pyramidal descriptors for image recognition. IEEE Trans. PAMI 36(5) (2014) 1033–1040
41. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. PAMI 37(9) (2015) 1904–1916
42. Middlebury stereo benchmark: http://vision.middlebury.edu/stereo/ (online)
43. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. PAMI 23(11) (2001) 1222–1239