Sub-Pixel Registration of Wavelet-Encoded Images


Vildan Atalay Aydin and Hassan Foroosh
Vildan Atalay Aydin and Hassan Foroosh are with the Department of Computer Science, University of Central Florida, Orlando, FL 32816 USA (e-mails: vatalay@knights.ucf.edu and foroosh@cs.ucf.edu).
Abstract

Sub-pixel registration is a crucial step for applications such as super-resolution in remote sensing, motion compensation in magnetic resonance imaging, and non-destructive testing in manufacturing, to name a few. Recently, these technologies have been trending towards wavelet-encoded imaging and sparse/compressive sensing. The former plays a crucial role in reducing imaging artifacts, while the latter significantly increases the acquisition speed. In view of these emerging needs for applications of wavelet-encoded imaging, we propose a sub-pixel registration method that can achieve direct wavelet-domain registration from a sparse set of coefficients. We make the following contributions: (i) we devise a method of decoupling scale, rotation, and translation parameters in the Haar wavelet domain; (ii) we derive explicit mathematical expressions that define in-band sub-pixel registration in terms of wavelet coefficients; (iii) using the derived expressions, we propose an approach to achieve in-band sub-pixel registration, avoiding back-and-forth transformations; and (iv) our solution remains highly accurate even when a sparse set of coefficients is used, owing to the localization of signals in a sparse set of wavelet coefficients. We demonstrate the accuracy of our method, and show that it outperforms the state-of-the-art on simulated and real data, even when the data is sparse.

Subpixel Registration, Wavelet Decomposition, Haar Wavelets, Image Pyramids

I Introduction

Image registration plays a crucial role in many areas of image and video processing, such as super-resolution [1, 2, 3, 4, 5, 6, 7, 8, 9], self-localization [10, 11, 12], image annotation [13, 14, 15, 16, 17, 18], surveillance [19, 20, 21], action recognition [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33], target tracking [34, 35, 36, 37], shape description and object recognition [38, 39, 40], image-based rendering [41, 42, 43, 44], and camera motion estimation [45, 12, 46, 47, 48, 49, 50, 51], to name a few.

There are various ways in which one could categorize image registration methods. In terms of functioning space, they could be either spatial domain [52, 53, 54, 55] or transform domain methods [56, 57, 58, 59, 60, 61, 62, 63, 64]. On the other hand, in terms of their dependency on feature/point correspondences, they may be categorized as either dependent [65, 66, 67, 68, 69, 70, 71] or independent [52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64] of feature/point correspondences. Finally, in terms of the complexity of the image transformation, they may be categorized as linear parametric (e.g. Euclidean, affine, or projective) [72, 73, 74, 75, 76], or semi-parametric/non-parametric diffeomorphic [71, 77, 78, 79, 80]. The method that we propose in this paper is a parametric method that can handle full projective transformations in the Haar wavelet domain without establishing any feature or point correspondences as a preprocessing step.

Recently, there has been a trend in various imaging modalities and applications, such as non-destructive testing and Magnetic Resonance Imaging (MRI), to adopt wavelet-encoded imaging [81, 82] and sparse sensing [83, 84, 85], with the aim of achieving better resolution, reduced distortions, higher SNR, and quick acquisition time, which are crucial for these applications. Sub-pixel registration is an integral step of various applications involving these wavelet-encoded compressive imaging technologies. Therefore, in this paper, our goal is to obtain a wavelet domain sub-pixel registration method that can achieve highly accurate results from a sparse set of wavelet coefficients. We make the following major contributions towards this goal: (i) we devise a method of decoupling scale, rotation, and translation parameters in the Haar wavelet domain; (ii) we derive explicit mathematical expressions that define in-band sub-pixel registration in terms of Haar wavelet coefficients; (iii) using the derived expressions, we propose a multiscale approach to achieve in-band sub-pixel registration, avoiding back-and-forth transformations; and (iv) our solution remains highly accurate even when a sparse set of coefficients is used, due to signal energy localization in a sparse set of wavelet coefficients. Extensive experiments are used to validate our method both on simulated and real data under various scenarios.

II Related Work

The earliest methods related to our work are based on image pyramids, with the aim of reducing computational time and avoiding local extrema. Examples include the work by Thévenaz et al. [86], who minimized the mean square intensity differences using a modified version of the Levenberg-Marquardt algorithm, and the work by Chen et al. [87], who maximized the mutual information with a pyramid approach. Later, these approaches were extended to deal with local deformations in a coarse-to-fine fashion by either estimating a set of local parameters [88] or fitting a local model such as multi-resolution splines [89]. Cole-Rhodes et al. [90] proposed a method based on maximizing mutual information using stochastic gradient. Other examples of coarse-to-fine schemes are by Gong et al. [91], where automatic image registration is performed by using SIFT and mutual information, and by Ibn-Elhaj [92], where the bispectrum is used to register noisy images.

Hu and Acton [93] obtain sub-pixel accuracy by using a morphological pyramid structure with Levenberg-Marquardt optimization and bilinear interpolation. Kim et al. [94] apply the Canny edge operator in a hierarchical fashion. Zhou et al. [88] register local regions of interest in a coarse-to-fine fashion by estimating deformation parameters, and Szeliski and Coughlan [89] represent the local motion field using multi-resolution splines.

Template matching was also introduced in image registration for reducing computational cost. Ding et al. [95] utilized template matching with cross correlation in a spatial-domain based solution, while Rosenfeld and Vanderbrug [96, 97] used block averaging in template matching. Hirooka et al. [98] optimize a small number of template points at each level of a hierarchy, selected by evaluating the correlation of the images. In [99], Yoshimura and Kanade apply the Karhunen-Loeve expansion to a set of rotated templates to obtain eigen-images, which are used to approximate templates in the set. Tanimoto, in [100], applies hierarchical template matching to reduce computation time and sensitivity to noise. Anisimov and Gorsky [101] work with templates of unknown orientation and location and of nonrectangular form.

Examples of wavelet-based methods can be summarized as follows. Turcajova and Kautsky [102] used a separable fast discrete wavelet transform with normalized local cross-correlation matching based on a least squares fit, where spline biorthogonal and Haar wavelets outperform other types of wavelets. In [103], Kekre et al. use several types of transforms, such as the discrete cosine, discrete wavelet, Haar, and Walsh transforms, for color image registration by minimizing the mean square error. Wang et al. [104] improve the polynomial subdivision algorithm for wavelet-based sub-pixel image registration. Le Moigne et al. [105, 106, 107, 108, 109, 110] have made extensive studies of various aspects of wavelet domain image registration, utilizing in particular the maxima of Daubechies wavelets for correlation-based registration and multi-level optimization. In [111], Patil and Singhai use the fast discrete curvelet transform with quincunx sampling for sub-pixel accuracy. Tomiya and Ageishi [112] minimize the mean square error, whereas Wu and Chung [113] utilize mutual information and sum of differences with wavelet pyramids, and Wu et al. [114] proposed a wavelet-based model of motion as a linear combination of hierarchical basis functions for image registration. Hong and Zhang [115] combine feature-based and area-based registration, using wavelet-based features and relaxation-based matching techniques, while Alam et al. [116] utilize approximate coefficients of curvelets with a conditional entropy-based objective function.

These methods require transformations between the spatial and transform domains, since they start in the uncompressed spatial domain and use the wavelets' multiscale nature to approximate and propagate the solution from coarser to finer levels until it is refined to good accuracy. Our method reaches high accuracy at a coarser level with a sparse set of coefficients and no domain transformations.

III Sub-pixel Shifts in the Haar Domain

We first derive mathematical expressions that define in-band (i.e. direct wavelet-domain) shifts of an image, which will be used later for general registration under a similarity transformation (i.e. scale, rotation, and translation) [41].

III-A Notation

Table I summarizes the notations used throughout the paper, to streamline the understanding of the proposed method.

  • Reference image
  • Sensed image to be registered to the reference
  • Transformation parameters to be estimated: scale, rotation angle, and shifts along the two axes, respectively
  • Wavelet transform approximation, horizontal, vertical, and diagonal detail coefficients, respectively
  • Number of hypothetically added levels
  • Perceived horizontal (vertical) integer shift of the wavelet coefficients after the hypothetically added levels
TABLE I: Notation

Superscripts on the coefficients indicate the level of wavelet decomposition; subscripts indicate the horizontal and vertical directions, respectively; and an additional subscript marks the calculated shifted coefficients.

III-B In-band Shifts

We now derive explicit mathematical expressions for an in-band translation of a given image.

Consider an image of size 2^n x 2^n, where n is a positive integer. The Haar transform of this image consists of n levels, where each level holds an approximation coefficient band and horizontal, vertical, and diagonal detail coefficient bands, respectively.
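For concreteness, this coefficient layout can be reproduced with any standard Haar DWT implementation; the following minimal sketch assumes PyWavelets (an external library, not part of the proposed method) and a 64x64 test image:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# A 2^n x 2^n test image (n = 6 here).
img = np.random.rand(64, 64)
n = int(np.log2(img.shape[0]))

# Full n-level Haar decomposition: one coarsest approximation band plus
# (horizontal, vertical, diagonal) detail bands at every level.
coeffs = pywt.wavedec2(img, 'haar', level=n)

print(coeffs[0].shape)                   # coarsest approximation band: (1, 1)
print([c[0].shape for c in coeffs[1:]])  # detail bands: (1,1), (2,2), ..., (32,32)
```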

Let,

Also, let be the difference between and , then, .

The following recursion, given in Eq. (2), relates each approximation coefficient band to its parent level:

(2)

Equation (2) shows that the approximation coefficients at every level can be calculated iteratively using only the detail coefficients of the Haar transform. We use this relationship to calculate the detail coefficients of the shifted image, which implies that the shifting process is in-band.

A translational shift of a 2D image can be decomposed into a horizontal and a vertical component, where a diagonal shift is modeled as a horizontal shift followed by a vertical one. Unlike the common approach of modeling sub-pixel shifts as integer shifts of some upsampled version of the given image, our method models sub-pixel shifts directly in terms of the original level coefficients.

Observation 3.1. Let the Haar transform of the image have n levels. Upsampling the image by a factor of 2^k is equivalent to adding k levels to the bottom of the Haar transform, setting the detail coefficients of the added levels to zero while keeping the approximation coefficients equal to those already at the nth level.

Observation 3.2. Shifting the upsampled image by an integer amount is equivalent to shifting the original image by that amount divided by the upsampling factor 2^k.

These observations allow us to shift a reference image by a sub-pixel amount without actually upsampling it, which saves memory, reduces computation, and avoids propagating interpolation errors.
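These two observations can be checked numerically. The sketch below assumes PyWavelets with orthonormal Haar filters: reconstructing from one hypothetically added level whose detail bands are zero replicates each pixel over a 2x2 block (up to the Haar normalization factor), so an integer shift of this upsampled image corresponds to a half-pixel shift of the original.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Observation 3.1: treat the image as the approximation band of one
# hypothetically added finer level, with all detail bands set to zero.
zeros = np.zeros_like(img)
up = pywt.waverec2([img, (zeros, zeros, zeros)], 'haar')

# For orthonormal Haar this replicates each pixel over a 2x2 block
# (scaled by 1/2), i.e. a 2x upsampling by replication up to normalization.
assert np.allclose(up, np.kron(img, np.ones((2, 2))) / 2)

# Observation 3.2: shifting `up` by an integer t is then equivalent to
# shifting the original image by t / 2 (t / 2^k for k added levels).
```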

Now, let and . The horizontal detail coefficients of the shifted image in case of a horizontal translation are computed from the reference image coefficients by:

(3)

where,


Here, the shift amount at the hypothetically added level is the perceived horizontal shift obtained from Observation 3.2, and the reduction level is defined so that the corresponding power of 2 is the highest power of 2 by which the shift is divisible. For sub-pixel shifts the reduction level is zero, since the shift amount at the hypothetically added level is always an odd integer; the reduction level is needed only to generalize the equation to even shifts. Coefficients whose indices would take non-integer values in Eq. (3) are set to zero. The corresponding expressions for vertical shifts are obtained by interchanging the horizontal and vertical detail coefficients and the row and column indices in Eq. (3).

By examining Eq. (3), it can be seen that each level of horizontal detail coefficients of the shifted image can be calculated using the original levels of the reference image, since the approximation coefficients are calculated in Eq. (2) using only the detail coefficients of the parent levels.

Here, we only demonstrate the formulae for horizontal detail coefficients. Approximation, vertical and diagonal detail coefficients of the shifted image can be described in a similar manner.

IV Sub-pixel Registration

We first demonstrate that scale, rotation, and translation can be decoupled in the wavelet domain. This is similar to the decoupling of rotation and translation into magnitude and phase in the Fourier domain. We then describe the proposed method to solve the decoupled registration problem for the separated parameters.

Let us assume that the sensed image is translated, rotated, and scaled with respect to the reference image, in that order. Let also p and q be two corresponding points in the reference and the sensed images, respectively. In homogeneous coordinates, the point q can be expressed in terms of the point p under the similarity transformation (scale, rotation, translation) as follows:

q = S R T p (4)

where we assume the same scale for both axes. Here, S, R, and T denote the scale, rotation, and translation matrices, whose parameters are the scale factor, the rotation angle in degrees, and the translations along the two axes, respectively. Although we assume the order of transformations given in Eq. (4), we first explain rotation recovery to demonstrate the decoupling in the wavelet domain. Algorithm 1 shows the steps of the proposed in-band registration algorithm.

Algorithm 1 In-band Registration for Similarity Transform

  • Input: reference and sensed images

  • Objective: Find similarity transform parameters

  • Output: estimated scale, rotation, and translation parameters

  • Generate wavelet coefficients of both images

  • Scale recovery using curvature radius on coefficients

    • Rescale sensed detail coefficients to the size of reference coefficients

  • Rotation recovery using angle histograms of coefficients

    • Rotate sensed detail coefficients by the recovered angle

  • Translation recovery using in-band wavelet coefficient relationship (Section III-B)
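For illustration only, the listing below sketches Algorithm 1 in Python. The helpers recover_scale, recover_rotation, and recover_translation are hypothetical placeholders for the estimators detailed in Sections IV-B, IV-A, and IV-C, and scipy.ndimage is assumed merely for resampling the sensed detail coefficients; this is a sketch under those assumptions, not the exact implementation used in the experiments.

```python
import pywt
from scipy import ndimage


def register_inband(ref, sensed, recover_scale, recover_rotation, recover_translation):
    """Sketch of Algorithm 1: estimate (scale, angle, tx, ty) in the Haar domain."""
    # Wavelet coefficients of both images (a single level shown for brevity).
    _, (h_a, v_a, d_a) = pywt.dwt2(ref, 'haar')
    _, (h_b, v_b, d_b) = pywt.dwt2(sensed, 'haar')

    # Scale recovery from curvature radii of thresholded coefficients (Sec. IV-B),
    # then rescale the sensed detail coefficients to the reference size.
    s = recover_scale((h_a, v_a), (h_b, v_b))
    h_b, v_b = [ndimage.zoom(c, 1.0 / s, order=1) for c in (h_b, v_b)]

    # Rotation recovery from angle histograms of the coefficients (Sec. IV-A),
    # then rotate the sensed detail coefficients back.
    theta = recover_rotation((h_a, v_a), (h_b, v_b))
    h_b, v_b = [ndimage.rotate(c, -theta, reshape=False, order=1) for c in (h_b, v_b)]

    # Translation recovery using the in-band shift relations of Section III-B.
    tx, ty = recover_translation((h_a, v_a), (h_b, v_b))
    return s, theta, tx, ty
```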

IV-A Rotation Recovery

Let a and b denote the wavelet coefficients of the reference and the sensed images, respectively, following the notation of Section III. The wavelet transform of Eq. (4) can then be written as follows:

(5)

Eq. (5) shows the relationship between the Haar wavelet coefficients of two images under a similarity transformation, and indicates that rotation and scale can be separated from translation, since the translation parameters do not appear in these equations. In order to recover the rotation and scale independently, we also need to decouple the scale and rotation terms. One can see from Eq. (5) that dividing the detail coefficients along one direction by those along the other eliminates the scale term; the result is an approximation to the slopes of local image gradients, since Haar coefficients can be viewed as estimates of partial derivatives. To obtain an initial estimate of the rotation angle, we use wavelet thresholding [117] before finding the local slopes. This both reduces noise and sparsifies the coefficients. We then find an initial estimate of the rotation angle by maximizing the following cross-correlation:

(6)

where the operator denotes cross-correlation, and the two operands are the histograms of wavelet-coefficient slopes (HWS) of the thresholded coefficients of the two images, which we define as follows:

(7)

where the histogram is computed over a fixed number of bins, and the subscript indicates the image. We then refine the initial estimate within a small range around it to obtain the best estimate:

(8)
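A minimal sketch of the rotation estimate in Eqs. (6)-(8) is given below, under the assumption of single-level Haar detail bands, quantile-based hard thresholding, and a 1-degree histogram bin width; these particular choices are illustrative and not necessarily those used in the experiments.

```python
import numpy as np
import pywt


def angle_histogram(img, bins=180, keep=0.1):
    """Histogram of wavelet-coefficient slopes (HWS) from Haar detail bands."""
    _, (h, v, _) = pywt.dwt2(img, 'haar')
    mag = np.hypot(h, v)
    mask = mag >= np.quantile(mag, 1.0 - keep)   # hard thresholding (sparsification)
    # The ratio of the two detail bands cancels the common scale factor (cf. Eq. (5))
    # and approximates the slope of the local image gradient.
    angles = np.degrees(np.arctan2(v[mask], h[mask])) % 180.0
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 180.0))
    return hist / max(hist.sum(), 1)


def estimate_rotation(ref, sensed, bins=180):
    """Initial rotation estimate by maximizing the cross-correlation of Eq. (6)."""
    ha, hb = angle_histogram(ref, bins), angle_histogram(sensed, bins)
    scores = [np.dot(ha, np.roll(hb, k)) for k in range(bins)]  # circular correlation
    return float(np.argmax(scores)) * (180.0 / bins)            # angle in degrees
```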

IV-B Scale Recovery

Since we have already demonstrated that scale, rotation, and translation can be decoupled in the wavelet domain, we can perform scale estimation independently of rotation and translation. Let us assume that the two images differ by an unknown scale ratio. Then, the mean curvature radius calculated on the thresholded wavelet coefficients provides an accurate estimate of the scale factor:

(9)

where the terms denote the radii of curvature computed on the two sets of thresholded coefficients.
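One possible realization of the curvature-radius estimator in Eq. (9) is sketched below, assuming that the radius of curvature of image level sets is averaged over the strongest (thresholded) locations and that the scale factor is the ratio of the two mean radii; the exact formula used in the paper is not reproduced here.

```python
import numpy as np


def mean_curvature_radius(band, keep=0.1, eps=1e-8):
    """Mean radius of curvature of level sets, evaluated at strong coefficients."""
    fy, fx = np.gradient(band)      # first derivatives (rows, columns)
    fyy, fyx = np.gradient(fy)      # second derivatives
    fxy, fxx = np.gradient(fx)
    curvature = np.abs(fxx * fy**2 - 2.0 * fx * fy * fxy + fyy * fx**2)
    radius = (fx**2 + fy**2) ** 1.5 / (curvature + eps)
    mag = np.hypot(fx, fy)
    mask = mag >= np.quantile(mag, 1.0 - keep)   # hard thresholding
    return float(radius[mask].mean())


def estimate_scale(band_ref, band_sensed):
    """Scale factor as the ratio of mean curvature radii (cf. Eq. (9))."""
    return mean_curvature_radius(band_sensed) / mean_curvature_radius(band_ref)
```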

IV-C Translation Recovery

Once the scale and rotation parameters are recovered and compensated for, the translations along the two axes can be recovered independently by maximizing the following normalized cross-correlation function:

(10)

where the first terms are the shifted versions of the reference detail coefficients (horizontal or vertical, as in the derivations of Section III-B), calculated using Eq. (3) (or its equivalent for the vertical coefficients); and the second terms are the sensed image detail coefficients after rotation and scale compensation.

Observation 3.2 implies that sub-pixel registration for wavelet-encoded images can be performed directly in the wavelet domain without requiring an inverse transformation. Furthermore, if the encoded image is also compressed (e.g., only a sparse set of detail coefficients is available), one can still perform the registration. The latter could be, for instance, the case of a compressed sensing imager based on a Haar wavelet sampling basis. To maximize the cost function in Eq. (10), we use a branch and bound (BnB) algorithm, where the splitting of rectangular regions in BnB is decided based on the two maximum cross-correlations among the four bounds [118].

Algorithm 2 Sub-pixel Shifts Estimation

  • Input: scale- and rotation-corrected reference and sensed images

  • Objective: Find translational registration parameters

  • Output: estimated sub-pixel shifts along the two axes

  • Initialize the bounds for the sub-pixel shift estimates.

  • Do:

    • Generate horizontal/vertical detail coefficients of shifted versions of the reference image using Eq. (3) (and the corresponding equation for vertical shifts), where the shifts are the current bounds at this iteration.

    • Update the bounds (reduce the rectangles to half their size) based on the peak of the cross-correlation in Eq. (10) between the detail coefficients of the shifted reference image and those of the sensed image.

    until the maximum cross-correlation in Eq. (10) exceeds a preset tolerance for an estimated bound, where the tolerance is an accuracy measure for the cross-correlation.

Algorithm 2 demonstrates the main steps of the proposed method for translation recovery. Shifted horizontal/vertical detail coefficients for the updated bounds are calculated at a specified level using Eq. (3) (and the corresponding equation for the vertical coefficients), followed by maximization of Eq. (10).

When the algorithm converges to within a small distance of the true solution, it often starts oscillating. So, as a modification to the general branch and bound method, we take the mid-point of the oscillations as the solution, which often happens to be the true solution.
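A simplified, one-axis sketch of this procedure is given below. It assumes a hypothetical callable shifted_detail_coeffs(ref, t) that evaluates Eq. (3) for a candidate sub-pixel shift t, and uses a plain bisection of the search interval in place of the full two-dimensional rectangle splitting; it is intended only to illustrate the bound-update logic.

```python
import numpy as np


def ncc(a, b):
    """Normalized cross-correlation between two coefficient arrays (cf. Eq. (10))."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def recover_shift_1d(ref, sensed_detail, shifted_detail_coeffs,
                     lo=-1.0, hi=1.0, tol=0.999, max_iter=20):
    """Branch-and-bound style refinement of a sub-pixel shift along one axis."""
    for _ in range(max_iter):
        # Score both bounds: in-band shifted reference coefficients (Eq. (3))
        # correlated against the sensed detail coefficients.
        score_lo = ncc(shifted_detail_coeffs(ref, lo), sensed_detail)
        score_hi = ncc(shifted_detail_coeffs(ref, hi), sensed_detail)
        if max(score_lo, score_hi) >= tol:       # tolerance reached
            return lo if score_lo >= score_hi else hi
        mid = 0.5 * (lo + hi)                    # halve the interval,
        if score_lo >= score_hi:                 # keeping the better bound
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)   # mid-point of the final (possibly oscillating) bounds
```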

The method requires knowledge of the number of decomposition levels for in-band shifts, which may limit the approach to image sizes that are powers of 2. However, the solution can be generalized to images of arbitrary size by simply applying the method to a power-of-2-sized subregion of the original images.

Fig. 1: Examples of simulated and real-world images used for experiments: (a) Lena, (b) Cameraman, (c) Pentagon, (d) CIL - horizL0 [119], (e) Artichoke - 1 [119], (f) MDSP - Bookcase 1 [120].

V Experimental Results

To demonstrate the accuracy of our algorithm, we performed extensive experiments on both simulated and real data. In order to simulate reference and sensed images, a given high-resolution image is shifted (using bicubic interpolation) and rotated, and then both images are downsampled, which is a common technique employed in the state-of-the-art literature [59], [121]. If different scales are assumed, the sensed image is also scaled further. We performed thorough comparisons with state-of-the-art methods, which were given the same input images, and results were evaluated by measuring alignment errors. Fig. 1 shows some of the standard test images together with the real data obtained from [119] and [120]. Captions for the real data indicate the dataset and the specific image used as the reference image.
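For reference, the simulation protocol above can be sketched as follows; SciPy is assumed for the bicubic shift and rotation, and the shift, angle, and downsampling factor shown are arbitrary example values rather than the ones used in any particular experiment.

```python
import numpy as np
from scipy import ndimage


def simulate_pair(hi_res, shift=(0.5, -0.25), angle=5.0, factor=2):
    """Create a (reference, sensed) pair from a high-resolution image."""
    # Sensed image: sub-pixel shift (bicubic) followed by a rotation.
    sensed = ndimage.shift(hi_res, shift, order=3, mode='reflect')
    sensed = ndimage.rotate(sensed, angle, reshape=False, order=3, mode='reflect')
    # Both images are then downsampled by the same factor.
    return hi_res[::factor, ::factor], sensed[::factor, ::factor]
```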

V-A Validation on Simulated Data

Here, we first performed experiments on translation, rotation, and scale recovery separately. We then carried out tests for combinations of transformations.

Image | Exact shift | Keren [122] Est. / PSNR / MSE | Guizar [123] Est. / PSNR / MSE | Szeliski [89] Est. / PSNR / MSE | Proposed Est. / PSNR / MSE
a | (0.5, 0.5) | (0.4878, 0.5427) / 56.91 / 0.11 | (0.56, 0.53) / 50.05 / 0.56 | (0.5017, 0.5009) / 80.91 / 0 | (0.5, 0.5) / Inf / 0
a | (0.25, -0.125) | (0.2456, -0.1212) / 72.23 / 0.003 | (0.29, -0.16) / 53.10 / 0.28 | (0.2518, -0.1243) / 80.67 / 0 | (0.25, -0.125) / Inf / 0
a | (-0.375, -0.4) | (-0.3826, -0.4146) / 63.99 / 0.02 | (-0.42, -0.42) / 52.67 / 0.31 | (-0.3732, -0.3990) / 80.36 / 0 | (-0.375, -0.4023) / 82.59 / 0
a | (-0.625, 0.75) | (-0.6958, -0.8268) / 47.81 / 0.95 | (-0.70, 0.81) / 48.00 / 0.90 | (-0.6231, 0.7508) / 80.17 / 0 | (-0.625, 0.75) / Inf / 0
b | (0.33, -0.33) | (0.3347, -0.3008) / 54.19 / 0.24 | (0.27, -0.33) / 45.91 / 1.61 | (0.3275, -0.3316) / 72.49 / 0.003 | (0.3281, -0.3438) / 60.74 / 0.05
b | (0.167, 0.5) | (0.1641, 0.6154) / 42.08 / 3.91 | (0.11, 0.55) / 44.78 / 2.10 | (0.1633, 0.4977) / 69.04 / 0.007 | (0.1719, 0.5) / 67.76 / 0.01
b | (-0.875, -0.33) | (-0.8639, -0.2986) / 52.58 / 0.35 | (-0.92, -0.33) / 48.44 / 0.91 | (-0.8783, -0.3316) / 70.51 / 0.005 | (-0.875, -0.3438) / 60.40 / 0.06
b | (-0.125, 0.67) | (-0.1309, 0.8230) / 39.53 / 7.01 | (-0.08, 0.75) / 42.94 / 3.20 | (-0.1277, 0.6695) / 72.62 / 0.003 | (-0.125, 0.6719) / 77.60 / 0.001
TABLE II: Comparison of the proposed method with other baseline methods in estimated shifts, PSNR, and MSE.
Image | Vandewalle [121] PSNR / MSE / Time (s) | Proposed PSNR / MSE / Time (s)
a | 32.83 / 6.59 / 0.16 | 42.94 / 1.75 / 0.49
b | 37.53 / 4.09 / 0.16 | 43.53 / 1.81 / 0.49
TABLE III: Comparison of average PSNR and MSE for rotation recovery for 121 simulations.

Table II summarizes some of the results of our translational method on simulated data, where the results are compared with the ground truth (GT) and other baseline methods, i.e., [122], [123], [89], in terms of estimated shifts, peak signal-to-noise ratio (PSNR), and mean square error (MSE). Since the expressions derived in Section III-B are exact for any shift that can be expressed in terms of positive or negative integer powers of 2, exact or near-exact solutions can be achieved in the noise-free case, outperforming the state-of-the-art methods. For any other shift amount, we can get arbitrarily close within the nearest integer power of 2, which is still outstanding compared with the state-of-the-art.

Table III shows the PSNR, MSE, and computational time for our rotation method compared to [121], averaged over 121 simulations. Although our technique can recover any rotation angle, Vandewalle's method [121] recovers only angles in a limited range; in order to be fair, we compared our results for angles sampled across that range.

We also ran our scale recovery method on 50 images with several scale amounts. All experiments returned the exact scale. Since the wavelet transform downsamples images by a factor of 2 at every level, we can only recover scales that are multiples of 2.

Results obtained for combinations of transformations can be seen in Tables IV and V. While Table IV shows comparisons to Vandewalle's method for rotation and translation, Table V presents our results obtained for several combinations of scale, rotation, and translation. These tables also confirm that our method is accurate and outperforms, or at least matches, the state-of-the-art.

V-B Optimal Parameters

In order to find appropriate values for the two constants used for translational shifts, namely the accuracy measure (tolerance for the cross-correlation function) and the reduction level of the Haar transform, and to show the accuracy of the proposed method, we tested our algorithm on 50 simulated test images for a fixed shift amount. Results after removing the outliers (cases where a local maximum is reached) are shown in Fig. 2. As seen in the figure, the two constants can be adapted depending on the trade-off between time complexity and PSNR.

In the case of the most general similarity transformation, the reduction level is decided based on the recovered scale.

Fig. 2: PSNR (z axis) as a function of the two constants (x and y axes), averaged over 50 images for a fixed GT shift.
Image | Vandewalle [121] PSNR / MSE / Time (s) | Proposed PSNR / MSE / Time (s)
a | 23.2 / 16.4 / 0.1 | 25.2 / 13.07 / 5.55
b | 21.17 / 19.97 / 0.09 | 22.05 / 19.7 / 23.4
c | 21.01 / 19.02 / 0.09 | 20.2 / 20.86 / 66
TABLE IV: Comparison of PSNR, MSE, and time for rotation and translation recovery.
Img | Time (s)
a | 25.7
a | 4.94
b | 105.3
c | 93.6
TABLE V: Our results for scale, rotation and translation.

V-C Validation on Real Data

In order to further assess the accuracy of our method, real-world images were also used as input. Results for the real-world examples (d, e, and f in Fig. 1), including comparisons with the state-of-the-art methods [124] and [121], are summarized in Table VI. Since the GT for these images is not known, the results are compared using PSNR and MSE, as is common practice in the literature. All methods are given the same input, where smaller image regions are used to adapt the image sizes to our method, as described in Section IV-C. As seen in Table VI, our method outperforms the baseline methods in most of the real-world examples as well.

V-D The Effect of Noise and Sparseness

Our proposed approaches for scale and rotation estimation already suppress noise through hard wavelet thresholding. Therefore, here we discuss only the effect of noise on translation estimation. Table VII presents a comparison of the proposed method with [59] and [125] under noisy conditions. By adapting the tolerance based on the level of noise and cross-validation, very accurate shift values can be achieved. It can be concluded from Table VII that our method performs well in suppressing Gaussian noise and is superior to the state-of-the-art. To further show the accuracy under noisy conditions, the proposed algorithm was tested on 50 images with 50 different shift amounts per image, with Gaussian noise. Results, after removing outliers, are shown in Fig. 3 as the average PSNR with respect to the two constants.

Since our method works entirely in-band (i.e., using only detail coefficients), it is particularly applicable to wavelet-encoded imaging. Moreover, our approach can work with a sparse subset of coefficients, e.g., compressed sensing of wavelet-encoded images. Since our scale and rotation recovery methods already use sparse coefficients (i.e., hard-thresholded wavelet coefficients), we experimented on translational shifts under sparseness. We tested our method as the level of sparseness varied from 2% to 100% of the detail coefficients, for several simulated images and different shifts. We then fitted a model to the average results to evaluate the trend, which is shown in Fig. 4-a. It can be noticed that even at very sparse levels of detail coefficients, the method is stable, with an average PSNR above 46 dB. Beyond 50% sampled detail coefficients, the PSNR grows exponentially.
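The sparseness experiment can be emulated by retaining only a given percentage of the detail coefficients before registration; the magnitude-based selection below is an assumption about how the sparse subset is chosen.

```python
import numpy as np


def sparsify(detail_band, percent=5.0):
    """Keep only the largest `percent`% of detail coefficients by magnitude."""
    flat = np.abs(detail_band).ravel()
    k = max(1, int(round(flat.size * percent / 100.0)))
    thresh = np.partition(flat, flat.size - k)[flat.size - k]   # k-th largest magnitude
    return np.where(np.abs(detail_band) >= thresh, detail_band, 0.0)
```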

Fig. 3: Average PSNR (z axis) as a function of the two constants (x and y axes), for shifts along the horizontal axis.
Dataset | Reference img. | Sensed img. | Vandewalle [121] PSNR / MSE | Evangelidis [124] PSNR / MSE | Proposed PSNR / MSE
Artichoke | 1 | 2 | 26.88 / 11.59 | 31.5 / 6.78 | 31.8 / 6.72
Artichoke | 27 | 28 | 26.86 / 11.17 | 42.08 / 1.93 | 31.06 / 6.92
CIL | HorizR0 | HorizR1 | 24.02 / 13.2 | 12.4 / 50.4 | 24.73 / 12.66
CIL | VertR4 | VertR5 | 20.66 / 21.57 | 22.2 / 18.46 | 20.75 / 22.5
MDSP Bookcase 1 | 2 | 3 | 26.58 / 11.90 | 12.52 / 60.31 | 25.10 / 14.11
TABLE VI: Comparison of our method with other methods for real world examples from [119] and [120] in PSNR and MSE.
SNR | Foroosh [59] est. | Chen [126] est. | Proposed est.
10 dB | (0.38, 0.65) | (0.29, 0.68) | (0.25, 0.75)
20 dB | (0.31, 0.71) | (0.28, 0.74) | (0.25, 0.75)
30 dB | (0.30, 0.73) | (0.27, 0.74) | (0.25, 0.75)
40 dB | (0.29, 0.74) | (0.27, 0.74) | (0.25, 0.75)
TABLE VII: Comparison of results in noisy environments with the "Pentagon" image for a fixed shift.

V-E Computational Complexity and Convergence Rate

The time complexity of our method depends on the in-band shifting, the parameter selection, and the level of sparseness. The in-band shifting method of Section III-B has a complexity that, at every level, is governed by the size of the image (or of its sparsified version). Parameter selection also affects the complexity: when the tolerance is set higher, the method attempts to match the images with higher accuracy, which increases the run time. We provide the running times of our method, with comparisons, in Tables III, IV, and V, measured on a machine with a 2.7 GHz CPU and 8 GB RAM.

Fig. 4-b demonstrates the convergence of our method to the global cross-correlation maximum for the Lena image (blue circles), the Pentagon image (green stars), and the Cameraman image (red line), each with its own GT shift. The convergence is visibly exponential, and we therefore reach the solution very rapidly.

Fig. 4: (a) Average PSNR as a function of the percentage of detail coefficients (level of sparsity) used for registration, for Pentagon, Cameraman, and two different shifts of Lena. In all cases, the worst registration PSNR when using only 2%-7% of the detail coefficients was above 46 dB. (b) Examples illustrating the convergence to the optimal cross-correlation.

VI Conclusion

A sub-pixel registration technique for sparse Haar-encoded images is proposed. Only a sparse set of detail coefficients is sufficient to establish the cross-correlation between images for scale, rotation, and translation recovery. Our registration process is thus performed solely in-band, making the method capable of handling both in-band registration for wavelet-encoded imaging systems and sparsely sensed data for a wavelet-based compressive sensing imager. Moreover, our method conveniently decouples the scale, rotation, and translation parameters, while exploiting the Haar wavelet's important features, such as multi-resolution representation and signal energy localization. Our method does not use image interpolation for estimating the registration parameters, since the exact set of in-band equations is derived for establishing the registration and fitting the parameters. Although the run time of our method is higher than that of the compared methods, we achieve far better accuracy as a reasonable trade-off. Overall, our results show superior performance, outperforming the baseline methods in terms of accuracy and resilience to noise.

References

  • [1] H. Shekarforoush (Foroosh) and R. Chellappa, “Data-driven multi-channel super-resolution with application to video sequences,” Journal of Optical Society of America-A, vol. 16, no. 3, pp. 481–492, 1999.
  • [2] H. Shekarforoush (Foroosh), M. Berthod, J. Zerubia, and M. Werman, “Subpixel bayesian estimation of albedo and height,” International Journal of Computer Vision, vol. 19, no. 3, pp. 289–300, 1996.
  • [3] H. Shekarforoush, M. Berthod, and J. Zerubia, “3d super-resolution using generalized sampling expansion,” in Image Processing, 1995. Proceedings., International Conference on, vol. 2, pp. 300–303, IEEE, 1995.
  • [4] A. Lorette, H. Shekarforoush, and J. Zerubia, “Super-resolution with adaptive regularization,” in Image Processing, 1997. Proceedings., International Conference on, vol. 1, pp. 169–172, IEEE, 1997.
  • [5] H. Shekarforoush, R. Chellappa, H. Niemann, H. Seidel, and B. Girod, “Multi-channel superresolution for images sequences with applications to airborne video data,” Proc. of IEEE Image and Multidimensional Digital Signal Processing, pp. 207–210, 1998.
  • [6] M. Berthod, M. Werman, H. Shekarforoush, and J. Zerubia, “Refining depth and luminance information using super-resolution,” in Computer Vision and Pattern Recognition, pp. 654–657, 1994.
  • [7] H. Shekarforoush, Conditioning bounds for multi-frame super-resolution algorithms. Computer Vision Laboratory, Center for Automation Research, University of Maryland, 1999.
  • [8] A. Jain, S. Murali, N. Papp, K. Thompson, K.-s. Lee, P. Meemon, H. Foroosh, and J. P. Rolland, “Super-resolution imaging combining the design of an optical coherence microscope objective with liquid-lens based dynamic focusing capability and computational methods,” in Optical Engineering+ Applications, pp. 70610C–70610C, International Society for Optics and Photonics, 2008.
  • [9] H. Shekarforoush, A. Banerjee, and R. Chellappa, “Super resolution for fopen sar data,” in AeroSense’99, pp. 123–129, International Society for Optics and Photonics, 1999.
  • [10] I. Junejo and H. Foroosh, “Gps coordinates estimation and camera calibration from solar shadows,” Computer Vision and Image Understanding (CVIU), vol. 114, no. 9, pp. 991–1003, 2010.
  • [11] I. N. Junejo and H. Foroosh, “Gps coordinate estimation from calibrated cameras,” in Pattern Recognition, 2008. ICPR 2008. 19th International Conference on, pp. 1–4, IEEE, 2008.
  • [12] X. Cao and H. Foroosh, “Camera calibration and light source orientation from solar shadows,” Journal of Computer Vision & Image Understanding (CVIU), vol. 105, pp. 60–72, 2007.
  • [13] A. Tariq and H. Foroosh, “A context-driven extractive framework for generating realistic image descriptions,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 619–632, 2017.
  • [14] A. K. Amara Tariq and H. Foroosh, “Nelasso: Building named entity relationship networks using sparse structured learning,” IEEE Trans. on on Pattern Analysis and Machine Intelligence, p. accepted, 2017.
  • [15] A. Tariq, A. Karim, F. Gomez, and H. Foroosh, “Exploiting topical perceptions over multi-lingual text for hashtag suggestion on twitter,” in The Twenty-Sixth International FLAIRS Conference, 2013.
  • [16] A. Tariq and H. Foroosh, “Feature-independent context estimation for automatic image annotation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1958–1965, 2015.
  • [17] A. Tariq and H. Foroosh, “Scene-based automatic image annotation,” in Image Processing (ICIP), 2014 IEEE International Conference on, pp. 3047–3051, IEEE, 2014.
  • [18] A. Tariq and H. Foroosh, “T-clustering: Image clustering by tensor decomposition,” in Image Processing (ICIP), 2015 IEEE International Conference on, pp. 4803–4807, IEEE, 2015.
  • [19] X. C. Imran Junejo and H. Foroosh, “Autoconfiguration of a dynamic non-overlapping camera network,” IEEE Trans. Systems, Man, and Cybernetics, vol. 37, no. 4, pp. 803–816, 2007.
  • [20] H. F. Imran Junejo, “Euclidean path modeling for video surveillance,” Image and Vision Computing (IVC), vol. 26, no. 4, pp. 512–528, 2008.
  • [21] I. N. Junejo and H. Foroosh, “Trajectory rectification and path modeling for video surveillance,” in Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pp. 1–7, IEEE, 2007.
  • [22] C. Sun, I. Junejo, M. Tappen, and H. Foroosh, “Exploring sparseness and self-similarity for action recognition,” IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2488–2501, 2015.
  • [23] C. S. Nazim Ashraf and H. Foroosh, “View-invariant action recognition using projective depth,” Journal of Computer Vision and Image Understanding (CVIU), vol. 123, pp. 41–52, 2014.
  • [24] X. C. Nazim Ashraf, Yuping Shen and H. Foroosh, “View-invariant action recognition using weighted fundamental ratios,” Journal of Computer Vision and Image Understanding (CVIU), vol. 117, pp. 587–602, 2013.
  • [25] Y. Shen and H. Foroosh, “View-invariant action recognition from point triplets,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 31, no. 10, pp. 1898–1905, 2009.
  • [26] Y. Shen and H. Foroosh, “View-invariant action recognition using fundamental ratios,” in Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1–6, IEEE, 2008.
  • [27] C. Sun, I. Junejo, and H. Foroosh, “Action recognition using rank-1 approximation of joint self-similarity volume,” in Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 1007–1012, IEEE, 2011.
  • [28] N. Ashraf, C. Sun, and H. Foroosh, “View invariant action recognition using projective depth,” Computer Vision and Image Understanding, vol. 123, pp. 41–52, 2014.
  • [29] Y. Shen, N. Ashraf, and H. Foroosh, “Action recognition based on homography constraints,” in Pattern Recognition, 2008. ICPR 2008. 19th International Conference on, pp. 1–4, IEEE, 2008.
  • [30] N. Ashraf, Y. Shen, and H. Foroosh, “View-invariant action recognition using rank constraint,” in Pattern Recognition (ICPR), 2010 20th International Conference on, pp. 3611–3614, IEEE, 2010.
  • [31] H. Boyraz, S. Z. Masood, B. Liu, M. Tappen, and H. Foroosh, “Action recognition by weakly-supervised discriminative region localization,”
  • [32] C. Sun, M. Tappen, and H. Foroosh, “Feature-independent action spotting without human localization, segmentation or frame-wise tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2689–2696, 2014.
  • [33] N. Ashraf and H. Foroosh, “Human action recognition in video data using invariant characteristic vectors,” in Image Processing (ICIP), 2012 19th IEEE International Conference on, pp. 1385–1388, IEEE, 2012.
  • [34] W. L. H. F. Chen Shu, Luming Liang, “3d pose tracking with multitemplate warping and sift correspondences,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 26, no. 11, pp. 2043–2055, 2016.
  • [35] H. Shekarforoush and R. Chellappa, “A multi-fractal formalism for stabilization, object detection and tracking in flir sequences,” in Image Processing, 2000. Proceedings. 2000 International Conference on, vol. 3, pp. 78–81, IEEE, 2000.
  • [36] B. Millikan, A. Dutta, Q. Sun, and H. Foroosh, “Compressed infrared target detection using stochastically trained least squares,” IEEE Transactions on Aerospace and Electronics Systems, p. accepted, 2017.
  • [37] B. Millikan, A. Dutta, N. Rahnavard, Q. Sun, and H. Foroosh, “Initialized iterative reweighted least squares for automatic target recognition,” in Military Communications Conference, MILCOM 2015-2015 IEEE, pp. 506–510, IEEE, 2015.
  • [38] H. F. Ozan Cakmakci, Sophie Vo and J. Rolland, “Application of radial basis functions to shape description in a dual-element off-axis magnifier,” Optics Letters, vol. 33, no. 11, pp. 1237–1239, 2008.
  • [39] H. F. Ozan Cakmakci, Brendan Moore and J. Rolland, “Optimal local shape description for rotationally non-symmetric optical surface design and analysis,” Optics Express, vol. 16, no. 3, pp. 1583–1589, 2008.
  • [40] M. Ali and H. Foroosh, “Character recognition in natural scene images using rank-1 tensor decomposition,” in Image Processing (ICIP), 2016 IEEE International Conference on, pp. 2891–2895, IEEE, 2016.
  • [41] M. Alnasser and H. Foroosh, “Image-based rendering of synthetic diffuse objects in natural scenes,” in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 4, pp. 787–790, IEEE.
  • [42] M. Balci and H. Foroosh, “Real-time 3d fire simulation using a spring-mass model,” in Multi-Media Modelling Conference Proceedings, 2006 12th International, pp. 8–pp, IEEE.
  • [43] M. Balci, M. Alnasser, and H. Foroosh, “Image-based simulation of gaseous material,” in Image Processing, 2006 IEEE International Conference on, pp. 489–492, IEEE, 2006.
  • [44] Y. Shen, F. Lu, X. Cao, and H. Foroosh, “Video completion for perspective camera under constrained motion,” in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 3, pp. 63–66, IEEE.
  • [45] I. Junejo and H. Foroosh, “Optimizing ptz camera calibration from two images,” Machine Vision and Applications (MVA), pp. 1–15, 2011.
  • [46] X. Cao and H. Foroosh, “Camera calibration using symmetric objects,” IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3614–3619, 2006.
  • [47] X. Cao and H. Foroosh, “Camera calibration without metric information using 1d objects,” in Image Processing, 2004. ICIP’04. 2004 International Conference on, vol. 2, pp. 1349–1352, IEEE, 2004.
  • [48] I. N. Junejo, X. Cao, and H. Foroosh, “Calibrating freely moving cameras,” in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 4, pp. 880–883, IEEE.
  • [49] X. Cao, J. Xiao, and H. Foroosh, “Self-calibration using constant camera motion,” in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 1, pp. 595–598, IEEE.
  • [50] X. Cao, J. Xiao, and H. Foroosh, “Camera motion quantification and alignment,” in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 2, pp. 13–16, IEEE.
  • [51] N. Ashraf, I. Junejo, and H. Foroosh, “Near-optimal mosaic selection for rotating and zooming video cameras,” Computer Vision–ACCV 2007, pp. 63–72, 2007.
  • [52] H. Foroosh, “Pixelwise adaptive dense optical flow assuming non-stationary statistics,” IEEE Trans. Image Processing, vol. 14, no. 2, pp. 222–230, 2005.
  • [53] H. Foroosh, “A closed-form solution for optical flow by imposing temporal constraints,” in Image Processing, 2001. Proceedings. 2001 International Conference on, vol. 3, pp. 656–659, IEEE, 2001.
  • [54] H. Foroosh, “An adaptive scheme for estimating motion,” in Image Processing, 2004. ICIP’04. 2004 International Conference on, vol. 3, pp. 1831–1834, IEEE, 2004.
  • [55] H. Foroosh, “Adaptive estimation of motion using generalized cross validation,” in 3rd International (IEEE) Workshop on Statistical and Computational Theories of Vision, 2003.
  • [56] V. Atalay and H. Foroosh, “In-band sub-pixel registration of wavelet-encoded images from sparse coefficients,” Signal, Image and Video Processing, p. accepted, 2017.
  • [57] M. Balci and H. Foroosh, “Sub-pixel estimation of shifts directly in the fourier domain,” IEEE Trans. Image Processing, vol. 15, no. 7, pp. 1965–1972, 2006.
  • [58] M. Balci and H. Foroosh, “Sub-pixel registration directly from phase difference,” Journal of Applied Signal Processing-special issue on Super-resolution Imaging, vol. 2006, pp. 1–11, 2006.
  • [59] H. Foroosh, J. Zerubia, and M. Berthod, “Extension of phase correlation to subpixel registration,” IEEE Trans. on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
  • [60] H. Shekarforoush, M. Berthod, and J. Zerubia, “Subpixel image registration by estimating the polyphase decomposition of cross power spectrum,” in Computer Vision and Pattern Recognition, 1996. Proceedings CVPR’96, 1996 IEEE Computer Society Conference on, pp. 532–537, IEEE, 1996.
  • [61] H. Foroosh and M. Balci, “Sub-pixel registration and estimation of local shifts directly in the fourier domain,” in Image Processing, 2004. ICIP’04. 2004 International Conference on, vol. 3, pp. 1915–1918, IEEE, 2004.
  • [62] H. Shekarforoush, M. Berthod, and J. Zerubia, Subpixel image registration by estimating the polyphase decomposition of the cross power spectrum. PhD thesis, INRIA-Technical Report, 1995.
  • [63] M. Balci and H. Foroosh, “Estimating sub-pixel shifts directly from the phase difference,” in Image Processing, 2005. ICIP 2005. IEEE International Conference on, vol. 1, pp. I–1057, IEEE, 2005.
  • [64] M. Balci, M. Alnasser, and H. Foroosh, “Subpixel alignment of mri data under cartesian and log-polar sampling,” in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, vol. 3, pp. 607–610, IEEE, 2006.
  • [65] A. Myronenko and X. Song, “Point set registration: Coherent point drift,” IEEE transactions on pattern analysis and machine intelligence, vol. 32, no. 12, pp. 2262–2275, 2010.
  • [66] T. Kim and Y.-J. Im, “Automatic satellite image registration by combination of matching and random sample consensus,” IEEE transactions on geoscience and remote sensing, vol. 41, no. 5, pp. 1111–1117, 2003.
  • [67] J. Maciel and J. P. Costeira, “A global solution to sparse correspondence problems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 2, pp. 187–199, 2003.
  • [68] L. S. Shapiro and J. M. Brady, “Feature-based correspondence: an eigenvector approach,” Image and vision computing, vol. 10, no. 5, pp. 283–288, 1992.
  • [69] E. S. Ng and N. G. Kingsbury, “Robust pairwise matching of interest points with complex wavelets,” IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3429–3442, 2012.
  • [70] D. Shen, “Fast image registration by hierarchical soft correspondence detection,” Pattern Recognition, vol. 42, no. 5, pp. 954–961, 2009.
  • [71] D. Shen and C. Davatzikos, “Hammer: hierarchical attribute matching mechanism for elastic registration,” IEEE transactions on medical imaging, vol. 21, no. 11, pp. 1421–1439, 2002.
  • [72] E. De Castro and C. Morandi, “Registration of translated and rotated images using finite fourier transforms,” IEEE Trans. on Pattern Anal. Mach. Intell., vol. 9, no. 5, pp. 700–703, 1987.
  • [73] B. S. Reddy and B. N. Chatterji, “An fft-based technique for translation, rotation, and scale-invariant image registration,” IEEE Trans. on Imgage Processing, vol. 5, no. 8, pp. 1266–1271, 1996.
  • [74] Q. Zhang, Y. Wang, and L. Wang, “Registration of images with affine geometric distortion based on maximally stable extremal regions and phase congruency,” Image Vision Comput., vol. 36, pp. 23–39, 2015.
  • [75] J.-M. Morel and G. Yu, “Asift: A new framework for fully affine invariant image comparison,” SIAM J. Img. Sci., vol. 2, no. 2, pp. 438–469, 2009.
  • [76] S. Zokai and G. Wolberg, “Image registration using log-polar mappings for recovery of large-scale similarity and projective transformations,” IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1422–1434, 2005.
  • [77] J. Ashburner, “A fast diffeomorphic image registration algorithm,” Neuroimage, vol. 38, no. 1, pp. 95–113, 2007.
  • [78] A. Klein, J. Andersson, B. A. Ardekani, J. Ashburner, B. Avants, M.-C. Chiang, G. E. Christensen, D. L. Collins, J. Gee, P. Hellier, et al., “Evaluation of 14 nonlinear deformation algorithms applied to human brain mri registration,” Neuroimage, vol. 46, no. 3, pp. 786–802, 2009.
  • [79] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache, “Diffeomorphic demons: Efficient non-parametric image registration,” NeuroImage, vol. 45, no. 1, pp. S61–S72, 2009.
  • [80] J.-P. Thirion, “Image matching as a diffusion process: an analogy with maxwell’s demons,” Medical image analysis, vol. 2, no. 3, pp. 243–260, 1998.
  • [81] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, “Image coding using wavelet transform,” IEEE Transactions on image processing, vol. 1, no. 2, pp. 205–220, 1992.
  • [82] G. M. Davis and A. Nosratinia, “Wavelet-based image coding: an overview,” in Applied and computational control, signals, and circuits, pp. 369–434, Springer, 1999.
  • [83] Z. Liu, B. Nutter, and S. Mitra, “Compressive sampling in fast wavelet-encoded MRI,” in IEEE Southwest Symposium on Image Analysis and Interpretation, SSIAI 2012, Santa Fe, New Mexico, USA, April 22-24, 2012, pp. 137–140, 2012.
  • [84] M. Duarte, M. Wakin, and R. Baraniuk, “Wavelet-domain compressive signal reconstruction using a hidden markov tree model,” in Proc. Proc. of IEEE ICASSP, pp. 5137–5140, 2008.
  • [85] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, “An efficient algorithm for compressed mr imaging using total variation and wavelets,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1–8, 2008.
  • [86] P. Thevenaz, U. E. Ruttimann, and M. Unser, “A pyramid approach to subpixel registration based on intensity,” IEEE transactions on image processing, vol. 7, no. 1, pp. 27–41, 1998.
  • [87] H. Chen, M. Arora, and P. Varshney, “Mutual information based image registration for remote sensing data,” Int. J. Remote Sensing, vol. 24, pp. 3701–3706, 2003.
  • [88] F. Zhou, W. Yang, and Q. Liao, “A coarse-to-fine subpixel registration method to recover local perspective deformation in the application of image super-resolution,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 53–66, 2012.
  • [89] R. Szeliski and J. Coughlan, “Hierarchical spline-based image registration,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 194–201, 1994.
  • [90] A. A. Cole-Rhodes, K. L. Johnson, J. LeMoigne, and I. Zavorin, “Multiresolution registration of remote sensing imagery by optimization of mutual information using a stochastic gradient,” IEEE transactions on image processing, vol. 12, no. 12, pp. 1495–1511, 2003.
  • [91] M. Gong, S. Zhao, L. Jiao, D. Tian, and S. Wang, “A novel coarse-to-fine scheme for automatic image registration based on sift and mutual information,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 7, pp. 4328–4338, 2014.
  • [92] E. Ibn-Elhaj et al., “A robust hierarchical motion estimation algorithm in noisy image sequences in the bispectrum domain,” Signal, image and video processing, vol. 3, no. 3, pp. 291–302, 2009.
  • [93] Z. Hu and S. T. Acton, “Morphological pyramid image registration,” in Proceedings of the 4th IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 227–231, 2000.
  • [94] J.-Y. Lee, K.-B. Kim, S.-H. Lee, T.-E. Kim, and J.-S. Choi, “Image registration for sub-pixel using pyramid edge images,” in Proceedings of the International Conference on IT Convergence and Security 2011, pp. 367–371, Springer, 2012.
  • [95] L. Ding, A. Goshtasby, and M. Satter, “Volume image registration by template matching,” Image and Vision Computing, vol. 19, no. 12, pp. 821–832, 2001.
  • [96] G. J. Vanderbrug and A. Rosenfeld, “Two-stage template matching,” IEEE Transactions on Computers, vol. 26, no. 4, pp. 384–393, 1977.
  • [97] A. Rosenfeld, “Coarse-fine template matching,” IEEE Trans. Syst., Man & Cybern., vol. 2, no. 2, pp. 104–107, 1977.
  • [98] M. Hirooka, K. Sumi, M. Hashimoto, H. Okuda, and S. Kuroda, “Hierarchical distributed template matching,” in Electronic Imaging’97, pp. 176–183, International Society for Optics and Photonics, 1997.
  • [99] S. Yoshimura and T. Kanade, “Fast template matching based on the normalized correlation by using multiresolution eigenimages,” in Intelligent Robots and Systems’ 94.’Advanced Robotic Systems and the Real World’, IROS’94. Proceedings of the IEEE/RSJ/GI International Conference on, vol. 3, pp. 2086–2093, IEEE, 1994.
  • [100] S. L. Tanimoto, “Template matching in pyramids,” Computer Graphics and Image Processing, vol. 16, no. 4, pp. 356–369, 1981.
  • [101] V. A. Anisimov and N. D. Gorsky, “Fast hierarchical matching of an arbitrarily oriented template,” Pattern Recognition Letters, vol. 14, no. 2, pp. 95–101, 1993.
  • [102] R. Turcajova and J. Kautsky, “A hierarchical multiresolution technique for image registration,” in Proc. SPIE’s International Symposium on Optical Science, Engineering, and Instrumentation. Mathematical Imaging: Wavelet Applications in Signal and Image, pp. 686–696, International Society for Optics and Photonics, 1996.
  • [103] H. Kekre, T. K. Sarode, and R. B. Karani, “A deviant transform based approach for color image registration,” in Communication, Information & Computing Technology (ICCICT), 2012 International Conference on, pp. 1–6, IEEE, 2012.
  • [104] S.-H. Wang, J.-P. Li, and Y.-Q. Yang, “Study on subpixel registration methods based on wavelet analysis,” in Apperceiving Computing and Intelligence Analysis, 2008. ICACIA 2008. International Conference on, pp. 236–240, 2008.
  • [105] J. Le Moigne, W. J. Campbell, and R. F. Cromp, “An automated parallel image registration technique based on the correlation of wavelet features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 8, pp. 1849–1864, 2002.
  • [106] I. Zavorin and J. Le Moigne, “Use of multiresolution wavelet feature pyramids for automatic registration of multisensor imagery,” IEEE Transactions on Image Processing, vol. 14, no. 6, pp. 770–782, 2005.
  • [107] J. Le Moigne, “Parallel registration of multi-sensor remotely sensed imagery using wavelet coefficients,” Proceedings of the SPIE: Wavelet Applications, Orlando, Florida, vol. 2242, pp. 432–443, 1994.
  • [108] H. S. Stone, J. Le Moigne, and M. McGuire, “The translation sensitivity of wavelet-based registration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 1074–1081, 1999.
  • [109] J. Le Moigne and I. Zavorine, “Use of wavelets for image registration,” in Wavelet Applications VII, vol. 4056, pp. 99–108, 2000.
  • [110] J. Le Moigne, N. S. Netanyahu, J. G. Masek, D. M. Mount, S. Goward, and M. Honzak, “Geo-registration of landsat data by robust matching of wavelet features,” in Geoscience and Remote Sensing Symposium, 2000. Proceedings. IGARSS 2000. IEEE 2000 International, vol. 4, pp. 1610–1612, IEEE, 2000.
  • [111] A. A. Patil and J. Singhai, “Discrete curvelet transform based super-resolution using sub-pixel image registration,” International Journal of Signal Processing, Image processing, and pattern recognition, vol. 4, no. 2, pp. 41–50, 2011.
  • [112] M. Tomiya and A. Ageishi, “Registration of remote sensing image data based on wavelet transform,” INTERNATIONAL ARCHIVES OF PHOTOGRAMMETRY REMOTE SENSING AND SPATIAL INFORMATION SCIENCES, vol. 34, no. 3/W8, pp. 145–150, 2003.
  • [113] J. Wu and A. C. Chung, “Multimodal brain image registration based on wavelet transform using sad and mi,” in International Workshop on Medical Imaging and Virtual Reality, pp. 270–277, Springer, 2004.
  • [114] Y.-T. Wu, T. Kanade, C.-C. Li, and J. Cohn, “Image registration using wavelet-based motion model,” International Journal of Computer Vision, vol. 38, no. 2, pp. 129–152, 2000.
  • [115] G. Hong and Y. Zhang, “Wavelet-based image registration technique for high-resolution remote sensing images,” Computers & Geosciences, vol. 34, no. 12, pp. 1708–1720, 2008.
  • [116] M. M. Alam, T. Howlader, and S. M. Rahman, “Entropy-based image registration method using the curvelet transform,” Signal, Image and Video Processing, vol. 8, no. 3, pp. 491–505, 2014.
  • [117] D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
  • [118] R. Marti and G. Reinelt, “Branch-and-bound,” in The Linear Ordering Problem, pp. 85–94, Springer, 2011.
  • [119] “CMU: vision and autonomous systems center’s image database.” http://vasc.ri.cmu.edu/idb/images/motion/. Accessed: 2017-04-30.
  • [120] “UCSC: mdsp super-resolution and demosaicing datasets.” http://www.soe.ucsc.edu/milanfar/software/sr-datasets.html. Accessed: 2010-09-30.
  • [121] P. Vandewalle, L. Sbaiz, J. Vandewalle, and M. Vetterli, “Super-resolution from unregistered and totally aliased signals using subspace methods,” Signal Processing, IEEE Transactions on, vol. 55, no. 7, pp. 3687–3703, 2007.
  • [122] D. Keren, S. Peleg, and R. Brada, “Image sequence enhancement using sub-pixel displacements,” in Computer Vision and Pattern Recognition, 1988. Proceedings CVPR’88., Computer Society Conference on, pp. 742–746, IEEE, 1988.
  • [123] M. Guizar-Sicairos, S. Thurman, and J. Fienup, “Efficient sub-pixel image registration algorithms,” Optics letters, vol. 33, no. 2, pp. 156–158, 2008.
  • [124] G. D. Evangelidis and E. Z. Psarakis, “Parametric image alignment using enhanced correlation coefficient maximization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 10, pp. 1858–1865, 2008.
  • [125] Y. He, K.-H. Yap, L. Chen, and L.-P. Chau, “A nonlinear least square technique for simultaneous image registration and super-resolution,” Image Processing, IEEE Transactions on, vol. 16, no. 11, pp. 2830–2841, 2007.
  • [126] L. Chen and K. Yap, “An effective technique for sub-pixel image registration under noisy conditions,” IEEE Transactions on Systems, Man and Cybernetics, vol. 38, no. 4, pp. 881–887, 2008.