FFT Interpolation from Nonuniform Samples Lying in a Regular Grid

J. Selva
Abstract

This paper presents a method for interpolating a periodic band-limited signal from samples lying at nonuniform positions on a regular grid. The method is based on the FFT and has the same complexity order as that algorithm. This kind of interpolation is usually termed “the missing samples problem” in the literature, and a wide variety of iterative and direct methods exist for its solution. The method presented in this paper is a direct one that exploits the properties of the so-called erasure polynomial, and it provides a significant improvement on the most efficient method in the literature, which seems to be the burst error recovery (BER) technique of Marvasti et al. The numerical stability and complexity of the method are evaluated numerically and compared with the pseudo-inverse and BER solutions.

I Introduction

In a variety of applications, a band-limited signal is converted from the analog to the discrete domain, but some of the resulting samples are lost due to various causes. The problem is then to interpolate the lost samples from the available ones, assuming the average rate of the latter fulfills the Nyquist condition. To cite a few applications in which this problem arises: it is required whenever a sampled signal is sent through a packet network with losses [1]. It is also a basic spectral estimation problem whenever a channel spectrum must be estimated from nonuniform samples in OFDM systems (pilot-aided estimation) [2, 3]. It is equivalent to the error calculation step for the so-called Bose-Chaudhuri-Hocquenghem (BCH) DFT codes, in which the coding is performed in the real field before quantization [4, 5, 6]. Finally, in time-interleaved analog-to-digital converters (TI-ADCs), some samples at arbitrary positions can be unavailable due to a jitter calibration process and must be recovered [7, 8].

In sampling theory, this problem is usually termed “the missing samples problem”, and it is addressed assuming the signal’s bandwidth is unknown but fulfills the Nyquist condition. The basic interpolation model is then the trigonometric one, i.e., the signal is viewed as a trigonometric polynomial, and the problem reduces to computing the polynomial’s coefficients and, from them, the missing samples [9], [10, Ch. 17]. As can easily be deduced, this task is equivalent to solving a linear system, for which there exist various standard techniques. There are, however, two main issues. The first is numerical stability, since round-off errors accumulate heavily if there is a large number of consecutive missing samples. The second is complexity, given that the linear system is large and the complexity order of the standard techniques depends cubically on its size. These two drawbacks have led to the development of various direct and iterative algorithms for recovering the lost samples over the last decades. Probably the earliest solution in the literature was the Papoulis-Gerchberg algorithm [11, 12], an iterative method based on the FFT. Standard techniques like the conjugate gradient and Lagrange interpolation methods have also been employed [13, Sec. 3]. Reference [1] presents another iterative method together with several ways to speed up its convergence using extrapolation. The BER technique of Marvasti et al. in [13] seems to be the most efficient technique to date. It is numerically stable and achieves the complexity order , where is the total number of samples and the number of known ones. This complexity order is a clear improvement relative to that of the standard methods, which is .

The purpose of this paper is to present a new direct solution for the missing samples problem whose complexity order is . If denotes the ratio of total to known samples and is assumed constant, then the complexity of the BER technique is , while that of the proposed method is . Thus, the proposed method provides a significant improvement in terms of complexity. Its arithmetic operation count is up to a factor of twenty smaller than the corresponding count of the BER technique for typical values of . The method proposed in this paper is based on two theorems. The first gives a procedure to obtain the missing samples that consists of two FFTs plus three weighting operations. The coefficients of two of the three weighting operations depend on the sampling positions, so the procedure is efficient but only usable if those coefficients have been pre-computed. The second theorem removes this shortcoming by specifying a procedure to compute the weighting coefficients in just two FFTs plus the computation of one complex exponential per output sample. The combination of the two theorems yields the proposed method, whose complexity is .

The paper is organized as follows. In the next subsection, we introduce the notation and recall several basic results about periodic signals and the FFT that will be instrumental in the paper. Then, in Sec. II, we introduce the missing samples problem and the BER technique. Afterward, we present in Secs. III and IV the two theorems that make up the proposed method. The complexity orders of the BER and proposed methods are then discussed in Sec. V. Finally, Sec. VI compares the standard pseudo-inverse, BER, and proposed methods numerically in terms of numerical stability and computational burden.

I-A Notation and basic concepts

We will employ the following notation:

  • Throughout the paper, will denote the time variable, and , and integer variables.

  • Definitions of new symbols and functions will be written using ’’.

  • Vectors will be denoted in bold face, (, ).

  • will denote the th component of vector .

  • For integer , will denote the set

  • For a sequence and an index set , the notation will specify the set of values for indices in .

  • and will respectively denote the DFT and IDFT of vector , but computed using a fast algorithm based on the FFT. Note that there exist fast algorithms of this kind for any vector length, such as the chirp transform [14, Sec. 6.9.2].

  • For two sets, and , will denote the set of elements in not in .

  • will denote the component-wise product of and , i.e., for and of equal length, .

  • For two -period discrete sequences, and , will denote their cyclic convolution, defined by

    where can be any integer, given that and have period . The cyclic convolution can be efficiently evaluated using the FFT by means of the formula

    (1)

    where

In the paper, we will exploit several basic results about the evaluation of periodic band-limited signals using the FFT. For integer , will denote the set of signals whose Fourier series is restricted to the index range ; specifically, contains the signals of the form

(2)

where and .

For , consider the following vectors of samples of and , and Fourier coefficients ,

(3)

As is well known, we can switch from to and vice versa through the equations

We may express this relation using set notation in the following way,

(4)

where we interpret these sets as ordered.

is closed under differentiation, i.e., if then . This property is obvious since the Fourier coefficients of are . A consequence of this property is that we may compute the values , , using the DFT/IDFT pair. More precisely, we have

(5)

where

(6)
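Since differentiation multiplies the k-th Fourier coefficient by j2πk, the derivative samples follow from one DFT/IDFT pair, as (5)-(6) state. A minimal numerical check of this fact (the grid size, bandwidth, and variable names are illustrative):

```python
import numpy as np

N, M = 16, 5                        # grid size and number of Fourier coefficients
rng = np.random.default_rng(1)
c = rng.standard_normal(M) + 1j * rng.standard_normal(M)

k = np.arange(N)
# samples of f(t) = sum_{k<M} c_k exp(j 2 pi k t) at t = n/N
f = np.fft.ifft(np.r_[c, np.zeros(N - M)]) * N

# derivative via the DFT/IDFT pair: weight bin k by j 2 pi k
fp = np.fft.ifft(2j * np.pi * k * np.fft.fft(f))

# analytic derivative samples for comparison
kM = np.arange(M)
fp_exact = np.array([np.sum(2j * np.pi * kM * c * np.exp(2j * np.pi * kM * t))
                     for t in np.arange(N) / N])
```

The weighting by the bin index is valid here because the spectrum occupies only bins 0 to M-1 < N, matching the one-sided index range of (2).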

II The missing samples problem

A basic interpolator for a band-limited signal is the trigonometric one, i.e., it consists of viewing as a trigonometric polynomial of the form

(7)

where we assume that is interpolated in the range with , denotes the th coefficient, the first polynomial index, and the number of coefficients. If (7) is sufficiently accurate and is sampled with period for an integer , then it is well known that the coefficients and the value of at any can be efficiently computed from the set of samples using algorithms from the FFT family [15].

In some applications, however, samples from the set are lost due to various causes, and then the problem consists of recovering these missing samples in a numerically stable way and with low computational burden from the known ones. More precisely, if denotes the indices of the known samples, then has elements and the objective is to obtain the samples , where is the complement of relative to ,

In this problem, the initial index and the time period are irrelevant, given that we may scale so that its period is and its first frequency is zero. So, in order to simplify the notation, we may state the problem in terms of the following normalized signal

From (7) we have that has the form

(8)

where . In terms of , the problem consists of computing the samples , assuming the samples are known.

As can be readily checked, the solution to this problem just involves the inversion of the linear system

(9)

in which the unknowns are the coefficients , followed by the computation of the desired samples using (8) for . The inversion can in principle be tackled using conventional linear algebra techniques, whose complexity order is [16, Ch. 3]. It must be noted that (9) is often ill conditioned, especially if there exist long sequences of missing samples, and it is then necessary to resort to the pseudo-inverse. The high complexity of conventional methods has led to the development of a variety of iterative and non-iterative methods with lower complexity over the last decades; see [10, Ch. 17] for a review of this topic.
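The standard pseudo-inverse route just described can be sketched numerically as follows. All sizes, the missing-index set, and variable names are illustrative; `np.linalg.lstsq` returns the least-squares (pseudo-inverse) solution of the overdetermined system:

```python
import numpy as np

N, M = 32, 8                                  # grid size, number of coefficients
rng = np.random.default_rng(3)
c = rng.standard_normal(M) + 1j * rng.standard_normal(M)
n = np.arange(N)
kM = np.arange(M)
x = np.exp(2j * np.pi * np.outer(n / N, kM)) @ c     # samples of the model (8)

U = np.array([3, 4, 5, 20])                   # missing-sample indices
K = np.setdiff1d(n, U)                        # known-sample indices

A = np.exp(2j * np.pi * np.outer(K / N, kM))  # system (9) restricted to known rows
c_hat, *_ = np.linalg.lstsq(A, x[K], rcond=None)     # pseudo-inverse solution
x_U = np.exp(2j * np.pi * np.outer(U / N, kM)) @ c_hat   # recovered missing samples
```

The cost of this route grows cubically with the system size, which is precisely the drawback that motivates the faster methods discussed next.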

In [13], Marvasti et al. presented the so-called BER technique for this problem, whose complexity order is . This order is a clear improvement relative to the order of the standard solutions, and relative to other methods like the Lagrange interpolation and conjugate gradient methods. The key to the BER method lies in two relations between the following three polynomials:

  • : Element of such that if and if .

  • : Polynomial with the same definition as but with and switched.

  • : Erasure polynomial. This is the monic element of of degree that has one simple zero at each of the instants of the missing samples (set ), i.e., the polynomial

    (10)

To introduce the first relation in the BER technique, note that computing the desired samples is equivalent to computing , given that if . Additionally, from the definitions of and , it is clear that

(11)

This equation can be written in the coefficients (frequency) domain using (4),

(12)

where and respectively denote the Fourier coefficients of and . But if and, therefore, (12) implies

(13)

So, the DFT of the samples gives partial information about , namely its coefficients for .

The second relation links with the erasure polynomial and is the following

(14)

This relation is also a direct consequence of the definitions of and , given that , if , and if . If we take (14) to the frequency domain using the DFT (4), then (14) turns into a cyclic convolution of the coefficients of and . More precisely, we have

(15)

where denotes the coefficients of , and we take as a periodic sequence, i.e., , . This second relation can be written as a recursive formula for computing , if is known for . For this, just note from (10) that and solve for in (15),

(16)

We have that (13) already provides the coefficients in this sum if . So we may recursively apply this last formula for , in order to compute the missing coefficients , .

Finally, from we obtain the desired samples through one inverse DFT,

(17)

where

We can see in this method that the operation of inserting zeros, either in a vector or using the erasure polynomial, is the key to obtaining an efficient solution. Indeed, the zero insertion in the definitions of and permits the use of the DFT in going from (11) to (12), and the multiplication by the erasure polynomial in (14) produces a zero sequence and the corresponding cyclic convolution in (15). There is, however, a more powerful way to exploit this zero-insertion property that leads to a method entirely based on the DFT and weighting operations, with complexity . The method is based on the properties of the signal . It is presented in the next section and yields the desired samples in just two DFTs, if some samples of and its derivative are known.
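The BER machinery described in this section can be illustrated with a short sketch on the normalized model (8), with the coefficients placed in bins 0 to M-1. All sizes, index sets, and variable names are illustrative; the recursion below is the frequency-domain consequence of the fact that the erasure polynomial annihilates the missing-sample signal, in the spirit of (14)-(16):

```python
import numpy as np

N, M = 32, 8                              # grid size, number of coefficients of x
rng = np.random.default_rng(2)
coef = rng.standard_normal(M) + 1j * rng.standard_normal(M)
n = np.arange(N)
x = np.exp(2j * np.pi * np.outer(n / N, np.arange(M))) @ coef   # model (8)

U = np.array([3, 4, 5, 20])               # missing-sample indices
P = len(U)
x_p = x.copy()
x_p[U] = 0.0                              # zero-filled known samples

# erasure polynomial q(z) = prod_u (z - z_u), z_u = exp(j 2 pi u / N);
# a[p] is the coefficient of z^p, with a[P] = 1 (monic)
a = np.poly(np.exp(2j * np.pi * U / N))[::-1]

# e = x - x_p is nonzero only on U and satisfies q(z_n) e[n] = 0 for all n,
# so in frequency sum_p a[p] E[k - p] = 0 (cyclic).  Since X[k] = 0 for
# k >= M, E[k] = -DFT(x_p)[k] is known there; the recursion fills the rest.
E = np.empty(N, dtype=complex)
Xp = np.fft.fft(x_p)
E[M:] = -Xp[M:]
for t in range(M - 1, -1, -1):            # solve a[P] E[t] = -sum_{j>=1} a[P-j] E[t+j]
    E[t] = -sum(a[P - j] * E[(t + j) % N] for j in range(1, P + 1))

x_rec = x_p + np.fft.ifft(E)              # missing samples recovered at indices U
```

The recursion is well posed here because the number of missing samples does not exceed the number of zero spectral bins, which is the Nyquist-type condition assumed throughout.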

III Proposed method for fixed sampling positions

We have the following theorem.

Theorem 1.

The desired samples are given by the formula

(18)

where was defined in (6) and

(19)

This theorem specifies the method proposed in this paper to compute the desired samples , if the values of and appearing in (18) and (19) are available. To implement it, it is only necessary to form the nonuniformly zero-padded vector in (19) and then perform the steps specified in (18), i.e.,

  1. Compute the DFT of .

  2. Weight the result component-wise using .

  3. Compute the inverse DFT.

  4. Multiply the samples with by .

If what is required is the set of Fourier coefficients , then they can be computed from , through one FFT using the formula

with

where is the smallest divisor of such that .
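The procedure of Theorem 1 can be sketched numerically as follows, on the normalized model (8). All sizes and names are illustrative, and the erasure polynomial and its derivative are evaluated directly from their definitions here; removing exactly that per-sampling-scheme cost is the subject of Theorem 2:

```python
import numpy as np

N, M = 32, 8                              # grid size, number of coefficients of x
rng = np.random.default_rng(0)
coef = rng.standard_normal(M) + 1j * rng.standard_normal(M)
n = np.arange(N)
x = np.exp(2j * np.pi * np.outer(n / N, np.arange(M))) @ coef   # model (8)

U = np.array([3, 4, 5, 20])               # missing-sample indices
zU = np.exp(2j * np.pi * U / N)           # simple zeros of the erasure polynomial

# q(t) = prod_u (exp(j 2 pi t) - z_u): zero exactly at the missing instants
q = np.array([np.prod(np.exp(2j * np.pi * t) - zU) for t in n / N])

# y = x * q is known everywhere (q kills the unknown samples); it is
# band-limited to bins 0 .. M - 1 + |U| < N, so its derivative samples
# follow from one DFT/IDFT pair with the bin-index weighting
y = np.where(np.isin(n, U), 0.0, x * q)
yp = np.fft.ifft(2j * np.pi * n * np.fft.fft(y))

# product rule at a missing instant: y'(t_u) = x(t_u) q'(t_u), since q(t_u) = 0
qp = np.array([2j * np.pi * zU[i] * np.prod(np.delete(zU[i] - zU, i))
               for i in range(len(U))])
x_U = yp[U] / qp                          # the recovered missing samples
```

The division is valid because the missing instants are simple zeros of the erasure polynomial, so its derivative does not vanish there.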

Proof of theorem 1.

Consider the signal

and two key facts related with it. The first is that we know its value at all instants in the regular grid . This is so because either and then both factors of the product are known, or and then because . As a consequence, we have enough information to form the vector in (19), akin to in (3), given that the only samples of appearing in (19) are the known ones, [, ].

The second fact is that belongs to . We can see that this is so if we view (8) and (10) as polynomials in the variable . Since the right-hand side of (8) has degree and (10) has degree (number of elements of ), then has degree in . In other words, has the form in (2). As a consequence, we may compute the derivative samples of using (5). We have

(20)

where

Finally, the product differentiation rule allows us to obtain the desired samples , , from , given that has placed zeros at the desired instants . Specifically, since if , we have

So, solving for we obtain

Note that the division by is valid because the instants are simple zeros of . The theorem’s formula in (18) is the result of substituting (20) into this last equation. ∎

IV Computation of the erasure polynomial weights and

In Theorem 1, the samples of and depend on the sampling scheme and, therefore, they must be re-computed whenever the set changes. If this re-computation is performed using (10) directly, then the cost of obtaining is . As for the samples , they can be computed from the derivative of (10),

with complexity . These complexities are too high for real-time systems. The following theorem presents a method to compute these values with complexity . It involves the computation of two size- FFTs and complex exponentials.

Theorem 2.

Consider the -period sequence specified by

(21)

and , . Let denote the cyclic convolution

(22)

where is the cyclic indicator sequence for , defined by

(23)

and , . The samples of and required in theorem 1 are given by

(24)
(25)

Note that the sequence is independent of the sampling scheme and can therefore be computed offline. This theorem implies that computing the required samples of and requires just the cyclic convolution in (22) plus one complex exponential per sample. Since the cyclic convolution can be performed using the FFT [Eq. (1)], the total computational cost is . In computing the cyclic convolution, the DFT of the sequence can be spared, given that it can be performed offline. So updating and requires just two FFTs.

Proof of theorem 2.

Let us write (10) in terms of , taking into account that has elements. If , we have:

(26)
(27)

In this last step, note that the summation is the cyclic convolution of with the indicator sequence of in (23); i.e.,

and we have from (27)

(28)

Thus we have proved (24).

For deriving (25), we must first consider the signal with the same definition as in (10), but with in place of , i.e., the signal

(29)

For , we may repeat the derivations in (26) to (28) already performed for and, as can be easily checked, the result is the formula in (28) but with and switched and in place of in the first term of the exponent. Specifically, we obtain

(30)

where is the indicator sequence of , defined by , , and

Next, we require two simple results about and the indicators and . The first is the property

(31)

which is proved in Ap. A. The second is the fact that we may write (30) in terms of instead of , because these two indicator functions are complementary; i.e., since and , we have

(32)

where is the all-ones sequence.

Now, using (31) and (32) we have that can be obtained from :

And substituting this formula into (30), we obtain a result of the form in (28) but for ,

(33)

Let us derive the formula for , . For this, consider the product . From (26) and (29), we have that this product is a monic polynomial whose root set is , given that and . So, we have

(34)

In this derivation, we have used the fact that the monic th-order polynomial with root set is .

Next, let us apply the product differentiation rule to the equation derived in (34),

at , . For its left-hand side, we have

(35)

given that if . And for its right-hand side, we have

(36)

So, the combination of (35) and (36) yields

Finally, substituting (33) into this last formula we obtain

which is (25). ∎

V Complexity analysis

In this section, we present counts of the number of floating point operations (flops) for both the proposed method and the BER technique. Since the cost of basic operations like multiplication and the complex exponential may vary widely with the hardware implementation, we have employed the convention in Fig. 1 for measuring the complexity.

Operation               Flops
Real sum                1
Complex sum             2
Real multiplication     1
Complex multiplication  6
Complex exponential     7
Size- FFT, IFFT

Fig. 1: Flop counts for basic operations.

The flop count of each step in the BER technique, as explained in Sec. II, is the following:

  • Computation of using (10),

  • DFT of the sequence for obtaining the coefficients in (16): .

  • DFT of sequence , for computing : .

  • Computation of recursive formula in (16),

  • Inverse DFT for obtaining the final result : .

The total cost of the BER technique is the following

(37)

The implementation of the proposed method has the following flop counts:

  • Computation of in (21). We assume zero cost for this operation, given that it can be performed offline.

  • Computation of in (22). This operation involves two FFTs plus complex multiplications. The cost is

  • Computation of samples , , in (19) using (24). We assume the factor in the exponent of (24) has been pre-computed. The cost of this operation is

  • Computation of the second factor in (18). This operation involves two DFTs and real-to-complex products, with total cost

  • Computation of from (25), and product with the output of the previous step. The cost is

The total cost of the proposed technique is the following

By comparing (37) with this last equation, we can readily see that the complexity of the proposed method is free of quadratic terms, while the complexity of the BER technique is dominated by these terms when is separated from and .

VI Numerical examples

VI-A Numerical stability

The linear system in (9) is ill conditioned if there are long sequences of missing samples. This fact makes conventional inversion methods such as Gaussian elimination numerically unstable for (9). So, in order to validate the method proposed in this paper, it is necessary to evaluate the accumulation of round-off errors. For this, we compare the following three methods in the sequel, using double precision arithmetic:

  • Pseudo-inverse method: Based on solving (9) for unknowns using the pseudo-inverse, and then computing , using (8).

  • BER technique: Combination of (13), (16), and (17).

  • Proposed method: Method in theorem 1 using the computation procedure for and in theorem 2.

In the examples that follow, we employ test signals of the form in (8) with , where and are independent realizations of a uniform distribution in the interval . The figures are based on 100 Monte Carlo trials.

We present two examples. In the first, we assume a sampling grid which is the result of shifting the samples of a uniform grid (jittered sampling). In the second, we address the extrapolation problem, i.e., the sampling grid has a long gap that must be filled.

VI-A1 Round-off error for a jittered sampling scheme

In this example, we fix an oversampling factor and relate and through . Then, we take sampling instants , where is randomly taken from the set with uniform distribution (jittered sampling).

Fig. 2: Round-off error versus number of output samples () for the proposed and pseudo-inverse methods.

Fig. 2 shows the round-off error versus the number of output samples . The ordinate in this figure is the largest error among the interpolated samples. The proposed method improves on the BER technique slightly, and these two methods show a slight accuracy loss (one to two decimal digits) relative to the pseudo-inverse solution. The error of the proposed method is sufficiently small for most applications.

VI-A2 Round-off error for extrapolation

In this example, we fix and take as input samples those with instants . The objective is to interpolate the signal at instants . Fig. 3 shows the maximum round-off error versus .

Fig. 3: Maximum round-off error versus number of input samples .

Note that there is little difference between the performances of the three methods, with the BER technique performing slightly better.

VI-B Computational burden

In this section, we evaluate the computational burden of the proposed method relative to the BER technique, using the results in Sec. V.

VI-B1 Complexity versus grid size

Fig. 4 shows the flop counts for the proposed and BER methods versus the grid size, assuming . There are two variants of the proposed method in this figure. In variant “Prop. A”, in (22) is computed using the FFT (1), while in variant “Prop. B”, (22) is evaluated directly. The proposed method shows a clear improvement relative to the BER technique. For large , “Prop. A” is roughly a factor of 16 less complex than the BER technique. Also, note that “Prop. B” improves on “Prop. A” for small . This is due to the fact that the cyclic convolution in (22) can be evaluated directly without any multiplications.

Fig. 4: Complexity versus grid size for two variants of the proposed method and the BER technique. Variant “Prop. A” computes in (22) using the FFT, while variant “Prop. B” performs this last computation by directly evaluating the convolution in (22).

VI-B2 Complexity versus ratio

Fig. 5 shows the ratio

versus the factor for , where “Prop. A” was described in the previous subsection. This figure shows that “Prop. A” improves on the BER technique for all values except the very small or very large ones. In fact, the BER technique is more efficient only if or , ( or ). The maximum improvement is roughly a factor of 20.

Fig. 5: Improvement factor of the proposed method relative to the BER technique in terms of flop count, (proposed method’s count / BER technique’s count).

VI-B3 Complexity compared with the zero-padding FFT algorithm

If is an integer and is a regular grid with spacing , then the missing samples problem can be solved using the zero-padding FFT (ZP-FFT) algorithm [17, Sec. 3.11]. Fig. 6 shows the complexity of this well-known algorithm and that of the method in this paper. The curve “Proposed, no weight comp.” is the count of “Prop. A” discounting the cost of computing and , given that the sampling grid is constant. This figure shows that, in rough terms, the proposed method is only a factor of two more complex than the ZP-FFT if the weight factors are available, and a factor of four if not.

Fig. 6: Complexity versus the zero-padding FFT algorithm.

VII Conclusions

We have presented a solution for the missing samples problem based on the FFT. The method has complexity and consists of four FFTs plus several operations of order . It provides a significant improvement on the burst error recovery (BER) technique, which seems to be the most efficient method in the literature: for typical values of , the complexity is reduced by up to a factor of 18 relative to that technique. The method has been assessed numerically in terms of stability and computational burden.

Appendix A Proof of (31)

In order to prove (31), write the summation as the logarithm of a polynomial in :

(38)

Note that is the monic polynomial with roots , , and these roots also appear in (38), except for the root at . So we have that the polynomial in (38) is actually and
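The argument above hinges on a classical root-of-unity identity: the product of the factors (1 - z_k) over the N-th roots of unity other than z = 1 equals N, since (z^N - 1)/(z - 1) = 1 + z + ... + z^{N-1} evaluates to N at z = 1. A quick numerical check (the grid size is illustrative):

```python
import numpy as np

N = 16
k = np.arange(1, N)
p = np.prod(1 - np.exp(2j * np.pi * k / N))
# p should equal N: z^N - 1 = prod_k (z - z_k), and dividing out the root
# at z = 1 leaves 1 + z + ... + z^{N-1}, which is N at z = 1
```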

References

  • [1] M. Ghandi, M. J. Yekta, and F. Marvasti, “Some nonlinear/adaptive methods for fast recovery of the missing samples of signals,” Signal Processing, vol. 88, no. 3, pp. 624–638, 2008.
  • [2] M. K. Ozdemir and H. Arslan, “Channel estimation for wireless OFDM systems,” IEEE Communications Surveys & Tutorials, vol. 9, no. 2, pp. 18–48, 2007.
  • [3] P. Fertl and G. Matz, “Channel estimation in wireless OFDM systems with irregular pilot distribution,” IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3180–3194, June 2010.
  • [4] M. Vaezi and F. Labeau, “Distributed source-channel coding based on real-field BCH codes,” IEEE Transactions on Signal Processing, vol. 62, no. 5, pp. 1171–1184, March 2014.
  • [5] P. J. S. G. Ferreira and J. M. N. Vieira, “Stable DFT codes and frames,” IEEE Signal Processing Letters, vol. 10, no. 2, pp. 50–53, 2003.
  • [6] G. Rath and C. Guillemot, “Subspace-based error and erasure correction with DFT codes for wireless channels,” IEEE Transactions on Signal Processing, vol. 52, no. 11, pp. 3241–3252, 2004.
  • [7] A. K. M. Pillai and H. Johansson, “A sub-band based reconstructor for M-channel time-interleaved ADCs with missing samples,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2014, pp. 4126–4130.
  • [8] K. M. Tsui and S. C. Chan, “A novel iterative structure for online calibration of M-channel time-interleaved ADCs,” IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 2, pp. 312–325, 2014.
  • [9] T. E. Tuncer, “Block-based methods for the reconstruction of finite-length signals from nonuniform samples,” IEEE Transactions on Signal Processing, vol. 55, no. 2, pp. 530–541, Feb. 2007.
  • [10] F. Marvasti, Ed., Nonuniform Sampling: Theory and Practice, Kluwer Academic/Plenum Publishers, 2001.
  • [11] A. Papoulis, “A new algorithm in spectral analysis and band-limited extrapolation,” IEEE Transactions on Circuits and Systems, vol. 22, no. 9, pp. 735–742, Sep. 1975.
  • [12] P. J. S. G. Ferreira, “Interpolation and the discrete Papoulis-Gerchberg algorithm,” IEEE Transactions on Signal Processing, vol. 42, no. 10, pp. 2596–2606, Oct. 1994.
  • [13] F. Marvasti, M. Hasan, M. Echhart, and S. Talebi, “Efficient algorithms for burst error recovery using FFT and other transform kernels,” IEEE Transactions on Signal Processing, vol. 47, no. 4, pp. 1065–1075, Apr. 1999.
  • [14] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Prentice Hall, 1989.
  • [15] M. Frigo and S. G. Johnson, “The design and implementation of FFTW3,” Proceedings of the IEEE, vol. 93, no. 2, pp. 216–231, Feb. 2005.
  • [16] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, 1996.
  • [17] R. G. Lyons, Understanding Digital Signal Processing, Prentice Hall, 2001.