On the Estimation of Nonrandom Signal Coefficients from Jittered Samples


Daniel S. Weller* and Vivek K Goyal. This material is based upon work supported in part by the Office of Naval Research through a National Defense Science and Engineering Graduate (NDSEG) fellowship, the National Science Foundation under CAREER Grant No. 0643836, Texas Instruments through its Leader University program, and Analog Devices, Inc. D. S. Weller and V. K. Goyal are with the Massachusetts Institute of Technology, Room 36-680, 77 Massachusetts Avenue, Cambridge, MA 02139 USA (phone: +1.617.324.5862; fax: +1.617.324.4290; email: {dweller,vgoyal}@mit.edu).
Abstract

This paper examines the problem of estimating the parameters of a bandlimited signal from samples corrupted by random jitter (timing noise) and additive iid Gaussian noise, where the signal lies in the span of a finite basis. For this classical estimation problem, the Cramér–Rao lower bound (CRB) is computed, and an Expectation-Maximization (EM) algorithm approximating the maximum likelihood (ML) estimator is developed. Simulations are performed to study the convergence properties of the EM algorithm and to compare its performance against both the CRB and a basic linear estimator. These simulations demonstrate that, by post-processing the jittered samples with the proposed EM algorithm, greater jitter can be tolerated, potentially reducing on-chip ADC power consumption substantially.

analog-to-digital converters, Cramér–Rao bound, EM algorithm, jitter, maximum likelihood estimator, sampling, timing noise.

I Introduction

An analog-to-digital converter (ADC) processes a real signal $x(t)$ to generate a sequence of observations (samples) $y_n$ at times $t_n$:

$y_n = \int_{-\infty}^{\infty} x(\tau)\, h(t_n - \tau)\, d\tau + w_n,$   (1)

where $h(t)$ is the sampling prefilter and $w_n$ is an additive noise term that lumps together quantization, thermal noise, and other effects. For standard sampling applications, it is assumed that the sample times are uniformly spaced by some period $T$ (i.e., $t_n = nT$), where the period is small enough that the total bandwidth of $x(t)$ is less than the sampling rate $1/T$. Jitter $z_n$, also known as timing noise, perturbs the sample times:

$t_n = nT + z_n.$   (2)

This paper focuses on the mitigation of random jitter in a non-Bayesian estimation framework. A simplified block diagram for the overall system is illustrated in Figure 1.

Fig. 1: Abstract block diagram of an ADC with off-chip post-processing. The signal $x(t)$ is filtered by the sampling prefilter and sampled at time $t_n = nT + z_n$. These samples are corrupted by additive noise $w_n$ to yield $y_n$. The post-processor estimates the parameters of $x(t)$ using the vector of samples $y$ from the ADC.

The generally-accepted practice is to design clocks with low enough phase noise that the effect of jitter is negligible. The maximum allowable jitter is set such that the effect of jitter on a sinusoid of maximum frequency and maximum amplitude is at most one-half the least significant bit level [1]. While making jitter negligible obviates the mitigation of jitter, this may not be possible or desirable from an overall system design perspective, because requiring jitter to have a negligible effect may mandate high power consumption in the clock circuitry. (A model relating jitter and power is given below.) Technological trends suggest that eventually it will be worthwhile to allow nontrivial jitter and compensate through digital post-processing: the digital portions of mixed-signal systems like sensors and wireless transceivers continue to shrink, so the analog portions of such systems, including the ADC and its clock generator, dominate the size and power consumption of these chips. The ability to use more power-efficient analog circuitry would enable substantial new capabilities in diverse applications like implantable medical devices and remote sensors. One motivation of our study is to contribute to understanding the trade-off between accuracy in analog circuitry vs. complexity of off-chip digital post-processing of samples.

The power consumed by a typical ADC design is approximately proportional to the desired accuracy and sampling rate [2], so lower-power circuitry would produce clock signals with more jitter. Specifically, it is shown in [3] that

$P_{\mathrm{clock}} \propto \frac{1}{\sigma_z^2}.$   (3)

Furthermore, the analyses in [4] and [5] suggest that in the large-jitter domain, every doubling of the standard deviation of the jitter reduces the effective number of bits (ENOB) by one. Thus, pre-compensating for the expected jitter in the design requires increasing the power consumption of the ADC by a factor of four for every doubling of the jitter standard deviation (e.g., by adding an additional level of comparators). In this paper, we instead propose block post-processing the jittered samples using classical estimation techniques off-chip. In addition to mitigating random jitter, this work may also be adapted to compensate for frequency-modulated and spread-spectrum clocks, which produce lower EMI and radiation [6]. Note that this block post-processing method is intended to be performed off-chip, where power consumption of an implementation of the algorithm is not important.

I-A Problem Formulation

While more sophisticated signal models may be more appropriate for some applications, we concern ourselves with a signal $x(t)$ that lies in the span of a finite basis $\{h_k(t)\}_{k=0}^{K-1}$, with $K$ basis functions. We further restrict the basis to be uniform shifts of a single smooth bandlimited function $h(t)$:

$h_k(t) = h(t - kT_h), \quad k = 0, \ldots, K-1.$   (4)

Denote the unknown weighting parameters $x = [x_0, \ldots, x_{K-1}]^T$; there are $K$ of them. The signal then equals

$x(t) = \sum_{k=0}^{K-1} x_k\, h(t - kT_h).$   (5)

Without loss of generality, assume that $T_h$ equals the critical sampling period of $h(t)$, and assume this period is unity. In this work, the signal parameters $x$ are unknown deterministic quantities.

While there are many possible choices of $h(t)$, when in need of a specific example in this work, we choose the sinc interpolating function $h(t) = \mathrm{sinc}(t) = \sin(\pi t)/(\pi t)$. This basis satisfies $\langle h_j, h_k \rangle = \delta_{j-k}$ for all $j$ and $k$. In general, we choose $h(t)$ to be appropriate for the class of input signals we wish to sample; we choose the sinc function because bandlimitedness is a common assumption in the context of signal processing. It is sufficient, but not necessary, in this work that $h(t)$ be analytic and bounded.

When sampling the signal $x(t)$, we will assume that the sampling prefilter is an ideal anti-aliasing filter whose bandwidth matches the unit critical sampling period, so the prefilter leaves appropriately bandlimited inputs unchanged. The signal's critical sampling period is assumed to be one, but to accommodate oversampling by a factor of $M$ into our model, the ideal sample times are spaced $1/M$ time units apart. We acquire $N$ jittered samples with additive noise, $y_0, \ldots, y_{N-1}$, at this rate:

$y_n = x\!\left(\frac{n}{M} + z_n\right) + w_n, \quad n = 0, \ldots, N-1.$   (6)

In this paper, we will assume that the jitter $z_n$ and additive noise $w_n$ are iid zero-mean Gaussian, with known variances equal to $\sigma_z^2$ and $\sigma_w^2$, respectively. We assume that these variances can be measured reasonably accurately through in-factory calibration, although we expect the variances to vary naturally over time due to environmental effects.

Combining the signal and observation models yields

$y_n = \sum_{k=0}^{K-1} x_k\, h\!\left(\frac{n}{M} + z_n - k\right) + w_n.$   (7)

This relationship can be expressed as a semilinear system of equations:

$y = H(z)\, x + w, \qquad [H(z)]_{n,k} = h\!\left(\frac{n}{M} + z_n - k\right),$   (8)

where $y = [y_0, \ldots, y_{N-1}]^T$, $x = [x_0, \ldots, x_{K-1}]^T$, $z = [z_0, \ldots, z_{N-1}]^T$, and $w = [w_0, \ldots, w_{N-1}]^T$. For notational convenience, denote the $n$th (zero-indexed) row of $H(z)$ by $h_n(z_n)^T$.
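As a concrete illustration of (8), the following sketch (in Python/NumPy; the paper's own simulations use MATLAB) draws jitter and noise and forms $y = H(z)x + w$ for the sinc basis of Section I-A. The helper name basis_matrix and all numerical values are ours, chosen only for illustration.

```python
# A sketch of the observation model (8) with the sinc basis: y = H(z) x + w,
# where [H(z)]_{n,k} = h(n/M + z_n - k). Parameter values are illustrative only.
import numpy as np

def basis_matrix(z, K, M, h=np.sinc):
    """Stack the rows h_n(z_n)^T: the basis evaluated at the jittered sample times."""
    t = np.arange(len(z)) / M + z          # jittered sample times (ideal spacing 1/M)
    k = np.arange(K)
    return h(t[:, None] - k[None, :])      # shape (N, K)

rng = np.random.default_rng(0)
K, M = 10, 4                               # number of parameters, oversampling factor
N = M * K                                  # number of samples
sigma_z, sigma_w = 0.25, 0.05              # jitter and additive-noise standard deviations
x = rng.standard_normal(K)                 # unknown nonrandom parameters (fixed here)
z = sigma_z * rng.standard_normal(N)       # iid Gaussian jitter
w = sigma_w * rng.standard_normal(N)       # iid Gaussian additive noise
y = basis_matrix(z, K, M) @ x + w          # observed samples
```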

To keep notation compact, denote the probability density function (pdf) of $z$ by $p(z)$, the pdf of $y$ parameterized by the nonrandom vector $x$ by $p(y; x)$, and the pdf of $y$ conditioned on the random variable $z$ by $p(y \mid z)$. The pdf is made explicit using subscripts only when necessary to avoid ambiguity. Expectations are written similarly. Also, denote the $d$-dimensional multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$, evaluated at $t$, by

$\mathcal{N}(t; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(t - \mu)^T \Sigma^{-1} (t - \mu)\right).$   (9)

The primary objective of classical (non-Bayesian) estimation is to derive an estimator that minimizes a desired cost function of the unknown nonrandom parameters $x$. One such cost function is the mean squared error (MSE):

$\mathrm{MSE}(\hat{x}) = \mathrm{E}_y\!\left[\|\hat{x}(y) - x\|_2^2\right],$   (10)

where $\mathrm{E}_y[\cdot]$ is the expectation with respect to $p(y; x)$, $\hat{x}(y)$ is the estimate of the unknown parameters based on the samples $y$, and the observations $y$ are implicitly a function of $x$. However, except in certain cases, computing the estimate that minimizes this cost function, which is called the minimum MSE (MMSE) estimator, is impossible without prior knowledge about $x$. Instead, it is essential to derive an estimator that relies only on the observation model and the actual observations $y$. One such estimator is the maximum likelihood (ML) estimator, which maximizes the likelihood function $p(y; x)$. The likelihood function corresponding to the signal parameter observation model in (8) is

$p(y; x) = \int_{-\infty}^{\infty} \mathcal{N}\!\left(y; H(z)x, \sigma_w^2 I\right)\, \mathcal{N}\!\left(z; 0, \sigma_z^2 I\right) dz.$   (11)

Using the assumptions that the jitter and additive noise are iid, and the fact that the $n$th row of $H(z)$ only depends on the single jitter variable $z_n$, the multivariate normal distributions in (11) are separable over $n$. Thus, $p(y; x)$ is also separable, with $p(y; x) = \prod_{n=0}^{N-1} p(y_n; x)$, and the likelihood function is the product of univariate integrals

$p(y; x) = \prod_{n=0}^{N-1} \int_{-\infty}^{\infty} \mathcal{N}\!\left(y_n; h_n(z_n)^T x, \sigma_w^2\right)\, \mathcal{N}\!\left(z_n; 0, \sigma_z^2\right) dz_n.$   (12)

Given the likelihood function, the parameters $M$, $\sigma_z^2$, and $\sigma_w^2$, the MSE cost function, and the observations $y$, the goal of this work is to use ML estimation to tolerate more jitter when estimating $x$. Thus, the bulk of this paper is concerned with the evaluation of this likelihood function and the problem of maximizing it.

I-B Related Work

The problem of mitigating jitter has been investigated since the early days of signal processing. The effects of jitter on the statistics of samples of a deterministic (nonrandom) bandlimited signal are briefly discussed in [7]; this work also is concerned with stochastic signals and proposes an optimal linear reconstruction filter for the stochastic case. Much more work on analyzing the error and reconstructing stochastic signals from jittered samples can be found in [8] and [9]. However, the analysis of jittered samples of deterministic signals appears to be much more limited in the early literature.

When the sample times are irregularly spaced, but known, the problem greatly simplifies. Efficient techniques, as well as a mention of prior work, can be found in [10]. When the sample times are unknown, but belong to a known finite set, the jitter mitigation problem becomes a combinatorial one; [11] describes geometric and algebraic solutions to this problem of reconstructing discrete-time signals. Two block-based reconstruction methods for this finite location-set problem are described in [12].

However, when the set is infinitely large, or when the jitter is described by a continuous random distribution as seen here, a different approach is necessary. One contribution of this work is an Expectation-Maximization (EM) algorithm; in a similar context, [13] develops a similar EM algorithm for the related problem of mitigating unknown phase offsets between component ADCs in a time-interleaved ADC system. Some of the results summarized in this paper are described in greater detail in [14], which also provides further background material.

I-C Outline

In Section II, numerical integration using Gauss quadrature and iteration using the EM algorithm are discussed. Section III presents and derives the Cramér–Rao lower bound (CRB) on the MSE for this estimation problem. Sections IV and V derive linear and ML estimators for the jitter mitigation problem; simulations comparing these estimators are discussed in Section VI. In conclusion, the results and contributions are summarized, and future research directions are introduced.

II Background

Except for certain limited choices of $h(t)$, the expression for the likelihood function in (12) has no closed form; however, various techniques exist to approximate it. One such powerful and general technique is quadrature, which refers to approximating an integral with a finite weighted summation. The trapezoidal and Simpson's rules are elementary examples of quadrature. In particular, due to the normal distribution assumption on the jitter $z_n$, Gauss–Hermite quadrature is a natural choice of quadrature rule. Gauss–Legendre quadrature and Romberg's method are also discussed below.

Computational problems also occur when deriving the ML estimator, due to the nonconcave and high-dimensional nature of the likelihood function. One local approximation technique called the EM algorithm can be used to locate local maxima in a computationally-feasible manner. The EM algorithm is also introduced in this section.

II-A Numerical Integration

Consider approximating the integral $\int f(t)\, dt$ using the summation $\sum_{j=1}^{J} w_j f(t_j)$, where the $t_j$ and $w_j$ are fixed abscissas (sampling locations) and weights. This type of approximation is known generally as quadrature. When the abscissas are uniformly spaced, the summation is known as interpolatory quadrature; the trapezoidal and Simpson's rules, as well as Romberg's method [15], are of this type. Gauss quadrature seeks greater accuracy for a given number of function evaluations by allowing the abscissas to be spaced nonuniformly. An appropriate choice of abscissas and weights (called a rule) can be precomputed for a choice of weighting function and $J$ using a variety of methods, including a very efficient eigenvalue-based method derived in [16]. Orthogonal polynomials satisfy a three-term recursive relationship, which is used to form a tri-diagonal matrix, whose eigenvalues are the abscissas, and whose eigenvectors yield the weights. The eigendecomposition of a tri-diagonal matrix is very efficient, so quadrature rules are very inexpensive to compute, even for very large $J$. Quadrature is particularly attractive when the integrand is smooth and has bounded derivatives. This method can be applied to multivariate integration as well, although in the absence of separability, the complexity scales exponentially with the number of variables.
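As an illustration of the eigenvalue-based construction in [16], the following minimal sketch builds a Gauss–Hermite rule whose weighting function is the standard normal pdf; the function name and the sanity check are ours, and the recurrence coefficients are those of the (probabilists') Hermite polynomials.

```python
# A minimal Golub-Welsch sketch: the Gauss quadrature rule follows from the
# eigendecomposition of the tridiagonal Jacobi matrix of the orthogonal-polynomial
# recurrence. Here the weighting function is the standard normal pdf, so the
# recurrence is that of the (probabilists') Hermite polynomials.
import numpy as np

def gauss_hermite_rule(J):
    off = np.sqrt(np.arange(1, J))             # off-diagonal entries sqrt(1),...,sqrt(J-1)
    jacobi = np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(jacobi)       # eigenvalues are the abscissas
    weights = vecs[0, :] ** 2                  # squared first components (mu_0 = 1 here)
    return nodes, weights

xi, w = gauss_hermite_rule(5)
# Sanity check: a J-point rule integrates polynomials up to degree 2J-1 exactly,
# e.g. E[Z^4] = 3 for Z ~ N(0,1).
print(np.dot(w, xi ** 4))                      # approximately 3.0
```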

One weighting function of particular interest in this work is the pdf of the normal distribution. For a standard normal distribution, the associated form of quadrature is known as Gauss–Hermite quadrature, since the abscissas and weights derive from Hermite polynomials. Using elementary changes of variables, this method can be generalized to normal distributions with arbitrary mean $\mu$ and variance $\sigma^2$:

$\int_{-\infty}^{\infty} f(t)\, \mathcal{N}(t; \mu, \sigma^2)\, dt \approx \sum_{j=1}^{J} w_j\, f(\mu + \sigma \xi_j),$   (13)

where $w_j$ and $\xi_j$ are the weights and abscissas for Gauss–Hermite quadrature with a standard normal weighting function. As mentioned in [17], the approximation error for Gauss–Hermite quadrature is bounded by the function's derivatives:

$|E_J| \leq \frac{J!\, \sigma^{2J}}{(2J)!}\, \max_t \left|f^{(2J)}(t)\right|.$   (14)

As long as $f$ is sufficiently smooth, the factorial term in the denominator dominates the above expression for large $J$, and the approximation error goes to zero superexponentially fast. While general conditions for convergence are difficult to isolate for arbitrary $f$, a sufficient condition for convergence mentioned in [17] is that

(15)
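The generalized rule (13) is straightforward to apply with a standard library. The sketch below uses NumPy's Hermite-e rule; the test integrand is an arbitrary example with a known closed-form expectation, not one used in the paper.

```python
# A sketch of (13): an expectation against N(mu, sigma^2) via shifted and scaled
# standard-normal Gauss-Hermite abscissas.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_hermite_expectation(f, mu, sigma, J=20):
    xi, w = hermegauss(J)              # rule for the weight exp(-t^2 / 2)
    w = w / np.sqrt(2.0 * np.pi)       # renormalize so the weight is the N(0,1) pdf
    return np.dot(w, f(mu + sigma * xi))

mu, sigma = 0.3, 0.5
# E[cos(T)] for T ~ N(mu, sigma^2) equals cos(mu) * exp(-sigma^2 / 2).
print(gauss_hermite_expectation(np.cos, mu, sigma), np.cos(mu) * np.exp(-sigma**2 / 2))
```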

Many other Gauss quadrature rules exist; one simple rule also considered here is Gauss–Legendre quadrature, which is defined for integrating over a finite interval $[a, b]$ with a constant weighting function:

$\int_a^b f(t)\, dt \approx \frac{b - a}{2} \sum_{j=1}^{J} w_j\, f\!\left(\frac{b - a}{2}\, u_j + \frac{a + b}{2}\right),$   (16)

where $u_j$ and $w_j$ are the Gauss–Legendre abscissas and weights on $[-1, 1]$.

The abscissas and weights for Gauss–Legendre quadrature can be computed using the eigenvalue-based method mentioned above. Gauss–Legendre quadrature and other rules defined over a finite interval, including interpolatory quadrature methods like Simpson's rule and Romberg's method, can be extended to the infinite support case by re-mapping the variable of integration, for example via $t = u/(1 - u^2)$:

$\int_{-\infty}^{\infty} f(t)\, dt = \int_{-1}^{1} f\!\left(\frac{u}{1 - u^2}\right) \frac{1 + u^2}{(1 - u^2)^2}\, du.$   (17)

When applied to the Gauss–Legendre quadrature rule, the new rule becomes

$\int_{-\infty}^{\infty} f(t)\, dt \approx \sum_{j=1}^{J} \tilde{w}_j\, f(\tilde{u}_j),$   (18)

where $\tilde{u}_j = \frac{u_j}{1 - u_j^2}$, and $\tilde{w}_j = w_j\, \frac{1 + u_j^2}{(1 - u_j^2)^2}$.
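A minimal sketch of this change of variables, assuming the $t = u/(1 - u^2)$ mapping written above (other mappings are possible); the test integrand is ours.

```python
# A sketch of (17)-(18): a Gauss-Legendre rule on (-1, 1) converted into a rule
# over the whole real line. The test integrand (a standard normal pdf, which
# integrates to 1) is only an example.
import numpy as np

def remapped_legendre_rule(J):
    u, w = np.polynomial.legendre.leggauss(J)   # rule for [-1, 1], unit weight
    t = u / (1.0 - u**2)                        # remapped abscissas
    w_t = w * (1.0 + u**2) / (1.0 - u**2)**2    # remapped weights (Jacobian factor)
    return t, w_t

t, w = remapped_legendre_rule(51)
pdf = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
print(np.dot(w, pdf))                            # approximately 1.0
```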

To compare the effectiveness of these different quadrature-based methods for numerical integration, Gauss–Hermite quadrature and Gauss–Legendre quadrature are contrasted against two more general methods, Simpson's rule and Romberg's method, by comparing each method's approximation of the marginal likelihood function $p(y_n; x)$, for a fixed, but randomly chosen, value of $x$. The reference marginal likelihood function is calculated from the empirical distribution of samples generated by the observation model in (7). As shown in Figure 2, Gauss–Legendre quadrature approximates the likelihood function well when $\sigma_z$ is relatively large, but when $\sigma_z$ and $\sigma_w$ are both small, Gauss–Hermite quadrature is much more effective. However, other quadrature rules may be more accurate for different choices of signal basis functions $h_k(t)$.

Fig. 2: The quadrature approximation to $p(y_n; x)$ is computed for a fixed, but randomly chosen, $x$ on a dense grid of $y_n$ and is compared against a histogram of samples generated using (7). Two cases are shown to illustrate the approximation quality of (a) Gauss–Legendre quadrature and (b) Gauss–Hermite quadrature; the worst-case $n$ is chosen for each of these approximations. Note: the number of quadrature terms is chosen to be compatible with Romberg's method, which evaluates the function $2^j + 1$ times for $j$ iterations.

II-B EM Algorithm

The EM algorithm was introduced in [18]; a classic application of this algorithm is ML estimation in the presence of incomplete data. Consider the problem of maximizing the likelihood function $p(y; x)$, where $y$ depends on some latent random variables $z$. The observations $y$ are described as the incomplete data. We augment this incomplete data with some subset of the latent (hidden) variables to form the complete data. The underlying assumption of the EM algorithm is that knowledge of the complete data makes the ML estimation problem easier to solve.

The EM algorithm consists of repeatedly maximizing the function

$Q(x; \hat{x}^{(l)}) = \mathrm{E}_{z \mid y; \hat{x}^{(l)}}\!\left[\log p(y, z; x)\right]$   (19)

with respect to the desired parameters $x$; the maximizing value becomes $\hat{x}^{(l+1)}$, which is used in the next iteration. As long as the original likelihood function is bounded above, and some other mild conditions are satisfied, this algorithm is guaranteed to converge to a local maximum of the likelihood function [18].

Much has been written about the convergence rate of EM algorithms. In [19], the rate of convergence of the EM algorithm is related to the difference between the CRB using the incomplete data and the CRB using the complete data (incomplete data plus latent variables). The supplemented EM algorithm in [20] also obtains Fisher information estimates, conditioned on the observations $y$, which can be used to evaluate the quality of the resulting approximation to the ML estimate.

Since the likelihood function in (11) is not in general strictly concave, the presence of many critical points is a potential problem for any local algorithm. Simulated annealing [21] and other methods can be combined with the EM algorithm to improve robustness to getting trapped in local extrema.

III Approximating the CRB

Minimizing the MSE without access to prior information about the unknown parameters may be impossible in the general case. However, even in situations when the MMSE estimator cannot be computed, the Cramér–Rao lower bound on the minimum MSE achievable by an unbiased estimator may be straightforward to compute. The bound $\mathrm{CRB}(x)$, where $x$ is the variable to estimate from observations $y$, is defined to be the trace of the inverse of the Fisher information matrix $\mathcal{I}_y(x)$, which is defined as

$\mathcal{I}_y(x) = \mathrm{E}\!\left[\left(\nabla_x \log p(y; x)\right)\left(\nabla_x \log p(y; x)\right)^T\right].$   (20)

Assuming the likelihood function satisfies the regularity condition

$\mathrm{E}\!\left[\nabla_x \log p(y; x)\right] = 0,$   (21)

the Fisher information matrix can be expressed in terms of the Hessian of the log-likelihood:

$\mathcal{I}_y(x) = \mathrm{E}\!\left[\nabla_x \log p(y; x)\, \nabla_x \log p(y; x)^T\right] = -\,\mathrm{E}\!\left[\nabla_x^2 \log p(y; x)\right].$   (26)

A sufficient condition for the regularity condition in (21) is that differentiation and integration interchange, since then

$\mathrm{E}\!\left[\nabla_x \log p(y; x)\right] = \int \nabla_x p(y; x)\, dy = \nabla_x \int p(y; x)\, dy = \nabla_x 1 = 0.$   (27)

By basic analysis, uniform convergence of the integral in the likelihood function in (11) implies that the above regularity condition holds.

Since the likelihood function is separable, the log-likelihood function can be expressed as the summation of marginal log-likelihood functions; i.e.

$\log p(y; x) = \sum_{n=0}^{N-1} \log p(y_n; x),$   (28)

which means that (26) can be rewritten as (again, assuming regularity conditions are satisfied)

$\mathcal{I}_y(x) = -\sum_{n=0}^{N-1} \mathrm{E}\!\left[\nabla_x^2 \log p(y_n; x)\right].$   (29)

The marginal pdf can be computed numerically using quadrature:

$p(y_n; x) \approx \sum_{j=1}^{J} w_j\, \mathcal{N}\!\left(y_n;\, h_n(z_j)^T x,\, \sigma_w^2\right),$   (30)

where the $z_j$ and $w_j$ are the abscissas and weights for the chosen quadrature rule, with the jitter density $\mathcal{N}(z; 0, \sigma_z^2)$ folded into the rule. As depicted in Figure 2, Gauss–Hermite quadrature is a good choice for small $\sigma_z$, and Gauss–Legendre quadrature is more accurate for larger $\sigma_z$. For all simulations in this paper, the same quadrature order is used unless otherwise specified.
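A sketch of (30) under the sinc-basis model, with the Gauss–Hermite rule absorbing the jitter density; the helper name and the quadrature order are illustrative choices of ours.

```python
# A sketch of (30): the marginal likelihood p(y_n; x) as a finite Gaussian mixture
# over Gauss-Hermite abscissas of the jitter density, for the sinc basis.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def marginal_pdf(y_n, n, x, M, sigma_z, sigma_w, J=65):
    xi, w = hermegauss(J)
    w = w / np.sqrt(2 * np.pi)                 # weights for the N(0,1) pdf weighting
    k = np.arange(len(x))
    means = np.sinc(n / M + sigma_z * xi[:, None] - k) @ x   # h_n(sigma_z xi_j)^T x
    return np.dot(w, np.exp(-(y_n - means)**2 / (2 * sigma_w**2))
                  / np.sqrt(2 * np.pi * sigma_w**2))
```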

The derivative of the marginal pdf can be approximated similarly:

$\nabla_x\, p(y_n; x) \approx \sum_{j=1}^{J} w_j\, \mathcal{N}\!\left(y_n;\, h_n(z_j)^T x,\, \sigma_w^2\right) \frac{y_n - h_n(z_j)^T x}{\sigma_w^2}\, h_n(z_j).$   (31)

Since $\nabla_x^2 \log p(y_n; x)$ is equal to $\frac{\nabla_x^2 p(y_n; x)}{p(y_n; x)} - \frac{\nabla_x p(y_n; x)\, \nabla_x p(y_n; x)^T}{p(y_n; x)^2}$, combining (30) and (31) yields a complicated approximation to the expression inside the expectation in (29):

$\frac{\widehat{\nabla_x^2 p}(y_n; x)}{\hat{p}(y_n; x)} - \frac{\widehat{\nabla_x p}(y_n; x)\, \widehat{\nabla_x p}(y_n; x)^T}{\hat{p}(y_n; x)^2},$   (32)

where the hats denote the quadrature approximations of (30) and (31), and the second derivative $\nabla_x^2 p(y_n; x)$ is approximated by a quadrature sum analogous to (31). For convenience, we refer to the above approximation as $\widehat{\nabla_x^2 \log p}(y_n; x)$.

Now, consider computing the Fisher information matrix from this approximation:

$\mathcal{I}_y(x) \approx -\sum_{n=0}^{N-1} \mathrm{E}\!\left[\widehat{\nabla_x^2 \log p}(y_n; x)\right].$   (33)

To compute this expectation, a numerical method is needed again. The expectation is with respect to the distribution $p(y_n; x)$, which is approximated in (30) with a Gaussian mixture, so Monte Carlo sampling is a convenient method to approximate this expectation. Generating samples $y_n^{(1)}, \ldots, y_n^{(S)}$ from the Gaussian mixture and averaging the corresponding function values $\widehat{\nabla_x^2 \log p}(y_n^{(s)}; x)$, the Fisher information matrix can be computed as

$\mathcal{I}_y(x) \approx -\sum_{n=0}^{N-1} \frac{1}{S} \sum_{s=1}^{S} \widehat{\nabla_x^2 \log p}\!\left(y_n^{(s)}; x\right).$   (34)

Once this matrix is computed and inverted, the trace gives the Cramér–Rao lower bound for that choice of parameter $x$. Due to the matrix inversion, to ensure an accurate CRB estimate, we use a larger number of quadrature terms for the quadratures in (32).
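A hedged sketch of the CRB computation: the marginal pdf and its gradient are approximated by quadrature as in (30)-(31), and the expectation is taken by Monte Carlo over samples drawn from the Gaussian-mixture approximation. For brevity this sketch uses the outer-product form (20) of the Fisher information rather than the Hessian-based expressions (32)-(34) in the text; all helper names and parameter values are ours.

```python
# Approximate Fisher information and CRB for the jittered-sampling model, using
# quadrature for p(y_n; x) and its gradient and Monte Carlo for the expectation.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def fisher_information(x, M, sigma_z, sigma_w, n_mc=2000, J=65, seed=0):
    rng = np.random.default_rng(seed)
    K = len(x)
    N = M * K
    xi, w = hermegauss(J)
    w = w / np.sqrt(2 * np.pi)
    k = np.arange(K)
    info = np.zeros((K, K))
    for n in range(N):
        Hn = np.sinc(n / M + sigma_z * xi[:, None] - k)    # rows h_n(sigma_z xi_j)^T
        mu = Hn @ x                                        # mixture component means
        comp = rng.choice(J, size=n_mc, p=w / w.sum())     # draw mixture components
        y = mu[comp] + sigma_w * rng.standard_normal(n_mc) # Monte Carlo samples of y_n
        g = np.exp(-(y[:, None] - mu)**2 / (2 * sigma_w**2)) / np.sqrt(2 * np.pi * sigma_w**2)
        p = g @ w                                          # quadrature approx. of p(y_n; x)
        grad = (g * w * (y[:, None] - mu) / sigma_w**2) @ Hn   # approx. of grad_x p(y_n; x)
        score = grad / p[:, None]
        info += score.T @ score / n_mc                     # E[score score^T] for sample n
    return info

x = np.random.default_rng(1).standard_normal(8)
crb = np.trace(np.linalg.inv(fisher_information(x, M=4, sigma_z=0.25, sigma_w=0.1)))
```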

How much does the CRB decrease when $z$ is assumed given? Comparing the $\mathrm{CRB}$ of the incomplete data against that of the jitter-augmented (complete) data will be important later when analyzing the EM algorithm design. The Fisher information matrix in this case is equal to

$\mathcal{I}_{y,z}(x) = -\,\mathrm{E}\!\left[\nabla_x^2 \log p(y, z; x)\right].$   (35)

Of course, since

$\log p(y_n, z_n; x) = -\frac{\left(y_n - h_n(z_n)^T x\right)^2}{2\sigma_w^2} - \frac{z_n^2}{2\sigma_z^2} - \log\!\left(2\pi \sigma_w \sigma_z\right),$   (36)

and the Hessian matrix with respect to $x$ is

$\nabla_x^2 \log p(y_n, z_n; x) = -\frac{1}{\sigma_w^2}\, h_n(z_n)\, h_n(z_n)^T,$   (37)

the jitter-augmented Fisher information matrix is

$\mathcal{I}_{y,z}(x) = \frac{1}{\sigma_w^2} \sum_{n=0}^{N-1} \mathrm{E}\!\left[h_n(z_n)\, h_n(z_n)^T\right],$   (38)

where the expectation can be approximated numerically using quadrature or Monte Carlo approximation. The jitter-augmented $\mathrm{CRB}$ is the trace of the inverse of this matrix. We will return to the question of the difference between the two CRBs later in Section VI, after we discuss ML estimation using an EM algorithm.

IV Linear Estimation

In this paper, an estimator is said to be linear if it is a linear function of the observations; such an estimator has the form

$\hat{x}_{\mathrm{linear}}(y) = B y,$   (39)

where the matrix $B$ is fixed. (Sometimes, affine estimators of the form $By + c$ are considered to be linear as well; however, as we will concern ourselves with unbiased estimators, the offset $c$ would turn out to be necessarily zero.)

For the semilinear observation model in (8), a linear estimator is unbiased if and only if $B\, \mathrm{E}_z[H(z)] = I$. Since $\mathrm{E}_z[H(z)]$ is a tall matrix, assuming it has full column rank, one possible linear unbiased estimator is

$\hat{x}_{\mathrm{ub}}(y) = \left(\mathrm{E}_z[H(z)]\right)^{\dagger}\, y,$   (40)

where $A^{\dagger} = (A^T A)^{-1} A^T$ is the left pseudoinverse. This estimator is only one such linear unbiased estimator; more generally, any matrix $D$ satisfying $D\, \mathrm{E}_z[H(z)] = 0$ can be added to the pseudoinverse and yield an unbiased estimator.
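A sketch of the linear unbiased estimator (40): the left pseudoinverse of the jitter-averaged basis matrix $\mathrm{E}_z[H(z)]$, with the expectation evaluated by Gauss–Hermite quadrature. Names follow the earlier illustrative snippets and are not the paper's.

```python
# Linear unbiased estimation per (40), with E_z[H(z)] computed by quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def mean_basis_matrix(N, K, M, sigma_z, J=65):
    xi, w = hermegauss(J)
    w = w / np.sqrt(2 * np.pi)
    n = np.arange(N)[:, None, None]
    k = np.arange(K)[None, :, None]
    zj = (sigma_z * xi)[None, None, :]
    return np.sinc(n / M + zj - k) @ w          # E_z[h(n/M + z - k)], shape (N, K)

def linear_unbiased_estimate(y, K, M, sigma_z):
    EH = mean_basis_matrix(len(y), K, M, sigma_z)
    return np.linalg.pinv(EH) @ y               # left pseudoinverse of the tall matrix

# The no-jitter baseline (41) instead uses H(0), i.e. sigma_z = 0:
# x_nj = np.linalg.pinv(mean_basis_matrix(len(y), K, M, 0.0)) @ y
```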

The question then remains of how to obtain the best linear unbiased estimator (BLUE), in the MMSE sense. In the context of a simple linear observation model $y = Hx + w$ with Gaussian noise $w$, the BLUE is elementary to find (see [22]), and it is also the ML and efficient minimum variance unbiased (MVU) estimator. If we choose the jitter to be deterministically zero (no jitter) in the observation model, the corresponding BLUE/efficient estimator would be

$\hat{x}_{\mathrm{nj}}(y) = \left(H(0)^T H(0)\right)^{-1} H(0)^T\, y.$   (41)

The performance of this estimator when applied to the proper (jittered) observation model will be used as one baseline for MSE improvement for the proposed estimators.

As derived previously in [14], the BLUE for the semilinear model (8) is

$\hat{x}_{\mathrm{BLUE}}(y) = \left(\mathrm{E}[H(z)]^T\, \Lambda(x)^{-1}\, \mathrm{E}[H(z)]\right)^{-1} \mathrm{E}[H(z)]^T\, \Lambda(x)^{-1}\, y,$   (42)

where the covariance matrix of the data depends on the value of the parameters:

$\Lambda(x) = \mathrm{cov}\!\left(H(z)\, x\right) + \sigma_w^2 I.$   (43)

The BLUE, in general, is not a valid estimator, since it depends on the true value of the unknown $x$. Two sufficient conditions for the estimator to be valid are that $\Lambda(x)$ is a scalar matrix, in which case the covariance matrix commutes across multiplication, or that $\Lambda(x)$ does not depend on $x$. Since $z_m$ and $z_n$ are independent for $m \neq n$, the off-diagonal elements of $\Lambda(x)$ are zero. For the covariance matrix to be a scalar matrix that commutes over matrix multiplication for all $x$, the diagonal entries must be equal for all $n$. However, this equality generally does not hold due to oversampling. Also, the covariance matrix clearly depends on $x$ when the jitter-induced variance of $h_n(z_n)^T x$ is nonzero for some $n$. When the covariance matrix is a scalar matrix, the BLUE is equal to the estimator in (40).

Fig. 3: The best linear unbiased estimator (BLUE) is computed for different choices of $x$ (holding the observations $y$ fixed), using (42). In this example, the remaining model parameters are held fixed.

To conclusively demonstrate that the BLUE is not a valid estimator, the estimator is computed for a fixed value of $y$ and varying $x$; the results are shown in Figure 3. Clearly, since the estimates of $x$ vary depending on the value of $x$ used in (42), the estimator is not valid. Thus, an MSE-optimal linear estimator does not exist for this problem, and we will utilize the estimator in (40).
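The dependence of (42)-(43) on the unknown $x$ is easy to reproduce numerically. The sketch below follows our reconstruction of those expressions: the diagonal covariance $\Lambda(x)$ is evaluated by Gauss–Hermite quadrature, and plugging different trial values of $x$ into the same data $y$ yields different estimates. Helper names are ours.

```python
# Gauss-Markov form of the BLUE for the semilinear model, per our reading of
# (42)-(43); the variance of h_n(z_n)^T x is computed by quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def blue_estimate(y, x_trial, M, sigma_z, sigma_w, J=65):
    N, K = len(y), len(x_trial)
    xi, w = hermegauss(J)
    w = w / np.sqrt(2 * np.pi)
    k = np.arange(K)
    EH = np.zeros((N, K))
    var_n = np.zeros(N)
    for n in range(N):
        Hn = np.sinc(n / M + sigma_z * xi[:, None] - k)   # h_n at each abscissa
        EH[n] = w @ Hn                                     # E_z[h_n(z_n)^T]
        m = Hn @ x_trial                                   # h_n(z_n)^T x at abscissas
        var_n[n] = w @ m**2 - (w @ m)**2                   # variance of h_n(z_n)^T x
    lam_inv = 1.0 / (var_n + sigma_w**2)                   # diagonal of Lambda(x)^{-1}
    A = EH.T * lam_inv                                     # E[H]^T Lambda(x)^{-1}
    return np.linalg.solve(A @ EH, A @ y)
```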

V ML Estimation

Given a semilinear model as in (8), we would not expect the optimal MMSE estimator to have a linear form. Indeed, as shown in the previous section, for most signal models and priors on the jitter, the BLUE does not even exist. To improve upon linear estimation, and reduce the MSE, we move to maximum likelihood estimation.

Consider the problem of maximizing the likelihood function in (12); since the logarithm is an increasing function, we can perform the optimization by maximizing the log-likelihood:

$\hat{x}_{\mathrm{ML}}(y) = \arg\max_x\, \sum_{n=0}^{N-1} \log p(y_n; x).$   (44)

However, since the marginal pdf $p(y_n; x)$ does not have a closed form, and neither do its derivatives, performing the necessary optimization is difficult. Numerical techniques may be applied directly to (44), and various general-purpose methods have been studied extensively throughout the literature. An iterative joint maximization method proposed in [23] attempts to approximate the ML estimate by alternating between maximizing with respect to $x$ and with respect to $z$. One method that explicitly takes advantage of the special structure in (44) is the EM algorithm.

V-A ML Estimation using the EM Algorithm

Consider the function $Q(x; \hat{x}^{(l)})$ in (19). The expression for $\log p(y_n, z_n; x)$ is in (36), and summing these terms (without the minus signs) gives

$-\log p(y, z; x) = \sum_{n=0}^{N-1} \left[\frac{\left(y_n - h_n(z_n)^T x\right)^2}{2\sigma_w^2} + \frac{z_n^2}{2\sigma_z^2}\right] + \mathrm{const}.$   (45)

Expanding and substituting into the expectation in (19) yields

$Q(x; \hat{x}^{(l)}) = -\frac{1}{2\sigma_w^2}\left(x^T\, \mathrm{E}\!\left[H(z)^T H(z) \mid y; \hat{x}^{(l)}\right] x - 2\, y^T\, \mathrm{E}\!\left[H(z) \mid y; \hat{x}^{(l)}\right] x\right) + \mathrm{const}.$   (46)

We want to find the value of $x$ that maximizes this expression. Noticing that (46) is quadratic in $x$, the candidate value satisfies the linear system

$\mathrm{E}\!\left[H(z)^T H(z) \mid y; \hat{x}^{(l)}\right]\, \hat{x}^{(l+1)} = \mathrm{E}\!\left[H(z) \mid y; \hat{x}^{(l)}\right]^T y.$   (47)

Also, the Hessian matrix of (46) is negative-definite, so (46) is strictly concave, and the candidate point is the unique maximum $\hat{x}^{(l+1)}$. All that remains to specify the EM algorithm is to approximate the expectations in (47).

Using Bayes' rule and the separability of both $p(y \mid z; x)$ and $p(z)$, the posterior distribution of the jitter is also separable:

$p(z \mid y; x) = \prod_{n=0}^{N-1} p(z_n \mid y_n; x),$   (48)
$p(z_n \mid y_n; x) = \frac{\mathcal{N}\!\left(y_n; h_n(z_n)^T x, \sigma_w^2\right)\, \mathcal{N}\!\left(z_n; 0, \sigma_z^2\right)}{p(y_n; x)}.$   (49)

Thus, the expectations are also separable into univariate expectations:

$\mathrm{E}\!\left[H(z)^T H(z) \mid y; \hat{x}^{(l)}\right] = \sum_{n=0}^{N-1} \mathrm{E}\!\left[h_n(z_n)\, h_n(z_n)^T \mid y_n; \hat{x}^{(l)}\right],$   (50)
$\mathrm{E}\!\left[H(z) \mid y; \hat{x}^{(l)}\right]_n = \mathrm{E}\!\left[h_n(z_n)^T \mid y_n; \hat{x}^{(l)}\right].$   (51)

The subscript after the left-side expectation in (51) denotes the $n$th (zero-indexed) row of the matrix. The distribution $p(y_n; \hat{x}^{(l)})$ in the denominator of (49) is constant with respect to $z_n$, and can be evaluated using quadrature, as in (30). Approximating each of the univariate expectations in (50) and (51) with quadrature yields

$\mathrm{E}\!\left[h_n(z_n)\, h_n(z_n)^T \mid y_n; \hat{x}^{(l)}\right] \approx \frac{\sum_{j=1}^{J} w_j\, \mathcal{N}\!\left(y_n; h_n(z_j)^T \hat{x}^{(l)}, \sigma_w^2\right)\, h_n(z_j)\, h_n(z_j)^T}{p(y_n; \hat{x}^{(l)})},$   (52)
$\mathrm{E}\!\left[h_n(z_n)^T \mid y_n; \hat{x}^{(l)}\right] \approx \frac{\sum_{j=1}^{J} w_j\, \mathcal{N}\!\left(y_n; h_n(z_j)^T \hat{x}^{(l)}, \sigma_w^2\right)\, h_n(z_j)^T}{p(y_n; \hat{x}^{(l)})}.$   (53)

The complexity of each iteration of this algorithm appears to be linear in the number of samples, although the rate of convergence (and thus, the number of iterations required) may also vary with the number of samples, or other factors. The convergence rate, as well as susceptibility to initial conditions (since the EM algorithm only guarantees local convergence), are the subject of simulations in this work and in [14].

The EM algorithm for ML estimation is summarized in Algorithm 1.

0:  $y$, $\sigma_z$, $\sigma_w$, $M$, $K$, initial estimate $\hat{x}^{(0)}$, tolerances $\epsilon_x$, $\epsilon_L$
  $l \leftarrow 0$
  Compute a $J$-term quadrature rule (using e.g., the eigendecomposition method in [16]) for use in the approximations below.
  repeat
     $l \leftarrow l + 1$
     for $n = 0$ to $N - 1$ do
        Approximate $p(y_n; \hat{x}^{(l-1)})$ using (30).
        Compute $\mathrm{E}[h_n(z_n)\, h_n(z_n)^T \mid y_n; \hat{x}^{(l-1)}]$ using (52).
        Approximate $\mathrm{E}[h_n(z_n)^T \mid y_n; \hat{x}^{(l-1)}]$ using (53).
     end for
     Solve for $\hat{x}^{(l)}$ using (47) and the above approximations.
  until  $\|\hat{x}^{(l)} - \hat{x}^{(l-1)}\| < \epsilon_x$ or the change in log-likelihood is less than $\epsilon_L$ or the maximum number of iterations is reached
  return  $\hat{x}^{(l)}$
Algorithm 1 Pseudocode for the EM algorithm for computing ML estimates of the unknown signal parameters $x$.
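The sketch below implements the inner loop of Algorithm 1 in NumPy, following our reconstruction of (47) and (52)-(53) and re-using the illustrative sinc-basis conventions from the earlier snippets; the iteration limit, tolerance, and helper names are ours.

```python
# One EM iteration and a simple outer loop for the jittered-sampling model.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def em_iteration(y, x_hat, M, sigma_z, sigma_w, J=65):
    K, N = len(x_hat), len(y)
    xi, w = hermegauss(J)
    w = w / np.sqrt(2 * np.pi)
    k = np.arange(K)
    A = np.zeros((K, K))
    b = np.zeros(K)
    for n in range(N):
        Hn = np.sinc(n / M + sigma_z * xi[:, None] - k)    # rows h_n(sigma_z xi_j)^T
        lik = np.exp(-(y[n] - Hn @ x_hat)**2 / (2 * sigma_w**2)) \
              / np.sqrt(2 * np.pi * sigma_w**2)            # N(y_n; h_nj^T x_hat, sigma_w^2)
        post = w * lik
        post /= post.sum()                                  # posterior weights over abscissas
        A += Hn.T @ (post[:, None] * Hn)                    # E[h_n h_n^T | y_n; x_hat]
        b += (post @ Hn) * y[n]                             # E[h_n | y_n; x_hat] * y_n
    return np.linalg.solve(A, b)                            # new estimate from (47)

def em_estimate(y, x_init, M, sigma_z, sigma_w, iters=25, tol=1e-6):
    x_hat = x_init.copy()
    for _ in range(iters):
        x_new = em_iteration(y, x_hat, M, sigma_z, sigma_w)
        if np.linalg.norm(x_new - x_hat) < tol:
            return x_new
        x_hat = x_new
    return x_hat
```

In practice the no-jitter linear estimate (41) is a natural choice for x_init, consistent with the initialization study in Section VI-B.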

VI Simulation Results

The objectives of the simulations presented here are to (a) analyze the behavior of the proposed EM algorithm for approximating the ML estimator, and (b) compare the performance of this estimator to both the Cramér–Rao bound and that of linear parameter estimation. The convergence behavior is studied in detail in [14] for periodic bandlimited signals. In this work, experiments to determine convergence behavior and sensitivity to initial conditions are conducted for the sinc basis signal model described in Section I. In all experiments, we utilize MATLAB to generate signals with pseudo-random parameters and noise and apply the algorithms described to the samples of these signals. For a factor of $M$ oversampling, we generate $N = MK$ samples.

VI-A Convergence Analysis

Fig. 4: The convergence of the EM algorithm depends on the choice of the parameters $M$, $\sigma_z$, and $\sigma_w$, as demonstrated in panels (a)-(c).

While guaranteed to converge, the EM algorithm would be of little use if it did not converge quickly. The rate of convergence of the EM algorithm is studied for several choices of $M$, $\sigma_z$, and $\sigma_w$, and trends are presented in Figure 4. The rate of convergence is exponential, and the rate decreases with increasing $M$, increasing $\sigma_z$, and decreasing $\sigma_w$.

As mentioned in Section II, the rate of convergence of the EM algorithm is related to the difference between the CRBs of the complete and the incomplete data. As shown later in Figure 6, the difference between the CRBs for the complete data and incomplete data increases exponentially with $\sigma_z$. This relationship coincides with the convergence behavior observed in Figure 4(b). Although these experiments ran the EM algorithm for a fixed large number of iterations, the results suggest that far fewer iterations would suffice as long as the jitter standard deviation is not too large. Also, fixed thresholds are chosen as reasonable stopping criteria for the change in $\hat{x}$ and the change in log-likelihood between iterations ($\epsilon_x$ and $\epsilon_L$ in Algorithm 1, respectively).

VI-B Sensitivity to Initial Conditions

Fig. 5: The effects of varying the initial conditions of the EM algorithm as a function of (a) oversampling factor, (b) jitter variance, and (c) additive noise variance are studied by computing the log-likelihoods of the EM algorithm results, for multiple initial conditions, across trials. The log-likelihoods are displayed relative to the result for zero-jitter initialization, so that the log-likelihood of the result for that initialization is zero.

The likelihood function described in (11) is generally nonconcave, so maximizing the function via a hill-climbing method like the EM algorithm is only guaranteed to yield a local maximum. The ability of the algorithm to converge to the global maximum depends on the nonconcavity of the likelihood function. To demonstrate the sensitivity of the EM algorithm, as a function of $M$, $\sigma_z$, and $\sigma_w$, the empirical distribution of the log-likelihood of the optimal values reached from multiple initial conditions is evaluated over numerous trials for different choices of these parameters. In this experiment, the true value of $x$, the no-jitter linear estimate (41), and ten random choices are used as initial conditions for each trial. As suggested by the spread of the samples shown in Figure 5, the variability of the EM algorithm increases with increasing $\sigma_z$ and decreasing $\sigma_w$. Even when the EM algorithm appears sensitive to initial conditions, using the no-jitter linear estimate (41) results in a relatively small deviation from the best observed log-likelihood value. In situations when such initialization does fail to produce consistent results, methods such as the deterministic annealing EM algorithm described in [24] may improve consistency.

VI-C Performance of the EM Algorithm

Fig. 6: The approximate performances of the linear unbiased estimator and the ML estimator (EM algorithm) are plotted in (a) against the CRB and the complete-data CRB, as a function of $\sigma_z$, for fixed $K$, $M$, $\sigma_w$, and a fixed random choice of $x$. The linear and ML estimator biases are plotted in (b), using the root-mean-squared (RMS) values of the bias vectors. The bars above and below each data point for the linear and ML estimators in (a) delineate the confidence intervals for those data points.

In the first performance experiment, the Cramér–Rao lower bound is compared to the MSEs of the unbiased linear estimator (40) and the EM algorithm approximation of the ML estimator to measure the efficiency of the algorithms. The Cramér–Rao lower bound for the complete data is also presented for reference. Although computational difficulties prevent a complete comparison for every possible value of $x$, carrying out a comparison for a few randomly chosen values of $x$ provides a measure of the quality of the algorithms. As the curves in Figure 6 demonstrate for one such random choice of $x$, both algorithms are approximately efficient for small $\sigma_z$, but the EM algorithm continues to be efficient for larger values of $\sigma_z$ than the linear estimator. In addition, the bias shown for the linear and ML estimators is approximately the same for small $\sigma_z$; this bias may be due to the small error in the numerical integration. Note how this error becomes larger with $\sigma_z$.

Fig. 7: The MSE performance of the ML estimator (EM algorithm) is compared in panels (a)-(c) against both the unbiased linear estimator (40) and the no-jitter BLUE (41), as a function of $\sigma_z$. The bars above and below each data point for the linear and ML estimators delineate the confidence intervals for those data points.

In Figure 7, the EM algorithm is compared against two linear estimators. First, to demonstrate the MSE improvement attainable through nonlinear estimation, the EM algorithm is pitted against the linear unbiased estimator. Since a major motivating factor for developing these algorithms is to reduce the power consumption due to clock accuracy, it is also important that the EM algorithm can achieve the same MSE as the linear estimator for a substantially larger jitter variance, allowing the clock's power consumption to be reduced.

When the additive noise dominates the jitter ($\sigma_w \gg \sigma_z$), the improvement can be expected to be minimal, since the system is nearly linear, and the jitter is statistically insignificant. As the amount of jitter increases, the density function used in each iteration of the EM algorithm becomes more nonlinear in $z_n$, and the quadrature becomes less accurate for a given number of terms. Therefore, the EM algorithm generally takes longer to converge, and the result should be a less accurate approximation to the true ML estimator. This behavior is observed in Figure 7, where the EM algorithm is compared against both the linear unbiased estimator and the no-jitter linear estimator. The EM algorithm generally has lower MSE than either linear estimator, and the performance gap is more pronounced for higher oversampling factor $M$.

Fig. 8: These graphs show the maximum factor of improvement in jitter tolerance, measured by $\sigma_z$, achievable by the EM algorithm (relative to linear reconstruction). Holding $\sigma_w$ fixed, (a) shows the trend in maximum improvement as $M$ increases, and (b) shows the trend in maximum improvement as $\sigma_w$ increases while holding $M$ fixed. The jitter standard deviation corresponding to this maximum improvement for the ML estimator is plotted on the same axes.

To answer the question of how much more jitter can be tolerated for the same desired MSE using the EM algorithm, the maximum proportional increase is plotted as a function of $M$ and $\sigma_w$ in Figure 8. The maximum proportional increase for a choice of $M$ and $\sigma_w$ is computed by approximating log-log domain MSE curves, like those in Figure 7, with piece-wise linear curves and interpolating the maximum distance between them over the range of $\sigma_z$. The range of $\sigma_z$ in which the additive noise dominates is ignored since the linear and nonlinear reconstructions perform similarly there (as expected). The proportion of improvement increases linearly as $M$ increases. As $\sigma_w$ increases, the level of improvement stays approximately the same at first. However, when $\sigma_w$ increases beyond that point, the level of improvement decreases substantially as expected, since the additive noise dominates, and the optimal estimator is approximately linear. A maximum improvement factor of two corresponds to power savings of up to 75 percent.

VII Conclusion

The results presented in Section VI are very encouraging from a power-consumption standpoint. A maximum improvement of between roughly $\sqrt{2}$ and two times the tolerable jitter translates to a two-to-fourfold decrease in power consumption by the clock, according to (3). To put the magnitude of such an improvement in context, consider the digital baseband processor for ultra-wideband communication in [25]. This processor incorporates an ADC and a PLL, which together account for a substantial share of the total power budget for the chip. Reducing by a factor of two the power consumed by the ADC alone would decrease the total power consumption of the chip by almost sixteen percent.

While effective, the EM algorithm is computationally expensive. One benefit of digital post-processing is that these algorithms can be performed off-chip, on a computer or other system with less limited computational resources. For real-time on-chip applications, Kalman filter-like versions of the EM algorithm would be more practical; this extension is a topic for further investigation. Related to real-time processing is developing streaming algorithms for the infinite-dimensional case, extending this work for general real-time sampling systems. Another future direction involves modifying these algorithms for correlated or periodic jitter.

Sampling jitter mitigation is actually just one application of these new algorithms. In the frequency domain, jitter maps to uncertainty in frequency; using algorithms such as these should produce more reliable Fourier transforms for systems like spectrum analyzers. In higher dimensions, timing noise becomes location jitter in images or video. Greater tolerance of the locations of pixels in images would allow scanning electron microscope users to acquire higher resolution images without sacrificing MSE. This paper shows that significant improvements over the best linear post-processing are possible; thus, further work may impact these and other applications.

Acknowledgment

The authors thank J. Kusuma for asking stimulating questions about sampling and applications of jitter mitigation. The authors also thank Z. Zvonar at Analog Devices and G. Frantz at Texas Instruments for their insights and support.

References

  • [1] I. King, “Understanding high-speed signals, clocks, and data capture,” National Semiconductor Signal Path Designer, vol. 103, Apr. 2005.
  • [2] T. H. Lee and A. Hajimiri, “Oscillator phase noise: A tutorial,” IEEE J. Solid-State Circuits, vol. 35, no. 3, pp. 326–336, Mar. 2000.
  • [3] K. Uyttenhove and M. S. J. Steyaert, “Speed-power-accuracy tradeoff in high-speed CMOS ADCs,” IEEE Trans. Circuits Syst. II, vol. 49, no. 4, pp. 280–287, Apr. 2002.
  • [4] B. Brannon, “Aperture uncertainty and ADC system performance,” Analog Devices, Tech. Rep. AN-501, Sep. 2000.
  • [5] R. H. Walden, “Analog-to-digital converter survey and analysis,” IEEE J. Sel. Areas Commun., vol. 17, no. 4, pp. 539–550, Apr. 1999.
  • [6] Y. Lee and R. Mittra, “Electromagnetic interference mitigation by using a spread-spectrum approach,” IEEE Trans. Electromagn. Compat., vol. 44, no. 2, pp. 380–385, May 2002.
  • [7] A. V. Balakrishnan, “On the problem of time jitter in sampling,” IRE Trans. Inform. Th., vol. 8, no. 3, pp. 226–236, Apr. 1962.
  • [8] W. M. Brown, “Sampling with random jitter,” J. Soc. Industrial and Appl. Math., vol. 11, no. 2, pp. 460–473, Jun. 1963.
  • [9] B. Liu and T. P. Stanley, “Error bounds for jittered sampling,” IEEE Trans. Autom. Control, vol. 10, no. 4, pp. 449–454, Oct. 1965.
  • [10] H. G. Feichtinger, K. Gröchenig, and T. Strohmer, “Efficient numerical methods in non-uniform sampling theory,” Numer. Math., vol. 69, no. 4, pp. 423–440, Feb. 1995.
  • [11] P. Marziliano and M. Vetterli, “Reconstruction of irregularly sampled discrete-time bandlimited signals with unknown sampling locations,” IEEE Trans. Signal Process., vol. 48, no. 12, pp. 3462–3471, Dec. 2000.
  • [12] T. E. Tuncer, “Block-based methods for the reconstruction of finite-length signals from nonuniform samples,” IEEE Trans. Signal Process., vol. 55, no. 2, pp. 530–541, Feb. 2007.
  • [13] V. Divi and G. Wornell, “Signal recovery in time-interleaved analog-to-digital converters,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Process., vol. 2, May 2004, pp. 593–596.
  • [14] D. S. Weller, “Mitigating timing noise in ADCs through digital post-processing,” SM Thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, Jun. 2008.
  • [15] P. K. Kythe and M. R. Schäferkotter, Handbook of Computational Methods for Integration.   Boca Raton, FL: CRC, 2005.
  • [16] G. H. Golub and J. H. Welsch, “Calculation of Gauss quadrature rules,” Math. Comp., vol. 23, no. 106, pp. 221–230, Apr. 1969.
  • [17] P. J. Davis and P. Rabinowitz, Methods of Numerical Integration.   Orlando: Academic Press, 1984.
  • [18] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Roy. Statist. Soc., Ser. B, vol. 39, no. 1, pp. 1–38, 1977.
  • [19] C. Herzet and L. Vandendorpe, “Prediction of the EM-algorithm speed of convergence with Cramer-Rao bounds,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Process., vol. 3, Apr. 2007, pp. 805–808.
  • [20] X.-L. Meng and D. B. Rubin, “Using EM to obtain asymptotic variance-covariance matrices: The SEM algorithm,” J. Amer. Statist. Assoc., vol. 86, no. 416, pp. 899–909, Dec. 1991.
  • [21] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, May 1983.
  • [22] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, ser. Prentice-Hall Signal Processing Series.   Upper Saddle River, NJ: Prentice Hall, 1993, vol. 1.
  • [23] J. Kusuma and V. K. Goyal, “Delay estimation in the presence of timing noise,” IEEE Trans. Circuits Syst. II, vol. 55, no. 9, pp. 848–852, Sep. 2008.
  • [24] N. Ueda and R. Nakano, “Deterministic annealing EM algorithm,” Neural Networks, vol. 11, no. 2, pp. 271–282, Mar. 1998.
  • [25] R. Blázquez, P. P. Newaskar, F. S. Lee, and A. P. Chandrakasan, “A baseband processor for impulse ultra-wideband communications,” IEEE J. Solid-State Circuits, vol. 40, no. 9, pp. 1821–1828, Sep. 2005.