Spectrally Accurate Causality Enforcement using SVD-based Fourier Continuations for High Speed Digital Interconnects

Abstract

We introduce an accurate and robust technique for assessing the causality of network transfer functions given in the form of bandlimited discrete frequency responses. These transfer functions are commonly used to represent the electrical response of high speed digital interconnects used on chip and in electronic package assemblies. In some cases small errors in the model development lead to non-causal behavior that does not accurately represent the electrical response and may lead to a lack of convergence in simulations that utilize these models. The approach is based on Hilbert transform relations, or Kramers-Krönig dispersion relations, and on the construction of causal Fourier continuations using a regularized singular value decomposition (SVD) method. Given a transfer function, non-periodic in general, this procedure constructs highly accurate Fourier series approximations on the given frequency interval by allowing the function to be periodic in an extended domain. The causality dispersion relations are enforced spectrally and exactly. This eliminates the need to approximate the transfer function behavior at infinity and to compute the Hilbert transform explicitly. We perform an error analysis of the method that takes into account the possible presence of noise or approximation errors in the data. The developed error estimates can be used in verifying the causality of the given data. The performance of the method is tested on several analytic and simulated examples that demonstrate the excellent accuracy and reliability of the proposed technique, in agreement with the obtained error estimates. The method is capable of detecting very small localized causality violations with amplitudes close to machine precision.

1 Introduction

The design of high speed interconnects, which are common on chip and at the package level in digital systems, requires systematic simulations at different levels in order to evaluate the overall electrical system performance and avoid signal and power integrity problems [1]. To conduct such simulations, one needs suitable models that capture the relevant electromagnetic phenomena that affect the signal and power quality. These models are often obtained either from direct measurements or electromagnetic simulations in the form of discrete port frequency responses that represent scattering, impedance, or admittance transfer functions, or transfer matrices in the multidimensional case. Once frequency responses are available, a corresponding macromodel can be derived using several techniques such as Vector Fitting [2], Orthonormal Vector Fitting [3], and Delay Extraction-Based Passive Macromodeling [4], among others. However, if the data are contaminated by errors, it may not be possible to derive a good model. These errors may be due to noise, inadequate calibration techniques, or imperfections of the test set-up in the case of direct measurements, or to meshing, discretization, and finite precision arithmetic errors in the case of numerical simulations. Besides, these data are typically available over a finite frequency range as discrete sets with a limited number of samples. All this may affect the performance of the macromodeling algorithm, resulting in non-convergence or inaccurate models. Often the underlying cause of such behavior is the lack of causality in a given set of frequency responses [5].

Causality can be characterized either in the time domain or the frequency domain. In the time domain, a system is said to be causal if the effect always follows the cause. This implies that the time domain impulse response function h(t) vanishes for t < 0, and a causality violation is stated if any nonzero value of h(t) is found for some t < 0. To analyze causality, one can convert the frequency responses to the time domain using the inverse discrete Fourier transform. This approach suffers from the well-known Gibbs phenomenon that is inherent to functions that are not smooth enough and are represented by a truncated Fourier series. Examples of such functions include impulse response functions of typical interconnects, which have jump discontinuities and whose spectrum is truncated since the frequency response data are available only on a frequency interval of finite length. Direct application of the inverse discrete Fourier transform to raw frequency response data causes severe overshooting and undershooting near the singularities. This problem is usually addressed by windowing the Fourier data to deal with the slow decay of the Fourier spectrum [6]. Windowing can also be applied in the Laplace domain [7] to respect causality. This approach is shown to be more accurate and efficient than the Fourier approach [8]. There are other filtering techniques that deal with the Gibbs phenomenon, but they require some knowledge of the location of the singularities (see [9] and references therein). A related paper [13] employs nonlinear extrapolation of Fourier data to avoid the Gibbs phenomenon and the use of windows/filtering.
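The Gibbs behavior described above is easy to reproduce. The sketch below (an illustration in our own notation, not the paper's method) evaluates partial Fourier sums of a unit square wave, a prototype discontinuous response, on a fine grid near the jump; the overshoot settles at the Wilbraham-Gibbs value of roughly 18% of the unit amplitude (about 9% of the jump size) and does not shrink as more terms are added.

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of a unit-amplitude square wave:
    (4/pi) * sum over odd harmonics of sin((2k+1)x) / (2k+1)."""
    s = np.zeros_like(x)
    for k in range(n_terms):
        m = 2 * k + 1
        s += (4.0 / np.pi) * np.sin(m * x) / m
    return s

# Fine grid near the jump at x = 0, where the overshoot concentrates
x = np.linspace(1e-4, 0.5, 20000)
overshoot_100 = square_wave_partial_sum(x, 100).max() - 1.0
overshoot_1000 = square_wave_partial_sum(x, 1000).max() - 1.0

# The overshoot stays near 18% of the unit amplitude (9% of the jump)
# no matter how many terms are used: refining the truncated spectrum
# narrows the overshoot region but does not reduce its height.
print(overshoot_100, overshoot_1000)
```

This is why raw application of the inverse discrete Fourier transform to band-truncated responses produces the severe over/undershooting mentioned above.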

In the frequency domain, a system is said to be causal if the frequency response given by the transfer function H(ω) satisfies the dispersion relations, also known as Kramers-Krönig relations [14]. The dispersion relations can be written using the Hilbert transform. They represent the fact that the real and imaginary parts of a causal function are related through the Hilbert transform. The Hilbert transform may be expressed in both continuous and discrete forms and is widely used in circuit analysis, digital signal processing, remote sensing, and image reconstruction [16]. Applications in electronics include reconstruction [17] and correction [18] of measured data, delay extraction [19], interpolation/extrapolation of frequency responses [20], time-domain conversion [21], estimation of optimal bandwidth and data density using causality checking [22], causality enforcement techniques using generalized dispersion relations [23], causality enforcement using minimum phase and all-pass decomposition and delay extraction [26], causality verification using minimum phase and all-pass decomposition that avoids Gibbs errors [30], causality characterization through analytic continuation for integrable functions [31], causality enforcement using periodic polynomial continuations [32], and the subject of the current paper.

The Hilbert transform that relates the real and imaginary parts of a transfer function is defined on the infinite domain, which can be reduced to the semi-infinite domain by the symmetry properties of H(ω) for real impulse response functions. However, the frequency responses are usually available over a frequency interval of finite length, so the infinite domain is either truncated or the behavior of the function at high frequencies is approximated. This is necessary since measurements can only be practically conducted over a finite frequency range, and often the cost of the measurements scales in an exponential manner with respect to frequency. Likewise, simulation tools have a limited bandwidth, and there is a computational cost associated with each frequency data point that generally precludes very large bandwidths in these data sets. Usually H(ω) is assumed to be square integrable, which requires the function to decay at infinity. When a function does not decay at infinity or even grows, generalized dispersion relations with subtractions can be successfully used to reduce the dependence on high frequencies and allow a domain truncation [23]. A review of some previous work on generalized dispersion relations and other methods that address the problem of having a finite frequency range is provided in [25].

We take another approach: instead of approximating the behavior of H(ω) at high frequencies or truncating the domain, we construct a causal periodic continuation, or causal Fourier continuation, of H(ω) by requiring the transfer function to be periodic and causal in an extended domain of finite length. In [32], polynomial periodic continuations were used to make a transfer function periodic on an extended frequency interval. In these papers, the raw frequency responses were used on the original frequency interval. Once a periodic continuation is constructed, the spectrally accurate Fast Fourier Transform [35], implemented in FFT/IFFT routines, can be used to compute the discrete Hilbert transform and enforce causality. The accuracy of the method was shown to depend primarily on the degree of the polynomial, which controlled the smoothness, up to some order, of the continuation at the end points of the given frequency domain. This in turn made it possible to reduce the boundary artifacts compared to applying the discrete Hilbert transform directly to the data without any periodic continuation, as implemented in the hilbert function of the popular software Matlab.

In the current work we implement the idea of periodic continuations of an interconnect transfer function by approximating this function with a causal Fourier series in an extended domain. The approach allows one to obtain extremely accurate approximations of the given function on the original interval. The causality conditions are imposed exactly and directly on the Fourier coefficients, so there is no need to compute the Hilbert transform numerically. This eliminates the need to approximate the behavior of the transfer function at infinity, as with the polynomial continuations employed in [32], and does not require the use of the Fast Fourier Transform. The advantage of the method is that it is capable of detecting very small localized causality violations with amplitudes close to machine precision, and that a small uniform approximation error can be achieved on the entire original frequency interval, so it does not have the boundary artifacts reported for the hilbert Matlab function, polynomial continuations [32], or generalized dispersion relations [23]. The error analysis we perform separates the error due to approximating a transfer function with a causal Fourier series from causality violations that are due to the presence of noise or approximation errors in the data. The developed upper bound estimates for these errors can be used in checking the causality of the given data.

The paper is organized as follows. Section 2 provides background on causality for linear time-translation invariant systems, the dispersion relations, and the motivation for the proposed method. In Section 3 we derive causal spectrally accurate Fourier continuations using the truncated singular value decomposition (SVD). In Section 4 we perform an error analysis of the method that takes into account possible noise or approximation errors in the given data. We outline an approach for verifying the causality of the given data by using the developed error estimates. In Section 5, the technique is applied to several analytic and simulated examples, both causal and non-causal, to show the excellent performance of the proposed method, in very good agreement with the developed error estimates. Finally, in Section 6 we present our conclusions.

2 Causality for Linear Time-Translation Invariant Systems

Consider a linear and time-invariant physical system with impulse response h(t), subject to a time-dependent input to which it responds by an output. Linearity of the system implies that the output is a linear functional of the input, while time-translation invariance means that if the input is shifted by some time interval, the output is also shifted by the same interval, and, hence, the impulse response function depends only on the difference between its arguments. Thus, the response can be written as the convolution of the input and the impulse response [36]

Denote by

the Fourier transform of the impulse response. It is also called the transfer matrix in the multidimensional case, or the transfer function in the scalar case.

The system is causal if the output cannot precede the input, i.e. if the input vanishes up to some time, the same must be true for the output. This primitive causality condition in the time domain implies h(t) = 0 for t < 0, and (Equation 2) becomes

Note that the integral in (Equation 3) extends only over a half-axis, which implies that the transfer function has a regular analytic continuation in the lower half of the complex frequency plane.

Examples of physical systems that satisfy the above conditions include electric networks in which the input is the voltage, the output is the current, and the impulse response corresponds to the admittance of the network; networks in which the input is the current, the output is the voltage, and the impulse response corresponds to the impedance; and networks in which the input and output are both power waves and the impulse response corresponds to the scattering.

For simplicity, we consider the case of a scalar impulse response, but the approach can also be extended to the multidimensional case, applied to any element of the impulse response matrix.

Very often it is assumed that h(t) is square integrable [31], i.e. the integral of its squared absolute value is bounded by some constant. Then one can use Parseval’s theorem to show that H(ω) is also square integrable [36]. The converse also holds [37]. Square integrability of the impulse response is often related with the requirement that the total energy of the system be finite. Starting from Cauchy’s theorem and using contour integration, one can show [36] that for any point ω on the real axis, H(ω) can be written as

where

denotes Cauchy’s principal value. Separating the real and imaginary parts of (Equation 4), we get

These expressions relating the real and imaginary parts of H(ω) are called the dispersion relations or Kramers-Krönig relations after Krönig [15] and Kramers [14], who derived the first known dispersion relation for a causal system of a dispersive medium. In mathematics the dispersion relations (Equation 5), (Equation 6) are also known as the Sokhotski–Plemelj formulas. These formulas show that the real part of H at one frequency is related to the imaginary part of H at all frequencies, and vice versa. If either the real or the imaginary part is chosen as an arbitrary square integrable function, then the other one is completely determined by causality. Recalling that the Hilbert transform is defined

we see that the real and imaginary parts of a causal transfer function are Hilbert transforms of each other, i.e.

For example, a transfer function with a single simple pole in the upper half plane is clearly square integrable and satisfies the dispersion relations (Equation 5), (Equation 6), which can be verified by contour integration. An example of a function that is not square integrable but satisfies the Kramers-Krönig dispersion relations (Equation 5), (Equation 6) is provided by the complex exponential e^(-iaω), a > 0. Its real and imaginary parts are cos(aω) and -sin(aω), and the dispersion relations (Equation 5), (Equation 6) can be verified by noting that these functions form a Hilbert transform pair.
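The Hilbert-pair structure behind these examples can be checked numerically. The sketch below is illustrative and uses one common sign convention, H[u](t) = (1/π) p.v. ∫ u(s)/(t − s) ds, under which cos and sin form a Hilbert pair (H[cos] = sin, H[sin] = −cos); the principal value is evaluated on a midpoint grid placed symmetrically about the singularity so that the singular contributions cancel in pairs.

```python
import numpy as np

def hilbert_pv(u, t0, half_width=5000.0, ds=0.01):
    """Principal-value Hilbert transform
    H[u](t0) = (1/pi) p.v. integral of u(s) / (t0 - s) ds,
    computed on a midpoint grid symmetric about the singularity at
    s = t0, so the 1/(t0 - s) blow-up cancels in +/- pairs."""
    offsets = (np.arange(int(half_width / ds)) + 0.5) * ds
    # Pair s = t0 - o (denominator +o) with s = t0 + o (denominator -o):
    pair_sum = (u(t0 - offsets) - u(t0 + offsets)) / offsets
    return pair_sum.sum() * ds / np.pi

t0 = 0.7
print(hilbert_pv(np.cos, t0), np.sin(t0))   # cos -> sin
print(hilbert_pv(np.sin, t0), -np.cos(t0))  # sin -> -cos
```

The slow 1/s decay of the integrand is why the finite truncation width must be large; this is the same high-frequency sensitivity that motivates the subtraction and continuation techniques discussed below.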

In practice, the function H(ω) may not satisfy the assumption of square integrability: it may be only bounded, or even grow like a power of ω at high frequencies. In such cases, instead of the dispersion relations (Equation 5), (Equation 6), one can use generalized dispersion relations with subtractions, in which a square integrable function is constructed by subtracting from H a Taylor polynomial of H around a chosen frequency and dividing the result by an appropriate power of the frequency offset. This approach makes the integrand in the generalized dispersion relations less dependent on the high-frequency behavior of H. This may be very beneficial when the high-frequency behavior of H is not known with sufficient accuracy, or is not accessible in practice at all because only finite bandwidth data are available. The technique was proposed in [38] and implemented successfully in [23] to reduce the sensitivity of the Kramers-Krönig dispersion relations (Equation 5), (Equation 6) to the high-frequency data.

In this paper, we take an alternative approach motivated by the example of the periodic function e^(-iaω), a > 0, mentioned above, which is not square integrable but still satisfies the Kramers-Krönig dispersion relations (Equation 5), (Equation 6). In practice, the transfer function is typically known only over a finite frequency interval, with a limited number of discrete values, and it is not periodic in general. Direct application of the dispersion relations (Equation 5), (Equation 6) produces large errors in the boundary regions, mainly because the high-frequency behavior of H is missing, unless the data decay to zero at the boundary. To overcome this problem, we construct a spectrally accurate causal periodic continuation of H in an extended domain. A method for constructing a periodic continuation, also known as a Fourier continuation or Fourier extension, based on a regularized singular value decomposition (SVD), was recently proposed in [39] (see also references therein). This method allows one to calculate Fourier series approximations of non-periodic functions such that the Fourier series is periodic in an extended domain. Causality can be imposed directly on the Fourier coefficients, producing a causal Fourier continuation and thus satisfying causality exactly. The Fourier coefficients are determined by solving an overdetermined and regularized least squares problem, since the system suffers from numerical ill-conditioning. The resulting causal Fourier continuation is then compared with the given discrete data on the original bandwidth of interest. A decision about the causality of the given data is made using the error estimates developed in Section 4.

In the next section we provide details of the derivation of causal Fourier continuations.

3 Causal Fourier Continuation

Consider a transfer function available at a set of discrete frequencies on a finite interval. First, let the data start at zero frequency, so we have the baseband case. Since equations (Equation 5), (Equation 6) are homogeneous in the frequency variable, we can rescale the frequency for convenience to obtain a rescaled transfer function defined on an interval of unit length. The time domain impulse response function is often real-valued. Hence, the real and imaginary parts of H, as the Fourier transform of the impulse response, and, hence, of the rescaled transfer function, are even and odd functions, respectively. This implies that the discrete set of rescaled frequency responses is available on the unit length interval by spectrum symmetry. In some cases, the data are available only above a non-zero, low-frequency cutoff, which corresponds to the bandpass case. The proposed procedure is still applicable, since it does not require the data points to be equally spaced. The transmission line example in Section 5.3 considers such a situation.
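The rescaling and symmetry step can be sketched as follows (notation here is ours, not the paper's): given samples of H on a baseband grid, rescale frequencies to the unit interval and use the conjugate symmetry H(−ω) = conj(H(ω)), valid for real impulse responses, to supply the negative-frequency half.

```python
import numpy as np

def rescale_and_mirror(omega, H, omega_max):
    """Rescale frequencies from [0, omega_max] to [0, 1] and extend the
    data to [-1, 0) using the conjugate symmetry H(-w) = conj(H(w))
    that holds for real-valued impulse responses."""
    x = omega / omega_max                      # rescaled frequencies in [0, 1]
    keep = x > 0                               # avoid duplicating the w = 0 sample
    x_full = np.concatenate([-x[keep][::-1], x])
    H_full = np.concatenate([np.conj(H[keep])[::-1], H])
    return x_full, H_full

# Example with a simple one-pole causal response (illustrative choice)
omega = np.linspace(0.0, 10.0, 101)
H = 1.0 / (1.0 + 1j * omega)
x_full, H_full = rescale_and_mirror(omega, H, omega.max())
```

The mirrored data set has an even real part and an odd imaginary part, as required by the spectrum symmetry discussed above.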

The idea is to construct an accurate Fourier series approximation of the rescaled transfer function by allowing the Fourier series to be periodic and causal in an extended domain. The result is the causal Fourier continuation of the given function, and it is defined by

for an even number of terms, whereas for an odd number of terms the index varies over the corresponding symmetric range. Throughout this paper we will consider Fourier series with an even number of terms for simplicity. All presented results have analogues for Fourier series with an odd number of terms. Here the period of the approximation is a free parameter. For SVD-based periodic continuations the period is normally chosen as twice the length of the domain on which the function is given [41]. This value is not necessarily optimal and is shown [43] to depend on the function being approximated. For causal Fourier continuations we also find that the optimal period depends on the function. In practice, the period can be varied to obtain a more optimal performance of the method. For very smooth functions, it is better to use a wider extension zone, which was enough in most of our examples. However, for functions that are wildly oscillatory or have high gradients in the boundary regions of the domain where the original data are available, a smaller extension zone is recommended [39]; we used one in one of our examples. Assume that the values of the function are known at M discretization or collocation points. Note that the continuation is a trigonometric polynomial of bounded degree.

Since the real and imaginary parts of the rescaled transfer function are even and odd functions of the frequency, respectively, the Fourier coefficients

are real. Here the asterisk denotes the complex conjugate. The complex exponentials form a complete orthogonal basis in the space of square integrable functions on the period interval, and, in particular

where the right-hand side involves the Kronecker delta.

For a complex exponential, the Hilbert transform returns the same exponential multiplied by a constant. Hence,

which implies that the complex exponentials are eigenfunctions of the Hilbert transform, with the associated eigenvalue determined by the sign of the mode index. We will use relations (Equation 9) to impose a causality condition on the coefficients of the continuation, similarly to what was done in [19] for the case of a square integrable transfer function, where the idea of projecting on the eigenfunctions of the Hilbert transform from [44] was used. In the present work, square integrability of the transfer function is not required, and more general transfer functions than in [19] can be considered.
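For reference, the eigenfunction relation can be written explicitly. With the Hilbert transform convention shown below (sign conventions differ across the literature and may differ from the paper's), one has:

```latex
\mathcal{H}\!\left[e^{ikx}\right]
  = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{\infty}
    \frac{e^{iks}}{x-s}\,ds
  = -\,i\,\operatorname{sgn}(k)\,e^{ikx},
\qquad
\operatorname{sgn}(k)=
\begin{cases}
  \phantom{-}1, & k>0,\\
  \phantom{-}0, & k=0,\\
  -1, & k<0,
\end{cases}
```

so each Fourier mode is an eigenfunction with eigenvalue −i sgn(k), constants are annihilated, and causality reduces to a sign pattern on the Fourier coefficients.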

For convenience of the derivation, let us write the continuation as an infinite Fourier series, which will be truncated at the end to obtain a Fourier continuation of the form (Equation 7), and denote its real and imaginary parts separately. Since

we obtain

and, since the coefficients are real, we have

where in the last sum we changed the order of summation in the second term. Similarly, we can show that

For a causal periodic continuation, we need the real and imaginary parts of the continuation to be related by the Hilbert transform. Hence,

Employing linearity of the Hilbert transform, we get

Using (Equation 9), we obtain

or

which implies that the Fourier coefficients with nonpositive indices vanish in (Equation 7). Hence, a causal Fourier continuation has the form

\[ H_c(x) \;=\; \sum_{k=1}^{N} \alpha_k \, e^{-2\pi i k x / b}, \tag{10} \]

where we truncated the infinite sum to obtain a trigonometric polynomial. Evaluating (Equation 10) at the collocation points produces the complex-valued system

with M equations for N unknown coefficients. If M > N, the system (Equation 11) is overdetermined and has to be solved in the least squares sense. Once the Fourier coefficients are computed, formula (Equation 10) provides a reconstruction of the transfer function on the original interval.

To ensure that the numerically computed Fourier coefficients are real, instead of solving the complex-valued system (Equation 11), one can separate its real and imaginary parts to obtain the real-valued system

This produces 2M equations (M equations each for the real and imaginary parts) and N unknowns. We show below that both the complex (Equation 11) and real (Equation 12) formulations give reconstruction errors of the same order, with the real formulation performing slightly better. To distinguish between continuations computed using the complex and real formulations, we use separate notation for each.

Consider the real formulation (Equation 12) and introduce the following notation. Let the unknown coefficients and the data be arranged into column vectors, with the superscript T denoting the transpose, and let the system matrix have entries

Similar notation can be used for the complex formulation (Equation 11). Then the coefficients are defined as a least squares solution of the linear system, written as

that minimizes the Euclidean norm of the residual. This least squares problem is extremely ill-conditioned, as explained in [45] using the theory of frames. However, it can be regularized using a truncated SVD method, in which singular values below some cutoff tolerance close to machine precision are discarded; this cutoff is used as the threshold to filter the singular values. The ill-conditioning increases as N increases, and manifests itself through rapid oscillations in the extended region. These oscillations are typical for SVD-based Fourier continuations. Once the system reaches a critical size, which does not depend on the function being approximated, the coefficient matrix becomes rank deficient, and regularization of the SVD is required to treat singular values close to machine precision. Because of the rank deficiency, the Fourier continuation is no longer unique. Applying the truncated SVD method produces the minimum norm solution, for which the corresponding Fourier continuation is oscillatory. The oscillations in the extended region do not significantly affect the quality of the causal Fourier continuation on the original domain, and varying the period can minimize their effect and decrease the overall reconstruction error, especially in the boundary region.

Another way to mitigate the ill-conditioning of the matrix problems (Equation 11) or (Equation 12) is to use more data (collocation) points M than Fourier coefficients N. This is called “overcollocation” [39]; it makes the problem more overdetermined and helps to increase the accuracy of the solution. It is recommended to use at least twice as many collocation points as Fourier coefficients, i.e. M ≥ 2N. Convergence can be checked by keeping the number of Fourier coefficients fixed and increasing the number of collocation points. Overcollocation also helps with filtering out trigonometric interpolants that have very small errors at the collocation points but large oscillations between them [39]. In all our examples we use at least M = 2N as an effective way to obtain an accurate and reliable approximation of the transfer function over the given interval.
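The construction can be summarized in a short sketch (our own notation and parameter choices: N causal modes exp(−2πikx/b) with k = 1…N, period b twice the data interval, M = 2N collocation points, and an SVD cutoff near machine precision; the paper's actual normalizations may differ). To keep the example deterministic, the data are generated from a function that already lies in the causal span, so the regularized least squares fit reproduces the data to near round-off.

```python
import numpy as np

def causal_continuation_matrix(x, n_modes, period):
    """Collocation matrix A[j, k-1] = exp(-2*pi*i*k*x_j/period),
    k = 1..n_modes: only causal Fourier modes are included."""
    k = np.arange(1, n_modes + 1)
    return np.exp(-2j * np.pi * np.outer(x, k) / period)

def solve_truncated_svd(A, f, cutoff=1e-13):
    """Minimum-norm least squares solution of A a = f, discarding
    singular values below cutoff * sigma_max (regularization)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    keep = s > cutoff * s[0]
    return Vh.conj().T[:, keep] @ ((U.conj().T @ f)[keep] / s[keep])

# Data given on [0, 1]; extended period b = 2; M = 2N collocation points
b, N, M = 2.0, 24, 48
x = np.linspace(0.0, 1.0, M)
A = causal_continuation_matrix(x, N, b)

# Synthetic data that are exactly a causal trigonometric polynomial
# (illustrative; real data would come from measurements or simulation)
alpha_true = np.zeros(N)
alpha_true[0], alpha_true[2], alpha_true[5] = 1.0, 0.5, -0.25
f = A @ alpha_true

alpha = solve_truncated_svd(A, f)
err = np.abs(A @ alpha - f).max()   # reconstruction error at the data points
print("max reconstruction error:", err)
```

Because the coefficient matrix is rank deficient in practice, the recovered coefficients are the minimum norm ones and need not coincide with alpha_true; the reconstruction on the data interval is nevertheless accurate, which mirrors the non-uniqueness discussed above.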

In the multidimensional case, when a transfer matrix is given, the above procedure can be extended to all elements of the matrix. Computing the SVD is an expensive numerical procedure for large matrices. The operation count to find a least squares solution using the SVD method for an M × N matrix with M ≥ N is of the order O(MN²) [46]. The actual CPU times for computing the SVD, solving a linear system in the least squares sense, and constructing causal Fourier continuations for the various problem sizes used in this work are shown in Table 3 in Section 5. Computational savings can be achieved by noting that in our problem the matrix depends only on the location of the frequency points at which the transfer matrix is evaluated/available and on the continuation parameters, but does not depend on the actual values of the transfer matrix. Having frequency responses at a fixed set of points, we can choose default continuation parameters and compute the SVD only once prior to verifying causality. Varying the continuation parameters for each element of the transfer matrix separately, if needed, would require recomputing the SVD.
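The reuse of a single factorization across all entries of a transfer matrix can be sketched as follows (illustrative; a random matrix stands in for the collocation matrix, and the right-hand sides stand in for the sampled entries):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))   # stands in for the collocation matrix
B = rng.standard_normal((60, 5))    # 5 right-hand sides: data for 5 matrix entries

# Factor once...
U, s, Vh = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-13 * s[0]

# ...then each right-hand side costs only two small matrix products
X = Vh.T[:, keep] @ ((U.T @ B)[keep] / s[keep, None])

# Same answer as solving each least squares problem from scratch
X_ref = np.linalg.lstsq(A, B, rcond=None)[0]
print(np.abs(X - X_ref).max())
```

The O(MN²) factorization cost is paid once, after which each additional matrix entry is processed at the cost of ordinary matrix-vector products.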

4 Error Analysis

In this section, we provide an upper bound for the error in approximation of a given function by its causal Fourier continuation . The analysis of convergence of Fourier continuation technique based on the truncated SVD method was done by Lyon [43] using a split real formulation. In this work, we extend his results to causal Fourier continuations. The obtained error estimates can be employed to characterize causality of a given set of data.

4.1 Error estimates

Denote by a trial continuation any function of the form

where the coefficients are real, as before. Let the matrix of the collocation system have the full SVD decomposition [47] A = UΣV*, where U is an M × M unitary matrix, Σ is an M × N diagonal matrix of singular values arranged in nonincreasing order, V is an N × N unitary matrix, and V* denotes the complex conjugate transpose of V.

We can prove the following result.

To obtain the error bound ( ?) we use ideas from [43], but employ a complex formulation and impose causality on the Fourier coefficients. The error bound is expressed in terms of the error in approximating a function with a causal Fourier series, for any given causal Fourier series. This requires finding upper bounds for the error due to the truncation of singular values and for the effect of the Fourier continuation on the error in approximating a function with a causal Fourier series. If the function is known with some error, its effect is also included in a straightforward way. The bound on the causal Fourier series error follows from the Jackson theorems [48], which estimate the error in approximating a periodic function with its Fourier series, with the causal series as a particular case. Indeed, a causal Fourier series with N modes can be considered as a full Fourier series whose coefficients with nonpositive indices are zero. Hence, the error in approximating a periodic function with a given number of continuous derivatives by a causal Fourier series with N modes has the following upper bound:

The left and right singular vectors that form the columns of the matrices U and V are used in the derivation of the error estimates ( ?), ( ?) as alternatives to the Fourier basis.

As can be seen from ( ?), the coefficients in the error bound depend only on the continuation parameters and the location of the discrete points, and not on the function being approximated. Their behavior as functions of the resolution can be investigated by direct numerical calculation. We do this for the case of equally spaced points with M = 2N, which is typical in applications. Other distributions of points, other cut-off values, and other relations between M and N can be analyzed in a similar manner; for example, results for larger oversampling ratios are very similar to the case M = 2N. We find that while one coefficient does not change much with N and remains small, the other stays close to the cut-off value for small N and increases only moderately for large N. This behavior does not seem to depend on the cut-off value, and the results are similar as the cut-off is varied over several orders of magnitude. The number of discarded singular values grows with N (provided that the singular values above the cut-off are computed accurately), since the ill-conditioning of the problem increases with N. At the same time, the coefficient multiplying the noise term grows approximately like a small power of N as N increases.

Since the error (Equation 14) in approximating a function with a causal Fourier series decays algebraically with the number of modes, while the accompanying coefficient grows only slowly, the part of the error bound that is due to the causal Fourier series approximation

decays at an algebraic rate only slightly below that of a standard Fourier series. For comparison with the analogous error bound term for Fourier continuations reported in [43], a causal Fourier series converges slightly more slowly than a standard Fourier series.

In practice, the smoothness order of the transfer function may not be known. In this case, it can be estimated by noting that the error bound can be written as

\[ E(M) \approx C\, M^{-m}, \tag{15} \]

for some constants C and m. Taking the natural logarithm of both sides, we get

\[ \ln E \approx \ln C - m \ln M, \tag{16} \]

i.e. ln E is approximately a linear function of ln M. The values of C and m can be estimated as follows. Assume that the transfer function is known at M frequency points. Usually the number of frequency responses is fixed and it may not be possible to get data with higher resolution. Assume that the errors due to the truncation of singular values and due to noise in the data are small, so that the error due to the causal Fourier series approximation is dominant. Let the reconstruction errors of the real and imaginary parts be defined by

on the original interval. Compute these errors with M/2, M/4, etc. samples, i.e. with N/2, N/4, etc. Fourier coefficients. Solve equation (Equation 16) in the least squares sense to find approximations of the intercept and the slope, and hence of the constant and the smoothness order. The error term can then be extrapolated to higher resolutions using (Equation 15) to see whether the causal Fourier series approximation error decreases as the number of Fourier coefficients, i.e. the resolution, increases.
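The fit of (Equation 16) is an ordinary least squares line in log-log coordinates. A sketch with synthetic errors that follow the assumed power-law model exactly (the names M, C, k below are illustrative, not the paper's):

```python
import numpy as np

# Synthetic reconstruction errors obeying E = C * M**(-k), C = 3, k = 2.5
C_true, k_true = 3.0, 2.5
M = np.array([64.0, 128.0, 256.0, 512.0])   # resolutions M, 2M, 4M, ...
E = C_true * M ** (-k_true)

# ln E = ln C - k ln M: a straight line in (ln M, ln E); fit by least squares
slope, intercept = np.polyfit(np.log(M), np.log(E), 1)
k_est, C_est = -slope, np.exp(intercept)
print(k_est, C_est)

# Extrapolate the model to a finer resolution to predict the attainable error
E_pred_1024 = C_est * 1024.0 ** (-k_est)
```

In practice the measured errors will only approximately follow the line; the least squares slope then estimates the effective smoothness order.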

The error bound term

that is due to the truncation of singular values is typically small: close to the cut-off value for small N and only moderately larger for large N. As can be seen from (Equation 19), this term depends on the cut-off and on the function being approximated. The default cut-off value may not provide the smallest error. To find a more optimal value, a few cut-off values may be tried to determine which one gives smaller overall reconstruction errors. In the case of non-causal functions, varying the cut-off does not essentially affect the size of the reconstruction errors.

The error in the data should be known in practice, since the error of measurements or the accuracy of full wave simulations is typically known. The error bound term due to noise in the data

consists of the norm of the noise amplified by a coefficient that grows with the resolution, as was shown above. In the numerical experiments that we conducted, the reconstruction errors due to noise in the data seem to level off at the order of the noise and are not amplified significantly as the resolution increases. This does not contradict the error estimates ( ?), ( ?): the error bounds are not tight, and the actual reconstruction errors may be smaller than the bounds suggest.

4.2 Causality characterization

The error estimates ( ?), ( ?) show that the reconstruction errors defined in (Equation 17), (Equation 18) can be dominated either by the error due to approximating a function with its causal Fourier series, or by the error due to noise or approximation errors in the data. If the only errors in the data are round-off errors, then the reconstruction errors will approach, or will be bounded by, the error due to the truncation of singular values. The noise level may be known: in the case of experimental data it could be relatively large, while data obtained from finite element simulations may be accurate to roughly single precision, and in some cases the expected accuracy may be even higher. If the reconstruction errors are noticeably higher than the expected noise level, then they may be dominated by the causal Fourier series approximation error. Determining the smoothness order using the model (Equation 15) with M/2, M/4, etc. samples can then indicate whether the causal Fourier series approximation error can be made smaller by increasing the resolution of the data. If the model (Equation 15), extrapolated to higher resolutions, predicts that the error continues to decay, so that the Fourier series approximation error can be decreased by using more frequency samples, then we can state that the dispersion relations are satisfied to within the reconstruction errors, and that causality violations, if present, are smaller than, or at most of the order of, the reconstruction errors. In this case, with fixed resolution, the method will not be able to detect these smaller causality violations.
The noise level may not be known, but it can be determined by comparing the reconstruction errors and as the resolution of the data increases (if possible) or decreases, since the reconstruction error due to noise does not depend significantly on the resolution in practice. This means that as the number of samples increases, the reconstruction errors level off at the value of the noise . Equivalently, when using , , etc. samples (coarser resolution), the reconstruction error may not increase; instead, it remains at the same order, which is then the noise level . In this case, the dispersion relations are satisfied within the error , which is the order of the causality violations. See the example in Section 5.2 of a finite element model of a package, where a related situation is discussed.

In the next section we apply the proposed causal Fourier continuation based method to several analytic and simulated examples that are causal or non-causal, or have imposed causality violations. When a given transfer function is causal, we expect the dispersion relations (Equation 5), (Equation 6) to be satisfied with accuracy close to machine precision. This is the so-called ideal causality test. When there is a causality violation, the method is expected to point to the location of the violation, or at least to produce reconstruction errors close to the order given by the noise in the data, as suggested by the error analysis. We show that the results are in full agreement with the error estimates developed in Section 4.

5Numerical Experiments: Causality Verification

5.1Two-pole example

The two-pole transfer function with no delay [19] is defined by

where , . Since both poles of this function, located at , are in the upper half-plane, the transfer function is causal as a linear combination of causal transforms. We sample data on the interval from to , use the spectrum symmetry to obtain data on , and scale the frequency interval to . The real and imaginary parts of are shown in Figure 1. Superimposed are their causal Fourier continuations obtained using , , and by solving the complex system (Equation 11) and its real counterpart (Equation 12). As can be seen, there is no essential difference between the complex and real formulations, though the real formulation (Equation 12) is slightly more ill-conditioned than the complex one. The data and the causal Fourier continuations are practically indistinguishable on .
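To make the construction concrete, the following minimal Python sketch builds the one-sided (causal) Fourier collocation matrix on an extended period and computes the regularized minimum-norm least-squares fit through a truncated SVD. The model function, sign convention, cutoff, and parameter values (1000 points, 250 modes, extension factor 4) are our illustrative assumptions, not the paper's exact formulation of (Equation 11):

```python
import numpy as np

def causal_fourier_fit(x, H, modes, b, cutoff=1e-12):
    """Fit samples H(x_j) by a one-sided (causal) Fourier series
    sum_k c_k exp(-2*pi*i*k*x/b), k = 0..modes-1, periodic on length b,
    via the minimum-norm least-squares solution with small singular
    values truncated (the regularized SVD step)."""
    A = np.exp(-2j * np.pi * np.outer(x, np.arange(modes)) / b)
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    keep = s > cutoff * s[0]                 # discard the small-singular-value tail
    c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ H) / s[keep])
    return c, np.max(np.abs(A @ c - H))      # coefficients and max residual

# A smooth causal transfer function (impulse response e^{-t}, t >= 0)
# is reproduced to high accuracy ...
x = np.linspace(-0.5, 0.5, 1000)
_, res_causal = causal_fourier_fit(x, 1.0 / (1.0 + 2j * np.pi * x), 250, 4.0)
# ... while a strongly anticausal mode cannot be fit with bounded coefficients
_, res_anticausal = causal_fourier_fit(x, np.exp(2j * np.pi * 5.0 * x), 250, 4.0)
```

The gap between the two residuals is what the causality test measures: smooth causal data leave a residual many orders of magnitude below that of the anticausal mode, which the regularized one-sided series cannot represent.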

Figure 1: H(x) and its Fourier continuations {\mathcal C}^C(H) and {\mathcal C}^R(H) computed using the complex and real formulations in this example with M=250, N=4M, b=4, shown on [-0.5, 0.5].

To demonstrate the nature of the continuations, we plot the same curves as in Figure 1 (only the real parts are presented; the imaginary parts have similar features) on the extended domain, where we show two periods. These are plotted in Figure 2. It is evident that the continuations oscillate in the extended region outside . The frequency of these oscillations increases with . At the same time, the Fourier series become more and more accurate approximations on the original interval . To demonstrate this, we show the reconstruction errors , defined in (Equation 17), in Figure 3 (a semilogy plot) on for various values of with , obtained using the real formulation (Equation 12). The results for are similar. As increases from to , the order of the error decreases from to for both real and imaginary parts. For example, with , , , both errors and are of the order of . The results indicate that the error is uniform on the entire interval and does not exhibit boundary artifacts.

Figure 2: The real part of Fourier continuations {\mathcal C}^C(H) and {\mathcal C}^R(H) in this example with M=250, N=1000, b=4, shown on the wider domain [-4,4] (two periods are shown).
Figure 3: Semilogy plot of the errors E_R(x) in this example on [-0.5, 0.5] with M=5, 10, 25, 50, 100, 250 and 400, N=4M, b=4.

The above results demonstrate that the proposed technique is capable of verifying that the given data are causal with high accuracy. In this case, causality is satisfied with an error less than . These results are in agreement with the error estimates ( ?), ( ?) developed in Section 4. Since the data contain no noise other than round-off errors and the transfer function is smooth on , the reconstruction errors are dominated by the fast-decaying error in approximating the smooth transfer function with its causal Fourier series for smaller , and then by the error due to truncation of the singular values for higher values of , which is close to the cut-off value . We also make another observation about the behavior of the errors and as increases for a causal smooth function. Even for small values of , as doubles, the errors decrease by several orders of magnitude until they level off around (see Table 1). This is a consequence of the fast convergence of the Fourier series of a smooth function.

Table 1: The orders of the errors and in this example, demonstrating how fast the reconstruction errors decay as the resolution increases in the case of a causal smooth function.


Next we test how sensitive the method is to causality violations. We do this by imposing a localized non-causal perturbation on causal data. This artificial causality violation is modeled by a Gaussian function

of small amplitude , centered at , and added to , while keeping unchanged. This type of non-causal perturbation was used in [23] to test a causality verification technique based on the generalized dispersion relations. We use a Gaussian centered at with standard deviation , so its “support” is concentrated on a very narrow interval of length , outside of which the values of the perturbation are very close to . The advantage of using a Gaussian perturbation is that it is localized and allows one to verify whether the proposed method is capable of detecting the location of a causality violation. By varying the amplitude , we can impose larger or smaller causality violations. The smaller the amplitude that can be detected, the more sensitive the method is to causality violations. With , which gives a very small causality violation, the error between the data and its causal Fourier continuation is shown in Figure 4. It is clear that the error has very pronounced spikes at (due to symmetry) that correspond to the exact locations of the Gaussian perturbation. These spikes are of the order , whereas on the rest of the interval the error is about times smaller. For larger perturbations, the results are similar. For example, with , the error at is of the order of and the rest of the interval has an error times smaller, etc. We can see that the reconstruction error in this case is strongly dominated by the error (perturbation) in the data at the location of the causality violation, and the magnitude of the error is of the same order as that of the perturbation. At the same time, the transfer function itself is very smooth, which ensures fast convergence of the Fourier series. The results are in perfect agreement with the error estimates ( ?), ( ?).
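A compact sketch of this experiment follows, with an illustrative smooth causal model function in place of the two-pole data and our own parameter choices (amplitude 10^{-6}, center 0.25, standard deviation 10^{-2}/6): the Gaussian is added to the real part only, and the pointwise residual of the causal fit localizes the violation.

```python
import numpy as np

npts, modes, b = 1000, 250, 4.0
x = np.linspace(-0.5, 0.5, npts)
H = 1.0 / (1.0 + 2j * np.pi * x)               # smooth causal reference data
a, x0, sigma = 1e-6, 0.25, 1e-2 / 6
H_bad = H + a * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2))  # perturbs Re(H) only

# causal fit of the perturbed data via the truncated SVD
A = np.exp(-2j * np.pi * np.outer(x, np.arange(modes)) / b)
U, s, Vh = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-12 * s[0]
c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ H_bad) / s[keep])

E = np.abs(A @ c - H_bad)                      # pointwise reconstruction error
x_spike = x[np.argmax(E)]                      # spike localizes the violation
```

The residual spikes near x0 at roughly the order of the perturbation amplitude, while away from the violation it stays many orders of magnitude smaller, mirroring the behavior seen in Figure 4.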

Figure 4: Semilogy plot of E_R(x) and E_I(x) in this example with Gaussian perturbation with a=10^{-10} on [-0.5, 0.5] with M=250, N=1000, b=4.

Another perturbation that we consider is a cosine function, which we also add to while keeping unaltered. Adding a non-causal cosine perturbation makes the transfer function non-causal on the entire interval, and higher reconstruction errors are expected everywhere. We find that both errors and oscillate with frequency and amplitude similar to those of the perturbation. For example, with , these errors are of the order . Increasing/decreasing increases/decreases the magnitude of the error. To see how the noise level can be detected, set, for example, , and compute the reconstruction errors and as the resolution increases. We vary from to and analyze the order of the errors. The results are shown in Table 2. As we can see, the reconstruction errors first decrease as the resolution (the number of points) and the number of Fourier coefficients increase, until the error reaches the size of the perturbation, which happens at ; after that, the order of the error does not decrease further and levels off instead.

Table 2: The orders of the errors and in this example with a non-causal cosine perturbation with , as and increase in the proportion .


With a smaller perturbation amplitude , the error levels off at a larger value of . For example, with , the reconstruction errors stop decreasing at .
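The leveling-off behavior reported in Table 2 can be reproduced with a small numerical sketch: add noise of a fixed level to causal data and refine the resolution; the fit residual decreases until it reaches the noise floor and then stalls. The model function, noise level, and resolutions below are illustrative choices, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
noise, b = 1e-8, 4.0
residuals = []
for modes in (25, 50, 100, 200):               # progressively finer resolution
    npts = 4 * modes
    x = np.linspace(-0.5, 0.5, npts)
    H = 1.0 / (1.0 + 2j * np.pi * x) + noise * rng.standard_normal(npts)
    A = np.exp(-2j * np.pi * np.outer(x, np.arange(modes)) / b)
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    keep = s > 1e-12 * s[0]
    c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ H) / s[keep])
    residuals.append(np.max(np.abs(A @ c - H)))  # stalls near `noise`
```

At coarse resolution the residual is dominated by the causal Fourier series approximation error; once that error drops below the noise level, the residual stops improving, which is exactly the signature used above to estimate an unknown noise level.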

5.2Finite Element Model of a DRAM package

In this example we use a scattering matrix generated by finite element modeling (FEM) of a DRAM package (courtesy of Micron). The package contains input and output ports. The simulation was performed for equally spaced frequency points ranging from = 0 to GHz. We expect the data to be causal but, perhaps, with some error due to the limited accuracy of numerical simulations. For simplicity, we apply our method to the -parameter rather than the entire -matrix. The selected -parameter relates the output signal at port to the input signal at port as a function of frequency . Our approach can be extended to the entire -matrix by applying the method to every element of the scattering matrix . The graph of is shown in Figure 5.

Figure 5: H(x) with its causal Fourier continuations {\mathcal C}^C(H) and {\mathcal C}^R(H) in this example with M=100, N=199, b=1.1.

In this test example, the number of samples in is fixed at (by symmetry), so to construct a causal Fourier continuation we use Fourier coefficients and can only vary the length of the extended region. Because the transfer function has oscillations and high slopes in the boundary region, we have to use a smaller value of . We find to be optimal by trying a few different values of . Slightly higher values of give similar results, while using too large a value does not produce a small enough error, most likely because the resolution is fixed and more data points would be needed to construct a causal continuation on a larger domain with adequate resolution.

The errors with the above parameters are shown in Figure 6. The errors have the same order of . This implies that the dispersion relations are satisfied within an error on the order of . Since the data come from finite element simulations, we expect their accuracy to be on the order of or , at least. To verify whether the relatively large reconstruction errors come from noise in the data or from the causal Fourier series approximation error, we use the data with samples, then every other and every fourth sample, i.e. with , and samples, to find a model (Equation 15) of the form using the least squares method. We find that both and decay approximately as . These models were extrapolated to higher values of as shown in Figure 7, where we plot the norms of the actual reconstruction errors together with the fitted model curves. The extrapolated error curves indicate that the error may be decreased further if more frequency samples are available. In this case, we say that the transfer function satisfies the dispersion relations within , i.e. the transfer function is causal within an error of at most , and causality violations, if present, are smaller than or at most on the order of . To determine the actual noise level in the data, a higher resolution of the frequency responses would be needed.
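The model-fitting step above amounts to a log-log least-squares fit of the error norms against the resolution, followed by extrapolation. The sketch below illustrates the mechanics; the error values are synthetic, generated from the fit ||E_R||_2 ~ 11.7 M^{-2.9} reported for this example (Figure 7), rather than the actual Micron package data.

```python
import numpy as np

M_vals = np.array([25.0, 50.0, 100.0])          # the three available resolutions
errs = 11.7 * M_vals ** -2.9                    # synthetic stand-in for ||E_R||_2
# fit log(err) = log(C) - p*log(M) by linear least squares
slope, intercept = np.polyfit(np.log(M_vals), np.log(errs), 1)
C, p = np.exp(intercept), -slope                # recovered model  C * M^{-p}
predicted_at_400 = C * 400.0 ** -p              # extrapolation to higher M
```

A decaying fit (p > 0) whose extrapolation keeps falling indicates that the residual is still dominated by the Fourier series approximation error, so more frequency samples would reduce it further.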

For comparison, using the periodic polynomial continuation method [32] with a th degree polynomial applied to the transfer function in this example, the error in the approximation of is about , which is two orders of magnitude larger than with the spectral continuation; moreover, that error is not uniform and is largest in the boundary region.

Figure 6: Errors E_R(x) in approximation of S(100,1) in this example with N=199, M=100 and b=1.1. Errors E_I(x) have the same order.
Figure 7: Errors ||E_R(x)||_2 in approximation of S(100,1) in this example with N=199, 99, 49 and M=100, 50, 25, respectively, and b=1.1, together with their least squares fit ||E_R||_2\sim 11.7M^{-2.9} and extrapolation to higher values of M. For ||E_I(x)||_2 we find ||E_I||_2\sim 23.5M^{-3.1}.

5.3Transmission line example

We consider a uniform transmission line segment with the following per-unit-length parameters: nH/cm, pF/cm, /cm, and length cm. The frequency is sampled on the interval GHz. This example was used in [23] to analyze causality using the generalized dispersion relations with subtractions. The scattering matrix of the structure was computed using the Matlab function rlgc2s. We then consider the element . Due to limitations of the model used in the function rlgc2s, we cannot obtain the value of the transfer function at (DC), but we can sample it from any small nonzero frequency. It is typical for systems to have the frequency response missing at low frequencies, which can occur in both measurements and simulations. However, the value of at is finite, because the magnitude of must be bounded by . Hence, we have a bandpass case. Once we choose the number of points and the corresponding , we can sample frequencies from . We use GHz. Using the symmetry conditions, we reflect the values of the transfer function to negative frequencies as in the baseband case considered above. We know that equals at , but is to be computed. Since we do not have a value of at , the frequencies at which the values of the transfer function are available have a gap at . Nevertheless, our approach is still applicable, since it does not require the data points to be equally spaced. In this example, however, we get better results with smaller . Alternatively, we can use polynomial interpolation to find a value of the real part of at ; the value of the imaginary part is by symmetry. This approach is not very accurate, since it does not take causality into account when the polynomial interpolant of is constructed, and it produces a larger error compared with simply skipping the value at and using the spectral continuation approach directly. With our technique and , , we are able to construct a causal Fourier continuation accurate to within .
The graphs of together with their causal Fourier continuations are presented in Figure 8. The agreement for is similar. With smaller values of , it is enough to use smaller values of and in the same proportion to obtain the same order of accuracy. For example, to get an error in the approximation of the transfer function on the original interval of the order of with GHz, it is enough to use , , while for GHz one could use , . In this case, only more data points would be needed to have the same resolution on the longer domain.
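Because the construction is a least-squares collocation rather than an FFT, the sample points need not be equispaced and may have a gap at DC. The following sketch of this bandpass situation uses an illustrative causal model function in place of the rlgc2s data: the DC sample is omitted, the causal fit is built from the remaining points, and the series then predicts the missing value at zero frequency.

```python
import numpy as np

modes, b = 250, 4.0
xp = np.linspace(0.01, 0.5, 500)               # positive frequencies only, no DC
x = np.concatenate([-xp[::-1], xp])            # symmetric grid with a gap at x = 0
H = 1.0 / (1.0 + 2j * np.pi * x)               # illustrative causal data; H(0) = 1

# causal fit via truncated SVD on the nonuniform grid
A = np.exp(-2j * np.pi * np.outer(x, np.arange(modes)) / b)
U, s, Vh = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-12 * s[0]
c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ H) / s[keep])

H0 = c.sum()                                   # the series evaluated at x = 0
```

Since the continuation is spectrally accurate and the gap is narrow, the predicted DC value agrees closely with the true limit, without any separate (causality-ignoring) polynomial interpolation step.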

Figure 8: Transfer function H(x) and its causal Fourier continuations {\mathcal C}^C(H) and {\mathcal C}^R(H) in this example with M=1500, N=3000, b=4 (real parts are shown).

The computations presented in this paper were done using Matlab on a MacBook Pro with a 2.93 GHz Intel Core 2 Duo processor and 4 GB RAM. The CPU times (in seconds) for computing the SVD and the minimum norm solution of the linear system using the least squares method, as well as the overall CPU time for constructing a causal Fourier continuation, with varying from to and , are shown in Table 3.

Table 3: CPU times (in seconds) for computing the SVD, finding the minimum norm solution using the least squares method, and the overall CPU time for constructing the causal Fourier continuation, for varying from to with .

N      M      CPU time SVD    CPU time minnmsvd    CPU time continuation
100    50     0.0091          0.0141               0.0299
200    100    0.1910          0.0340               0.0781
500    250    0.9626          0.1646               0.4040
1000   500    3.9779          1.5061               3.9063
2000   1000   25.5921         9.8634               35.9250
3000   1500   68.6258         48.4047              117.6096
Figure 9: Errors E_R(x) in this example with M=1500, N=3000, b=4 and a non-causal Gaussian perturbation with a=10^{-6}, \sigma=10^{-2}/6, centered at x_0=0.1. The errors E_I(x) have similar spikes at the causality violation locations.

Now we impose a Gaussian perturbation (Equation 21) on the transfer function, as in the example of Section 5.1, with amplitude to model a causality violation located at , with standard deviation , so that the “support” of the Gaussian is approximately , or of the entire bandwidth. This results in the error having very pronounced spikes, shown in Figure 9, of magnitude in both and at the locations of the perturbation, while, as before, the error on the rest of the interval is approximately times smaller. We find that we are able to reliably detect a causality violation with amplitude down to .

In [23], excellent work was done utilizing the generalized dispersion relations with subtractions to check the causality of raw frequency responses. The error analysis performed there provides explicit frequency-dependent error estimates that account for the finite frequency resolution (the discretization error in computing the Hilbert transform integral) and the finite bandwidth (the truncation error due to using only a finite frequency interval instead of the entire frequency axis), and separates the causality violations from the numerical discretization and domain truncation errors. It is shown that by using more subtractions, one can make the truncation error arbitrarily small, but the discretization error does not go away, since it is fixed by the resolution of the given frequency responses. The authors report that if causality violations are too small (with amplitude smaller than ) and smooth, using more subtractions may not affect the overall error, since it is then dominated by the discretization error. In addition, even with more subtraction points placed in the boundary regions close to , the truncation error always diverges at the bandwidth edges due to the missing out-of-band samples. In the current work, we are able to remove boundary artifacts and detect very small, localized, infinitely smooth (Gaussian) causality violations. The resolution of the data is also important, since the number of collocation points at which the frequency responses are available dictates the number of Fourier coefficients, with , and hence the sensitivity of the method to causality violations. For instance, in the above example, which was also used in [23] (but we use a smaller amplitude and a Gaussian with a narrower “support”), with ( points in ) we can detect a causality violation with , whereas and are capable of detecting violations with and , respectively. Similarly to [25], we also find that it is more difficult to detect wide causality violations.
As the “support” of the Gaussian increases, the spikes in the reconstruction errors become wider and shorter, and eventually it is not possible to determine the location of the causality violation, since the error at the violation locations has the same order as the error on the rest of the interval. For , the spikes in the errors are still observable, whereas for the bigger value , the reconstruction errors are uniform. Increasing the resolution of the data does not decrease the reconstruction errors, which indicates the presence of causality violations of the order of the reconstruction errors.

5.4Delayed Gaussian example

Here we test the performance of the method on the example of a delayed Gaussian function that was used in [30] to check the causality of interconnects through the minimum-phase and all-pass decomposition. We consider the impulse response function modeled by a Gaussian with peak center and standard deviation :

If , the Gaussian function is even, so it cannot be causal. As increases, the center of the peak moves to the right, and for the impulse response function can be gradually made causal. The corresponding transfer function is

which is a periodic function damped by an exponentially decaying factor. We consider two regimes. In the first, the value of is small, so that the transfer function is non-causal. In the second, the delay is big enough to make the transfer function causal. We fix , sample from the interval Hz, and first consider the case . The real part of is shown in Figure 10 together with its Fourier continuations. One can clearly see that the Fourier continuations do not match well. Instead, there are visible high-frequency oscillations throughout the domain, as confirmed by analyzing the reconstruction errors . With , , the magnitude of the errors is about (see Figure 11). When is increased in the proportion , the error increases slightly. For example, with , , the errors are about . Varying the length of the extended domain does not decrease the magnitude of the error. This is in agreement with the error estimates ( ?), ( ?), since in this case the reconstruction error is dominated by the causality violations. The results for are similar.
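The two regimes can be quantified directly in the time domain: the fraction of the impulse response mass at t < 0 measures how non-causal the delayed Gaussian is. A short sketch (with sigma normalized to 1; the grid and delays are illustrative):

```python
import numpy as np

sigma = 1.0
t = np.arange(-20.0, 20.0, 1e-3)               # fine time grid around the peak
fractions = []
for t_d in (0.1 * sigma, 6.0 * sigma):
    h = np.exp(-((t - t_d) ** 2) / (2 * sigma ** 2))   # delayed Gaussian h(t)
    fractions.append(h[t < 0].sum() / h.sum())         # mass at negative times
```

For the small delay roughly 46% of the impulse response lives at negative times, so the transfer function is strongly non-causal, while for a delay of six standard deviations the negative-time mass is below 10^{-9}, i.e. causal to within an error far smaller than typical data accuracy.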

Figure 10: Noncausal transfer function H(w,t_d) in this example and its Fourier continuations {\mathcal C}^C(H) and {\mathcal C}^R(H) with M=250, N=500, b=2, t_d=0.1\sigma (only real parts are shown).
Figure 11: Errors E_R(x) between the noncausal transfer function H(w,t_d) and its causal Fourier continuations {\mathcal C}^C(H) and {\mathcal C}^R(H) in this example with M=250, N=500, b=2, t_d=0.1\sigma. Errors E_I(x) have the same order.

In the second case, we set , which should give a causal transfer function. The transfer function together with its Fourier continuations is shown in Figure 12. Both reconstruction errors and drop to the order of (see Figure 13). In this case, the transfer function is causal and infinitely smooth, so the error in approximating such a function with a causal Fourier series decays quickly even for moderate values of .

We observe a gradual change of the non-causal Gaussian function into a causal function as increases. Writing , we vary the values , , , and , and find that the norms of both errors and are , , and , respectively, i.e. the error decays as