Spectra of Empirical Auto-Covariance Matrices

R. Kühn and P. Sollich
Department of Mathematics, King's College London, UK
July 7, 2019
Abstract

We compute spectra of sample auto-covariance matrices of second order stationary stochastic processes. We look at a limit in which both the matrix dimension N and the sample size M used to define empirical averages diverge, with their ratio α = N/M kept fixed. We find a remarkable scaling relation which expresses the spectral density ρ(λ) of sample auto-covariance matrices for processes with dynamical correlations as a continuous superposition of appropriately rescaled copies of the spectral density ρ̂_α(λ) for a sequence of uncorrelated random variables. The rescaling factors are given by the Fourier transform Ĉ(ω) of the auto-covariance function of the stochastic process. We also obtain a closed-form approximation for the scaling function ρ̂_α(λ). This depends on the shape parameter α, but is otherwise universal: it is independent of the details of the underlying random variables, provided only they have finite variance. Our results are corroborated by numerical simulations using auto-regressive processes.

pacs:
02.50.-r,05.10.-a

The present investigation concerns spectral properties of sample auto-covariance matrices derived from time series, and in particular the way they are affected by finite sample fluctuations. Having a theory that would quantify such effects analytically would clearly be useful for e.g. the empirical analysis of stochastic processes. However, such theoretical understanding is at present almost entirely lacking – in marked contrast to the situation for the closely related problem of sample covariance matrices of a multi-dimensional data set estimated from finitely many independent measurements.

From an abstract point of view, both problem classes belong to random matrix theory Mehta (2004); Akemann et al. (2011). In the case of sample covariance matrices, the random matrix ensemble in question is the well known Wishart-Laguerre ensemble Wishart (1928), which has been widely studied for several decades, and for which numerous results are available. The spectral problem for example was solved in the 60s by Marčenko and Pastur Marčenko and Pastur (1967); typical fluctuations of the largest eigenvalue of Wishart matrices were shown Johnstone (2001) to follow a Tracy-Widom distribution Tracy and Widom (1996), and large deviation properties of both the largest Vivo et al. (2007) and smallest Katzav and Pérez Castillo (2010) eigenvalue have recently been characterized. Numerous variants of the original Wishart-Laguerre ensemble have been studied in the literature over the years (e.g. Burda et al. (2005); Akemann and Vivo (2008); Biely and Thurner (2008); Recher et al. (2010)), and applications formulated in a variety of fields, including multivariate statistics Muirhead (1982), wireless communication Tulino and Verdú (2004) and the analysis of cross-correlations in financial data Laloux et al. (1999); Plerou et al. (1999). For a more complete recent overview, we refer to Akemann et al. (2011).

Due to the temporal structure of the underlying signals in the problem of sample auto-covariance matrices of time series, the ensemble of random matrices describing this problem is radically different from the Wishart-Laguerre ensemble, and much less is known about their spectral properties. The existence of the limiting spectral density of sample auto-covariance matrices of moving-average processes Hamilton (1994) with i.i.d. driving (of both finite and infinite order) has in fact been established only very recently Basak et al. (2011); the corresponding existence proof for the closely related problem of random Toeplitz matrices with i.i.d. entries is also only a few years old Bryc et al. (2006). We are not aware of closed form expressions for limiting spectral densities for these cases – whether exact, or approximate but of a quality that would allow meaningful use for e.g. time series analysis. The purpose of the present letter is to report recent progress that fills this gap.

We consider stationary zero-mean processes {x_t}. These could be discrete-time processes to begin with, or sampled from continuous-time processes at discrete equidistant time steps, in which case x_μ = x(t_μ). We are interested in the spectrum of empirical auto-covariance matrices C = (C_ij), which are estimated by measurements on sequences of N + M samples, where N is the matrix dimension and M the sample size used for the empirical average. There are several (non-equivalent) ways to define the elements of C. Our choice is

C_ij = (1/M) Σ_{μ=1}^{M} x_{i+μ} x_{j+μ} ,   i, j = 1, …, N .   (1)

Sample auto-covariance matrices of this form constitute randomly perturbed Toeplitz matrices Grenander and Szegö (1984). Note that our choice differs from the ones in Basak et al. (2011), which are simpler for being constructed as random Toeplitz matrices from the start.
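To make the construction concrete, here is a minimal numerical sketch of such an estimator, assuming the convention C_ij = (1/M) Σ_{μ=1}^{M} x_{i+μ} x_{j+μ} built from a single realisation of length N + M (the indexing convention is our assumption):

```python
import numpy as np

def sample_autocov(x, N, M):
    """Empirical auto-covariance matrix, assuming the convention
    C[i, j] = (1/M) * sum_{mu=1}^{M} x[i+mu] * x[j+mu],
    built from a single realisation x of length >= N + M."""
    X = np.array([x[i + 1 : i + 1 + M] for i in range(N)])  # N x M window matrix
    return X @ X.T / M

rng = np.random.default_rng(0)
N, M = 4, 100_000
x = rng.standard_normal(N + M)
C = sample_autocov(x, N, M)
# For i.i.d. unit-variance data the population auto-covariance is the
# identity, so C should be close to it when M >> N.
print(np.round(C, 2))
```

Note that the ensemble average of C_ij depends on i and j only through i − j, which is the sense in which C is a randomly perturbed Toeplitz matrix.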

Our main results are the following. We find a remarkable scaling relation which expresses the spectral density ρ(λ) of sample auto-covariance matrices for processes with dynamical correlations as a continuous superposition of rescaled copies of the spectral density ρ̂_α(λ) for a sequence of uncorrelated, i.i.d. random variables. We also obtain a simple closed-form expression for ρ̂_α(λ) that provides an excellent approximation to numerically simulated spectra.

The spectral density ρ_N(λ) of an N × N matrix C is evaluated in terms of its resolvent as

ρ_N(λ) = (1/πN) Im Tr (λ_ε 1 − C)^{−1}   (2)

Here 1 is the N × N unit matrix and λ_ε = λ − iε, the limit ε → 0+ being understood. We follow Edwards and Jones Edwards and Jones (1976) and express the trace of the resolvent in terms of a Gaussian integral as

ρ_N(λ) = −(2/πN) Im (∂/∂λ) ⟨ln Z_N⟩   (3)

with

Z_N = ∫ ∏_{i=1}^{N} du_i exp{ −(i/2) Σ_{i,j} u_i (λ_ε δ_ij − C_ij) u_j }   (4)

and angled brackets indicating an ensemble average. This average can be evaluated using replicas. Analogous calculations in random matrix theory Edwards and Jones (1976) suggest that the final results will exhibit the structure of a replica-symmetric high-temperature solution, and hence that an annealed calculation (which replaces ⟨ln Z_N⟩ by ln ⟨Z_N⟩ in (3)) will provide exact results. This is the approach we adopt here.
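Before turning to the disorder average, the resolvent representation itself is easy to check numerically: for a symmetric test matrix, (1/πN) Im Tr (λ_ε 1 − C)^{−1} with λ_ε = λ − iε reproduces the ε-broadened eigenvalue density. A minimal sketch (the test matrix here is generic, not an auto-covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 50, 1e-3

# Generic symmetric test matrix standing in for the auto-covariance matrix.
A = rng.standard_normal((N, N))
C = (A + A.T) / 2

lam = 0.7  # evaluation point
resolvent = np.linalg.inv((lam - 1j * eps) * np.eye(N) - C)
rho_resolvent = resolvent.trace().imag / (np.pi * N)

# Same quantity from the eigenvalues: a sum of Lorentzians of width eps.
evals = np.linalg.eigvalsh(C)
rho_direct = (eps / ((lam - evals) ** 2 + eps**2)).sum() / (np.pi * N)

print(rho_resolvent, rho_direct)  # identical up to numerical error
```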

Inserting the definition Eq. (1) into Eq. (4), one notes that Z_N depends on the disorder, i.e. on the x_μ, only through the variables

v_μ = (1/√N) Σ_{i=1}^{N} u_i x_{i+μ} ,   μ = 1, …, M   (5)

Assuming that the true auto-covariance C(τ) = ⟨x_t x_{t+τ}⟩ is absolutely summable, we can argue from the central limit theorem (CLT) for weakly dependent random variables that the v_μ will be correlated Gaussian variables with ⟨v_μ⟩ = 0, and covariance matrix C^{(v)} whose elements are given in terms of C(τ) as

C^{(v)}_{μν} = (1/N) Σ_{i,j} u_i u_j C(μ − ν + i − j)   (6)
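This CLT argument can be checked by Monte Carlo, with the normalisation v_μ = (1/√N) Σ_i u_i x_{i+μ} (our assumption) and an AR(1) process whose auto-covariance C(τ) = a^{|τ|}/(1 − a²) is known in closed form:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(4)

# For a fixed vector u, the windowed sums v_mu of an AR(1) process should
# have covariance (1/N) * sum_{ij} u_i u_j C(mu - nu + i - j), with
# C(tau) = a^|tau| / (1 - a^2) the true auto-covariance.
N, M, a, R = 30, 5, 0.5, 20000
u = rng.standard_normal(N)
kern = a ** np.arange(60)  # truncated MA(infinity) representation of AR(1)

def cov_true(tau):
    return a ** abs(tau) / (1 - a**2)

V = np.empty((R, M))
for r in range(R):
    xi = rng.standard_normal(N + M + len(kern))
    x = np.convolve(xi, kern, mode="valid")  # stationary AR(1) sample path
    V[r] = sliding_window_view(x, N)[:M] @ u / np.sqrt(N)

cov_mc = np.cov(V.T, bias=True)
cov_th = np.array([[sum(u[i] * u[j] * cov_true(mu - nu + i - j)
                        for i in range(N) for j in range(N)) / N
                    for nu in range(M)] for mu in range(M)])
print(np.max(np.abs(cov_mc - cov_th)))  # small Monte Carlo error
```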

The disorder average ⟨Z_N⟩ is thus a Gaussian integral, which can be performed to give

⟨Z_N⟩ = ∫ ∏_{i=1}^{N} du_i exp{ −(iλ_ε/2) Σ_i u_i² − (1/2) Tr ln(1 − iα C^{(v)}) } ,   α = N/M   (7)

The matrix C^{(v)} being Toeplitz, we will use Szegö's theorem Grenander and Szegö (1984) to evaluate the 'spectral sum' in Eq. (7). Given that our sequence of M × M matrices doesn't fully fit the assumptions of the standard theory, in that the matrix elements themselves depend on the matrix dimension, we expect this to be only an approximation; it should, however, become exact in the limit of large matrices.
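Szegö's theorem is easy to illustrate numerically. The sketch below uses a generic banded symmetric Toeplitz matrix (not the specific matrices of the derivation): spectral averages converge to averages of the symbol over frequency.

```python
import numpy as np

# Banded symmetric Toeplitz matrix with c_0 = 1 on the diagonal and
# c_1 = a on the two neighbouring diagonals; its symbol is
# f(w) = 1 + 2*a*cos(w).
N, a = 400, 0.3
T = np.eye(N) + a * (np.eye(N, k=1) + np.eye(N, k=-1))

evals = np.linalg.eigvalsh(T)
w = 2 * np.pi * np.arange(N) / N
symbol = 1 + 2 * a * np.cos(w)

# Szegö: averages of test functions over the spectrum converge to the
# corresponding frequency averages of the symbol; compare the first two
# spectral moments.
print(np.mean(evals), np.mean(symbol))        # both equal c_0 = 1
print(np.mean(evals**2), np.mean(symbol**2))  # agree up to O(1/N)
```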

To proceed, we need Fourier representations of the matrices involved, and we will have to keep track of finite-N, finite-M expressions in what follows. Assuming N to be odd, we have

(8)

for the (N-dependent) matrix elements, with

(9)

where ω_k = 2πk/N and k = −(N−1)/2, …, (N−1)/2. Here

Ĉ(ω) = Σ_{τ=−∞}^{∞} C(τ) e^{−iωτ}   (10)

is the Fourier transform of the true auto-covariance function of the underlying process. Truncating the sum in Eq. (10) at |τ| ≤ (N−1)/2 will create negligible errors in the large-N limit if Σ_τ |C(τ)| exists, as already required when appealing to the CLT for the statistics of the v_μ above. Restricting the ω values to the discrete grid with spacing 2π/N approximates C^{(v)} by its cyclified version. In Szegö's terminology, the matrix has Ĉ(ω) as its (ω-grid) symbol, and Szegö's approximation for the spectral sum reads

(11)

The symmetry C(−τ) = C(τ) entails Ĉ(−ω) = Ĉ(ω), thus Ĉ(ω_k) = Ĉ(ω_{−k}). Next, one extracts the u-dependence (via the Fourier amplitudes of the u_i) from the evaluation of (11), using δ-functions and their Fourier representations. The remaining integrals then become Gaussian, and we can express ⟨Z_N⟩ as

(12)

The elements of the matrix in (12) are given in terms of the Ĉ(ω_k). We have combined modes with k and −k, and neglected as subleading the fact that the resulting prefactors differ for the k = 0 mode.

Figure 1: (Colour online) Spectral density for sample auto-covariance matrices of i.i.d. signals (green); analytic approximation Eq. (24) for ρ̂_α (red). The Marčenko-Pastur law (blue, dashed) for the same value of α is also shown for comparison.

We use residues to evaluate the integrals in (12); after rescaling this yields

(13)

with the Toeplitz matrices in question now given by

(14)

In Eq. (13) we have introduced the short-hand

(15)

for the remaining integrals. As the notation indicates, these amount to averages over exponentially distributed random variables of unit mean. Hence, within our Szegö approximation, the original spectral problem for sample auto-covariance matrices is equivalent to that for the random Toeplitz matrices given by (14).

To make progress we use the fact that the matrices in (14), too, are Toeplitz, and approximate the spectral sum appearing in (13) in terms of Szegö's theorem,

(16)

with

(17)

for frequencies defined on a discrete grid, and the corresponding kernel given by

(18)

Next one extracts the relevant dependence from the spectral sum (16) by enforcing the defining relations using δ-functions and their Fourier representations. This enables one to perform the integrals using residues, much as for the integrals above, giving

(19)

with

(20)

The coupling via the kernel entails that the quantities in (20) for different frequencies are correlated. To proceed, we exploit the property that the kernel is rapidly oscillating, and sharply peaked at coinciding frequencies. The dominant contributions to the exponential in (20) at a given frequency therefore lie in a narrow interval around it. Approximating the kernel by a rectangular window on this interval, and using smoothness of Ĉ(ω) on that scale, we set

(21)

As the windows overlap, the quantities in (20) remain correlated. As a last step we ignore these residual correlations and substitute (21) in Eq. (20) to arrive at a closed-form approximation:

(22)
Figure 2: (Colour online) Logarithmic spectral densities of auto-covariance matrices for an AR2 process, comparing the scaling prediction Eq. (23) using the empirical scaling function (see text; black full curve) with that based on the analytic approximation (24) for the scaling function (red full curve), and simulations (green dashed curve). The double-peaked and single-peaked sets of curves correspond to two different parameter settings.

For the spectral density (3) in the thermodynamic limit we then get

ρ(λ) = ∫_{−π}^{π} (dω/2π) Ĉ(ω)^{−1} ρ̂_α(λ/Ĉ(ω))   (23)

in which

(24)

with ρ̂_α(λ) obtained from (22) in terms of an incomplete Γ-function: for λ > 0,

(25)

Note that the scaling function ρ̂_α(λ) has an independent meaning as the spectral density of the empirical auto-covariance matrix (at the same value of the shape parameter α) for a sequence of uncorrelated data, for which Ĉ(ω) is constant. Eq. (23) thus constitutes a remarkable scaling relation, relating the spectral density of sample auto-covariance matrices for processes with dynamical correlations to that of sample auto-covariance matrices for i.i.d. sequences of random data.
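Assuming the scaling relation takes the form ρ(λ) = ⟨ρ̂_α(λ/Ĉ(ω))/Ĉ(ω)⟩_ω, with ω averaged uniformly over frequency (our reading of the superposition described above), it can be sketched numerically: estimate ρ̂_α from i.i.d. data and superpose rescaled copies using an AR(1) spectrum Ĉ(ω) = 1/(1 − 2a cos ω + a²) as an example.

```python
import numpy as np

rng = np.random.default_rng(2)

def iid_autocov_eigenvalues(N, M, n_matrices):
    """Pooled eigenvalues of empirical auto-covariance matrices of i.i.d.
    unit-variance data (same windowing convention as our earlier sketch)."""
    out = []
    for _ in range(n_matrices):
        x = rng.standard_normal(N + M)
        X = np.array([x[i + 1 : i + 1 + M] for i in range(N)])
        out.append(np.linalg.eigvalsh(X @ X.T / M))
    return np.concatenate(out)

# Empirical scaling function rho_hat_alpha as a histogram, alpha = N/M = 0.5.
N, M = 20, 40
hist, edges = np.histogram(iid_autocov_eigenvalues(N, M, 200),
                           bins=60, range=(0.0, 8.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def rho_hat(lam):
    return np.interp(lam, centers, hist, left=0.0, right=0.0)

# Fourier transform of the AR(1) auto-covariance: C_hat(w) = 1/|1 - a e^{iw}|^2.
a = 0.5
w = 2 * np.pi * np.arange(256) / 256
C_hat = 1.0 / (1.0 - 2.0 * a * np.cos(w) + a**2)

# Assumed form of the scaling relation: a superposition of rescaled copies,
# rho(lam) = < rho_hat(lam / C_hat(w)) / C_hat(w) >_w.
lam = np.linspace(0.0, 40.0, 2000)
rho = np.mean([rho_hat(lam / c) / c for c in C_hat], axis=0)
print(rho.sum() * (lam[1] - lam[0]))  # normalisation survives the rescaling
```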

We have checked our results using simulations of sets of 5000 auto-covariance matrices. Fig. 1 compares simulations for a sequence of i.i.d. variables with our prediction (24) for ρ̂_α, and with the Marčenko-Pastur law at the same α. Fig. 2 looks at auto-regressive AR2 processes of the form x_t = a_1 x_{t−1} + a_2 x_{t−2} + ξ_t, with i.i.d. innovations ξ_t, and parameters chosen to ensure stationarity. It compares simulations with scaling based on our analytic approximation (24) for ρ̂_α, and with scaling using an empirical scaling function determined via simulation, an example of which is shown in Fig. 1.
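A simulation of this type is straightforward to sketch; the coefficients a_1 = 0.3, a_2 = 0.2 and the matrix sizes below are illustrative stand-ins, since the letter's parameter values are not reproduced in this extract.

```python
import numpy as np

rng = np.random.default_rng(3)

def ar2_series(a1, a2, n, burn=500):
    """Sample path of x_t = a1*x_{t-1} + a2*x_{t-2} + xi_t with i.i.d.
    standard-normal innovations xi_t, after discarding a burn-in."""
    x = np.zeros(n + burn)
    xi = rng.standard_normal(n + burn)
    for t in range(2, n + burn):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + xi[t]
    return x[burn:]

N, M = 20, 40       # shape parameter alpha = N/M = 0.5
a1, a2 = 0.3, 0.2   # stationary choice: characteristic roots inside unit circle
evals = []
for _ in range(200):
    x = ar2_series(a1, a2, N + M)
    X = np.array([x[i + 1 : i + 1 + M] for i in range(N)])
    evals.append(np.linalg.eigvalsh(X @ X.T / M))
evals = np.concatenate(evals)
# Matrices of this form are positive semi-definite, so the pooled spectrum
# sits on the non-negative half line; a histogram of `evals` is the
# analogue of the simulation curves in Fig. 2.
print(evals.min(), evals.max())
```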

Whereas the scaling appears to be exact when the empirical scaling function is used, our analytic result for the scaling function is not. It nevertheless produces quantitatively accurate results, even for values of α as large as 0.8. We have elements of an independent proof of the scaling relation, which we intend to publish in a forthcoming paper. Judging from the impact that analogous results for (Wishart-Laguerre) sample covariance matrices have had, we believe our results hold significant potential for applications in a variety of fields, including time-series analysis, information theory, and signal processing.

Acknowledgement: It is a pleasure to thank K. Anand, L. Dall’Asta and P. Vivo for illuminating discussions on the occasion of a visit of RK to the ICTP at Trieste, which triggered the present investigation.

References

  • Mehta (2004) M. L. Mehta, Random Matrices, 3rd Edition (Elsevier, Amsterdam, 2004).
  • Akemann et al. (2011) G. Akemann, J. Baik, and P. Di Francesco, eds., The Oxford Handbook of Random Matrix Theory (Oxford University Press, Oxford, 2011).
  • Wishart (1928) J. Wishart, Biometrika 20 A, 32 (1928).
  • Marčenko and Pastur (1967) V. A. Marčenko and L. A. Pastur, Math. USSR-Sb. 1, 457 (1967).
  • Johnstone (2001) I. M. Johnstone, Ann. Stat. 29, 295 (2001).
  • Tracy and Widom (1996) C. A. Tracy and H. Widom, Comm. Math. Phys. 177, 727 (1996).
  • Vivo et al. (2007) P. Vivo, S. N. Majumdar, and O. Bohigas, J. Phys. A 40, 4317 (2007).
  • Katzav and Pérez Castillo (2010) E. Katzav and I. Pérez Castillo, Phys. Rev. E 82, 040104 (4pp) (2010).
  • Burda et al. (2005) Z. Burda, J. Jurkiewicz, and B. Waclaw, Acta Phys. Polon. B 36, 2641 (2005).
  • Akemann and Vivo (2008) G. Akemann and P. Vivo, J. Stat. Mech. 2008, P09002 (31pp) (2008).
  • Biely and Thurner (2008) C. Biely and S. Thurner, Quant. Fin. 8, 705 (2008).
  • Recher et al. (2010) C. Recher, M. Kieburg, and T. Guhr, Phys. Rev. Lett. 105, 244101 (2010).
  • Muirhead (1982) R. J. Muirhead, Aspects of Multivariate Statistical Theory (Wiley, New York, 1982).
  • Tulino and Verdú (2004) A. M. Tulino and S. Verdú, Random Matrix Theory and Wireless Communications (now Publishers Inc, Hannover, MA, 2004).
  • Laloux et al. (1999) L. Laloux, P. Cizeau, J.-P. Bouchaud, and M. Potters, Phys. Rev. Lett. 83, 1467 (1999).
  • Plerou et al. (1999) V. Plerou, P. Gopikrishnan, B. Rosenow, L. Amaral, and H. E. Stanley, Phys.Rev. Lett. 83, 1471 (1999).
  • Hamilton (1994) J. D. Hamilton, Time Series Analysis (Princeton University Press, Princeton, NJ, 1994).
  • Basak et al. (2011) A. Basak, A. Bose, and S. Sen, preprint arXiv:1108.3147v1 [math.PR], 41pp (2011).
  • Bryc et al. (2006) W. Bryc, A. Dembo, and T. Jiang, Ann. Prob. 34, 1 (2006).
  • Grenander and Szegö (1984) U. Grenander and G. Szegö, Toeplitz Forms and Their Applications (American Mathematical Society, Providence, RI, 1984).
  • Edwards and Jones (1976) S. F. Edwards and R. C. Jones, J. Phys. A 9, 1595 (1976).