Robust Covariance Estimation for Approximate Factor Models

Abstract

In this paper, we study robust covariance estimation under the approximate factor model with observed factors. We propose a novel framework that first estimates the joint covariance matrix of the observed data and the factors, and then uses it to recover the covariance matrix of the observed data. We prove that once the initial matrix estimator attains the optimal element-wise rate, the whole procedure yields an estimated covariance matrix with the desired properties. For data with only bounded fourth moments, we propose to use Huber loss minimization to obtain the initial joint covariance estimate. This approach is applicable to a much wider range of distributions, including sub-Gaussian and elliptical distributions. We also present an asymptotic result for Huber's M-estimator with a diverging parameter. The conclusions are demonstrated by extensive simulations and real data analysis.

Keywords:

Robust covariance matrix, Approximate factor model, M-estimator.

1 Introduction

The problem of estimating a covariance matrix and its inverse has been fundamental in many areas of statistics, including principal component analysis (PCA), linear discriminant analysis for classification, and undirected graphical models, just to name a few. The intense research in high dimensional statistics has contributed a stream of papers related to covariance matrix estimation, including sparse principal component analysis [?], sparse covariance estimation [?] and factor model analysis [?]. A strong interest in precision matrix estimation (undirected graphical models) has also emerged in the statistics community following the pioneering works in [?] and [?]. On the application side, many areas, such as portfolio allocation [?], have benefited from this continuing research.

In the high dimensional setting, the number of variables $p$ is comparable to or greater than the sample size $n$. This dimensionality poses a challenge to the estimation of covariance matrices. It has been shown in [?] that the empirical covariance matrix behaves poorly, and sparsity of leading eigenvectors is assumed to circumvent this issue. Following this work, a flourishing literature on sparse PCA has developed in-depth analysis and refined algorithms; see [?]. Taking a different route, [?] advocated thresholding as a regularization approach to estimate a sparse covariance matrix, in the sense that most entries of the matrix are close to zero.

Another challenge in high-dimensional statistics is that the measurements often do not have light tails, as large scale data are often obtained by using (bio)imaging technologies that have limited precision. Moreover, it is well known that financial returns exhibit heavy tails. These facts invalidate the fundamental assumption, popularly imposed in most of the aforementioned papers, that the data have sub-Gaussian or sub-exponential tails. Significantly relaxing this assumption requires new ideas and forms the subject of this paper.

Recently, motivated by the Fama-French model [?] from financial econometrics, [?] and [?] considered the covariance structure of the static approximate factor model, which models the covariance matrix by a low-rank signal matrix plus a sparse noise matrix. The same model will also be the focus of this paper. The model assumes the existence of several low-dimensional factors that drive a large panel of data $\{y_{it}\}_{i \le p,\, t \le n}$, that is,

$$y_{it} = b_i^\top f_t + u_{it}, \qquad i = 1,\dots,p,\ t = 1,\dots,n,$$

where the $f_t$'s are the common factors and the $b_i$'s are their corresponding factor loadings. The noises $u_{it}$, known as the idiosyncratic components, are uncorrelated with the factors $f_t$. Here the number of factors $K$ is relatively small compared with $p$ and $n$. We will treat $K$ as fixed independent of $p$ and $n$ throughout this paper. When the factors are known, this model subsumes the well-known CAPM model [?] and the Fama-French model [?]. When $f_t$ is unobserved, the model tries to recover the underlying factors driving the movements of the whole panel. Here the approximate factor model means that the covariance of $u_t = (u_{1t},\dots,u_{pt})^\top$ is sparse, including the strict factor model, in which it is diagonal, as a special case. In addition, "static" is in contrast to the dynamic model, which takes into account time lags and allows more general infinite-dimensional representations [?].

The covariance matrix $\Sigma$ of the outcome $y_t$ from the model can be written as

$$\Sigma = B \Sigma_f B^\top + \Sigma_u ,$$

where $B = (b_1, \dots, b_p)^\top$, consisting of $b_i^\top$ in each row, is the loading matrix, $\Sigma_f$ is the covariance of $f_t$ and $\Sigma_u$ is the sparse covariance matrix of $u_t$. Here we assume the processes of $f_t$ and $u_t$ are stationary so that $\Sigma_f$ and $\Sigma_u$ do not depend on time. When factors are unknown, [?] proposed applying PCA to obtain estimates of the low rank part $B\Sigma_f B^\top$ and the sparse part $\Sigma_u$. The crucial assumption is that the factors are pervasive, meaning that the factors have non-negligible effects on a large proportion of the dimensions of the outcomes. [?] gives more explanation from random matrix theory and aims to relax the pervasiveness assumption in applications such as risk management and estimation of the false discovery proportion. See [?] for more discussions on strong and semi-strong factors.

In this paper, we consider estimating $\Sigma$ with known factors. The main focus of the paper is on robustness instead of factor recovery. Under exponential tails of the factors and noises, [?] proposed performing thresholding on the estimate of $\Sigma_u$, obtained from the sample covariance of the residuals of the multiple regression of $y_t$ on $f_t$. The legitimacy of this approach hinges on the assumption that the tails of the factor and error distributions decay exponentially, which is likely to be violated in practice, especially in financial applications. Thus, the need to extend the applicability of this approach beyond well-behaved noise has driven further research such as [?], in which the data are assumed to follow an elliptical distribution [?].

This paper studies the model (Equation 1) under a much more relaxed condition: the random variables $f_t$ and $u_t$ only have finite fourth moments. The main observation that motivates our method is that the joint covariance matrix of the factors and the data supplies sufficient information to estimate both the low-rank part $B\Sigma_f B^\top$ and the sparse part $\Sigma_u$. To estimate the joint covariance matrix in a robust way, the classical idea that dates back to [?] proves to be vital and effective. The novelty here is that we let the robustification parameter diverge in order to control the bias in high-dimensional applications. The Huber loss function with a diverging parameter, together with other similar loss functions, has been shown to produce concentration bounds for M-estimators when the random variables have fat tails; see for example [?] and [?]. This point will be clarified in Sections 2 and 3. The M-estimators considered here have additional merits in asymptotic analysis, which is studied in Section 3.3.

This paper can be placed in the broader context of low rank plus sparse representation. In the past few years, robust principal component analysis has received much attention among statisticians, applied mathematicians and computer scientists. Their focus is on identifying the low rank component and the sparse component of a corrupted matrix [?]. However, the matrices considered therein do not come from random samples, and as a result, neither estimation nor inference is involved. [?] does consider the noisy decomposition, but still, it focuses more on identifying and separating the low rank part and the sparse part. In spite of connections with the robust PCA literature, such as the incoherence condition (see Section 2), this paper and its predecessors are more engaged in disentangling "true signal" from noise, in order to improve estimation of covariance matrices. In this respect, they bear more similarity to the covariance matrix estimation literature.

We make a few notational definitions before presenting the main results. For a general matrix $A$, the max norm of $A$, or the entry-wise maximum absolute value, is denoted by $\|A\|_{\max} = \max_{i,j} |A_{ij}|$. The operator norm of $A$ is $\|A\| = \lambda^{1/2}_{\max}(A^\top A)$, whereas the Frobenius norm is $\|A\|_F = (\sum_{i,j} A_{ij}^2)^{1/2}$. If $A$ is furthermore symmetric, we denote by $\lambda_j(A)$ its $j$-th largest eigenvalue, by $\lambda_{\max}(A)$ the largest one, and by $\lambda_{\min}(A)$ the smallest one. In the paper, $C$ is a generic constant that may differ from line to line in the assumptions and also in the derivation of our theories.

The paper is organized as follows. In Section 2, we present the procedure for robust covariance estimation when only finite fourth moments are assumed for both factors and noises, without any specific distribution family assumption. The theoretical justification is provided in Section 3. Simulations are carried out in Section 4 to demonstrate the effectiveness of the proposed procedure. We also conduct real data analysis on the portfolio risk of S&P stocks via the Fama-French model in Section 5. Technical proofs are deferred to the appendix.

2 Robust covariance estimation

Consider the factor model (Equation 1) again with observed factors. It can be written in the vector form

$$y_t = B f_t + u_t, \qquad t = 1, \dots, n,$$

where $y_t = (y_{1t},\dots,y_{pt})^\top$, the $f_t \in \mathbb{R}^K$ are the factors, $B = (b_1,\dots,b_p)^\top$ is the unknown loading matrix and $u_t$ is uncorrelated with the factors. We assume that $(f_t, u_t)$ have zero mean and are independent across $t$. A motivating example from economic and financial studies is the classical Fama-French model, where the $y_{it}$'s represent excess returns of stocks in the market and the $f_t$'s are interpreted as common factors driving the market. It is more natural to allow for weak temporal dependence such as mixing conditions, as in the work of [?]. Though possible, we assume independence in this paper for the sake of simplicity of analysis.

2.1 Assumptions

We now state the main assumptions of the model. Let $\Sigma_f$ be the covariance of $f_t$, and $\Sigma_u$ the covariance of $u_t$. A covariance decomposition shows that $\Sigma$, the covariance of $y_t$, comprises two parts,

$$\Sigma = B \Sigma_f B^\top + \Sigma_u .$$
We assume that $\Sigma_u$ is sparse and measure the sparsity level through

$$m_p = \max_{i \le p} \sum_{j \le p} |\Sigma_{u,ij}|^{q}, \qquad \text{for some } q \in [0, 1).$$

If $q = 0$, $m_p$ is defined to be $\max_{i \le p} \sum_{j \le p} \mathbb{1}\{\Sigma_{u,ij} \ne 0\}$, i.e. the exact sparsity. An intuitive justification of this sparsity measure stems from the modeling of the covariance structure: after taking out the common factors, the rest only has weak cross-sectional dependence. In addition, we assume that the eigenvalues of $\Sigma_u$, as well as those of $\Sigma_f$, are bounded away from $0$ and $\infty$. In the case of a degenerate $\Sigma_f$, we can always rescale the factors and reduce the number of observed factors to meet the requirement of a non-vanishing minimum eigenvalue of $\Sigma_f$. This leads to our first assumption.

Furthermore, it is observed by [?] that the factors are pervasive in the sense that the low rank part $B\Sigma_f B^\top$ of (Equation 4) is the dominant component of $\Sigma$; more specifically, the top $K$ eigenvalues grow linearly with $p$. This motivates the following assumption:

Note first that part (ii) of the assumption implies that $\lambda_{\max}(B^\top B) = O(p)$. So together with (i), the above assumption requires the $K$ leading eigenvalues to grow at the order of $p$. This assumption is satisfied by the approximate factor model, since by Weyl's inequality $\lambda_K(\Sigma) \ge \lambda_K(B\Sigma_f B^\top) - \|\Sigma_u\|$, if the main term is bounded from below. Furthermore, if we assume that each row of $B$ is iid from the same distribution with a finite second moment, it is not hard to see that $B^\top B / p$ satisfies such a condition. Consequently, it is natural to assume $\lambda_{\min}(B^\top B / p)$ is lower bounded for large $p$.

Assumption (ii) is related to the matrix incoherence condition. In fact, when $\lambda_K(\Sigma)$ grows linearly with $p$, the condition that $\|B\|_{\max}$ is bounded is equivalent to the incoherence of the leading eigenvectors of $\Sigma$ being bounded, which is standard in the matrix completion literature [?] and the robust PCA literature [?].

We now consider the moment assumption of random variables in model (Equation 1).

The independence assumption can be relaxed to mixing conditions, but we do not pursue this direction in the current paper. Our results will be established for the general distribution family with only bounded fourth moments, as specified in the above assumption.

2.2 Robust estimation procedure

The basic idea we propose is to estimate the covariance matrix of the joint vector $z_t = (f_t^\top, y_t^\top)^\top$ instead of just that of $y_t$, although the latter is our target. The covariance of this concatenated $(K+p)$-dimensional vector contains all the information we need to recover the low-rank and sparse structure. Observe that the covariance matrix can be expressed as

$$\Sigma_z := \operatorname{Cov}(z_t) = \begin{pmatrix} \Sigma_f & \Sigma_f B^\top \\ B \Sigma_f & B\Sigma_f B^\top + \Sigma_u \end{pmatrix} =: \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.$$

Any method which yields an estimate $\widehat\Sigma_z$ of $\Sigma_z$ as an initial estimator, or estimates of its blocks, could be used to infer the unknown $B\Sigma_f B^\top$ and $\Sigma_u$. Specifically, using the estimator $\widehat\Sigma_z$, we can readily obtain an estimator of $B\Sigma_f B^\top$ through the identity

$$B \Sigma_f B^\top = \Sigma_{21}\, \Sigma_{11}^{-1}\, \Sigma_{12}.$$

Subsequently, we can subtract the estimator of $B\Sigma_f B^\top$ from $\widehat\Sigma_{22}$ to obtain $\widehat\Sigma_u$. With the sparsity structure of $\Sigma_u$ assumed in Section 2.1, the well-studied thresholding [?] can be employed. Applying thresholding to $\widehat\Sigma_u$, we obtain a thresholded matrix $\widehat\Sigma_u^{\mathcal T}$ with guaranteed error in terms of max norm and operator norm. The final step is to add $\widehat\Sigma_u^{\mathcal T}$ to the estimator of $B\Sigma_f B^\top$ to produce the final estimator $\widehat\Sigma$ of $\Sigma$.

Due to the fact that we only have bounded fourth moments for the factors and errors, a natural idea for estimating the covariance matrix is to use a robust methodology. For the sake of simplicity, we assume $z_t$ has zero mean, so the covariance matrix of $z_t$ takes the form $\Sigma_z = \mathbb{E}[z_t z_t^\top]$. We shall use the M-estimator proposed in [?] and [?], where the authors proved a concentration property for the estimation of the population mean of a random variable with only a finite second moment. In essence, minimizing a suitable loss function, say the Huber loss, yields an estimator of the population mean with sub-Gaussian-type deviations of order $n^{-1/2}$ even under heavy tails. The Huber loss with robustification parameter $\alpha$ reads

$$\ell_\alpha(x) = \begin{cases} x^2, & |x| \le \alpha, \\ 2\alpha |x| - \alpha^2, & |x| > \alpha. \end{cases}$$

Choosing $\alpha \asymp v \sqrt{n/t}$, where $v$ is an upper bound on the standard deviation of the random variable $X$ of interest, [?] showed that the minimizer $\widehat\mu_\alpha = \arg\min_\mu \sum_{i=1}^n \ell_\alpha(X_i - \mu)$ satisfies

$$|\widehat\mu_\alpha - \mathbb{E} X| \le C\, v \sqrt{t/n}$$

with probability at least $1 - 2e^{-t}$, when $n \ge C' t$ for constants $C, C'$. This finite-sample result holds for any distribution with bounded second moments, including asymmetric distributions such as those generated by squares or products of random variables. The diverging parameter $\alpha$ is chosen to reduce the bias of the M-estimator for asymmetric distributions, and hence we require a finite second moment. In our covariance matrix estimation, we will take $X$ to be the square of a random variable or the product of two random variables. When applying this method to estimate $\Sigma_z$ element-wise, we expect to achieve element-wise errors of order $\sqrt{\log p / n}$, where the logarithmic term is incurred when we bound the errors uniformly over all entries. The formal result will be given in Section 3.
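To make the element-wise estimation concrete, here is a minimal Python sketch of a Huber mean estimate computed by iteratively reweighted least squares; the routine name `huber_mean`, the starting value, and the particular diverging choice of $\alpha$ below are illustrative assumptions rather than the exact tuning prescribed by the theory.

```python
import numpy as np

def huber_mean(x, alpha):
    """Minimize sum_t l_alpha(x_t - mu) over mu, with the Huber loss l_alpha,
    via iteratively reweighted least squares."""
    mu = np.median(x)                              # robust starting point
    for _ in range(100):
        r = x - mu
        # weight = psi_alpha(r) / r: 1 in the quadratic region, alpha/|r| outside
        w = np.where(np.abs(r) <= alpha, 1.0, alpha / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < 1e-10:
            break
        mu = mu_new
    return mu

# Example: robust estimate of E[X^2] (a variance entry) for heavy-tailed X.
rng = np.random.default_rng(0)
x = rng.standard_t(df=4, size=100)
v = np.sqrt(np.mean(x ** 4))                       # crude bound on the sd of X^2
alpha = v * np.sqrt(len(x) / np.log(len(x)))       # one possible diverging choice
print(huber_mean(x ** 2, alpha))
```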

In an earlier work, [?] proposed solving the estimating equation $\sum_{i=1}^n \psi\big(\alpha (X_i - \mu)\big) = 0$, where the strictly increasing function $\psi$ satisfies

$$-\log\big(1 - x + x^2/2\big) \;\le\; \psi(x) \;\le\; \log\big(1 + x + x^2/2\big).$$

For an appropriate choice of $\alpha$ depending on $n$ and the confidence level, [?] proved that the solution $\widehat\mu$ satisfies

$$|\widehat\mu - \mathbb{E} X| \le C\, v \sqrt{\frac{\log(1/\delta)}{n}}$$

with probability at least $1 - 2\delta$, when $n \ge C' \log(1/\delta)$ and where $v$ is an upper bound on the standard deviation. This M-estimator can also be used for covariance estimation, though it usually has a larger bias, as shown in [?].

The whole procedure can be presented in the following steps:

  • For each entry of the covariance matrix $\Sigma_z$, obtain a robust estimator by solving a convex minimization problem (through, for example, the Newton-Raphson method):

$$(\widehat\Sigma_z)_{ij} = \arg\min_{\mu} \sum_{t=1}^{n} \ell_\alpha\big(z_{it} z_{jt} - \mu\big),$$

    where $\alpha$ is chosen as discussed above and $z_{it}$ denotes the $i$-th coordinate of $z_t = (f_t^\top, y_t^\top)^\top$. Write $\widehat\Sigma_z = \big((\widehat\Sigma_z)_{ij}\big)$ and partition it into blocks $\widehat\Sigma_{11}, \widehat\Sigma_{12}, \widehat\Sigma_{21}, \widehat\Sigma_{22}$ conformably with $\Sigma_z$.

  • Derive an estimator of $\Sigma_u$ through the algebraic manipulation

$$\widehat\Sigma_u = \widehat\Sigma_{22} - \widehat\Sigma_{21}\,\widehat\Sigma_{11}^{-1}\,\widehat\Sigma_{12},$$

    and then apply the adaptive thresholding of [?]. That is,

$$\big(\widehat\Sigma_u^{\mathcal T}\big)_{ij} = \begin{cases} (\widehat\Sigma_u)_{ii}, & i = j, \\ s_{\tau_{ij}}\big((\widehat\Sigma_u)_{ij}\big), & i \ne j, \end{cases}$$

    where $s_{\tau}(\cdot)$ is a generalized shrinkage function [?] and $\tau_{ij}$ is an entry-dependent threshold.

  • Produce the final estimator for $\Sigma$:

$$\widehat\Sigma = \widehat\Sigma_{21}\,\widehat\Sigma_{11}^{-1}\,\widehat\Sigma_{12} + \widehat\Sigma_u^{\mathcal T}.$$

Note that in the above steps, the choice of the parameters $\alpha$ (in the definition of the Huber loss $\ell_\alpha$) and the thresholds $\tau_{ij}$ has not yet been specified and will be discussed in Section 3; a compact sketch of the whole pipeline is given below.
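The three steps can be sketched in a few lines of Python. The sketch below reuses the hypothetical `huber_mean` routine above; the soft-thresholding rule and the tuning constants `alpha` and `tau` are illustrative assumptions, not the exact choices analyzed in Section 3.

```python
import numpy as np

def robust_cov(Z, alpha):
    """Step 1: entry-wise Huber estimates of E[z_i z_j] for zero-mean data Z (n x d)."""
    d = Z.shape[1]
    S = np.empty((d, d))
    for i in range(d):
        for j in range(i, d):
            S[i, j] = S[j, i] = huber_mean(Z[:, i] * Z[:, j], alpha)
    return S

def robust_factor_cov(F, Y, alpha, tau):
    """Steps 1-3: joint robust covariance, low-rank recovery, thresholding, recombination."""
    n, K = F.shape
    p = Y.shape[1]
    S = robust_cov(np.hstack([F, Y]), alpha)               # Step 1
    S11, S12, S21, S22 = S[:K, :K], S[:K, K:], S[K:, :K], S[K:, K:]
    low_rank = S21 @ np.linalg.solve(S11, S12)             # estimate of B Sigma_f B^T
    Su = S22 - low_rank                                    # idiosyncratic covariance
    # Step 2: soft-threshold the off-diagonal entries of Su (one possible shrinkage rule)
    scale = np.sqrt(np.abs(np.diag(Su)))
    thr = tau * np.outer(scale, scale) * np.sqrt(np.log(p) / n)
    Su_thr = np.sign(Su) * np.maximum(np.abs(Su) - thr, 0.0)
    np.fill_diagonal(Su_thr, np.diag(Su))                  # diagonal left untouched
    return low_rank + Su_thr                               # Step 3: estimator of Sigma
```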

Before delving into the analysis of the procedure, we first digress to look at a technical issue. Recall that $\widehat\Sigma_{11}$ is an estimator of $\Sigma_f$; by Weyl's inequality,

$$\big|\lambda_{\min}(\widehat\Sigma_{11}) - \lambda_{\min}(\Sigma_f)\big| \le \big\|\widehat\Sigma_{11} - \Sigma_f\big\|.$$

Since both matrices are of low dimensionality ($K \times K$), as long as we are able to estimate every entry of $\Sigma_f$ accurately enough (see Lemma ? below), $\|\widehat\Sigma_{11} - \Sigma_f\|$ vanishes with high probability as $n$ diverges. Since $\widehat\Sigma_{11}$ is then invertible with high probability, there is no major issue implementing the procedure. In cases where a positive semidefinite (psd) matrix is expected, we replace the matrix with its nearest positive semidefinite version. We can do this projection for either $\widehat\Sigma_z$ or $\widehat\Sigma_{11}$. For example, for $\widehat\Sigma_z$, we solve the following optimization problem:

$$\widehat\Sigma_z^{+} = \arg\min_{S \succeq 0}\ \big\| S - \widehat\Sigma_z \big\|_{\max},$$

and simply employ $\widehat\Sigma_z^{+}$ as a surrogate of $\widehat\Sigma_z$. Observe that

$$\big\|\widehat\Sigma_z^{+} - \Sigma_z\big\|_{\max} \le \big\|\widehat\Sigma_z^{+} - \widehat\Sigma_z\big\|_{\max} + \big\|\widehat\Sigma_z - \Sigma_z\big\|_{\max} \le 2\,\big\|\widehat\Sigma_z - \Sigma_z\big\|_{\max},$$

since $\Sigma_z$ itself is feasible for the minimization. Thus, apart from a different constant, $\widehat\Sigma_z^{+}$ inherits all the desired properties of $\widehat\Sigma_z$, and we are able to safely replace $\widehat\Sigma_z$ with $\widehat\Sigma_z^{+}$ without modifying our estimation procedure. Moreover, (Equation 6) can be cast into the semidefinite programming problem below,

$$\min_{S,\, t}\ t \quad \text{subject to} \quad S \succeq 0, \qquad -t \le S_{ij} - (\widehat\Sigma_z)_{ij} \le t \ \ \text{for all } i, j,$$

which can be easily solved by a semidefinite programming solver, e.g. [?].
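As an illustration, the max-norm projection can be written directly as a small SDP in cvxpy (assuming a cvxpy installation with an SDP-capable solver); the function name and the toy matrix are ours.

```python
import numpy as np
import cvxpy as cp

def nearest_psd_maxnorm(Sigma_hat):
    """Project a symmetric matrix onto the PSD cone in the entry-wise max norm."""
    d = Sigma_hat.shape[0]
    S = cp.Variable((d, d), symmetric=True)
    t = cp.Variable()
    problem = cp.Problem(cp.Minimize(t), [S >> 0, cp.abs(S - Sigma_hat) <= t])
    problem.solve()
    return S.value

# Example on a 3 x 3 indefinite "covariance" estimate.
A = np.array([[1.0, 0.9, -0.8],
              [0.9, 1.0, 0.9],
              [-0.8, 0.9, 1.0]])
print(np.linalg.eigvalsh(nearest_psd_maxnorm(A)))   # eigenvalues >= 0 (up to tolerance)
```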

3 Theoretical analysis

In this section, we show the theoretical properties of our robust estimator under bounded fourth moments. We also show that when the data are known to be generated from more restricted families (e.g. sub-Gaussian), commonly used estimators such as the sample covariance suffice as the initial estimator in Step 1.

3.1 General theoretical properties

From the above discussion on M-estimators and their concentration results, the following lemma is immediate.

In practice, we do not know any of the fourth moments in advance. To pick a good $\alpha$, one possibility is to try a sequence of geometrically increasing values, as studied in [?]. Similar to [?], we may also use the empirical variance to give a rough bound for $\alpha$.

Recall that $z_t$ is a $(K+p)$-dimensional vector concatenating $f_t$ and $y_t$. From Assumption ?, there is a constant that serves as a uniform bound on the fourth moments of all coordinates of $z_t$. This leads to the following result.

So after Step 1 of the proposed procedure, we obtain an estimator $\widehat\Sigma_z$ that achieves the optimal rate of element-wise convergence. With $\widehat\Sigma_z$ in hand, we proceed to establish the estimation errors of our concern. We will establish convergence rates for the estimators of both $\Sigma$ and $\Sigma_u$. The key theorem that links the estimation error under the element-wise max norm with that under other norms is stated as follows.

Note that this theorem provides a nice interface that connects the max-norm guarantee of the initial estimator with the desired convergence rates. Therefore, any robust method that attains the element-wise optimal convergence rate as in Corollary ? can be used in Step 1 in place of the current M-estimator approach.

3.2 Estimators under more restricted distributional assumptions

We analyzed the theoretical properties of the robust procedure in the previous subsection under the assumption of only bounded fourth moments. Theorem ? shows that any estimator that achieves the optimal max-norm convergence rate could serve as the initial pilot estimator for $\Sigma_z$ to be used in Steps 2 and 3 of our procedure. Thus the procedure depends on the distributional assumption ? only through Step 1, where a proper estimator is proposed. Sometimes, we do have more information on the shape of the distributions of factors and noises. For example, if the distribution of $z_t$ has a sub-Gaussian tail, the sample covariance matrix attains the optimal element-wise max rate for estimating $\Sigma_z$.

In an earlier work, [?] proposed to simply regress the observations $y_t$ on the factors $f_t$ in order to obtain

$$\widehat B = \arg\min_{B} \sum_{t=1}^{n} \big\| y_t - B f_t \big\|^2,$$

with residuals $\widehat u_t = y_t - \widehat B f_t$, and then to threshold the residual covariance matrix $\widehat\Sigma_u = n^{-1}\sum_{t=1}^n \widehat u_t \widehat u_t^\top$. This regression procedure is equivalent to applying the sample covariance $\widehat\Sigma_z^S$ directly in Step 1, and also equivalent to solving a least-squares minimization problem; thus it suffers from robustness issues when the data come from heavy-tailed distributions. All the convergence rates achieved in Theorem ? are identical to those in [?], where sub-Gaussian tails are assumed.
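For comparison, a minimal sketch of this regression-based (non-robust) baseline might look as follows; the universal soft-threshold used here is a simplification of the adaptive rule.

```python
import numpy as np

def ols_factor_cov(F, Y, tau):
    """Regression-based baseline: OLS loadings, residual covariance,
    soft-thresholded off-diagonals, then the low-rank part added back."""
    n, p = Y.shape
    B_hat, *_ = np.linalg.lstsq(F, Y, rcond=None)     # K x p coefficient matrix
    U = Y - F @ B_hat                                 # residuals u_t
    Su = U.T @ U / n
    Sf = F.T @ F / n                                  # factors assumed centered
    thr = tau * np.sqrt(np.log(p) / n)
    Su_thr = np.sign(Su) * np.maximum(np.abs(Su) - thr, 0.0)
    np.fill_diagonal(Su_thr, np.diag(Su))
    return B_hat.T @ Sf @ B_hat + Su_thr              # p x p estimate of Sigma
```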

As we explained, if $z_t$ is sub-Gaussian, the sample covariance $\widehat\Sigma_z^S$ can be used instead of the Huber-type estimator $\widehat\Sigma_z^R$. If $f_t$ and $u_t$ exhibit heavy tails, another widely used assumption is the t-distribution, which belongs to the elliptical distribution family. The elliptical distribution is defined as follows. Let $\mu \in \mathbb{R}^d$ and $\Sigma \in \mathbb{R}^{d \times d}$ with $\operatorname{rank}(\Sigma) = r \le d$. A $d$-dimensional random vector $X$ has an elliptical distribution, denoted by $X \sim ED_d(\mu, \Sigma, \xi)$, if it has the representation [?]

$$X \overset{d}{=} \mu + \xi A U,$$

where $U$ is a uniform random vector on the unit sphere in $\mathbb{R}^r$, $\xi \ge 0$ is a scalar random variable independent of $U$, and $A$ is a deterministic $d \times r$ matrix satisfying $A A^\top = \Sigma$. To make the representation identifiable, we require $\mathbb{E}[\xi^2] = r$ so that $\operatorname{Cov}(X) = \Sigma$. Here we also consider only continuous elliptical distributions with $\mathbb{P}(\xi = 0) = 0$.

If $f_t$ and $u_t$ are uncorrelated and jointly elliptical, i.e., $z_t = (f_t^\top, y_t^\top)^\top$ is elliptically distributed, then a well-known good estimator for the correlation matrix of $z_t$ is marginal Kendall's tau. The sample Kendall's tau correlation coefficient between coordinates $j$ and $k$ is defined as

$$\widehat\tau_{jk} = \frac{2}{n(n-1)} \sum_{t < t'} \operatorname{sign}\big( (z_{jt} - z_{jt'}) (z_{kt} - z_{kt'}) \big),$$

whose population counterpart is

$$\tau_{jk} = \mathbb{P}\big( (z_{jt} - z_{jt'})(z_{kt} - z_{kt'}) > 0 \big) - \mathbb{P}\big( (z_{jt} - z_{jt'})(z_{kt} - z_{kt'}) < 0 \big).$$

For the elliptical family, the key identity $r_{jk} = \sin(\pi \tau_{jk}/2)$ relates the Pearson correlation with Kendall's correlation [?]. Using $\widehat r_{jk} = \sin(\pi \widehat\tau_{jk}/2)$, [?] showed that $\widehat R = (\widehat r_{jk})$ is an accurate estimate of the correlation matrix $R$ of $z_t$, achieving the element-wise rate $\sqrt{\log p / n}$. Let $\Sigma_z = D R D$, where $R$ is the correlation matrix and $D$ is a diagonal matrix consisting of the standard deviations of each coordinate. We construct $\widehat\Sigma_z^K = \widehat D \widehat R \widehat D$ by separately estimating $R$ and $D$. As before, if the fourth moment exists, we estimate $D$ by applying Step 1 to the diagonal entries only.

Therefore, if $z_t$ is elliptically distributed, $\widehat\Sigma_z^K$ can be used as the initial pilot estimator for $\Sigma_z$ in Step 1. Note that $\widehat\Sigma_z^K$ is much more computationally efficient than $\widehat\Sigma_z^R$. However, for general heavy-tailed distributions, there is no simple way to connect the usual correlation with Kendall's correlation. Thus we should favor $\widehat\Sigma_z^R$ instead. We will compare the three estimators $\widehat\Sigma_z^R$, $\widehat\Sigma_z^S$ and $\widehat\Sigma_z^K$ thoroughly through simulations in Section 4.
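A minimal sketch of the Kendall's tau pilot estimator, assuming the marginal scales are supplied by a separate robust estimate (e.g. the diagonal Huber estimates of Step 1):

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_cov(Z, robust_sd):
    """Kendall's-tau pilot estimator for (assumed) elliptical data:
    correlations via sin(pi * tau / 2), scales from robust_sd (length-d vector)."""
    n, d = Z.shape
    R = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            tau_ij, _ = kendalltau(Z[:, i], Z[:, j])
            R[i, j] = R[j, i] = np.sin(0.5 * np.pi * tau_ij)
    return R * np.outer(robust_sd, robust_sd)          # D R D
```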

3.3 Asymptotics of robust mean estimators

In this section we look further into robust mean estimators. Though the result we shall present is asymptotic and not essential for our main theorem ?, it is interesting in its own right and deserves some treatment.

Perhaps the best-known result on Huber's mean estimator is the asymptotic minimax theory. In [?], Huber considered the so-called $\epsilon$-contamination model:

$$\mathcal{P}_\epsilon = \big\{ (1-\epsilon) G + \epsilon H : H \in \mathcal{S} \big\},$$

where $G$ is a known distribution, $\epsilon \in (0,1)$ is fixed and $\mathcal{S}$ is the family of symmetric distributions. Let $\widehat\mu$ be the minimizer of $\sum_{i=1}^n \ell_\alpha(X_i - \mu)$, where $\ell_\alpha(x) = x^2$ for $|x| \le \alpha$ and $\ell_\alpha(x) = 2\alpha|x| - \alpha^2$ for $|x| > \alpha$, with $\alpha$ fixed. In the special case where $G$ is Gaussian, Huber's result shows that with an appropriate choice of $\alpha$, Huber's estimator minimizes the maximal asymptotic variance among all translation invariant estimators, the maximum being taken over $\mathcal{P}_\epsilon$.

One problem with the $\epsilon$-contamination model is that it makes sense only when we assume symmetry of the contamination $H$, if the population mean is the quantity we are interested in. In contrast, [?] and [?] studied a different family, in which the distributions are only required to have finite second moments. [?] called them the 'local' and 'global' models respectively, and offered a detailed discussion.

This paper, along with the two preceding papers [?], studies robustness in the sense of the second model. The technical novelty primarily lies in the nice concentration property, which is a powerful tool in high dimensional statistics. This requires the parameter $\alpha$ of the Huber loss to grow with $n$, rather than being kept fixed, so that the condition in Corollary ? is satisfied. It turns out that, in addition to the concentration property, we can establish results regarding its asymptotic behavior in an exact manner.

Let $\ell_\alpha(x) = x^2$ for $|x| \le \alpha$ and $\ell_\alpha(x) = 2\alpha|x| - \alpha^2$ for $|x| > \alpha$; its derivative is $\psi_\alpha(x) = 2x$ for $|x| \le \alpha$ and $\psi_\alpha(x) = 2\alpha\operatorname{sign}(x)$ for $|x| > \alpha$. Let us write $\Psi_n(\mu) = \sum_{i=1}^n \psi_\alpha(X_i - \mu)$. Denote by $\widehat\mu_\alpha$ a solution of $\Psi_n(\mu) = 0$, which is unique when $\alpha$ is sufficiently large, and by $\mu_\alpha^*$ a solution of $\mathbb{E}\,\psi_\alpha(X - \mu) = 0$. We have the following theorem.

This theorem gives a decomposition of the error into two components: variance and bias. The rate of the bias depends on the distribution and on the choice of $\alpha$. When the distribution is either symmetric or $\alpha$ diverges fast enough, the second component is negligible compared with the asymptotic variance. While Huber's approach needs the symmetry restriction, there is no such need for our estimator. This theorem also lends credibility to the bias-variance tradeoff we observed in the simulations (see Section 4.1).

It is worth comparing the above Huber loss minimization with another candidate for robust mean estimation, the "median-of-means" estimator given by [?]. The method, as its name suggests, first divides the sample into subgroups and calculates the mean of each subgroup, then takes the median of those means as the final estimator. The first step essentially symmetrizes the distribution by the central limit theorem and the second step robustifies the procedure. According to [?], if we choose the number of subgroups appropriately and estimate $\Sigma_z$ element-wise, similar to (Equation 5), then with high probability the resulting estimator also attains the $\sqrt{\log p / n}$ element-wise rate.

Although the "median-of-means" estimator has the desired concentration property, unlike our estimator its asymptotic behavior differs from that of the empirical mean, and as a consequence, it is not asymptotically efficient when the distribution is Gaussian. Therefore, regarding efficiency, we prefer the procedure proposed in Section 2.2.
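For completeness, a minimal sketch of the median-of-means estimate of a single covariance entry; the number of groups is left as a user choice.

```python
import numpy as np

def median_of_means(x, n_groups, rng):
    """Median-of-means estimate of E[x]: random split into blocks,
    block means, then the median of the block means."""
    x = rng.permutation(x)
    blocks = np.array_split(x, n_groups)
    return np.median([b.mean() for b in blocks])

# Example: one covariance entry E[X_i X_j] under heavy tails.
rng = np.random.default_rng(1)
xi, xj = rng.standard_t(4, size=(2, 1000))
print(median_of_means(xi * xj, n_groups=10, rng=rng))
```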

4 Simulations

We now present simulation results to demonstrate the improvement of the proposed robust method over the least-squares based method [?] and the Kendall's tau based method [?] when factors and errors are heavy-tailed, even when they are elliptically distributed.

However, one must be cautious about the choice of the tuning parameter $\alpha$, since it plays an important role in the quality of the robust estimates. Out of this concern, we shall discuss the intricacy of choosing $\alpha$ before presenting the performance of robust estimates of covariance matrices.

4.1 Robust estimates of variances and covariances

For random variables with zero mean that may potentially exhibit heavy-tailed behavior, the sample mean of their squares and products is not good enough for our estimation purpose. Though unbiased, in the high dimensional setting there is no guarantee that many such sample means stay close to the true values simultaneously.

As shown in the theoretical analysis, this problem is alleviated for robust estimators constructed through M-estimators, whose influence functions grow slowly at extreme values. The desired concentration property in (?) depends on the choice of the parameter $\alpha$, which decides the range outside which large values cease to become more influential. However, in practice, we have to make a good guess of $\alpha$ as the theory suggests; even so, we may be too conservative in the choice of $\alpha$.

Figure 1: The histograms show the estimates of \mbox{Var}(X_i) with different parameters \alpha in 1000 runs. X_i \sim t_{4} so that the true variance \mbox{Var}(X_i) = 1.909. The sample size n = 100.

To show this, we plot in Figure 1 the histograms of our estimates of $\mathrm{Var}(X_i)$ in 1000 runs, where $X_i$ is generated from a t-distribution with 4 degrees of freedom. The first three histograms show the estimates constructed from Huber's M-estimator, with the parameter $\alpha$ given by (Equation 9) for three different choices of the constant, and the last histogram is the usual sample estimate (corresponding to $\alpha = \infty$). The quality of the estimates ranges from large biases to large variances. We also plot in Figure 2 the histograms of estimates of $\mathrm{Cov}(X_i, X_j)$, where $(X_i, X_j)$ is generated from a multivariate t-distribution with 4 degrees of freedom and an identity scale matrix. The only difference is that in (Equation 9), the variance of $X_i$ is replaced by the covariance of $X_i$ and $X_j$.

Figure 2: The histograms show the estimates of \text{Cov}(X_i,X_j) with different parameters \alpha in 1000 runs. The true covariance \mbox{Cov}(X_i,X_j) = 0. n = 100 and the degree of freedom is 4.

From Figure 1, we observe a bias-variance tradeoff as $\alpha$ varies. This is also consistent with the theory in Section 3.3. When $\alpha$ is small, the robust method underestimates the variance, yielding a large bias due to the asymmetry of the distribution of $X_i^2$. As $\alpha$ increases, a larger variance is traded for a smaller bias, until $\alpha = \infty$, in which case the robust estimator simply becomes the sample mean.

For the covariance estimation, Figure 2 exhibits a different phenomenon. Since the distribution of $X_i X_j$ is symmetric in this setting, there is no bias incurred when $\alpha$ is small. Since the variance is smaller when $\alpha$ is smaller, we have a net gain in terms of the quality of the estimates. In the extreme case where $\alpha$ tends to zero, we are actually estimating the median. Fortunately, under distributional symmetry, the mean and the median are the same.

These simple simulations help us understand how to choose $\alpha$ in practice: if the distribution is close to a symmetric one, one can choose $\alpha$ aggressively, i.e. make it smaller; otherwise, a conservative (larger) $\alpha$ is preferred.

4.2 Covariance matrix estimation

We implemented the robust estimation procedure with the three initial pilot estimators $\widehat\Sigma_z^R$, $\widehat\Sigma_z^S$ and $\widehat\Sigma_z^K$. We simulated samples of the factors and noises jointly from a multivariate t-distribution with a diagonal covariance matrix and various degrees of freedom. Each row of $B$ is independently sampled from a standard normal distribution. The population covariance matrix of $y_t$ is then $\Sigma = B\Sigma_f B^\top + \Sigma_u$. For $p$ ranging over a grid of values and a fixed sample size $n$, we calculated the errors of the robust procedure in different norms. As suggested by the experiments in the previous section, we chose a larger parameter $\alpha$ to estimate the diagonal elements of $\Sigma_z$, and a smaller one to estimate its off-diagonal elements. We used a fixed thresholding parameter $\tau$.
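The exact simulation settings (grid of $p$, sample size, scale matrices) are not reproduced here; the following sketch only illustrates how data of this type can be generated under assumed settings.

```python
import numpy as np

def simulate_panel(n, p, K, df, rng):
    """Generate (F, Y) from y_t = B f_t + u_t with multivariate-t factors and
    noises (identity scale matrices) and N(0, 1) loadings; df = np.inf gives
    the Gaussian case."""
    B = rng.standard_normal((p, K))
    def mvt(rows, cols):
        g = rng.standard_normal((rows, cols))
        if np.isinf(df):
            return g
        chi = rng.chisquare(df, size=rows) / df
        return g / np.sqrt(chi)[:, None]
    F = mvt(n, K)
    U = mvt(n, p)
    return F, F @ B.T + U

F, Y = simulate_panel(n=200, p=100, K=3, df=5, rng=np.random.default_rng(0))
```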

Figure 3: Errors of robust estimates against varying dimensionality p. Blue line represents ratio of errors with \widehat\Sigma_z^R over errors with \widehat\Sigma_z^S, while black line represents ratio of errors with \widehat\Sigma_z^K over errors with \widehat\Sigma_z^S. z_t is generated by multivariate t-distribution with df = 3 (solid), 5 (dashed) and \infty (dotted). The median errors and their IQR over 100 simulations are reported.

The estimation errors are gauged in the norms appearing in Theorem ?: the element-wise max norm, the spectral norm, and the relative Frobenius norm. We considered two different settings: (1) $z_t$ is generated from a multivariate t-distribution with a very heavy (df = 3), medium heavy (df = 5), or light (df = $\infty$, i.e. Gaussian) tail; (2) $z_t$ is element-wise iid from a one-dimensional t-distribution with the same degrees of freedom. The two settings are plotted in Figures 3 and 4 respectively. The estimation errors based on the sample covariance matrix are used as the baseline for comparison. For each norm, the blue curve represents the ratio of the error of the robust procedure with initial pilot estimator $\widehat\Sigma_z^R$ to the error with $\widehat\Sigma_z^S$, while the black curve represents the corresponding ratio for $\widehat\Sigma_z^K$ versus $\widehat\Sigma_z^S$. Therefore, if a ratio curve moves below 1, the corresponding method is better than the naive sample estimator used in [?], and vice versa. The further it gets below 1, the more robust the procedure is against heavy-tailed randomness.

The first setting (Figure 3) represents a heavy-tailed elliptical distribution, where we expect the two robust methods to work better than the sample covariance based method, especially in the case of extremely heavy tails (solid lines, df = 3). As expected, both the black curves and the blue curves under the three measures behave visibly better (ratios smaller than 1). On the other hand, if the data are indeed Gaussian (dotted lines, df = $\infty$), the method with the sample covariance performs better under most measures (ratios greater than 1). Nevertheless, our robust method still performs comparably with the sample covariance method, as the median error ratio stays around 1, whereas Kendall's tau method can be much worse than the sample covariance method. A plausible explanation is that the variance reduction compensates for the bias incurred in our procedure. In addition, the IQR plots tell us that the proposed robust method is indeed more stable than Kendall's tau.

Figure 4: Errors of robust estimates against varying p. Blue line represents ratio of errors with \widehat\Sigma_z^R over errors with \widehat\Sigma_z^S, while black line represents ratio of errors with \widehat\Sigma_z^K over errors with \widehat\Sigma_z^S. z_t is generated by element-wise iid t-distribution with df = 3 (solid), 5 (dashed) and \infty (dotted). The median errors and their IQR over 100 simulations are reported.

The second setting (Figure 4) provides an example of non-elliptically distributed heavy-tailed data. We can see that the performance of the robust method dominates the other two methods, which supports the approach of this paper, especially when the data come from a general heavy-tailed distribution. While our method is able to deal with more general distributions, Kendall's tau method does not apply to distributions outside the elliptical family, which excludes the element-wise iid distribution in this setting. This explains why, under the various measures, our robust method is better than Kendall's tau method by a clear margin. Note that even in the first setting, where the data are indeed elliptical, with proper tuning the proposed robust method can still outperform Kendall's tau.

5 Real data analysis

In this section, we look into historical financial data from 2005 to 2013, and assess to what extent our factor model characterizes the data.

The dataset we used in our analysis consists of daily returns of stocks, all of which are large market capitalization constituents of S&P index, collected without missing values from 2005 to 2013. This dataset has also been used in [?], where they investigated how covariates (e.g. size, volume) could be utilized to help estimate factors and factor loadings, whereas the focus of the current paper is to develop robust methods in the presence of heavy tailed data.

In addition, we collected factor data for the same period, where the factors are calculated according to the Fama-French three-factor model [?]. After centering, the panel matrix we use for the analysis is a $p \times n$ matrix $Y$, accompanied by a factor matrix of size $3 \times n$. Here $n$ is the number of daily returns and $p$ is the number of stocks.

5.1 Heavy-tailedness

First, we look at how the daily returns are distributed; in particular, we are interested in the behavior of their tails. In Figure 5, we make Q-Q plots that compare the distribution of the pooled excess returns $y_{it}$ with either the Gaussian distribution or t-distributions. In the four plots, the base distributions are the Gaussian distribution and t-distributions with degrees of freedom ranging from 2 to 6. We also fit a line for each plot, showing how much the return data deviate from the base distribution. It is clear that the data have tails heavier than those of a Gaussian distribution, and that a t-distribution with a moderate degree of freedom is almost in alignment with the return data. Similarly, we make the Q-Q plots for the factors in Figure 6. The plots also show that the t-distribution is better in terms of fitting the data; the tails of the factors are even heavier, however, and a t-distribution with a smaller degree of freedom seems to fit them best.

Figure 5: Q-Q plot of excess returns y_{it} for all i and t against Gaussian distribution and t-distribution with degree of freedom 2,4 and 6. For each plot, a line is fitted by connecting points at first and third quartile.
Figure 6: Q-Q plot of factor f_{it} against Gaussian distribution and t-distribution with degree of freedom 2,4 and 6. For each plot, a line is fitted by connecting points at first and third quartile.

5.2 Spiked covariance structure

Figure 7: Left panel: Histogram of eigenvalues of sample covariance matrix Y Y^T/n. The histogram is plotted on the logarithmic scale, i.e. each bin counts the number of \log \lambda_i in a given range. Right panel: Proportion of residue eigenvalues \sum_{i=K+1}^p \lambda_i / \sum_{i=1}^p \lambda_i, against varying K, where \lambda_i is the i^{th} largest eigenvalue of sample covariance matrix Y Y^T/n.

We now consider what the covariance matrix of the returns looks like, since a spiked covariance structure would justify the pervasiveness assumption. To find the spectral structure, we calculated the eigenvalues of the sample covariance matrix $Y Y^\top / n$ and made a histogram on the logarithmic scale (see the left panel of Figure 7). In the histogram, the counts in the rightmost four bins are very small, representing only a few large eigenvalues, which is a strong signal of a spiked structure. We also plotted the proportion of residual eigenvalues, $\sum_{i=K+1}^{p}\lambda_i / \sum_{i=1}^{p}\lambda_i$, against varying $K$ in the right panel of Figure 7. The top eigenvalues account for a major part of the total variance, which lends weight to the pervasiveness assumption.
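The spectral diagnostics of Figure 7 amount to the following computation (with $Y$ the centered $p \times n$ return matrix):

```python
import numpy as np

def spectral_diagnostics(Y, K):
    """Eigenvalues of the sample covariance Y Y^T / n (Y is p x n, centered)
    and the proportion of variance left after the top K eigenvalues."""
    n = Y.shape[1]
    lam = np.linalg.eigvalsh(Y @ Y.T / n)[::-1]        # sorted, largest first
    return lam, lam[K:].sum() / lam.sum()

# lam can be shown as a histogram of log(lam) (left panel of Figure 7), and the
# residual proportion traced against K (right panel).
```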

The spiked covariance structure has been studied in [?], [?] and many other papers, but under their regime, the top or "spiked" eigenvalues do not grow with the dimension. In this paper, the spiked eigenvalues carry stronger signals, and thus are easier to separate from the rest of the eigenvalues. In this respect, the connotation of "spiked covariance structure" is closer to that in [?]. As empirical evidence, this phenomenon also buttresses the motivation of the study in [?].

5.3 Portfolio risk estimation

We consider portfolio risk estimation. To be specific, for a portfolio with weight vector $w$ on all the market assets, its risk is measured by the quantity $w^\top \Sigma w$, where $\Sigma$ is the true covariance of the excess returns of all the assets. Note that $\Sigma$ might be time varying. Here we consider a class of weights with gross exposure $c$, that is, $\sum_i w_i = 1$ and $\sum_i |w_i| \le c$. We consider four scenarios: $c = 1, 1.4, 1.8, 2.2$. Note that $c = 1$ represents the case of no short selling and the other values correspond to different levels of exposure to short selling.

Figure 8: (R^R(w), R^S(w)) for multiple randomly generated w. The four plots compare the errors of the two methods under different settings (upper left: no short selling; upper right: exposure c = 1.4; lower left: exposure c = 1.8; lower right: exposure c = 2.2). The red diamond in the first plot corresponds to uniform weights. The dashed line is the 45 degree line representing equal performance. Our robust method gives smaller errors.

To assess how well our robust estimator performs compared with the sample covariance, for every trading day from 2006 to 2013 we calculated the covariance estimators $\widehat\Sigma^R_t$ and $\widehat\Sigma^S_t$ using the daily data of the preceding months, where $\widehat\Sigma^R_t$ is our robust covariance estimator and $\widehat\Sigma^S_t$ is the sample covariance. We indexed those dates by $t = 1, \dots, T$. Let $y_{t+1}$ be the excess return of the trading day following $t$. For a weight vector $w$, the error we used to gauge the two approaches is the average over $t$ of the squared discrepancy between the estimated portfolio risk $w^\top \widehat\Sigma_t w$ and the realized squared return $(w^\top y_{t+1})^2$; we denote this error by $R^R(w)$ and $R^S(w)$ for the robust and sample covariance estimators respectively.

Note the bias-variance type decomposition

$$\mathbb{E}\big[\big(w^\top \widehat\Sigma_t w - (w^\top y_{t+1})^2\big)^2\big] = \operatorname{Var}\big((w^\top y_{t+1})^2\big) + \big(w^\top \widehat\Sigma_t w - w^\top \Sigma_t w\big)^2,$$

where the expectation is taken over $y_{t+1}$ with $\widehat\Sigma_t$ held fixed and $\Sigma_t$ denotes the true conditional covariance. The first term measures the systematic risk that cannot be reduced, while the second term is the estimation error for the risk of the portfolio $w$.

To generate multiple random weights with gross exposure $c$, we adopted the strategy used in [?]: (1) for each index $i$, mark it as long with a fixed probability and as short otherwise; (2) generate iid $\xi_i$ from the exponential distribution; (3) for the long positions, let $w_i$ be proportional to $\xi_i$ with the proportions scaled so that they sum to $(c+1)/2$, and for the short positions, let $w_i$ be proportional to $-\xi_i$ with the absolute values summing to $(c-1)/2$, so that $\sum_i w_i = 1$ and $\sum_i |w_i| = c$. We made a set of scatter plots in Figure 8, in which the x-axis represents $R^R(w)$ and the y-axis $R^S(w)$. In addition, we highlighted in the first plot the point with uniform weights (i.e. $w_i = 1/p$), which serves as a benchmark for comparison. The dashed line shows where the two approaches have the same performance. Clearly, for all four values of $c$ the robust approach has smaller risk errors, and therefore better empirical performance in estimating portfolio risks.
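A sketch of one such weight generator is given below; the long/short assignment probability $(c+1)/(2c)$ is our assumption, while the group budgets $(c+1)/2$ and $(c-1)/2$ are forced by the constraints $\sum_i w_i = 1$ and $\sum_i |w_i| = c$.

```python
import numpy as np

def random_weights(p, c, rng):
    """Random portfolio weights with sum(w) = 1 and sum(|w|) = c."""
    long_budget, short_budget = (c + 1) / 2, (c - 1) / 2
    is_long = rng.random(p) < (c + 1) / (2 * c)        # assumed long/short probability
    if short_budget == 0:
        is_long[:] = True                              # c = 1: no short selling
    else:
        if is_long.all():
            is_long[-1] = False                        # keep both groups non-empty
        if not is_long.any():
            is_long[0] = True
    xi = rng.exponential(size=p)
    w = np.zeros(p)
    w[is_long] = long_budget * xi[is_long] / xi[is_long].sum()
    if short_budget > 0:
        w[~is_long] = -short_budget * xi[~is_long] / xi[~is_long].sum()
    return w

w = random_weights(p=100, c=1.8, rng=np.random.default_rng(0))
print(w.sum(), np.abs(w).sum())                        # 1.0 and c (up to rounding)
```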


A Appendix

Since we have a robust estimator $\widehat\Sigma_z$ such that $\|\widehat\Sigma_z - \Sigma_z\|_{\max} = O_P(\sqrt{\log p / n})$, we clearly know that its blocks $\widehat\Sigma_{11}, \widehat\Sigma_{12}, \widehat\Sigma_{21}, \widehat\Sigma_{22}$ achieve the same rate. Using this, let us first prove the max-norm bound for $\widehat\Sigma_u$. Obviously,

$$\big\| \widehat\Sigma_{21}\, \widehat\Sigma_{11}^{-1}\, \widehat\Sigma_{12} - B\Sigma_f B^\top \big\|_{\max} = O_P\Big(\sqrt{\frac{\log p}{n}}\Big),$$

because the multiplication is along the fixed dimension $K$ and each element is estimated with the rate of convergence $\sqrt{\log p/n}$. Also $\widehat\Sigma_u = \widehat\Sigma_{22} - \widehat\Sigma_{21}\widehat\Sigma_{11}^{-1}\widehat\Sigma_{12}$; therefore $\widehat\Sigma_u$ estimates $\Sigma_u$ with error $O_P(\sqrt{\log p/n})$ in max norm.

Once the max-norm error of the sparse matrix estimate $\widehat\Sigma_u$ is controlled, it is not hard to show that the adaptive thresholding procedure in Step 2 gives $\widehat\Sigma_u^{\mathcal T}$ such that the spectral error is $O_P(m_p \omega_n^{1-q})$ [?], where we define $\omega_n = \sqrt{\log p / n}$. Furthermore, $\lambda_{\min}(\widehat\Sigma_u^{\mathcal T})$ is bounded away from zero with high probability. So the corresponding bound for the inverse $(\widehat\Sigma_u^{\mathcal T})^{-1}$ also holds, due to the lower boundedness of $\lambda_{\min}(\Sigma_u)$. So (?) is valid.

Proving (?) is trivial: the thresholding perturbs each entry by at most the threshold level, which is chosen of the same order as $\omega_n$, and thus

$$\big\|\widehat\Sigma - \Sigma\big\|_{\max} = O_P(\omega_n).$$
Next let us take a look at the relative Frobenius norm convergence (?) for $\widehat\Sigma$. The error decomposes into four terms.

We bound the four terms one by one. The last term is the easiest,

Bound for uses the fact that and are and . So

Bound for needs additional conclusion that , where and the last inequality is shown in [?]. So

Lastly, by similar trick, we have