Efficient Covariance Estimation from Temporal Data


Hrayr Harutyunyan (USC Information Sciences Institute, Marina Del Rey, California 90292) hrayrh@isi.edu; Daniel Moyer (USC Information Sciences Institute, Marina Del Rey, California 90292) moyerd@usc.edu; Hrant Khachatrian (YerevaNN, Yerevan, Armenia) hrant@yerevann.com; Greg Ver Steeg (USC Information Sciences Institute, Marina Del Rey, California 90292) gregv@isi.edu; and Aram Galstyan (USC Information Sciences Institute, Marina Del Rey, California 90292) galstyan@isi.edu
Abstract.

Estimating the covariance structure of multivariate time series is a fundamental problem with a wide range of real-world applications—from financial modeling to fMRI analysis. Despite significant recent advances, current state-of-the-art methods are still severely limited in terms of scalability, and do not work well in high-dimensional undersampled regimes. In this work we propose a novel method called Temporal Correlation Explanation, or T-CorEx, that (a) has linear time and memory complexity with respect to the number of variables, and can scale to very large temporal datasets that are not tractable with existing methods; (b) gives state-of-the-art results in highly undersampled regimes on both synthetic and real-world datasets; and (c) makes minimal assumptions about the character of the dynamics of the system. T-CorEx optimizes an information-theoretic objective function to learn a latent factor graphical model for each time period and applies two regularization techniques to induce temporal consistency of estimates. We perform an extensive evaluation of T-CorEx on both synthetic and real-world data and demonstrate that it can be used for detecting sudden changes in the underlying covariance matrix, capturing transient correlations, and analyzing extremely high-dimensional complex multivariate time series such as high-resolution fMRI data.

copyright: none. Conference: KDD 2019, August 4-8, 2019, Anchorage, Alaska, USA. Journal year: 2019. Article: 4. Price: $15.00.

1. Introduction

Many complex systems in finance, biology, and social sciences can be modeled by multivariate time series. One of the first steps in analyzing such time-varying complex systems is temporal covariance estimation—that is, estimation of the covariance matrix of the variables in different time periods. Such an analysis can be helpful for understanding relationships between components of the system, spotting trends, making predictions, detecting shifts in the underlying structure and for other tasks (Engle et al., 2017; De Nard et al., 2018; Ahmed and Xing, 2009; Monti et al., 2014).

There is an increasing need for efficient temporal covariance estimation methods that can work in high-dimensional undersampled regimes. In this regime, even static covariance estimation is a formidable problem (Fan et al., 2016). Extending covariance estimation to the temporal case adds unique challenges. First, the samples from different time steps generally are not independent and identically distributed. Second, the dynamics of the system can be quite complex (e.g., financial time series, biological systems) and hard to model without strong assumptions. Furthermore, the number of time steps whose observations are relevant for estimating the covariance matrix for a particular time period is limited when the underlying covariance matrix changes quickly.

Current state-of-the-art temporal covariance estimation methods learn a sparse graphical model for each time period using variants of graphical lasso, while adding a regularization term for inducing temporal consistency of estimates (Hallac et al., 2017; Tomasi et al., 2018). Unfortunately, these methods have at least cubic time complexity and quadratic memory complexity with respect to the number of variables, and do not scale to truly high-dimensional problems (e.g., the approaches described in Refs. (Hallac et al., 2017; Tomasi et al., 2018) do not scale beyond thousands of time series).

In this work we propose a novel temporal covariance estimation method, T-CorEx, that addresses the aforementioned challenges. T-CorEx is based on linear CorEx (Steeg and Galstyan, 2017), which learns static latent variable graphical models by minimizing an information-theoretic objective function. Our method trains a linear CorEx for each time period and employs two regularization techniques to enforce temporal consistency of the learned models. T-CorEx has linear time and memory complexity with respect to the number of variables and can be applied to temporal data with several orders of magnitude more variables than existing methods can handle (e.g., it takes less than an hour on a moderate PC to estimate the covariance structure of time series with over a hundred thousand variables). The only assumption T-CorEx makes about the dynamics of the system is that, on average, the underlying covariance matrix changes slowly with time.

We compare T-CorEx against other methods on both synthetic and real-world datasets. Our experiments show that T-CorEx yields state-of-the-art performance in highly undersampled regimes. More specifically, T-CorEx outperforms other methods in terms of describing existing correlations in the data (quantified by log-likelihood). We also apply T-CorEx to stock market and high-resolution functional magnetic resonance imaging (fMRI) data. For the former, we demonstrate that T-CorEx can detect transient correlations and sudden change-points of the underlying covariance matrix that are not detectable with static methods. For the latter, we show that T-CorEx successfully scales to time series with 150K variables and finds meaningful functional connectivity patterns.

We summarize our main contributions in this paper as follows:

  • We introduce a new temporal covariance estimation method that has linear time and memory complexity with respect to the number of variables—which allows us to analyze systems that are several orders of magnitude larger than what was possible with previous methods.

  • We conduct extensive experiments on both synthetic and real-world datasets and demonstrate that T-CorEx outperforms existing state-of-the-art methods in highly undersampled regimes.

  • To illustrate the practical value of T-CorEx, we apply it to two real-world datasets, financial returns and high-resolution fMRI, and show that it is capable of extracting meaningful and qualitatively novel patterns.

2. Problem Definition

Figure 1. An example of time series with 3 variables. The underlying covariance matrix is constant in the first period, exhibits smooth dynamics in the second period, and after a sudden change between the second and third periods, becomes diagonal and remains static till the end.

To derive our method, we formulate the temporal covariance estimation problem for a sequence of multivariate Gaussian observations. We are given a sequence of observations $x^{(1)}, x^{(2)}, \ldots, x^{(T)}$, where $x^{(t)} = \{x^{(t,i)}\}_{i=1}^{n_t}$ is a collection of i.i.d. samples generated from a $p$-dimensional normal distribution with zero mean and covariance matrix $\Sigma_t$. The goal is to estimate the unknown covariance matrix $\Sigma_t$ for each $t = 1, \ldots, T$. Note that $n_t$ can be 0 and can be different for different time steps. In particular, when $n_t = 1$ for every $t$, the sequence is a regular multivariate time series.

Without further assumptions, the problem is equivalent to having $T$ independent static covariance estimation tasks. Usually the covariance matrices of different time steps are related, and efficient temporal covariance estimation methods should exploit that relation. However, the relation can be quite complicated, making it hard to model without strong assumptions. To avoid making such assumptions, we only assume that, on average, covariance matrices of neighboring time steps are close to each other with respect to some metric. Fig. 1 shows an example of a regularly sampled multivariate time series where the underlying covariance matrix exhibits different dynamics in different periods, but our assumption is met.

3. Methods

Any static covariance estimation method $M$ can be used for temporal covariance estimation by applying it in sliding-window fashion. Namely, we divide the whole timeline into periods of size $w$ and apply $M$ on each period (there are some subtleties, such as deciding whether to overlap periods, or what to do with the remaining time steps when $w$ does not divide the total number of time steps). There is no universal choice for $w$, and different methods work best with different values of $w$. Methods that work well with small values of $w$ are usually better choices for temporal covariance estimation. However, these static estimators do not exploit the temporal aspect of the problem and, in general, produce worse estimates (see Section 4).
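To make the sliding-window recipe concrete, here is a minimal sketch (ours, not part of the paper's released code) that applies a scikit-learn-style static estimator per non-overlapping window; the window size and the choice of Ledoit-Wolf as the static method are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import LedoitWolf  # any static covariance estimator works here

def sliding_window_covariances(X, window, estimator_cls=LedoitWolf):
    """Estimate one covariance matrix per non-overlapping window.

    X : array of shape (T, p), one observation per time step.
    Remaining time steps that do not fill a whole window are dropped.
    """
    covs = []
    for start in range(0, X.shape[0] - window + 1, window):
        segment = X[start:start + window]                 # samples of the current window
        covs.append(estimator_cls().fit(segment).covariance_)
    return covs

# toy usage: 200 time steps, 10 variables, windows of 20 samples
X = np.random.randn(200, 10)
print(len(sliding_window_covariances(X, window=20)))  # -> 10 estimates
```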

One way to build better temporal covariance estimation methods is to apply a static covariance estimation method in sliding-window fashion while also adding a regularization term that enforces temporal consistency of estimates (Hallac et al., 2017; Tomasi et al., 2018). We use a similar approach to extend linear CorEx (Steeg and Galstyan, 2017) to temporal covariance estimation. There are several reasons why we choose to base our method on linear CorEx. First, linear CorEx has been shown to have low sample complexity. This makes it possible to select small window sizes, which allows the method to detect short-term variations. Second, the complexity of linear CorEx scales linearly with the number of variables $p$, allowing us to apply it to extremely high-dimensional data (where $p$ is greater than the number of samples). Third, the objective of linear CorEx is differentiable, which allows us to use any variant of gradient descent to train it.

3.1. Linear CorEx

To proceed further we need some details of linear CorEx and one definition.

Definition 3.1.

A jointly Gaussian distribution over observed variables $X = (x_1, \ldots, x_p)$ and hidden variables $Z = (z_1, \ldots, z_m)$ is called a non-overlapping Gaussian latent factor (NGLF) model if the joint distribution factorizes in the following way:

$p(x, z) = \prod_{j=1}^{m} p(z_j) \prod_{i=1}^{p} p\left(x_i \mid z_{\pi_i}\right),$

where $\pi_i \in \{1, \ldots, m\}$ denotes the single parent factor of $x_i$.

The factorization of an NGLF model defines a directed probabilistic graphical model shown in Fig. 2. In other words, the NGLF model corresponds to the product of one-factor Gaussian models.


Figure 2. The directed probabilistic graphical model corresponding to an NGLF model. Gaussian latent factor models with non-overlapping structure admit an equivalent information-theoretic characterization. The figure is taken from (Steeg and Galstyan, 2017).

Linear CorEx is a latent factor model designed to uncover the structure of a multivariate Gaussian variable. It is a variant of the more general CorEx method (Ver Steeg and Galstyan, 2014). For a given $p$-dimensional Gaussian random variable $X$, the algorithm finds an $m$-dimensional Gaussian random variable $Z$ such that the joint system $(X, Z)$ is close to an NGLF model. The authors show that a jointly Gaussian distribution $(X, Z)$ is an NGLF model if and only if $TC(X \mid Z) = 0$ and $TC(Z) = 0$, where $TC(\cdot)$ denotes total correlation. This makes it possible to learn an NGLF model by minimizing $TC(X \mid Z)$ under the constraint $TC(Z) = 0$. Linear CorEx attempts to solve the following optimization problem:

(1)   $\min_{Z} \; TC(X \mid Z) \quad \text{subject to} \quad TC(Z) = 0$

Note that if $Z = WX + \epsilon$ with Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$, then the joint distribution $(X, Z)$ is Gaussian. Assuming this parametric form for $Z$, after some modification of the original optimization problem (1), the algorithm solves the following problem (the details can be found in (Steeg and Galstyan, 2017)):

(2)

where $\nu_i$ is the conditional mean of $x_i$ given $Z$, computed under the constraint that $TC(Z) = 0$ (its closed-form expression is given in (Steeg and Galstyan, 2017)).

The objective of problem (2) upper bounds the objective of problem (1), ignoring constants. Furthermore, that upper bound is tight when $TC(Z) = 0$ holds exactly. After inferring the parameters $W$, the covariance matrix of $X$ is estimated with the following formula:

(3)   $\hat{\Sigma}_X = B^\top B + D$, where the factor loading matrix $B \in \mathbb{R}^{m \times p}$ and the diagonal matrix $D$ are computed from $W$ and the second moments of the data (see (Steeg and Galstyan, 2017) for the exact expressions).

Even though linear CorEx is designed to recover the structure of NGLF models, experiments show that it can be successfully applied to non-Gaussian cases. One reason for this is that many real-world datasets can be approximated well by a Gaussian distribution. Another reason is that linear CorEx does not force the data model to correspond exactly to the probabilistic graphical model shown in Fig. 2 (which is a fairly restrictive model), but rather learns a model close to it.

3.2. Time-Varying Linear CorEx

Hereafter, we assume that the data is already split into periods. We can represent the data as a sequence of collections of observations, $x^{(1)}, \ldots, x^{(T)}$, where $x^{(t)} = \{x^{(t,i)}\}_{i=1}^{n_t}$. As mentioned earlier, we train one linear CorEx for each time period and use regularization techniques to enforce temporal consistency of estimates (i.e., enforcing adjacent time periods to have similar covariance matrix estimates). As a first attempt towards building a time-varying linear CorEx, we try to solve the following optimization problem:

(4)   $\min_{W_1, \ldots, W_T} \; \sum_{t=1}^{T} \mathcal{L}\left(W_t; x^{(t)}\right) + \lambda \sum_{t=1}^{T-1} R\left(W_{t+1} - W_t\right)$

where $\mathcal{L}\left(W_t; x^{(t)}\right)$ is the linear CorEx objective with parameters $W_t$ applied to data $x^{(t)}$, $\lambda \ge 0$ is the regularization coefficient, and $R$ is the penalty function, which in this work is either the $\ell_1$ or the $\ell_2$ vector norm. The former is suitable for systems with sudden changes, while the latter is better suited for smoothly varying systems. We name the method of Eq. (4) T-CorEx-simple.

While T-CorEx-simple follows the general framework of building a temporal covariance estimation method from a static one, it does not perform significantly better than linear CorEx applied in sliding-window fashion without any regularization term (see Table 1). This happens because of some properties of linear CorEx. Namely, in linear CorEx, the covariance estimate $\hat{\Sigma}_t$ is a function of both $W_t$ and the second moments of the data (see Eq. 3). Therefore, even when the regularization coefficient $\lambda$ goes to infinity, making $W_1 = W_2 = \cdots = W_T$, the estimates $\hat{\Sigma}_t$ still differ across periods in general. Consequently, the model can never produce very close estimates of covariance matrices for neighboring periods. If we choose to explicitly penalize the difference $\hat{\Sigma}_{t+1} - \hat{\Sigma}_t$ instead of the difference $W_{t+1} - W_t$, then in general there will be no choice of parameters $\{W_t\}$ that makes the new regularization term equal to zero.

To make the regularization effective, we note that $\hat{\Sigma}_t$ depends on the data only through second moments of the form $\mathbb{E}[x z^\top]$ and $\mathbb{E}[z z^\top]$, where the expectations are taken over the samples of period $t$ using parameters $W_t$. When $x^{(t)}$ contains a small number of samples, these quantities can be quite noisy, resulting in noisy estimates $\hat{\Sigma}_t$. To reduce the noise without increasing the window size, we propose a simple remedy. To estimate these moments for time period $t$, we use not only the samples of period $t$, but also the samples of the other time periods. Of course, samples belonging to time periods far from $t$ are less relevant than the samples belonging to time periods close to $t$. Since both moments are expectations over the sample distribution, we change the sample weights/probabilities to capture this intuition. We introduce a new hyper-parameter $\gamma \in [0, 1)$ that defines the rate at which the "importance" of a sample decays as its period moves away from the current time period $t$. Samples of time period $t'$ have weight $\gamma^{|t - t'|}$ when they are used to estimate the moments for time period $t$. Below we show how we estimate $\mathbb{E}[x z^\top]$ for time period $t$:

$\widehat{\mathbb{E}}_t\left[x z^\top\right] = \frac{\sum_{t'=1}^{T} \gamma^{|t - t'|} \sum_{i=1}^{n_{t'}} x^{(t',i)} \left(z^{(t',i)}\right)^\top}{\sum_{t'=1}^{T} \gamma^{|t - t'|} \, n_{t'}}$

We estimate $\mathbb{E}[z z^\top]$ for each time period similarly. For computational efficiency, at time period $t$ we do not consider time periods $t'$ for which the weight $\gamma^{|t - t'|}$ is negligibly small. Summing up, we get the following optimization problem:

(5)   $\min_{W_1, \ldots, W_T} \; \sum_{t=1}^{T} \mathcal{L}_{\gamma}\left(W_t; x^{(1)}, \ldots, x^{(T)}\right) + \lambda \sum_{t=1}^{T-1} R\left(W_{t+1} - W_t\right)$

where $\mathcal{L}_{\gamma}\left(W_t; x^{(1)}, \ldots, x^{(T)}\right)$ is the linear CorEx objective with parameters $W_t$ applied to the data of all periods, where the samples of period $t'$ have weight $\gamma^{|t - t'|}$. We name the method of Eq. (5) T-CorEx.

Note that along the way we extended linear CorEx to work in cases where samples have weights assigned to them. This extended method can be helpful when different samples have different importance, reliability, or relevance.
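The weighting scheme can be illustrated with the following sketch (our simplified example, not the released implementation): it estimates a second-moment matrix for period t from the samples of all periods, down-weighting period t' by gamma**|t - t'|. The moment E[x x^T], the cutoff `tol`, and the toy data are assumptions of the example; the same weighting applies to the moments used by T-CorEx.

```python
import numpy as np

def weighted_second_moment(samples_per_period, t, gamma, tol=1e-6):
    """Weighted estimate of E[x x^T] for time period t.

    samples_per_period : list of arrays, each of shape (n_t', p).
    Samples of period t' receive weight gamma ** |t - t'|; periods whose
    weight falls below `tol` are skipped, as described in the text.
    """
    p = samples_per_period[0].shape[1]
    weighted_sum, weight_total = np.zeros((p, p)), 0.0
    for t_prime, x in enumerate(samples_per_period):
        w = gamma ** abs(t - t_prime)
        if w < tol:
            continue                      # ignore far-away periods
        weighted_sum += w * x.T @ x       # weighted sum of outer products
        weight_total += w * x.shape[0]    # weighted number of samples
    return weighted_sum / weight_total

# toy usage: 5 periods with 8 samples of 4 variables each
data = [np.random.randn(8, 4) for _ in range(5)]
print(weighted_second_moment(data, t=2, gamma=0.5).shape)  # (4, 4)
```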

3.3. Implementation

The authors of linear CorEx derive a quasi-Newton optimization algorithm to train their model. Unfortunately, we could not use a similar derivation for T-CorEx. Instead, we use gradient descent to optimize the objective of T-CorEx. More specifically, in all our experiments we use the Adam optimizer (Kingma and Ba, 2014). We noticed that optimizing the objective function of linear CorEx with gradient descent requires 2-4 times more iterations than with quasi-Newton, but it eventually converges to similar solutions in terms of objective value. We initialize the weights of T-CorEx with the weights of a linear CorEx trained on all samples of all time periods. Since training of T-CorEx involves many matrix multiplications, GPUs can be used to speed up the training; for large numbers of variables, a single GPU can reduce the training time by up to a factor of 10. T-CorEx is implemented in PyTorch (Paszke et al., 2017). The code can be found on GitHub (https://github.com/harhro94/T-CorEx).
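The overall optimization structure of Eq. (5) can be sketched in PyTorch as follows. This is a schematic illustration rather than the actual T-CorEx code: the per-period loss below is a plain reconstruction error standing in for the (weighted) linear CorEx objective, and the dimensions, learning rate, and regularization coefficient are arbitrary.

```python
import torch

T, p, m = 10, 50, 8                                   # periods, observed vars, latent factors
data = [torch.randn(16, p) for _ in range(T)]         # toy samples for each period

# one m x p weight matrix per time period, trained jointly
W = [torch.randn(m, p, requires_grad=True) for _ in range(T)]
optimizer = torch.optim.Adam(W, lr=1e-2)
lam = 1.0                                             # temporal regularization coefficient

def period_loss(Wt, x):
    # placeholder for the (weighted) linear CorEx objective of one period;
    # a simple reconstruction loss keeps the sketch runnable
    z = x @ Wt.t()
    return ((x - z @ Wt) ** 2).mean()

for step in range(200):
    optimizer.zero_grad()
    loss = sum(period_loss(W[t], data[t]) for t in range(T))
    # temporal consistency penalty on neighboring weight matrices (l2 variant)
    loss = loss + lam * sum(torch.norm(W[t + 1] - W[t]) for t in range(T - 1))
    loss.backward()
    optimizer.step()
```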

3.4. Scalability

The time complexity of evaluating the linear CorEx objective for time period $t$ is $O(m p \tilde{n}_t)$, where $p$ is the number of observed variables, $m$ is the number of hidden variables, and $\tilde{n}_t$ is the number of samples used for estimating the moments of time period $t$. Since we do not consider time periods whose weight $\gamma^{|t - t'|}$ is negligibly small, the total number of weighted samples $\sum_t \tilde{n}_t$ grows linearly with the total number of samples $N$. Therefore, the time complexity of T-CorEx is $O(m p N)$. Explicitly computing the covariance matrix using Eq. 3 has $O(p^2 m)$ complexity. However, one can avoid computing it explicitly, as the covariance estimates of T-CorEx are diagonal plus low-rank matrices. Multiplying such matrices with other matrices can be made faster using their low-rank plus diagonal decomposition. One can compute the determinant of such matrices in $O(p m^2)$ time using the generalization of the matrix determinant lemma. Furthermore, the inverse of such matrices has diagonal plus low-rank form and can be computed in $O(p m^2)$ time using the Woodbury matrix identity. The memory complexity of T-CorEx is also linear in $p$. We want to emphasize that the proposed method has linear time and memory complexity w.r.t. the number of variables $p$, assuming $m$, $T$, and $N$ do not depend on $p$. To the best of our knowledge, T-CorEx is the only temporal covariance estimation method that scales linearly with $p$. Time-varying graphical lasso (TVGL) (Hallac et al., 2017) and latent variable time-varying graphical lasso (LTGL) (Tomasi et al., 2018), which are the direct competitors of our method, have $O(p^3)$ time complexity. Fig. 3 shows the scalability comparison of different temporal covariance estimation methods.
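As an illustration of the diagonal-plus-low-rank shortcuts mentioned above, the sketch below (our example, using the standard matrix determinant lemma and Woodbury identity) computes the log-determinant of $\Sigma = D + B^\top B$ and solves linear systems with it without ever forming the $p \times p$ matrix; the function names and toy dimensions are assumptions.

```python
import numpy as np

def logdet_diag_plus_lowrank(d, B):
    """log det(D + B^T B) with D = diag(d) (length p) and B of shape (m, p).

    Matrix determinant lemma: det(D + B^T B) = det(D) * det(I_m + B D^{-1} B^T),
    which costs O(p m^2 + m^3) instead of O(p^3).
    """
    m = B.shape[0]
    C = np.eye(m) + (B / d) @ B.T                 # I_m + B D^{-1} B^T, an m x m matrix
    return np.sum(np.log(d)) + np.linalg.slogdet(C)[1]

def solve_diag_plus_lowrank(d, B, y):
    """Solve (D + B^T B) x = y via the Woodbury identity in O(p m^2) time."""
    m = B.shape[0]
    C = np.eye(m) + (B / d) @ B.T
    y_over_d = y / d
    # (D + B^T B)^{-1} = D^{-1} - D^{-1} B^T C^{-1} B D^{-1}
    return y_over_d - (B / d).T @ np.linalg.solve(C, B @ y_over_d)

# toy check against the explicit p x p computation
p, m = 200, 5
d = np.random.rand(p) + 0.5
B = np.random.randn(m, p)
Sigma = np.diag(d) + B.T @ B
y = np.random.randn(p)
assert np.isclose(logdet_diag_plus_lowrank(d, B), np.linalg.slogdet(Sigma)[1])
assert np.allclose(solve_diag_plus_lowrank(d, B, y), np.linalg.solve(Sigma, y))
```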

Figure 3. Scaling comparison of time-varying graphical lasso (TVGL), latent variable time-varying graphical lasso (LTGL), and T-CorEx as the number of observed variables grows. The data is a multi-period time series generated from an NGLF model. Time-varying graphical lasso runs out of memory beyond a certain number of variables. T-CorEx methods were trained with 64 latent factors. All models were trained on a computer with an Intel i5-6600K CPU, 32GB RAM, and a GeForce GTX 1080 Ti GPU.

4. Experiments

We compare T-CorEx with other static and temporal covariance estimation methods on both synthetic and real-world datasets. The simplest baseline is the diagonal matrix with the sample variances of the corresponding variables. Other static baselines are the Ledoit-Wolf (LW) method (Ledoit and Wolf, 2004), the oracle approximating shrinkage (OAS) method (Chen et al., 2010), factor analysis (FA), sparse PCA (Zou et al., 2006; Mairal et al., 2009), linear CorEx (Steeg and Galstyan, 2017), graphical lasso (GLASSO) (Friedman et al., 2008), and latent variable graphical lasso (LVGLASSO) (Chandrasekaran et al., 2012). The temporal baselines are time-varying graphical lasso (TVGL) (Hallac et al., 2017) and latent variable time-varying graphical lasso (LTGL) (Tomasi et al., 2018). In the experiments with synthetic data, we add two additional baselines: T-CorEx-no-reg (T-CorEx with $\lambda = 0$) and T-CorEx-simple (T-CorEx with $\gamma = 0$), to show the importance of the regularization term and of the sample weights, respectively.

In all quantitative experiments we split the data into train, validation, and test sets. We use the validation set to select the hyperparameters of the baselines and report the final scores on the test set. The metric we use is the negative log-likelihood of held-out data under the estimated covariance matrices, averaged over time periods. The grid of hyperparameters for each baseline can be found in the supplementary material.
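For concreteness, the metric can be computed as in the following sketch (our illustration): the average negative log-likelihood of held-out samples under a zero-mean multivariate Gaussian with the estimated covariance, averaged over time periods.

```python
import numpy as np

def gaussian_nll(X, Sigma):
    """Average negative log-likelihood of the rows of X under N(0, Sigma)."""
    p = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.mean(np.sum((X @ np.linalg.inv(Sigma)) * X, axis=1))  # mean of x^T Sigma^{-1} x
    return 0.5 * (p * np.log(2 * np.pi) + logdet + quad)

def time_averaged_nll(test_sets, cov_estimates):
    """The metric used in the experiments: NLL averaged over time periods."""
    return np.mean([gaussian_nll(X, S) for X, S in zip(test_sets, cov_estimates)])
```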

4.1. Synthetic Data

Method   Sudden Change (training samples per period increasing →)   Smooth Change (training samples per period increasing →)
Ground Truth 196.0 196.0 196.0 196.0 196.0 230.2 230.2 230.2 230.2 230.2
Diagonal 279.6 260.6 256.8 255.6 254.9 277.4 268.3 263.6 261.5 261.4
LW 286.2 266.7 252.8 239.9 227.5 287.3 278.9 270.1 262.7 254.9
OAS 281.5 266.3 252.7 239.9 227.5 282.9 277.9 269.8 262.7 254.8
FA - 524.4 236.9 208.5 201.3 - 630.5 267.5 242.3 235.6
Sparse PCA 270.5 232.9 212.8 205.3 200.7 274.7 260.1 247.4 238.8 235.4
Linear CorEx 312.3 221.5 204.6 199.7 197.7 333.0 267.8 243.0 235.5 233.0
GLASSO 266.5 242.0 221.3 212.3 205.5 280.1 262.5 249.2 241.9 238.3
LVGLASSO 271.6 245.5 235.5 217.6 210.2 276.6 267.1 254.7 248.5 240.7
TVGL 237.5 224.5 213.4 207.6 203.7 259.0 251.9 244.0 239.4 236.7
LTGL 248.6 230.0 218.7 209.6 204.7 265.0 256.2 247.1 241.8 238.7
T-CorEx 228.0 213.8 205.3 199.6 197.7 250.6 243.3 237.6 234.5 232.7
T-CorEx-simple 275.8 217.9 204.7 199.7 197.7 294.5 261.3 242.5 235.8 233.1
T-CorEx-no-reg 245.3 228.7 207.5 199.6 197.8 259.2 252.5 241.6 235.4 232.9
Table 1. Time-averaged negative log-likelihood of the estimates produced by different methods on synthetic data. Within each scenario, columns correspond to increasing numbers of training samples per period.

We design experiments with synthetic data to compare our method with other baselines in the case when the modeling assumptions of T-CorEx are met (i.e., the data of each period is generated from an NGLF model). We generate synthetic data for two scenarios. In the first scenario, the underlying covariance matrix is constant for the first half of the time periods. Then a sudden change happens, after which the underlying covariance matrix remains constant for the remaining half of the time periods. We call this scenario sudden change. In the second scenario, the underlying covariance matrix is slowly changing from its initial form to a different one. We call this scenario smooth change. The full details of the data generation process for these two scenarios are presented in the supplementary material.

Sudden Change

To create data with a sudden change, we generate two different NGLF models, $\mathcal{M}_1$ and $\mathcal{M}_2$. The data of the first five periods is generated from $\mathcal{M}_1$, while the data of the next five periods is generated from $\mathcal{M}_2$. We generate a varying number of training samples, 16 validation samples, and 1000 testing samples for each period. The left multi-column of Table 1 shows the comparison of baselines on this type of data for a fixed number of variables and varying numbers of training samples per period.

Smooth Change

To create a synthetic dataset with smooth change, we generate two NGLF models, $\mathcal{M}_1$ and $\mathcal{M}_2$. We start from $\mathcal{M}_1$ and smoothly change the model into $\mathcal{M}_2$, so that for each time period the joint distribution remains NGLF. Let $\mathcal{M}^{(t)}$ denote the NGLF model at time period $t$. We generate a varying number of training samples, 16 validation samples, and 1000 testing samples from each $\mathcal{M}^{(t)}$. The right multi-column of Table 1 shows the comparison of baselines on this type of data for a fixed number of variables and varying numbers of training samples per period.

The results of sudden change and smooth change experiments show that the proposed method gives state-of-the-art results when NGLF modeling assumptions are met. The advantage of T-CorEx is significant when the number of samples is small. Comparing T-CorEx-simple with linear CorEx, we see that indeed, the regularization term of T-CorEx-simple is not enough to get the best performance. Furthermore, the results show that introducing sample weights improves the results significantly. Comparing T-CorEx with T-CorEx-no-reg, we conclude that even after introducing sample weights, having a regularization term that induces temporal consistency improves the results.

4.2. Stock Market Data

Next, we evaluate our method on a stock prices dataset, where the modeling assumptions are not met (i.e., the data of each time step does not correspond to an NGLF model). We take the daily sampled historical data of stock prices of the S&P 500 index from January 2000 to January 2016 (the data is downloaded using the API of tradingeconomics.com). For simplicity, we keep the stocks that are present for at least 95% of the period. The resulting dataset contains daily prices for 391 stocks. We compute the daily log-returns, standardize each variable, and add isotropic Gaussian noise with small variance.
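The preprocessing steps can be reproduced roughly as follows (a sketch under our assumptions; the noise standard deviation shown is an illustrative value, since the exact variance is not reproduced here).

```python
import numpy as np

def preprocess_prices(prices, noise_std=1e-2, seed=0):
    """prices : array of shape (T, n_stocks) with daily closing prices.

    Returns standardized daily log-returns with a small amount of
    isotropic Gaussian noise added.
    """
    rng = np.random.default_rng(seed)
    log_returns = np.diff(np.log(prices), axis=0)                   # daily log-returns
    standardized = (log_returns - log_returns.mean(axis=0)) / log_returns.std(axis=0)
    return standardized + noise_std * rng.normal(size=standardized.shape)
```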

Method (training samples per period increasing →)
Diagonal 585.6 ± 85.3  467.0 ± 106.7  407.9 ± 50.0  367.9 ± 14.0
LW 424.9 ± 40.1  384.7 ± 100.3  372.4 ± 54.6  465.6 ± 21.9
OAS 405.5 ± 34.2  376.8 ± 85.5  403.1 ± 51.9  556.8 ± 30.3
FA -  -  900.3 ± 189.0  355.7 ± 15.2
Sparse PCA 507.7 ± 71.5  382.9 ± 66.7  301.1 ± 31.2  266.5 ± 10.8
Linear CorEx 458.1 ± 47.6  330.1 ± 31.9  279.7 ± 14.6  262.2 ± 8.5
GLASSO 473.3 ± 65.5  400.3 ± 54.8  304.8 ± 33.6  272.4 ± 18.0
LVGLASSO 430.6 ± 23.6  372.7 ± 59.0  289.3 ± 22.1  282.8 ± 9.7
TVGL 335.6 ± 24.8  298.3 ± 46.0  260.5 ± 32.6  243.3 ± 6.9
LTGL 329.3 ± 18.1  308.5 ± 49.1  252.0 ± 27.9  231.8 ± 9.8
T-CorEx 313.5 ± 16.4  269.5 ± 16.1  246.1 ± 12.9  244.0 ± 4.8
Table 2. Time-averaged negative log-likelihood of the estimates produced by different methods on stock market data, shown in mean ± standard deviation format; columns correspond to increasing numbers of training samples per period. The means and standard deviations are computed over 10 random train/validation/test splits.
Figure 4. Scores of 10 random train/val/test splits. The diagonal and FA baselines are not shown to keep the plots readable.

As all baselines used in this work need the time series to be divided into periods, we break the time series into non-overlapping periods of equal length. We randomly partition the samples of each period into train, validation, and test sets. After breaking the time series into periods, we consider only the last 10 periods. Table 2 shows the results for varying numbers of training samples per period, averaged over 10 random train/validation/test splits. Again, the metric is the time-averaged negative log-likelihood of the test data under the estimates, assuming a multivariate Gaussian distribution. We justify our choice of the metric with the fact that all baselines besides the Ledoit-Wolf method assume the data has a Gaussian distribution.

For better interpretation of the results, we suggest looking at the distribution of test scores for those 10 different runs (Fig. 4). First of all, many baselines have high standard deviations, but the figure shows that this is usually caused by a few runs that produce very bad estimates. The results suggest that T-CorEx does significantly better when the number of samples is small (the two leftmost columns of Table 2). In the third column, we see that TVGL, LTGL, and T-CorEx perform similarly, even though T-CorEx has a slightly better mean because it has fewer outliers with very poor performance. At the largest sample size, LTGL starts to produce better estimates. Notably, T-CorEx always has the smallest standard deviation. We hypothesize that the assumption of T-CorEx that the data distribution is approximately NGLF is more restrictive than the sparsity assumption of TVGL and LTGL. Therefore, the variance of T-CorEx is relatively smaller. On the other hand, the assumptions of T-CorEx do not introduce large biases. In fact, we see that T-CorEx has the smallest bias when the number of samples is small, and has a comparable bias otherwise. Interestingly, as the number of samples grows, LW and OAS become worse. This happens because these methods shrink the sample covariance matrix less as the number of samples increases.

Qualitative Analysis

With the quantitative analysis we showed that the estimates of T-CorEx are much more accurate than those of linear CorEx applied in sliding-window fashion. Next, we want to see what the qualitative differences of those better estimates are for the stock market data. For this purpose we plot the entries of the inverse correlation matrix that have absolute value greater than some threshold (in our experiments the threshold is set to 0.025). This can be interpreted as plotting the adjacency matrix of a Markov random field. First, we train a linear CorEx on the whole period, ignoring the temporal aspect (the left part of Fig. 5). This shows what the system looks like "on average." We see that most of the edges are within sectors. Then we train the proposed model with a window size equal to two weeks and plot its estimate for a random time period (the middle part of Fig. 5). First, we see that the sectors are not as densely connected. Second, T-CorEx captures some dependencies that are not present for the whole period. The opposite is also true: two variables can be directly connected on average, but be disconnected for some period of time (e.g., some connections between the sectors "Materials" and "Energy"). The advantage of T-CorEx lies in its ability to detect correlations that occur only for a short period of time. Methods requiring a large number of samples (a large window size) cannot detect such correlations. To finalize our analysis, we fit linear CorEx on a window of samples around the same random time period. When the window is too small, the estimates are too noisy; when it is too large, the estimates are less related to the true statistics of that time period. The right part of Fig. 5 shows the estimate of linear CorEx for the best window size (128 samples). The estimate of linear CorEx is qualitatively different from that of T-CorEx and gives a significantly worse score, indicating the inherent problems of applying a static covariance estimation method in sliding-window fashion.
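The plotted structure can be reproduced with a short sketch like the one below (ours): the covariance estimate is converted to a correlation matrix, inverted, and thresholded at the value given in the text; the function name is an assumption.

```python
import numpy as np

def thresholded_inverse_correlation(Sigma, threshold=0.025):
    """Adjacency matrix of the Markov random field implied by Sigma.

    Converts Sigma to a correlation matrix, inverts it, and keeps entries
    whose absolute value exceeds the threshold (diagonal removed).
    """
    d = np.sqrt(np.diag(Sigma))
    corr = Sigma / np.outer(d, d)          # correlation matrix
    inv_corr = np.linalg.inv(corr)         # inverse correlation matrix
    adjacency = (np.abs(inv_corr) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    return adjacency
```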

Figure 5. Thresholded inverse correlation matrix estimates on the stock market data for three cases. Left: linear CorEx trained on whole data (2000-2016) without considering the temporal aspect. Middle: the estimate of T-CorEx for a random two-week period. Right: linear CorEx trained on 128 samples around the same two-week period. The stocks are sorted according to their sectors. For visualization purposes not all sectors are shown.

Change Point Detection

Estimated covariance/precision matrices can be used to detect shifts in the underlying system (Hallac et al., 2017; Tomasi et al., 2018). One simple way to detect changes in the underlying structure is to look at the Frobenius norm of the difference of inverse correlation matrices of neighboring time periods. Fig. 6 shows that from 2000 to 2016, T-CorEx detects all but one of the major US stock market events. Interestingly, for some events major changes are visible up to two months earlier. However, the algorithm cannot be used for early detection because both the regularization term and the temporal smoothing exploit data of future time steps. This is also true for the TVGL and LTGL methods. Fig. 6 also shows the same analysis for sliding-window linear CorEx with the optimal window size. The peaks of linear CorEx align much worse with the ground-truth events. Furthermore, the estimates of linear CorEx are temporally less consistent. As expected, linear CorEx does not detect changes that occur in very short periods, such as the changes related to the September 11 attacks and the 2010 flash crash.
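A sketch of this change-point statistic (ours, with assumed function and variable names) is given below; for the 391-stock data the explicit inversion is cheap, while the fMRI experiment in Section 4.3 instead relies on the diagonal-plus-low-rank shortcuts of Section 3.4.

```python
import numpy as np

def change_point_scores(cov_estimates):
    """Frobenius norm of the difference of inverse correlation matrices
    of neighboring time periods; peaks indicate abrupt structural changes."""
    inv_corrs = []
    for Sigma in cov_estimates:
        d = np.sqrt(np.diag(Sigma))
        inv_corrs.append(np.linalg.inv(Sigma / np.outer(d, d)))
    return [np.linalg.norm(inv_corrs[t + 1] - inv_corrs[t], ord="fro")
            for t in range(len(inv_corrs) - 1)]
```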

Figure 6. Frobenius norm of the difference of inverse correlation matrices of neighboring time periods for T-CorEx (left axis) and linear CorEx (right axis). The marked events on the time axis correspond to major US stock market events (listed in https://en.wikipedia.org/w/index.php?title=List_of_stock_market_crashes_and_bear_markets&oldid=859807280): the collapse of a technology bubble (10 March 2000), the stock market downturn of 2002 (9 Oct 2002), the start of the US bear market of 2007-09 (11 Oct 2007), the financial crisis of 2007-08 (16 Sep 2008), the 2010 flash crash (6 May 2010), the August 2011 stock markets fall (1 Aug 2011), and the 2015-16 stock market selloff (18 Aug 2015).

4.3. High-Resolution fMRI Data

To demonstrate the scalability and usefulness of the proposed method, we apply it to high-dimensional functional magnetic resonance images (fMRI). The most common measurement in fMRI is blood oxygen level-dependent (BOLD) contrast, which measures blood flow changes in biological tissues (“activation”). In a typical fMRI session, which lasts 3-10 minutes, hundreds of high-resolution brain images are captured, each having 100K-600K volumetric pixels (voxels). Correlation analysis is widely used to study functional connections between brain regions (Preti et al., 2017). While in general these analyses are conducted assuming static covariance, recently time-varying covariance (“dynamic functional connectivity”) has received more attention (Chang and Glover, 2010; Lurie et al., 2018). This latter case is exactly the use case of T-CorEx.

We demonstrate the feasibility of T-CorEx in fMRI analysis by inducing an artificial change point in an otherwise stable time series. While the induced shift is clearly synthetic, this experiment shows possible value for the fMRI community in detecting natural change points and/or using T-CorEx for more nuanced domain-specific analyses, demonstrating that T-CorEx can scale to the 100K+ variable regime. First, we fit T-CorEx on a resting-state fMRI session from the MyConnectome project (Poldrack et al., 2015); we use the processed resting-state fMRI of session 014 (dataset version 1.0.4) and apply spatial smoothing using a Gaussian filter with fwhm=8mm. The session has 518 time-point volumes, each having 148262 voxels. We divide the whole period into 20 non-overlapping periods, ignoring the first 18 time-points.

The blue curve in Fig. 7 shows the Frobenius norm of differences of inverse correlation matrices of neighboring time periods (although the correlation matrices are extremely large, we are able to compute the inverses and norms of differences, since the estimates of T-CorEx are diagonal plus low-rank matrices). We see relatively large variability in the beginning and in the end of the session. Next, we consider the middle 12 periods (i.e., removing 4 periods from both sides). We swap the first 6 and the last 6 of those periods, creating an artificial change in the middle, and retrain T-CorEx on the resulting data. The orange plot of Fig. 7 shows the Frobenius norm of differences of inverse correlation matrices of neighboring time periods for this case. T-CorEx detects the shift we created, while other methods do not scale to this regime.

Figure 7. Blue – Frobenius norm of the difference of inverse correlation matrices of neighboring time periods for a resting state fMRI session. Orange – the same statistics for a T-CorEx trained on the middle 12 time periods after swapping its first 6 and last 6 time periods.
(a) Right Brodmann Area 40, with symmetric Left Brodmann Area 40 activation
(b) Left Brodmann Area 44/45 (Broca’s Area), with asymmetric activations
(c) Left Brodmann Area 6, with asymmetric frontal and parietal activations
Figure 8. Some of the clusters found by T-CorEx for time period 12. The cross-hairs correspond to the specified regions.

T-CorEx can also provide secondary analyses by grouping correlated regions. We assign each voxel to the latent factor that has the largest mutual information with it, forming groups by each factor. This produces clusters of voxels for each time period. Fig. 8 shows three clusters from time period 12, identified using NeuroSynth (Yarkoni et al., 2011). Clusters displayed in (a) and (b) correspond to standard anatomic regions (Brodmann, 1909), namely both left and right Brodmann area 40, as well as left Brodmann areas 44/45 (Broca’s Area), which is known to be a highly lateralized (asymmetric) region. Cluster (c) displays distributed activation in the left hemisphere. We found that cluster (a) is present in all time periods; (b) is present starting at time period 3. These two clusters exhibit small variations over time. Cluster (c) is present starting at time period 11, but varies more compared to the other two clusters.
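The grouping step can be sketched as follows (our illustration): for jointly Gaussian variables the mutual information between a variable and a factor is $-\frac{1}{2}\log(1-\rho^2)$, which is monotone in $|\rho|$, so assigning each voxel to the factor with the largest mutual information reduces to an argmax over absolute correlations; the matrix `rho` of factor-voxel correlations is an assumed input.

```python
import numpy as np

def cluster_by_latent_factor(rho):
    """rho : (m, p) matrix of correlations between latent factors and variables.

    MI(x_i; z_j) = -0.5 * log(1 - rho_ij^2) for Gaussian variables, which is
    monotone in |rho_ij|, so each variable goes to its most correlated factor.
    """
    assignments = np.argmax(np.abs(rho), axis=0)          # factor index per variable
    return {j: np.where(assignments == j)[0] for j in range(rho.shape[0])}
```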

5. Related Work

Many works have addressed the problem of high-dimensional covariance estimation (Fan et al., 2016). One major direction of estimating high-dimensional covariance matrices is introducing sparsity constraints. Sparsity constraints are often used to reduce the effective number of parameters of the model, to regularize the model or to incorporate our beliefs. Sparse covariance matrix estimation for multivariate Gaussian random variables is investigated in (Bien and Tibshirani, 2011). Most often sparsity constraints are imposed on the inverse covariance matrix (precision matrix). The precision matrix encodes the conditional independences between pairs of variables. Learning a sparse precision matrix corresponds to learning a sparse graphical model (Lauritzen, 1996; Banerjee et al., 2008). The graphical lasso method (Friedman et al., 2008) does sparse inverse covariance matrix estimation for multivariate Gaussian random variables. The problem of network inference using graphical lasso is well studied (Friedman et al., 2008; Yuan and Lin, 2007; Ravikumar et al., 2011; Meinshausen and Bühlmann, 2006).

In many real-world applications, there are latent factors influencing the system. Modeling those factors usually leads to better estimates of the covariance matrix of the observed variables. Factor analysis and probabilistic PCA (Tipping and Bishop, 1999) are latent factor models that can be used for covariance estimation. However, they fail in undersampled regimes. Sparse PCA (Zou et al., 2006; Mairal et al., 2009) remedies this problem. Covariance estimation can also be done by learning graphical models with latent factors (Choi et al., 2011, 2010; Chandrasekaran et al., 2012). The latent variable graphical lasso method (Chandrasekaran et al., 2012) learns a sparse graphical model with latent factors. Linear CorEx (Steeg and Galstyan, 2017) is another latent factor model and is central to this work. Some details of linear CorEx are provided in Subsection 3.1.

Many works extended a particular covariance estimation method for temporal covariance estimation. Sparse PCA has been adapted for high-dimensional multivariate vector autoregressive time series (Wang et al., 2013). The time-varying graphical lasso method (Hallac et al., 2017) extends graphical lasso. It breaks down the time series into periods and applies graphical lasso on each period, while also adding a regularization term for inducing temporal consistency of estimates. In a similar fashion, latent variable time-varying graphical lasso (Tomasi et al., 2018) extends latent variable graphical lasso. T-CorEx takes the same approach, but has a couple of differences. Instead of forcing sparsity, T-CorEx attempts to learn models closer to NGLF. Additionally, unlike T-CorEx, none of the mentioned methods use weighted samples for estimating their parameters.

The time and memory complexity of a method are crucial in high-dimensional settings. All non-CorEx temporal covariance estimation methods listed here have at least quadratic time complexity with respect to the number of observed variables, $p$. In fact, most of them have $O(p^3)$ time complexity, as they require computing the inverse and/or SVD of a $p \times p$ matrix. Most methods store $p \times p$ matrices explicitly. All these methods become inapplicable for systems having more than a few thousand variables. There are successful attempts to build faster static covariance/precision matrix estimation methods (Hsieh et al., 2014; Hsieh et al., 2013; Zhang et al., 2018; Honorio and Jaakkola, 2013). To our knowledge, our proposed method is the only temporal covariance estimation method that has linear time and memory complexity with respect to $p$.

6. Conclusion and Future Work

We developed a novel temporal covariance estimation method called T-CorEx. The proposed method has linear time and memory complexity with respect to the number of observed variables and makes minimal assumptions about the dynamics of the system. We evaluated our method on both synthetic and real-world datasets, showing state-of-the-art results in highly undersampled regimes. We also studied the range of possible applications of T-CorEx.

In future research we aim to simplify the hyperparameter selection of T-CorEx. An interesting direction of research is inferring the sample weights automatically. A successful implementation of this would increase the modeling capacity of T-CorEx. Another interesting extension of the model is adapting it to work in the online regime; in its current form, the method exploits samples of future time steps, which prevents its use for early detection and forecasting.

Acknowledgements.
The authors thank Federico Tomasi for his valuable help on training latent variable time-varying graphical lasso, and Neal Lawton for insightful discussions. This research is based upon work supported in part by DARPA, via W911NF-16-1-0575, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via 2016-16041100004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Additionally, H. Harutyunyan is supported by an Annenberg fellowship.

References

  • Ahmed and Xing (2009) Amr Ahmed and Eric P Xing. 2009. Recovering time-varying networks of dependencies in social and biological studies. Proceedings of the National Academy of Sciences 106, 29 (2009), 11878–11883.
  • Banerjee et al. (2008) Onureena Banerjee, Laurent El Ghaoui, and Alexandre d’Aspremont. 2008. Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. J. Mach. Learn. Res. 9 (June 2008), 485–516.
  • Bien and Tibshirani (2011) Jacob Bien and Robert J. Tibshirani. 2011. Sparse estimation of a covariance matrix. Biometrika 98, 4 (2011), 807–820.
  • Brodmann (1909) Korbinian Brodmann. 1909. Vergleichende Lokalisationslehre der Grosshirnrinde in ihren Prinzipien dargestellt auf Grund des Zellenbaues. Barth.
  • Chandrasekaran et al. (2012) Venkat Chandrasekaran, Pablo A. Parrilo, and Alan S. Willsky. 2012. Latent variable graphical model selection via convex optimization. Ann. Statist. 40, 4 (08 2012), 1935–1967.
  • Chang and Glover (2010) Catie Chang and Gary H. Glover. 2010. Time–frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage 50, 1 (2010), 81 – 98.
  • Chen et al. (2010) Y. Chen, A. Wiesel, Y. C. Eldar, and A. O. Hero. 2010. Shrinkage Algorithms for MMSE Covariance Estimation. IEEE Transactions on Signal Processing 58, 10 (Oct 2010), 5016–5029.
  • Choi et al. (2010) Myung Jin Choi, Venkat Chandrasekaran, and Alan S Willsky. 2010. Gaussian multiresolution models: Exploiting sparse Markov and covariance structure. IEEE Transactions on Signal Processing 58, 3 (2010), 1012–1024.
  • Choi et al. (2011) Myung Jin Choi, Vincent YF Tan, Animashree Anandkumar, and Alan S Willsky. 2011. Learning latent tree graphical models. Journal of Machine Learning Research 12, May (2011), 1771–1812.
  • De Nard et al. (2018) Gianluca De Nard, Olivier Ledoit, Michael Wolf, et al. 2018. Factor models for portfolio selection in large dimensions: the good, the better and the ugly. Technical Report. Department of Economics-University of Zurich.
  • Engle et al. (2017) Robert F. Engle, Olivier Ledoit, and Michael Wolf. 2017. Large Dynamic Covariance Matrices. Journal of Business & Economic Statistics 0, 0 (2017), 1–13.
  • Fan et al. (2016) Jianqing Fan, Yuan Liao, and Han Liu. 2016. An overview of the estimation of large covariance and precision matrices. The Econometrics Journal 19, 1 (2016), C1–C32.
  • Friedman et al. (2008) Jerome Friedman, Trevor Hastie, and Robert Tibshirani. 2008. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9, 3 (2008), 432–441.
  • Hallac et al. (2017) David Hallac, Youngsuk Park, Stephen Boyd, and Jure Leskovec. 2017. Network Inference via the Time-Varying Graphical Lasso. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’17). ACM, New York, NY, USA, 205–213.
  • Honorio and Jaakkola (2013) Jean Honorio and Tommi Jaakkola. 2013. Inverse Covariance Estimation for High-dimensional Data in Linear Time and Space: Spectral Methods for Riccati and Sparse Models. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI’13). AUAI Press, Arlington, Virginia, United States, 291–300.
  • Hsieh et al. (2014) Cho-Jui Hsieh, Mátyás A. Sustik, Inderjit S. Dhillon, and Pradeep Ravikumar. 2014. QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation. Journal of Machine Learning Research 15 (2014), 2911–2947.
  • Hsieh et al. (2013) Cho-Jui Hsieh, Mátyás A Sustik, Inderjit S Dhillon, Pradeep K Ravikumar, and Russell Poldrack. 2013. BIG & QUIC: Sparse inverse covariance estimation for a million variables. In Advances in Neural Information Processing Systems. 3165–3173.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
  • Lauritzen (1996) Steffen L Lauritzen. 1996. Graphical models. Vol. 17. Clarendon Press.
  • Ledoit and Wolf (2004) Olivier Ledoit and Michael Wolf. 2004. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis 88, 2 (2004), 365 – 411.
  • Lurie et al. (2018) Daniel J Lurie, Daniel Kessler, Danielle S Bassett, Richard F Betzel, Michael Breakspear, Shella Keilholz, Aaron Kucyi, Raphaël Liégeois, Martin A Lindquist, Anthony R McIntosh, and et al. 2018. On the nature of resting fMRI and time-varying functional connectivity. https://doi.org/10.31234/osf.io/xtzre
  • Mairal et al. (2009) Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. 2009. Online Dictionary Learning for Sparse Coding. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML ’09). ACM, New York, NY, USA, 689–696.
  • Meinshausen and Bühlmann (2006) Nicolai Meinshausen and Peter Bühlmann. 2006. High-Dimensional Graphs and Variable Selection with the Lasso. The Annals of Statistics 34, 3 (2006), 1436–1462.
  • Monti et al. (2014) Ricardo Pio Monti, Peter Hellyer, David Sharp, Robert Leech, Christoforos Anagnostopoulos, and Giovanni Montana. 2014. Estimating time-varying brain connectivity networks from functional MRI time series. NeuroImage 103 (2014), 427 – 443.
  • Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.
  • Poldrack et al. (2015) Russell A Poldrack, Timothy O Laumann, Oluwasanmi Koyejo, Brenda Gregory, Ashleigh Hover, Mei-Yen Chen, Krzysztof J Gorgolewski, Jeffrey Luci, Sung Jun Joo, Ryan L Boyd, et al. 2015. Long-term neural and physiological phenotyping of a single human. Nature communications 6 (2015), 8885.
  • Preti et al. (2017) Maria Giulia Preti, Thomas AW Bolton, and Dimitri Van De Ville. 2017. The dynamic functional connectome: State-of-the-art and perspectives. NeuroImage 160 (2017), 41 – 54. Functional Architecture of the Brain.
  • Ravikumar et al. (2011) Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. 2011. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Statist. 5 (2011), 935–980.
  • Steeg and Galstyan (2017) Greg Ver Steeg and Aram Galstyan. 2017. Low complexity gaussian latent factor models and a blessing of dimensionality. arXiv preprint arXiv:1706.03353 (2017).
  • Tipping and Bishop (1999) Michael E Tipping and Christopher M Bishop. 1999. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 61, 3 (1999), 611–622.
  • Tomasi et al. (2018) Federico Tomasi, Veronica Tozzo, Saverio Salzo, and Alessandro Verri. 2018. Latent Variable Time-varying Network Inference. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’18). ACM, New York, NY, USA, 2338–2346.
  • Ver Steeg and Galstyan (2014) Greg Ver Steeg and Aram Galstyan. 2014. Discovering Structure in High-Dimensional Data Through Correlation Explanation. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 577–585.
  • Wang et al. (2013) Zhaoran Wang, Fang Han, and Han Liu. 2013. Sparse Principal Component Analysis for High Dimensional Multivariate Time Series. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research), Carlos M. Carvalho and Pradeep Ravikumar (Eds.), Vol. 31. PMLR, Scottsdale, Arizona, USA, 48–56.
  • Yarkoni et al. (2011) Tal Yarkoni, Russell A Poldrack, Thomas E Nichols, David C Van Essen, and Tor D Wager. 2011. Large-scale automated synthesis of human functional neuroimaging data. Nature methods 8, 8 (2011), 665.
  • Yuan and Lin (2007) Ming Yuan and Yi Lin. 2007. Model Selection and Estimation in the Gaussian Graphical Model. Biometrika 94, 1 (2007), 19–35.
  • Zhang et al. (2018) Richard Zhang, Salar Fattahi, and Somayeh Sojoudi. 2018. Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholmsmässan, Stockholm Sweden, 5766–5775.
  • Zou et al. (2006) Hui Zou, Trevor Hastie, and Robert Tibshirani. 2006. Sparse Principal Component Analysis. Journal of Computational and Graphical Statistics 15, 2 (2006), 265–286.

Supplementary Material: Efficient Covariance Estimation from Temporal Data

Appendix A Details of Generating NGLF Data

Figure 9. An example of a covariance matrix corresponding to a NGLF model. The variables are sorted according to their parents.

To describe an NGLF model, it is enough to specify six quantities: $p$ – the number of observed variables, $m$ – the number of hidden variables, the means and variances of the variables, $\pi_i$ – the parent of $x_i$, and $\rho_i$ – the correlation of $x_i$ and its parent. Note that the moments of $z$ do not affect the marginal distribution of $x$, so we set $\mathbb{E}[z_j] = 0$ and $\mathrm{Var}(z_j) = 1$. We also set $\mathbb{E}[x_i] = 0$ and $\mathrm{Var}(x_i) = 1$. Summing up, we need only $\pi$ and $\rho$. To define the correlations $\rho_i$, for each $x_i$ we first sample its signal-to-noise ratio (snr) uniformly from [0, 5] and then set the correlation between $x_i$ and its parent to $\rho_i = \sqrt{\mathrm{snr}_i / (1 + \mathrm{snr}_i)}$. This way we control the average signal-to-noise ratio, similar to the experiments done in (Steeg and Galstyan, 2017). An example of a covariance matrix corresponding to an NGLF model is shown in Fig. 9.
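A minimal sketch of sampling from such an NGLF model is shown below (our illustration, following the description above): latent factors are standard normal and each observed variable equals $\rho_i$ times its parent plus independent noise, so that its variance is 1. The group sizes and the SNR-to-correlation conversion in the toy usage follow our reading of the setup and are assumptions of the example.

```python
import numpy as np

def sample_nglf(n, pi, rho, m, seed=0):
    """Draw n samples from an NGLF model.

    pi  : int array of length p, pi[i] is the index of the parent factor of x_i
    rho : array of length p, correlation between x_i and its parent
    Each factor is N(0, 1); x_i = rho_i * z_{pi_i} + sqrt(1 - rho_i^2) * eps_i,
    so Var(x_i) = 1 and corr(x_i, z_{pi_i}) = rho_i.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, m))                      # latent factors
    eps = rng.normal(size=(n, len(pi)))              # independent noise
    return rho * z[:, pi] + np.sqrt(1.0 - rho ** 2) * eps

# toy usage: 32 variables grouped under 4 factors, SNR drawn uniformly from [0, 5]
p, m = 32, 4
pi = np.repeat(np.arange(m), p // m)
snr = np.random.uniform(0, 5, size=p)
rho = np.sqrt(snr / (1 + snr))
samples = sample_nglf(1000, pi, rho, m)
print(np.round(np.corrcoef(samples, rowvar=False)[0, 1], 2))  # within-group correlation
```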

As stated in the main text, for creating a synthetic dataset with smooth change, we generate two NGLF models, $\mathcal{M}_1$ and $\mathcal{M}_2$. Let the former be characterized by $(\pi^{(1)}, \rho^{(1)})$ and the latter by $(\pi^{(2)}, \rho^{(2)})$. We start from $\mathcal{M}_1$ and smoothly change the model into $\mathcal{M}_2$, so that for each time period the joint distribution remains NGLF. We define the correlations of the model at time period $t$ by interpolating between the two endpoints:

$\rho^{(t)}_i = (1 - \alpha_t)\,\rho^{(1)}_i + \alpha_t\,\rho^{(2)}_i$, with $\alpha_t$ increasing from 0 to 1 over the $T$ periods. To define the parents $\pi^{(t)}$, we randomly select when each variable will change its parent from $\pi^{(1)}_i$ to $\pi^{(2)}_i$. Formally, we sample a change time $\tau_i$ uniformly over the time periods and set $\pi^{(t)}_i = \pi^{(1)}_i$ if $t < \tau_i$; otherwise we set $\pi^{(t)}_i = \pi^{(2)}_i$.

Appendix B Implementation Details

In this section we discuss the implementations of the baselines. We use the scikit-learn implementations of the LW, OAS, FA, and Sparse PCA methods. We use the original implementations of linear CorEx, TVGL, and LTGL. For GLASSO, we tried the scikit-learn implementation, the QUIC method (http://www.cs.utexas.edu/~sustik/QUIC/), and TVGL with the temporal regularization coefficient set to zero (which reduces it to per-period graphical lasso). In our experiments, the TVGL-based implementation was always better, so we selected it. For LVGLASSO, we used the implementation available in the REGAIN repository (https://github.com/fdtomasi/regain), which also contains the original implementation of LTGL.

Appendix C Hyperparameter Grids

Method Hyperparameters
Sparse PCA
alpha [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
max_iter 500
Linear CorEx
max_iter 500
GLASSO
lamb [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]
max_iter 500
LVGLASSO
alpha [0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
tau [1.0, 3.0, 10.0, 30.0, 100.0, 300.0]
rho
max_iter 500
TVGL
lamb [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]
beta [0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
indexOfPenalty [1, 2, 3]
max_iter 500
LTGL
alpha [0.3, 1.0, 3.0, 10.0]
tau [10.0, 30.0, 100.0, 300.0, 1e3]
beta [1.0, 3.0, 10.0, 30.0, 100.0]
psi [l1, l2, Laplacian]
eta [3.0, 10.0, 30.0]
phi [l1, l2, Laplacian]
rho
max_iter 500
T-CorEx
l1 [0.0, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
l2 [0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0]
gamma [1e-9, 0.1, 0.3, 0.4, 0.5, 0.6, 0.7]
max_iter 500
Table 3. Hyperparameter grids in the sudden change experiment.

In this section we describe the hyperparameter grids we used in our quantitative experiments. Tables 3, 4, and 5 describe the grids of hyperparameters we tried in the sudden change, smooth change, and stock market experiments respectively. Note that the excluded baselines (besides T-CorEx-simple and T-CorEx-no-reg) have no hyperparameters. In all our experiments we made sure that increasing the "max_iter" parameter does not improve the results of any baseline. Also, we made sure that the best values of hyperparameters are not the corner values of the grids. In the sudden and smooth change experiments we know the ground-truth number of latent factors $m$. Therefore, we do not search for the best value of $m$ for the FA, sparse PCA, linear CorEx, and T-CorEx baselines. This way we compare the modeling performance of our baselines, rather than how they depend on the choices of their hyperparameters. In T-CorEx, the "l1" and "l2" hyperparameters are mutually exclusive, meaning that if one is set to a non-zero value, the other one is set to zero. In LVGLASSO and LTGL, we set "rho" as a function of the number of training samples in each period; these methods are not sensitive to this hyperparameter. We found this choice in their code repository.

Method Hyperparameters
Sparse PCA
alpha [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
max_iter 500
Linear CorEx
max_iter 500
GLASSO
lamb [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]
max_iter 500
LVGLASSO
alpha [0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
tau [1.0, 3.0, 10.0, 30.0, 100.0, 300.0]
rho
max_iter 500
TVGL
lamb [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]
beta [0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
indexOfPenalty [1, 2, 3]
max_iter 500
LTGL
alpha [3.0, 10.0, 30.0, 1e2]
tau [30.0, 100.0, 300.0, 1e3, 3e3]
beta [1.0, 3.0, 10.0, 30.0, 100.0]
psi [l1, l2, Laplacian]
eta [1.0, 3.0, 10.0]
phi [l1, l2, Laplacian]
rho
max_iter 500
T-CorEx
l1 [0.0, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
l2 [0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0]
gamma [1e-9, 0.1, 0.3, 0.4, 0.5, 0.6, 0.7]
max_iter 500
Table 4. Hyperparameter grids in the smooth change experiment.
Method Hyperparameters
FA
n_components [16, 32, 64, 128]
Sparse PCA
alpha [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
n_components [16, 32, 64, 128]
max_iter 500
Linear CorEx
n_hidden ($m$) [16, 32, 64, 128]
max_iter 500
GLASSO
lamb [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]
max_iter 500
LVGLASSO
alpha [0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
tau [1.0, 3.0, 10.0, 30.0, 100.0, 300.0]
rho
max_iter 500
TVGL
lamb [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]
beta [0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
indexOfPenalty [1, 2]
max_iter 500
LTGL
alpha [0.3, 1.0, 3.0, 10.0]
tau [30.0, 100.0, 300.0, 1e3]
beta [10.0, 30.0, 100.0]
psi [l1, l2, Laplacian]
eta [0.3, 1.0, 3.0]
phi [l1, l2, Laplacian]
rho
max_iter 500
T-CorEx
l1 [0.0, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
gamma [1e-9, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
n_hidden ($m$) [16, 32, 64, 128]
max_iter 500
Table 5. Hyperparameter grids in the stock market experiment.