Parametric Risk Parity
Abstract
Any optimization algorithm based on the risk parity approach requires the formulation of portfolio total risk in terms of marginal contributions. In this paper we use the independence of the underlying factors in the market to derive the centered moments required in the risk decomposition process when the modified versions of Value at Risk and Expected Shortfall are considered.
The choice of the Mixed Tempered Stable distribution appears adequate for fitting skewed and heavy-tailed return distributions, and the existence of analytical higher-order moments allows a detailed description of the optimization procedure. Better results are achieved in terms of out-of-sample performance and greater diversification.
1 Introduction
Nowadays there is much more emphasis on the sources of risk rather than only on their levels. In addition to the interest in the marginal contribution to risk of a particular factor, we have to deal with new concepts such as Risk Parity. It is an approach in portfolio management that focuses on the allocation of risk rather than on the allocation of capital (see Denis et al., 2011, for further details): it suggests that in a well-diversified portfolio all asset classes should have the same marginal contribution to the total risk of the portfolio.
In the financial literature, non-parametric methods based on historical simulation have been studied deeply but, as observed in Meucci (2009), an approach that takes into account only past realizations of the variables of interest depends on the choice of the time interval. Stability of the estimates requires large sample sizes (see for example Martellini and Ziemann (2010) and Hitaj and Mercuri (2013) in the context of sample moments applied to the portfolio selection problem), but on the other hand realizations observed in the distant past can be less representative since market conditions may have changed in the meantime. A simple answer to this problem is the use of exponentially decaying weights for the observations, i.e. instead of giving equal weight to each observation in the past we consider recent realizations as more relevant. But in so doing we may wrongly give little importance to past scenarios that realized in conditions similar to today's market. Parametric distributions flexible enough to fit time series of financial returns can be a starting point for procedures based on estimates of moments that retain good statistical properties even for moderate sample sizes.
Recently a new class of distributions, named Mixed Tempered Stable distribution (MixedTS hereafter), has been introduced in Rroji and Mercuri (2014a, b). The idea is to generalize the Normal Variance Mean Mixtures (NVMM henceforth) by substituting the normality assumption with the Tempered Stable distribution (see Cont and Tankov, 2003). In this way the new distribution overcomes some limits of the NVMM. In particular, the MixedTS is more flexible in capturing the higher moments: in the NVMM the sign of the skewness is given by the sign of the drift parameter and its level depends on the mixing random variable and the drift parameter, while in the MixedTS the asymmetry depends also on the tempering parameters of the Tempered Stable distribution.
A similar argument holds for the kurtosis: for particular choices of the tempering parameters, the tail behavior of the MixedTS varies from semi-heavy (i.e. the tail decays exponentially) to heavy (power-law decay), while the tail behavior of the NVMM depends only on the tail behavior of the mixing random variable (see Barndorff-Nielsen et al., 1982, for a complete discussion on tail behavior). Here we find an advantage of the MixedTS in modeling financial returns, since we do not need to know a priori whether a heavy or semi-heavy distribution has to be considered.
The main contribution of this paper is the introduction of a general setup for obtaining risk parity portfolios by modeling directly the underlying factors in a given market. For factor identification, we apply the Independent Component Analysis (ICA) introduced in Comon (1994). Details and algorithms on this subject are given in Hyvarinen et al. (2001). Exploiting the ability of ICA to decompose the observed signals into independent random variables, in the proposed approach we only need to model each individual component, since the dependence structure of the factors is captured by the mixing matrix obtained through the algorithm.
From the Euler theorem for homogeneous functions we have that a homogeneous risk measure can be written as a weighted sum of the marginal risk contributions, where the weights are the exposures to the factors (see Tasche, 1999, for a complete treatment). Consequently, risk parity portfolios are obtained as the solution of a constrained minimization problem, as proposed for example in Maillard et al. (2010).
In this paper we focus on three standard homogeneous risk measures: Volatility, Value at Risk (VaR) and Expected Shortfall (ES). In particular, for the last two measures we consider the modified versions proposed in Zangari (1996) for the VaR and in Boudt et al. (2007) for the ES. The idea behind both modified measures is to consider asymptotic expansions of the underlying distribution based on the first four moments which, in our approach, can be easily derived using the ICA decomposition and assuming the factors to be MixedTS-distributed.
The outline of the paper is as follows. In Section 2 we briefly recall the risk parity approach and its connection with other portfolio optimization methods. The main results concerning the Mixed Tempered Stable distribution are reviewed in Section 3 while in Section 4 we analyze the risk parity approach for portfolio optimization using the modified VaR and the modified ES. Empirical results are given in Section 5 and Section 6 concludes the paper.
2 Portfolio construction using the Risk Parity approach
Risk parity is an approach that allocates risk rather than capital. It overcomes some of the limits of standard approaches such as mean-variance optimization. Indeed, as observed in Maillard et al. (2010), the mean-variance approach has two drawbacks in practice. First, optimal portfolios tend to be concentrated in a few assets. Second, small changes in the estimated parameters give rise to large modifications of the optimal portfolio, an instability that, as remarked by Merton (1980), is most severe for the estimation of the portfolio expected return. To mitigate this lack of stability, researchers have proposed several regularization techniques; the most used are the resampling of the objective function proposed by Michaud (1989) and the shrinkage estimators of the covariance matrix developed in Ledoit and Wolf (2003). In the literature we also find heuristic approaches that do not require return estimation, such as the Equally Weighted (EW), Equal Risk Contributions (ERC) or Minimum Variance (MV) portfolios. These methods place constraints directly on the portfolio weights and do not require sophisticated optimization routines. They are not completely distant from each other: for example, equally weighted portfolios can be seen as a particular case of equal risk contributions when all factors have the same risk and the same pairwise correlation.
A common way of expressing the portfolio return $r_p$ is as a linear combination of the factor returns $f_1, \dots, f_n$ with weights given by the portfolio exposures $w = (w_1, \dots, w_n)$:

(1) $r_p = \sum_{i=1}^{n} w_i f_i$
Identifying all the factors that influence portfolio returns is not an easy task but, once we have them, a key concept in risk analysis is the marginal contribution to risk (MRC) of a factor or an asset class, defined for a risk measure $\rho(w)$ as:

(2) $MRC_i = \dfrac{\partial \rho(w)}{\partial w_i}$
This quantity represents the additional risk of the portfolio for each additional unit of exposure to the $i$-th factor. Of particular interest is the product of the exposure with the marginal contribution to risk, known as the total risk contribution (TRC):

(3) $TRC_i = w_i \dfrac{\partial \rho(w)}{\partial w_i}$
The use of the TRC makes risk attribution easier to understand, since it splits total risk into additive portions that sum to the portfolio total risk.
Risk parity, like other portfolio optimization rules, aims at identifying portfolio weights (or exposures) that satisfy a certain criterion. In practice, the TRC must be the same for each factor considered in the portfolio construction. Maillard et al. (2010) propose to perform the following minimization to get the desired weights:

(4) $\min_{w} \sum_{i=1}^{n}\sum_{j=1}^{n}\left(TRC_i - TRC_j\right)^2$

subject to $\sum_{i=1}^{n} w_i = 1$ and $w_i \geq 0$, $i = 1, \dots, n$,

where the inequality constraints refer to the no-short-selling condition.
It is worth noting that the objective function in the optimization problem introduces a penalty when TRCs are different from each other. In this way, the resulting portfolio has similar TRC for each considered factor.
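For illustration, the minimization (4) can be sketched numerically for the volatility risk measure. The sketch below uses a purely illustrative covariance matrix and scipy's SLSQP solver as a stand-in for whatever routine the authors employed:

```python
# Minimal sketch of the ERC optimization (4) for the volatility risk measure.
# The covariance matrix is illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def trc(w, cov):
    """Total risk contributions TRC_i = w_i * (Sigma w)_i / sigma_p."""
    sigma_p = np.sqrt(w @ cov @ w)
    return w * (cov @ w) / sigma_p

def erc_objective(w, cov):
    """Penalty (4): sum of squared pairwise differences between the TRCs."""
    t = trc(w, cov)
    return np.sum((t[:, None] - t[None, :]) ** 2)

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
n = cov.shape[0]
res = minimize(erc_objective, np.ones(n) / n, args=(cov,), tol=1e-14,
               bounds=[(0.0, 1.0)] * n,                 # no short selling
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w_erc = res.x   # at the optimum all TRCs are (numerically) equal
```

At the solution the weights roughly follow inverse volatility, and the three total risk contributions coincide up to numerical tolerance.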
3 Mixed Tempered Stable distribution
In this section we review the main results on the Mixed Tempered Stable distribution introduced in Rroji and Mercuri (2014a) and investigate methods for computing risk measures in the univariate case. Before introducing the MixedTS we start from the definition of Normal Variance Mean Mixtures.
NVMM models are based on the normality assumption, which we try to generalize. A Normal Variance Mean Mixture has the form:

(5) $X = \mu_0 + \mu V + \sqrt{V}\, Z, \qquad Z \sim N(0,1),$

where the parameters $\mu_0 \in \mathbb{R}$ and $\mu \in \mathbb{R}$, and $V$ is continuously distributed on the positive half-axis. The main idea behind the MixedTS is to substitute the normality assumption for the r.v. $Z$ in formula (5) with the Tempered Stable distribution, which gives the new distribution more flexibility.
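For intuition, the mixture representation (5) can be sampled directly. The sketch below is illustrative: it uses a Gamma mixing variable with unit mean (which yields the Variance Gamma distribution) and arbitrary parameter values:

```python
# Monte Carlo sketch of the Normal Variance Mean Mixture (5):
# X = mu0 + mu*V + sqrt(V)*Z with Z ~ N(0,1) and a Gamma mixing variable V.
# Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu0, mu, a = 0.0, -0.1, 2.0
V = rng.gamma(shape=a, scale=1.0 / a, size=200_000)    # E[V] = 1, Var(V) = 1/a
Z = rng.standard_normal(V.size)
X = mu0 + mu * V + np.sqrt(V) * Z

# Moments implied by the mixture representation:
# E[X] = mu0 + mu*E[V],  Var(X) = E[V] + mu^2 * Var(V)
print(X.mean(), X.var())   # close to -0.1 and 1 + 0.01*0.5 = 1.005
```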
We recall that the Tempered Stable distribution is obtained by multiplying the Lévy density of an $\alpha$-Stable distribution with a decreasing tempering function (Cont and Tankov (2003)). The tail behavior changes from heavy to semi-heavy, characterized by exponential instead of power decay, and the existence of the conventional moments is ensured.
Tweedie (1984) introduced the one-sided Tempered Stable distribution by exponentially tilting the tail of a positive Stable distribution. Rosinski (2007) generalized the tempered stable distributions and classified them according to their Lévy measure. With this generalization it is also possible to obtain distributions supported on the whole real axis. Küchler and Tappe (2013) observed that the Tempered Stable defined on the real axis can be obtained as the difference of two independent one-sided Tempered Stable distributions. This distribution and the corresponding process have been widely applied in finance (see Küchler and Tappe, 2014; Mercuri, 2008, for modeling asset returns, and the recent textbook by Rachev et al. (2011)).
In this paper we consider a parametric distribution, the Mixed Tempered Stable, for modeling asset returns and use it in risk computation. We say that a continuous random variable $Y$ follows a Mixed Tempered Stable distribution if:

(6) $Y = \mu_0 + \mu V + \sqrt{V}\, X$

where $X$, given $V$, is Standardized Classical Tempered Stable distributed with parameters $(\alpha, \lambda_+\sqrt{V}, \lambda_-\sqrt{V})$ (Küchler and Tappe (2013)). $V$ is an infinitely divisible r.v. defined on the positive axis whose m.g.f. always exists. The logarithm of the m.g.f. is:

(7) $\Phi_V(u) = \log E\left[e^{uV}\right].$
We compute the characteristic function of the new distribution by applying the law of iterated expectation:

(8) $E\left[e^{iuY}\right] = e^{iu\mu_0}\, E\left[e^{iu\mu V} E\left(e^{iu\sqrt{V}X} \mid V\right)\right] = \exp\left\{iu\mu_0 + \Phi_V\left(iu\mu + L_{stdCTS}(u; \alpha, \lambda_+, \lambda_-)\right)\right\}$

where $L_{stdCTS}(u; \alpha, \lambda_+, \lambda_-)$ denotes the logarithm of the characteristic function of the Standardized Classical Tempered Stable. The characteristic function identifies the distribution at time one of a time-changed Lévy process and the distribution is infinitely divisible. Besides these nice theoretical features, the distribution allows the standard higher moments to depend not only on the mixing r.v. $V$ but also on the parameters $(\alpha, \lambda_+, \lambda_-)$ of the Standardized Classical Tempered Stable. As observed in Rroji (2013), it is important to have a flexible distribution for accommodating differences in terms of asymmetry and tail heaviness.
Proposition 1
The first four moments of the MixedTS have an analytic expression since:

(9)
$E[Y] = \mu_0 + \mu E[V]$
$Var(Y) = \mu^2 Var(V) + E[V]$
$\mu_3(Y) = \mu^3 E\left[(V - E[V])^3\right] + 3\mu\, Var(V) + c_3 E[V]$
$\mu_4(Y) = \mu^4 E\left[(V - E[V])^4\right] + 6\mu^2 E\left[(V - E[V])^2 V\right] + 4\mu\, c_3\, Var(V) + 3E[V^2] + c_4 E[V]$

where $\mu_3$ and $\mu_4$ are the third and fourth central moments respectively, and $c_3$, $c_4$ denote the third and fourth cumulants of the Standardized Classical Tempered Stable with parameters $(\alpha, \lambda_+, \lambda_-)$.
See Appendix 7 for details on the derivation of the moments.
The choice of this distribution also comes from the fact that, if we assume $V \sim \Gamma(a, \sigma)$, we obtain as special cases some well-known distributions for modeling financial returns. We get the Variance Gamma (Madan and Seneta, 1990; Loregian et al., 2012) for $\alpha = 2$ and the Standardized Classical Tempered Stable when $a\sigma = 1$ and $a$ goes to infinity.
Remark 2
For univariate random variables, risk measures can be computed directly once the characteristic function $\varphi(u)$ of the r.v. is available, since its distribution function can be evaluated using the formula based on the Inverse Fourier Transform:

$F(x) = \dfrac{1}{2} - \dfrac{1}{\pi}\int_0^{+\infty} \mathrm{Im}\left[\dfrac{e^{-iux}\,\varphi(u)}{u}\right] du.$

The Value at Risk at the confidence level $\alpha$ is obtained by inverting the distribution function:

$VaR_{\alpha} = -F^{-1}(\alpha).$

Under the assumption of existence of the first moment, the Expected Shortfall is computed using the formula:

$ES_{\alpha} = \dfrac{1}{\alpha}\int_0^{\alpha} VaR_u\, du.$
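As an illustration of this univariate recipe, the sketch below recovers the distribution function from a characteristic function by numerical Gil-Pelaez inversion and then computes VaR and ES. The standard normal characteristic function is used as a stand-in for the MixedTS one, whose parameters would come from the fitting step:

```python
# Sketch of Remark 2: cdf via Gil-Pelaez inversion of a characteristic
# function, then VaR by root finding and ES by integrating the VaR curve.
# The N(0,1) characteristic function is a stand-in for the MixedTS cf.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

phi = lambda u: np.exp(-0.5 * u ** 2)        # cf of N(0,1) (stand-in)

def cdf(x):
    # F(x) = 1/2 - (1/pi) * int_0^inf Im(e^{-iux} phi(u)) / u du
    integrand = lambda u: np.imag(np.exp(-1j * u * x) * phi(u)) / u
    return 0.5 - quad(integrand, 1e-10, 50.0, limit=200)[0] / np.pi

def var(alpha):
    """VaR_alpha = -F^{-1}(alpha) (loss sign convention)."""
    return -brentq(lambda x: cdf(x) - alpha, -10.0, 10.0)

def es(alpha):
    """ES_alpha = (1/alpha) * int_0^alpha VaR_u du."""
    val, _ = quad(lambda u: var(u), 1e-6, alpha, epsabs=1e-4, limit=50)
    return val / alpha
```

For the standard normal stand-in, `var(0.05)` is close to 1.645 and `es(0.05)` close to 2.063, matching the closed-form Gaussian values.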
In a multivariate context the distribution function cannot be obtained trivially, since it is based on a model that captures the dependence of the assets and requires the computation of multiple integrals. In the next section we present a methodology for computing the portfolio risk measures where the dependence structure of the assets is reconstructed through an ICA analysis and each signal is modeled through the MixedTS distribution.
4 Parametric risk decomposition
We focus on homogeneous, continuously differentiable risk measures for which the risk contribution can be determined using Euler's theorem for homogeneous functions (see Tasche, 1999, for more details).
Let $\rho(w)$ be a positive homogeneous risk measure; applying Euler's theorem we get:

(10) $\rho(w) = \sum_{i=1}^{n} w_i \dfrac{\partial \rho(w)}{\partial w_i} = \sum_{i=1}^{n} TRC_i$

where the Total Risk Contribution of the $i$-th risk factor is defined in equation (3) (see Tasche, 1999). In particular, the marginal contributions for the risk measures considered in this paper are listed below.

For volatility:
(11) $\dfrac{\partial \sigma_p}{\partial w} = \dfrac{\Sigma w}{\sqrt{w'\Sigma w}}$
where $\Sigma$ is the variance-covariance matrix of the factors.

For the Value-at-Risk (see Gourieroux et al., 2000, for a complete treatment):
(12) $\dfrac{\partial VaR_{\alpha}}{\partial w_i} = -E\left[f_i \mid r_p = -VaR_{\alpha}\right]$
where $VaR_{\alpha}$ is the Value-at-Risk of the portfolio evaluated at the level $\alpha$.

For the Expected Shortfall (see Tasche, 2002, for more details):
(13) $\dfrac{\partial ES_{\alpha}}{\partial w_i} = -E\left[f_i \mid r_p \leq -VaR_{\alpha}\right]$
The total risk contribution for a given factor can be easily computed using the historical approach. Indeed, we only need the matrix containing in the first column the vector of portfolio returns and in the other columns the factor returns. Consider for example the Value at Risk, which is a quantile of the distribution. We take the complete data matrix and sort all rows according to the column of portfolio returns. Once the matrix is sorted we have all the information needed for risk decomposition: the marginal contribution to risk of the factors is computed on the sorted factor columns. However, as observed in Boudt et al. (2007), estimates based on the historical Value-at-Risk and the historical Expected Shortfall display a larger variation in out-of-sample observations than those based on a correctly specified parametric class of distributions.
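The sorting procedure just described can be sketched as follows, on simulated factor returns with illustrative exposures; the tail averages give component contributions that add up exactly to the historical ES:

```python
# Sketch of the historical decomposition: sort the data matrix by the
# portfolio-return column, then average each factor column over the tail
# rows. Data and exposures are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
T, n = 2000, 3
f = rng.standard_normal((T, n)) * 0.01          # simulated factor returns
w = np.array([0.5, 0.3, 0.2])                   # exposures
rp = f @ w
alpha = 0.05

order = np.argsort(rp)                          # worst scenarios first
tail = order[: int(alpha * T)]
es_hist = -rp[tail].mean()                      # historical ES
trc_hist = -w * f[tail].mean(axis=0)            # w_i * E[-f_i | tail]

# The contributions are additive: sum_i TRC_i = ES
print(es_hist, trc_hist.sum())
```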
In a non-Gaussian parametric framework, the modified VaR proposed in Zangari (1996) and the modified ES developed in Boudt et al. (2007) are attractive approaches, since both measures preserve the homogeneity property and can be easily computed once the multivariate moments of the factors are available.
Using (1), we model the portfolio return as a weighted average of the factor returns $f = (f_1, \dots, f_n)'$. The mean vector of the factors is $\mu$ while $\Sigma$ is their variance-covariance matrix of dimension $n \times n$.
The coskewness matrix of the factors, of dimension $n \times n^2$, is:

(14) $M_3 = E\left[(f - \mu)(f - \mu)' \otimes (f - \mu)'\right]$

while their cokurtosis matrix is of dimension $n \times n^3$:

(15) $M_4 = E\left[(f - \mu)(f - \mu)' \otimes (f - \mu)' \otimes (f - \mu)'\right]$

where $\otimes$ denotes the Kronecker product. The second, third and fourth order centered moments of the portfolio return $r_p = w'f$ are respectively:

(16) $m_2 = w'\Sigma w, \qquad m_3 = w'M_3(w \otimes w), \qquad m_4 = w'M_4(w \otimes w \otimes w).$
The skewness (skew) and kurtosis (kurt) are defined from the centered moments as:

(17) $skew = \dfrac{m_3}{m_2^{3/2}}$

and

(18) $kurt = \dfrac{m_4}{m_2^{2}}.$
In order to compute $\Sigma$, $M_3$ and $M_4$, and consequently the centered moments, we need the multivariate distribution of the factor returns or their dependence structure by means of a copula function. Here we face the problem from a different point of view: we look for the underlying independent factors that generate the observed returns. In practice, the ICA analysis (see Hyvarinen, 1999) applied to the factors simplifies the computation of $\Sigma$, $M_3$ and $M_4$ since:

(19) $x = As$

where $s$ contains the original sources and $A$ is the mixing matrix to be estimated. Each signal is modeled using the MixedTS, i.e.:

(20) $s_j \sim MixedTS(\mu_{0,j}, \mu_j, a_j, \sigma_j, \alpha_j, \lambda_{+,j}, \lambda_{-,j}), \qquad j = 1, \dots, n.$
As shown in Appendix 8, the computation of the elements of the moment matrices is quite easy and fast due to the independence of the factors; for instance, for the variance-covariance matrix:

(21) $\sigma_{jk} = \sum_{i=1}^{n} a_{ji}\, a_{ki}\, Var(s_i)$

and analogous expressions hold for the elements of $M_3$ and $M_4$.
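A sketch of (19)-(21) on simulated data follows. It uses scikit-learn's FastICA (one implementation of the algorithms in Hyvarinen et al. (2001), not necessarily the one used in the paper) and rebuilds the factor covariance from the variances of the estimated independent sources:

```python
# Sketch of (19)-(21): estimate the mixing matrix with FastICA, then
# reconstruct the factor covariance from the source variances alone,
# exploiting the independence of the components. Data are simulated.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
T, n = 5000, 3
S = rng.laplace(size=(T, n))            # heavy-tailed independent sources
A_true = rng.standard_normal((n, n))
X = S @ A_true.T                        # observed factor returns, x = A s

ica = FastICA(n_components=n, whiten="unit-variance", random_state=0)
S_hat = ica.fit_transform(X)            # recovered sources
A_hat = ica.mixing_                     # estimated mixing matrix

# Independence makes the cross-moments of the sources vanish, so
# Sigma = A diag(Var(s_i)) A'
cov_ica = A_hat @ np.diag(S_hat.var(axis=0)) @ A_hat.T
print(np.allclose(cov_ica, np.cov(X.T), atol=1e-2))
```

The coskewness and cokurtosis elements would be rebuilt in the same way from the third and fourth moments of the sources.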
Once the moments and comoments are computed, the modified VaR is obtained using the formula derived in Zangari (1996):

(22) $mVaR_{\alpha} = -\left(\mu_p + \sqrt{m_2}\, z_{CF}\right)$

where the quantity:

(23) $z_{CF} = z_{\alpha} + \dfrac{z_{\alpha}^2 - 1}{6}\, skew + \dfrac{z_{\alpha}^3 - 3z_{\alpha}}{24}\, exkurt - \dfrac{2z_{\alpha}^3 - 5z_{\alpha}}{36}\, skew^2$

corrects the Gaussian VaR by considering the skewness ($skew$) and the excess kurtosis ($exkurt = kurt - 3$) of the return vector, with $z_{\alpha} = \Phi^{-1}(\alpha)$. Observe that $\Phi$ denotes the distribution function of the standard normal, and its inverse is used for the quantile determination. The modified Expected Shortfall defined in Boudt et al. (2007) is a linear transformation of the expected value of the returns below the Cornish-Fisher quantile, where the second order Edgeworth expansion of the true distribution is considered:

(24) $mES_{\alpha} = -E\left[r_p \mid r_p \leq -mVaR_{\alpha}\right].$
The extended formula is:

(25)

(26)

where

(27)

The formulas for the partial derivatives of the centered moments are:

(28) $\dfrac{\partial m_2}{\partial w} = 2\Sigma w, \qquad \dfrac{\partial m_3}{\partial w} = 3M_3(w \otimes w), \qquad \dfrac{\partial m_4}{\partial w} = 4M_4(w \otimes w \otimes w).$
Modeling the source signals with the Mixed Tempered Stable makes the computations easier. The partial derivatives allow us to obtain the total risk contribution of each factor for the modified VaR using the following formula, obtained from (22) by the chain rule:

$TRC_i = -w_i\left(\dfrac{\partial \mu_p}{\partial w_i} + \dfrac{\partial \sqrt{m_2}}{\partial w_i}\, z_{CF} + \sqrt{m_2}\, \dfrac{\partial z_{CF}}{\partial w_i}\right).$
In the same way, the total risk contributions for the modified Expected Shortfall can be obtained using a similar formula given in Boudt et al. (2007). The derivative of (24) requires straightforward calculations and can be implemented directly using standard algebra in any programming language.
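For concreteness, the Cornish-Fisher correction (22)-(23) can be sketched as a small function; the parameter values in the usage line are illustrative:

```python
# Sketch of the modified VaR (22)-(23): a Cornish-Fisher correction of
# the Gaussian quantile using skewness and excess kurtosis.
import numpy as np
from scipy.stats import norm

def modified_var(mu, sigma, skew, exkurt, alpha=0.05):
    z = norm.ppf(alpha)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3*z) * exkurt / 24
            - (2*z**3 - 5*z) * skew**2 / 36)
    return -(mu + sigma * z_cf)

# With zero skewness and zero excess kurtosis the correction vanishes and
# the Gaussian VaR is recovered.
print(modified_var(0.0, 0.01, 0.0, 0.0))   # = -0.01 * norm.ppf(0.05)
```

Negative skewness and positive excess kurtosis both push the modified VaR above its Gaussian counterpart, as expected for the sector returns in Table 1.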
In Figure 1 we give a detailed description of the entire procedure presented in this section.
Insert here Figure 1.
5 Empirical analysis
In this section we show step by step how to obtain a risk parity portfolio using the MixedTS for modeling the source signals in the market. The dataset is composed of daily log returns of the Vanguard Fund Index (VFIAX), which replicates the performance of the S&P 500, and the ten sector indexes: Utility, Telecommunications, Materials, Information Technology, Industrial, Health, Financial, Energy, Consumption Staple and Consumption Discretionary, which are considered as risk factors. The dataset refers to the period from 24/06/2010 to 10/07/2013. In Table 1 we give the main statistics of the time series used in this section. Observe that the returns are negatively skewed and have tails heavier than predicted by the normal distribution. The higher volatility of the Financial sector reflects the crisis, which in this time frame was in its final phase.
Insert here Table 1.
As a first step we show the univariate risk measures obtained when the MixedTS distribution is used directly for modeling the observed time series. We fit the MixedTS distribution to the returns of the VFIAX fund and compare the historical VaR and ES for the entire period with the respective parametric versions using formulas (22) and (24). The analysis is carried out for a range of confidence levels. As observed in Figure 2, the historical VaR turns out to be lower than the VaR computed using the MixedTS, and the difference becomes noticeable only at the smallest confidence levels. The results concerning the ES highlight even more the importance of the choice between parametric and non-parametric methods for measuring risk. In fact, the historical method for the ES gives higher values than the MixedTS-based ES, and the difference is bigger at the smallest confidence levels. Notice that the ES is a conditional mean and is highly influenced by extreme values. We therefore consider the comparison with the empirical robust ES (see Cont et al. (2010)), which is a trimmed mean since, for a trimming level $\epsilon < \alpha$, it is defined as:

(29) $ES^{rob}_{\alpha} = \dfrac{1}{\lfloor \alpha N \rfloor - \lceil \epsilon N \rceil}\sum_{i = \lceil \epsilon N \rceil + 1}^{\lfloor \alpha N \rfloor} L_{(i)}$

where $L_{(1)} \geq L_{(2)} \geq \dots \geq L_{(N)}$ denote the ordered losses.
The robust ES is less sensitive to extreme values, and the empirical quantities we get are similar to the MixedTS-based ES.
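The comparison between the plain historical ES and the trimmed robust ES can be sketched on simulated heavy-tailed returns; the trimming level below is illustrative:

```python
# Sketch of the comparison above: historical ES versus a robust
# (trimmed-mean) ES in the spirit of Cont et al. (2010), which discards
# the most extreme losses before averaging the tail. Data are simulated.
import numpy as np

rng = np.random.default_rng(3)
r = rng.standard_t(df=4, size=10_000) * 0.01    # heavy-tailed returns
alpha = 0.05

losses = np.sort(-r)[::-1]                      # largest losses first
k = int(alpha * r.size)
es_hist = losses[:k].mean()                     # plain historical ES

eps = 0.001                                     # illustrative trimming level
lo = int(eps * r.size)
es_robust = losses[lo:k].mean()                 # trimmed-mean (robust) ES

print(es_hist, es_robust)    # trimming removes the extremes, so es_hist > es_robust
```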
Insert here Figure 2.
In the next step we consider the returns of the VFIAX fund as a linear combination of the sector returns. As described in the previous section, we perform an ICA analysis on the matrix whose rows are the sector index returns. The output of this algorithm is the mixing matrix in Table 2 and the time series of the underlying signals. Following the idea of the algorithm, each market return time series is a linear transformation of the independent factors that drive the market.
Insert here Table 2.
We fit the MixedTS to the independent factor time series. The fitted parameters referring to the first window are reported in Table 3. The parameter $\alpha$ deserves particular attention since for $\alpha = 2$ we get the Variance Gamma distribution; we notice that only the fourth and the fifth components can be modeled with the Variance Gamma. The first four moments of each component are computed once we have the parameters. As discussed before, the independence hypothesis underlying the ICA algorithm yields analytic higher-order moments for the matrix of the portfolio factors, i.e. we compute the moments of the matrix whose rows are the returns of each sector.
Insert here Table 3.
Insert here Figure 3.
To gain intuition about our procedure we perform a rolling analysis and compare the out-of-sample performances of the VFIAX fund with the three risk-based portfolios. We consider the period from 24/06/2011 to 10/07/2013, with 250 closing prices as in-sample data and the following 50 closing prices as out-of-sample data. In Table 4 we report, for the rolling-window analysis, the mean returns of the S&P 500 index, the VFIAX Fund index, and the parametric risk parity portfolios for the three risk measures: Volatility, VaR and ES. First we give the results for each out-of-sample window, and then the mean and standard deviation over all out-of-sample results. Observe that, in our analysis with the MixedTS distribution for the source signals, the choice of the risk measure does not have a great effect on the weights given to each sector.
Insert here Table 4.
In Figure 4 we plot the out-of-sample performance of two portfolios: the VFIAX fund and the risk parity portfolio when the risk measure considered is the Expected Shortfall. From this plot we can immediately observe that the risk parity portfolio has a better out-of-sample performance. This result also holds for the other two risk parity portfolios, but we show only one comparison since three similar plots would be redundant.
Insert here Figure 4.
We then assess the statement that risk parity portfolios are well-diversified, considering the Gini index as a measure of diversification. The Gini index equals 0 for equally weighted portfolios and $(N-1)/N$, close to one, when all the weight is given to one asset, i.e. for perfectly concentrated portfolios. In Table 5 we give the concentration measures for our portfolios based on the consistent estimator of the Gini index $G$:

(30) $G = \dfrac{2\sum_{i=1}^{N} i\, x_{(i)}}{N \sum_{i=1}^{N} x_{(i)}} - \dfrac{N+1}{N}$

where the observations are ordered, i.e. $x_{(1)} \leq x_{(2)} \leq \dots \leq x_{(N)}$.
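The estimator (30) is straightforward to implement; the sketch below checks the two limiting cases mentioned above:

```python
# Sketch of the Gini concentration estimator (30) applied to portfolio
# weights, with the two limiting cases as sanity checks.
import numpy as np

def gini(w):
    x = np.sort(np.asarray(w, dtype=float))    # x_(1) <= ... <= x_(N)
    n = x.size
    i = np.arange(1, n + 1)
    return 2 * np.sum(i * x) / (n * x.sum()) - (n + 1) / n

print(gini([0.25, 0.25, 0.25, 0.25]))   # equally weighted -> 0
print(gini([1.0, 0.0, 0.0, 0.0]))       # fully concentrated -> (N-1)/N = 0.75
```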
Insert here Table 5.
In particular, we report the respective indexes for each window of the rolling analysis. We find that the risk parity portfolios based on Volatility and on the modified ES are less concentrated in almost all windows. The VFIAX fund weights follow the market capitalization of the sectors, so the Gini index computed on these weights tracks the concentration of the market. Risk parity portfolios based on the modified VaR seem to be more concentrated than the alternative optimized portfolios. In order to make an investment decision we have to consider both the performance and the desired level of concentration. However, based on our results, risk parity portfolios are less concentrated and show better out-of-sample performances than a passive strategy such as investing in a fund like the VFIAX that replicates the S&P 500 returns.
6 Conclusion
In this paper we describe the steps required in a parametric risk decomposition framework. Applying the ICA analysis to the factors and modeling each source signal with the MixedTS distribution yields analytical formulas for the moments together with flexibility in capturing tail behavior. This approach can be applied to any setup that considers a homogeneous risk measure; in the paper we consider Volatility, VaR and ES, the three measures most used in practice and in academia. Our results suggest that the choice of the risk measure is not so relevant for the portfolio composition, while the risk parity strategy generates well-diversified portfolios with good out-of-sample performances.
7 Derivation of the moments
We derive the mean, variance, third and fourth order central moments of a MixedTS random variable. A continuous random variable $Y$ is Mixed Tempered Stable if it can be written as:

$Y = \mu_0 + \mu V + \sqrt{V}\, X,$

where $X$ given $V$ is a standardized Tempered Stable with parameters $(\alpha, \lambda_+\sqrt{V}, \lambda_-\sqrt{V})$. We recall the formula for the cumulant of order $n \geq 2$ of the standardized Tempered Stable with parameters $(\alpha, \lambda_+, \lambda_-)$:

$c_n = C\, \Gamma(n - \alpha)\left[\lambda_+^{\alpha - n} + (-1)^n \lambda_-^{\alpha - n}\right],$

where the constant $C = \left[\Gamma(2 - \alpha)\left(\lambda_+^{\alpha - 2} + \lambda_-^{\alpha - 2}\right)\right]^{-1}$ is fixed in order to ensure the standardization condition $c_2 = 1$. By the same formula, the conditional cumulants of $X$ given $V$ scale as $c_3(V) = c_3 V^{-1/2}$ and $c_4(V) = c_4 V^{-1}$. In the following we show how to determine the moments.
Mean: since $E[X \mid V] = 0$,

$E[Y] = \mu_0 + \mu E[V].$

Variance: applying the linearity and the iteration properties of the expected value, and using $E[X^2 \mid V] = 1$, we obtain:

$Var(Y) = \mu^2 Var(V) + E[V].$

Third central moment: writing $Y - E[Y] = \mu(V - E[V]) + \sqrt{V} X$, expanding the cube and applying the iteration property, the terms with a first power of $X$ vanish, while $E[V^{3/2} X^3] = E[V^{3/2}\, c_3 V^{-1/2}] = c_3 E[V]$. By straightforward calculation and using the property of the Gamma function, the third central moment is:

$E\left[(Y - E[Y])^3\right] = \mu^3 E\left[(V - E[V])^3\right] + 3\mu\, Var(V) + c_3 E[V].$

Fourth central moment: we need to calculate explicitly only the last two terms of the expansion since the others were determined before. Using $E[X^4 \mid V] = 3 + c_4 V^{-1}$, we get $E[V^2 X^4] = 3E[V^2] + c_4 E[V]$ and $E[(V - E[V])\, V^{3/2} X^3] = c_3\, Var(V)$, so that:

$E\left[(Y - E[Y])^4\right] = \mu^4 E\left[(V - E[V])^4\right] + 6\mu^2 E\left[(V - E[V])^2 V\right] + 4\mu\, c_3\, Var(V) + 3E[V^2] + c_4 E[V].$
8 Moments using ICA
We derive the components of the variance-covariance matrix in (21). Using $x = As$ with independent, centered sources $s_i$, the diagonal element $\sigma_{jj}$ is:

$\sigma_{jj} = E\left[\left(\sum_{i=1}^{n} a_{ji} s_i\right)^2\right] = \sum_{i=1}^{n} a_{ji}^2\, Var(s_i),$

while, for $j \neq k$, the element $\sigma_{jk}$ is:

$\sigma_{jk} = E\left[\left(\sum_{i=1}^{n} a_{ji} s_i\right)\left(\sum_{i=1}^{n} a_{ki} s_i\right)\right] = \sum_{i=1}^{n} a_{ji}\, a_{ki}\, Var(s_i),$

since the cross-products vanish by independence. The elements of the coskewness and cokurtosis matrices follow in the same way from the third and fourth moments of the sources.
Table 1: Summary statistics of the daily log returns (24/06/2010 - 10/07/2013).

        Mean      Std     Skewness  Kurtosis  Max     Min
VFIAX   5.22E-04  0.0111  -0.4990   7.4284    0.0463  -0.0690
COND    7.93E-04  0.0119  -0.5873   6.4336    0.0472  -0.0690
CONS    5.67E-04  0.0076  -0.4175   6.0214    0.0332  -0.0390
ENRS    4.77E-04  0.0145  -0.4215   6.8501    0.0687  -0.0864
FINL    4.00E-04  0.0159  -0.3977   7.9692    0.0789  -0.1052
HLTH    6.69E-04  0.0096  -0.4605   6.7295    0.0456  -0.0540
INDU    5.08E-04  0.0129  -0.4854   6.3092    0.0495  -0.0711
INFT    4.31E-04  0.0121  -0.2512   5.2089    0.0445  -0.0596
MATR    3.73E-04  0.0147  -0.3828   5.9989    0.0593  -0.0756
TELS    5.25E-04  0.0096  -0.2754   5.5523    0.0426  -0.0550
UTIL    3.07E-04  0.0086  -0.1836   7.2391    0.0414  -0.0563
Table 2: Mixing matrix estimated by ICA.
0.0113  0.0024  0.0040  0.0029  0.0016  0.0009  0.0078  0.0033  0.0012  0.0022 
0.0066  0.0010  0.0007  0.0038  0.0039  0.0004  0.0029  0.0005  0.0015  0.0002 
0.0140  0.0007  0.0069  0.0008  0.0062  0.0013  0.0072  0.0023  0.0051  0.0023 
0.0173  0.0010  0.0091  0.0050  0.0030  0.0037  0.0034  0.0037  0.0039  0.0040 
0.0095  0.0011  0.0039  0.0032  0.0032  0.0004  0.0048  0.0019  0.0004  0.0010 
0.0117  0.0004  0.0053  0.0037  0.0038  0.0019  0.0085  0.0014  0.0040  0.0031 
0.0103  0.0032  0.0053  0.0024  0.0005  0.0024  0.0069  0.0009  0.0061  0.0017 
0.0128  0.0022  0.0074  0.0000  0.0068  0.0004  0.0078  0.0021  0.0046  0.0043 
0.0091  0.0036  0.0017  0.0022  0.0026  0.0026  0.0018  0.0012  0.0016  0.0011 
0.0095  0.0000  0.0017  0.0009  0.0025  0.0011  0.0019  0.0009  0.0014  0.0006 
Table 3: MixedTS parameters fitted to the ten independent components (first rolling window); each column refers to one component.

I  II  III  IV  V  VI  VII  VIII  IX  X
0.0989  0.1915  1.0361  0.0555  0.4227  0.5418  0.9911  0.7190  0.3449  0.7476  
0.0719  0.0745  0.3914  0.0579  0.0674  0.0991  0.1763  0.1094  0.0688  0.1386  
0.6847  0.5991  0.5766  0.5132  0.3285  0.4095  0.3798  0.3729  0.4490  0.4705  
2.1983  2.5824  2.6360  3.8144  6.6537  6.0530  5.8454  6.3537  5.0876  5.0049  
0.8740  1.7955  0.6383  2.0000  1.9904  0.0594  0.0100  1.5698  0.0100  0.1282  
1.1631  1.3175  1.2307  1.2924  1.2891  1.5148  1.9890  1.6767  1.6033  1.8090  
1.2186  1.4375  2.1308  2.9084  2.9103  2.6869  2.4690  4.0004  2.5576  2.4291 
Table 4: Out-of-sample mean returns for each window.

w  mean SPX  mean VFIAX  mean RP(Vol)  mean RP(mVaR)  mean RP(mES)
1  0.0213%  0.0209%  0.0278%  0.0312%  0.0302% 
2  0.0293%  0.0311%  0.0189%  0.0208%  0.0200% 
3  0.2045%  0.2058%  0.1654%  0.1631%  0.1698% 
4  0.0290%  0.0289%  0.0235%  0.0229%  0.0231% 
5  0.1132%  0.1102%  0.0876%  0.0895%  0.0934% 
6  0.0920%  0.0867%  0.0442%  0.0455%  0.0491% 
7  0.0481%  0.0466%  0.0503%  0.0502%  0.0509% 
8  0.1327%  0.1315%  0.1015%  0.1008%  0.1034% 
9  0.2913%  0.2940%  0.2467%  0.2473%  0.2564% 
10  0.1267%  0.1275%  0.0672%  0.0672%  0.0719% 
Global out-of-sample results

      SPX  VFIAX  RP(Vol)  RP(mVaR)  RP(mES)
mean  0.0382%  0.0393%  0.0435%  0.0434%  0.0439% 
std  0.01242  0.01241  0.01090  0.010862  0.011040 
Table 5: Gini concentration index for each window of the rolling analysis.

w  VFIAX  RP(Vol)  RP(mVaR)  RP(mES)
1  0.301  0.194  0.247  0.197 
2  0.301  0.166  0.248  0.235 
3  0.302  0.178  0.222  0.189 
4  0.301  0.194  0.247  0.197 
5  0.300  0.198  0.244  0.185 
6  0.297  0.200  0.231  0.198 
7  0.297  0.186  0.218  0.203 
8  0.294  0.181  0.206  0.177 
9  0.299  0.193  0.246  0.150 
10  0.301  0.179  0.233  0.187 
References
 Barndorff-Nielsen, O., J. Kent, and M. Sørensen (1982). Normal variance-mean mixtures and z distributions. International Statistical Review/Revue Internationale de Statistique, 145–159.
 Boudt, K., B. Peterson, and C. Croux (2007). Estimation and decomposition of downside risk for portfolios with nonnormal returns. DTEWKBI_0730, 1–30.
 Comon, P. (1994). Independent component analysis, a new concept? Signal Process. 36(3), 287–314.
 Cont, R., R. Deguest, and G. Scandolo (2010). Robustness and sensitivity analysis of risk measurement procedures. Quantitative Finance 10(6), 593–606.
 Cont, R. and P. Tankov (2003). Financial Modelling with Jump Processes. Chapman & Hall/CRC Financial Mathematics Series II.
 Denis, B., C. Jason, L. Feifei, and S. Omid (2011). Risk parity portfolio vs. other asset allocation heuristic portfolios. Journal of Investing 20(1), 108–118.
 Gourieroux, C., J.-P. Laurent, and O. Scaillet (2000). Sensitivity analysis of value at risk. Journal of Empirical Finance 7, 225–245.
 Hitaj, A. and L. Mercuri (2013). Portfolio allocation using multivariate variance gamma models. Financial markets and portfolio management 27(1), 65–99.
 Hyvarinen, A. (1999). Fast and robust fixedpoint algorithms for independent component analysis. IEEE Transactions on Neural Networks 10, 626–634.
 Hyvarinen, A., J. Karhunen, and E. Oja (2001). Independent Component Analysis. John Wiley & Sons.
 Küchler, U. and S. Tappe (2013). Tempered stable distributions and processes. Stochastic Processes and their Applications. in press.
 Küchler, U. and S. Tappe (2014). Exponential stock models driven by tempered stable processes. Journal of Econometrics 181(1), 53–63.
 Ledoit, O. and M. Wolf (2003). Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Journal of empirical finance 10(5), 603–621.
 Loregian, A., L. Mercuri, and E. Rroji (2012). Approximation of the variance gamma model with a finite mixture of normals. Statistics & Probability Letters 82(2), 217–224.
 Madan, D. and E. Seneta (1990). The variance gamma (v.g.) model for share market returns. Journal of Business 63, 511–524.
 Maillard, S., T. Roncalli, and J. Teïletche (2010). The properties of equally weighted risk contribution portfolios. The Journal of Portfolio Management 36, 60–70.
 Martellini, L. and V. Ziemann (2010). Improved estimates of higherorder comoments and implications for portfolio selection. Review of Financial Studies 23(4), 1467–1502.
 Mercuri, L. (2008). Option pricing in a garch model with tempered stable innovations. Finance research letters 5(3), 172–182.
 Merton, R. C. (1980). On estimating the expected return on the market: An exploratory investigation. Journal of Financial Economics 8(4), 323–361.
 Meucci, A. (2009). Risk and asset allocation. Springer.
 Michaud, R. O. (1989). The Markowitz optimization enigma: is 'optimized' optimal? Financial Analysts Journal, 31–42.
 Rachev, S. T., Y. S. Kim, M. L. Bianchi, and F. J. Fabozzi (2011). Financial models with Lévy processes and volatility clustering, Volume 187. John Wiley & Sons.
 Rosinski, J. (2007). Tempering stable processes. Stochastic Processes and Their Applications 117(6), 677–707.
 Rroji, E. (2013). Risk Attribution and semi-heavy tailed distributions. PhD Thesis, University of Milano-Bicocca.
 Rroji, E. and L. Mercuri (2014a). Mixed tempered stable distribution. Quantitative Finance, to appear.
 Rroji, E. and L. Mercuri (2014b). MixedTS: Mixed tempered stable distribution. url: http://cran.at.r-project.org/web/packages/MixedTS/index.html.
 Tasche, D. (1999). Risk contributions and performance measurement. Working paper, Technische Universitat München.
 Tasche, D. (2002). Expected shortfall and beyond. Journal Banking and Finance 26(7), 1519–1533.
 Tweedie, M. (1984). An index which distinguishes between some important exponential families. In Proc. Indian Statistical Institute Golden Jubilee International Conference, pp. 579–604. J. Ghosh and J. Roy (Eds.).
 Zangari, P. (1996). A var methodology for portfolios that include options. RiskMetrics Monitor, 4–12.