Optimal starting times, stopping times and risk measures for algorithmic trading: Target Close and Implementation Shortfall
Abstract
We derive explicit recursive formulas for Target Close (TC) and Implementation Shortfall (IS) in the Almgren-Chriss framework. We explain how to compute the optimal starting and stopping times for TC and IS, respectively, given a minimum trading size. We also show how to add a maximum participation rate constraint (Percentage of Volume, PVol) for both TC and IS.
We also study an alternative set of risk measures for the optimisation of algorithmic trading curves. We assume a self-similar process (e.g. Lévy process, fractional Brownian motion or fractal process) and define a new risk measure, the variation, which reduces to the variance if the process is a Brownian motion. We deduce the explicit formula for the TC and IS algorithms under a self-similar process.
We show that there is a two-way relationship between self-similar models and a family of risk measures called variations. Indeed, it is equivalent to have (1) a self-similar process and calibrate the parameter of the variation empirically, or (2) a Brownian motion and use the variation as the risk measure instead of the variance. We also show that this parameter can be seen as a fine-tuning parameter which modulates the aggressiveness of the trading protocol: it increases if and only if the TC algorithm starts later and executes faster.
Finally, we show how the parameter of the variation can be implied from the optimal starting time of TC. Under this framework, it can be viewed as a measure of the joint impact of market impact (i.e. liquidity) and volatility.
Keywords: Quantitative Finance, High-Frequency Trading, Algorithmic Trading, Optimal Execution, Market Impact, Risk Measures, Self-similar Processes, Fractal Processes.
Contents
 1 Introduction

2 Optimal starting and stopping times
 2.1 A review of the mean-variance optimisation of Almgren-Chriss
 2.2 The Shooting Method
 2.3 Derivation of the Target Close (TC) algorithm
 2.4 Derivation of the Implementation Shortfall (IS) algorithm
 2.5 Comparison between TC and IS
 2.6 Adding constraints: Percentage of Volume (PVol)
 2.7 Computing the optimal starting time for TC
 2.8 Numerical results
 3 Non-Brownian models: self-similar processes
 4 Assessing the effects of the risk measure
 5 Final remarks
1 Introduction
Purpose of the paper.
Trading algorithms are used by asset managers to control the trading rate of a large order, striking a balance between trading fast to minimise exposure to market risk and trading slowly to minimise market impact (for an overview of quantitative trading methods, see [Lehalle, 2012] and [Abergel et al., 2012]). This balance is usually captured via a cost function which takes into account two joint effects, namely the market impact and the market risk. The first frameworks to be proposed were [Bertsimas and Lo, 1998] and [Almgren and Chriss, 2000], the latter using a mean-variance criterion. More sophisticated cost functions have already been proposed in the academic literature, leading to the use of different optimisation approaches like stochastic control (see [Bouchard et al., 2011] or [Guéant et al., 2011]) or stochastic algorithms (see [Pagès et al., 2012]).
From the practitioners’ viewpoint, the choice of cost function is far from obvious. The easiest way to proceed is to replace the choice of the cost function by observable features of the market. This is the approach chosen in this paper, where the cost function generalises the mean-variance frameworks of both Almgren-Chriss and [Gatheral et al., 2010] into a mean-variation criterion. Instead of complex cost functions and compute-intensive parameter calibration, this paper proposes a simpler approach that covers a large class of parametrised cost functions already in use by practitioners. We calibrate the parameters from observable variables like stopping times and maximum participation rates.
This approach is very flexible and customisable. Indeed, since it depends on a single fine-tuning parameter, a practitioner can either calibrate it or modify it by hand to fit their risk budget. A good example is the maximum participation rate: currently, most practitioners use a mean-variance criterion with an arbitrary risk aversion parameter, but add a “control layer” to their algorithms in order to ensure that the real-time participation will never exceed a predetermined threshold (i.e. the trading algorithm will never buy or sell more than a certain percentage of the volume traded by the whole market); here we propose a way to include this constraint into the full optimisation process, at its very first step. Moreover, some traders know that they would like to see a given algorithm finish a buy of a given number of shares within a certain time period; again, we propose a way to imply the parameters of the cost function that achieve this.
An optimal trading framework for the target close and implementation shortfall benchmarks with percentage of volume constraints.
A TC (Target Close) algorithm is a trading strategy that aims to execute a certain amount of shares as near as possible to the closing auction price. Since the benchmark against which the TC algorithm is measured is the closing price, the trader has an interest in executing most of their order at the close auction. However, if the number of shares to trade is too large, the order cannot be fully executed at the close auction without moving the price too much due to its market impact [Gatheral and Schied, 2012]. Therefore, the trader has to trade some shares during the continuous auction phase (i.e. before the close) following one of the now well-known optimal trading algorithms available, e.g. mean-variance optimisation (following [Almgren and Chriss, 2000]) or stochastic control (like in [Bouchard et al., 2011]).
As mentioned above, this paper stays close to the original Almgren-Chriss framework, extending the risk measure from the variance to a general variation criterion. The goal of this paper is to explain the practical interpretation of the variation parameter used in the optimisation scheme and to show how to choose it optimally in practice.
The variation is an extension of the variance depending on a single parameter; for a particular value of this parameter we recover the variance. Otherwise, the trader assumes that (1) the price is no longer a martingale, i.e. there are patterns in prices (trend-following or mean-reverting), and (2) the time-scaling properties of prices are not those of Brownian motion. Therefore, a risk measure other than the variance is needed. This paper explores the impact of this parameter on the properties of the resulting optimal trading curve, and relates it to self-similar processes (e.g. fractional Brownian motion, Lévy processes and multifractal processes).
Inverting the optimal liquidation problem putting the emphasis on observables of the obtained trading process.
We will show that the TC (Target Close) algorithm can be seen as a “reverse IS” (Implementation Shortfall); see equation (12) and the discussion that follows it for details. In this framework, the starting time for a TC is as important as the ending time for an IS. For practitioners this distinction is even more critical, since shortening the trading duration of an IS because of an interesting price opportunity can always be justified, but beginning sooner or later than an “expected optimal start time” for a TC is more difficult to explain.
The paper also shows that the results obtained for the TC criterion can be applied to the IS criterion because TC and IS are two sides of the same coin. Indeed, on the one hand, TC has a predetermined end time, its benchmark is the price at the end of the execution and its starting time is unknown. On the other hand, IS has a predetermined starting time, its benchmark is the price at the beginning of the execution and its stopping time is unknown. Therefore, it is no surprise that the recursive formula for IS turns out to be exactly the same as for TC, but with time running backwards.
It is customary for practitioners to put constraints on the maximum participation rate of a trading algorithm (say 20% of the volume traded by the market). Therefore, it is of paramount importance to find a systematic way of computing the starting time of a TC under a percentage of volume (PVol) constraint. Such an “optimal trading policy under PVol constraint” is properly defined and solved in this paper. A numerical example with real data is provided, where the optimal trading curves and their corresponding optimal starting times are computed.
Solving the TC problem under constraints allows us to analyse the impact of the parameters of the optimisation criterion on observable variables of the trading process. It should be straightforward for quantitative traders to implement our results numerically, i.e. to choose the characteristics of the trading process they would like to target and then infer the proper values of the parameters of the criterion they need.
Link between a mean-variation criterion and self-similar price formation processes.
[Almgren and Chriss, 2000] developed a mean-variance framework to trade IS (Implementation Shortfall) portfolios driven by a Brownian motion. More recently, [Lehalle, 2009] extended the model to Gaussian portfolios, whilst [Gatheral and Schied, 2012] addressed the same problem for the geometric Brownian motion. In this article we extend the analysis to a broad class of non-Brownian models, the so-called self-similar models, which include Lévy processes and fractional Brownian motion (for empirical studies on the self-similarity of intraday data, see [Xu and Gençay, 2003], [Müller et al., 1990] or [Cont et al., 1997]). We study in detail the relationship between the exponent of self-similarity, the choice of the risk measure and the level of aggressiveness of the algorithm. We show that there are two opposite approaches that nevertheless give the same recursive trading formula: one assumes a self-similar process, estimates the exponent of self-similarity and chooses the variation accordingly; the other assumes a classical Brownian motion and chooses the variation as the risk measure instead of the variance.
In the same way that the starting time of a TC or the ending time of an IS can be used as an observable to infer values of the parameters of the optimisation programme, the maximum participation rate can be expressed as a function of the parameters of a mean-variance criterion. In the light of this, a quantitative trader who has chosen to trade no more than 30% of the market volume during a given time interval can modify the value of the parameter to fine-tune their execution and respect their constraints.
This paper formalises an innovative approach to optimal trading based on observable variables, risk budgets and participation constraints. By doing so, it opens the door to a framework close to the risk-neutral valuation of derivative products in optimal trading: instead of choosing the measure under which to compute the expectation of the payoff (because optimal trading is always considered under the historical measure), we propose to infer the value of some parameters of the cost function so that the trading process will satisfy some observable characteristics (start time, end time, maximum participation rate, etc.). In this framework, instead of being hedged with respect to market prices, the trader will be hedged with respect to the risk-performance profile of an ideal trading process, i.e. a proxy she defined a priori.
Notice that we have chosen to extend the usual mean-variance criterion rather than moving to more non-parametric approaches like stochastic control. The main reason is that our framework allows more explicit recursive formulas, not to mention that our method can be easily extended to other execution algorithms besides TC and IS.
Organisation of the article.
In Section 2 we derive a nonlinear, explicit recursive formula for both the TC and IS algorithms with a nonlinear market impact. We explain how to build a TC algorithm under a maximum participation rate constraint (percentage of volume, PVol). We provide a numerical example using real data, in which we compute the trading curves and their optimal starting times. All our computations can also be applied to IS.
In Section 3 we extend the analysis to a class of non-Brownian models called self-similar processes, which include Lévy processes, fractional Brownian motion and fractal processes. We define an ad hoc risk measure, called the variation, which renders the cost functional linear in time. We show numerically that the exponent of self-similarity can be viewed as a fine-tuning parameter for the level of aggressiveness of the TC algorithm under the PVol constraint.
In Section 4 we assess the effect of the variation parameter in terms of risk management. We show the existence of an equivalence between risk measures of variation type and self-similar models: choosing a self-similar model, estimating its exponent and defining the corresponding variation as the risk measure yields the same trading curve as assuming a Brownian motion but changing the risk measure from the variance to the variation. We also study the effect of the parameter on the starting times for TC and the slopes of the corresponding trading curves.
We conclude by showing how the parameter of the variation can be implied from the optimal starting time of TC. In that framework it can be viewed as a measure of the joint impact of market impact (i.e. liquidity) and volatility.
2 Optimal starting and stopping times
2.1 A review of the mean-variance optimisation of Almgren-Chriss
This section recalls the framework, notation and results of [Almgren and Chriss, 2000] and [Lehalle, 2009]. Suppose we want to trade an asset over a given time horizon. Assume that we have already set the trading schedule, i.e. we will trade at evenly distributed times
The goal is, given a total volume to execute, to find the optimal quantities of shares to execute at each trading time that minimise the joint effect of market impact and market risk under the constraint
(1) 
Assume that the price dynamics follows a Brownian motion, i.e.
(2) 
where the increments are i.i.d. normal random variables of mean zero, scaled by the historical volatilities at the trading times. (We will be assuming in this paper that the historical volatility and volume curves are non-constant throughout the day and known ex ante, e.g. as averages over a period of time. In practice, both the average historical volatility and volume present a U-shape pattern: they are higher at the open and close of the market than in between.) Following [Almgren and Chriss, 2000] and [Lehalle, 2009], we will model the temporary market impact as a function of the executed quantity, i.e. depending solely on what happens at each trading time. (Strictly speaking, this function is the temporary market impact. In our study we have decided to neglect the permanent market impact for two main reasons. First, the permanent impact is smaller than the temporary impact since we are not taking into account the relaxation due to the elasticity of prices; we are therefore overvaluing the real impact in the long run, and our analysis is thus conservative. Second, since the permanent impact is usually modelled as linear, any trading curve has the same permanent impact, so there is no loss of generality in assuming zero permanent impact.) Under this framework, the wealth process (i.e. the full trading revenue upon completion of all trades) is
where the sign of the trade is positive if we buy at a given time and negative if we sell. Assume we have a long-only portfolio. In this framework, if we use the identity
and the change of variables
it follows that the wealth process becomes
(4) 
The expectation and variance of the wealth process (2.1) are, respectively,
Therefore, the corresponding mean-variance cost functional, for a given level of risk aversion, is
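For reference, the mean-variance criterion in standard Almgren-Chriss notation has the generic form below (the symbols are ours, since the original formula did not survive extraction):

```latex
J(v_1,\dots,v_N) \;=\; \mathbb{E}\big[W(v_1,\dots,v_N)\big] \;-\; \lambda\,\mathbb{V}\big[W(v_1,\dots,v_N)\big],
```

to be maximised over the trades $v_1,\dots,v_N$ subject to the budget constraint (1), where $\lambda \ge 0$ is the risk-aversion coefficient and $W$ is the wealth process above.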
In order to find the optimal trading curve we have to find the points that solve the system
If the market impact function is strictly monotone and differentiable for positive values, e.g. a power function, it is possible to obtain explicit recursive algorithms of the form
(6) 
under the given initial and terminal constraints.
2.2 The Shooting Method
Notice that (6) is completely determined once its first two values are known. More generally, once the values at two different times are known, the other values can be computed. However, the method is not necessarily explicit and recursive unless the two known values are consecutive. In our case, we have an initial and a final condition, which implies that (6) is no longer an explicit, recursive algorithm. Nevertheless, the problem can be solved explicitly and recursively using a dichotomy method called the shooting method.
We start with the following ordinary differential equation (ODE):
(7) 
where the right-hand side is a bounded and differentiable function. According to the standard theory of ODEs, the initial-value problem (7) has a unique solution. (Strictly speaking, the solution only exists locally, but it is globally defined if the right-hand side and all its partial derivatives are continuous and bounded; see e.g. [Perko, 2001].)
Now consider the boundary problem
(8) 
It is not evident that (8) has a solution. However, we can try to translate the boundary problem (8) into an initial-value problem of type (7), for which we know that solutions do exist.
The shooting method consists of exactly this translation. Indeed, for any choice of the initial slope, the initial-value problem (7) has a solution. To solve the boundary problem (8), we need to find the slope such that the solution hits the prescribed terminal value. Roughly speaking, we are playing with the initial slope in order to “hit” the target (see Figure 1). In consequence, the boundary problem (8) reduces to finding a zero of a one-dimensional function
which can be solved using any numerical method, e.g. bisection or Newton (see e.g. [Stoer and Bulirsch, 1983]).
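The shooting step can be sketched in a few lines of Python. This is an illustrative toy (the Euler integrator and all names are ours, not the paper's): we integrate the initial-value problem for a trial slope, then bisect on that slope until the terminal value hits the target.

```python
def integrate(f, x0, v0, T, n):
    """Euler integration of x'' = f(t, x, x') on [0, T] with x(0) = x0, x'(0) = v0."""
    dt = T / n
    t, x, v = 0.0, x0, v0
    for _ in range(n):
        x, v = x + dt * v, v + dt * f(t, x, v)
        t += dt
    return x  # approximate value of x(T)

def shoot(f, x0, xT, T, lo, hi, n=1000):
    """Find the initial slope v0 such that the solution satisfies x(T) = xT.

    `lo` and `hi` must bracket the root of g(v0) = x(T; v0) - xT; we then
    bisect on v0, which is the one-dimensional reduction described in the text.
    """
    g = lambda v0: integrate(f, x0, v0, T, n) - xT
    glo = g(lo)
    assert glo * g(hi) < 0, "lo and hi must bracket the solution"
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if glo * g(mid) <= 0:  # sign change in [lo, mid]: keep the left half
            hi = mid
        else:                  # root is in [mid, hi]: keep the right half
            lo, glo = mid, g(mid)
    return 0.5 * (lo + hi)
```

For the trivial case x'' = 0 with x(0) = 0 and target x(1) = 2, the method recovers the slope 2, since the solution is the straight line x(t) = v0 t.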
Suppose we have already found the optimal trading curve via a recursive algorithm of the form
(9) 
under an initial and a terminal constraint. Using an induction argument, it can be shown that each point of the curve is a function of the first two values; since the first value is fixed by the initial constraint, each point is in fact a function of the second value alone, i.e.
The second value is thus a free parameter that completely determines the optimal trading curve. An important remark is that this free parameter can be related to the slope of the trading curve at the start, because
By considering this free parameter as the “slope”, we can see an analogy between the optimal trading curve and the shooting method. Under this new framework, our optimisation problem reduces to finding the slope for which the terminal constraint is satisfied: if we choose the right slope then the curve hits its target, as we wished.
The beauty of the analogy with the shooting method is that we are working with a one-dimensional function instead of a multi-dimensional functional. In consequence, using the shooting method we are always solving a one-dimensional problem, regardless of the number of trades. This fact renders our algorithm very appealing for high-frequency trading.
2.3 Derivation of the Target Close (TC) algorithm
As in [Almgren, 2003] and [Bouchaud, 2010], we will consider a power market impact function, i.e.
(10) 
where the impact depends on the number of shares executed at each pillar (i.e. at each trading time), the historical volume at that pillar and the (normalised) historical volatility at that pillar, together with two positive constants. Under this framework, the wealth process (4) takes the form
(11) 
where the sum runs over the number of slices in the trading algorithm. The first term on the right-hand side of (11) is the cost of executing the shares; the second term models the market impact of the execution as a power law of the percentage of volume executed at each pillar.
For a TC algorithm, the benchmark is the closing price. Therefore, the wealth process relative to this benchmark is
Define
(notice that this notation differs from what we used in Section 2.1). Under this framework, it follows that
(12)  
Since the time-step is a constant multiplicative factor, we can consider a normalised relative wealth
We lose no generality with this normalisation because it is equivalent to using a normalised volatility. Under this new framework, the average and variance of the normalised relative wealth are, respectively,
The corresponding mean-variance functional is thus
where if and . The optimal trading curve is determined by solving
i.e.
Returning to the original variables we get
Finally, we obtain an explicit, nonlinear recursive formula of the optimal trading curve for a TC algorithm:
(13) 
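The precise recursion (13) did not survive extraction, but its structure, a curve driven by one free value and pinned down by the shooting method of Section 2.2, can be illustrated on the classical Almgren-Chriss case with quadratic impact and constant volatility. All symbols and the quadratic-impact simplification below are ours; this is a sketch, not the paper's general power-law formula.

```python
def remaining_inventory(X, N, kappa2):
    """Almgren-Chriss remaining-inventory curve x_0..x_N with quadratic impact.

    Discrete Euler-Lagrange condition: x_{i+1} - 2 x_i + x_{i-1} = kappa2 * x_i,
    with boundary conditions x_0 = X (full inventory) and x_N = 0 (fully
    executed).  We "shoot" on the free value x_1 by bisection until x_N hits 0.
    """
    def terminal(x1):
        prev, cur = float(X), x1
        for _ in range(N - 1):
            prev, cur = cur, (2.0 + kappa2) * cur - prev
        return cur  # x_N as a function of the free value x_1

    lo, hi = 0.0, float(X)  # terminal() is increasing in x_1 for kappa2 >= 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if terminal(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    x1 = 0.5 * (lo + hi)

    curve = [float(X), x1]
    for _ in range(N - 1):
        curve.append((2.0 + kappa2) * curve[-1] - curve[-2])
    return curve
```

With zero risk aversion (kappa2 = 0) the schedule is the straight line; increasing kappa2 makes the curve more convex, i.e. the execution is more aggressive at the start.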
2.4 Derivation of the Implementation Shortfall (IS) algorithm
For an IS algorithm, the starting time is given and we have to find the optimal stopping time for our execution. Since the benchmark is the price at the moment when the execution starts, the relative wealth of an IS algorithm is defined with respect to that price. (There is a subtlety here: the moment the trader decided to start the execution and the moment the first trade took place need not coincide. If we take into account the price slippage due to the delay between the two, the benchmark for IS should be the decision price; if we neglect the delay, the benchmark is the price at the first trade. In this paper we take the second approach.)
Using the change of variables
and equation (11) it can be shown that the relative wealth process is
As in the TC case, we can consider a normalised relative wealth
whose mean and variance are, respectively,
In consequence, the corresponding mean-variance functional is
where if and . The optimal trading curve is determined by solving
i.e.
Returning to the original variables we get
We thus obtain the recursive nonlinear formula for the optimal IS trading curve:
(14) 
2.5 Comparison between TC and IS
If the volatility is constant then the recursive algorithm (13) for TC is exactly the same as (14) for IS, except that in IS time runs backwards. In the light of this mirror property, the analysis we will be performing for TC can be naturally extrapolated to IS, e.g. adding a maximum participation rate constraint and computing the optimal starting time.
If the volatility is not constant then there is a slight difference between formulas (13) and (14), namely the volatility factor that appears in each. From the practitioner’s point of view, this difference is paramount. For TC it is the forward volatility which determines the weight of the shares already executed. This comes from the fact that the ending time is fixed (the market close) and the number of shares to trade per pillar increases in time. In this scenario, not only does the trader have little room to change their schedule, but they also have to anticipate the volatility one step ahead in order to avoid nasty surprises. For IS it is the spot volatility that counts: since the number of shares per pillar decreases in time, the trader has more room to change the trading schedule the closer they are to the end of the execution. In consequence, they can capitalise on potential arbitrage opportunities.
2.6 Adding constraints: Percentage of Volume (PVol)
The TC algorithm can have a participation rate constraint, meaning that the size of each slice cannot exceed a fixed percentage of the available volume (either current or historical average). This restriction is called Percentage of Volume (PVol), which is an execution algorithm in itself. Under a PVol constraint, the trading slices of the TC algorithm satisfy the constraint
It is worth noticing that the PVol algorithm is not a solution of the Almgren-Chriss optimisation. Indeed, if it were then
and from (13) we would have that
In consequence, it would follow that every slice is zero, which contradicts (1).
In general, if two adjacent pillars satisfy the PVol constraint with equality, then the previous argument shows that all slices vanish. Therefore, the two algorithms TC and PVol are mutually exclusive. This implies that a classical optimisation scheme of TC with the PVol constraint via Lagrange multipliers is not straightforward, to say the least. We thus have to find another way to obtain a solution of the TC algorithm under the PVol constraint.
From (13) we see that each slice depends on the cumulative execution up to that point, which implies that the curve is in general increasing. Therefore, in order to satisfy the constraint of maximum percentage of volume (PVol), if the total volume to execute is large then the algorithm has to be divided into two patterns:

As long as the constraint of maximum participation rate (PVol) is not reached, we execute the slices according to the Almgren-Chriss recursive formula. This corresponds to the TC pattern.

As soon as the PVol constraint is attained, the algorithm executes the minimum between the TC curve and the PVol curve.
Loosely speaking, we start with a TC algorithm, but once the slices are saturated we switch to a PVol algorithm until the end of the execution. However, it can happen that the algorithm switches back to TC if the PVol curve is bigger at a further pillar; this situation is exceptional though, save for cases where the volume curve presents sharp peaks or gaps.
It is worth mentioning that adding a PVol constraint to IS is essentially the same as adding the constraint for TC and running the TC algorithm backwards.
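The switching rule can be illustrated with a deliberately simplified sketch (the names and the carry-forward rule are ours; the paper's actual procedure, described in Section 2.7, recomputes the whole Almgren-Chriss curve after each saturation rather than just carrying the excess forward):

```python
def cap_with_pvol(tc_slices, volume_curve, rho):
    """Cap each planned TC slice at rho * market volume (the PVol constraint).

    Shares that cannot be executed at a pillar because of the cap are carried
    forward to the next pillar's planned slice.  Returns the executed slices
    and the residual left unexecuted at the end.
    """
    executed, carry = [], 0.0
    for planned, market_vol in zip(tc_slices, volume_curve):
        want = planned + carry          # planned slice plus backlog
        cap = rho * market_vol          # PVol cap at this pillar
        done = min(want, cap)           # execute the minimum of TC and PVol
        executed.append(done)
        carry = want - done             # backlog pushed to the next pillar
    return executed, carry
```

For instance, with slices [10, 30, 10], a flat volume of 100 per pillar and a 20% cap, the second slice saturates at 20 and the excess 10 is executed at the third pillar.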
2.7 Computing the optimal starting time for TC
Let us describe in detail all the steps of our TC algorithm under the PVol constraint. We keep track of three quantities: the starting time, the switching time (i.e. when we change from TC to PVol) and the number of shares traded at each pillar.

According to the historical estimates of the available volume at the close auction, plus the desired participation rate, we define the execution at the last pillar (the close auction).

We compute the Almgren-Chriss algorithm for the residual shares, i.e. the shares to execute in the continuous session, outside of the close auction. We start at the first pillar and launch the TC recursive formula (13). Since the algorithm is completely determined by its free parameter, it suffices to find the value of that parameter such that the cumulative shares at the end equal the residual.

We compare the trading curve of the previous step with the PVol curve. If the PVol constraint is satisfied then we are done. If not, we saturate the violating pillar with the PVol constraint and redefine the parameters: the residual is now the shares to execute outside the saturated pillars, whilst the switching pillar moves one step earlier.

Eventually, we will obtain a TC curve starting at the starting pillar that switches to PVol at the switching pillar, satisfying the constraint. Moreover, the algorithm finds the right free parameter such that the total execution from start to finish equals the target volume. Remark that the whole algorithm executes TC between the starting and switching pillars, PVol between the switching pillar and the penultimate pillar, and the desired participation at the close auction at the last pillar.

In order to find the right starting time, we define a minimal trading size for each slice. If the minimum of the trading curve found in the previous step is below this minimal trading size, then we advance one pillar and recompute the trading curve; in order to continue hitting the target, we replace the target at the new starting pillar by the cumulative trades of the previous step up to that pillar. We continue until we find the first pillar for which the minimum slice is at least the minimal trading size.
Therefore, the switching time is determined by the PVol constraint whilst the starting time is determined by the minimal trading size constraint. Notice however that the optimal starting pillar is determined after the switching pillar, which implies that it depends not only on the minimal trading size but also on the rest of the parameters, in particular the PVol curve, the participation rate at the close auction and the market impact parameters.
Observe that there is a systematic way of computing the stopping pillar for an IS algorithm: it corresponds to the backwards or symmetrical image of the starting time we computed for the TC algorithm.
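The last step above, advancing the starting pillar until every slice clears the minimal trading size, can be sketched as a simple search loop. Here `build_curve` is a hypothetical callback standing in for the full TC-under-PVol computation of the previous steps; the names and the toy curve in the usage example are ours.

```python
def optimal_start(build_curve, n_pillars, v_min):
    """Advance the starting pillar until every slice is at least v_min.

    build_curve(start) must return the list of slices when trading begins at
    pillar `start`; it is recomputed from scratch at each candidate start.
    """
    for start in range(n_pillars):
        curve = build_curve(start)
        if curve and min(curve) >= v_min:
            return start, curve
    # if no start satisfies the constraint, trade everything at the last pillar
    return n_pillars - 1, build_curve(n_pillars - 1)
```

For a toy curve that splits 100 shares uniformly over the remaining pillars of a 10-pillar day, a minimal size of 15 shares pushes the optimal start to pillar 4 (slices of 100/6 each).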
2.8 Numerical results
In the first plot of Figure 2 we show the TC curve (solid line) under the PVol constraint vs the PVol curve (broken line) for the stock AIRP.PA (Air Liquide). In the second plot we show the cumulative execution of the TC curve (solid line) under the PVol constraint vs the volume to execute (broken line). The parameters we used are the order size and the minimal trading size (both in shares), and a maximum participation of 20% in both the continuous trading period and the close auction. The historical volatility and volume curves, the market impact parameters and the risk-aversion coefficient were provided by the Quantitative Research team at Cheuvreux, Crédit Agricole.
In Figure 2 we can also observe that the algorithm finds the optimal starting time (beginning of the horizontal axis), at which it starts to execute the order following the TC algorithm based on the Almgren-Chriss optimisation. At the switching pillar (vertical line) the algorithm switches to PVol in order to satisfy the constraint. Moreover, during the whole execution the PVol constraint is satisfied. In the second plot we can see that the TC algorithm under the PVol constraint successfully executed the whole order.
3 Non-Brownian models: self-similar processes
3.1 The variation model
Let us fix an exponent and a random vector of mean zero. We define the variation of the vector as
The variation and the associated norm are related via
Notice that if the vector is a time series of i.i.d. random variables then the 2-variation reduces to the variance, i.e.
Moreover, it is easy to show that the variation defines a metric, and since all norms on a finite-dimensional space are equivalent, there exist positive constants such that
Therefore, the variance (i.e. the 2-variation) and the variation are two equivalent metrics, and in particular
(15) 
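In our notation (the constants and symbols below are ours, since the original inequality did not survive extraction), the norm equivalence takes the generic form

```latex
c_1\, V_2(X)^{1/2} \;\le\; V_\delta(X)^{1/\delta} \;\le\; c_2\, V_2(X)^{1/2},
```

for positive constants $c_1, c_2$ depending only on the dimension of the vector $X$ and on the exponent $\delta$ of the variation $V_\delta$.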
Now let us define the variation for a special family of functions of random variables. Take a random vector of mean zero and consider the function defined as
We define the variation of as
Observe that if the vector is a time series of mean zero then this quantity is the sample moment of the corresponding order, up to a multiplicative factor. Finally, for general functions such that
we define their variation as
It is worth remarking that if the random variables are i.i.d. of mean zero and variance 1 then the variation and the variance coincide:
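A quick numerical sanity check of the link between the 2-variation and the variance, under a simplified definition of the sample variation (the normalisation is ours, since the paper's exact definition did not survive extraction):

```python
def sample_variation(xs, delta):
    """Sample delta-variation of a mean-zero series: sum of |x_i| ** delta.

    (A simplified stand-in for the paper's definition.)  For delta = 2 it is
    the sum of squares, i.e. n times the sample variance of a mean-zero series.
    """
    return sum(abs(x) ** delta for x in xs)
```

For a mean-zero series such as [1, -1, 2, -2], the 2-variation equals the sum of squares, which is the sample size times the sample variance.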
3.2 Optimal trading algorithms using the variation as risk measure
Assume that the price dynamics is self-similar, i.e.
(16) 
where the increments are identically-distributed random variables (not necessarily independent) satisfying a suitable moment condition. We will assume a power market impact of the form
(17) 
In order to use the variation as a risk measure, we have to choose the right exponent. From (16) we see that for a particular value of the self-similarity exponent we recover the classical Brownian motion, for which the variance is the most common choice of risk measure; in that case the risk measure is linear in time. This suggests taking the exponent of the variation equal to the reciprocal of the exponent of self-similarity, since it is the only choice that renders the variation linear in time.
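A one-line check of this claim, writing $H$ for the exponent of self-similarity, $\delta$ for the exponent of the variation and $\xi$ for a normalised increment (symbols ours):

```latex
\mathbb{E}\big[\,|X_t|^{\delta}\,\big] \;=\; \mathbb{E}\big[\,|t^{H}\xi|^{\delta}\,\big] \;=\; t^{H\delta}\,\mathbb{E}\,|\xi|^{\delta},
```

which is linear in $t$ if and only if $H\delta = 1$, i.e. $\delta = 1/H$; for the Brownian motion $H = 1/2$, giving $\delta = 2$, the variance.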
We would like to remark that the idea of a risk measure that is linear in time was also introduced by Gatheral and Schied [Gatheral and Schied, 2012], where the risk measure was the expectation of the time-average. The advantage of our approach is that we do not fix a priori the dynamics of the price process. Indeed, we first find empirically the right exponent of self-similarity and then choose the corresponding risk measure.
In order to derive the recursive formula for a process following (16), we normalise the relative wealth as in the previous Brownian case. Under this framework, the normalised relative wealth of a TC algorithm is
(18) 
We will assume that the process (16) is normalised. In the case of Brownian motion this is equivalent to supposing that the increments have variance 1. Under this framework, the mean and variation of the normalised relative wealth are
Therefore, the corresponding functional is
where if and . The optimal trading curve is determined by solving
i.e.
Returning to the original variables we get
We thus obtain the recursive nonlinear formula for the optimal TC trading curve for a selfsimilar process:
(19) 
If we were interested in the IS algorithm, an argument similar to the previous one would show that the optimal trading curve for IS satisfies
(20) 
where if and .
3.3 Examples of self-similar processes
Amongst the class of continuous stochastic processes that admit a discretisation of the form (16), we have three in mind: Lévy processes, fractional Brownian motion and fractal processes (for more details we suggest [Bacry et al., 2001], [Bouchaud and Potters, 2004], [Embrechts, 2002], [Mandelbrot and Hudson, 2004] and [Mantegna and Stanley, 1994]).

Truncated Lévy processes. $\alpha$-stable Lévy processes are the only self-similar processes satisfying (16), with $H = 1/\alpha$ and with independent, stationary increments. If $\alpha = 2$ we recover the classical Brownian motion. However, for $\alpha < 2$ the $\delta$th moment of such processes is infinite, and as such they cannot be used in our framework. Nevertheless, one can consider the so-called truncated Lévy distributions, which are Lévy within a bounded interval and exponential on the tails. This allows moments of any order, in particular the $\delta$th moment, whilst within the bounded interval we keep the self-similarity given by (16).

Fractional Brownian motion. The fractional Brownian motion is the only self-similar process with stationary, Gaussian increments. Its exponent of self-similarity $H$ is called the Hurst exponent. If $H = 1/2$ we recover the classical Brownian motion. The fractional Brownian motion has moments of all orders, hence the $\delta$-variation is well-defined and we can apply our model. However, for $H \neq 1/2$ the increments are autocorrelated (positively if $H > 1/2$ and negatively if $H < 1/2$) and our model does not take the autocorrelations into account. Therefore, we can consider our model as an approximation, valid when the autocorrelations are weak with respect to the market impact and the variance.

Multifractal processes. Multifractal processes are defined as follows. Given a stochastic process $X(t)$, its fluctuation at scale $\ell$ is defined as $\Delta_{\ell} X(t) = X(t+\ell) - X(t)$. For any $q > 0$ define $m(q, \ell) = \mathbb{E}\bigl[\,|\Delta_{\ell} X(t)|^{q}\bigr]$. We say that $X$ is multifractal of exponents $\zeta(q)$ if for any $q$ there exists $c(q)$ such that $m(q, \ell) = c(q)\,\ell^{\zeta(q)}$.
In the case where $\zeta(q)$ is linear in $q$, the process is called monofractal. If $\zeta(q)$ is not linear then $X$ is called multifractal. Notice that all self-similar processes are monofractal, in particular the fractional Brownian motion and Lévy processes. However, we will continue to use the term self-similar, even for monofractal processes, since it is more common in the literature.
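To make the monofractal scaling concrete, the sketch below estimates the structure functions $m(q,\ell)$ on a simulated Brownian path and recovers $\zeta(q) \approx qH = q/2$, linear in $q$. It is a hedged illustration using only the Python standard library with a fixed seed; the estimator, scales and path length are choices of this sketch, not of the paper.

```python
import math
import random

# Simulate a discrete Brownian path with unit-variance increments (fixed seed).
random.seed(0)
n = 1 << 16
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, 1.0))

def structure_exponent(path, q, lags):
    """Estimate zeta(q) as the log-log slope of m(q, ell) vs ell, where
    m(q, ell) = mean |path(t+ell) - path(t)|^q over the sample path."""
    logs_l, logs_m = [], []
    for ell in lags:
        diffs = [abs(path[t + ell] - path[t]) for t in range(len(path) - ell)]
        m = sum(d ** q for d in diffs) / len(diffs)
        logs_l.append(math.log(ell))
        logs_m.append(math.log(m))
    # least-squares slope of log m(q, ell) against log ell
    lbar = sum(logs_l) / len(logs_l)
    mbar = sum(logs_m) / len(logs_m)
    num = sum((a - lbar) * (b - mbar) for a, b in zip(logs_l, logs_m))
    den = sum((a - lbar) ** 2 for a in logs_l)
    return num / den

lags = [2 ** j for j in range(1, 7)]              # scales ell = 2, 4, ..., 64
zetas = [structure_exponent(x, q, lags) for q in (1.0, 2.0, 3.0)]
# For Brownian motion zeta(q) = q * H = q / 2: linear in q, hence monofractal.
```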
3.4 Numerical results
In Figure 3 we plotted three TC curves under the PVol constraint for three different self-similarity exponents $H$, which give three different $\delta$'s for the $\delta$-variation (recall $\delta = 1/H$).
$H$    $\delta$   start pillar   switch pillar
0.55   1.8        17             102
0.50   2.0        34             94
0.45   2.2        50             89
Our numerical example yields the following evidence, found in all runs we have performed:

If $H$ increases then the starting pillar of the execution decreases, i.e. the execution starts earlier.

If $H$ increases then the pillar at which we switch from TC to PVol increases, i.e. the PVol constraint is saturated later.
Since starting the execution later and saturating the PVol constraint earlier are related to higher levels of aggressiveness, we can infer from Figure 3 that the level of aggressiveness of TC under the PVol constraint decreases as $H$ increases. This finding is quite natural if we assume that the model is a fractional Brownian motion and $H$ is the Hurst exponent:

For $H < 1/2$ the process has negative autocorrelations, i.e. it behaves as a mean-reverting process. Therefore, the market impact is reduced because prices revert to their previous level after an execution. In consequence, we can execute the order faster than in the case of a classical Brownian motion: we start the execution later and we go as fast as possible, and as such we saturate the constraint earlier.

For $H > 1/2$ the process has positive autocorrelations, i.e. it has a trend. Therefore, the market impact is of paramount importance because if we execute too fast then prices will move in the wrong direction. In consequence, we start the execution earlier and we go as slowly as possible, and as such we saturate the constraint later.
In the next section we will study in detail, for the TC algorithm without PVol constraint, the relation between risk measures of $\delta$-variation type and both the starting time and the slope at the last pillar.
4 Assessing the effects of the risk measure
In this section we consider the TC algorithm without PVol constraint and without participation at the close auction. As expected, the algorithm starts closer to the close without PVol constraint (Figure 4) than under the restriction (Figure 3).
We will show that the choice of the parameter $\delta$ for the $\delta$-variation plays a crucial role in both the model of the asset we are trading and the weight we give to the market risk with respect to the market impact.
4.1 Equivalence between risk measures and models
If we compare the recursive formula (13) for a Brownian motion with the recursive formula (19) for a self-similar process, we see that the former can be recovered from the latter by setting $\delta = 2$. Therefore, formula (19) can also be derived for a Brownian model when the risk measure is not the variance but the $\delta$-variation.
In consequence, (19) is independent of the model we choose for the asset: assuming a self-similar process, estimating empirically its exponent of self-similarity $H$ and defining $\delta = 1/H$ is equivalent to assuming a Brownian model, choosing $\delta$ and using the $\delta$-variation as the risk measure instead of the variance.
A direct consequence of this analysis is that the risk measure is of paramount importance. It not only determines the weight we impose on the market risk but is also implicitly related to a model choice. Indeed, choosing $\delta > 2$ is equivalent to choosing a self-similar process with $H < 1/2$, which for the fractional Brownian motion would mean a process with negative autocorrelations, i.e. a mean-reverting process. On the other hand, choosing $\delta < 2$ implies $H > 1/2$, hence the corresponding fractional Brownian motion has positive autocorrelations, i.e. it has a trend.
In summary:
Proposition 1
In order to obtain the recursive formula (19) for a TC algorithm via an AlmgrenChriss optimisation, the following two paths are equivalent:

Assuming a self-similar model for the asset, calibrating empirically its exponent of self-similarity $H$ and choosing the $\delta$-variation as the risk measure with $\delta = 1/H$.

Assuming a Brownian motion model for the asset and choosing the $\delta$-variation as the risk measure.
4.2 Risk measures, starting times and slopes for the TC algorithm
The aggressiveness of a TC algorithm can be measured in terms of both the starting time and the slope of the trading curve: an algorithm is more aggressive if it starts later and executes the trades faster, i.e. it puts more shares per pillar and the rate of change between consecutive pillars is bigger.
Our numerical simulations confirm the analytical results of the previous section: the optimal starting time and the slope of the trading curve are both monotone increasing in $\delta$ (see Figures 5 and 6, respectively). Under this framework, $\delta$ can be viewed as a tuning parameter for aggressiveness:
Proposition 2
The parameter $\delta$ in the recursive formula (19) measures the level of aggressiveness of the TC algorithm. More precisely:

If $\delta$ increases then the optimal starting time increases.

If $\delta$ increases then the slope of the cumulative trading curve at the last pillar increases.
In consequence, $\delta$ increases if and only if the TC algorithm is more aggressive, i.e. it starts the execution later and executes more at each pillar.
4.3 Implied variation for CAC40 and link with liquidity
In Figure 7 we plotted the starting times as a function of $\delta$ for 39 out of the 40 stocks in the CAC40 index.
The parameter $\delta$ has been considered so far as an input, whilst the starting time $t^*$ was an output. However, since $t^*$ is increasing in $\delta$, we can consider the inverse problem: given a starting time $t^*$, there is a $\delta$ such that the trading curve for the TC algorithm executes the total number of shares between $t^*$ and the last trade before the closing auction. This $\delta$ is not unique because the starting pillar can only take discrete values, which implies that $t^*$ is piecewise constant in $\delta$. However, it can be rendered unique if we define the implied $\delta$ as
In consequence, we have an implied $\delta$ for the CAC40 index: given a common starting time $t^*$, for each stock in the CAC40 index we find the $\delta$ such that its TC trading curve starts at $t^*$. For the numerical simulations in Figure 7 we chose $t^*$ at the opening of the NYSE in the US, a very important time for European traders. We supposed that, for each name in the CAC40 index, we have to execute 6% of the total daily volume. For volume curves, volatility curves and market impact parameters we used those provided by the Quantitative Research team at Cheuvreux.
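The inversion just described can be sketched as a bisection over a monotone, piecewise-constant map. In the code below, `start_pillar` is a hypothetical stand-in for the map from $\delta$ to the starting pillar that the TC optimiser would produce, and taking the smallest $\delta$ on the matching plateau is one possible convention for making the implied $\delta$ unique; none of the numbers here come from the paper.

```python
# Hedged sketch of the implied-delta inversion: bisection over a monotone,
# piecewise-constant map.  `start_pillar` is a HYPOTHETICAL stand-in for the
# map delta -> starting pillar produced by the TC optimiser of formula (19).

def start_pillar(delta):
    # Toy model: the start pillar increases with delta (later start for
    # larger delta) and is piecewise constant, as discussed in the text.
    return int(10 * delta)

def implied_delta(target_pillar, lo=1.0, hi=3.0, tol=1e-9):
    """Smallest delta in [lo, hi] whose start pillar reaches target_pillar,
    found by bisection (valid because start_pillar is monotone in delta)."""
    if start_pillar(hi) < target_pillar:
        raise ValueError("target pillar unreachable on [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if start_pillar(mid) < target_pillar:
            lo = mid
        else:
            hi = mid
    return hi

d = implied_delta(21)  # smallest delta whose start pillar reaches 21
```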
The statistics of the implied $\delta$ are summarised in the next table:
minimum        1.60    quantile 25%   2.00
maximum        2.40    median         2.10
mean           2.11    quantile 75%   2.20
std deviation  0.17
The implied $\delta$ can be very useful for executing portfolios: we can synchronise all assets in our basket so that all executions start at the same time $t^*$. In this way we can assess and compare the market impact of the individual executions on the same ground, just as options traders use the implied volatility for that purpose. Of course, for real portfolio execution this is far from optimal, but at least it is a first step towards a systematic, quantitative measure of portfolio execution.
Finally, the implied $\delta$ can be viewed as a measure of the joint impact of the volatility and the liquidity, the latter modelled as the market impact. In order to illustrate this fact, we performed a linear regression of the implied $\delta$ on the average volatility per year and the average market impact per pillar given by (17). The coefficients of the linear regression are:
(21) 
with the corresponding $R^2$. The results of (21) and Figure 8 can be interpreted as follows:

If the market impact decreases then the TC algorithm would start the execution later, because the execution would have a smaller effect on moving the price in the wrong direction. However, since the starting pillar has been fixed, the TC algorithm compensates by playing less aggressively, i.e. by decreasing $\delta$.

If the volatility increases then the TC algorithm would like to start later in order to avoid paying the market risk. However, since the starting time is fixed, the algorithm plays less aggressively and thus decreases $\delta$.
5 Final remarks
What is the role of the Hurst exponent in the model?
As stated in Section 4.1, the choice of $\delta$ in the $\delta$-variation can come from two sources: either computing the Hurst exponent $H$ of the process and choosing $\delta = 1/H$, or assuming a Brownian motion and choosing the value of $\delta$ that fits the trader's schedule.
In practice, the Hurst exponent is not statistically robust, which means that the right $\delta$ is to be implied, not computed directly from $H$. One solution is to choose the starting time $t^*$ of the TC algorithm and find the $\delta$ that forces the optimal curve to start at $t^*$, just as we did in Section 4.3. Another solution is to use $\delta$ as a fine-tuning parameter: since the optimal curves are defined ex-ante, changing $\delta$ dynamically during the execution gives room to capitalise on potential trends. For example, if the trader chose $\delta = 2$ then they assumed that the prices would behave as a martingale. If a mean-reverting dynamic in the prices is observed, the trader can change $\delta$ for a new parameter $\delta_1 > 2$, whilst if a trend-following dynamic is spotted the new parameter would be $\delta_2 < 2$. The exact values of $\delta_1$ and $\delta_2$ depend on the risk budget of the trader, but in both cases there is a clear interpretation in terms of the Hurst exponent $H = 1/\delta$: the process is mean-reverting for $H < 1/2$, a martingale for $H = 1/2$ and has a trend for $H > 1/2$.
Why variation instead of the th moment of the wealth?
If we choose the central $\delta$th moment as a risk measure, then even in the case of i.i.d. random variables we would have several nonzero cross terms of the form
If $\delta$ is an integer then the wealth still has an explicit expression, which would lead to an explicit cost functional and therefore explicit equations for the optimal trading curve in variables of the form
These equations can be solved computationally, although the numerical algorithm is no longer explicit and recursive but implicit. If $\delta$ is not an integer then the wealth is no longer explicit, which means that the computational effort needed to find the optimal trading curve is bigger.
Choosing the $\delta$-variation instead of the $\delta$th moment presents several advantages. First, the algorithm for the optimal trading curve is explicit and recursive, of the form
This explicitness allows us to study and interpret the effect of the parameters on the shape of the trading curves. Second, the $\delta$-variation can be interpreted as the norm of a vector intimately related to the wealth. Indeed, if we define the vectors
and consider the relative wealth of the TC algorithm given in (18), we obtain the “vectorial” identities
Third, the $\delta$-variation lets us model the price as a general self-similar process (e.g. fractional Brownian motion or Lévy process) without much of a fuss. Indeed, there is only a single parameter to adjust, namely $\delta$, which can be found as the inverse of the Hurst exponent $H$, i.e. $\delta = 1/H$. If we used the $\delta$th moment instead then the equations would become much more involved due to the autocorrelations. Fourth, the $\delta$-variation can be seen as an approximation of the $\delta$th moment in which the cross terms are neglected.
Why add a new risk parameter $\delta$ when we already have $\lambda$?
There are two main reasons. The first is that, in the recursive formulas (19) and (20) for the TC and IS algorithms, $\lambda$ enters linearly whilst $\delta$ enters as a power. This implies that the effect on the optimal execution curve is stronger when varying $\delta$ than when varying $\lambda$, in particular the acceleration near the closing time. In consequence, playing with $\lambda$ or $\delta$ is similar but not equivalent, since the nonlinear behaviour of $\delta$ cannot be replicated using the linear parameter $\lambda$. The second reason is that $\lambda$ enters into play as a Lagrange multiplier in the mean-variance optimisation, and as such it is not evident how to choose a standard value for this risk-aversion parameter. On the contrary, $\delta$ has a neat interpretation via the Hurst exponent through the identity $\delta = 1/H$. Moreover, there is a standard value for $\delta$: if we assume the price process is a martingale then $\delta = 2$.
Acknowledgements
Most of this research was done when both authors were working in the Department of Quantitative Research at Crédit Agricole Cheuvreux (now Kepler Cheuvreux); they are thankful for the help and support provided by the firm and the team. Mauricio Labadie would like to thank EXQIM as well for the help and support during the final stages of the paper. The authors would also like to thank an anonymous referee for their remarks and suggestions, which helped to improve this paper.