Time value of extra information against its timely value

I am grateful to Gerhard-Wilhelm Weber and Edward Hoyle for their comments, and to TÜBİTAK for supporting this research.


N. Serhan Aydin

We introduce an interactive market setup with sequential auctions in which agents receive variegated signals with a known deadline. The effects of differential information and mutual learning on the allocation of overall profit and loss (P&L) and on the pace of price discovery are analysed. We characterise the signal-based expected P&L of agents through explicit formulae for the directional quality of the trading signal, and study the optimal trading pattern using dynamic programming, provided that there is a common anticipation by agents of gains from trade. We find evidence in favour of exploiting new information as soon as it arrives, and in favour of market efficiency. Brief extensions of the problem to risk-adjusted gains as well as risk-averse agents are provided. We then introduce the ‘information-adjusted risk premium’ and recover the signal-based equilibrium price as the weighted average of the signal-based individual prices with respect to the agents’ risk-aversion levels.

Keywords: Information flow, signal-based pricing, random bridge processes, mutual learning, asymmetric information, optimal trading

AMS Classification: 60G35, 65P99, 49L20, 49M99

1 Introduction

The raison d’être of the markets we study is, in fact, to support information-based trading. Trade can occur for purely informational reasons. In [6], for example, it is shown that there are situations in which both parties are strictly better off under a trade executed solely on the basis of their individual information. This is somewhat contrary to, e.g., [16] and [21]. Indeed, one can be overwhelmed by the task of handling the very broad spectrum of aspects in which agent-level heterogeneity can arise, such as risk-aversion levels, degrees of rationality, patience, beliefs, and information-gathering and -processing skills. A detailed classification of different market microstructure models, on the other hand, is given in [10] and is beyond the scope of this study. We start, however, with a review of selected literature.

Perhaps one of the earliest sequential (discrete) trade models is the one described in the work of Glosten and Milgrom (cf. [15]), where an attempt is made to explain the bid-ask spread as a purely informational phenomenon arising from the adverse selection faced by less-informed traders. The informational properties of transaction prices and the reaction of the spread to market-generated as well as other public information are also investigated. One of the interesting implications of this model is the possibility of market shutdowns due to severe informational inefficiencies, similar to the “lemons problem” of Akerlof [2]. The informational content of prices and the value of extra information to its holder are also examined in the work of Kyle (cf. [19]) through sequential as well as continuous auction models; moreover, the latter two seem to converge as the trading interval shrinks. One interesting result of the model discussed in [19], and to a certain extent in [15], is that modelling price innovations as functions of quantities traded is consistent with modelling price innovations as the consequence of new information arrivals. The ‘informativeness of prices’ (which is complementary to the amount of information yet to be incorporated into prices) in the context of [19] refers to the error variance of the future dividend given the market-clearing price. The question is how intensively the agent, given his superior signal, should trade over time to maximise his profit, given that his actions might disturb the market (i.e., prices and depth). This model was later extended in [3] to general continuous distributions for the dividend. A modified version of [15] in continuous time, where ‘bluffing’ (i.e., a mixed strategy) is also allowed, is then shown in [4] to converge to a modified version of [19] with a random signal deadline.
A rather game-theoretic approach to signal-based trade is taken in [6] where, this time, the dividend is determined endogenously by the action of the agent and its correspondence with the realised fundamental. The signals, in this case, relate to the action that needs to be taken. A sufficient level of signal precision is shown to be necessary and sufficient for establishing the case where both seller and buyer are better off from trade in expectation (referred to as “common knowledge of gains from trade” in [6]), which is the equilibrium.

So far, there has been no explicit mention of the dynamics of information flow, which is the source of heterogeneity; information is understood as ‘immediate access’ to a publicly unknown value, without any noise component. Building on [3] and [19], a learning component is added in [5]. This means the signals are now long-lived, with a signal-to-noise ratio varying in time. Although this makes it possible to speak of information ‘flow’ in its true sense, the interpretation of ‘learning’ through the signal in [5] is slightly different in that, when the noise-to-signal ratio, i.e., the reciprocal of signal-to-noise, is large, the agent is learning a lot. Yet, interestingly, given the total amount of information disparity in favour of the more informed, the pattern in which the information flows turns out to be rather irrelevant in equilibrium. Later on, the long-lived signal process is associated in [12] with an exponentially distributed random deadline (as in [4] earlier). In fact, a random deadline changes the way strategies for exploiting extra information are structured in various ways; for one, agents neither rush to unload their information before it becomes useless nor trade frantically as the deadline approaches. Backward-induction methods of dynamic programming are also rendered inapplicable.

Perhaps the most interesting alternative to the models of ‘diverse information’ (where agents generally share the same probability measure but work over distinct probability spaces) are those of ‘diverse beliefs’. One way to account for the diversity of beliefs is through equivalent probability measures (i.e., defined over the same filtered probability space) which reflect agents’ personal beliefs on the true value of the dividend, as in [9]. This is maintained by likelihood-ratio martingales (or, density processes). Interestingly, the equivalence of the latter two models is established even without a particular choice of explicit signal structure for private information. And, unsurprisingly, the greater the diversity of beliefs, the larger the volume of trade. A similar approach is found in [13], where an equilibrium is established in terms of ‘surviving agents’: in a belief-heterogeneous market, the surviving agent is found to be the one who is the most rational. Last but not least, in cognisance of the important role played by dynamic optimisation in approaching heterogeneous financial-market equilibrium problems, we underline two recent accounts of the latter, i.e. [11] and [14], which vividly show how, in a market of two agents with heterogeneous characteristics, equilibria for various quantities can be found by means of a single backward-induction algorithm.

The approach, in the rest of this paper, to being informationally advantageous or disadvantageous is analogous to the one in [7]: we do not view the difference between the latter two as having or not having immediate access to the future value of a variable unknown to the public. We rather view it as having access to more efficient streams of information or, equivalently (cf. [1]), being more capable of compiling and processing large and complex datasets out of publicly available information. Both of these are associated with a higher signal-to-noise parameter in the present context. Yet, in the sense of [9], the present framework can also be seen as a diverse-belief model where beliefs are shaped in time by the information itself.

2 Modelling Information Flow

The information-based framework was first introduced in [8] as a new way of modelling credit risk and, later on, applied to a broad spectrum of issues in financial mathematics, including the valuation of insurance contracts, modelling of defaultable bonds, pricing of inflation-linked assets, and modelling of insider trading, before it was generalised to a wider class of Lévy information processes in [17].

Accordingly, we introduce the signal process (or, the information process in the sense of [8])

$$\xi_t = \sigma t X_T + \beta_{tT}, \qquad 0 \le t \le T, \qquad\qquad (1)$$

which is a Brownian random bridge (BRB), as defined in [17] for the general class of Lévy processes (i.e., Lévy random bridges or LRBs). Here, $\beta_{tT}$ (or, explicitly, $(\beta_{tT})_{0 \le t \le T}$) is a standard Brownian bridge over the period $[0,T]$ which takes the value $0$ at the beginning and end, and $\sigma$ is a measure of true signal to noise (henceforth, just ‘signal-to-noise’). The latter governs the overall speed of revelation of true information about the actual value of the fundamental $X_T$.

We also remark that Eq. (1) is not the only way to represent information flow; other forms with slightly different characteristics have also been considered in the literature.
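As an illustration, paths of the process in Eq. (1) can be simulated directly. The sketch below assumes the additive form $\xi_t = \sigma t X_T + \beta_{tT}$ and builds the bridge from a discretised Brownian motion; all parameter values are arbitrary.

```python
import numpy as np

def simulate_signal(x_T, sigma, T=1.0, n_steps=500, seed=None):
    """One path of the information process xi_t = sigma*t*x_T + beta_t,
    where beta is a standard Brownian bridge on [0, T] vanishing at 0 and T."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n_steps + 1)
    dB = rng.normal(0.0, np.sqrt(T / n_steps), size=n_steps)
    B = np.concatenate(([0.0], np.cumsum(dB)))   # standard Brownian motion
    bridge = B - (t / T) * B[-1]                 # pin the endpoint back to 0
    return t, sigma * t * x_T + bridge

t, xi = simulate_signal(x_T=1.0, sigma=0.8, seed=42)
print(round(xi[0], 12), round(xi[-1], 12))  # starts at 0, ends at sigma*T*x_T = 0.8
```

At the deadline the noise vanishes and the terminal signal value reveals $\sigma T X_T$ exactly, which is the sense in which the signal has a known deadline.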

More formally, we define a probability space $(\Omega, \mathcal{F}, \mathbb{Q})$, on which the filtration $(\mathcal{F}_t)$ is constructed. Here, $\mathbb{Q}$, i.e., the risk-neutral measure, is assumed to exist. The default measure is set to $\mathbb{Q}$ throughout the paper, if not stated otherwise. The filtration is generated directly by the signal process $(\xi_t)$ and, thus, simply by the current value $\xi_t$ itself. The latter simplification follows from the Markov property of $(\xi_t)$.

Proposition 1

The process $(\xi_t)$, as defined in Eq. (1), is conditionally Markovian.

Proof 1

Let $0 \le s < t < T$. Defining $(B_u)_{u \ge 0}$ as a standard Brownian motion, we can indeed express the signal process as

$$\xi_t = \sigma t X_T + (T - t) \int_0^t \frac{\mathrm{d}B_u}{T - u}. \qquad\qquad (3)$$

One can verify that this is identical to

$$\xi_t = \sigma t X_T + (T - t)\, B_{\frac{t}{T(T - t)}}, \qquad\qquad (4)$$

which, in turn, implies

$$\frac{\xi_t}{T - t} = \frac{\sigma t}{T - t}\, X_T + B_{\frac{t}{T(T - t)}}. \qquad\qquad (5)$$

Equations (3) and (4) indeed follow from two other well-known representations of bridges (see, e.g., [22]). Eq. (5), on the other hand, directly implies that, given $X_T$, $(\xi_t)$ is a Markov process with respect to its own filtration, i.e.,

$$\mathbb{E}\!\left[f(\xi_t) \,\middle|\, \xi_s, \xi_{s_1}, \ldots, \xi_{s_k}\right] = \mathbb{E}\!\left[f(\xi_t) \,\middle|\, \xi_s\right]$$

for any $t > s > s_1 > \cdots > s_k > 0$, and any measurable, finite-valued function $f$ (cf. [22]).

We are now in a position to work out, with respect to the available information $\mathcal{F}_t$, the value and dynamics of an asset which generates a cashflow $h(X_T)$ at time $T$ for some invertible function $h$. The value $S_t$, $0 \le t < T$, is given by

$$S_t = P_{tT}\, \mathbb{E}\!\left[h(X_T) \,\middle|\, \mathcal{F}_t\right] = P_{tT} \int_{\mathcal{X}} h(x)\, \pi_t(x)\, \mathrm{d}x, \qquad\qquad (6)$$

where $P_{tT}$ is the numéraire and $\pi_t$ the posterior density of the payoff. The quantities $\pi_t$ and $S_t$ are measurable with respect to $\mathcal{F}_t$, but not necessarily w.r.t. $\mathcal{F}_s$, $s < t$. On an important note, we remark that $\beta_{tT}$, i.e., the pure noise, is not measurable w.r.t. $\mathcal{F}_t$, meaning that it is not directly accessible to market agents. Thus, an agent, although he observes $\xi_t$, cannot separate true signal from noise until time $T$.

Using Bayesian inference, $\pi_t$ can be expressed as

$$\pi_t(x) = \frac{p(x) \exp\!\left[\frac{T}{T-t}\left(\sigma x \xi_t - \frac{1}{2}\sigma^2 x^2 t\right)\right]}{\int_{\mathcal{X}} p(y) \exp\!\left[\frac{T}{T-t}\left(\sigma y \xi_t - \frac{1}{2}\sigma^2 y^2 t\right)\right] \mathrm{d}y}, \qquad\qquad (7)$$

with $p$ the a priori density and $\mathcal{X}$ the support of the payoff distribution, whereas the dynamics of $S_t$ read

$$\mathrm{d}S_t = r_t S_t\, \mathrm{d}t + \Gamma_{tT}\, \mathrm{d}W_t, \qquad\qquad (8)$$

where

$$\Gamma_{tT} = P_{tT}\, \frac{\sigma T}{T - t}\, \mathrm{Cov}\!\left[X_T,\, h(X_T) \,\middle|\, \mathcal{F}_t\right] \qquad\qquad (9)$$

and

$$W_t = \xi_t - \int_0^t \frac{\sigma T\, \mathbb{E}[X_T \mid \mathcal{F}_s] - \xi_s}{T - s}\, \mathrm{d}s. \qquad\qquad (10)$$

One can indeed show, by referring to Lévy’s characterisation [20], that $(W_t)$ is a Brownian motion adapted to $(\mathcal{F}_t)$ (cf. [8]).

Corollary 2

Assume, as a particular case, that $h$ is an identity, i.e., $h(X_T) = X_T$, and $X_T \in \{0, 1\}$ a priori. Then, Eq. (7) implies

$$S_t = P_{tT}\, \frac{p_1 \exp\!\left[\frac{T}{T-t}\left(\sigma \xi_t - \frac{1}{2}\sigma^2 t\right)\right]}{p_0 + p_1 \exp\!\left[\frac{T}{T-t}\left(\sigma \xi_t - \frac{1}{2}\sigma^2 t\right)\right]},$$

where $p_i = \mathbb{Q}(X_T = i)$, $i \in \{0, 1\}$.
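The signal-based price can be evaluated numerically on a discrete payoff support. The sketch below assumes the posterior takes the exponential-tilting form $\pi_t(x) \propto p(x)\exp[\tfrac{T}{T-t}(\sigma x \xi_t - \tfrac{1}{2}\sigma^2 x^2 t)]$ and a unit numéraire; all parameter values are illustrative.

```python
import numpy as np

def posterior(xi_t, t, support, prior, sigma, T=1.0):
    """Bayesian posterior over a discrete payoff support given a signal
    value xi_t, assuming the exponential-tilting density of the
    Brownian-random-bridge framework."""
    x = np.asarray(support, float)
    p = np.asarray(prior, float)
    w = p * np.exp((T / (T - t)) * (sigma * x * xi_t - 0.5 * sigma**2 * x**2 * t))
    return w / w.sum()

def signal_price(xi_t, t, support, prior, sigma, T=1.0, discount=1.0):
    """Signal-based price: discounted conditional expectation of the payoff."""
    return discount * float(np.dot(support, posterior(xi_t, t, support, prior, sigma, T)))

# binary payoff X_T in {0, 1} with a uniform prior: at t = 0 the price
# is just the prior mean
print(signal_price(xi_t=0.0, t=0.0, support=[0, 1], prior=[0.5, 0.5], sigma=1.0))  # 0.5
```

As the signal drifts up (down), the posterior weight on the high state and hence the price rises (falls), which is the mechanism behind the bid/ask quotes in the next section.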

3 Model Setup

We assume that there is a pure dealership market comprising risk-neutral agents with heterogeneous informational access. For simplicity, and w.l.o.g., we assume there is a pair of agents, $i \in \{1, 2\}$, with access to the filtrations generated by their respective signals $\xi^1$ and $\xi^2$, and a single risky asset whose payoff is not measurable w.r.t. $\mathcal{F}^i_t$, $t < T$. We also assume $\sigma_2 > \sigma_1$, i.e., agent 2 is informationally more susceptible than agent 1. In our dynamical model, for simplicity of analysis, we suppose that agents trade futures contracts on the single risky asset with each other at sequential auction times, without any intertemporal consumption or exogenous wealth; both agents simply follow a buy-and-hold strategy. In this setup, the execution of trades, besides a potential profit or loss, results in two things. First, it helps, e.g., the central-planner consolidate the information sets of agents at the auction time into a joint information bundle. Second, the competitive market price is discovered immediately. Below we analyse the latter two separately. Limit orders are cleared by a Walrasian matching engine (as in [11]), which can be deemed a central-planner in the context of [9] or a group of competitive market makers. The central-planner aims solely to maximise the overall expected profit (or, utility) of agents.

We also note that, for any given signal value and a priori density $p$, the price is a function of both $\xi_t$ and $\sigma$. This means that, if $\xi_t$ is observed, one still needs to know $\sigma$ to be able to back out the price. Without knowledge of $\sigma$, the observer cannot infer how reliable an observed sample of the signal is.

Moments before the sequential auction time, agents, having observed their signals, submit to the central-planner the bid and ask prices at which they are willing to trade. One key property of our model is that an agent does not necessarily know whether his signal is superior (i.e., he is agnostic), and that the agents are able to infer each other’s prices, and hence information (unless they are ‘omitters’, as described below), when, and only when, a price match occurs and a clearing price is set. Otherwise, limit orders are kept with the auction engine (i.e., a closed limit-order trading book). This also rules out ‘bluffing’ (cf. [4, 12]).

Individual bid and ask prices are based on the signal-implied prices worked out by virtue of Eq. (7), and trade occurs whenever

$$\alpha^b S^1_t \ge \alpha^a S^2_t \quad \text{or} \quad \alpha^b S^2_t \ge \alpha^a S^1_t, \qquad\qquad (11)$$

with $\alpha^b$ and $\alpha^a$ being the constant bid and ask multipliers, respectively, where $\alpha^b \le 1$ and $\alpha^a \ge 1$. Obviously, if Eq. (11) holds with equality, i.e., if $\alpha^b S^1_t = \alpha^a S^2_t$ or $\alpha^b S^2_t = \alpha^a S^1_t$, then the market price will be discovered directly. In case of an inequality, under the risk-neutrality assumption, the market will clear at the mid-price:

$$\bar{S}_t = \frac{\alpha^b S^i_t + \alpha^a S^j_t}{2}, \qquad\qquad (12)$$

where $i$ denotes the buyer and $j$ the seller.
In [11], the authors vividly explain why the real-world interpretation of the price posted by a Walrasian auctioneering computer is the bid-ask midpoint.
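The matching step can be sketched in a few lines. The rule below is an assumed, stylised reading of Eqs. (11)-(12): each agent quotes multiplicative bid/ask prices around his signal-based valuation, a trade occurs when one agent's bid crosses the other's ask, the trade size is fixed at one contract, and the clearing price is the midpoint of the crossing quotes.

```python
def match(s1, s2, bid_mult=0.99, ask_mult=1.01):
    """Walrasian-style matching sketch: s1, s2 are the agents' signal-based
    prices. Returns (direction, clearing_price), where direction = +1 if
    agent 1 buys from agent 2, -1 if agent 2 buys from agent 1, 0 if the
    quotes do not cross (no trade)."""
    bid1, ask1 = bid_mult * s1, ask_mult * s1
    bid2, ask2 = bid_mult * s2, ask_mult * s2
    if bid1 >= ask2:                      # agent 1's bid crosses agent 2's ask
        return +1, 0.5 * (bid1 + ask2)
    if bid2 >= ask1:                      # agent 2's bid crosses agent 1's ask
        return -1, 0.5 * (bid2 + ask1)
    return 0, None                        # inside the spread: no trade

print(match(1.10, 1.00))   # agent 1 values the asset higher, so he buys
print(match(1.00, 1.005))  # valuations too close: spread eats the margin
```

Note how a wider spread (smaller `bid_mult`, larger `ask_mult`) suppresses trade exactly as described for the attentive scenario later on.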

The initial contract holdings of agents, as denoted by $x_{i,0}$, $i \in \{1, 2\}$, are set to zero. Here $x_{i,t}$ denotes the total time-$t$ net contract stock held by agent $i$. We also define $x_{0,t}$ as the total net contract ‘stock’ held by the central clearing at time $t$. Accordingly, the total net order ‘flow’ at time $t$ should be $\Delta x_{i,t} = x_{i,t} - x_{i,\tau(t)}$, which is given by

$$\Delta x_{i,t} = \theta_{i,t}, \qquad\qquad (13)$$

for some trading process $(\theta_{i,t})$, given by

$$\theta_{i,t} \in \{-1, 0, +1\}, \qquad\qquad (14)$$

with $+1$ ($-1$) denoting a one-contract buy (sell) and $0$ no trade. Market clearing conditions imply $\sum_i \Delta x_{i,t} = 0$ and, therefore, $x_{0,t} = 0$ for all $t$. Now we define the increasing process $\tau(t)$, i.e., the time of the last trade prior to time $t$, as follows:

$$\tau(t) = \sup\left\{ s < t : \theta_{i,s} \ne 0 \right\}. \qquad\qquad (15)$$
It is apparent that $\tau(t)$ equals the most recent auction time at which a trade actually occurred, if any. The ex-post (i.e., at contract expiry) profit/loss of agent $i$ coming from his time-$t$ transaction can be written as

$$\pi_{i,t} = \theta_{i,t}\left(h(X_T) - \bar{S}_t\right) \qquad\qquad (16)$$

or, simply,

$$\pi_{i,t} = \theta_{i,t}\left(\mathbb{1}_{\{H\}} - \bar{S}_t\right), \qquad\qquad (17)$$

where $\bar{S}_t$ is as in Eq. (12), and $H$ and $L$ denote high- and low-type markets, respectively (cf. [19]). Eq. (17) is based on the correspondence of signal and reality. Market clearing conditions again will require $\sum_i \pi_{i,t} = 0$.
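The ex-post P&L is pure bookkeeping once the trade list is known. The sketch below assumes a binary payoff normalised so that the high-type market pays 1 and the low-type 0, unit trade size, and the sign convention that a positive direction means agent 1 buys; market clearing then forces the two P&Ls to sum to zero.

```python
def expost_pnl(trades, payoff):
    """Ex-post P&L of the two agents over a list of futures trades.
    Each trade is (direction, price): direction = +1 if agent 1 buys one
    contract from agent 2 at that price, -1 if agent 1 sells one.
    Clearing makes the market zero-sum, so agent 2's P&L is the negative."""
    pnl1 = sum(d * (payoff - p) for d, p in trades)
    return pnl1, -pnl1

# agent 1 bought low twice and sold high once in a high-type market
trades = [(+1, 0.45), (+1, 0.70), (-1, 0.95)]
p1, p2 = expost_pnl(trades, payoff=1.0)
print(round(p1, 6), round(p2, 6))  # zero-sum allocation of overall P&L
```

This zero-sum structure is why the figures in the next section show the two agents' average P&L curves as mirror images of each other.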

4 Numerical Analysis

We now present some numerical results based on the setup above. Assume, in this first scenario, that both agents are “omitters” (or, the “stubborn bigots” of [9]) who never change their mind and simply execute trades according to the following recurring procedure: (1) Observe the signal. (2) Quote signal-based bid and ask prices. (3) Let the central-planner determine, using the pre-announced and legally binding matching rule (12), the trade direction, if any, and the transaction price (which are then revealed to the agents). Note that the agents execute trades without learning from each other: although they could otherwise update their information sets, they continue to rely solely on their own information sources.

Figure 1: Sample evolution of information-based transaction prices in scenario 1 (). Arbitrary parameter values: , , , , and (i.e., true value is set to ) with . The dotted lines are bid and ask prices based on and , respectively, with .
Figure 2: Evolution of information-based transaction P&L averaged over path simulations and based on parameters from Figure 1, except that .

In Figure 1, where the true fundamental value of the asset is fixed, we illustrate one possible path of such a scenario. Despite a bid-ask margin, the occurrence of trade is highly likely in this case, as agents do not learn from each other and personal value judgements diverge. The informationally more (less) susceptible agent, though unknowingly, keeps trading in the right (wrong) direction due to the superiority (inferiority) of his signal. Note from Figure 1 that even after the agent with the better signal discovers the asset’s true value, he is still able to execute profitable trades thanks to the matching rule. Figure 2, on the other hand, shows the profit-and-loss (P&L) results of such a scenario for each time step, averaged over simulations with an increased number of auctions. We note at first glance that the qualitative behaviour of the P&L agrees with the qualitative behaviour of the magnitude of extra information held (cf. [7]).

On an additional note, when multiple agents with various informational capabilities are involved in the market, our numerical results presented in Figure 3 suggest that, while the P&L continues to agree with the qualitative behaviour of the magnitude of extra information held by each agent, it is also distributed between agents in proportion to the quality of their signals (particularly once the informational differential reaches an adequate level).

Figure 3: Evolution of information-based transaction P&L of multiple agents averaged over path simulations and based on parameters given in Figure 1, except that .

Yet, exchanges generally do not operate quite this way. A more realistic scenario would be that agents are “attentive” and infer their counterpart’s posterior, and therefore likelihood, from his price quote at the last trade time. This would mean having, at any time $t$, partial access to a larger information set, e.g., for agent 1, generated by the join of his own filtration and that revealed by his counterpart’s quotes, i.e.,

$$\mathcal{G}^1_t = \mathcal{F}^1_t \vee \mathcal{F}^2_{\tau(t)}. \qquad\qquad (18)$$
Once an agent gains partial access to this larger information set, he updates his posterior accordingly (by updating his likelihood to the effective likelihood), which will again be of the form of Eq. (7) with the likelihood replaced by its effective counterpart, e.g., for agent 1,

$$\tilde{\pi}^1_t(x) \propto p(x)\, \tilde{L}^1_t(x). \qquad\qquad (19)$$
Note that we intentionally index the counterpart’s quantities by the last trade time $\tau(t)$ (rather than by the current time) so as not to imply that one party’s signal is directly observable to the other after the last auction time (which is also not needed).

Thus, before submitting an order at time $t$, having observed a new signal sample, each agent will need to update his effective information. Also, since the signal process is Markovian, partially accessing the latest signal sample of his counterpart will be as valuable to an agent as partially accessing the counterpart’s entire signal history. Accordingly, right before the auction at time $t$, the ‘useful’ effective likelihood for agent 1 will be


where we used the standard bridge covariance relation to find $\rho_t$, i.e., the correlation between the two agents’ signal samples conditional on the payoff, as follows:


with $\rho$ being the unconditional correlation between the two bridge processes. We note that $\rho_t$ is a decreasing function of time, as expected, and also that, when the two sampling times coincide, Eq. (20) simply reduces to


which in turn reduces to the agent’s own likelihood when there has been no trade. The signal-based price of agent $i$ is then given by


Accordingly, the new trading procedure is as follows: (1) Observe the signal. (1a) Work out the effective likelihood. (2) Quote signal-based bid and ask prices based on the effective information. (3) Let the central-planner do his work (same as step (3) above).
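Step (1a) can be illustrated with a simplified update. The sketch below assumes the two bridge noises are conditionally independent given the payoff, so the effective posterior is just the prior tilted by both likelihood factors; it deliberately ignores the conditional correlation $\rho_t$ correction of Eq. (20), and is an assumption made purely for illustration.

```python
import numpy as np

def effective_posterior(signals, support, prior, T=1.0):
    """Effective (post-learning) posterior sketch: combine the agent's own
    signal and the counterpart's last revealed signal sample as conditionally
    independent likelihoods given the payoff. Each entry of `signals` is a
    tuple (xi, t, sigma). NOTE: ignores the rho_t correlation correction."""
    x = np.asarray(support, float)
    w = np.array(prior, dtype=float)
    for xi, t, sigma in signals:
        w *= np.exp((T / (T - t)) * (sigma * x * xi - 0.5 * sigma**2 * x**2 * t))
    return w / w.sum()

# agent 1's own signal at t = 0.6 plus agent 2's sample revealed at tau(t) = 0.4
post = effective_posterior([(0.5, 0.6, 1.0), (0.45, 0.4, 1.5)], [0, 1], [0.5, 0.5])
print(post.round(3))  # posterior tilts toward the high state
```

Because both samples point upward, the combined posterior is sharper than either alone, which is precisely the margin-compressing effect of mutual learning seen in Figures 4-5.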

Figure 4: Sample evolution of information-based transaction prices along a sample path in scenario 2 (). Arbitrary parameter values: , , , , and with . The dotted lines are bid and ask prices based on and , respectively, with .
Figure 5: Evolution of information-based transaction P&L averaged over a series of path simulations and based on parameters given in Figure 4, except that .

One realisation of this second scenario is depicted in Figure 4. At first glance, learning seems to have decreased profit margins substantially (i.e., to a level where they are often eaten up by the spread, preventing trade). In Figure 5, we again show the average stepwise P&L of the agents over many realisations. It is apparent from the figure that the informationally more susceptible agent is no longer able to extract rents as large as in the first scenario (see Figure 2), although he is still able to maintain some modest profits. This is most likely due to the lag in the learning process, as there is still room for the superior signal to provide the agent receiving it with extra information in between auctions. The huge difference between the outcomes of the two scenarios, i.e., “omitter” and “attentive”, implies that, when each agent deems his own signal superior, there might exist optimal strategies where agents are still “attentive” but, this time, choose when to reveal their information through trade.

Figure 6: Learning process: Bayesian updating of posteriors averaged over path simulations and based on parameters given in Figure 4, except that .
Figure 7: Learning process: Bayesian updating of posteriors averaged over path simulations and based on parameters given in Figure 4, except that .

To conclude this section, we compare, in Figures 6-7, the impact of allowing mutual learning on the speeds at which the two agents discover the true fundamental value of the asset. In the case where the differential between information-flow speeds is high (refer to Figure 6), learning seems to work mostly in favour of the agent with the less superior signal, with little or no benefit to the agent with the superior signal, whereas, when the differential is minimal (cf. Figure 7), both agents benefit equally from sharing their information via trading.

5 Signal-based Optimal Strategy

The P&L figures provided in Section 4 are ex-post, i.e., calculated at the terminal date. In reality, when they trade, agents do so based on their signal-based expectations about the true fundamental value to be revealed at time $T$. They learn whether their earlier trades in futures contracts turned out to be a profit or a loss only at time $T$. This, in fact, establishes the main argument for the existence of optimal choices of trading times which maximise the agents’ signal-based expected profits: both agents believe that their trades will make them better off (or, there exists ‘a common knowledge of gains from trade’ in the sense of [6]). Throughout this section, we will regard the agents as ‘attentive’.

5.1 Characterisation of Expected Profit

We recall from Section 4 that, just before the auction at time $t$, the agent observes the value of his signal and works out his effective information before making a judgement of the asset’s value. The expected (ex-ante) profit of agent $i$ from his possible trade at time $t$ can then be decomposed as follows:


with the respective weights being the chances of agent $i$ getting correct and erroneous signals at time $t$. More formally,

where, again, $H$ and $L$ denote high- and low-type markets in the sense of [19], and


Then, Eq. (27) can be written more explicitly as follows:



When the payoff is continuous, however, Eq. (5.1) implies


where and ; and are normalised posteriors for high- and low-type markets, respectively; and, at this time,


The notation here denotes the signal-based price of the agent when the actual signal is pinned to a given value. In a nutshell, the expected profit of the agent is decomposed, through Eqs. (5.1) and (33), into two components: whether the agent’s signal is pointing at the right (or wrong) trade direction and, in each case, what the expected profit (or loss) would be.
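The first ingredient of this decomposition, the chance that the signal points in the right direction, can be checked by simulation. The sketch below assumes a binary payoff with the exponential-form posterior used earlier, and samples the bridge marginal $\beta_t \sim N(0,\, t(T-t)/T)$ directly at a fixed time; the high-type market is conditioned on throughout.

```python
import numpy as np

def prob_correct_direction(sigma, t, T=1.0, prior=0.5, n=20000, seed=1):
    """Monte Carlo estimate of the chance that the posterior on a {0, 1}
    payoff points in the right direction, i.e. pi_t(1) > 1/2, conditional
    on the high-type market X_T = 1 (assumed bridge model)."""
    rng = np.random.default_rng(seed)
    # xi_t = sigma*t*X_T + beta_t with beta_t ~ N(0, t(T-t)/T) at fixed t
    beta = rng.normal(0.0, np.sqrt(t * (T - t) / T), size=n)
    xi = sigma * t * 1.0 + beta
    # log-likelihood ratio of x = 1 against x = 0 under the assumed posterior
    loglik = (T / (T - t)) * (sigma * xi - 0.5 * sigma**2 * t)
    post1 = prior * np.exp(loglik) / (1 - prior + prior * np.exp(loglik))
    return float(np.mean(post1 > 0.5))

print(prob_correct_direction(sigma=1.0, t=0.5))  # better than a coin flip
print(prob_correct_direction(sigma=2.0, t=0.5))  # higher signal-to-noise helps
```

Consistent with the discussion above, the directional quality improves monotonically with the signal-to-noise parameter, which is what ties the P&L allocation to signal quality.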

5.1.1 Directional Quality of Trading Signal (Digital Dividend)

Assume, without loss of generality, a binary payoff $X_T \in \{0, 1\}$ with prior knowledge of the pair $(p_0, p_1)$. In fact, any binary payoff structure can be reduced to this form (a property which will simplify our calculations below). Let the true value of $X_T$ be fixed. Equation (5.1) implies


We can calculate the likelihoods of agent $i$ receiving a correct trade signal when $\tau(t) > 0$ (i.e., when the agents have already exchanged their information through trading before time $t$) in high- and low-type markets, respectively, as follows:


A straightforward calculation yields