The time interpretation of expected utility theory
Abstract
Decision theory is the model of individual human behavior employed by neoclassical economics. Built on this model of individual behavior are models of aggregate behavior that feed into models of macroeconomics and inform economic policy. Neoclassical economics has been fiercely criticized for failing to make meaningful predictions of individual and aggregate behavior, and as a consequence has been accused of misguiding economic policy. We identify as the Achilles heel of the formalism its least constrained component, namely the concept of utility. This concept was introduced as an additional degree of freedom in the 18th century when it was noticed that previous models of decision-making failed in many realistic situations. At the time, only pre-18th-century mathematics was available, and a fundamental solution of the problems was impossible. We revisit the basic problem and resolve it using modern techniques, developed in the late 19th and throughout the 20th century. From this perspective utility functions do not appear as (irrational) psychological reweightings of monetary amounts but as nonlinear transformations that define ergodic observables on nonergodic growth processes. As a consequence we are able to interpret different utility functions as encoding different nonergodic dynamics and remove the element of human irrationality from the explanation of basic economic behavior. Special cases were treated in Peters and Gell-Mann (2016). Here we develop the theory for general utility functions.
pacs: 02.50.Ey, 05.10.Gg, 05.20.Gg, 05.40.Jc

The first three sections are concerned with putting this work in context and giving a brief summary of relevant aspects of expected utility theory. The novel technical part starts in Section IV.
I Positioning
The present document is concerned with decision theory, part of the foundation of formal economics. It is therefore worth our while to spell out where in this vast context we feel our contribution is located. It addresses the most formal part of economics, something that is often called neoclassical economics. Broadly speaking this is the part of economics that builds simple quantitative models of economic processes, analyzes these models mathematically, and interprets their behavior by giving real-world meaning to model variables.
This approach to thinking about economic issues became particularly dominant in the second half of the 20th century. Soon after the rise of its popularity it began to be fiercely criticized. We take these criticisms very seriously and interpret them as an indication that something is fundamentally wrong in the way we conceptualize economic problems in the neoclassical approach.
It is certainly true that some of the predictions of neoclassical economics clash with observations. Paradoxes, that is, apparent internal inconsistencies, stubbornly remain in the field (examples are the St. Petersburg paradox and the equity premium puzzle). This situation may elicit different responses, for example

we can think of it as a normal part of science in progress. Of course there are unresolved problems – finding their solutions is the job of the economic researcher.

we may conclude that the pen-and-paper approach using models simple enough for analytical solution makes the representation of people too simplistic. Instead of analyzing such models, it has been argued, we should turn to numerical work and build in-silico worlds of agents with more complex, more realistic behaviour.

we may reject the entire scientific approach, whether analytic or numerical. Proponents of this position argue that economic questions are fundamentally moral, not scientific, and that a scientific approach is bound to miss the most important aspects of the problem.
We consider all three responses valid but not mutually exclusive. Every discipline has open problems, and it would be foolish to dismiss an approach only because it has not resolved every problem it encountered. Turning to computer simulations is part of every scientific discipline – when simple models fail and more complex models are not analytically tractable, of course we should use computers. Nor can we dismiss the argument that building a good society entails more than building an economically wealthy society, and that mathematical models only elucidate the consequences of a set of axioms but cannot prove the validity, let alone the moral validity, of the axioms themselves.
The treatment we present here is most informative with respect to perspective 1. We agree with the neoclassical approach in the following sense: we believe that simple mathematical models can yield meaningful insights. We ask precisely how the failures of neoclassical economics may be interpreted as a flaw in the formalism that can be corrected. Such a flaw indeed exists, buried deep in the foundations of formal economics: often expectation values are taken where time averages would be appropriate. In this sense, formal economics has missed perhaps the most important property of decisions: they are made in time and affect the future. They are not made in the context of coexisting possibilities across which resources may be shared. We find reflections of this missing element, for instance, in the criticism of "short-termism" that is often levelled against neoclassical economics. Indeed, an approach that disregards time in this precise way will result in a formalism that is overly focused on the short term. For example, such a formalism will not provide an understanding of the fundamental benefits of cooperation Peters and Adamou (2015a).
We are led by this analysis to a correction of the formalism capable of resolving a number of very persistent problems. The work in the present paper is part of implementing the correction. It also helps clarify the relationship between existing work in neoclassical economics and our own work. Overall, we propose to revisit and redevelop the entire formalism from a more nuanced basis that gives the concept of time the central importance it must have if the formalism is to be of use to humans and collections of humans whose existence is inescapably subject to time.
II Epistemology
We begin with some remarks on rationality. Economics is the only science that frequently states that it assumes rational behavior. The comparison with physics is illuminating.


A strong though rarely articulated assumption in physics is that observed behavior can be explained, in the sense that it follows rules, laws, or tendencies that – once identified – enable us to predict and comprehend the behavior of a given system. This assumption is a fundamental belief. It is assumed that the world, or rather very small, isolated bits of the world, can be understood. Without this assumption it would not be sensible to try to understand the behavior of physical systems.

When we say that we assume rational behavior in economics, as we do, we mean nothing else. We assume that observed behavior can be explained, in the sense that it follows rules, laws, or tendencies that – once identified – enable us to predict and comprehend human behavior.



In physics we proceed by specifying a model of the observed behavior, that is, a mathematical analog, our guess of the rules governing the physical system. For instance, we might say that electrons are point particles with a mass of about 9.1 × 10⁻³¹ kg that repel one another with the Coulomb force.

Similarly, in economics we proceed by specifying our model of human behavior. For instance, we might say that humans choose the action that maximizes the expectation value of their monetary wealth.



We now confront our model with observations. No observation will be exactly as predicted by the model. No two electrons will be observed to repel each other with exactly the Coulomb force. There are too many other electrons around, and protons and gravity and countless perturbations. Nonetheless, the model is useful because it makes more or less sensible predictions for large groups of electrons. The behavior of a billion billion billion electrons over here and a billion billion billion electrons over there may be well described as the behavior of many electrons repelling each other with the Coulomb force. But we may find a realm where the electrons behave irrationally. For instance, a macroscopic lump of matter should be able to absorb any amount of energy. But as it turns out, electrons bound to a nucleus only accept certain fixed amounts of energy. This presents a dilemma to the physicist. He now has a choice between i) declaring electrons to be behaving irrationally, i.e. giving up the search for an explanation, and ii) declaring his model deficient in the regime of interest and searching for another model. Often a pretty good mathematical description of the irrational behavior is easily found but is perceived as a mathematical trick, just a description with no inherent meaning.¹ Some years or centuries later an intuition evolves in a new context, and the previously purely formal model (the mathematical trick) now appears as a natural part of a bigger picture.

¹ Both Boltzmann and Planck described some of their greatest discoveries as a mere mathematical trick (respectively, taking expectation values under equiprobability of microstates and quantizing black-body energy).

Similarly, observations of human behavior will not be exactly as predicted. There are too many idiosyncratic and circumstantial factors involved. No single person will be observed to maximize the expectation value of his wealth consistently. Nonetheless, an overall tendency may be predictable – a majority of people may prefer a 50/50 chance of receiving $2 or losing $1 over no change in their wealth. But we may find a realm where people behave consistently irrationally. Perhaps few people will prefer a 50/50 chance of winning $20,000 or losing $10,000 over no change in their wealth. Again, the scientist has a choice between i) giving up the fundamental belief that made him a scientist in the first place and declaring humans to be irrational, and ii) declaring his model deficient in the new regime and looking for a better model. In the example we mentioned, a new model was quickly found in the early 18th century. While human behavior is not well described as maximizing the expectation value of wealth, it is quite well described as maximizing the expectation value of changes in the logarithm of wealth. Where the logarithm comes from is unclear – the psychological label "risk aversion" is attached to it but that's just a label. In essence, this is a mathematical trick that seems to work well, just as Planck's trick of quantizing energy worked well. Following the development of quantum mechanics, Planck's trick doesn't seem so strange any more. An intuition has arisen around it. The story of this paper is the story of the equivalent development in decision theory. Following the formulation of the concept of ergodicity, the logarithm – the mathematical trick that saved decision theory – does not seem so strange any more. We identify the use of the logarithm as a different model of rationality: it is rational to maximize average wealth growth over time under the null model of multiplicative growth; it is not rational to maximize the mathematical expectation of wealth.
The two models give similar predictions for small monetary amounts, but entirely different predictions when the amounts involved approach the scale of total disposable wealth. Maximizing the rate of change of the logarithm of wealth now appears as a natural part of a bigger picture.
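The divergence between the two models in the two regimes just described can be made concrete with a few lines of code. This is a minimal sketch; the starting wealth of $15,000 is an illustrative assumption, not a figure from the text.

```python
import math

def expected_wealth_change(outcomes):
    """Linear model: expected change in wealth, sum of p * dx."""
    return sum(p * dx for p, dx in outcomes)

def expected_log_change(wealth, outcomes):
    """Logarithmic model: expected change in ln(wealth)."""
    return sum(p * (math.log(wealth + dx) - math.log(wealth))
               for p, dx in outcomes)

wealth = 15_000  # assumed disposable wealth (illustrative)

small = [(0.5, 2), (0.5, -1)]            # 50/50: win $2 or lose $1
large = [(0.5, 20_000), (0.5, -10_000)]  # 50/50: win $20,000 or lose $10,000

# The linear model accepts both gambles...
assert expected_wealth_change(small) > 0
assert expected_wealth_change(large) > 0
# ...while the logarithmic model accepts the small gamble but rejects the
# large one, whose stakes approach total disposable wealth.
assert expected_log_change(wealth, small) > 0
assert expected_log_change(wealth, large) < 0
```

For small stakes the logarithm is locally linear, so both criteria agree; the disagreement appears only when the loss would consume a large fraction of total wealth.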

Firstly, we make the methodological choice to assume that human behavior can be understood in principle. This is sometimes called the rationality hypothesis.
Secondly, we postulate a specific form of rationality, that is, we state an axiom. Our axiom is that humans make decisions in a manner that would optimise the time-average growth rate of wealth, were those decisions to be repeated indefinitely. In our treatment, decisions are choices between different stochastic processes, not choices between different random variables as is usually the case in decision theory.
Just as the description of electrons as charged point masses is not a good description in all contexts, our treatment is not a good description of economic decisions in all contexts. For example, we expect our axiom to be a poor representation of reality if relevant time scales are short. Of course “short” is a relative term that depends on the stochastic process. Time scales are short if a typical trajectory is dominated by noise. In this regime the underlying tendencies of an individual’s decisions have no time to emerge, and are subsumed by randomness.
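The role of the time scale can be illustrated under an assumed null model of geometric Brownian motion (the parameters below are illustrative, not from the text): log-wealth at time t is Gaussian with mean g·t and standard deviation σ·√t, so the probability that a trajectory with positive time-average growth g is actually ahead of break-even is Φ(g√t/σ), barely better than a coin flip at short times.

```python
import math

def prob_ahead(g, sigma, t):
    """P(ln x(t) > ln x(0)) when ln x(t) - ln x(0) ~ Normal(g*t, sigma^2 * t):
    the standard normal CDF evaluated at g*sqrt(t)/sigma."""
    z = g * math.sqrt(t) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

g, sigma = 0.02, 0.2  # illustrative: 2% time-average growth, 20% volatility

short = prob_ahead(g, sigma, 1.0)    # after 1 time unit: almost a coin flip
long_ = prob_ahead(g, sigma, 400.0)  # after 400 time units: tendency visible
assert abs(short - 0.5) < 0.05
assert long_ > 0.95
```

On short time scales the noise term dominates and decisions based on long-run growth carry little information about short-run outcomes, which is the sense in which our axiom is a poor description there.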
Having pointed out the descriptive limits of our treatment, we add that our theory is not normative either. We simply point out the logical and mathematical connections between our treatment and classical decision theory.
III Expected utility theory
Expected utility theory is the bedrock of neoclassical economics. It provides the discipline's answer to the fundamental decision problem of how to choose between different sets of uncertain outcomes. The generality of the framework is all-encompassing. Everything in the past is certain, whereas everything in the future comes with a degree of uncertainty. Any decision is about choosing one uncertain future over alternative uncertain futures, wherefore expected utility theory is behind the answer of neoclassical economics to any problem involving human decision making.
To keep the discussion manageable, we restrict it to financial decisions, i.e. we will not consider the utility of an apple or of a poem but only utility differences between different dollar amounts. We restrict ourselves to situations where any nonfinancial attendant circumstances of the decision can be disregarded. In other words we work with a form of homo economicus.
For a decision maker facing a choice between different courses of action, the workflow of expected utility theory is as follows

Imagine everything that could happen under the different actions:
Associate with each action a set of possible future events, $A = \{a_1, a_2, \ldots\}$ for the first action, and similarly for the other actions.
Estimate how likely the different consequences of each action are and how they would affect your wealth:
For the set $A$, associate a probability $p(a_i)$ and a change in wealth $\Delta x(a_i)$ with each elementary event $a_i$, and similarly for all other sets.
Specify how much these outcomes would affect your happiness:
Define a utility function, $u(x)$, that only depends on wealth and describes the decision maker's risk preferences.
Aggregate the possible changes of happiness for any given event:
Compute the expected change in utility associated with each available action, $\langle \Delta u_A \rangle = \sum_i p(a_i) \left[ u(x + \Delta x(a_i)) - u(x) \right]$, and similarly for the other actions.
Pick the action that makes you happiest:
The option with the highest expected utility change is the decision maker's best choice.
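The workflow above can be sketched as a short program. The actions, probabilities, and dollar amounts below are illustrative placeholders, and logarithmic utility is one common choice rather than anything mandated by the workflow itself.

```python
import math

def u(x):
    """Utility function: logarithmic, a common choice (illustrative)."""
    return math.log(x)

def expected_utility_change(wealth, action):
    """Aggregation step: sum of p(a_i) * [u(x + dx(a_i)) - u(x)] over events."""
    return sum(p * (u(wealth + dx) - u(wealth)) for p, dx in action)

# Steps 1-2: each action is a set of (probability, wealth-change) pairs.
actions = {
    "safe":  [(1.0, 50)],
    "risky": [(0.5, 500), (0.5, -400)],
}

wealth = 1_000
# Final step: pick the action with the highest expected utility change.
best = max(actions, key=lambda a: expected_utility_change(wealth, actions[a]))
assert best == "safe"
```

With this concave utility and starting wealth, the risky action has a negative expected utility change even though its expected wealth change is positive, so the formalism selects the safe action.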
Each step of this process has been criticized, but we assume that all steps are possible. This does not reflect a personal opinion that they are unproblematic in reality but is a methodological choice. By overlooking some undeniable but possibly solvable difficulties we are able to inspect and question aspects at a deeper level of the formalism. Thus we assume that all possible future events, associated probabilities and changes in wealth are known, that a suitable utility function is available, and that the mathematical expectation of utility changes is the mathematical object whose ordering reflects preferences among actions. For simplicity we also make the common assumption that the time between taking an action and experiencing the corresponding change in wealth is independent of the action taken.
Having accepted the basic premises of expected utility theory we acknowledge a remaining criticism. Expected utility theory may not be useful in practice. Of course usefulness can only be assessed if we know what we want to achieve. One aim of decision theory may be to genuinely help real people make decisions. On this score expected utility theory is limited. It is designed to ensure consistency in an individual's choices, but judged against criteria other than the risk preferences of the individual the theory may produce consistently bad choices. For example, decision theory is not designed to find the decisions that lead to the fastest growth in wealth; the decisions it recommends are those that maximize the mathematical expectation of a model of the decision maker's happiness. For a gambling addict, for instance, these decisions may lead to bankruptcy. Expected utility theory will recognize the individual as addicted to gambling, and conclude that he will be happiest behaving recklessly. It is a laissez-faire approach to decision theory. Such an approach is not illegitimate, but its limitations must be borne in mind. For instance, when designing policy it is no use to recognize that a financial institution that takes larger risks than are good for systemic stability is happiest when doing so. For any given decision maker it requires a utility function that can only be estimated by querying the decision maker, possibly about simpler choices that we believe he can assess more easily. Preferences of the decision maker are thus an input to the formalism. The output of the formalism is also a preference, namely the action that makes the decision maker the happiest. In other words, the output is of the same type as the input, which makes the framework circular. It may help the decision maker by telling him which action is most consistent with other actions he has taken or knows he would take in other situations.
We will interpret the basic findings of expected utility theory in a different light. We will remove the circularity, for better or worse, and using our model of rational behavior show that rationality according to our axioms under a reasonable model of wealth dynamics is equivalent to expected utility theory with commonly used utility functions. Some researchers consider this an irrelevant contribution because in that case we might just continue using expected utility theory. We disagree and consider our contribution an important step forward because it motivates new questions and provides answers that are not circular.
The range of questions we can answer in this way is surprising to us. Examples are: how does an investor choose the leverage of an investment Peters (2011a)? How can we resolve the St. Petersburg paradox Peters (2011b)? How can we resolve the equity premium puzzle Peters and Adamou (2011)? Why do people choose to cooperate Peters and Adamou (2015a)? Why do insurance contracts exist Peters and Adamou (2015b)? How can we make sense of the recent changes in observed economic inequality Berman et al. (2017)? Do economic systems change from one phase to another under different tax regimes?
We have variously referred to our approach as "dynamical" or "time-based" or as recognizing disequilibrium or nonergodicity. The best term to refer to our perspective may be "ergodicity economics" – in every problem we have treated we have asked whether the expectation values of key variables were meaningful, in particular how they were related to time averages.
IV Technical
We repeat our two axioms.
1. Human behavior can be understood. It follows a rationale and is in that sense rational.
2. We explore the following model of this rationale. Humans make decisions so that the growth rate of their wealth would be maximized over time were those decisions repeated indefinitely.
We suppose that an individual's wealth evolves over time according to a stochastic process. This is a departure from classical decision theory, where wealth is supposed to be described by a random variable without a dynamic. To turn a gamble into a stochastic process and enable the techniques we have developed, a dynamic must be assumed, that is, a mode of repetition of the gamble, see Peters and Gell-Mann (2016).
The individual is required to choose one from a set of alternative stochastic processes, say $x_1(t)$ and $x_2(t)$. We suppose that this is done by considering how the decision maker would fare in their long-time limits.
At each decision time, $t$, our individual acts to maximise subsequent changes in his wealth by selecting the process such that, if he waits long enough, his wealth will be greater under the chosen process than under the alternative process with certainty. Mathematically speaking, for any $\epsilon > 0$ there exists a sufficiently large $\Delta t$ such that the probability of the chosen $\Delta x_1$ being greater than $\Delta x_2$ is arbitrarily close to one,

(1)  $P\left( \Delta x_1(t, \Delta t) > \Delta x_2(t, \Delta t) \right) > 1 - \epsilon$,

where

(2)  $\Delta x_1(t, \Delta t) = x_1(t + \Delta t) - x_1(t)$,

with $\Delta x_2(t, \Delta t)$ similarly defined.
The criterion is necessarily probabilistic since the quantities $\Delta x_1$ and $\Delta x_2$ are random variables and it might be possible for the latter to exceed the former for any finite $\Delta t$. Only in the limit $\Delta t \to \infty$ does the randomness vanish from the system.
Conceptually this criterion is tantamount to maximising $\lim_{\Delta t \to \infty} \Delta x(t, \Delta t)$ or, equivalently, $\lim_{\Delta t \to \infty} \Delta x(t, \Delta t) / \Delta t$. However, neither limit is guaranteed to exist. For example, consider a choice between two geometric Brownian motions,

(3)  $dx_1 = x_1 \left( \mu_1\, dt + \sigma_1\, dW_1 \right)$

(4)  $dx_2 = x_2 \left( \mu_2\, dt + \sigma_2\, dW_2 \right)$

with $\mu_1, \mu_2 > 0$ and $\sigma_1 \neq \sigma_2$. The quantities $\Delta x_1$ and $\Delta x_2$ both diverge in the limit $\Delta t \to \infty$ and a criterion requiring the larger to be selected fails to yield a decision.
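The failure of the expectation-value route can be made concrete. For a geometric Brownian motion with drift μ and volatility σ, the ensemble average is ⟨x(t)⟩ = x(0)eᵘᵗ with rate μ, which grows without bound whenever μ > 0, while a typical long-time trajectory grows at g = μ − σ²/2 (anticipating the result derived below). A sketch with illustrative parameters:

```python
import math

# Two geometric Brownian motions, dx_i = x_i (mu_i dt + sigma_i dW_i)
# (illustrative parameters, not from the text):
mu1, sigma1 = 0.05, 0.40   # larger drift, large fluctuations
mu2, sigma2 = 0.04, 0.20   # smaller drift, small fluctuations

def expected_value(x0, mu, t):
    """Ensemble average of GBM: <x(t)> = x0 * exp(mu * t)."""
    return x0 * math.exp(mu * t)

def time_average_growth(mu, sigma):
    """Growth rate of a typical long-time trajectory: mu - sigma^2 / 2."""
    return mu - sigma**2 / 2

# Ranking by expectation value favors process 1, and both expectations
# grow without bound, so "pick the larger limit" yields no decision...
assert expected_value(1.0, mu1, 100) > expected_value(1.0, mu2, 100)
# ...while the time-average growth rates are finite and decisive:
# a typical trajectory of process 1 decays, one of process 2 grows.
assert time_average_growth(mu1, sigma1) < 0 < time_average_growth(mu2, sigma2)
```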
To overcome this problem we introduce a monotonically increasing function of wealth, which we call, suggestively, $u(x)$. We define:

(5)  $\Delta u_1(t, \Delta t) = u(x_1(t + \Delta t)) - u(x_1(t))$

(6)  $\Delta u_2(t, \Delta t) = u(x_2(t + \Delta t)) - u(x_2(t))$

The monotonicity of $u$ means that the events $\Delta x_1 > \Delta x_2$ and $\Delta u_1 > \Delta u_2$ are the same, the processes being compared at the same current wealth. Dividing by $\Delta t$ allows this event to be expressed as $\Delta u_1 / \Delta t > \Delta u_2 / \Delta t$, whence the decision criterion in (Eq. 1) becomes

(7)  $P\left( \frac{\Delta u_1}{\Delta t} > \frac{\Delta u_2}{\Delta t} \right) > 1 - \epsilon$.
Our decision criterion has been recast such that it focuses on the rate of change

(8)  $g(t, \Delta t) = \frac{\Delta u}{\Delta t}$.

As before, it is conceptually similar to maximising

(9)  $\bar{g} = \lim_{\Delta t \to \infty} \frac{\Delta u}{\Delta t}$.

If $x(t)$ satisfies certain conditions, to be discussed below, then the function $u$ can be chosen such that this limit exists. We shall see that $\bar{g}$ is then the time-average growth rate mentioned in Section II. For the moment we leave our criterion in the probabilistic form of (Eq. 7), but to continue the discussion we assume that the limit (Eq. 9) exists.
Everything is now set up to make the link to expected utility theory. Perhaps (Eq. 9) is the same as the rate of change of the expectation value of $u$,

(10)  $\frac{\langle \Delta u \rangle}{\Delta t}$.

We could then make the identification of $u$ being the utility function, noting that our criterion is equivalent to maximizing the rate of change in expected utility. We note that $x$ and hence $u(x)$ are random variables but $\langle \Delta u \rangle / \Delta t$ is not. Taking the long-time limit is one way of removing randomness from the problem, and taking the expectation value is another. The expectation value is simply another limit: it's an average over $N$ realizations of the random number $\Delta u$, in the limit $N \to \infty$. The effect of removing randomness is that the process is collapsed into the scalar $\langle \Delta u \rangle / \Delta t$, and consistent transitive decisions are possible by ranking the relevant scalars. In general, maximising $\langle \Delta u \rangle / \Delta t$ does not yield the same decisions as the criterion espoused in (Eq. 7). This is only the case for a particular function $u$ whose shape depends on the process $x(t)$. Our aim is to find these pairs of processes and functions. When using such $u$ as the utility function, expected utility theory will be consistent with optimisation over time. It is then possible to interpret observed behavior that is found to be consistent with expected utility theory using the utility function $u$ in purely dynamical terms: such behavior will lead to the fastest possible wealth growth over time.
We ask what sort of dynamic $x(t)$ must follow so that $\bar{g} = \langle \Delta u \rangle / \Delta t$ or, put another way, so that $\Delta u / \Delta t$ is an ergodic observable, in the sense that its time and ensemble averages are the same (Kloeden and Platen, 1992, p. 32).
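The question can be previewed numerically for a familiar special case (anticipating Section V): for geometric Brownian motion with u = ln x, the log-increments are stationary, independent Gaussians, so δu/δt is an ergodic observable. A simulation sketch, with illustrative parameters and discretization that are assumptions, not from the text:

```python
import math
import random

random.seed(1)
mu, sigma, dt = 0.05, 0.3, 0.1
g = mu - sigma**2 / 2  # time-average growth rate, here 0.005

def log_increment():
    """One increment of ln x under geometric Brownian motion:
    delta_u ~ Normal((mu - sigma^2/2) * dt, sigma^2 * dt)."""
    return g * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)

T = 100_000  # successive increments along one long trajectory (time average)
time_avg = sum(log_increment() for _ in range(T)) / (T * dt)

N = 100_000  # independent single increments (ensemble average)
ensemble_avg = sum(log_increment() for _ in range(N)) / (N * dt)

# Time and ensemble averages of delta_u / delta_t agree with each other
# and with the time-average growth rate g, up to sampling error.
assert abs(time_avg - g) < 0.02
assert abs(ensemble_avg - g) < 0.02
assert abs(time_avg - ensemble_avg) < 0.04
```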
We start by expressing the change in utility, $\Delta u$, as a sum over $T$ equal time intervals of length $\delta t$,

(11)  $\Delta u(t, T\,\delta t) = u(x(t + T\,\delta t)) - u(x(t))$

(12)  $= \sum_{k=1}^{T} \left[ u(x(t + k\,\delta t)) - u(x(t + (k-1)\,\delta t)) \right]$

(13)  $= \sum_{k=1}^{T} \delta u_k$

where $\delta u_k = u(x(t + k\,\delta t)) - u(x(t + (k-1)\,\delta t))$ and $\Delta t = T\,\delta t$. From (Eq. 9) we have

(14)  $\bar{g} = \lim_{T \to \infty} \frac{\Delta u(t, T\,\delta t)}{T\,\delta t}$

(15)  $= \lim_{T \to \infty} \frac{1}{T\,\delta t} \sum_{k=1}^{T} \delta u_k$

keeping $\delta t$ fixed. From (Eq. 10) we obtain

(16)  $\frac{\langle \Delta u \rangle}{\Delta t} = \lim_{N \to \infty} \frac{1}{N\,\delta t} \sum_{i=1}^{N} \delta u_i$

where each $\delta u_i$ is drawn independently from the distribution of $\delta u$.
We now compare the two expressions (Eq. 15) and (Eq. 16). Clearly the value of $\bar{g}$ in (Eq. 15) cannot depend on the way in which the diverging time period is partitioned, so the length of the interval must be arbitrary and can be set to the value of $\delta t$ in (Eq. 16). Expressions (Eq. 15) and (Eq. 16) are equivalent if the successive additive increments, $\delta u_k$, are distributed identically to the $\delta u_i$ in (Eq. 16), which requires only that they are stationary and independent.
Thus we have a condition on $u(x(t))$ which suffices to make $\bar{g} = \langle \Delta u \rangle / \Delta t$, namely that it be a stochastic process whose additive increments are stationary and independent. This means that $u(x(t))$ is, in general, a Lévy process. Without loss of realism we shall restrict our attention to processes with continuous paths. According to a theorem stated in (Harrison, 2013, p. 2) and proved in (Breiman, 1968, Chapter 12) this means that $u(x(t))$ must be a Brownian motion with drift,

(17)  $du = a_u\, dt + b_u\, dW$,

where $dW$ is the infinitesimal increment of the Wiener process.
By arguing backwards we can address concerns regarding the existence of $\bar{g}$. If $u(x(t))$ follows the dynamics specified by (Eq. 17), then it is straightforward to show that the limit always exists and takes the value $\bar{g} = a_u$. Consequently the decision criterion (Eq. 7) is equivalent to the optimisation of $a_u$, the time-average growth rate. The process $x(t)$ may be chosen such that (Eq. 17) does not apply for any choice of $u$. In this case we cannot interpret expected utility theory dynamically, and such processes are likely to be pathological.
This gives our central result:
For expected utility theory to be equivalent to optimisation over time, utility must follow an additive stochastic process with stationary increments which, in our framework, we shall take to be a Brownian motion with drift.
This is a fascinating general connection. If the physical reason why we observe nonlinear utility functions is the nonlinear effect of fluctuations over time, then a given utility function encodes a corresponding stochastic wealth process. Provided that a utility function is invertible, i.e. provided that its inverse, $x(u)$, exists, a simple application of Itô calculus to (Eq. 17) yields directly the SDE obeyed by the wealth, $x(t)$. Thus every invertible utility function encodes a unique dynamic in wealth which arises from a Brownian motion in utility. This is explored further below.
V Dynamic from a utility function
We now illustrate the relationship between utility functions and wealth dynamics. For the reasons discussed above we assume that utility follows a Brownian motion with drift.
If $u(x)$ can be inverted to $x(u)$, and $x(u)$ is twice differentiable, then it is possible to find the dynamic that corresponds to the utility function $u(x)$. Equation (17) is an Itô process. Itô's lemma tells us that $x(u)$ will be another Itô process, and Itô's formula specifies how to find it in terms of the relevant derivatives

(18)  $dx = \left( a_u \frac{dx}{du} + \frac{b_u^2}{2} \frac{d^2 x}{du^2} \right) dt + b_u \frac{dx}{du}\, dW$.
We have thus shown that
Theorem 1.
For any invertible utility function a class of corresponding wealth processes can be obtained such that the (linear) rate of change in the expectation value of net changes in utility is the time-average growth rate of wealth.

As a consequence, optimizing expected changes in such utility functions is equivalent to optimizing the time-average growth, in the sense of Section IV, under the corresponding wealth process.
The origin of optimizing expected utility can be understood as follows: in the 18th century, the distinction between ergodic and nonergodic processes was unknown, and all stochastic processes were treated by computing expectation values. Since the expectation value of the wealth process is an irrelevant mathematical object to an individual whose wealth is modelled by a nonergodic process, the available methods failed. The formalism was saved by introducing a nonlinear mapping of wealth, namely the utility function. The (failed) expectation value criterion was interpreted as theoretically optimal, and the nonlinear utility functions were interpreted as a psychologically motivated pattern of human behavior. Conceptually, this is wrong.
Optimization of timeaverage growth recognizes the nonergodicity of the situation and computes the appropriate object from the outset – a procedure whose building blocks were developed beginning in the late 19th century. It does not assume anything about human psychology and indeed predicts that the same behavior will be observed in any growthoptimizing entities that need not be human.
V.1 Examples
Equation (18) creates pairs of utility functions and dynamics. In discrete time, two such pairs were investigated in Peters and Gell-Mann (2016), namely the linear and logarithmic cases below.
V.1.1 Linear utility
The trivial linear utility function $u(x) = x$ corresponds to additive wealth dynamics (Brownian motion),

(19)  $dx = a_u\, dt + b_u\, dW$.
V.1.2 Logarithmic utility
Introduced by Bernoulli in 1738 Bernoulli (1738), the logarithmic utility function $u(x) = \ln x$ is in wide use and corresponds to multiplicative wealth dynamics (geometric Brownian motion),

(20)  $dx = x \left( \left( a_u + \frac{b_u^2}{2} \right) dt + b_u\, dW \right)$.
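As a numerical cross-check of the logarithmic pair: if utility follows the Brownian motion (Eq. 17), then wealth x = exp(u) is lognormal, and its expectation must grow at the rate a_u + b_u²/2 that appears in the drift of (Eq. 20). A sketch with assumed parameter values:

```python
import math

a_u, b_u, t = 0.02, 0.3, 1.0
x0 = 1.0

def expected_wealth(n=200_000, width=8.0):
    """Numerically compute <x(t)> = <x0 * exp(a_u*t + b_u*W)> by integrating
    against the Gaussian density of W ~ Normal(0, t) (midpoint rule)."""
    s = math.sqrt(t)
    lo, hi = -width * s, width * s
    dw = (hi - lo) / n
    total = 0.0
    for i in range(n):
        w = lo + (i + 0.5) * dw
        density = math.exp(-w * w / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)
        total += x0 * math.exp(a_u * t + b_u * w) * density * dw
    return total

# Eq. (20): wealth is a GBM whose expectation grows at rate a_u + b_u^2/2.
predicted = x0 * math.exp((a_u + b_u**2 / 2.0) * t)
assert abs(expected_wealth() - predicted) < 1e-4
```

The b_u²/2 correction in the drift is exactly the Itô term generated by the curvature of the exponential map from utility to wealth.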
In practice the most useful case will be multiplicative wealth dynamics. But to demonstrate the generality of the procedure, we carry it out for a different special case that is historically important.
V.1.3 Square-root (Cramer) utility
The first utility function ever to be suggested was the square-root function $u(x) = \sqrt{x}$, by Cramer in a 1728 letter to Daniel Bernoulli, partially reproduced in Bernoulli (1738). This function is invertible, namely $x(u) = u^2$, so that (Eq. 18) applies. We note that the square root, in a specific sense, sits between the linear function and the logarithm: $\lim_{x \to \infty} \sqrt{x}/x = 0$ and $\lim_{x \to \infty} \ln x / \sqrt{x} = 0$. Since linear utility produces additive dynamics and logarithmic utility produces multiplicative dynamics, we expect square-root utility to produce something in between or some mix. Substituting $u^2$ for $x$ in (Eq. 18) and carrying out the differentiations we find

(21)  $dx = \left( 2 a_u \sqrt{x} + b_u^2 \right) dt + 2 b_u \sqrt{x}\, dW$.
The drift term contains a multiplicative element (by which we mean an element with $x$-dependence) and an additive element. We see that the square-root utility function that lies between the logarithm and the linear function indeed represents a dynamic that is partly additive and partly multiplicative.
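The drift in (Eq. 21) can be cross-checked against the exact moments of x = u² when u is a Brownian motion with drift started from u₀ = √x₀. The parameter values below are illustrative:

```python
import math

a_u, b_u = 0.05, 0.2
x0 = 4.0              # starting wealth (assumed, for illustration)
u0 = math.sqrt(x0)    # corresponding starting utility

def mean_x(t):
    """E[x(t)] with x = u^2 and u(t) = u0 + a_u*t + b_u*W(t):
    E[(u0 + a_u*t + b_u*W)^2] = (u0 + a_u*t)^2 + b_u^2 * t."""
    return (u0 + a_u * t) ** 2 + b_u**2 * t

# Eq. (21) predicts the drift of x at x0 is 2*a_u*sqrt(x0) + b_u^2.
predicted_drift = 2 * a_u * math.sqrt(x0) + b_u**2

# Finite-difference estimate of dE[x]/dt at t = 0.
t = 1e-6
numerical_drift = (mean_x(t) - x0) / t
assert abs(numerical_drift - predicted_drift) < 1e-3
```

The additive part of the drift, b_u², is again an Itô term: it comes from the curvature of the map u ↦ u², not from the drift of u itself.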
(Eq. 21) is reminiscent of the Cox-Ingersoll-Ross model Cox et al. (1985) in financial mathematics, especially if $a_u < 0$. Similar dynamics, i.e. with a noise amplitude that is proportional to $\sqrt{x}$, are also studied in the context of absorbing-state phase transitions in statistical physics Marro and Dickman (1999); Hinrichsen (2000). That a 300-year-old letter is related to recent work in statistical mechanics is not surprising: the problems that motivated the development of decision theory, and indeed of probability theory itself, are far-from-equilibrium processes. Methods to study such processes were only developed in the 20th century and constitute much of the work currently carried out in statistical mechanics.
VI Utility function from a dynamic
We now ask under what circumstances the procedure in (Eq. 18) can be inverted. When can a utility function be found for a given dynamic? In other words, what conditions does the dynamic have to satisfy so that optimization over time can be represented by optimization of expected net changes in utility?
We ask whether a given dynamic can be mapped into a utility whose increments are described by Brownian motion, (Eq. 17).
The dynamic is an arbitrary Itô process

(22)  $dx = a(x)\, dt + b(x)\, dW$

where $a(x)$ and $b(x)$ are arbitrary functions of $x$. For this dynamic to translate into a Brownian motion for the utility, $u(x)$ must satisfy the equivalent of (Eq. 18) with the special requirement that the coefficients $a_u$ and $b_u$ in (Eq. 17) be constants, namely

(23)  $du = \left( a(x) \frac{du}{dx} + \frac{b(x)^2}{2} \frac{d^2 u}{dx^2} \right) dt + b(x) \frac{du}{dx}\, dW$.
Explicitly, we arrive at two equations for the coefficients

(24)  $a(x) \frac{du}{dx} + \frac{b(x)^2}{2} \frac{d^2 u}{dx^2} = a_u$

and

(25)  $b(x) \frac{du}{dx} = b_u$.
Differentiating (Eq. 25), it follows that

(26)  $\frac{d^2 u}{dx^2} = -\frac{b_u}{b(x)^2} \frac{db}{dx}$.
Substituting in (Eq. 24) for $\frac{du}{dx}$ and $\frac{d^2 u}{dx^2}$ and solving for $a(x)$ we find the drift term as a function of the noise term,

(27)  $a(x) = \frac{a_u}{b_u} b(x) + \frac{1}{2} b(x) \frac{db}{dx}$.
In other words, knowledge of only the dynamic is sufficient to determine whether a corresponding utility function exists. We do not need to construct the utility function explicitly to know whether a pair of drift term and noise term is consistent or not.
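The consistency condition (Eq. 27) can be checked mechanically for a candidate dynamic. The sketch below tests it for geometric Brownian motion, a(x) = μx and b(x) = σx, for which the matching utility is logarithmic; the values b_u = σ and a_u = μ − σ²/2 follow from (Eq. 24)-(Eq. 25) with u = ln x and are not stated explicitly in the text.

```python
def consistent(a, b, b_prime, a_u, b_u, xs, tol=1e-9):
    """Check Eq. (27): a(x) == (a_u/b_u)*b(x) + b(x)*b'(x)/2 at sample points."""
    return all(abs(a(x) - ((a_u / b_u) * b(x) + 0.5 * b(x) * b_prime(x))) < tol
               for x in xs)

mu, sigma = 0.05, 0.3   # illustrative GBM parameters
xs = [0.5, 1.0, 2.0, 10.0]

# GBM: a(x) = mu*x, b(x) = sigma*x, b'(x) = sigma.
# With u = ln x, Eq. (25) gives b_u = sigma and Eq. (24) gives a_u = mu - sigma^2/2.
assert consistent(lambda x: mu * x, lambda x: sigma * x, lambda x: sigma,
                  mu - sigma**2 / 2, sigma, xs)

# A mismatched drift (here mu*x^2) violates the condition: for this pair of
# drift and noise terms no consistent utility function exists.
assert not consistent(lambda x: mu * x**2, lambda x: sigma * x, lambda x: sigma,
                      mu - sigma**2 / 2, sigma, xs)
```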
Having determined for some dynamic that a consistent utility function exists, we can construct it by substituting (Eq. 26) in (Eq. 24). This yields a differential equation for $u(x)$

(28)  $\frac{du}{dx} = \frac{1}{a(x)} \left( a_u + \frac{b_u}{2} \frac{db}{dx} \right)$

or

(29)  $u(x) = \int^x \frac{a_u + \frac{b_u}{2} b'(y)}{a(y)}\, dy$.
Overall, then the triplet noise term, drift term, utility function is interdependent. Given a noise term we can find consistent drift terms, and given a drift term we find a consistency condition (differential equation) for the utility function.
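The consistency relations above can be checked symbolically. The following sketch (not part of the original text; it uses sympy) encodes $du/dx = b_u/b(x)$ from (Eq. 25) and the drift of (Eq. 27), and verifies that (Eq. 24) then holds identically for an arbitrary noise term $b(x)$:

```python
# Symbolic check: with u'(x) = b_u/b(x) from (25) and the drift a(x)
# from (27), the left-hand side of (24) reduces to a_u for any b(x).
import sympy as sp

x = sp.symbols('x')
a_u, b_u = sp.symbols('a_u b_u', positive=True)
b = sp.Function('b')(x)                                # arbitrary noise term

a = (a_u / b_u) * b + sp.Rational(1, 2) * b * sp.diff(b, x)   # Eq. (27)
u_prime = b_u / b                                             # Eq. (25)
u_dprime = sp.diff(u_prime, x)                                # Eq. (26)

# Left-hand side of Eq. (24): a u' + (b^2/2) u''
lhs = a * u_prime + b**2 / 2 * u_dprime
print(sp.simplify(lhs - a_u))   # 0, i.e. consistency holds for arbitrary b(x)
```

The check makes no assumption about the functional form of $b(x)$, which is the content of the claim that knowledge of the dynamic alone determines whether a utility function exists.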
VI.1 Example
Given a dynamic, it is possible to check whether this dynamic can be mapped into a utility function, and if so the utility function itself can be found. We consider the following example,
(30) $dx = \left(a_u e^{-x} - \frac{b_u^2}{2} e^{-2x}\right) dt + b_u e^{-x}\,dW$.
We note that $b(x) = b_u e^{-x}$ and $\frac{db}{dx} = -b_u e^{-x}$. Equation (27) imposes conditions on the drift term $a(x)$ in terms of the noise term $b(x)$. Substituting in (Eq. 27) reveals that the consistency condition is satisfied by the dynamic in (Eq. 30).
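The substitution can be carried out symbolically. The sketch below (not part of the original text) assumes, as in the example, a noise term $b(x) = b_u e^{-x}$, computes the consistent drift from (Eq. 27), and confirms that it matches the drift term of the example dynamic:

```python
# Symbolic check for the example: for b(x) = b_u exp(-x), the drift
# required by Eq. (27) reproduces the drift term of the dynamic (30).
import sympy as sp

x, a_u, b_u = sp.symbols('x a_u b_u', positive=True)
b = b_u * sp.exp(-x)                                          # assumed noise term
a = (a_u / b_u) * b + sp.Rational(1, 2) * b * sp.diff(b, x)   # Eq. (27)
target = a_u * sp.exp(-x) - b_u**2 / 2 * sp.exp(-2 * x)       # drift of (30)
print(sp.simplify(a - target))   # 0, i.e. the dynamic is internally consistent
```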
Because (Eq. 30) is internally consistent, it is possible to derive the corresponding utility function. Equation (25) is a first-order ordinary differential equation for $u(x)$,
(31) $\frac{du}{dx} = \frac{b_u}{b(x)}$,
which can be integrated to
(32) $u(x) = b_u \int^{x} \frac{dy}{b(y)} + C$,
with $C$ an arbitrary constant of integration. This constant corresponds to the fact that only changes in utility are meaningful, as was pointed out by von Neumann and Morgenstern von Neumann and Morgenstern (1944) – this robust feature is visible whether one thinks in dynamic terms and time averages or in terms of consistent measure-theoretic concepts and expectation values.
Substituting for $b(x)$ from (Eq. 30), (Eq. 31) becomes
(33) $\frac{du}{dx} = e^{x}$,
which is easily integrated to
(34) $u(x) = e^{x} + C$,
plotted in Fig. 2. This exponential utility function is monotonic and therefore invertible, which is reflected in the fact that the consistency condition is satisfied. The utility function is convex. From the perspective of expected-utility theory an individual behaving optimally according to this function would be labelled “risk-seeking.” The dynamical perspective corresponds to a qualitatively different interpretation: under the dynamic (Eq. 30) the “risk-seeking” individual behaves optimally, in the sense that his wealth will grow faster than that of a risk-averse individual. The dynamic (Eq. 30) has the feature that fluctuations in wealth become smaller as wealth grows. High wealth is therefore sticky – an individual will quickly fluctuate out of low wealth into higher wealth, and will then tend to stay there.
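This behavior can be checked numerically. The Monte Carlo sketch below (not part of the original text; the parameter values, the starting wealth, the time step, and the choice $C = 0$, i.e. $u = e^{x}$, are illustrative assumptions) integrates the example dynamic with the Euler–Maruyama scheme and confirms that the utility behaves like a Brownian motion with constant drift $a_u$ and variance $b_u^2 t$:

```python
# Euler-Maruyama simulation (a sketch under assumed parameters) of
# dx = (a_u e^{-x} - b_u^2/2 e^{-2x}) dt + b_u e^{-x} dW,
# checking that u = exp(x) has mean u(0) + a_u t and variance b_u^2 t.
import numpy as np

rng = np.random.default_rng(42)
a_u, b_u = 0.1, 0.1                  # assumed utility drift and noise amplitude
x0, dt, n_steps, n_paths = 1.0, 1e-3, 2000, 4000

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    drift = a_u * np.exp(-x) - 0.5 * b_u**2 * np.exp(-2.0 * x)  # drift term
    noise = b_u * np.exp(-x)                                     # noise term
    x += drift * dt + noise * dW

t = n_steps * dt                     # total simulated time, here t = 2
u = np.exp(x)                        # the exponential utility, with C = 0
print(u.mean() - np.exp(x0))         # should be close to a_u * t
print(u.var())                       # should be close to b_u**2 * t
```

The same simulation also illustrates the stickiness of high wealth: paths that drift upward in $x$ acquire ever smaller fluctuations.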
VII Wealth distribution from a dynamic
The dynamical interpretation of expected utility theory makes it particularly simple to compute wealth distributions. A utility function $u(x)$ implies a dynamic, and that dynamic generates a wealth distribution $\mathcal{P}(x)$. We know that $u$ follows a simple Brownian motion, and therefore we know that it is normally distributed according to
(35) $\mathcal{P}(u) = \frac{1}{\sqrt{2\pi b_u^2 t}} \exp\left(-\frac{(u - u_0 - a_u t)^2}{2 b_u^2 t}\right)$.
Since we know $u(x)$, the distribution of $x$ is easily derived by a change of variables. The wealth distribution in a large population is
(36) $\mathcal{P}(x) = \mathcal{P}(u(x)) \left|\frac{du}{dx}\right|$.
VII.1 Example of a wealth distribution
The utility function (Eq. 34) corresponds to the example dynamic (Eq. 30). The wealth distribution at any time $t$ can be read off (Eq. 36),
(37) $\mathcal{P}(x) = \frac{e^{x}}{\sqrt{2\pi b_u^2 t}} \exp\left(-\frac{(e^{x} - e^{x_0} - a_u t)^2}{2 b_u^2 t}\right)$,
which is shown in Fig. 3. The distribution is sensible given what we know about the dynamic – since fluctuations diminish with increasing wealth, many individuals will be found at high wealth (all those that have fluctuated away from low wealth), with a heavy tail towards lower wealth.
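As a sanity check on the change of variables, the sketch below (not part of the original text; parameter values are illustrative assumptions, and the example utility is taken as $u = e^{x}$) integrates the wealth density numerically and confirms that it carries essentially unit probability mass:

```python
# Numerical normalization check of the wealth density: a Gaussian in
# u = exp(x), multiplied by the Jacobian |du/dx| = exp(x), integrated
# over x with the trapezoidal rule. Parameters are assumed for illustration.
import numpy as np

a_u, b_u, t, x0 = 0.1, 0.2, 2.0, 0.0   # assumed drift, noise, time, start
u0 = np.exp(x0)

def wealth_density(x):
    """Gaussian density of u = exp(x), times the Jacobian exp(x)."""
    u = np.exp(x)
    norm = np.sqrt(2.0 * np.pi * b_u**2 * t)
    return u * np.exp(-(u - u0 - a_u * t)**2 / (2.0 * b_u**2 * t)) / norm

xs = np.linspace(-15.0, 3.0, 200_001)   # grid wide enough to capture the mass
p = wealth_density(xs)
mass = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(xs))   # trapezoidal rule
print(mass)   # close to 1 (minus the tiny Gaussian mass at u < 0)
```

The small deficit from 1 is the Gaussian probability assigned to $u < 0$, which has no pre-image under $u = e^{x}$; for the assumed parameters it is negligible.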
VIII Unboundedness of $u(x)$
The scheme outlined in Section VI is informative for the debate regarding the boundedness of utility functions. A well-established but false belief in the economics literature, due to Karl Menger Menger (1934); Peters (2011c), is that permissible utility functions must be bounded. We have argued previously that boundedness is an unnecessary restriction and that Menger’s arguments are not valid Peters and Gell-Mann (2016); Peters (2011a). Section VI implies that the interpretation of expected utility theory we offer here formally requires unboundedness of utility functions: a bounded utility function cannot be inverted for all the values the Brownian motion $u(t)$ will reach. Menger’s incorrect result therefore contributed to obscuring the simple natural arguments we present here.
Of course whether $u(x)$ is bounded or not is practically irrelevant because wealth $x$ will always be finite. However, for a clean mathematical formalism an unbounded $u(x)$ is highly desirable.
The problem is easily demonstrated by considering the case of zero noise. Since $u$ always follows a Brownian motion in our treatment, in the zero-noise case it follows
(38) $du = a_u\,dt$,
meaning linear growth of $u$ in time. For $u$ to be bounded, time itself would have to be bounded. Another way to see the problem is inverting $u(x)$ to find $x(u)$. If we require simultaneously linear growth of $u$ in time and boundedness from above, $u(x) < u_{\max}$, then $x$ has to diverge in the finite time it takes for $u$ to reach $u_{\max}$, namely in $t^* = \frac{u_{\max} - u(0)}{a_u}$ (assuming for simplicity $a_u > 0$).
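The finite-time singularity is easy to exhibit numerically. The sketch below (not part of the original text) uses a hypothetical bounded utility $u(x) = u_{\max}(1 - e^{-x})$, chosen purely for illustration, lets $u$ grow linearly as in the zero-noise case, and inverts to find wealth:

```python
# Illustration with an assumed bounded utility u(x) = u_max (1 - exp(-x)):
# under zero noise, u(t) = u0 + a_u t grows linearly, so x(t) = u^{-1}(u(t))
# must diverge at the finite time t* = (u_max - u0) / a_u.
import numpy as np

a_u, u_max, x0 = 0.1, 1.0, 0.0            # assumed parameters
u0 = u_max * (1.0 - np.exp(-x0))          # u0 = 0 for x0 = 0
t_star = (u_max - u0) / a_u               # finite-time singularity, here t* = 10

def x_of_t(t):
    """Invert the bounded utility at u(t) = u0 + a_u * t (zero-noise case)."""
    u = u0 + a_u * t
    return -np.log(1.0 - u / u_max)       # diverges as u -> u_max

for t in (9.0, 9.9, 9.99):
    print(t, x_of_t(t))                   # wealth explodes as t approaches t*
```

Any other bounded, monotonic choice of $u(x)$ produces the same qualitative behavior: the divergence of $x$ at $t^*$ is forced by the boundedness, not by the particular functional form.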
Such features – an end of time or a finite-time singularity of wealth – are inconvenient to carry around in a formalism. Since they have no physical meaning, for simplicity a model without them should be chosen, i.e. one with unbounded utility functions. We repeat that Menger’s arguments against unbounded utility functions are invalid, and we need not worry about them.
IX Discussion
Expected utility theory is an 18th-century patch, applied to a flawed conceptual framework, established in the 17th century, that made blatantly wrong predictions of human behavior. Because the mathematics of randomness was in its infancy in the 18th century, the conceptual problems were overlooked, and utility theory set economics off in the wrong direction. Without any of the arbitrariness inherent in utility functions, it is nowadays possible to give a physical meaning to the nonlinear mappings people seem to apply to monetary amounts. These apparent mappings simply encode the nonlinearity of wealth dynamics.
References
 Peters and Gell-Mann (2016) O. Peters and M. Gell-Mann, Chaos 26, 23103 (2016), URL http://dx.doi.org/10.1063/1.4940236.
 Peters and Adamou (2015a) O. Peters and A. Adamou, arXiv:1506.03414 (2015a), URL http://arxiv.org/abs/1506.03414.
 Peters (2011a) O. Peters, Quant. Fin. 11, 1593 (2011a), URL http://dx.doi.org/10.1080/14697688.2010.513338.
 Peters (2011b) O. Peters, Phil. Trans. R. Soc. A 369, 4913 (2011b), URL http://dx.doi.org/10.1098/rsta.2011.0065.
 Peters and Adamou (2011) O. Peters and A. Adamou, arXiv preprint arXiv:1101.4548 (2011), URL http://arXiv.org/abs/1101.4548.
 Peters and Adamou (2015b) O. Peters and A. Adamou, arXiv:1507.04655 (2015b), URL http://arxiv.org/abs/1507.04655.
 Berman et al. (2017) Y. Berman, O. Peters, and A. Adamou (2017), URL https://ssrn.com/abstract=2794830.
 Kloeden and Platen (1992) P. E. Kloeden and E. Platen, Numerical solution of stochastic differential equations, vol. 23 (Springer Science & Business Media, 1992).
 Harrison (2013) J. M. Harrison, Brownian models of performance and control (Cambridge University Press, 2013).
 Breiman (1968) L. Breiman, Probability (Addison-Wesley Publishing Company, 1968).
 Bernoulli (1738) D. Bernoulli, Econometrica 22, 23 (1738), URL http://www.jstor.org/stable/1909829.
 Cox et al. (1985) J. C. Cox, J. E. Ingersoll, and S. A. Ross, Econometrica 53, 363 (1985), ISSN 00129682, 14680262, URL http://www.jstor.org/stable/1911241.
 Marro and Dickman (1999) J. Marro and R. Dickman, Nonequilibrium Phase Transitions in Lattice Models. (Cambridge University Press, 1999).
 Hinrichsen (2000) H. Hinrichsen, Adv. Phys. 49, 815 (2000).
 von Neumann and Morgenstern (1944) J. von Neumann and O. Morgenstern, Theory of games and economic behavior (Princeton University Press, 1944).
 Menger (1934) K. Menger, J. Econ. 5, 459 (1934), URL http://dx.doi.org/10.1007/BF01311578.
 Peters (2011c) O. Peters, http://arxiv.org/abs/1110.1578 (2011c), URL http://arxiv.org/abs/1110.1578.