A New Information Theoretical Concept:
Information-Weighted Heavy-tailed Distributions
Given an arbitrary continuous probability density function, we introduce a conjugate probability density, defined through the Shannon information associated with its cumulative distribution function. These new densities are computed for a number of standard distributions, including the uniform, normal, exponential, Pareto, logistic, Kumaraswamy, Rayleigh, Cauchy, Weibull, and Maxwell-Boltzmann. The case of a joint information-weighted probability distribution is assessed, and an additive property is derived for independent variables. Both one-sided and two-sided information-weighting are considered. The asymptotic behavior of the tails of the new distributions is examined, and it is proved that all probability densities proposed here define heavy-tailed distributions. It is also shown that weighting a regularly varying distribution with a given extreme-value index yields a regularly varying distribution with the same index. This approach can be particularly valuable in applications where the tails of the distribution play a major role.
Key words and phrases: information theory, information-weighted probability distribution, conjugated probability density function, heavy-tailed distributions.
2010 Mathematics Subject Classification: 60E05, 62B10, 62E15, 94A15.
Information theory is a subject of relevance in many areas, particularly in Statistics (MacKay, Cover-Thomas). Given an arbitrary random variable $X$ with a continuous probability density function (pdf) $f_X(x)$, we can compute the (Shannon) information amount associated with the event $\{X \le x\}$, for each $x \in \mathbb{R}$. This is given by $I(x) := -\ln F_X(x)$, where $F_X$ denotes the cumulative distribution function of $X$.
Definition 1.1 (cumulative information pdf). The information-weighted density associated with $f_X$ is defined by:
$$f_I(x) := -f_X(x)\,\ln F_X(x).$$
Let us define an operator $\mathcal{I}[\cdot]$, which maps a probability density into another function according to Def. 1.1, i.e. $\mathcal{I}[f_X] = f_I$. The pair $(f_X, f_I)$ can be interpreted as a conjugate probability density pair: the new density is the former density, weighted by the information provided by its cumulative distribution.
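As a minimal numerical sketch (ours, not the paper's; function names are our own), the operator of Def. 1.1 can be implemented directly from a pdf/CDF pair. Here we check, for an Exponential(1) base, that the weighted density still integrates to one:

```python
import math

def information_weight(pdf, cdf):
    """Sketch of the operator in Def. 1.1: f_I(x) = -f(x) * ln F(x)."""
    def f_I(x):
        F = cdf(x)
        if F <= 0.0:            # at/left of the support: density is zero
            return 0.0
        return -pdf(x) * math.log(F)
    return f_I

# Base distribution: Exponential(1), f(x) = e^{-x}, F(x) = 1 - e^{-x}, x >= 0.
f = lambda x: math.exp(-x) if x >= 0.0 else 0.0
F = lambda x: 1.0 - math.exp(-x) if x >= 0.0 else 0.0
f_I = information_weight(f, F)

# Midpoint-rule check that f_I integrates to 1 (the logarithmic
# singularity of f_I at the origin is integrable).
n, dx = 1_000_000, 20.0 / 1_000_000
total = sum(f_I((k + 0.5) * dx) for k in range(n)) * dx
print(round(total, 3))
```

The check mirrors the substitution $u = F_X(x)$ used later in the normalization proof.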
In the framework of distribution generalization theory, a mapping that takes one distribution into another allows the construction of several new distributions (e.g. leao), which is particularly attractive because the shape of the new distribution is quite flexible.
For instance, the beta generalized normal distribution (cintra2014) encompasses the beta normal,
beta Laplace, normal, and Laplace distributions as sub-models. This article has a somewhat similar scope, providing the generation of new probability distributions. Noteworthy here, however, is the construction of heavy-tailed distributions, even from distributions that do not possess this attribute.
The information-conjugated distribution is denoted by inserting an $I$ before the name of the standard distribution, e.g. $I$-normal for a normal distribution. (Remark: the terms information-conjugated and information-weighted are used interchangeably throughout the paper.) The first property of a conjugated pdf concerns its support:
The support of $f_I$ is contained in the support of $f_X$, i.e. $\operatorname{supp}(f_I) \subseteq \operatorname{supp}(f_X)$.
This expression recalls Shannon's original definition of the differential entropy of a continuous distribution (see Michalowicz), which is defined by
$$h(X) := -\int_{-\infty}^{+\infty} f_X(x)\,\ln f_X(x)\, dx.$$
One of the troubling questions in this setting is the possibility of negative values for $h(X)$.
This is due to the fact that $f_X(x)$ is not upper bounded by unity. Replacing $f_X(x)$ by $F_X(x)$ in the argument of the logarithm was our initial motivation, as an attempt to address this issue, bearing in mind that $0 \le F_X(x) \le 1$.
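A quick numeric illustration (our own, not from the paper) of why the differential entropy can be negative: for a uniform density on $(0, 1/2)$, the density value $f(x) = 2$ exceeds one everywhere on the support, so $-f \ln f < 0$ and the entropy is negative.

```python
import math

# Differential entropy of Uniform(0, a) is h = ln(a); for a = 1/2 the
# density f(x) = 1/a = 2 exceeds unity on its support, so h < 0.
a = 0.5
f = 1.0 / a
n = 1_000
h = -sum(f * math.log(f) * (a / n) for _ in range(n))  # Riemann sum of -f ln f over (0, a)
print(round(h, 4), round(math.log(a), 4))  # the two values agree
```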
However, rather than redefining entropy, this substitution always resulted in a unitary integral, leading to the proposal laid down in this paper. (The differential entropy also has an interesting link with wavelet analysis (deO).) We show in the sequel that the integral in Eq. (1.2) is always unity, whatever the original probability density. Thus, the operator $\mathcal{I}$ maps probability densities to probability densities, preserving the unit area under the curve.
The function $f_I$ introduced in Def. 1.1 is a valid probability density.
In order to prove that this is a normalized nonnegative function, we shall prove that: (i) $f_I(x) \ge 0$ for every $x$, and (ii) $\int_{-\infty}^{+\infty} f_I(x)\, dx = 1$.
We remark first that $0 \le F_X(x) \le 1$, so that $-\ln F_X(x) \ge 0$ and (i) follows. Then we take
$$\int_{-\infty}^{+\infty} -f_X(x)\,\ln F_X(x)\, dx,$$
which can be rewritten in terms of a Stieltjes integral (Protter):
$$\int_{-\infty}^{+\infty} -\ln F_X(x)\, dF_X(x).$$
Note that $F_X(x)$ is the cumulative probability distribution (CDF) of $X$. Substituting $u = F_X(x)$ and using integration by parts, we derive:
$$\int_0^1 -\ln u\, du = \left[u - u\ln u\right]_0^1 = 1.$$
It is also straightforward to derive (by simple integration) that the CDF associated with the pdf of Def. 1.1 is:
$$F_I(x) = F_X(x)\left[1 - \ln F_X(x)\right].$$
As expected, $\lim_{x \to -\infty} F_I(x) = 0$ (since $\lim_{u \to 0^+} u\ln u = 0$) and $\lim_{x \to +\infty} F_I(x) = 1$.
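The closed form $F_I(x) = F_X(x)[1 - \ln F_X(x)]$ can be sanity-checked numerically. The sketch below (our own, for an Exponential(1) base, where $F_I(x) = (1 - e^{-x})\left[1 - \ln(1 - e^{-x})\right]$) compares it against a direct quadrature of $f_I$:

```python
import math

F  = lambda x: 1.0 - math.exp(-x)                 # base CDF, Exponential(1)
fI = lambda x: -math.exp(-x) * math.log(F(x))     # conjugate density f_I
FI = lambda x: F(x) * (1.0 - math.log(F(x)))      # claimed conjugate CDF F_I

x0 = 1.5
closed = FI(x0)

# Midpoint quadrature of f_I from 0 to x0 (the log singularity at 0 is integrable).
n = 200_000
h = x0 / n
numeric = sum(fI((k + 0.5) * h) for k in range(n)) * h

print(round(closed, 4), round(numeric, 4))
```

The two printed values should coincide to the displayed precision, consistent with $F_I' = f_I$.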
How can we model probabilistic events described by long-tailed distributions? There are relatively few distributions used in this setting (e.g. Cauchy, log-normal, Weibull, Burr, ...), among which the Pareto distribution stands out. A pleasant review of different classes of distributions with heavy tails can be found in (Werner). We are concerned particularly with two classes:
class D: subexponential distributions,
class C: regularly varying distributions with tail index $\alpha$.
We show in the sequel that this paper offers a profusion of new options, primarily within the class of subexponential distributions (Goldie).
2. Conjugated Information-Weighted Density Associated with Known Distributions
We now compute the conjugated information density associated with the standard distributions listed in Table 2 (see Walpole).
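Before turning to the table, a warm-up case worked by hand (ours; the paper's parametrizations may differ): for the standard uniform distribution, $F_X(x) = x$ on $(0,1)$, so Def. 1.1 gives $f_I(x) = -\ln x$ and, by the closed form above, $F_I(x) = x(1 - \ln x)$. A short numeric check:

```python
import math

# Conjugate of Uniform(0,1): f_I(x) = -ln x on (0, 1], F_I(x) = x(1 - ln x).
f_I = lambda x: -math.log(x)
F_I = lambda x: x * (1.0 - math.log(x))

def cdf_numeric(x, n=100_000):
    """Midpoint quadrature of f_I over (0, x)."""
    h = x / n
    return sum(f_I((k + 0.5) * h) for k in range(n)) * h

for x in (0.25, 0.5, 1.0):
    print(round(F_I(x), 4), round(cdf_numeric(x), 4))
```

At $x = 1$ the closed form gives $F_I(1) = 1$, confirming normalization for this case.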