
An analysis of the Rüschendorf transform - with a view towards Sklar’s Theorem

Frank Oertel (Deloitte & Touche GmbH, FSI Assurance, Quantitative Services & Valuation, D-81669 Munich; E-mail: f.oertel@email.de)

Abstract: In many applications, including financial risk measurement, copulas have proven to be a powerful building block for reflecting multivariate dependence between several random variables, including the mapping of tail dependencies.

A famous key result in this field is Sklar’s Theorem. Meanwhile, there exist several approaches to prove Sklar’s Theorem in its full generality. An elegant probabilistic proof was provided by L. Rüschendorf. To this end he implemented a certain “distributional transform” which naturally transforms an arbitrary distribution function F into a flexible parameter-dependent function which exhibits exactly the same jump sizes as F.

By using some real analysis and measure theory only (without involving the use of a given probability measure), we delve into the underlying rich structure of the distributional transform. Based on results derived from this analysis (such as Proposition 2.5 and Theorem 2.12), including a strong and frequent use of the right quantile function, we revisit Rüschendorf’s proof of Sklar’s Theorem and provide some supplementary observations, including a further characterisation of distribution functions (Remark 2.3) and a strict mathematical description of their “flat pieces” (Corollary 2.8 and Remark 2.9).
Keywords: Copulas, distributional transform, generalised inverse functions, Sklar’s Theorem.
MSC: 26A27, 60E05, 60A99, 62H05.

## 1 Introduction

The mathematical investigation of copulas started in 1951, due to the following problem of M. Fréchet: suppose one is given random variables X1, …, Xn, all defined on the same probability space (Ω, F, P), such that each random variable Xi has a (not necessarily continuous) distribution function Fi. What can then be said about the set of all possible n-dimensional distribution functions of the random vector (X1, …, Xn) (cf. [7])? This question has an immediate answer if the random variables are assumed to be independent, since in this case there exists a unique n-dimensional distribution function of the random vector (X1, …, Xn), which is given by the product (x1, …, xn) ↦ F1(x1)⋯Fn(xn). However, if the random variables are not independent, there was no clear answer to M. Fréchet’s problem.

In [15], A. Sklar introduced the expression “copula” (referring to a grammatical term for a word that links a subject and predicate), and provided answers to some of the questions of M. Fréchet.

In the following couple of decades, copulas (which are precisely finite-dimensional distribution functions with uniformly distributed marginals) were mainly used in the framework of probabilistic metric spaces (cf. e. g. [13, 14]). Later, probabilists and statisticians became interested in copulas, since copulas define in a “natural way” nonparametric measures of dependence between random variables, allowing one to include a mapping of tail dependencies. Since then, they began to play an important role in several areas of probability and statistics (including Markov processes and non-parametric statistics), in financial and actuarial mathematics (particularly with respect to the measurement of credit risk), and even in medicine and engineering.

One of the key results in the theory and applications of copulas is Sklar’s Theorem (which actually was proven in [13] and not in [15]). It says:

###### Sklar’s Theorem.

Let F be an n-dimensional distribution function with marginals F1, …, Fn. Then there exists a copula CF, such that for all (x1, …, xn) ∈ Rn we have

 F(x1,…,xn)=CF(F1(x1),…,Fn(xn)).

Furthermore, if each Fi is continuous, the copula CF is unique. Conversely, for any univariate distribution functions F1, …, Fn and any copula C, the composition (x1, …, xn) ↦ C(F1(x1), …, Fn(xn)) defines an n-dimensional distribution function with marginals F1, …, Fn.

Since the original proof of (the general, non-continuous case of) Sklar’s Theorem is rather complicated and technical, there have been several attempts to provide different and more lucid proofs, involving not only techniques from probability theory and statistics but also from topology and functional analysis (cf. [4]).

Among those different proofs of Sklar’s Theorem, there is an elegant, yet rather short proof, provided by L. Rüschendorf, originally published in [12]. He provided a very intuitive and primarily probabilistic approach which allows one to treat general distribution functions (including discrete parts and jumps) in a similar way as continuous distribution functions. To this end, he applied a generalised “distributional transform” which - according to [12] - has been used in statistics for a long time in relation to the construction of randomised tests. By making consistent use of the properties of this generalised “distributional transform” together with Proposition 2.1 in [12], the proof of Sklar’s Theorem in fact follows immediately (cf. Theorem 2.2 in [12]). Independently of [12], the same idea was used in the (again rather short) proof of Lemma 3.2 in [11]. All key inputs for the proof of Sklar’s Theorem are clearly provided by Proposition 2.1 in [12]. However, the proof of the latter result is rather difficult to reconstruct. It says:

###### [12] - Proposition 2.1.

Let X, V be two random variables, defined on the same probability space (Ω, F, P), such that V ∼ U(0,1) and V is independent of X. Let F be the distribution function of the random variable X. Then FV(X) ∼ U(0,1), and X = F∧(FV(X)) P-almost surely.

Here, F∧(α) denotes the (left-continuous) left α-quantile of F, which in particular is the lowest generalised inverse of F (cf. e. g. [14, Chapter 4.4], respectively [8, Definition 2]). In our paper we consistently adopt the very suitable symbolic notation of [14], respectively [8], to identify generalised inverse functions in general (cf. (2.2) and (2.3)).

While carefully studying (and reconstructing) the proof of Sklar’s Theorem built on Proposition 2.1 in [12], we recognised that it actually implements key mathematical objects which do not involve probability theory at all and play an important role beyond statistical applications.

The main contribution of our paper is to provide a thorough analysis of these mathematical building blocks by studying carefully the properties of a real-valued (deterministic) function used in the proof of Proposition 2.1 in [12]: the so-called “Rüschendorf transform”. We reveal some interesting structural properties of this function which, to the best of our knowledge, have not been published before, such as e. g. Theorem 2.12, which actually is a result on Lebesgue-Stieltjes measures, strongly built on the role of the right quantile function, which seems not to be widely used in the literature (as opposed to the left quantile function).

Equipped with Theorem 2.12, we then revisit the proof of Proposition 2.1 in [12] (cf. also [10, Chapter 1.1.2]). However, in our approach Proposition 2.1 in [12] is an implication of Theorem 2.12 and Lemma 2.15. For the sake of completeness we include a proof of Sklar’s Theorem again (cf. also [10, Chapter 1.1.2]) - yet as an implication of Theorem 2.12, finally leading to Remark 2.21.

Last but not least, by observing the significance of the jumps of the lowest generalised inverse, the proof of Theorem 2.12 indicates how to construct the P-null set in Proposition 2.1 in [12] explicitly - leading to Theorem 2.18.

## 2 The Rüschendorf Transform

For the moment let us completely ignore randomness and probability theory. We are “only” working within a subclass of real-valued functions, all defined on the real line, and with suitable subsets of the real line.

Let F : R → R be an arbitrary right-continuous and non-decreasing function. Let x ∈ R. Since F is non-decreasing, it is well-known that both the left-hand limit

 F(x−):=limz↑xF(z)=sup{F(z):z<x},

and the right-hand limit

 F(x+):=limz↓xF(z)=inf{F(z):z>x}

are well-defined real numbers, satisfying F(x−) ≤ F(x) ≤ F(x+). Moreover, due to the assumed right-continuity of F, it follows that F(x+) = F(x) for all x ∈ R. ΔF(x) := F(x) − F(x−) denotes the (left-hand) “jump” of F at x. We consider the following important transform of F:

###### Definition 2.1.

Let x ∈ R and λ ∈ [0,1]. Put

 RF(x,λ):=Fλ(x)=F(x−)+λΔF(x).

We call the real-valued function RF : R × [0,1] → R the Rüschendorf transform of F. For given λ ∈ [0,1], the function Fλ is called the Rüschendorf λ-transform of F.

Clearly, we have the following equivalent representation of the Rüschendorf λ-transform Fλ:

 Fλ(x)=(1−λ)F(x−)+λF(x) for all x∈R.

In particular, for all λ ∈ [0,1] and all x ∈ R the following inequality holds:

 F(x−)≤Fλ(x)≤F(x). (2.1)

Moreover, F is continuous if and only if Fλ = F for all λ ∈ [0,1], and for all x ∈ R we have F0(x) = F(x−) and F1(x) = F(x).
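To make the definition concrete, here is a minimal numerical sketch of the Rüschendorf transform for a purely discrete distribution function (the atoms, weights, and all function names are our own illustrative choices, not taken from the paper):

```python
# A purely discrete distribution function F with three atoms.
atoms = [0.0, 1.0, 2.5]   # jump locations of F (sorted)
probs = [0.2, 0.5, 0.3]   # jump sizes Delta_F at the atoms; they sum to 1

def F(x):
    """Right-continuous distribution function: F(x) = sum of weights at atoms <= x."""
    return sum(p for a, p in zip(atoms, probs) if a <= x)

def F_left(x):
    """Left-hand limit F(x-): sum of weights at atoms strictly below x."""
    return sum(p for a, p in zip(atoms, probs) if a < x)

def R(x, lam):
    """Rüschendorf transform R_F(x, lambda) = F(x-) + lambda * Delta_F(x)."""
    return F_left(x) + lam * (F(x) - F_left(x))

# Sanity checks: the equivalent representation and the sandwich inequality (2.1).
for x in [-1.0, 0.0, 0.5, 1.0, 2.5, 3.0]:
    for lam in [0.0, 0.25, 0.5, 1.0]:
        val = R(x, lam)
        assert abs(val - ((1 - lam) * F_left(x) + lam * F(x))) < 1e-12
        assert F_left(x) - 1e-12 <= val <= F(x) + 1e-12
```

At an atom the transform interpolates linearly across the jump (e.g. R(0.0, 0.5) = 0.1, halfway between F(0−) = 0 and F(0) = 0.2); away from the jumps it simply reproduces F.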

###### Assumption 2.2.

In the following we assume throughout that F is bounded on R (i. e., the range F(R) is a bounded subset of R), implying that a ≤ F(x) ≤ b for all x ∈ R and some real numbers a ≤ b. Moreover, let us assume that for any α ∈ (0,1) the set {x ∈ R : F(x) ≥ α} is non-empty and bounded from below. In particular, F cannot be a constant function on the whole real line. WLOG, we may assume from now on that inf F(R) = 0 and sup F(R) = 1 (else we would have to work with the function (F − a)/(b − a), where a := inf F(R) and b := sup F(R)).

Although its proof (by contradiction) is mostly an easy calculus exercise with sequences, the following observation - which does not require a right-continuity assumption - should be explicitly noted (cf. also [5, 6, 13]):

###### Remark 2.3.

Let F : R → R be an arbitrary non-decreasing function. Then the following statements are equivalent:

• limx→−∞ F(x) ≤ 0 and limx→+∞ F(x) ≥ 1;

• For any α ∈ (0,1) the sets {x ∈ R : F(x) < α} and {x ∈ R : F(x) ≥ α} both are non-empty;

• For any α ∈ (0,1) the set {x ∈ R : F(x) ≥ α} is non-empty and bounded from below;

• F∧(α) is a well-defined real number for any α ∈ (0,1).

Hence, given Assumption 2.2, the assumed right-continuity of F and Remark 2.3 imply that F (possibly after shifting and stretching adequately) actually is a distribution function! Therefore, its generalised inverse function F∧, given by

 F∧(α):=inf{x∈R:F(x)≥α}, (2.2)

is well-defined and satisfies

 −∞<F∧(α)≤inf{x∈R:F(x)>α}=sup{x∈R:F(x)≤α}=:F∨(α)<∞ (2.3)

for any α ∈ (0,1) (cf. e. g. [9]). Actually, since F is assumed to be right-continuous, it follows that

 F∧(α)=min{x∈R:F(x)≥α}

for all α ∈ (0,1) (cf. [5, Proposition 2.3 (4)]). Moreover, the following important inequality is satisfied:

 F(F∧(α)−δ)<α≤F(F∧(α)+ε) (2.4)

for all α ∈ (0,1), for all δ > 0, and for all ε ≥ 0. Hence,

 F(F∧(α)−)≤α≤F(F∧(α)+)=F(F∧(α)) (2.5)

for all α ∈ (0,1). Also recall from e. g. [14] that F∧(α) ≤ x if and only if α ≤ F(x), respectively x < F∧(α) if and only if F(x) < α, for any x ∈ R and α ∈ (0,1).
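These generalised inverses are easy to compute for a purely discrete distribution function; the following sketch (our own toy example with a three-atom F, not from the paper) checks the minimum-attainment property of F∧ and the inequality (2.5):

```python
atoms = [0.0, 1.0, 2.5]   # jump locations of a purely discrete F
probs = [0.2, 0.5, 0.3]   # jump sizes; they sum to 1

def F(x):
    return sum(p for a, p in zip(atoms, probs) if a <= x)

def F_left(x):
    return sum(p for a, p in zip(atoms, probs) if a < x)

def F_wedge(alpha):
    """Left quantile F^(alpha) = min{x : F(x) >= alpha}; for a purely discrete F
    the minimum is attained at an atom (by right-continuity)."""
    return min(a for a in atoms if F(a) >= alpha)

def F_vee(alpha):
    """Right quantile F^v(alpha) = inf{x : F(x) > alpha} = sup{x : F(x) <= alpha};
    for this discrete F the infimum is again attained at an atom."""
    return min(a for a in atoms if F(a) > alpha)

for alpha in [0.1, 0.2, 0.45, 0.7, 0.95]:
    xi = F_wedge(alpha)
    # Inequality (2.5): F(xi-) <= alpha <= F(xi) = F(xi+).
    assert F_left(xi) <= alpha <= F(xi)
    # The left quantile never exceeds the right quantile.
    assert xi <= F_vee(alpha)
```

For α = 0.2 and α = 0.7 the two quantiles differ (e.g. F_wedge(0.2) = 0.0 but F_vee(0.2) = 1.0), reflecting the “flat pieces” of F at exactly those levels; at the other levels above they coincide.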

Let us fix the distribution function F. Then by JF := {x ∈ R : ΔF(x) > 0} we denote the set of all jumps of F, which is well-known to be at most countable.

Throughout the remaining part of our paper, we follow the notation of [12] and put ξ := F∧(α) for fixed α ∈ (0,1). By taking a closer look at F∧, we firstly note the following observation.

###### Remark 2.4.

Let x ∈ R and λ ∈ [0,1]. Then

 F∧(Fλ(x))≤x.
###### Proof.

Fix x ∈ R and λ ∈ [0,1], and put α := Fλ(x). Then F∧(α) is well-defined. Since α ≤ F(x), the claim follows (recall that F∧(α) ≤ x if and only if α ≤ F(x)). ∎

The next result shows an important part of the role of the Rüschendorf transform, which can be more easily understood if one sketches the graph of F including its jumps. Since JF is at most countable, it follows that JF = {xi : i ∈ I}, where either I = {1, …, m} for some m ∈ N or I = N. By making use of this representation and the canonically defined function F− : R → R, x ↦ F(x−) (cf. also [14, Chapter 4.4]), we arrive at the following

###### Proposition 2.5.
• Let x ∈ R. Then

 (F(x−),F(x))⊆{α∈(0,1):x=F∧(α)}⊆[F(x−),F(x)].

In particular, if ΔF(x) > 0, then x = F∧(α) for all α ∈ (F(x−), F(x)). Moreover,

 ⋃x∈JF(F(x−),F(x)) = {α∈(0,1):ΔF(F∧(α))>0 and α=Fλ(F∧(α)) for some 0<λ<1} = {Fλ(x):0<λ<1 and x∈JF} = RF(JF×(0,1)) = (0,1)∖(F(R)∪F−(R)),

implying that the mapping RF|JF×(0,1) : JF × (0,1) → (0,1) ∖ (F(R) ∪ F−(R)) is well-defined and bijective. Its inverse is given by α ↦ (F∧(α), λ(α)), where λ(α) := (α − F(F∧(α)−))/ΔF(F∧(α)).

###### Proof.

To prove the first set inclusion, we may assume without loss of generality that F is not continuous at x. So, let α ∈ (F(x−), F(x)). Then F(z) < α for all z < x (else we would obtain the contradiction α ≤ F(x−)), and α ≤ F(x). Hence, x = F∧(α) for all α ∈ (F(x−), F(x)) (cf. [5, Proposition 2.3 (5)]), implying the first inclusion. Now let α ∈ (0,1) such that x = F∧(α). Due to (2.5) it follows that

 F(x−)≤α≤F(x),

which gives the second set inclusion.

To verify the representation of the disjoint union, let α ∈ (F(ξ−), F(ξ)) for some ξ ∈ JF. Then ξ = F∧(α), and hence ΔF(F∧(α)) = ΔF(ξ) > 0 and F(ξ−) < α < F(ξ). Put

 λ(α):=(α−F(ξ−))/ΔF(ξ).

Then λ(α) ∈ (0,1) and

 α=Fλ(α)(ξ)=Fλ(α)(x).

Furthermore, a straightforward application of the inequality (2.4) (together with (2.5) and the monotonicity assumption on F) shows the graphically clear fact that there is no x ∈ JF such that the interval (F(x−), F(x)) contains elements of the form F(y), respectively F(y−), for some y ∈ R. Now, given the construction of λ(α) above and the listed properties of any of the sets (F(x−), F(x)), the assertion about the mapping RF|JF×(0,1) follows immediately. ∎

###### Definition 2.6.

Let λ ∈ [0,1] and α ∈ (0,1). Put:

 Aλ,α:={x∈R:Fλ(x)≤α}.

Firstly note that Aλ,α is non-empty. To see this, consider any z < F∧(α). Then F(z) < α, and hence Fλ(z) ≤ F(z) < α, so that z ∈ Aλ,α. To motivate the following representation of the set Aλ,α, let us assume for the moment that F is continuous at ξ := F∧(α). Due to (2.5), it follows that F(ξ) = α. Hence, in this case, Fλ(ξ) = F(ξ) = α, implying that ξ ∈ Aλ,α.

However, in the general (non-continuous) case, ξ need not be an element of the set Aλ,α. Therefore (fixing λ ∈ [0,1] and α ∈ (0,1)), we are going to represent the set Aλ,α as a disjoint union of the following three subsets of the real line:

 A+λ,α:=Aλ,α∩{x∈R:x>ξ},
 A∼λ,α:=Aλ,α∩{x∈R:x=ξ},

and

 A−λ,α:=Aλ,α∩{x∈R:x<ξ}.

Thus,

 Aλ,α=A+λ,α ∪· A∼λ,α ∪· A−λ,α.

Next, we are going to simplify the sets A+λ,α and A∼λ,α as far as possible. To this end, we have to analyse carefully the jump ΔF(ξ), implying that we have to check ξ against the (finite) value of the largest generalised inverse of F (cf. [9] and [14, Chapter 4.4])

 η:=F∨(α)=inf{x∈R:F(x)>α}=sup{x∈R:F(x)≤α}=F∧(α+).

The inequality (2.5) is also satisfied for η (cf. [6, Lemma A. 15]):

 F(η−)≤α≤F(η). (2.6)

Note that since F is a distribution function, F∨(α) (respectively F∧(α)) is precisely the right (respectively left) α-quantile of F.

Clearly, {x ∈ R : x > ξ and F(x) = α} ⊆ A+λ,α for every λ ∈ [0,1]. However, if λ ∈ (0,1], we even obtain equality of both sets - since:

###### Lemma 2.7.

Let λ ∈ (0,1] and α ∈ (0,1). Put ξ := F∧(α) and η := F∨(α).

• If A+λ,α ≠ ∅, then F(ξ) = α. Moreover, the restricted function F|(ξ,η) is continuous, and

 A+λ,α={x∈R:x>ξ and F(x)=α}={(ξ,η) if F(η)>α; (ξ,η] if F(η)=α} (2.7)
• If A+λ,α ≠ ∅, then ξ < η.

• Furthermore,

 JF = {x∈R:F∧(u)=x=F∨(u) and ΔF(F∧(u))>0 for some u∈(0,1)} = {x∈R:F∧(u)=x=F∨(u) and ΔF(F∨(u))>0 for some u∈(0,1)}.

In particular, the following statements are equivalent:

• A+λ,α = ∅;

• ξ = η.

###### Proof.

Put B := {x ∈ R : x > ξ and F(x) = α}. Clearly, we always have B ⊆ A+λ,α.

To verify (i), let x ∈ A+λ,α. Then x > ξ and Fλ(x) ≤ α. Thus, α ≤ F(ξ) ≤ F(x−) ≤ Fλ(x) ≤ α, implying that F(ξ) = α and F(x−) = α = Fλ(x). Assume by contradiction that ΔF(x) > 0. Then α = Fλ(x) = F(x−) + λΔF(x) > F(x−) = α for all λ ∈ (0,1], which is a contradiction. Hence, ΔF(x) = 0, so that F(x) = α. Proposition 2.5 therefore implies that A+λ,α ⊆ B, and hence A+λ,α = B.

Let x ∈ A+λ,α, and assume by contradiction that F is not continuous at some u ∈ (ξ, x). Then ΔF(u) > 0 (since u ∈ JF). Since u ∈ (ξ, x), we have ξ + 1/(2n) ≤ u ≤ x − 1/(2n) for some n ∈ N. Thus,

 α≤F(ξ)≤F(ξ+1/(2n))≤F(x−1/(2n)).

Hence, α < α + ΔF(u) ≤ F(x−) ≤ Fλ(x) ≤ α, which is a contradiction. Thus, the restricted function F|(ξ,x) is continuous. Let u ∈ (ξ, x). Since F is continuous at u, it follows that

 α≤F(ξ)≤F(u)=Fλ(u)≤α.

Thus, F(u) = α for all u ∈ (ξ, x).

To prove (ii), suppose that A+λ,α is non-empty. The previous calculations show that the existence of an element x ∈ A+λ,α already implies F(x) = α. Consequently, ξ cannot coincide with η (since ξ < x ≤ η), implying that ξ < η.

To finish the proof of (i), we have to verify (2.7). To this end, let A+λ,α ≠ ∅ and u ∈ (ξ, η). Then there exists x ∈ (u, η] such that F(x) ≤ α. Consequently, α = F(ξ) ≤ F(u) ≤ F(x) ≤ α. Thus,

 (ξ,η)⊆{x∈R:x>ξ and F(x)=α}=B.

Moreover, [5, Proposition 2.3 (6)] implies that

 B⊆(ξ,η].

Hence,

 (ξ,η)⊆B⊆(ξ,η].

If F(η) > α, then η ∉ B and hence B = (ξ, η). If F(η) = α, then η ∈ B and hence B = (ξ, η].

Statement (iii) is a direct implication of (i) and Proposition 2.5. ∎

Regarding a visualisation of Lemma 2.7, consider the set Mα := {x ∈ R : x ≤ ξ and F(x) = α}. Note that

 {x∈R:F(x)=α}={x∈R:x>ξ and F(x)=α} ∪· Mα.

Thus, by joining Lemma 2.7 with Proposition 2.5 we immediately obtain the following tangible mathematical description of the (preimages of) “flat pieces” of F (hence allowing us to complete related observations from e. g. [14, Chapter 4.4] and [5, Proposition 2.3 (6)] coherently):

###### Corollary 2.8.

Let λ ∈ (0,1] and α ∈ (0,1). Put ξ := F∧(α) and η := F∨(α).

• If ξ < η, then

 ∅≠{x∈R:F(x)=α}=A+λ,α ∪· {ξ}={[ξ,η) if F(η)>α; [ξ,η] if F(η)=α}
• If ξ = η, then

 {x∈R:F(x)=α}={∅ if F(η)>α; {ξ} if F(η)=α}

In particular, {x ∈ R : F(x) = α} ≠ ∅ if and only if F(ξ) = α, and {x ∈ R : F(x) = α} = {ξ} if and only if ξ = η and F(ξ) = α, and if ξ < η, then η ∈ {x ∈ R : F(x) = α} if and only if F(η) = α.

###### Remark 2.9.

Let α ∈ (0,1). Then, according to [1, Corollary 1.1], for a large class of distribution functions any non-empty set {x ∈ R : F(x) = α} even emerges as a set of optimal solutions of the so-called “single period newsvendor problem”, which asks for the minimisation of coherent risk measures, such as the conditional value-at-risk (which coincides with Expected Shortfall), corresponding to a cost function induced by random demand. Here, one should recall that recently the Basel Committee on Banking Supervision (BCBS) suggested in their updated consultative document “Fundamental review of the trading book” to implement Expected Shortfall at the 97.5% confidence level in a bank’s internal market risk model to calculate its minimum capital requirements with respect to market risk.

Let B(R) denote the set of all Borel subsets of R. In the following, let μF denote the Lebesgue-Stieltjes measure of F. For a detailed description of the construction and properties of the Lebesgue-Stieltjes measure (including Lebesgue-Stieltjes integration), we refer the reader to e. g. [2] and [3]. For the convenience of the reader, we recall the following fundamental result (cf. [3, Theorem 12.4]):

###### Theorem 2.10 (Lebesgue-Stieltjes measure).

Let G : R → R be an arbitrary non-decreasing and right-continuous function. Then there exists a unique Borel measure μG on B(R) satisfying

 μG((x,y])=G(y)−G(x)

for all x, y ∈ R with x < y.

Clearly, this crucial result implies that μG({x}) = G(x) − G(x−) = ΔG(x) and hence

 μG([x,y])=G(y)−G(x−)

for all x, y ∈ R with x ≤ y. Moreover, μG = 0 if and only if G is a constant function on R.
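Theorem 2.10 and its consequences can be illustrated numerically; in the sketch below (our own toy example with a purely discrete F, not from the paper), the Lebesgue-Stieltjes measure of half-open intervals and singletons is read off directly from increments of F:

```python
atoms = [0.0, 1.0, 2.5]   # jump locations of a purely discrete F
probs = [0.2, 0.5, 0.3]   # jump sizes; they sum to 1

def F(x):
    return sum(p for a, p in zip(atoms, probs) if a <= x)

def F_left(x):
    return sum(p for a, p in zip(atoms, probs) if a < x)

def mu_interval(x, y):
    """mu_F((x, y]) = F(y) - F(x) for x < y (Theorem 2.10)."""
    return F(y) - F(x)

def mu_point(x):
    """mu_F({x}) = F(x) - F(x-) = Delta_F(x)."""
    return F(x) - F_left(x)

# The whole line carries total mass 1, singletons carry the jump sizes,
# and the open interval (0, 1) - a "flat piece" of F - is a mu_F-null set,
# in line with Corollary 2.11 below.
assert abs(mu_interval(-10.0, 10.0) - 1.0) < 1e-12
assert abs(mu_point(1.0) - 0.5) < 1e-12
assert abs(F_left(1.0) - F(0.0)) < 1e-12   # mu_F((0, 1)) = F(1-) - F(0) = 0
```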

Returning to our distribution function F, a direct application of Theorem 2.10 leads to another important implication of Lemma 2.7:

###### Corollary 2.11.

Let λ ∈ (0,1] and α ∈ (0,1). Then A+λ,α ∈ B(R), and

 μF(A+λ,α)=0.

In particular, if {x ∈ R : F(x) = α} ≠ ∅, then

 μF({x∈R:F(x)=α})=ΔF(ξ)=α−F(ξ−).
###### Proof.

Nothing is to prove if A+λ,α = ∅. So, let A+λ,α ≠ ∅. Then F(ξ) = α and ξ < η.

Suppose first that F(η) > α. Then

 A+λ,α=(ξ,η)=⋃n=1∞(ξ,η−1/n].

Consequently, since in general μF((x, y]) = F(y) − F(x) for all x < y, the continuity from below of μF implies that

 μF(A+λ,α)=limn→∞μF((ξ,η−1/n])=limn→∞(F(η−1/n)−F(ξ))=α−α=0.

Now suppose that F(η) = α. Then η ∈ A+λ,α, and it follows that F is continuous at η. Thus, μF({η}) = 0. Since in this case

 A+λ,α=(ξ,η) ∪· {η},

it consequently follows that

 μF(A+λ,α)=limn→∞(F(η−1/n)−F(ξ))+μF({η})=α−α+0=0. ∎

Next, we are going to reveal in detail that the function Fλ is almost “left-invertible” at every point which does not belong to the preimage of a “flat piece” of F. More precisely:

###### Theorem 2.12.

Let λ ∈ (0,1). Assume that 0 < Fλ < 1 holds μF-almost everywhere. Then

 idR=F∧∘FλμF-almost everywhere.

In particular, if 0 < F < 1 holds μF-almost everywhere, then

 idR=F∧∘FμF-% almost everywhere.
###### Proof.

Let λ ∈ (0,1). Consider the Borel set

 Nλ:={x∈R:Fλ(x)=0} ∪· {x∈R:Fλ(x)=1} ∪· ⋃α∈JF∧A+λ,α,

where JF∧ denotes the set of all jumps of the function F∧. Note that by construction Fλ(x) ∈ (0,1) if x ∉ Nλ. Since the (left-continuous) function F∧ is non-decreasing, JF∧ is at most countable. Hence, if JF∧ ≠ ∅, there exists a subset N of N, and a sequence (αn)n∈N, consisting of pairwise distinct elements αn ∈ JF∧, such that ⋃α∈JF∧ A+λ,α = ⋃n∈N A+λ,αn. Thus, Corollary 2.11 implies that μF(⋃α∈JF∧ A+λ,α) = 0. The assumption on Fλ therefore implies that - in any case - μF(Nλ) = 0, and hence Nλ ≠ R (since F cannot be a constant function on the whole real line).

Let x ∉ Nλ. Put α(x) := Fλ(x). Then α(x) ∈ (0,1), and F(x−) ≤ α(x) ≤ F(x). Thus, ξ(x) := F∧(α(x)) is well-defined. We consider two cases.

First, let ΔF(x) > 0. Then α(x) ∈ (F(x−), F(x)), since 0 < λ < 1. Proposition 2.5 therefore implies that x = F∧(α(x)) = ξ(x). In particular, x ≤ ξ(x). Hence, it consequently follows that

 x≤ξ(x)=F∧(α(x))=F∧(Fλ(x)),

and hence x = F∧(Fλ(x)), due to Remark 2.4.

Now let ΔF(x) = 0, so that α(x) = Fλ(x) = F(x). If α(x) ∉ JF∧, it follows again that x ≤ ξ(x) and hence

 x≤ξ(x)=F∧(α(x))=F∧(Fλ(x))≤x,

as above. So, let α(x) ∈ JF∧. Then α(x) = αn for some n ∈ N, and hence A+λ,α(x) ⊆ Nλ. Since x ∉ Nλ, we have x ∉ A+λ,α(x); because Fλ(x) ≤ α(x), this forces x ≤ ξ(x), and hence

 x=ξ(x)=F∧(α(x))=F∧(Fλ(x)). ∎
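The identity of Theorem 2.12 can be sanity-checked numerically for a purely discrete F: μF is then concentrated on the atoms, and at every atom x and every λ ∈ (0,1) one indeed recovers x = F∧(Fλ(x)) (the toy distribution and all function names below are our own illustrative choices):

```python
import random

atoms = [0.0, 1.0, 2.5]   # jump locations of a purely discrete F
probs = [0.2, 0.5, 0.3]   # jump sizes; they sum to 1

def F(x):
    return sum(p for a, p in zip(atoms, probs) if a <= x)

def F_left(x):
    return sum(p for a, p in zip(atoms, probs) if a < x)

def R(x, lam):
    """Rüschendorf lambda-transform F_lambda(x) = F(x-) + lambda * Delta_F(x)."""
    return F_left(x) + lam * (F(x) - F_left(x))

def F_wedge(alpha):
    """Left quantile F^(alpha) = min{x : F(x) >= alpha}."""
    return min(a for a in atoms if F(a) >= alpha)

random.seed(0)
for x in atoms:                                  # the atoms carry all of the mass mu_F
    for _ in range(1000):
        lam = random.uniform(1e-9, 1.0 - 1e-9)   # lambda in (0, 1)
        assert F_wedge(R(x, lam)) == x           # id = F^ o F_lambda on the atoms
```

Away from the jumps the identity may fail on flat pieces of F, which is exactly why Theorem 2.12 only asserts the identity μF-almost everywhere.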

Next, we consider the set A∼λ,α. Again, in line with [12], we put q := F(ξ−) and β := ΔF(ξ). Then

 q+β=F(ξ)≥α≥q.

Obviously, we may write:

###### Remark 2.13.

A∼λ,α = {ξ} if λβ ≤ α − q, and A∼λ,α = ∅ otherwise.

Moreover, by using a similar argument to the one which has shown us that the set Aλ,α is non-empty, we further obtain

###### Remark 2.14.

A−λ,α = (−∞, ξ).

Observe that only the subset A∼λ,α of Aλ,α depends on the choice of λ.

### 2.1 The inclusion of randomness

In addition to our assumptions above, we now fix a given probability space (Ω, F, P). Let X and V be two given random variables (on this probability space) such that V is uniformly distributed over (0,1) and independent of X. In the following we consider the random variable FV(X), defined on Ω as

 FV(X)(ω):=FV(ω)(X(ω))=Fλ(x),

where here x := X(ω) and λ := V(ω). Since V is uniformly distributed over (0,1), we obviously have P(V ∈ (0,1]) = 1 and hence P(FV(X) ≤ α) = P(FV(X) ≤ α and V ∈ (0,1]). Next, we have to evaluate the distribution of FV(X); i.e., we wish to calculate

 P(FV(X)≤α)=P({ω∈Ω:X(ω)∈AV(ω),α} and V∈(0,1])

Due to our previous observations, we have

 AV(ω),α=A+V(ω),α ∪· A∼V(ω),α ∪· A−V(ω),α

for all ω ∈ Ω. Consequently, given the assumed independence of X and V, Lemma 2.7 implies that (here, q := F(ξ−) and β := ΔF(ξ), where ξ := F∧(α)):

 P(FV(X)≤α) = P(X∈A+V,α and V∈(0,1]) + P(X∈A∼V,α and V∈(0,1]) + P(X∈A−V,α and V∈(0,1])
 = P(X>ξ and F(X)=α) + P(X=ξ)P(βV≤α−q and V∈(0,1]) + P(X<ξ).

Apparently, to continue with the calculation of the respective probabilities, we have to consider the following two possible cases: β = 0 and β > 0:

• Let β = 0. Thus, since P(βV ≤ α − q and V ∈ (0,1]) = 1 (recall that α ≥ q by (2.5)), it follows that

 P(FV(X)≤α)=P(X>ξ and F(X)=α)+P(X≤ξ)
• Let β > 0. Since V is uniformly distributed over (0,1), we have P(βV ≤ α − q and V ∈ (0,1]) = (α − q)/β. Hence, since q = F(ξ) − β, it follows that

 P(FV(X)≤α)=P(X>ξ and F(X)=α)+((α−F(ξ))/β)P(X=ξ)+P(X≤ξ).

Moreover, by taking into account that F(ξ) = α in case (i) (since F is continuous at ξ if β = 0), we have arrived at the following important

###### Lemma 2.15.

Suppose that F is an arbitrary distribution function. Let α ∈ (0,1). Put ξ := F∧(α) and β := ΔF(ξ). Let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0,1) and V is independent of X. Then

 P(FV(X)≤α)−α=P(X>ξ and F(X)=α)+cβ(P(X=ξ)−β)+(P(X≤ξ)−F(ξ)),

where cβ := (α − F(ξ))/β if β > 0 and cβ := 0 if β = 0.

To conclude, let us point towards the fact that Lemma 2.15 could also be viewed as a building block of a probabilistic limit theorem (whose detailed discussion would exceed the main goal of this paper, though).
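A quick Monte Carlo experiment (our own sketch; the toy distribution, sample size, and tolerance are arbitrary choices) illustrates the content of this section: for a purely discrete X, the distributional transform FV(X) = F(X−) + V·ΔF(X), with V ∼ U(0,1) independent of X, is approximately uniformly distributed on (0,1):

```python
import random

atoms = [0.0, 1.0, 2.5]   # support of a purely discrete X
probs = [0.2, 0.5, 0.3]   # P(X = atom)

def F(x):
    return sum(p for a, p in zip(atoms, probs) if a <= x)

def F_left(x):
    return sum(p for a, p in zip(atoms, probs) if a < x)

def sample_X(rng):
    """Draw X by inverting the cumulative weights."""
    u, c = rng.random(), 0.0
    for a, p in zip(atoms, probs):
        c += p
        if u <= c:
            return a
    return atoms[-1]

rng = random.Random(1)
n = 100_000
# Distributional transform: F_V(X) = F(X-) + V * Delta_F(X).
sample = [F_left(x) + rng.random() * (F(x) - F_left(x))
          for x in (sample_X(rng) for _ in range(n))]

# The empirical CDF of F_V(X) should be close to that of U(0, 1).
for alpha in [0.1, 0.2, 0.5, 0.7, 0.9]:
    freq = sum(s <= alpha for s in sample) / n
    assert abs(freq - alpha) < 0.01
```

In contrast, the plain probability transform F(X) alone is not uniform here: P(F(X) ≤ 0.2) = P(X ≤ 0) = 0.2, yet P(F(X) ≤ 0.1) = 0, so the extra randomisation by V is essential for distributions with discrete parts.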

### 2.2 The role of the distribution function of X

From now on, F is given as the distribution function of a given random variable X.

###### Proposition 2.16.

Let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0,1) and V is independent of X. Let F be the distribution function of X. Then FV(X) is a uniformly distributed random variable. Moreover,

 P(F(X)≤α)=α=P(X≤F∧(α))

on the set