Strong limit theorems for extended independent and extended negatively dependent random variables under non-linear expectations

Abstract

Limit theorems for non-additive probabilities or non-linear expectations are challenging problems that have attracted growing interest recently. The purpose of this paper is to study the strong law of large numbers and the law of the iterated logarithm for a sequence of random variables in a sub-linear expectation space under a concept of extended independence which is much weaker and easier to verify than the independence proposed by Peng (2008b). We introduce a concept of extended negative dependence which extends both this kind of weak independence and the extended negative dependence relative to classical probability that has appeared in the recent literature. Powerful tools such as the moment inequality and Kolmogorov’s exponential inequality are established for such extended negatively dependent random variables; they substantially improve those of Chen, Chen and Ng (2010). The strong law of large numbers and the law of the iterated logarithm are then obtained by applying these inequalities.

LI-XIN ZHANG,{}^{*} Zhejiang University

{}^{*}\,Postal address: School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, P.R. China. Email: stazlx@zju.edu.cn

Keywords: sub-linear expectation; capacity; Kolmogorov’s exponential inequality; extended negative dependence; laws of the iterated logarithm; law of large numbers

2010 Mathematics Subject Classification: Primary 60F15

2010 Mathematics Subject Classification: Secondary 28A12; 60A05

1 Introduction

Non-additive probabilities and non-additive expectations are useful tools for studying uncertainties in statistics, measures of risk, superhedging in finance and non-linear stochastic calculus; cf. Denis and Martini (2006), Gilboa (1987), Marinacci (1999), Peng (1999, 2006, 2008a) etc. This paper considers general sub-linear expectations and the related non-additive probabilities generated by them. Under the framework of non-additive probability or non-linear expectation, the traditional way of defining independence is through the non-additive probability, by imitating the classical independence relative to a probability. Under such frameworks, it is hard to study limit theorems unless some additional conditions (for example, the complete monotonicity of the non-additive probability) are assumed so that the non-additive probability is somehow close to an additive one (cf. Maccheroni and Marinacci (2015), Terán (2014)). To the best of my knowledge, Peng (2008b) is the first to give a reasonable definition of independence through the non-linear expectation. Let \{X_{n};n\geq 1\} be a sequence of random variables in a sub-linear expectation space (\Omega,\mathcal{H},\widehat{\mathbb{E}}). Peng’s independence states that

\widehat{\mathbb{E}}\left[\psi(X_{1},\cdots,X_{n},X_{n+1})\right]=\widehat{\mathbb{E}}\left[\widehat{\mathbb{E}}\big{[}\psi(x_{1},\cdots,x_{n},X_{n+1})\big{]}\big{|}_{x_{1}=X_{1},\cdots,x_{n}=X_{n}}\right], (1.1)

for all n\geq 2 and any \psi\in C_{l,Lip}(\mathbb{R}_{n+1}), where C_{l,Lip}(\mathbb{R}_{n+1}) is the space of local Lipschitz functions on \mathbb{R}_{n+1}. Under Peng’s framework, many limit theorems have been established recently, including the central limit theorem and the weak law of large numbers (cf. Peng (2008b, 2010)), the law of the iterated logarithm (cf. Chen and Hu (2014), Zhang (2015a)), small deviations and Chung’s law of the iterated logarithm (cf. Zhang (2015b)), and the moment inequalities for maximum partial sums (cf. Zhang (2016)). Zhang (2016) gives the sufficient and necessary condition for the Kolmogorov strong law of large numbers. For a sequence of independent and identically distributed random variables \{X_{n};n\geq 1\}, it is shown that the sufficient and necessary moment condition for the strong law of large numbers to hold is that the Choquet integral of |X_{1}| is finite:

C_{\mathbb{V}}(|X_{1}|)=\int_{0}^{\infty}\mathbb{V}(|X_{1}|\geq t)dt<\infty, (1.2)

where \mathbb{V} is the upper capacity generated by the sub-linear expectation \widehat{\mathbb{E}}.

Recall that two random variables X and Y are independent relative to a probability P if and only if for any Borel functions f and g, E_{P}[f(X)g(Y)]=E_{P}[f(X)]E_{P}[g(Y)] whenever the expectations considered are finite. Another possible way to define the independence of \{X_{n};n\geq 1\} is that

\displaystyle\widehat{\mathbb{E}}\left[\psi_{1}(X_{1},\ldots,X_{k})\psi_{2}(X_{k+1},\ldots,X_{n})\right]
\displaystyle= \displaystyle\widehat{\mathbb{E}}\left[\psi_{1}(X_{1},\ldots,X_{k})\right]\widehat{\mathbb{E}}\left[\psi_{2}(X_{k+1},\ldots,X_{n})\right] (1.3)

for all n>k\geq 1 and any \psi_{1}\in C_{l,Lip}(\mathbb{R}_{k}) and \psi_{2}\in C_{l,Lip}(\mathbb{R}_{n-k}) such that the sub-linear expectations considered are finite. If independence is defined in this way, the functions \psi_{1} and \psi_{2} need to be limited to the class of non-negative functions, for otherwise we would conclude that \widehat{\mathbb{E}}[\cdot]=-\widehat{\mathbb{E}}[-\cdot], and so \widehat{\mathbb{E}} would reduce to a linear expectation. It can be shown that (1.1) implies (1.3). A weaker notion, which we call extended independence, requires that

\widehat{\mathbb{E}}\left[\prod_{i=1}^{n}\psi_{i}(X_{i})\right]=\prod_{i=1}^{n}\widehat{\mathbb{E}}\left[\psi_{i}(X_{i})\right],\;\forall\;n\geq 2,\forall\;0\leq\psi_{i}(x)\in C_{l,Lip}(\mathbb{R}). (1.4)

This independence is much weaker than Peng’s and easier to verify. For the classical linear expectation, the above definitions of independence are equivalent. For a non-linear expectation, they are quite different. For example, Peng’s independence is directional: that Y is independent to X does not imply that X is independent to Y. The independence in (1.4), in contrast, has no direction. One of the purposes of this paper is to show that, under this extended independence, the sufficient and necessary moment condition for the Kolmogorov strong law of large numbers to hold is again that the Choquet integral of |X_{1}| is finite. The proof of the sufficiency part is somewhat similar to that of Zhang (2016), after establishing good estimates of the tail capacity of partial sums of random variables. Because we do not have “the divergence part” of the Borel–Cantelli lemma and have no information about the independence under the conjugate expectation \widehat{\mathcal{E}} or the conjugate capacity \mathcal{V}, where \widehat{\mathcal{E}}[\cdot]=-\widehat{\mathbb{E}}[-\cdot] and \mathcal{V}(A)=1-\mathbb{V}(A^{c}), proving the necessity part is a challenging task.

By replacing the function space C_{l,Lip}(\mathbb{R}) with the family of all Borel measurable functions, Chen, Wu and Li (2013) considered random variables which are independent in the sense of (1.4) under an upper expectation \widehat{\mathbb{E}}[\cdot] defined by

\widehat{\mathbb{E}}[X]=\sup_{P\in\mathscr{P}}E_{P}[X],

where \mathscr{P} is a family of probability measures defined on a measurable space (\Omega,\mathcal{F}). The strong law of large numbers was proved under finite (1+\alpha)-th moments (\alpha>0), which is much more stringent than (1.2) when the random variables are identically distributed. Note that, if \{X_{n};n\geq 1\} are independent relative to each P\in\mathscr{P}, then we have

\widehat{\mathbb{E}}\left[\prod_{i=1}^{n}\psi_{i}(X_{i})\right]\leq\prod_{i=1}^{n}\widehat{\mathbb{E}}\left[\psi_{i}(X_{i})\right],\;\forall\;n\geq 2,\forall\;0\leq\psi_{i}(x)\in C_{l,Lip}(\mathbb{R}). (1.5)

But in general, equality does not hold, so the random variables may not be independent under \widehat{\mathbb{E}}. A simple example is

(X_{1},X_{2})\sim P_{\sigma}\in\mathscr{P}=\{N(0,\sigma^{2})\otimes N(0,\sigma^{-2}):1/2\leq\sigma\leq 2\}

for which \sup\limits_{\sigma}E_{\sigma}\left[X_{1}^{2}X_{2}^{2}\right]=1, while \sup\limits_{\sigma}E_{\sigma}\left[X_{1}^{2}\right]\sup\limits_{\sigma}E_{\sigma}\left[X_{2}^{2}\right]=16. It is of interest to study limit theorems for random variables satisfying property (1.5).
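The two displayed values can be checked numerically. The following is a toy verification of ours, not part of the paper; the moments are computed in closed form over a grid of \sigma:

```python
import numpy as np

# Under P_sigma, X1 ~ N(0, sigma^2) and X2 ~ N(0, sigma^{-2}) are independent,
# so E_sigma[X1^2 X2^2] = sigma^2 * sigma^{-2} = 1 for every sigma, while the
# separate suprema over 1/2 <= sigma <= 2 are 4 and 4, with product 16.
sigmas = np.linspace(0.5, 2.0, 1001)
e_x1sq = sigmas ** 2            # E_sigma[X1^2]
e_x2sq = sigmas ** (-2)         # E_sigma[X2^2]
e_prod = e_x1sq * e_x2sq        # E_sigma[X1^2 X2^2], by independence under P_sigma

sup_prod = e_prod.max()                  # sup_sigma E_sigma[X1^2 X2^2] = 1
sup_sep = e_x1sq.max() * e_x2sq.max()    # sup E[X1^2] * sup E[X2^2] = 16
print(round(sup_prod, 9), round(sup_sep, 9))   # 1.0 16.0
```

So the inequality (1.5) is strict for this family, and the coordinates cannot be independent under \widehat{\mathbb{E}}.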

Property (1.5) is very close to that of negatively dependent random variables. The concept of negative dependence relative to classical probability has been extensively investigated since it appeared in Lehmann (1966). Various generalizations of the concept of negative dependence and related limit theorems have been studied in the literature. One can refer to Joag-Dev and Proschan (1983), Newman (1984), Matula (1992), Su et al (1997), Shao and Su (1999), Shao (2000), Zhang (2001a, 2001b) etc. As a new extension, the concept of extended negative dependence was proposed in Liu (2009) and further promoted in Chen, Chen and Ng (2010). A sequence of random variables is said to be extended negatively dependent if the tails of its finite-dimensional distributions in the lower-left and upper-right corners are dominated by a multiple of the tails of the corresponding finite-dimensional distributions of a sequence of independent random variables with the same marginal distributions. The strong law of large numbers was established by Chen, Chen and Ng (2010). However, for extended negatively dependent random variables, beyond this type of law of large numbers, very little is known about finer limit theorems such as the central limit theorem and the law of the iterated logarithm. In this paper, we introduce a concept of extended negative dependence under the sub-linear expectation which is weaker than the extended independence defined in (1.4) and extends the extended negative dependence relative to classical probability. The strong law of large numbers is also established for extended negatively dependent random variables; the result of Chen, Chen and Ng (2010) is extended and improved.

To establish the strong law of large numbers, some key inequalities for the tails of the capacities of sums of extended negatively dependent random variables in general sub-linear expectation spaces are obtained, including moment inequalities and Kolmogorov-type exponential inequalities. These inequalities also improve those established by Chen, Chen and Ng (2010) for extended negatively dependent random variables relative to a classical probability, as well as those for independent random variables in a sub-linear expectation space. They may be useful tools for studying other limit theorems. We also establish the law of the iterated logarithm by applying the exponential inequalities, and, as a corollary, obtain the law of the iterated logarithm for extended negatively dependent random variables on a probability space. In the next section, we give some notation for sub-linear expectations. In Section 3, we establish the exponential inequalities. The law of large numbers is given in Section 4. In the last section we consider the law of the iterated logarithm.

2 Basic Settings

We use the framework and notations of Peng (2008b). Let (\Omega,\mathcal{F}) be a given measurable space and let \mathscr{H} be a linear space of real functions defined on (\Omega,\mathcal{F}) such that if X_{1},\ldots,X_{n}\in\mathscr{H} then \varphi(X_{1},\ldots,X_{n})\in\mathscr{H} for each \varphi\in C_{l,Lip}(\mathbb{R}_{n}), where C_{l,Lip}(\mathbb{R}_{n}) denotes the linear space of (local Lipschitz) functions \varphi satisfying

\displaystyle|\varphi(\bm{x})-\varphi(\bm{y})|\leq C(1+|\bm{x}|^{m}+|\bm{y}|^{m})|\bm{x}-\bm{y}|,\;\;\forall\bm{x},\bm{y}\in\mathbb{R}_{n},
\displaystyle\text{for some }C>0,m\in\mathbb{N}\text{ depending on }\varphi.

\mathscr{H} is considered as a space of “random variables”; in this case we write X\in\mathscr{H}. We also denote by C_{b,Lip}(\mathbb{R}_{n}) the space of bounded Lipschitz functions \varphi satisfying

\displaystyle|\varphi(\bm{x})|\leq C,\;\;|\varphi(\bm{x})-\varphi(\bm{y})|\leq C|\bm{x}-\bm{y}|,\;\;\forall\bm{x},\bm{y}\in\mathbb{R}_{n},
\displaystyle\text{for some }C>0\text{ depending on }\varphi.
Definition 2.1

A sub-linear expectation \widehat{\mathbb{E}} on \mathscr{H} is a function \widehat{\mathbb{E}}:\mathscr{H}\to\overline{\mathbb{R}} satisfying the following properties: for all X,Y\in\mathscr{H}, we have

(a)

Monotonicity: If X\geq Y then \widehat{\mathbb{E}}[X]\geq\widehat{\mathbb{E}}[Y];

(b)

Constant preserving: \widehat{\mathbb{E}}[c]=c;

(c)

Sub-additivity: \widehat{\mathbb{E}}[X+Y]\leq\widehat{\mathbb{E}}[X]+\widehat{\mathbb{E}}[Y] whenever \widehat{\mathbb{E}}[X]+\widehat{\mathbb{E}}[Y] is not of the form +\infty-\infty or -\infty+\infty;

(d)

Positive homogeneity: \widehat{\mathbb{E}}[\lambda X]=\lambda\widehat{\mathbb{E}}[X], \lambda\geq 0.

Here \overline{\mathbb{R}}=[-\infty,\infty]. The triple (\Omega,\mathscr{H},\widehat{\mathbb{E}}) is called a sub-linear expectation space. Given a sub-linear expectation \widehat{\mathbb{E}}, let us denote the conjugate expectation \widehat{\mathcal{E}} of \widehat{\mathbb{E}} by

\widehat{\mathcal{E}}[X]:=-\widehat{\mathbb{E}}[-X],\;\;\forall X\in\mathscr{H}.

From the definition, it is easily shown that \widehat{\mathcal{E}}[X]\leq\widehat{\mathbb{E}}[X], \widehat{\mathbb{E}}[X+c]=\widehat{\mathbb{E}}[X]+c and \widehat{\mathbb{E}}[X-Y]\geq\widehat{\mathbb{E}}[X]-\widehat{\mathbb{E}}[Y] for all X,Y\in\mathscr{H} with \widehat{\mathbb{E}}[Y] being finite. Further, if \widehat{\mathbb{E}}[|X|] is finite, then \widehat{\mathcal{E}}[X] and \widehat{\mathbb{E}}[X] are both finite.

Next, we consider the capacities corresponding to the sub-linear expectations. Let \mathcal{G}\subset\mathcal{F}. A function V:\mathcal{G}\to[0,1] is called a capacity if

V(\emptyset)=0,\;V(\Omega)=1\;\text{ and }V(A)\leq V(B)\;\;\forall\;A\subset B% ,\;A,B\in\mathcal{G}.

It is called sub-additive if V(A\bigcup B)\leq V(A)+V(B) for all A,B\in\mathcal{G} with A\bigcup B\in\mathcal{G}.

In the sub-linear expectation space (\Omega,\mathscr{H},\widehat{\mathbb{E}}), we define a pair (\mathbb{V},\mathcal{V}) of capacities by

\mathbb{V}(A):=\inf\{\widehat{\mathbb{E}}[\xi]:I_{A}\leq\xi,\xi\in\mathscr{H}\},\;\;\mathcal{V}(A):=1-\mathbb{V}(A^{c}),\;\;\forall A\in\mathcal{F},

where A^{c} is the complement set of A. Then

\begin{matrix}&\mathbb{V}(A)=\widehat{\mathbb{E}}[I_{A}],\;\;\mathcal{V}(A)=\widehat{\mathcal{E}}[I_{A}],\;\;\text{ if }I_{A}\in\mathscr{H},\\ &\widehat{\mathbb{E}}[f]\leq\mathbb{V}(A)\leq\widehat{\mathbb{E}}[g],\;\;\widehat{\mathcal{E}}[f]\leq\mathcal{V}(A)\leq\widehat{\mathcal{E}}[g],\;\;\text{ if }f\leq I_{A}\leq g,\;f,g\in\mathscr{H}.\end{matrix} (2.1)

It is obvious that \mathbb{V} is sub-additive. But \mathcal{V} and \widehat{\mathcal{E}} are not. However, we have

\mathcal{V}(A\bigcup B)\leq\mathcal{V}(A)+\mathbb{V}(B)\;\;\text{ and }\;\;\widehat{\mathcal{E}}[X+Y]\leq\widehat{\mathcal{E}}[X]+\widehat{\mathbb{E}}[Y] (2.2)

due to the fact that \mathbb{V}(A^{c}\bigcap B^{c})\geq\mathbb{V}(A^{c})-\mathbb{V}(B) and \widehat{\mathbb{E}}[-X-Y]\geq\widehat{\mathbb{E}}[-X]-\widehat{\mathbb{E}}[Y].
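These facts can be illustrated on a two-point sample space. The following is a toy example of ours (not from the paper), with \widehat{\mathbb{E}} the supremum over two probability measures: \mathbb{V} is sub-additive, the conjugate \mathcal{V} is not, yet the mixed bound in (2.2) holds.

```python
# Toy illustration: Omega = {0, 1} and two probability measures; the upper
# capacity V(A) = max_P P(A) is sub-additive, the conjugate capacity
# calV(A) = 1 - V(A^c) is not, but calV(A u B) <= calV(A) + V(B) still holds.
omega = (0, 1)
measures = [{0: 0.25, 1: 0.75}, {0: 0.75, 1: 0.25}]

def V(A):                          # upper capacity
    return max(sum(P[w] for w in A) for P in measures)

def calV(A):                       # conjugate (lower) capacity
    comp = tuple(w for w in omega if w not in A)
    return 1.0 - V(comp)

A, B = (0,), (1,)
print(V(A), V(B), V(A + B))        # 0.75 0.75 1.0: V(A u B) <= V(A) + V(B)
print(calV(A), calV(B), calV(A + B))  # 0.25 0.25 1.0: calV is NOT sub-additive
```

Here \mathcal{V}(A\bigcup B)=1>\mathcal{V}(A)+\mathcal{V}(B)=0.5, while the mixed bound \mathcal{V}(A\bigcup B)\leq\mathcal{V}(A)+\mathbb{V}(B)=1 from (2.2) is tight.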

Also, we define the Choquet integrals/expectations (C_{\mathbb{V}},C_{\mathcal{V}}) by

C_{V}[X]=\int_{0}^{\infty}V(X\geq t)dt+\int_{-\infty}^{0}\left[V(X\geq t)-1\right]dt

with V being replaced by \mathbb{V} and \mathcal{V} respectively. It is obvious that

C_{\mathbb{V}}(|X|)\leq 1+\widehat{\mathbb{E}}[|X|^{1+\alpha}]\int_{1}^{\infty}t^{-1-\alpha}dt\leq 1+\alpha^{-1}\widehat{\mathbb{E}}[|X|^{1+\alpha}].

Also, it can be verified that, if \lim_{c\to\infty}\widehat{\mathbb{E}}[(|X|-c)^{+}]=0, then \widehat{\mathbb{E}}[|X|]\leq C_{\mathbb{V}}(|X|) (cf. Lemma 3.9 of Zhang (2016)).
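The moment bound on the Choquet integral above can be checked numerically on a concrete family. This is our own toy example (not from the paper), with exponential marginals of means 1 and 2 and \alpha=1:

```python
import numpy as np
from math import gamma

# Check C_V(|X|) <= 1 + alpha^{-1} E_hat[|X|^{1+alpha}] for the upper capacity
# V(A) = sup_P P(A) over exponential laws with means 1 and 2 (so X >= 0).
alpha = 1.0
means = [1.0, 2.0]

t = np.linspace(0.0, 200.0, 200_001)
V_tail = np.exp(-t / max(means))      # sup_m P(X >= t) = e^{-t/2}
# Choquet integral C_V(X) = int_0^inf V(X >= t) dt, by the trapezoidal rule
choquet = float(np.sum((V_tail[1:] + V_tail[:-1]) / 2 * np.diff(t)))

# E_m[X^{1+alpha}] = m^{1+alpha} * Gamma(2+alpha) for an exponential of mean m
sup_moment = max(m ** (1 + alpha) * gamma(2 + alpha) for m in means)
bound = 1 + sup_moment / alpha        # here 1 + 8 = 9

print(round(choquet, 4), bound)       # 2.0 9.0
```

The Choquet integral equals 2 here (the larger mean dominates the tail), comfortably below the moment bound 9.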


The concepts of independence and identical distribution were introduced by Peng (2006, 2008b).

Definition 2.2

(Peng (2006, 2008b))

(i)

(Identical distribution) Let \bm{X}_{1} and \bm{X}_{2} be two n-dimensional random vectors defined respectively in sub-linear expectation spaces (\Omega_{1},\mathscr{H}_{1},\widehat{\mathbb{E}}_{1}) and (\Omega_{2},\mathscr{H}_{2},\widehat{\mathbb{E}}_{2}). They are called identically distributed, denoted by \bm{X}_{1}\overset{d}{=}\bm{X}_{2}, if

\widehat{\mathbb{E}}_{1}[\varphi(\bm{X}_{1})]=\widehat{\mathbb{E}}_{2}[\varphi(\bm{X}_{2})],\;\;\forall\varphi\in C_{l,Lip}(\mathbb{R}_{n}),

whenever the sub-linear expectations are finite. A sequence \{X_{n};n\geq 1\} of random variables is said to be identically distributed if X_{i}\overset{d}{=}X_{1} for each i\geq 1.

(ii)

(Independence) In a sub-linear expectation space (\Omega,\mathscr{H},\widehat{\mathbb{E}}), a random vector \bm{Y}=(Y_{1},\ldots,Y_{n}), Y_{i}\in\mathscr{H}, is said to be independent to another random vector \bm{X}=(X_{1},\ldots,X_{m}), X_{i}\in\mathscr{H}, under \widehat{\mathbb{E}} if for each test function \varphi\in C_{l,Lip}(\mathbb{R}_{m}\times\mathbb{R}_{n}) we have \widehat{\mathbb{E}}[\varphi(\bm{X},\bm{Y})]=\widehat{\mathbb{E}}\big{[}\widehat{\mathbb{E}}[\varphi(\bm{x},\bm{Y})]\big{|}_{\bm{x}=\bm{X}}\big{]}, whenever \overline{\varphi}(\bm{x}):=\widehat{\mathbb{E}}\left[|\varphi(\bm{x},\bm{Y})|\right]<\infty for all \bm{x} and \widehat{\mathbb{E}}\left[|\overline{\varphi}(\bm{X})|\right]<\infty.

(iii)

(Independent random variables) A sequence of random variables \{X_{n};n\geq 1\} is said to be independent, if X_{i+1} is independent to (X_{1},\ldots,X_{i}) for each i\geq 1.

Definition 2.3

(Extended Independence) A sequence of random variables \{X_{n};n\geq 1\} is said to be extended independent, if

\widehat{\mathbb{E}}\left[\prod_{i=1}^{n}\psi_{i}(X_{i})\right]=\prod_{i=1}^{n}\widehat{\mathbb{E}}\left[\psi_{i}(X_{i})\right],\;\forall\;n\geq 2,\forall\;0\leq\psi_{i}(x)\in C_{l,Lip}(\mathbb{R}). (2.3)

It can be shown that independence implies extended independence. It shall be noted that the extended independence of \{X_{n};n\geq 1\} under \widehat{\mathbb{E}} does not imply extended independence under \widehat{\mathcal{E}}. Independence in the sense of (2.3) was proposed in Chen, Wu and Li (2013), but there the function space of the \psi_{i}'s is assumed to be the family of all non-negative Borel functions. Here we use the same function space as Peng’s. The function space can also be limited to C_{b,Lip}(\mathbb{R}).

Recall that a sequence of random variables \{Y_{n};n\geq 1\} on a probability space (\Omega,\mathcal{F},\textsf{P}) is said to be lower extended negatively dependent (LEND) if there is some dominating constant K\geq 1 such that, for all x_{i}, i=1,2,\ldots,

\textsf{P}\left(\bigcap_{i=1}^{n}\left\{Y_{i}\leq x_{i}\right\}\right)\leq K\prod_{i=1}^{n}\textsf{P}\left(Y_{i}\leq x_{i}\right),\;\;\forall\;n, (2.4)

and they are called upper extended negatively dependent (UEND) if for all x_{i}, i=1,2,\ldots,

\textsf{P}\left(\bigcap_{i=1}^{n}\left\{Y_{i}>x_{i}\right\}\right)\leq K\prod_{i=1}^{n}\textsf{P}\left(Y_{i}>x_{i}\right),\;\;\forall\;n. (2.5)

They are called extended negatively dependent (END) if they are both LEND and UEND (cf. Liu (2009)). In the case K=1, the notion of END random variables reduces to the well-known notion of negatively dependent (ND) random variables, which was introduced by Lehmann (1966) (cf. also Block et al. (1982), Joag-Dev and Proschan (1983) etc). It is shown that if \{Y_{n};n\geq 1\} are upper (resp. lower) extended negatively dependent, and the functions g_{i}\geq 0, i=1,2,\ldots, are all non-decreasing (resp. all non-increasing), then

\textsf{E}\left[\prod_{i=1}^{n}g_{i}(Y_{i})\right]\leq K\prod_{i=1}^{n}\textsf{E}\left[g_{i}(Y_{i})\right],\;n\geq 1, (2.6)

(cf., Chen, Chen and Ng (2010)). Motivated by the above property (2.6) and Definition 2.3, we introduce a concept of extended negative dependence under the sub-linear expectation.
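As a sanity check of (2.6) with K=1, consider a toy example of ours: the antithetic pair (Y_{1},Y_{2}) uniform on \{(0,1),(1,0)\} is ND, and the product inequality can be tested against random non-negative non-decreasing step functions.

```python
import random

# (Y1, Y2) uniform on {(0, 1), (1, 0)} is negatively dependent (END with K = 1):
# E[g1(Y1) g2(Y2)] <= E[g1(Y1)] E[g2(Y2)] for non-negative non-decreasing g1, g2.
support = [(0, 1), (1, 0)]               # each atom has probability 1/2

def E(f):                                # expectation under the joint law
    return sum(f(y1, y2) for y1, y2 in support) / len(support)

random.seed(0)
for _ in range(1000):
    # random non-negative non-decreasing step functions on {0, 1}
    a0, b0 = random.random(), random.random()
    a1, b1 = a0 + random.random(), b0 + random.random()
    g1 = lambda y: a1 if y == 1 else a0
    g2 = lambda y: b1 if y == 1 else b0
    lhs = E(lambda y1, y2: g1(y1) * g2(y2))
    rhs = E(lambda y1, y2: g1(y1)) * E(lambda y1, y2: g2(y2))
    assert lhs <= rhs + 1e-12            # (2.6) with K = 1
print("ND check passed")
```

Indeed, here rhs - lhs = (a_{1}-a_{0})(b_{1}-b_{0})/4 \geq 0, so the inequality holds for every such pair of monotone functions.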

Definition 2.4

(Extended negative dependence) In a sub-linear expectation space (\Omega,\mathscr{H},\widehat{\mathbb{E}}), random variables \{X_{n};n\geq 1\} are said to be upper (resp. lower) extended negatively dependent if there is some dominating constant K\geq 1 such that

\widehat{\mathbb{E}}\left[\prod_{i=1}^{n}g_{i}(X_{i})\right]\leq K\prod_{i=1}^{n}\widehat{\mathbb{E}}\left[g_{i}(X_{i})\right],\;n\geq 1, (2.7)

whenever the non-negative functions g_{i}\in C_{b,Lip}(\mathbb{R}), i=1,2,\ldots, are all non-decreasing (resp. all non-increasing). They are called extended negatively dependent if they are both upper extended negatively dependent and lower extended negatively dependent.

It is obvious that, if \{X_{n};n\geq 1\} is a sequence of extended independent random variables and f_{1}(x),f_{2}(x),\ldots\in C_{l,Lip}(\mathbb{R}), then \{f_{n}(X_{n});n\geq 1\} is also a sequence of extended independent random variables, and they are extended negatively dependent with K=1; if \{X_{n};n\geq 1\} is a sequence of upper extended negatively dependent random variables and f_{1}(x),f_{2}(x),\ldots\in C_{l,Lip}(\mathbb{R}) are all non-decreasing (resp. all non-increasing) functions, then \{f_{n}(X_{n});n\geq 1\} is also a sequence of upper (resp. lower) extended negatively dependent random variables.

Example 2.1

Let (\Omega,\mathcal{F}) be a measurable space, \mathscr{P} a family of probability measures on it, and \{X_{n};n\geq 1\} a sequence of random variables. Define an upper expectation \widehat{\mathbb{E}}[\cdot] by

\widehat{\mathbb{E}}[X]=\sup_{P\in\mathscr{P}}E_{P}[X].

Then \widehat{\mathbb{E}} is a sub-linear expectation. If \{X_{n};n\geq 1\} are extended negatively dependent in the sense of (2.5) and (2.4) relative to each P\in\mathscr{P} with the same dominating constant K, then they are extended negatively dependent under \widehat{\mathbb{E}}.
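The last claim admits a one-line verification via (2.6): since each E_{P} satisfies the END bound with the same constant K, taking suprema gives

```latex
\widehat{\mathbb{E}}\Big[\prod_{i=1}^{n}g_{i}(X_{i})\Big]
  =\sup_{P\in\mathscr{P}}E_{P}\Big[\prod_{i=1}^{n}g_{i}(X_{i})\Big]
  \le\sup_{P\in\mathscr{P}}K\prod_{i=1}^{n}E_{P}\big[g_{i}(X_{i})\big]
  \le K\prod_{i=1}^{n}\sup_{P\in\mathscr{P}}E_{P}\big[g_{i}(X_{i})\big]
  =K\prod_{i=1}^{n}\widehat{\mathbb{E}}\big[g_{i}(X_{i})\big],
```

for all non-negative g_{i}\in C_{b,Lip}(\mathbb{R}) that are all non-decreasing (resp. all non-increasing).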

We will establish exponential inequalities, the law of large numbers and the law of the iterated logarithm for this kind of extended negatively dependent random variables.

3 Exponential inequalities

In this section, we are going to establish some key inequalities for the sums of extended negatively dependent random variables, including moment inequalities and the exponential inequalities. These inequalities improve Lemmas 2.5 and 2.6 of Chen, Chen and Ng (2010). Let \{X_{1},\ldots,X_{n}\} be a sequence of random variables in (\Omega,\mathscr{H},\widehat{\mathbb{E}}). Set S_{n}=\sum_{k=1}^{n}X_{k}, B_{n}=\sum_{k=1}^{n}\widehat{\mathbb{E}}[X_{k}^{2}] and M_{n,p}=\sum_{k=1}^{n}\widehat{\mathbb{E}}[|X_{k}|^{p}], M_{n,p,+}=\sum_{k=1}^{n}\widehat{\mathbb{E}}[(X_{k}^{+})^{p}], p\geq 2.

Theorem 3.1

Let \{X_{1},\ldots,X_{n}\} be a sequence of upper extended negatively dependent random variables in (\Omega,\mathscr{H},\widehat{\mathbb{E}}) with \widehat{\mathbb{E}}[X_{k}]\leq 0. Then

(a)

For all x,y>0,

\displaystyle\mathbb{V}\left(S_{n}\geq x\right)\leq \displaystyle\mathbb{V}\left(\max_{k\leq n}X_{k}\geq y\right)
\displaystyle+K\exp\left\{-\frac{x^{2}}{2(xy+B_{n})}\Big{(}1+\frac{2}{3}\ln\big{(}1+\frac{xy}{B_{n}}\big{)}\Big{)}\right\}; (3.1)
(b)

For any p\geq 2, there exists a constant C_{p}\geq 1 such that for all x>0 and 0<\delta\leq 1,

\mathbb{V}\left(S_{n}\geq x\right)\leq C_{p}\delta^{-2p}K\frac{M_{n,p,+}}{x^{p}}+K\exp\left\{-\frac{x^{2}}{2B_{n}(1+\delta)}\right\}, (3.2)
(c)

We have for x>0, r>0 and p\geq 2,

\mathbb{V}\left(S_{n}^{+}\geq x\right)\leq\mathbb{V}\big{(}\max_{k\leq n}X_{k}^{+}\geq\frac{x}{r}\big{)}+Ke^{r}\left(\frac{rB_{n}}{rB_{n}+x^{2}}\right)^{r}, (3.3)
\displaystyle C_{\mathbb{V}}\left[(S_{n}^{+})^{p}\right]\leq \displaystyle p^{p}C_{\mathbb{V}}\Big{[}\big{(}\max_{k\leq n}X_{k}^{+}\big{)}^{p}\Big{]}+C_{p}B_{n}^{p/2}
\displaystyle\leq \displaystyle p^{p}\sum_{k=1}^{n}C_{\mathbb{V}}\Big{[}(X_{k}^{+})^{p}\Big{]}+C_{p}B_{n}^{p/2}. (3.4)

In particular,

\mathbb{V}\left(S_{n}\geq x\right)\leq(1+Ke)\frac{B_{n}}{x^{2}},\;\;\forall x>0. (3.5)
Proof.

Let Y_{k}=X_{k}\wedge y, T_{n}=\sum_{k=1}^{n}Y_{k}. Then X_{k}-Y_{k}=(X_{k}-y)^{+}\geq 0 and \widehat{\mathbb{E}}[Y_{k}]\leq\widehat{\mathbb{E}}[X_{k}]\leq 0. Note that \varphi(x):=e^{t(x\wedge y)} is a bounded non-decreasing function and belongs to C_{b,Lip}(\mathbb{R}) since 0\leq\varphi^{\prime}(x)\leq te^{ty} if t>0. It follows that for any t>0,

\displaystyle\mathbb{V}\left(S_{n}\geq x\right)\leq \displaystyle\mathbb{V}\big{(}\max_{k\leq n}X_{k}\geq y\big{)}+\mathbb{V}\left(T_{n}\geq x\right),
\displaystyle\mathbb{V}\left(T_{n}\geq x\right)\leq \displaystyle e^{-tx}\widehat{\mathbb{E}}[e^{tT_{n}}]\leq e^{-tx}K\prod_{k=1}^{n}\widehat{\mathbb{E}}[e^{tY_{k}}],

by the definition of upper extended negative dependence. The remainder of the proof is similar to that of Zhang (2015a). For the completeness of this paper, we also present it here.

Note

e^{tY_{k}}=1+tY_{k}+\frac{e^{tY_{k}}-1-tY_{k}}{Y_{k}^{2}}Y_{k}^{2}\leq 1+tY_{k}+\frac{e^{ty}-1-ty}{y^{2}}Y_{k}^{2}.

We have

\widehat{\mathbb{E}}[e^{tY_{k}}]\leq 1+\frac{e^{ty}-1-ty}{y^{2}}\widehat{\mathbb{E}}[Y_{k}^{2}]\leq\exp\left\{\frac{e^{ty}-1-ty}{y^{2}}\widehat{\mathbb{E}}[X_{k}^{2}]\right\}.

Choosing t=\frac{1}{y}\ln\big{(}1+\frac{xy}{B_{n}}\big{)} yields

\displaystyle\mathbb{V}\left(T_{n}\geq x\right)\leq \displaystyle Ke^{-tx}\exp\left\{\frac{e^{ty}-1-ty}{y^{2}}B_{n}\right\}
\displaystyle= \displaystyle K\exp\left\{\frac{x}{y}-\frac{x}{y}\Big{(}\frac{B_{n}}{xy}+1\Big{)}\ln\Big{(}1+\frac{xy}{B_{n}}\Big{)}\right\}. (3.6)

Applying the elementary inequality

\ln(1+t)\geq\frac{t}{1+t}+\frac{t^{2}}{2(1+t)^{2}}\big{(}1+\frac{2}{3}\ln(1+t)\big{)}

yields

\Big{(}\frac{B_{n}}{xy}+1\Big{)}\ln\Big{(}1+\frac{xy}{B_{n}}\Big{)}\geq 1+\frac{xy}{2(xy+B_{n})}\Big{(}1+\frac{2}{3}\ln\big{(}1+\frac{xy}{B_{n}}\big{)}\Big{)}.

(3.1) is proved.
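The elementary logarithm inequality invoked in this step can be checked numerically on a wide grid. This is a sanity check only, not a substitute for the (calculus) proof:

```python
import numpy as np

# Check ln(1+t) >= t/(1+t) + t^2/(2(1+t)^2) * (1 + (2/3) ln(1+t)) for t > 0.
t = np.linspace(1e-3, 1e4, 1_000_000)
lhs = np.log1p(t)
rhs = t / (1 + t) + t ** 2 / (2 * (1 + t) ** 2) * (1 + (2.0 / 3.0) * np.log1p(t))
gap = lhs - rhs
print(bool(gap.min() >= -1e-12))   # True: the inequality holds on the grid
```

Both sides agree to third order at t = 0 (each expands as t - t^{2}/2 + t^{3}/3 + O(t^{4})), which is why the bound is sharp for small t.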

Next we show (b). If xy\leq\delta B_{n}, then

\frac{x^{2}}{2(xy+B_{n})}\Big{(}1+\frac{2}{3}\ln\big{(}1+\frac{xy}{B_{n}}\big{)}\Big{)}\geq\frac{x^{2}}{2B_{n}(1+\delta)}.

If xy\geq\delta B_{n}, then

\frac{x^{2}}{2(xy+B_{n})}\Big{(}1+\frac{2}{3}\ln\big{(}1+\frac{xy}{B_{n}}\big{)}\Big{)}\geq\frac{x}{2(1+1/\delta)y}.

It follows that

\mathbb{V}\left(T_{n}\geq x\right)\leq K\exp\left\{-\frac{x^{2}}{2B_{n}(1+\delta)}\right\}+K\exp\left\{-\frac{x}{2(1+1/\delta)y}\right\} (3.7)

by (3.6). Let

\beta(x)=\beta_{p}(x)=\frac{1}{x^{p}}\sum_{k=1}^{n}\widehat{\mathbb{E}}[(X_{k}^{+})^{p}],

and choose

\rho=1\wedge\frac{1}{2(1+1/\delta)\delta\log(1/\beta(x))},\;\;y=\rho\delta x.

Then by (3.7),

\displaystyle\mathbb{V}\big{(}S_{n}\geq(1+2\delta)x\big{)}\leq\mathbb{V}\big{(}T_{n}\geq x\big{)}+\mathbb{V}\big{(}\sum_{i=1}^{n}(X_{i}-\rho\delta x)^{+}\geq 2\delta x\big{)}
\displaystyle\leq \displaystyle K\exp\left\{-\frac{x^{2}}{2B_{n}(1+\delta)}\right\}+K\beta(x)+\mathbb{V}\big{(}\max_{i\leq n}X_{i}\geq\delta x\big{)}+\mathbb{V}\big{(}\sum_{i=1}^{n}(X_{i}-\rho\delta x)^{+}\wedge(\delta x)\geq 2\delta x\big{)}.

It is obvious that

\mathbb{V}\big{(}\max_{i\leq n}X_{i}\geq\delta x\big{)}\leq\delta^{-p}\beta(x).

On the other hand,

\displaystyle\mathbb{V}\big{(}\sum_{i=1}^{n}(X_{i}-\rho\delta x)^{+}\wedge(\delta x)\geq 2\delta x\big{)}=\mathbb{V}\left(\sum_{i=1}^{n}\left[\Big{(}\frac{X_{i}}{\delta x}-\rho\Big{)}^{+}\wedge 1\right]\geq 2\right)
\displaystyle\leq \displaystyle e^{-2t}\widehat{\mathbb{E}}\exp\left\{t\sum_{i=1}^{n}\left[\Big{(}\frac{X_{i}}{\delta x}-\rho\Big{)}^{+}\wedge 1\right]\right\}\leq e^{-2t}K\prod_{i=1}^{n}\widehat{\mathbb{E}}\exp\left\{t\left[\Big{(}\frac{X_{i}}{\delta x}-\rho\Big{)}^{+}\wedge 1\right]\right\}
\displaystyle\leq \displaystyle e^{-2t}K\prod_{i=1}^{n}\left[1+e^{t}\mathbb{V}(X_{i}\geq\rho\delta x)\right]\leq K\exp\left\{-2t+e^{t}\sum_{i=1}^{n}\mathbb{V}(X_{i}\geq\rho\delta x)\right\},

where the second inequality is due to the upper extended negative dependence. Assume \beta(x)<1. Let e^{t}\sum_{i=1}^{n}\mathbb{V}(X_{i}\geq\rho\delta x)=2 (while, if \sum_{i=1}^{n}\mathbb{V}(X_{i}\geq\rho\delta x)=0, we let t\to\infty). We obtain

\displaystyle\mathbb{V}\big{(}\sum_{i=1}^{n}(X_{i}-\rho\delta x)^{+}\wedge(\delta x)\geq 2\delta x\big{)}\leq Ke^{2}\left(\frac{1}{2}\sum_{i=1}^{n}\mathbb{V}(X_{i}\geq\rho\delta x)\right)^{2}
\displaystyle\leq \displaystyle Ke^{2}\left(\frac{\beta(x)}{2(\delta\rho)^{p}}\right)^{2}=Ke^{2}2^{-2}\delta^{-2p}(2(\delta+1))^{2p}\beta^{2}(x)\left(\log\frac{1}{\beta(x)}\right)^{2p}\leq C_{p}\delta^{-2p}\beta(x),

where the last inequality is due to the fact that (\log 1/t)^{2p}\leq C_{p}/t (0<t<1). It follows that

\displaystyle\mathbb{V}\big{(}S_{n}\geq(1+2\delta)x\big{)}\leq K\exp\left\{-\frac{x^{2}}{2B_{n}(1+\delta)}\right\}+C_{p}K\delta^{-2p}\beta(x).

If \beta(x)\geq 1, then the above inequality is obvious. Now letting z=(1+2\delta)x and \delta^{\prime}=(1+\delta)(1+2\delta)^{2}-1 yields

\displaystyle\mathbb{V}\big{(}S_{n}\geq z\big{)}\leq K\exp\left\{-\frac{z^{2}}{2B_{n}(1+\delta^{\prime})}\right\}+C_{p}K(\delta^{\prime})^{-2p}\beta(z).

(b) is proved.

Finally, we consider (c). Putting y=x/r in (3.6), we obtain (3.3). Note that

C_{\mathbb{V}}\big{[}(X^{+})^{p}\big{]}=\int_{0}^{\infty}\mathbb{V}\big{(}(X^{+})^{p}>x\big{)}dx=\int_{0}^{\infty}px^{p-1}\mathbb{V}\big{(}X>x\big{)}dx.

Then putting r=p>p/2, multiplying both sides of (3.3) by px^{p-1} and integrating over the positive half-line, we conclude (3.4). ∎

4 The law of large numbers

For a sequence \{X_{n};n\geq 1\} of random variables in the sub-linear expectation space (\Omega,\mathscr{H},\widehat{\mathbb{E}}), we denote S_{n}=\sum_{k=1}^{n}X_{k}, S_{0}=0. We first consider the weak law of large numbers.

Theorem 4.1
(a)

Suppose that X_{1},X_{2},\ldots are identically distributed and extended negatively dependent with \lim\limits_{c\to\infty}\widehat{\mathbb{E}}\left[(|X_{1}|-c)^{+}\right]=0. Then for any \epsilon>0,

\mathcal{V}\left(\widehat{\mathcal{E}}[X_{1}]-\epsilon\leq\frac{S_{n}}{n}\leq\widehat{\mathbb{E}}[X_{1}]+\epsilon\right)\to 1. (4.1)
(b)

Suppose that X_{1},X_{2},\ldots are identically distributed and extended independent with \lim\limits_{c\to\infty}\widehat{\mathbb{E}}\left[(|X_{1}|-c)^{+}\right]=0. Then for any \epsilon>0,

\mathbb{V}\left(\Big{|}\frac{S_{n}}{n}-\widehat{\mathbb{E}}[X_{1}]\Big{|}\leq\epsilon\right)\to 1 (4.2)

and

\mathbb{V}\left(\Big{|}\frac{S_{n}}{n}-\widehat{\mathcal{E}}[X_{1}]\Big{|}\leq\epsilon\right)\to 1. (4.3)

We conjecture that for any point p\in\big{[}\widehat{\mathcal{E}}[X_{1}],\widehat{\mathbb{E}}[X_{1}]\big{]}, \mathbb{V}\left(\Big{|}\frac{S_{n}}{n}-p\Big{|}\leq\epsilon\right)\to 1.

Proof.

Define

f_{c}(x)=(-c)\vee(x\wedge c),\;\;\widehat{f}_{c}(x)=x-f_{c}(x) (4.4)

and \overline{X}_{j}=f_{c}(X_{j}), j=1,2,\ldots. Then f_{c}(\cdot)\in C_{l,Lip}(\mathbb{R}), and \overline{X}_{j}, j=1,2,\ldots, are extended negatively dependent (resp. extended independent) identically distributed random variables. It is easily seen that \widehat{\mathbb{E}}[f_{c}(X_{1})]\to\widehat{\mathbb{E}}[X_{1}] and \widehat{\mathcal{E}}[f_{c}(X_{1})]\to\widehat{\mathcal{E}}[X_{1}] as c\to+\infty, and

\displaystyle\sup_{n}\mathbb{V}\left(\Big{|}S_{n}-\sum_{j=1}^{n}\overline{X}_{j}\Big{|}\geq\epsilon n\right)\leq \displaystyle\sup_{n}\frac{1}{\epsilon n}\sum_{j=1}^{n}\widehat{\mathbb{E}}|\widehat{f}_{c}(X_{j})|
\displaystyle\leq \displaystyle\frac{1}{\epsilon}\widehat{\mathbb{E}}(|X_{1}|-c)^{+}\to 0\text{ as }c\to\infty.
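The truncation f_{c} in (4.4) and the identity behind this estimate, |x-f_{c}(x)|=(|x|-c)^{+}, can be checked directly (an illustration of ours):

```python
import numpy as np

# f_c(x) = (-c) v (x ^ c) clips x to [-c, c]; the remainder x - f_c(x)
# satisfies |x - f_c(x)| = (|x| - c)^+, and f_c is 1-Lipschitz.
def f_c(x, c):
    return np.maximum(-c, np.minimum(x, c))

c = 3.0
x = np.linspace(-10.0, 10.0, 10001)
remainder = np.abs(x - f_c(x, c))
assert np.allclose(remainder, np.maximum(np.abs(x) - c, 0.0))
assert np.all(np.abs(np.diff(f_c(x, c))) <= np.diff(x) + 1e-12)   # 1-Lipschitz
print("truncation identities verified")
```

The 1-Lipschitz property is what places f_{c} in C_{l,Lip}(\mathbb{R}), and the remainder identity gives the bound \widehat{\mathbb{E}}|\widehat{f}_{c}(X_{j})|=\widehat{\mathbb{E}}(|X_{j}|-c)^{+} used above.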

So, it suffices to consider \{\overline{X}_{n};n\geq 1\}, and then without loss of generality we can assume that X_{n} is bounded by a constant c>0. By (3.5),

\mathbb{V}\left(\frac{S_{n}}{n}-\widehat{\mathbb{E}}[X_{1}]\geq\epsilon\right)\leq C\frac{\sum_{j=1}^{n}\widehat{\mathbb{E}}\left[\big{(}X_{j}-\widehat{\mathbb{E}}[X_{j}]\big{)}^{2}\right]}{n^{2}\epsilon^{2}}\leq C\frac{nc^{2}}{n^{2}\epsilon^{2}}\to 0, (4.5)

and similarly,

\mathbb{V}\left(\frac{S_{n}}{n}-\widehat{\mathcal{E}}[X_{1}]\leq-\epsilon\right)=\mathbb{V}\left(\frac{-S_{n}}{n}-\widehat{\mathbb{E}}[-X_{1}]\geq\epsilon\right)\to 0.

(4.1) is proved.

Now, suppose that X_{1},X_{2},\ldots, are extended independent. By noting (4.1), for (4.2) and (4.3) it is sufficient to show that

\mathbb{V}\left(\frac{S_{n}}{n}-\widehat{\mathbb{E}}[X_{1}]\geq-\epsilon\right% )\to 1. (4.6)

It is easily seen that (4.6) is equivalent to

\mathcal{V}\left(\frac{\sum_{j=1}^{n}(-X_{j}-\widehat{\mathcal{E}}[-X_{j}])}{n}\geq\epsilon\right)\to 0. (4.7)

However, we have no inequality for estimating the lower capacity \mathcal{V}(\cdot) by which (4.7) could be verified. We shall instead use a more technical method to show (4.6) directly.

For any 0<\delta<\epsilon and t>0, we have

\displaystyle I\left\{\frac{S_{n}}{n}-\widehat{\mathbb{E}}[X_{1}]\geq-\epsilon\right\}
\displaystyle\geq \displaystyle e^{-t\delta}\left(\exp\left\{t\frac{S_{n}-n\widehat{\mathbb{E}}[% X_{1}]}{n}\right\}-e^{-t\epsilon}\right)I\left\{\frac{S_{n}-n\widehat{\mathbb{% E}}[X_{1}]}{n}\leq\delta\right\}
\displaystyle= \displaystyle e^{-t\delta}\left(\exp\left\{t\frac{S_{n}-n\widehat{\mathbb{E}}[% X_{1}]}{n}\right\}-e^{-t\epsilon}\right)
\displaystyle-e^{-t\delta}\left(\exp\left\{t\frac{S_{n}-n\widehat{\mathbb{E}}[% X_{1}]}{n}\right\}-e^{-t\epsilon}\right)I\left\{\frac{S_{n}-n\widehat{\mathbb{% E}}[X_{1}]}{n}>\delta\right\}
\displaystyle\geq \displaystyle e^{-t\delta}\left(\prod_{j=1}^{n}\exp\left\{t\frac{X_{j}-% \widehat{\mathbb{E}}[X_{j}]}{n}\right\}-e^{-t\epsilon}\right)-e^{-t\delta}e^{2% tc}I\left\{\frac{S_{n}-n\widehat{\mathbb{E}}[X_{1}]}{n}>\delta\right\}.

It follows that

\displaystyle\mathbb{V}\left(\frac{S_{n}}{n}-\widehat{\mathbb{E}}[X_{1}]\geq-% \epsilon\right)
\displaystyle\geq \displaystyle e^{-t\delta}\left(\widehat{\mathbb{E}}\left[\prod_{j=1}^{n}\exp% \left\{t\frac{X_{j}-\widehat{\mathbb{E}}[X_{j}]}{n}\right\}\right]-e^{-t% \epsilon}\right)-e^{-t\delta}e^{2tc}\mathbb{V}\left(\frac{S_{n}-n\widehat{% \mathbb{E}}[X_{1}]}{n}>\delta\right).

By (4.5), the second term goes to zero as n\to\infty. By the extended independence and the fact that e^{x}\geq 1+x,

\displaystyle\widehat{\mathbb{E}}\left[\prod_{j=1}^{n}\exp\left\{t\frac{X_{j}-% \widehat{\mathbb{E}}[X_{j}]}{n}\right\}\right]= \displaystyle\prod_{j=1}^{n}\widehat{\mathbb{E}}\left[\exp\left\{t\frac{X_{j}-% \widehat{\mathbb{E}}[X_{j}]}{n}\right\}\right]
\displaystyle\geq \displaystyle\prod_{j=1}^{n}\widehat{\mathbb{E}}\left[t\frac{X_{j}-\widehat{% \mathbb{E}}[X_{j}]}{n}+1\right]=1.

It follows that

\liminf_{n\to\infty}\mathbb{V}\left(\frac{S_{n}}{n}-\widehat{\mathbb{E}}[X_{1}% ]\geq-\epsilon\right)\geq e^{-t\delta}\left(1-e^{-t\epsilon}\right).

Letting \delta\to 0 and then t\to\infty yields (4.6). The proof is completed. ∎
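As a quick numerical sanity check (not part of the proof), the closing bound e^{-t\delta}\left(1-e^{-t\epsilon}\right) can be seen to approach 1 along a sequence with \delta\to 0 and then t\to\infty; the parameter values below are our own choices.

```python
import math

# Final step of the proof: sup over 0 < delta < eps and t > 0 of the lower
# bound e^{-t*delta} * (1 - e^{-t*eps}).  With eps = 0.1 (our choice), a sweep
# with delta -> 0 first and then t -> infinity drives the bound toward 1.
def lower_bound(t, delta, eps):
    return math.exp(-t * delta) * (1.0 - math.exp(-t * eps))

eps = 0.1
vals = [lower_bound(t, delta, eps)
        for t, delta in [(1, 1e-2), (10, 1e-3), (100, 1e-4), (1000, 1e-5)]]
# The bounds increase toward 1 along this (t, delta) sequence.
assert all(b > a for a, b in zip(vals, vals[1:]))
assert vals[-1] > 0.98
```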


Before we give the strong laws of large numbers, we need some more notations about the sub-linear expectations and capacities.

Definition 4.1

(I) A sub-linear expectation \widehat{\mathbb{E}}:\mathscr{H}\to\mathbb{R} is said to be countably sub-additive if it satisfies

(1)

Countable sub-additivity: \widehat{\mathbb{E}}[X]\leq\sum_{n=1}^{\infty}\widehat{\mathbb{E}}[X_{n}], whenever X\leq\sum_{n=1}^{\infty}X_{n}, X,X_{n}\in\mathscr{H} and X\geq 0,X_{n}\geq 0, n=1,2,\ldots;

It is said to be continuous if it satisfies

(2)

Continuity from below: \widehat{\mathbb{E}}[X_{n}]\uparrow\widehat{\mathbb{E}}[X] if 0\leq X_{n}\uparrow X, where X_{n},X\in\mathscr{H};

(3)

Continuity from above: \widehat{\mathbb{E}}[X_{n}]\downarrow\widehat{\mathbb{E}}[X] if 0\leq X_{n}\downarrow X, where X_{n},X\in\mathscr{H}.

(II) A function V:\mathcal{F}\to[0,1] is said to be countably sub-additive if

V\Big{(}\bigcup_{n=1}^{\infty}A_{n}\Big{)}\leq\sum_{n=1}^{\infty}V(A_{n})\;\;% \forall A_{n}\in\mathcal{F}.

(III) A capacity V:\mathcal{F}\to[0,1] is called a continuous capacity if it satisfies

(III1)

Continuity from below: V(A_{n})\uparrow V(A) if A_{n}\uparrow A, where A_{n},A\in\mathcal{F};

(III2)

Continuity from above: V(A_{n})\downarrow V(A) if A_{n}\downarrow A, where A_{n},A\in\mathcal{F}.


It is obvious that a continuous sub-additive capacity V (resp. a continuous sub-linear expectation \widehat{\mathbb{E}}) is countably sub-additive. The “convergence part” of the Borel-Cantelli Lemma remains true for a countably sub-additive capacity.

Lemma 4.1

(Borel-Cantelli’s Lemma) Let \{A_{n},n\geq 1\} be a sequence of events in \mathcal{F}. Suppose that V is a countably sub-additive capacity. If \sum_{n=1}^{\infty}V\left(A_{n}\right)<\infty, then

V\left(A_{n}\;\;i.o.\right)=0,\;\;\text{ where }\{A_{n}\;\;i.o.\}=\bigcap_{n=1% }^{\infty}\bigcup_{i=n}^{\infty}A_{i}.

If \mathbb{V} is a continuous capacity and \{A_{n}^{c},n\geq 1\} are independent relative to \mathcal{V}, i.e., \mathcal{V}(\bigcap_{j=m}^{m+n}A_{j}^{c})=\prod_{j=m}^{m+n}\mathcal{V}(A_{j}^{c}) for all n,m\geq 1, then we can show that \sum_{n=1}^{\infty}\mathbb{V}\left(A_{n}\right)=\infty implies \mathbb{V}\left(A_{n}\;\;i.o.\right)=1. However, the extended independence does not imply that \{X_{n}\in B_{n};n\geq 1\} are independent relative to \mathcal{V}, even when (2.3) is assumed to hold for all non-negative Borel functions \psi_{i}. So, in general, we do not have “the divergence part” of the Borel-Cantelli Lemma.

Since \mathbb{V} may not be countably sub-additive in general, we define an outer capacity \mathbb{V}^{\ast} by

\mathbb{V}^{\ast}(A)=\inf\Big{\{}\sum_{n=1}^{\infty}\mathbb{V}(A_{n}):A\subset% \bigcup_{n=1}^{\infty}A_{n}\Big{\}},\;\;\mathcal{V}^{\ast}(A)=1-\mathbb{V}^{% \ast}(A^{c}),\;\;\;A\in\mathcal{F}.

Then it can be shown that \mathbb{V}^{\ast} is a countably sub-additive capacity with \mathbb{V}^{\ast}(A)\leq\mathbb{V}(A), and that it has the following properties:

(a*)

If \mathbb{V} is countably sub-additive, then \mathbb{V}^{\ast}\equiv\mathbb{V};

(b*)

If I_{A}\leq g, g\in\mathscr{H}, then \mathbb{V}^{\ast}(A)\leq\widehat{\mathbb{E}}[g]. Further, if \widehat{\mathbb{E}} is countably sub-additive, then

\widehat{\mathbb{E}}[f]\leq\mathbb{V}^{\ast}(A)\leq\mathbb{V}(A)\leq\widehat{% \mathbb{E}}[g],\;\;\forall f\leq I_{A}\leq g,f,g\in\mathscr{H};
(c*)

\mathbb{V}^{\ast} is the largest countably sub-additive capacity satisfying the property that \mathbb{V}^{\ast}(A)\leq\widehat{\mathbb{E}}[g] whenever I_{A}\leq g\in\mathscr{H}, i.e., if V is also a countably sub-additive capacity satisfying V(A)\leq\widehat{\mathbb{E}}[g] whenever I_{A}\leq g\in\mathscr{H}, then V(A)\leq\mathbb{V}^{\ast}(A).
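For the reader's convenience, the countable sub-additivity of \mathbb{V}^{\ast} asserted above follows from a standard \varepsilon/2^{n} covering argument:

```latex
% Given A_1, A_2, \ldots \in \mathcal{F} and \varepsilon > 0, choose for each n
% a cover A_n \subset \bigcup_{k} B_{n,k} with
%     \sum_{k} \mathbb{V}(B_{n,k}) \le \mathbb{V}^{\ast}(A_n) + \varepsilon 2^{-n}.
% The countable family \{B_{n,k}\} covers \bigcup_n A_n, and hence
\mathbb{V}^{\ast}\Big(\bigcup_{n=1}^{\infty} A_n\Big)
   \le \sum_{n,k} \mathbb{V}(B_{n,k})
   \le \sum_{n=1}^{\infty} \mathbb{V}^{\ast}(A_n) + \varepsilon .
% Letting \varepsilon \downarrow 0 gives the countable sub-additivity.
```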

The following are our main results on the Kolmogorov type strong laws of large numbers.

Theorem 4.2

Let \{X_{n};n\geq 1\} be a sequence of identically distributed random variables in (\Omega,\mathscr{H},\widehat{\mathbb{E}}).

(a)

Suppose \lim\limits_{c\to\infty}\widehat{\mathbb{E}}\left[(|X_{1}|-c)^{+}\right]=0 and C_{\mathbb{V}}[|X_{1}|]<\infty. If X_{1},X_{2},\ldots, are upper extended negatively dependent, then

\mathbb{V}^{\ast}\left(\limsup_{n\to\infty}\frac{S_{n}}{n}>\widehat{\mathbb{E}% }[X_{1}]\right)=0. (4.8)

If X_{1},X_{2},\ldots, are extended negatively dependent, then

\mathbb{V}^{\ast}\left(\Big{\{}\liminf_{n\to\infty}\frac{S_{n}}{n}<\widehat{% \mathcal{E}}[X_{1}]\Big{\}}\bigcup\Big{\{}\limsup_{n\to\infty}\frac{S_{n}}{n}>% \widehat{\mathbb{E}}[X_{1}]\Big{\}}\right)=0. (4.9)
(b)

Suppose that \mathbb{V} is countably sub-additive. Then \mathbb{V}^{\ast}=\mathbb{V} and so (a) remains true when \mathbb{V}^{\ast} is replaced by \mathbb{V}.

(c)

Suppose that \mathbb{V} is continuous. If X_{1},X_{2},\ldots, are extended independent, and

\mathbb{V}\left(\limsup_{n\to\infty}\frac{|S_{n}|}{n}=+\infty\right)<1, (4.10)

then C_{\mathbb{V}}[|X_{1}|]<\infty.

The following corollary shows that the limit points of {S_{n}}/{n} form an interval.

Corollary 4.1

Let \{X_{n};n\geq 1\} be a sequence of extended independent and identically distributed random variables with C_{\mathbb{V}}[|X_{1}|]<\infty and \lim_{c\to\infty}\widehat{\mathbb{E}}\left[(|X_{1}|-c)^{+}\right]=0. If \mathbb{V} is continuous, then

\mathbb{V}\left(\liminf_{n\to\infty}\frac{S_{n}}{n}=\widehat{\mathcal{E}}[X_{1% }]\right)=1\;\text{ and }\;\mathbb{V}\left(\limsup_{n\to\infty}\frac{S_{n}}{n}% =\widehat{\mathbb{E}}[X_{1}]\right)=1. (4.11)

Moreover, if there is a sequence \{n_{k}\} with n_{k}\to\infty and n_{k-1}/n_{k}\to 0 such that S_{n_{k-1}} and S_{n_{k}}-S_{n_{k-1}} are extended independent, then

\mathbb{V}\left(\liminf_{n\to\infty}\frac{S_{n}}{n}=\widehat{\mathcal{E}}[X_{1% }]\;\text{ and }\;\limsup_{n\to\infty}\frac{S_{n}}{n}=\widehat{\mathbb{E}}[X_{% 1}]\right)=1 (4.12)

and

\mathbb{V}\left(C\left\{\frac{S_{n}}{n}\right\}=\left[\widehat{\mathcal{E}}[X_% {1}],\widehat{\mathbb{E}}[X_{1}]\right]\right)=1, (4.13)

where C(\{x_{n}\}) denotes the cluster set of a sequence of \{x_{n}\} in \mathbb{R}.

(4.9) tells us that the limit points of \frac{S_{n}}{n} are between the lower expectation \widehat{\mathcal{E}}[X_{1}] and the upper expectation \widehat{\mathbb{E}}[X_{1}]. (4.12) tells us that the lower expectation and the upper expectation are reachable. (4.13) tells us that the interval \big{[}\widehat{\mathcal{E}}[X_{1}],\widehat{\mathbb{E}}[X_{1}]\big{]} is filled with the limit points. When \{X_{n};n\geq 1\} are independent in the sense of Peng’s definition, the conclusions in Theorem 4.2 and Corollary 4.1 were proved by Zhang (2016). Before that, Chen, Wu and Li (2013) and Chen (2016) proved (4.9) under a stronger moment condition that \widehat{\mathbb{E}}[|X_{1}|^{1+\gamma}]<\infty for some \gamma>0.
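The picture behind (4.11)-(4.13) can be illustrated numerically. In the toy sketch below (our construction, not from the paper), the sub-linear expectation is thought of as an upper expectation over a family of models in which each X_{i} may have any mean in [0,1] (standing in for \big{[}\widehat{\mathcal{E}}[X_{1}],\widehat{\mathbb{E}}[X_{1}]\big{]}); choosing the means adversarially on geometrically growing blocks already makes S_{n}/n cluster throughout the interval.

```python
# Deterministic toy illustration of (4.13): each X_i is a degenerate random
# variable sitting at 0 or 1 (both admissible means).  Block k has length 10^k,
# so the last block dominates the running average, which therefore swings from
# near one endpoint to near the other and sweeps through every interior point.
def running_averages(num_blocks=5):
    avgs, s, n = [], 0, 0
    for k in range(1, num_blocks + 1):
        value = k % 2              # adversarial mean: 1 on odd blocks, 0 on even
        for _ in range(10 ** k):   # block k has length 10^k
            s += value
            n += 1
            avgs.append(s / n)
    return avgs

avgs = running_averages()
# Both endpoints are approached ...
assert max(avgs) > 0.85 and min(avgs[10:]) < 0.15
# ... and (since each step moves the average by less than 1/n) interior
# points such as 1/2 are visited as well.
assert any(0.45 <= a <= 0.55 for a in avgs)
```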

For a sequence of extended negatively dependent and identically distributed random variables \{X_{n};n\geq 1\} on a probability space (\Omega,\mathcal{F},\textsf{P}), Chen, Chen and Ng (2010) showed that \textsf{P}(S_{n}/n\to\mu)=1 if and only if \textsf{E}[|X_{1}|]<\infty and \textsf{E}[X_{1}]=\mu. Under the extended negative dependence in a sub-linear expectation space, we have not found a way to show the conclusion of Theorem 4.2 (c), the converse part of the strong law of large numbers. However, the conclusion is true if we assume that \{X_{n};n\geq 1\} are extended negatively dependent under \widehat{\mathcal{E}} (i.e., \widehat{\mathbb{E}} is replaced by \widehat{\mathcal{E}} in Definition 2.4).

Theorem 4.3

Let \{X_{n};n\geq 1\} be a sequence of identically distributed random variables in (\Omega,\mathscr{H},\widehat{\mathbb{E}}) which are extended negatively dependent under \widehat{\mathcal{E}}. If \mathbb{V} is continuous, then

\mathbb{V}\left(\limsup_{n\to\infty}\frac{|S_{n}|}{n}=+\infty\right)<1% \Longrightarrow C_{\mathbb{V}}[|X_{1}|]<\infty. (4.14)

When the sub-linear expectation \widehat{\mathbb{E}} reduces to the linear expectation E, Theorem 4.2 (b) and Theorem 4.3 improve the result of Chen, Chen and Ng (2010).

Corollary 4.2

Let \{X_{n};n\geq 1\} be a sequence of identically distributed random variables on a probability space (\Omega,\mathcal{F},\textsf{P}) which are extended negatively dependent in the sense of (2.4) and (2.5). If \textsf{E}[|X_{1}|]<\infty, then \textsf{P}\big{(}S_{n}/n\to\textsf{E}[X_{1}]\big{)}=1.

Conversely, if \textsf{P}\big{(}\limsup\limits_{n\to\infty}|S_{n}|/n=\infty\big{)}<1, then \textsf{E}[|X_{1}|]<\infty. Further, if \textsf{P}\big{(}S_{n}/n\to\mu\big{)}>0 for some real \mu, then \textsf{E}[|X_{1}|]<\infty, \mu=\textsf{E}[X_{1}] and \textsf{P}\big{(}S_{n}/n\to\mu\big{)}=1.

According to Corollary 4.2, the probability \textsf{P}\big{(}S_{n}/n\to\mu\big{)} is either 0 or 1.

For proving the theorems, we need the following lemma which can be found in Zhang (2016).

Lemma 4.2

Suppose that X\in\mathscr{H} and C_{\mathbb{V}}(|X|)<\infty.

(a) Then

\sum_{j=1}^{\infty}\frac{\widehat{\mathbb{E}}[(|X|\wedge j)^{2}]}{j^{2}}<\infty.

(b) Furthermore, if \lim_{c\to\infty}\widehat{\mathbb{E}}\left[|X|\wedge c\right]=\widehat{\mathbb% {E}}\left[|X|\right], then \widehat{\mathbb{E}}[|X|]\leq C_{\mathbb{V}}(|X|).

Proof of Theorem 4.2. We first prove (b). (a) follows from (b) because \mathbb{V}^{\ast}=\mathbb{V} when \mathbb{V} is countably sub-additive.

It is sufficient to show (4.8) under the assumption that \{X_{n};n\geq 1\} are upper extended negatively dependent. Without loss of generality, we assume \widehat{\mathbb{E}}[X_{1}]=0. Let f_{c}(x) and \widehat{f}_{c}(x) be defined as in (4.4), and let \overline{X}_{j}=f_{j}(X_{j})-\widehat{\mathbb{E}}[f_{j}(X_{j})], \overline{S}_{j}=\sum_{i=1}^{j}\overline{X}_{i}, j=1,2,\ldots. Then f_{c}(\cdot),\widehat{f}_{c}(\cdot)\in C_{l,Lip}(\mathbb{R}) and both are non-decreasing functions. And so, \{\overline{X}_{j};j\geq 1\}, \{f_{j}^{+}(X_{j});j\geq 1\} and \{(\widehat{f}_{j}(X_{j}))^{+};j\geq 1\} are all sequences of upper extended negatively dependent random variables. Let \theta>1, n_{k}=[\theta^{k}]. For n_{k}<n\leq n_{k+1}, we have

\displaystyle\frac{S_{n}}{n}= \displaystyle\frac{1}{n}\left\{\overline{S}_{n_{k}}+\sum_{j=1}^{n_{k}}\widehat{\mathbb{E}}[f_{j}(X_{j})]+\sum_{j=1}^{n}\widehat{f}_{j}(X_{j})+\sum_{j=n_{k}+1}^{n}f_{j}(X_{j})\right\}
\displaystyle\leq \displaystyle\frac{\overline{S}_{n_{k}}^{+}}{n_{k}}+\frac{\sum_{j=1}^{n_{k}}|% \widehat{\mathbb{E}}[f_{j}(X_{1})]|}{n_{k}}+\frac{\sum_{j=1}^{n_{k+1}}(% \widehat{f}_{j}(X_{j}))^{+}}{n_{k}}
\displaystyle+\frac{\sum_{j=n_{k}+1}^{n_{k+1}}\big{\{}f_{j}^{+}(X_{j})-% \widehat{\mathbb{E}}[f_{j}^{+}(X_{j})]\big{\}}}{n_{k}}+\frac{(n_{k+1}-n_{k})% \widehat{\mathbb{E}}|X_{1}|}{n_{k}}
\displaystyle=: \displaystyle(I)_{k}+(II)_{k}+(III)_{k}+(IV)_{k}+(V)_{k}.

It is obvious that

\lim_{k\to\infty}(V)_{k}=(\theta-1)\widehat{\mathbb{E}}[|X_{1}|]\leq(\theta-1)% C_{\mathbb{V}}(|X_{1}|)

by Lemma 4.2 (b).

For (I)_{k}, applying (3.5) yields

\displaystyle\mathbb{V}\left(\overline{S}_{n_{k}}\geq\epsilon n_{k}\right) \displaystyle\leq C\frac{\sum_{j=1}^{n_{k}}\widehat{\mathbb{E}}\big{[}% \overline{X}_{j}^{2}\big{]}}{\epsilon^{2}n_{k}^{2}}\leq C\frac{\sum_{j=1}^{n_{% k}}\widehat{\mathbb{E}}\big{[}f_{j}^{2}(X_{1})\big{]}}{\epsilon^{2}n_{k}^{2}}
\displaystyle\leq \displaystyle C\frac{n_{k}}{\epsilon^{2}n_{k}^{2}}+C\frac{\sum_{j=1}^{n_{k}}% \widehat{\mathbb{E}}\big{[}\big{(}|X_{1}|\wedge j)^{2}\big{]}}{\epsilon^{2}n_{% k}^{2}}.

It is obvious that \sum_{k}\frac{1}{n_{k}}<\infty. Also,

\displaystyle\sum_{k=1}^{\infty}\frac{\sum_{j=1}^{n_{k}}\widehat{\mathbb{E}}\big{[}\big{(}|X_{1}|\wedge j)^{2}\big{]}}{n_{k}^{2}}\leq \displaystyle\sum_{j=1}^{\infty}\widehat{\mathbb{E}}\big{[}\big{(}|X_{1}|\wedge j)^{2}\big{]}\sum_{k:n_{k}\geq j}\frac{1}{n_{k}^{2}}
\displaystyle\leq \displaystyle C\sum_{j=1}^{\infty}\widehat{\mathbb{E}}\big{[}\big{(}|X_{1}|% \wedge j)^{2}\big{]}\frac{1}{j^{2}}<\infty

by Lemma 4.2 (a). Hence \sum_{k=1}^{\infty}\mathbb{V}^{\ast}\left((I)_{k}\geq\epsilon\right)\leq\sum_{% k=1}^{\infty}\mathbb{V}\left((I)_{k}\geq\epsilon\right)<\infty. By the Borel-Cantelli lemma and the countable sub-additivity of \mathbb{V}^{\ast}, it follows that

\mathbb{V}^{\ast}\left(\limsup_{k\to\infty}(I)_{k}>\epsilon\right)=0,\;\;% \forall\epsilon>0.

Similarly,

\mathbb{V}^{\ast}\left(\limsup_{k\to\infty}(IV)_{k}>\epsilon\right)=0,\;\;% \forall\epsilon>0.

For (II)_{k}, note that

|\widehat{\mathbb{E}}[f_{j}(X_{1})]|=|\widehat{\mathbb{E}}[f_{j}(X_{1})]-\widehat{\mathbb{E}}[X_{1}]|\leq\widehat{\mathbb{E}}[|\widehat{f}_{j}(X_{1})|]=\widehat{\mathbb{E}}[(|X_{1}|-j)^{+}]\to 0.

It follows that

(II)_{k}\leq\frac{n_{k+1}}{n_{k}}\frac{\sum_{j=1}^{n_{k+1}}|\widehat{\mathbb{E}}[f_{j}(X_{1})]|}{n_{k+1}}\to 0.

Finally, we consider (III)_{k}. By the Borel-Cantelli Lemma, we will have

\mathbb{V}^{\ast}\big{(}\limsup_{k\to\infty}(III)_{k}>0\big{)}\leq\mathbb{V}^{% \ast}\big{(}\{|X_{j}|>j\}\;i.o.\big{)}=0

once we show that

\sum_{j=1}^{\infty}\mathbb{V}^{\ast}\big{(}|X_{j}|>j\big{)}\leq\sum_{j=1}^{% \infty}\mathbb{V}\big{(}|X_{j}|>j\big{)}<\infty. (4.15)

Let g_{\epsilon} be a non-decreasing function satisfying that its derivatives of each order are bounded, g_{\epsilon}(x)=1 if x\geq 1, g_{\epsilon}(x)=0 if x\leq 1-\epsilon, and 0\leq g_{\epsilon}(x)\leq 1 for all x, where 0<\epsilon<1. Then

g_{\epsilon}(\cdot)\in C_{b,Lip}(\mathbb{R})\text{ is non-decreasing}\;\text{ % and }\;I\{x\geq 1\}\leq g_{\epsilon}(x)\leq I\{x>1-\epsilon\}. (4.16)
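A concrete stand-in for g_{\epsilon} can be written down explicitly. The sketch below (our construction) uses the cubic smoothstep ramp on [1-\epsilon,1]; it is Lipschitz and non-decreasing and satisfies the sandwich in (4.16), though unlike the g_{\epsilon} of the text it is not infinitely differentiable.

```python
# Smoothstep stand-in for g_eps: 0 below 1 - eps, 1 above 1, cubic ramp in
# between.  It satisfies  I{x >= 1} <= g_eps(x) <= I{x > 1 - eps}  as in (4.16);
# the paper's g_eps additionally has bounded derivatives of every order, which
# a piecewise polynomial does not, but the sandwich is all that is used here.
def g(x, eps):
    if x <= 1.0 - eps:
        return 0.0
    if x >= 1.0:
        return 1.0
    t = (x - (1.0 - eps)) / eps
    return 3.0 * t * t - 2.0 * t ** 3   # rises monotonically from 0 to 1

eps = 0.5
for i in range(-100, 301):
    x = i / 100.0
    lower = 1.0 if x >= 1.0 else 0.0        # I{x >= 1}
    upper = 1.0 if x > 1.0 - eps else 0.0   # I{x > 1 - eps}
    assert lower <= g(x, eps) <= upper
```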

Hence by (2.1),

\displaystyle\sum_{j=1}^{\infty}\mathbb{V}\big{(}|X_{j}|>j\big{)}\leq \displaystyle\sum_{j=1}^{\infty}\widehat{\mathbb{E}}\left[g_{1/2}\big{(}|X_{j}% |/j\big{)}\right]=\sum_{j=1}^{\infty}\widehat{\mathbb{E}}\left[g_{1/2}\big{(}|% X_{1}|/j\big{)}\right]\;\;(\text{since }X_{j}\overset{d}{=}X_{1})
\displaystyle\leq \displaystyle\sum_{j=1}^{\infty}\mathbb{V}\big{(}|X_{1}|>j/2\big{)}\leq 1+C_{% \mathbb{V}}(2|X_{1}|)<\infty.

(4.15) is proved. So, we conclude that \mathbb{V}^{\ast}\left(\limsup\limits_{n\to\infty}\frac{S_{n}}{n}>\epsilon% \right)=0, \forall\epsilon>0, by the arbitrariness of \theta>1. Hence

\displaystyle\mathbb{V}^{\ast}\left(\limsup_{n\to\infty}\frac{S_{n}}{n}>0% \right)= \displaystyle\mathbb{V}^{\ast}\left(\bigcup_{k=1}^{\infty}\left\{\limsup_{n\to% \infty}\frac{S_{n}}{n}>\frac{1}{k}\right\}\right)
\displaystyle\leq \displaystyle\sum_{k=1}^{\infty}\mathbb{V}^{\ast}\left(\limsup_{n\to\infty}% \frac{S_{n}}{n}>\frac{1}{k}\right)=0.

(4.8) is proved.

Finally, if \{X_{n};n\geq 1\} are lower extended negatively dependent, then \{-X_{n};n\geq 1\} are upper extended negatively dependent. So

\mathbb{V}^{\ast}\left(\liminf_{n\to\infty}\frac{S_{n}}{n}<\widehat{\mathcal{E% }}[X_{1}]\right)=\mathbb{V}^{\ast}\left(\limsup_{n\to\infty}\frac{\sum_{k=1}^{% n}(-X_{k}-\widehat{\mathbb{E}}[-X_{k}])}{n}>0\right)=0.

The proof of (4.9) is now completed.

Now, we consider (c), the converse part of the strong law of large numbers. Because we do not have “the divergence part” of the Borel-Cantelli Lemma, and have no information about the independence under the conjugate expectation \widehat{\mathcal{E}} or the conjugate capacity \mathcal{V}, the proof becomes more complex and needs a new approach. Suppose that X_{1},X_{2},\ldots are extended independent and identically distributed with C_{\mathbb{V}}(X_{1}^{+})=\infty. Then, by (2.1),

\displaystyle\sum_{j=1}^{\infty}\widehat{\mathbb{E}}\left[g_{1/2}\big{(}\frac{% X_{j}^{+}}{Mj}\big{)}\right]= \displaystyle\sum_{j=1}^{\infty}\widehat{\mathbb{E}}\left[g_{1/2}\big{(}\frac{% X_{1}^{+}}{Mj}\big{)}\right]\;\;(\text{ since }X_{j}\overset{d}{=}X_{1})
\displaystyle\geq \displaystyle\sum_{j=1}^{\infty}\mathbb{V}\big{(}X_{1}^{+}>Mj)=\infty,\;\;% \forall M>0.

Let \xi_{j}=g_{1/2}\big{(}\frac{X_{j}^{+}}{Mj}\big{)}, \eta_{n}=\sum_{j=1}^{n}\xi_{j} and a_{n}=\sum_{j=1}^{n}\widehat{\mathbb{E}}[\xi_{j}]. Then a_{n}\to\infty and \{\xi_{j};j\geq 1\} are extended independent. For any 0<\delta<\epsilon<1 and t>0, we have

\displaystyle I\left\{\frac{\eta_{n}-a_{n}}{a_{n}}\geq-\epsilon\right\}\geq e^% {-t\delta}\left(\exp\left\{t\frac{\eta_{n}-a_{n}}{a_{n}}\right\}-e^{-t\epsilon% }\right)I\left\{\frac{\eta_{n}-a_{n}}{a_{n}}\leq\delta\right\}
\displaystyle\geq \displaystyle e^{-t\delta}\left(\exp\left\{t\frac{\eta_{n}-a_{n}}{a_{n}}\right% \}-e^{-t\epsilon}\right)-e^{-t\delta-t}\exp\left\{t\frac{\eta_{n}}{a_{n}}% \right\}I\left\{\frac{\eta_{n}-a_{n}}{a_{n}}>\delta\right\}.

So

\displaystyle\mathbb{V}\left(\frac{\eta_{n}-a_{n}}{a_{n}}\geq-\epsilon\right)\geq \displaystyle e^{-t\delta}\left(\widehat{\mathbb{E}}\left[\exp\left\{t\frac{% \eta_{n}-a_{n}}{a_{n}}\right\}\right]-e^{-t\epsilon}\right)
\displaystyle-e^{-t\delta-t}\widehat{\mathbb{E}}\left[\exp\left\{t\frac{\eta_{% n}}{a_{n}}\right\}g_{1/2}\left(\frac{\eta_{n}-a_{n}}{\delta a_{n}}\right)% \right].

By the extended independence and the fact that e^{x}\geq 1+x, we have

\displaystyle\widehat{\mathbb{E}}\left[\exp\left\{t\frac{\eta_{n}-a_{n}}{a_{n}% }\right\}\right]= \displaystyle\prod_{j=1}^{n}\widehat{\mathbb{E}}\left[\exp\left\{t\frac{\xi_{j% }-\widehat{\mathbb{E}}[\xi_{j}]}{a_{n}}\right\}\right]
\displaystyle\geq \displaystyle\prod_{j=1}^{n}\widehat{\mathbb{E}}\left[t\frac{\xi_{j}-\widehat{% \mathbb{E}}[\xi_{j}]}{a_{n}}+1\right]=1.

On the other hand, by noting e^{x}\leq 1+|x|e^{|x|}, 1+x\leq e^{x} and 0\leq\xi_{j}\leq 1,

\displaystyle\widehat{\mathbb{E}}\left[\exp\left\{2t\frac{\eta_{n}}{a_{n}}% \right\}\right]= \displaystyle\prod_{j=1}^{n}\widehat{\mathbb{E}}\left[\exp\left\{2t\frac{\xi_{% j}}{a_{n}}\right\}\right]\leq\prod_{j=1}^{n}\widehat{\mathbb{E}}\left[1+t\frac% {2\xi_{j}}{a_{n}}e^{2t/a_{n}}\right]
\displaystyle\leq \displaystyle\prod_{j=1}^{n}\left[1+2t\frac{\widehat{\mathbb{E}}[\xi_{j}]}{a_{% n}}e^{2t/a_{n}}\right]\leq\exp\left\{2te^{2t/a_{n}}\right\}.
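The display above rests on the elementary inequalities e^{x}\leq 1+|x|e^{|x|} and 1+x\leq e^{x}; a quick grid check:

```python
import math

# Elementary inequalities used in the display above, checked on a grid:
#   e^x <= 1 + |x| e^{|x|}   (integral form of the Taylor remainder)
#   1 + x <= e^x             (convexity of the exponential)
for i in range(-400, 401):
    x = i / 100.0
    assert math.exp(x) <= 1.0 + abs(x) * math.exp(abs(x)) + 1e-12
    assert 1.0 + x <= math.exp(x) + 1e-12
```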

Also, by (2.1),

\displaystyle\mathbb{V}\left(\frac{\eta_{n}-a_{n}}{a_{n}}>\frac{\delta}{2}% \right)\leq C\frac{4\sum_{j=1}^{n}\widehat{\mathbb{E}}[(\xi_{j}-\widehat{% \mathbb{E}}[\xi_{j}])^{2}]}{\delta^{2}a_{n}^{2}}\leq C\frac{16\sum_{j=1}^{n}% \widehat{\mathbb{E}}[\xi_{j}]}{\delta^{2}a_{n}^{2}}\leq\frac{C}{\delta^{2}a_{n% }}.

It follows that

\displaystyle\widehat{\mathbb{E}}\left[\exp\left\{t\frac{\eta_{n}}{a_{n}}\right\}g_{1/2}\left(\frac{\eta_{n}-a_{n}}{\delta a_{n}}\right)\right]\leq\left\{\widehat{\mathbb{E}}\left[\exp\left\{2t\frac{\eta_{n}}{a_{n}}\right\}\right]\cdot\widehat{\mathbb{E}}\left[g_{1/2}^{2}\left(\frac{\eta_{n}-a_{n}}{\delta a_{n}}\right)\right]\right\}^{1/2}
\displaystyle\quad\leq\exp\left\{te^{2t/a_{n}}\right\}\left\{\mathbb{V}\left(% \frac{\eta_{n}-a_{n}}{a_{n}}>\frac{\delta}{2}\right)\right\}^{1/2}\leq\exp% \left\{te^{2t/a_{n}}\right\}\frac{C}{\delta a_{n}^{1/2}}\to 0,

by Hölder’s inequality and noting I\{x\geq 1\}\leq g_{1/2}(x)\leq I\{x\geq 1/2\}. We conclude that

\liminf_{n\to\infty}\mathbb{V}\left(\frac{\eta_{n}-a_{n}}{a_{n}}\geq-\epsilon% \right)\geq e^{-t\delta}\left(1-e^{-t\epsilon}\right).

Letting \delta\to 0 and then t\to\infty yields

\lim_{n\to\infty}\mathbb{V}\left(\frac{\eta_{n}-a_{n}}{a_{n}}\geq-\epsilon% \right)=1. (4.17)

Now, choose \epsilon=1/2. By the continuity of \mathbb{V},

\displaystyle\mathbb{V}\left(\limsup_{n\to\infty}\frac{X_{n}^{+}}{n}>\frac{M}{% 2}\right)=\mathbb{V}\left(\big{\{}\frac{X_{j}^{+}}{Mj}>\frac{1}{2}\big{\}}\;\;% i.o.\right)\geq\mathbb{V}\left(\sum_{j=1}^{\infty}g_{1/2}\big{(}\frac{X_{j}^{+% }}{Mj}\big{)}=\infty\right)
\displaystyle\qquad=\mathbb{V}\left(\Big{\{}\frac{\eta_{n}-a_{n}}{a_{n}}\geq-% \frac{1}{2}\Big{\}}\;\;i.o.\right)\geq\limsup_{n\to\infty}\mathbb{V}\left(% \frac{\eta_{n}-a_{n}}{a_{n}}\geq-\frac{1}{2}\right)=1.

On the other hand,

\limsup_{n\to\infty}\frac{X_{n}^{+}}{n}\leq\limsup_{n\to\infty}\frac{|X_{n}|}{% n}\leq\limsup_{n\to\infty}\Big{(}\frac{|S_{n}|}{n}+\frac{|S_{n-1}|}{n}\Big{)}% \leq 2\limsup_{n\to\infty}\frac{|S_{n}|}{n}.

It follows that

\mathbb{V}\left(\limsup_{n\to\infty}\frac{|S_{n}|}{n}>m\right)=1,\;\;\forall m% >0.

Hence

\mathbb{V}\left(\limsup_{n\to\infty}\frac{|S_{n}|}{n}=+\infty\right)=\lim_{m% \to\infty}\mathbb{V}\left(\limsup_{n\to\infty}\frac{|S_{n}|}{n}>m\right)=1,

which contradicts (4.10). So, C_{\mathbb{V}}(X_{1}^{+})<\infty. Similarly, C_{\mathbb{V}}(X_{1}^{-})<\infty. It follows that C_{\mathbb{V}}(|X_{1}|)\leq C_{\mathbb{V}}(X_{1}^{+})+C_{\mathbb{V}}(X_{1}^{-})<\infty.


Proof of Corollary 4.1. By (4.2) and the continuity of \mathbb{V},

\displaystyle\mathbb{V}\left(\limsup_{n\to\infty}\frac{S_{n}}{n}\geq\widehat{% \mathbb{E}}[X_{1}]-\epsilon\right)\geq\limsup_{n\to\infty}\mathbb{V}\left(% \frac{S_{n}}{n}>\widehat{\mathbb{E}}[X_{1}]-\epsilon\right)=1,\;\;\forall% \epsilon>0.

By the continuity of \mathbb{V} again, \mathbb{V}\left(\limsup\limits_{n\to\infty}{S_{n}}/{n}\geq\widehat{\mathbb{E}}[X_{1}]\right)=1, which, together with Theorem 4.2 (b), implies the second equation in (4.11). By considering \{-X_{n};n\geq 1\} instead, we obtain the first equation in (4.11).

For (4.12), by noting the facts that n_{k}\to\infty, n_{k-1}/n_{k}\to 0 such that S_{n_{k-1}} and S_{n_{k}}-S_{n_{k-1}} are extended independent, we conclude that

\displaystyle\liminf_{k\to\infty}\mathbb{V}\left(\frac{S_{n_{k-1}}}{n_{k-1}}<% \widehat{\mathcal{E}}[X_{1}]+\epsilon\;\text{ and }\;\frac{S_{n_{k}}-S_{n_{k-1% }}}{n_{k}-n_{k-1}}>\widehat{\mathbb{E}}[X_{1}]-\epsilon\right)
\displaystyle\geq \displaystyle\liminf_{k\to\infty}\widehat{\mathbb{E}}\left[\phi\left(\frac{S_{% n_{k-1}}}{n_{k-1}}-\widehat{\mathcal{E}}[X_{1}]\right)\phi\left(\widehat{% \mathbb{E}}[X_{1}]-\frac{S_{n_{k}}-S_{n_{k-1}}}{n_{k}-n_{k-1}}\right)\right]
\displaystyle\geq \displaystyle\liminf_{k\to\infty}\widehat{\mathbb{E}}\left[\phi\left(\frac{S_{% n_{k-1}}}{n_{k-1}}-\widehat{\mathcal{E}}[X_{1}]\right)\right]\cdot\widehat{% \mathbb{E}}\left[\phi\left(\widehat{\mathbb{E}}[X_{1}]-\frac{S_{n_{k}}-S_{n_{k% -1}}}{n_{k}-n_{k-1}}\right)\right]
\displaystyle\geq \displaystyle\liminf_{k\to\infty}\mathbb{V}\left(\frac{S_{n_{k-1}}}{n_{k-1}}<% \widehat{\mathcal{E}}[X_{1}]+\frac{\epsilon}{2}\right)\cdot\mathbb{V}\left(% \frac{S_{n_{k}}-S_{n_{k-1}}}{n_{k}-n_{k-1}}>\widehat{\mathbb{E}}[X_{1}]-\frac{% \epsilon}{2}\right)
\displaystyle\geq \displaystyle\liminf_{k\to\infty}\mathbb{V}\left(\frac{S_{n_{k-1}}}{n_{k-1}}<% \widehat{\mathcal{E}}[X_{1}]+\frac{\epsilon}{2}\right)\cdot\mathbb{V}\left(% \frac{S_{n_{k}}}{n_{k}}>\widehat{\mathbb{E}}[X_{1}]-\frac{\epsilon}{4}\right)=% 1,\;\;\forall\epsilon>0,

by (4.1)-(4.3). Hence, by Theorem 4.2 (b) and the continuity of \mathbb{V} we have

\displaystyle\mathbb{V}\left(\liminf_{n\to\infty}\frac{S_{n}}{n}\leq\widehat{% \mathcal{E}}[X_{1}]+\epsilon\;\text{ and }\;\limsup_{n\to\infty}\frac{S_{n}}{n% }\geq\widehat{\mathbb{E}}[X_{1}]-\epsilon\right)
\displaystyle\geq \displaystyle\mathbb{V}\left(\liminf_{k\to\infty}\frac{S_{n_{k-1}}}{n_{k-1}}% \leq\widehat{\mathcal{E}}[X_{1}]+\epsilon\;\text{ and }\;\limsup_{k\to\infty}% \frac{S_{n_{k}}}{n_{k}}\geq\widehat{\mathbb{E}}[X_{1}]-\epsilon\right)
\displaystyle= \displaystyle\mathbb{V}\left(\liminf_{k\to\infty}\frac{S_{n_{k-1}}}{n_{k-1}}<% \widehat{\mathcal{E}}[X_{1}]+\epsilon\;\text{ and }\;\limsup_{k\to\infty}\frac% {S_{n_{k}}-S_{n_{k-1}}}{n_{k}-n_{k-1}}>\widehat{\mathbb{E}}[X_{1}]-\epsilon\right)
\displaystyle\geq \displaystyle\mathbb{V}\left(\left\{\frac{S_{n_{k-1}}}{n_{k-1}}<\widehat{% \mathcal{E}}[X_{1}]+\epsilon\;\text{ and }\;\frac{S_{n_{k}}-S_{n_{k-1}}}{n_{k}% -n_{k-1}}>\widehat{\mathbb{E}}[X_{1}]-\epsilon\right\}\;\;i.o.\right)
\displaystyle\geq \displaystyle\limsup_{k\to\infty}\mathbb{V}\left(\frac{S_{n_{k-1}}}{n_{k-1}}<% \widehat{\mathcal{E}}[X_{1}]+\epsilon\;\text{ and }\;\frac{S_{n_{k}}-S_{n_{k-1% }}}{n_{k}-n_{k-1}}>\widehat{\mathbb{E}}[X_{1}]-\epsilon\right)=1,\;\;\forall% \epsilon>0.

By the continuity of \mathbb{V} again,

\mathbb{V}\left(\liminf_{n\to\infty}\frac{S_{n}}{n}\leq\widehat{\mathcal{E}}[X% _{1}]\;\text{ and }\;\limsup_{n\to\infty}\frac{S_{n}}{n}\geq\widehat{\mathbb{E% }}[X_{1}]\right)=1,

which, together with Theorem 4.2 (b) implies (4.12).

Finally, note that

\frac{S_{n}}{n}-\frac{S_{n-1}}{n-1}=\frac{X_{n}}{n}-\frac{S_{n-1}}{n-1}\frac{1% }{n}\to 0\;\;a.s.\mathbb{V}.

It can be verified that (4.12) implies (4.13).


For proving Theorem 4.3, we need the estimates of \mathcal{V}\left(S_{n}\geq x\right).

Lemma 4.3

Let \{X_{1},\ldots,X_{n}\} be a sequence of random variables in (\Omega,\mathscr{H},\widehat{\mathbb{E}}) with \widehat{\mathcal{E}}[X_{k}]\leq 0 which are upper extended negatively dependent under \widehat{\mathcal{E}} with a dominating constant K\geq 1. Then

(a)

For all x,y>0,

\mathcal{V}\left(S_{n}\geq x\right)\leq\mathbb{V}\left(\max_{k\leq n}X_{k}\geq y% \right)+K\exp\left\{-\frac{x^{2}}{2(xy+B_{n})}\Big{(}1+\frac{2}{3}\ln\big{(}1+% \frac{xy}{B_{n}}\big{)}\Big{)}\right\};
(b)

For any p\geq 2, there exists a constant C_{p}\geq 1 such that for all x>0 and 0\leq\delta\leq 1,

\mathcal{V}\left(S_{n}\geq x\right)\leq C_{p}K\delta^{-2p}\frac{M_{n,p}}{x^{p}% }+K\exp\left\{-\frac{x^{2}}{2B_{n}(1+\delta)}\right\};
(c)

We have

\mathcal{V}\left(S_{n}\geq x\right)\leq(1+Ke)\frac{\sum_{k=1}^{n}\widehat{% \mathbb{E}}[X_{k}^{2}]}{x^{2}},\;\forall x>0. (4.18)
Proof.

Let Y_{k}=X_{k}\wedge y and T_{n}=\sum_{k=1}^{n}Y_{k} be as in the proof of Theorem 3.1. Then

\mathcal{V}\left(S_{n}\geq x\right)\leq\mathbb{V}\big{(}\max_{k\leq n}X_{k}% \geq y\big{)}+\mathcal{V}\left(T_{n}\geq x\right),
\mathcal{V}\left(T_{n}\geq x\right)\leq e^{-tx}\widehat{\mathcal{E}}[e^{tT_{n}% }]\leq e^{-tx}K\prod_{k=1}^{n}\widehat{\mathcal{E}}[e^{tY_{k}}]

and

\displaystyle\widehat{\mathcal{E}}[e^{tY_{k}}]\leq \displaystyle\widehat{\mathcal{E}}\left[1+tY_{k}+\frac{e^{ty}-1-ty}{y^{2}}Y_{k% }^{2}\right]\leq 1+t\widehat{\mathcal{E}}[Y_{k}]+\frac{e^{ty}-1-ty}{y^{2}}% \widehat{\mathbb{E}}[Y_{k}^{2}]
\displaystyle\leq \displaystyle 1+\frac{e^{ty}-1-ty}{y^{2}}\widehat{\mathbb{E}}[Y_{k}^{2}].

The remainder of the proof is similar to that of Theorem 3.1. ∎


Proof of Theorem 4.3. Suppose C_{\mathbb{V}}(X_{1}^{+})=\infty. Let g_{\epsilon}(\cdot) satisfy (4.16), and let \xi_{j}=g_{1/2}\big{(}\frac{X_{j}^{+}}{Mj}\big{)}, \eta_{n}=\sum_{j=1}^{n}\xi_{j} and a_{n}=\sum_{j=1}^{n}\widehat{\mathbb{E}}[\xi_{j}] be as in the proof of Theorem 4.2 (c). Then a_{n}\to\infty and \{-\xi_{j}-\widehat{\mathcal{E}}[-\xi_{j}];j\geq 1\} are upper extended negatively dependent under \widehat{\mathcal{E}}. By Lemma 4.3 (c),

\displaystyle\mathcal{V}\left(\frac{\eta_{n}-a_{n}}{a_{n}}<-\epsilon\right)=% \mathcal{V}\left(\frac{\sum_{j=1}^{n}(-\xi_{j}-\widehat{\mathcal{E}}[-\xi_{j}]% )}{a_{n}}>\epsilon\right)
\displaystyle\leq \displaystyle(1+Ke)\frac{\sum_{j=1}^{n}\widehat{\mathbb{E}}[(-\xi_{j}-\widehat{\mathcal{E}}[-\xi_{j}])^{2}]}{\epsilon^{2}a_{n}^{2}}\leq C\frac{1}{\epsilon^{2}a_{n}}\to 0.

That is (4.17). By the same argument as in the proof of Theorem 4.2 (c), (4.17) implies a contradiction to (4.10). So, C_{\mathbb{V}}(X_{1}^{+})<\infty. Similarly, C_{\mathbb{V}}(X_{1}^{-})<\infty. It follows that C_{\mathbb{V}}(|X_{1}|)<\infty.

5 The law of the iterated logarithm

In this section, we let \{X_{n};n\geq 1\} be a sequence of identically distributed random variables in (\Omega,\mathscr{H},\widehat{\mathbb{E}}). Denote \overline{\sigma}_{1}^{2}=\widehat{\mathbb{E}}[(X_{1}-\widehat{\mathcal{E}}[X_% {1}])^{2}], \overline{\sigma}_{2}^{2}=\widehat{\mathbb{E}}[(X_{1}-\widehat{\mathbb{E}}[X_{% 1}])^{2}], a_{n}=\sqrt{2n\log\log n}, where \log x=\ln(x\vee e). The following is the law of the iterated logarithm for extended negatively dependent random variables.

Theorem 5.1

Suppose \widehat{\mathbb{E}}[|X_{1}|^{2+\gamma}]<\infty for some \gamma>0. If X_{1},X_{2},\ldots, are upper extended negatively dependent, then

\mathbb{V}^{\ast}\left(\limsup_{n\to\infty}\frac{S_{n}-n\widehat{\mathbb{E}}[X% _{1}]}{a_{n}}>\overline{\sigma}_{2}\right)=0. (5.1)

If X_{1},X_{2},\ldots, are extended negatively dependent, then

\mathbb{V}^{\ast}\left(\Big{\{}\liminf_{n\to\infty}\frac{S_{n}-n\widehat{% \mathcal{E}}[X_{1}]}{a_{n}}<-\overline{\sigma}_{1}\Big{\}}\bigcup\Big{\{}% \limsup_{n\to\infty}\frac{S_{n}-n\widehat{\mathbb{E}}[X_{1}]}{a_{n}}>\overline% {\sigma}_{2}\Big{\}}\right)=0. (5.2)

When the sub-linear expectation \widehat{\mathbb{E}} reduces to the linear expectation, we obtain the law of the iterated logarithm for extended negatively dependent random variables on a probability space (\Omega,\mathcal{F},\textsf{P}).

Corollary 5.1

Suppose that X_{1},X_{2},\ldots are extended negatively dependent and identically distributed random variables on a probability space (\Omega,\mathcal{F},\textsf{P}) with \textsf{E}\big{[}|X_{1}|^{2+\gamma}\big{]}<\infty for some \gamma>0 and \sigma^{2}=Var(X_{1}). Then

\textsf{P}\left(\limsup_{n\to\infty}\frac{|S_{n}-n\textsf{E}[X_{1}]|}{a_{n}}\leq\sigma\right)=1. (5.3)

To prove the law of the iterated logarithm, besides the exponential inequality we need a moment inequality on the maximum partial sums.

Lemma 5.1

Let \{X_{k};k=1,\ldots,n\} be a sequence of upper extended negatively dependent random variables in (\Omega,\mathscr{H},\widehat{\mathbb{E}}) with \widehat{\mathbb{E}}[X_{k}]\leq 0, k=1,\ldots,n. Let S_{m}=\sum_{k=1}^{m}X_{k}, S_{0}=0, and let p>2 be an integer. Assume that \widehat{\mathbb{E}}[(|X_{k}|^{p}-c)^{+}]\to 0 as c\to\infty, k=1,\ldots,n. Then

\displaystyle\widehat{\mathbb{E}}\left[\max_{m\leq n}(S_{m}^{+})^{p}\right]% \leq C_{p}n(\log_{2}n)^{p}\max_{k\leq n}C_{\mathbb{V}}\Big{[}(X_{k}^{+})^{p}% \Big{]}+C_{p}n^{p/2}\big{(}\max_{k\leq n}\widehat{\mathbb{E}}[X_{k}^{2}]\big{)% }^{p/2}. (5.4)
Proof.

We extend \{X_{k};k\leq n\} to \{X_{k};k\geq 1\} by defining X_{k}=0 for k=n+1,n+2,\ldots. Let T_{k,m}=(X_{k+1}+\cdots+X_{k+m})^{+} and M_{k,m}=\max_{j\leq m}T_{k,j}. It is easily seen that T_{k,l+m}\leq T_{k,l}+T_{k+l,m}. Under the conditions of the lemma, we have \widehat{\mathbb{E}}[T_{k,m}^{p}]\leq C_{\mathbb{V}}(T_{k,m}^{p}). From (3.1) it follows that

\widehat{\mathbb{E}}[T_{k,m}^{p}]\leq C_{p}m\max_{k}C_{\mathbb{V}}\big[(X_{k}^{+})^{p}\big]+C_{p}m^{p/2}\big(\max_{k}\widehat{\mathbb{E}}[X_{k}^{2}]\big)^{p/2}.
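The subadditivity T_{k,l+m}\leq T_{k,l}+T_{k+l,m} used above is a consequence of the elementary inequality (x+y)^{+}\leq x^{+}+y^{+}:

```latex
T_{k,l+m}
  = \Big(\sum_{i=k+1}^{k+l} X_i + \sum_{i=k+l+1}^{k+l+m} X_i\Big)^{+}
  \leq \Big(\sum_{i=k+1}^{k+l} X_i\Big)^{+} + \Big(\sum_{i=k+l+1}^{k+l+m} X_i\Big)^{+}
  = T_{k,l} + T_{k+l,m}.
```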

Let K_{1}=\Big(C_{p}\max_{k}C_{\mathbb{V}}\big[(X_{k}^{+})^{p}\big]\Big)^{1/p} and K_{2}=C_{p}^{1/p}\big(\max_{k}\widehat{\mathbb{E}}[X_{k}^{2}]\big)^{1/2}. Then

\widehat{\mathbb{E}}[T_{k,m}^{p}]\leq m\left(K_{1}+K_{2}m^{\frac{p-2}{2p}}\right)^{p}. (5.5)
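Indeed, (5.5) follows from the preceding bound via the elementary inequality a^{p}+b^{p}\leq(a+b)^{p} for a,b\geq 0, noting that \big(m^{\frac{p-2}{2p}}\big)^{p}=m^{\frac{p}{2}-1}:

```latex
\widehat{\mathbb{E}}[T_{k,m}^{p}]
  \leq m K_1^{p} + m^{p/2} K_2^{p}
  = m\Big(K_1^{p} + \big(K_2\, m^{\frac{p-2}{2p}}\big)^{p}\Big)
  \leq m\big(K_1 + K_2\, m^{\frac{p-2}{2p}}\big)^{p}.
```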

Using the same argument as Móricz (1982), we can show that for some constant M>1,

\widehat{\mathbb{E}}[M_{k,m}^{p}]\leq Mm\left(K_{1}\log_{2}m+K_{2}m^{\frac{p-2}{2p}}\right)^{p}, (5.6)

which implies (5.4). Here we give the proof only for integer p, since that is sufficient for our use. Also, it is sufficient to show that (5.6) holds for any m=2^{I}, which we prove by induction on I. Suppose (5.6) holds for m=2^{I}; we show that it is also true for m=2^{I+1}. If i\leq 2^{I}, then T_{k,i}\leq M_{k,2^{I}}. If 2^{I}+1\leq i\leq 2^{I+1}, then T_{k,i}\leq T_{k,2^{I}}+M_{k+2^{I},2^{I}}, and so

T_{k,i}^{p}\leq T_{k,2^{I}}^{p}+M_{k+2^{I},2^{I}}^{p}+\sum_{j=1}^{p-1}\binom{p}{j}T_{k,2^{I}}^{j}M_{k+2^{I},2^{I}}^{p-j}.

It follows that

M_{k,2^{I+1}}^{p}\leq M_{k,2^{I}}^{p}+M_{k+2^{I},2^{I}}^{p}+\sum_{j=1}^{p-1}\binom{p}{j}T_{k,2^{I}}^{j}M_{k+2^{I},2^{I}}^{p-j}.

By the induction hypothesis,

\widehat{\mathbb{E}}\left[M_{k,2^{I}}^{p}\right]\leq M2^{I}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)^{p},\;\;\widehat{\mathbb{E}}\left[M_{k+2^{I},2^{I}}^{p}\right]\leq M2^{I}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)^{p}

and

\displaystyle\widehat{\mathbb{E}}\left[T_{k,2^{I}}^{j}M_{k+2^{I},2^{I}}^{p-j}\right]\leq\left\{\widehat{\mathbb{E}}\left[T_{k,2^{I}}^{p}\right]\right\}^{j/p}\cdot\left\{\widehat{\mathbb{E}}\left[M_{k+2^{I},2^{I}}^{p}\right]\right\}^{(p-j)/p}
\leq 2^{I}\big(K_{1}+K_{2}2^{\frac{p-2}{2p}I}\big)^{j}\cdot\big\{M^{1/p}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)\big\}^{p-j}.
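The first step above is Hölder's inequality, which remains valid under sub-linear expectations (see, e.g., Peng (2010)); here it is applied with exponents r=p/j and s=p/(p-j):

```latex
\widehat{\mathbb{E}}\big[|XY|\big]
  \leq \big(\widehat{\mathbb{E}}[|X|^{r}]\big)^{1/r}\big(\widehat{\mathbb{E}}[|Y|^{s}]\big)^{1/s},
\qquad \frac1r+\frac1s=1,
% with X = T_{k,2^{I}}^{j},\; Y = M_{k+2^{I},2^{I}}^{p-j}.
```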

Choose M>1 such that 1+M^{-1/p}\leq 2^{\frac{p-2}{2p}}. It follows that

\displaystyle\widehat{\mathbb{E}}\left[M_{k,2^{I+1}}^{p}\right]\leq M2^{I}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)^{p}+2^{I}\sum_{j=0}^{p-1}\binom{p}{j}\big(K_{1}+K_{2}2^{\frac{p-2}{2p}I}\big)^{j}\cdot\big\{M^{1/p}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)\big\}^{p-j}
\leq M2^{I}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)^{p}+2^{I}\left\{K_{1}+K_{2}2^{\frac{p-2}{2p}I}+M^{1/p}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)\right\}^{p}
= M2^{I}\big(K_{1}I+K_{2}2^{\frac{p-2}{2p}I}\big)^{p}+M2^{I}\left\{K_{1}\big(I+M^{-1/p}\big)+K_{2}2^{\frac{p-2}{2p}I}\big(1+M^{-1/p}\big)\right\}^{p}
\leq 2M2^{I}\big(K_{1}(I+1)+K_{2}2^{\frac{p-2}{2p}(I+1)}\big)^{p}=M2^{I+1}\big(K_{1}(I+1)+K_{2}2^{\frac{p-2}{2p}(I+1)}\big)^{p}.

The proof is completed. ∎


Now we prove the law of the iterated logarithm.

Proof of Theorem 5.1. It is sufficient to prove (5.1) under the assumption that X_{1},X_{2},\ldots are upper extended negatively dependent. Without loss of generality, we assume \widehat{\mathbb{E}}[X_{1}]=0 and \widehat{\mathbb{E}}[X_{1}^{2}]=1. Choose 1/(2+\gamma)<\beta<1/2, and let b_{n}=n^{\beta}, a_{n}=\sqrt{2n\log\log n}. Denote Y_{k}=(-b_{k})\vee(X_{k}\wedge b_{k}). Then \{Y_{k};k\geq 1\} are upper extended negatively dependent. Note that

\displaystyle\sum_{n=1}^{\infty}\mathbb{V}^{\ast}(X_{n}\neq Y_{n})\leq\sum_{n=1}^{\infty}\mathbb{V}(|X_{n}|>n^{\beta})\leq\sum_{n=1}^{\infty}\frac{\widehat{\mathbb{E}}[|X_{1}|^{2+\gamma}]}{n^{\beta(2+\gamma)}}<\infty.

Also,

\displaystyle\sum_{i=1}^{n}\big|\widehat{\mathbb{E}}[Y_{i}]\big|=\sum_{i=1}^{n}\big|\widehat{\mathbb{E}}[Y_{i}]-\widehat{\mathbb{E}}[X_{i}]\big|\leq\sum_{i=1}^{n}\widehat{\mathbb{E}}[(|X_{1}|-b_{i})^{+}]
\leq\sum_{i=1}^{n}\frac{\widehat{\mathbb{E}}[|X_{1}|^{2+\gamma}]}{i^{\beta(1+\gamma)}}=O(n^{1-\beta(1+\gamma)})=o(a_{n}).
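Both series estimates rely only on the choice 1/(2+\gamma)<\beta<1/2 of the truncation exponent; checking the exponents:

```latex
% Truncation series: \beta(2+\gamma) > 1 because \beta > 1/(2+\gamma).
% Centering bound: \beta > \tfrac{1}{2+\gamma} \geq \tfrac{1}{2(1+\gamma)}, so
1-\beta(1+\gamma) < 1-\tfrac{1+\gamma}{2(1+\gamma)} = \tfrac12,
\quad\text{hence}\quad
n^{1-\beta(1+\gamma)} = o\big(\sqrt{2n\log\log n}\big) = o(a_n).
```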

By the countable sub-additivity of \mathbb{V}^{\ast}, (5.1) will follow if we have shown that

\mathbb{V}^{\ast}\left(\limsup_{n\to\infty}\frac{\sum_{k=1}^{n}(Y_{k}-\widehat{\mathbb{E}}[Y_{k}])}{a_{n}}>(1+\epsilon)^{2}\right)=0,\;\;\forall\epsilon>0. (5.7)

Now, for given \epsilon with 0<\epsilon<1/2, let n_{k}=[e^{k^{1-\alpha}}], where 0<\alpha<\frac{\epsilon}{1+\epsilon}. Then n_{k+1}/n_{k}\to 1 and \frac{n_{k+1}-n_{k}}{n_{k}}\approx\frac{C}{k^{\alpha}}. For n_{k}<n\leq n_{k+1}, we have

\displaystyle\sum_{i=1}^{n}(Y_{i}-\widehat{\mathbb{E}}[Y_{i}])\leq\sum_{i=1}^{n_{k}}(Y_{i}-\widehat{\mathbb{E}}[Y_{i}])+\max_{n_{k}<m\leq n_{k+1}}\Big(\sum_{i=n_{k}+1}^{m}(Y_{i}-\widehat{\mathbb{E}}[Y_{i}])\Big)^{+}=:I_{k}+II_{k}.
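The stated rate \frac{n_{k+1}-n_{k}}{n_{k}}\approx\frac{C}{k^{\alpha}} can be checked directly from n_{k}=[e^{k^{1-\alpha}}]:

```latex
\log n_{k+1} - \log n_k \approx (k+1)^{1-\alpha} - k^{1-\alpha}
  \approx (1-\alpha)k^{-\alpha} \to 0,
% so that
\frac{n_{k+1}}{n_k} \approx e^{(1-\alpha)k^{-\alpha}}
  = 1 + (1-\alpha)k^{-\alpha} + O(k^{-2\alpha}),
% i.e., (n_{k+1}-n_k)/n_k \approx C k^{-\alpha} with C = 1-\alpha.
```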

For the second term, by applying Lemma 5.1 we have

\displaystyle\mathbb{V}\left(II_{k}\geq\delta a_{n_{k}}\right)\leq\frac{\widehat{\mathbb{E}}\left[II_{k}^{p}\right]}{(\delta a_{n_{k}})^{p}}
\leq c\frac{(n_{k+1}-n_{k})\big(\log(n_{k+1}-n_{k})\big)^{p}}{a_{n_{k}}^{p}}\max_{n_{k}+1\leq i\leq n_{k+1}}C_{\mathbb{V}}(|Y_{i}|^{p})+c\left(\frac{n_{k+1}-n_{k}}{a_{n_{k}}^{2}}\max_{n_{k}+1\leq i\leq n_{k+1}}\widehat{\mathbb{E}}[|Y_{i}|^{2}]\right)^{p/2}
\leq c\frac{(n_{k+1}-n_{k})\big(\log(n_{k+1}-n_{k})\big)^{p}}{a_{n_{k}}^{p}}n_{k+1}^{\beta(p-2)}C_{\mathbb{V}}(X_{1}^{2})+c\left(\frac{n_{k+1}-n_{k}}{n_{k}\log\log n_{k}}\right)^{p/2}
\leq c(\log n_{k})^{p}n_{k}^{-(p-2)(1/2-\beta)}+c\left(\frac{1}{k^{\alpha}}\right)^{p/2}.

It follows that \sum_{k=1}^{\infty}\mathbb{V}\left(II_{k}\geq\delta a_{n_{k}}\right)<\infty for all \delta>0, provided the integer p>2 is chosen such that \alpha p/2>1. Hence,

\mathbb{V}^{\ast}\left(\Big\{\frac{II_{k}}{a_{n_{k}}}>\delta\Big\}\;i.o.\right)=0,\;\;\forall\delta>0. (5.8)

Finally, we consider the term I_{k}. Let y=2b_{n_{k}} and x=(1+\epsilon)^{2}a_{n_{k}}. Then |Y_{i}-\widehat{\mathbb{E}}[Y_{i}]|\leq y and xy=o(n_{k}). By (3.1), we have

\displaystyle\mathbb{V}\left(I_{k}\geq(1+\epsilon)^{2}a_{n_{k}}\right)\leq\exp\left\{-\frac{(1+\epsilon)^{4}a_{n_{k}}^{2}}{2\big(o(n_{k})+\sum_{i=1}^{n_{k}}\widehat{\mathbb{E}}[|Y_{i}-\widehat{\mathbb{E}}[Y_{i}]|^{2}]\big)}\left(1+\frac{2}{3}\ln(1+o(1))\right)\right\}.
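The claim xy=o(n_{k}) is where the restriction \beta<1/2 enters; a routine check:

```latex
xy = 2(1+\epsilon)^{2} a_{n_k} b_{n_k}
   = 2(1+\epsilon)^{2}\sqrt{2 n_k \log\log n_k}\; n_k^{\beta}
   = O\big(n_k^{\frac12+\beta}\sqrt{\log\log n_k}\big)
   = o(n_k),
% since \tfrac12 + \beta < 1.
```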

Since

\left|\widehat{\mathbb{E}}[X_{i}^{2}]-\widehat{\mathbb{E}}[Y_{i}^{2}]\right|\leq\widehat{\mathbb{E}}\big[|X_{i}^{2}-Y_{i}^{2}|\big]=\widehat{\mathbb{E}}[(X_{1}^{2}-b_{i}^{2})^{+}]\to 0,\text{ as }i\to\infty

and |\widehat{\mathbb{E}}[Y_{i}]|\to 0 as i\to\infty, we have \sum_{i=1}^{n_{k}}\widehat{\mathbb{E}}[|Y_{i}-\widehat{\mathbb{E}}[Y_{i}]|^{2}]\leq(1+\epsilon/2)n_{k}\widehat{\mathbb{E}}[X_{1}^{2}]=(1+\epsilon/2)n_{k} for k large enough. It follows that

\displaystyle\sum_{k=k_{0}}^{\infty}\mathbb{V}\left(I_{k}\geq(1+\epsilon)^{2}a_{n_{k}}\right)\leq\sum_{k=k_{0}}^{\infty}\exp\left\{-(1+\epsilon)^{2}\log\log n_{k}\right\}\leq\sum_{k=k_{0}}^{\infty}\frac{c}{k^{(1+\epsilon)(1-\alpha)}}<\infty

since (1+\epsilon)(1-\alpha)>1 by the choice \alpha<\frac{\epsilon}{1+\epsilon}. By the countable sub-additivity and the Borel-Cantelli Lemma again, it follows that

\mathbb{V}^{\ast}\left(\Big\{\frac{I_{k}}{a_{n_{k}}}>(1+\epsilon)^{2}\Big\}\;i.o.\right)=0. (5.9)
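In the series bound for I_{k} above, the comparison with \sum_{k}c\,k^{-(1+\epsilon)(1-\alpha)} comes from the growth of n_{k}:

```latex
\log\log n_k \approx \log k^{1-\alpha} = (1-\alpha)\log k,
% so, using (1+\epsilon)^2 \geq 1+\epsilon,
\exp\{-(1+\epsilon)^{2}\log\log n_k\}
  \leq \exp\{-(1+\epsilon)(1-\alpha)\log k + O(1)\}
  = \frac{c}{k^{(1+\epsilon)(1-\alpha)}}.
```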

Combining (5.8) and (5.9) yields (5.7). The proof is completed. ∎

Acknowledgements

This work was supported by grants from the NSF of China (No. 11225104), the 973 Program (No. 2015CB352302) and the Fundamental Research Funds for the Central Universities.


References

  • [2] Chen, Y. Q., Chen, A. Y. and Ng, K. W. (2010), The strong law of large numbers for extended negatively dependent random variables, Journal of Applied Probability, 47(4): 908-922
  • [3] Chen, Z. J. (2016), Strong laws of large numbers for sub-linear expectations, Science in China-Mathematics, 59(5): 945-954. arXiv:1006.0749 [math.PR].
  • [4] Chen, Z. J. and Hu, F. (2014), A law of the iterated logarithm for sublinear expectations, Journal of Financial Engineering, 1, No.02. arXiv: 1103.2965v2[math.PR].
  • [5] Chen, Z. J., Wu, P. Y. and Li, B. M. (2013), A strong law of large numbers for nonadditive probabilities, International Journal of Approximate Reasoning, 54: 365-377.
  • [6] Block, H. W., Savits, T. H. and Shaked, M. (1982), Some concepts of negative dependence, Ann. Probab., 10: 765-772.
  • [7] Denis, L. and Martini, C. (2006), A theoretical framework for the pricing of contingent claims in the presence of model uncertainty, Ann. Appl. Probab., 16(2): 827-852.
  • [8] Gilboa, I. (1987), Expected utility theory with purely subjective non-additive probabilities, J. Math. Econom., 16: 65-68.
  • [9] Joag-Dev, K. and Proschan, F. (1983), Negative association of random variables with applications, Ann. Statist., 11 (1): 286-295.
  • [10] Lehmann, E. (1966), Some concepts of dependence, Ann. Math. Statist., 37 (5):1137-1153.
  • [11] Liu, L. (2009), Precise large deviations for dependent random variables with heavy tails, Statist. Prob. Lett., 79: 1290-1298.
  • [12] Maccheroni, F. and Marinacci, M. (2005), A strong law of large number for capacities, Ann. Probab., 33: 1171-1178.
  • [13] Marinacci, M. (1999), Limit laws for non-additive probabilities and their frequentist interpretation, J. Econom. Theory, 84: 145-195.
  • [14] Matula, P. (1992), A note on the almost sure convergence of sums of negatively dependent random variables, Statist. Probab. Lett., 15: 209-213.
  • [15] Móricz, F. (1982), A general moment inequality for the maximum of partial sums of single series, Acta Sci. Math. Szeged, 44: 67-75.
  • [16] Newman, C. M. (1984), Asymptotic independence and limit theorems for positively and negatively dependent random variables, in Inequalities in Statistics and Probability (ed. Tong, Y. L.), IMS Lecture Notes-Monograph Series, Vol. 5, 127-140.
  • [17] Newman, C. M. and Wright, A. L. (1981), An invariance principle for certain dependent sequences, Ann. Probab., 9: 671-675.
  • [18] Peng, S. (1999), Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer type, Probab. Theory Related Fields, 113: 473-499.
  • [19] Peng, S. (2006), G-expectation, G-Brownian motion and related stochastic calculus of Ito type, Proceedings of the 2005 Abel Symposium.
  • [20] Peng, S. (2008a), Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stochastic Process. Appl., 118(12): 2223-2253.
  • [21] Peng, S. (2008b), A new central limit theorem under sublinear expectations, Preprint: arXiv:0803.2656v1 [math.PR]
  • [22] Peng, S. (2009), Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations, Science in China Ser. A, 52(7): 1391-1411.
  • [23] Peng, S. G. (2010), Nonlinear Expectations and Stochastic Calculus under Uncertainty, arXiv:1002.4546 [math.PR].
  • [24] Shao, Q. M. (2000) A Comparison theorem on moment inequalities between negatively associated and independent random variables, J. Theort. Probab., 13: 343-356.
  • [25] Shao, Q. M. and Su, C. (1999), The law of the iterated logarithm for negatively associated random variables, Stochastic Process. Appl., 86: 139-148.
  • [26] Su, C., Zhao, L. C., and Wang, Y. B. (1997), Moment inequalities and weak convergence for negatively associated sequences, Science in China Ser A, 40: 172-182.
  • [27] Terán, P. (2014), Laws of large numbers without additivity, Tran. Amer. Math. Soc., 366: 5431-5451.
  • [28] Zhang, L. X. (2001a), A Strassen’s law of the iterated logarithm for negatively associated random vectors, Stoch. Process. Appl., 95: 311-328
  • [29] Zhang, L. X. (2001b), The weak convergence for functions of negatively associated random variables, J. Mult. Anal., 78: 272-298.
  • [30] Zhang, L. X. (2015a), Exponential inequalities under sub-linear expectations with applications to laws of the iterated logarithm, Manuscript, arXiv:1409.0285 [math.PR].
  • [31] Zhang, L. X. (2015b), Donsker’s invariance principle under the sub-linear expectation with an application to Chung’s law of the iterated logarithm, Communications in Math. Stat., 3: 187-214. arXiv:1503.02845 [math.PR]
  • [32] Zhang, L. X. (2016), Rosenthal’s inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Science in China-Mathematics, 59(4):751-768. arXiv:1408.5291 [math.PR].