On Hypothesis Testing for Poisson Processes. Regular Case
Abstract
We consider the problem of hypothesis testing in the situation when the first hypothesis is simple and the second one is a local one-sided composite alternative. We describe the choice of the thresholds and the power functions of the Score Function test, the General Likelihood Ratio test, the Wald test, and two Bayes tests in the situation when the intensity function of the observed inhomogeneous Poisson process is smooth with respect to the parameter. It is shown that almost all these tests are asymptotically uniformly most powerful. The results of numerical simulations are also presented.
MSC 2010 Classification: 62M02, 62F03, 62F05.
Key words: Hypothesis testing, inhomogeneous Poisson processes, asymptotic theory, composite alternatives, regular situation.
1 Introduction
The hypothesis testing theory is a well-developed branch of mathematical statistics [12]. The asymptotic approach allows one to find satisfactory solutions in many different situations. The simplest problems, like testing two simple hypotheses, have well-known solutions. Recall that if we fix the first-type error and seek the test which maximizes the power, then we obtain immediately (by the Neyman-Pearson lemma) the most powerful test based on the likelihood ratio statistic. The case of a composite alternative is more difficult to treat, and here an asymptotic solution is available in the regular case. It is possible, using, for example, the Score Function test (SFT), to construct an asymptotically (locally) most powerful test. Moreover, the General Likelihood Ratio Test (GLRT) and the Wald test (WT) based on the maximum likelihood estimator are asymptotically most powerful in the same sense. In the non-regular cases the situation becomes much more complex. First of all, there are different non-regular (singular) situations. Moreover, in all these situations, the choice of the asymptotically best test remains an open question.
This work is an attempt to study all these situations on the model of inhomogeneous Poisson processes. This model is sufficiently simple to allow us to carry out the construction of the well-known tests (SFT, GLRT, WT) and to verify that these tests are asymptotically most powerful for this model as well, in the case when it is regular. In the next paper we study the behavior of these tests in the case when the model is singular. The "evolution of the singularity" of the intensity function is the following: regular case (finite Fisher information, this paper), continuous but not differentiable (cusp-type singularity, [4]), discontinuous (jump-type singularity, [4]). In all three cases we describe the tests analytically. More precisely, we describe the test statistics, the choice of the thresholds and the behavior of the power functions for local alternatives.
Note that the notion of local alternatives differs according to the type of regularity/singularity. Suppose we want to test the simple hypothesis $\mathscr H_1:\ \vartheta=\vartheta_1$ against the one-sided alternative $\mathscr H_2:\ \vartheta>\vartheta_1$. In the regular case, the local alternatives are usually given by $\vartheta=\vartheta_1+\frac{u}{\sqrt n}$, $u>0$. In the case of a cusp-type singularity of order $\varkappa$, the local alternatives are introduced by $\vartheta=\vartheta_1+\frac{u}{n^{1/(2\varkappa+1)}}$, $u>0$. As to the case of a jump-type singularity, the local alternatives are $\vartheta=\vartheta_1+\frac{u}{n}$, $u>0$. In all these problems, the question most interesting to us is the comparison of the power functions of different tests. In the singular cases, the comparison is done with the help of numerical simulations. The main results concern the limit likelihood ratios in the non-regular situations. Let us note that in many other models of observations (i.i.d., time series, diffusion processes, etc.) the likelihood ratios have the same limits as here (see, for example, [6] and [2]). Therefore, the results presented here are of a more universal nature and are valid for any other (not necessarily Poissonian) model having one of the limit likelihood ratios considered here.
We recall that $X=\{X(t),\ t\ge 0\}$ is an inhomogeneous Poisson process with intensity function $\lambda(t)$, $t\ge 0$, if $X(0)=0$ and the increments of $X$ on disjoint intervals are independent and distributed according to the Poisson law
$$\mathbf P\bigl\{X(t)-X(s)=k\bigr\}=\frac{1}{k!}\left(\int_s^t\lambda(v)\,\mathrm dv\right)^{k}\exp\left\{-\int_s^t\lambda(v)\,\mathrm dv\right\},\qquad 0\le s<t.$$
In all statistical problems considered in this work, the intensity functions are periodic with some known period $\tau$ and depend on a one-dimensional parameter, that is, $\lambda(t)=\lambda(\vartheta,t)$. The basic hypothesis and the alternative are always the same: $\mathscr H_1:\ \vartheta=\vartheta_1$ and $\mathscr H_2:\ \vartheta>\vartheta_1$. The diversity of statements corresponds to different types of regularity/singularity of the function $\lambda(\vartheta,t)$. The case of an unknown period needs a special study.
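For numerical experiments with such processes, a standard way to sample an inhomogeneous Poisson process with a bounded intensity is Lewis-Shedler thinning. A minimal Python sketch (the intensity $\lambda(t)=2+\cos t$ with period $\tau=2\pi$ is an arbitrary illustration of ours, not taken from the paper):

```python
import math
import random

def simulate_poisson(lam, lam_max, tau, rng):
    """Lewis-Shedler thinning: sample the event times of an inhomogeneous
    Poisson process with intensity lam(t) <= lam_max on [0, tau]."""
    t, events = 0.0, []
    while True:
        # candidate points form a homogeneous Poisson process of rate lam_max
        t += rng.expovariate(lam_max)
        if t > tau:
            return events
        # keep each candidate with probability lam(t) / lam_max
        if rng.random() * lam_max < lam(t):
            events.append(t)

rng = random.Random(0)
lam = lambda t: 2.0 + math.cos(t)   # example of a periodic intensity, period 2*pi
tau = 2.0 * math.pi                 # observe one period
mean_count = sum(len(simulate_poisson(lam, 3.0, tau, rng))
                 for _ in range(2000)) / 2000
# mean_count estimates E[X(tau)] = int_0^tau lam(t) dt = 4*pi, about 12.57
```

The averaged event count over many replications should approach $\int_0^\tau\lambda(t)\,\mathrm dt$, which gives a quick sanity check of the sampler.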
The hypothesis testing problems (or closely related properties of the likelihood ratio) for inhomogeneous Poisson processes were studied by many authors (see, for example, Brown [1], Kutoyants [7], Léger and Wolfson [11], Liese and Lorz [14], Sung et al. [16], Fazli and Kutoyants [5], Dachian and Kutoyants [3] and the references therein). Note finally that the results of this study will appear later in the work [9].
2 Auxiliary results
For simplicity of exposition we consider the model of $n$ independent observations of an inhomogeneous Poisson process: $X^{(n)}=(X_1,\dots,X_n)$, where $X_j=\{X_j(t),\ 0\le t\le\tau\}$, $j=1,\dots,n$, are Poisson processes with intensity function $\lambda(\vartheta,t)$, $0\le t\le\tau$. Here $\vartheta\in\Theta=[\vartheta_1,b)$ is a one-dimensional parameter. We have
$$\mathbf E_\vartheta\,X_j(t)=\Lambda(\vartheta,t)=\int_0^t\lambda(\vartheta,s)\,\mathrm ds,$$
where $\mathbf E_\vartheta$ is the mathematical expectation in the case when the true value is $\vartheta$. Note that this model is equivalent to the one where we observe an inhomogeneous Poisson process $X^T=\{X(t),\ 0\le t\le T\}$ with periodic intensity $\lambda(\vartheta,t+j\tau)=\lambda(\vartheta,t)$, $j=1,2,\dots$, and $T=n\tau$ (the period $\tau$ is supposed to be known). Indeed, if we put $X_j(t)=X\bigl((j-1)\tau+t\bigr)-X\bigl((j-1)\tau\bigr)$, $0\le t\le\tau$, $j=1,\dots,n$, then the observation of one trajectory $X^T$ is equivalent to the $n$ independent observations $X^{(n)}=(X_1,\dots,X_n)$.
The intensity function $\lambda(\vartheta,t)$ is supposed to be separated from zero on $[0,\tau]$. The measures corresponding to Poisson processes with different values of $\vartheta$ are equivalent. The likelihood function is defined by the equality (see Liese [13])
$$L\bigl(\vartheta,X^{(n)}\bigr)=\exp\left\{\sum_{j=1}^n\int_0^\tau\ln\lambda(\vartheta,t)\,\mathrm dX_j(t)-n\int_0^\tau\bigl[\lambda(\vartheta,t)-1\bigr]\,\mathrm dt\right\},$$
and the likelihood ratio function is
$$Z_n(\vartheta)=\frac{L\bigl(\vartheta,X^{(n)}\bigr)}{L\bigl(\vartheta_1,X^{(n)}\bigr)},\qquad\vartheta\in\Theta.$$
We have to test the following two hypotheses:
$$\mathscr H_1:\ \vartheta=\vartheta_1,\qquad \mathscr H_2:\ \vartheta>\vartheta_1.$$
A test $\bar\psi_n=\bar\psi_n\bigl(X^{(n)}\bigr)$ is defined as the probability to accept the hypothesis $\mathscr H_2$. Its power function is $\beta\bigl(\bar\psi_n,\vartheta\bigr)=\mathbf E_\vartheta\,\bar\psi_n\bigl(X^{(n)}\bigr)$, $\vartheta>\vartheta_1$.
Denote by $\mathscr K_\varepsilon$ the class of tests $\bar\psi_n$ of asymptotic size $\varepsilon\in(0,1)$:
$$\mathscr K_\varepsilon=\Bigl\{\bar\psi_n:\ \lim_{n\to\infty}\mathbf E_{\vartheta_1}\bar\psi_n=\varepsilon\Bigr\}.$$
Our goal is to construct tests which belong to this class and have some properties of asymptotic optimality.
The comparison of tests can be done by comparison of their power functions. It is known that for any reasonable test and for any fixed alternative the power function tends to 1. To avoid this difficulty, as usual, we consider close or contiguous alternatives. We put $\vartheta=\vartheta_1+\varphi_n u$, where $\varphi_n\to0$ and $u\ge0$. The rate of convergence $\varphi_n\to0$ must be chosen so that the normalized likelihood ratio
$$Z_n(u)=\frac{L\bigl(\vartheta_1+\varphi_n u,X^{(n)}\bigr)}{L\bigl(\vartheta_1,X^{(n)}\bigr)},\qquad u\ge0,$$
has a non-degenerate limit. In the regular case this rate is usually $\varphi_n=n^{-1/2}$.
Then the initial problem of hypothesis testing can be rewritten as
$$\mathscr H_1:\ u=0,\qquad \mathscr H_2:\ u>0.$$
The power function of a test is now denoted as
$$\beta\bigl(\bar\psi_n,u\bigr)=\mathbf E_{\vartheta_1+\varphi_n u}\,\bar\psi_n,\qquad u\ge0.$$
The asymptotic optimality of tests is introduced with the help of the following definition (see [15]).
Definition 1.
We call a test $\psi_n^\star\in\mathscr K_\varepsilon$ locally asymptotically uniformly most powerful (LAUMP) in the class $\mathscr K_\varepsilon$ if its power function satisfies the following relation: for any test $\bar\psi_n\in\mathscr K_\varepsilon$ and any $K>0$ we have
$$\varliminf_{n\to\infty}\ \inf_{0\le u\le K}\ \Bigl[\beta\bigl(\psi_n^\star,u\bigr)-\beta\bigl(\bar\psi_n,u\bigr)\Bigr]\ge0.$$
Below we show that in the regular case many tests are LAUMP. In the next paper [4], where we consider some singular situations, a “reasonable” definition of asymptotic optimality of tests is still an open question. That is why we use numerical simulations to compare the tests in [4].
We assume that the following Regularity conditions are satisfied.
Smoothness. The intensity function $\lambda(\vartheta,t)$, $0\le t\le\tau$, of the observed Poisson process is two times continuously differentiable w.r.t. $\vartheta$, is separated from zero uniformly in $\vartheta\in\Theta$ and $t\in[0,\tau]$, and the Fisher information is positive:
$$I(\vartheta)=\int_0^\tau\frac{\dot\lambda(\vartheta,t)^2}{\lambda(\vartheta,t)}\,\mathrm dt>0,\qquad\vartheta\in\Theta.$$
Here $\dot\lambda(\vartheta,t)$ denotes the derivative of $\lambda(\vartheta,t)$ w.r.t. $\vartheta$; at the point $\vartheta_1$, the derivative is understood as the right-hand derivative.
Distinguishability. For any $\nu>0$, we have
$$\inf_{\vartheta-\vartheta_1\ge\nu}\bigl\|\lambda(\vartheta,\cdot)-\lambda(\vartheta_1,\cdot)\bigr\|^2>0.$$
Here $\|\cdot\|$ denotes the $L_2[0,\tau]$ norm: $\|f\|^2=\int_0^\tau f(t)^2\,\mathrm dt$.
In this case, the natural normalization function is $\varphi_n=\bigl(nI(\vartheta_1)\bigr)^{-1/2}$ and the change of variables is $\vartheta=\vartheta_1+\varphi_n u$.
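For a concrete intensity, the Fisher information is an ordinary one-dimensional integral and is easy to evaluate numerically, which can be used to check the normalization $\varphi_n$. A sketch using the midpoint rule (the multiplicative family $\lambda(\vartheta,t)=\vartheta(2+\cos t)$ on $[0,2\pi]$, which has the closed form $I(\vartheta)=4\pi/\vartheta$, is our own illustrative choice, not taken from the paper):

```python
import math

def fisher_info(lam, dlam, theta, tau, m=200_000):
    """Approximate I(theta) = int_0^tau dlam(theta,t)^2 / lam(theta,t) dt
    by the midpoint rule with m subintervals."""
    h = tau / m
    return sum(
        dlam(theta, (k + 0.5) * h) ** 2 / lam(theta, (k + 0.5) * h)
        for k in range(m)
    ) * h

# multiplicative family lam(theta, t) = theta * (2 + cos t):
lam = lambda th, t: th * (2.0 + math.cos(t))
dlam = lambda th, t: 2.0 + math.cos(t)   # derivative w.r.t. theta
I = fisher_info(lam, dlam, 2.0, 2.0 * math.pi)
# for this family dlam^2 / lam = (2 + cos t) / theta, so I(theta) = 4*pi / theta
```

For smooth periodic integrands the midpoint rule converges very fast, so even a moderate grid reproduces the closed form to high accuracy.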
The key property of statistical problems in the regular case is the local asymptotic normality (LAN) of the family of measures of the corresponding inhomogeneous Poisson processes at the point $\vartheta_1$. This means that the normalized likelihood ratio
$$Z_n(u)=\frac{L\bigl(\vartheta_1+\varphi_n u,X^{(n)}\bigr)}{L\bigl(\vartheta_1,X^{(n)}\bigr)}$$
admits the representation
$$Z_n(u)=\exp\Bigl\{u\,\Delta_n\bigl(\vartheta_1,X^{(n)}\bigr)-\frac{u^2}{2}+r_n\Bigr\},$$
where (using the central limit theorem) we have
$$\Delta_n\bigl(\vartheta_1,X^{(n)}\bigr)=\frac{1}{\sqrt{nI(\vartheta_1)}}\sum_{j=1}^n\int_0^\tau\frac{\dot\lambda(\vartheta_1,t)}{\lambda(\vartheta_1,t)}\,\bigl[\mathrm dX_j(t)-\lambda(\vartheta_1,t)\,\mathrm dt\bigr]\Longrightarrow\Delta\sim\mathcal N(0,1)$$
(convergence in distribution under $\vartheta_1$), and $r_n\to0$ (convergence in probability under $\vartheta_1$). Moreover, the last convergence is uniform in $u\in[0,K]$ for any $K>0$.
Let us now briefly recall how this representation was obtained in [7]. Denoting $\vartheta_u=\vartheta_1+\varphi_n u$ and $\ell(\vartheta,t)=\ln\lambda(\vartheta,t)$, with the help of the Taylor series expansion we can write
In the sequel, we choose reparametrizations which lead to limits that are universal in some sense. For example, in the regular case, we put
$$\varphi_n=\frac{1}{\sqrt{nI(\vartheta_1)}}.$$
With this change of variables, we have
where
The LAN families have many remarkable properties and some of them will be used below.
Let us recall here one general result which is valid in a more general situation. We suppose only that the normalized likelihood ratio converges to some limit in distribution. Note that this is the case in all our regular and singular problems. The following property allows us to calculate the distribution of a statistic under a local alternative when we know its distribution under the null hypothesis. Moreover, it gives an efficient algorithm for calculating power functions in numerical simulations.
Lemma 1 (Le Cam’s Third Lemma).
Suppose that the pair $\bigl(\Delta_n,\ln Z_n(u)\bigr)$ converges in distribution under $\mathscr H_1$:
$$\bigl(\Delta_n,\ln Z_n(u)\bigr)\Longrightarrow\bigl(\Delta,\ln Z(u)\bigr).$$
Then, for any bounded continuous function $f$, we have
$$\mathbf E_{\vartheta_1+\varphi_n u}\,f\bigl(\Delta_n\bigr)\longrightarrow\mathbf E\,\bigl[f(\Delta)\,Z(u)\bigr].$$
For the proof see [10].
In the regular case, the limit of $Z_n(u)$ is the random function
$$Z(u)=\exp\Bigl\{u\Delta-\frac{u^2}{2}\Bigr\},\qquad u\ge0,\qquad\text{where }\Delta\sim\mathcal N(0,1).$$
So, for any fixed $u$, we have the convergence
$$\bigl(\Delta_n,\ln Z_n(u)\bigr)\Longrightarrow\Bigl(\Delta,\,u\Delta-\frac{u^2}{2}\Bigr).$$
According to this lemma, we can write the following relations for the characteristic function of $\Delta_n$:
$$\mathbf E_{\vartheta_1+\varphi_n u}\,e^{it\Delta_n}\longrightarrow\mathbf E\,e^{it\Delta}\,Z(u)=\mathbf E\,\exp\Bigl\{(it+u)\Delta-\frac{u^2}{2}\Bigr\}=\exp\Bigl\{itu-\frac{t^2}{2}\Bigr\},$$
which yields the asymptotic distribution of the statistic $\Delta_n$ under the alternative $\vartheta_u=\vartheta_1+\varphi_n u$:
$$\Delta_n\Longrightarrow\mathcal N(u,1).$$
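The mechanism behind these relations can be checked by a small Monte Carlo experiment: draws of $\Delta\sim\mathcal N(0,1)$ (the null limit law) reweighted by the limit likelihood ratio $Z(u)=\exp\{u\Delta-u^2/2\}$ reproduce the moments of the $\mathcal N(u,1)$ law arising under the alternative. A sketch (sample size and seed are arbitrary choices of ours):

```python
import math
import random

rng = random.Random(1)
u = 1.0
N = 200_000

# Sample Delta ~ N(0,1), its law under the null, and weight each draw by
# the limit likelihood ratio Z(u) = exp(u*Delta - u^2/2).
acc_w = acc_wd = 0.0
for _ in range(N):
    d = rng.gauss(0.0, 1.0)
    w = math.exp(u * d - 0.5 * u * u)
    acc_w += w
    acc_wd += w * d

mean_weight = acc_w / N     # estimates E[Z(u)] = 1
shifted_mean = acc_wd / N   # estimates E[Delta * Z(u)] = u, the mean of N(u,1)
```

The same reweighting trick is what makes Le Cam's Third Lemma useful for computing power functions in simulations: one simulates only under the null and reweights.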
3 Weak convergence
All the tests considered in this paper are functionals of the normalized likelihood ratio $Z_n(\cdot)$. For each of them, we have to evaluate two quantities. The first one is the threshold, which guarantees the desired asymptotic size of the test, and the second one is the limit power function, which has to be calculated under the alternative. Our study is based on the weak convergence of the likelihood ratio under the hypothesis (to calculate the threshold) and under the alternative (to calculate the limit power function). Note that the test statistics of all the tests are continuous functionals of $Z_n(\cdot)$. That is why the weak convergence of $Z_n(\cdot)$ allows us to obtain the limit distributions of these statistics.
We denote by $\mathbf P_\vartheta^{(n)}$ the distribution that the observed inhomogeneous Poisson processes induce on the measurable space of their realizations. The measures in the family $\bigl\{\mathbf P_\vartheta^{(n)},\ \vartheta\in\Theta\bigr\}$ are equivalent, and the normalized likelihood ratio is
$$Z_n(u)=\frac{L\bigl(\vartheta_1+\varphi_n u,X^{(n)}\bigr)}{L\bigl(\vartheta_1,X^{(n)}\bigr)},$$
where $u\in U_n=\bigl[0,(b-\vartheta_1)\varphi_n^{-1}\bigr)$. We define $Z_n(u)$ to be linearly decreasing to zero on the interval $\bigl[(b-\vartheta_1)\varphi_n^{-1},(b-\vartheta_1)\varphi_n^{-1}+1\bigr]$ and we put $Z_n(u)=0$ for $u>(b-\vartheta_1)\varphi_n^{-1}+1$. Now the random function $Z_n(\cdot)$ is defined on $\mathbb R_+=[0,+\infty)$ and belongs to the space $\mathscr C_0(\mathbb R_+)$ of functions continuous on $\mathbb R_+$ such that $z(u)\to0$ as $u\to\infty$. Introduce the uniform metric in this space and denote by $\mathfrak B$ the corresponding Borel sigma-algebra. The next theorem describes the weak convergence under the alternative $\vartheta_{u^*}=\vartheta_1+\varphi_n u^*$ (with fixed $u^*\ge0$) of the stochastic process $Z_n=\{Z_n(u),\ u\in\mathbb R_+\}$ to the process
$$Z=\{Z(u),\ u\in\mathbb R_+\},\qquad Z(u)=\exp\Bigl\{u\Delta-\frac{u^2}{2}\Bigr\},\qquad\Delta\sim\mathcal N(u^*,1),$$
in the measurable space $\bigl(\mathscr C_0(\mathbb R_+),\mathfrak B\bigr)$. Note that in [8] this theorem was proved for a fixed true value $\vartheta$. In the hypothesis testing problems considered here, we need this convergence both under the hypothesis $\mathscr H_1$, that is, for the fixed true value $\vartheta_1$ ($u^*=0$), and under the alternative with "moving" true value $\vartheta_{u^*}$ ($u^*>0$).
Theorem 1.
Let us suppose that the Regularity conditions are fulfilled. Then, under the alternative $\vartheta_{u^*}$, we have the weak convergence of the stochastic process $Z_n$ to the process $Z$.
According to [6, Theorem 1.10.1], to prove this theorem it is sufficient to verify the following three properties of the process $Z_n$.
1. The finite-dimensional distributions of $Z_n$ converge, under the alternative $\vartheta_{u^*}$, to the finite-dimensional distributions of $Z$.
2. The inequality
$$\mathbf E_{\vartheta_{u^*}}\Bigl|Z_n^{1/2}(u_1)-Z_n^{1/2}(u_2)\Bigr|^2\le C\,|u_1-u_2|^2$$
holds for every $u_1,u_2\in U_n$ and some constant $C>0$.
3. There exists $\varkappa>0$, such that for some $n_0$ and all $n\ge n_0$, $u\in U_n$, we have the estimate
$$\mathbf E_{\vartheta_{u^*}}Z_n^{1/2}(u)\le e^{-\varkappa u^2}.$$
Let us rewrite the random function $\ln Z_n(u)$ as follows:
For the first term we have
Therefore we only need to check the conditions 2–3 for the term
Lemma 2.
The finite-dimensional distributions of $Z_n$ converge, under the alternative $\vartheta_{u^*}$, to the finite-dimensional distributions of $Z$.
Proof.
Lemma 3.
Let the Regularity conditions be fulfilled. Then there exists a constant $C>0$, such that
$$\mathbf E_{\vartheta_{u^*}}\Bigl|Z_n^{1/2}(u_1)-Z_n^{1/2}(u_2)\Bigr|^2\le C\,|u_1-u_2|^2$$
for all $u_1,u_2\in U_n$ and sufficiently large values of $n$.
Proof.
Lemma 4.
Let the Regularity conditions be fulfilled. Then there exists a constant $\varkappa>0$, such that
$$\mathbf E_{\vartheta_{u^*}}Z_n^{1/2}(u)\le e^{-\varkappa u^2}\qquad(1)$$
for all $u\in U_n$ and sufficiently large values of $n$.
Proof.
Using the Markov inequality, we get
According to [8, Lemma 1.1.5], we have
Using the Taylor expansion we get
where $\tilde\vartheta$ is some intermediate point between the two parameter values. Hence, for sufficiently large $n$, we have the required inequality, and we obtain
(2)  
By the Distinguishability condition, we can write
and hence
and
(3) 
So, choosing a suitable $\varkappa>0$, we obtain the estimate (1). ∎
The weak convergence of $Z_n$ now follows from [6, Theorem 1.10.1].
4 Hypothesis testing
In this section, we construct the Score Function test, the General Likelihood Ratio test, the Wald test and two Bayes tests. For all these tests we describe the choice of the thresholds and evaluate the limit power functions for local alternatives.
4.1 Score Function test
Let us introduce the Score Function test (SFT)
$$\hat\psi_n\bigl(X^{(n)}\bigr)=\mathbb 1_{\bigl\{\Delta_n(\vartheta_1,X^{(n)})>z_\varepsilon\bigr\}},$$
where $z_\varepsilon$ is the $(1-\varepsilon)$-quantile of the standard normal distribution and the statistic is
$$\Delta_n\bigl(\vartheta_1,X^{(n)}\bigr)=\frac{1}{\sqrt{nI(\vartheta_1)}}\sum_{j=1}^n\int_0^\tau\frac{\dot\lambda(\vartheta_1,t)}{\lambda(\vartheta_1,t)}\,\bigl[\mathrm dX_j(t)-\lambda(\vartheta_1,t)\,\mathrm dt\bigr].$$
The SFT has the following well-known properties (see, for example, [12, Theorem 13.3.3] for the case of i.i.d. observations).
Proposition 1.
The test $\hat\psi_n$ belongs to $\mathscr K_\varepsilon$ and is LAUMP. For its power function the following convergence holds:
$$\beta\bigl(\hat\psi_n,u\bigr)\longrightarrow\beta(u)=\mathbf P\bigl\{\zeta>z_\varepsilon-u\bigr\},\qquad\zeta\sim\mathcal N(0,1).$$
Proof.
The property $\hat\psi_n\in\mathscr K_\varepsilon$ follows immediately from the asymptotic normality (under the hypothesis $\mathscr H_1$)
$$\Delta_n\bigl(\vartheta_1,X^{(n)}\bigr)\Longrightarrow\mathcal N(0,1).$$
Further, we have (under the alternative $\vartheta_u=\vartheta_1+\varphi_n u$) the convergence
$$\Delta_n\bigl(\vartheta_1,X^{(n)}\bigr)\Longrightarrow\mathcal N(u,1).$$
This follows from Le Cam's Third Lemma and can also be shown directly as follows. Suppose that the intensity of the observed Poisson process is $\lambda(\vartheta_u,t)$; then we can write
To show that the SFT is LAUMP, it is sufficient to verify that the limit of its power function coincides (for each fixed value $u$) with the limit of the power of the corresponding likelihood ratio (Neyman-Pearson) test (NPT). Recall that the NPT is the most powerful test for each fixed (simple) alternative (see, for example, Theorem 13.3 in Lehmann and Romano [12]). Of course, the NPT is not a genuine test (in our one-sided problem), since its construction requires knowing the value of the parameter under the alternative.
The NPT is defined by
$$\psi_n^\circ\bigl(X^{(n)}\bigr)=\mathbb 1_{\{Z_n(u)>d_\varepsilon\}}+q_\varepsilon\,\mathbb 1_{\{Z_n(u)=d_\varepsilon\}},$$
where the threshold $d_\varepsilon$ and the probability $q_\varepsilon$ are chosen from the condition $\psi_n^\circ\in\mathscr K_\varepsilon$, that is,
$$\mathbf E_{\vartheta_1}\psi_n^\circ=\mathbf P_{\vartheta_1}\bigl\{Z_n(u)>d_\varepsilon\bigr\}+q_\varepsilon\,\mathbf P_{\vartheta_1}\bigl\{Z_n(u)=d_\varepsilon\bigr\}\longrightarrow\varepsilon.$$
Of course, we can put $q_\varepsilon=0$ because the limit random variable has a continuous distribution function.
The threshold $d_\varepsilon$ can be found as follows. The LAN of the family of measures at the point $\vartheta_1$ allows us to write
$$\bigl\{Z_n(u)>d_\varepsilon\bigr\}=\Bigl\{u\,\Delta_n-\frac{u^2}{2}+r_n>\ln d_\varepsilon\Bigr\}=\Bigl\{\Delta_n>\frac{\ln d_\varepsilon}{u}+\frac{u}{2}+o(1)\Bigr\}.$$
Hence, choosing $\ln d_\varepsilon=u\,z_\varepsilon-\frac{u^2}{2}$, we have
$$\mathbf P_{\vartheta_1}\bigl\{Z_n(u)>d_\varepsilon\bigr\}=\mathbf P_{\vartheta_1}\bigl\{\Delta_n>z_\varepsilon+o(1)\bigr\}\longrightarrow\varepsilon.$$
Therefore the NPT
$$\psi_n^\circ\bigl(X^{(n)}\bigr)=\mathbb 1_{\{Z_n(u)>d_\varepsilon\}}$$
belongs to $\mathscr K_\varepsilon$.
For the power of the NPT we have (denoting, as usual, $\zeta\sim\mathcal N(0,1)$)
Therefore the limits of the powers of the tests $\hat\psi_n$ and $\psi_n^\circ$ coincide, that is, the Score Function test is asymptotically as good as the Neyman-Pearson optimal one. Note that these limits are valid for any sequence $u_n\in[0,K]$. So, for any test $\bar\psi_n\in\mathscr K_\varepsilon$, we can choose a sequence $u_n$ such that
which establishes the asymptotic coincidence of the two tests and concludes the proof. ∎
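The threshold choice above can be illustrated numerically. For the multiplicative family $\lambda(\vartheta,t)=\vartheta(2+\cos t)$ on $[0,2\pi]$ (our own illustrative example, not taken from the paper) one has $\dot\lambda/\lambda=1/\vartheta$ and $I(\vartheta_1)=4\pi/\vartheta_1$, so the statistic $\Delta_n$ reduces to a centred and scaled total event count, and the empirical size of the SFT can be estimated by simulation:

```python
import math
import random

def poisson(rng, mean):
    """Knuth's Poisson sampler (adequate for moderate means)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sft_statistic(counts, theta1, mass=4.0 * math.pi):
    # For lam(theta, t) = theta*(2 + cos t) on [0, 2*pi]:
    # dot-lam / lam = 1/theta, I(theta1) = mass/theta1, so Delta_n equals
    # (sum X_j(tau) - n*theta1*mass) / (theta1 * sqrt(n*mass/theta1)).
    n = len(counts)
    return (sum(counts) - n * theta1 * mass) / (theta1 * math.sqrt(n * mass / theta1))

rng = random.Random(2)
theta1, n, reps, z_eps = 1.0, 100, 2000, 1.6449  # z_eps: 0.95-quantile of N(0,1)
rejections = 0
for _ in range(reps):
    # under H1, each period contributes a Poisson(theta1 * 4*pi) event count
    counts = [poisson(rng, theta1 * 4.0 * math.pi) for _ in range(n)]
    if sft_statistic(counts, theta1) > z_eps:
        rejections += 1
size = rejections / reps
```

With $\varepsilon=0.05$ the empirical rejection rate under $\vartheta_1$ should settle near $0.05$, in agreement with the asymptotic normality of $\Delta_n$.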
4.2 GLRT and Wald test
Let us recall that the maximum likelihood estimator (MLE) $\hat\vartheta_n$ is defined by the equation
$$Z_n\bigl(\hat\vartheta_n\bigr)=\sup_{\vartheta\in\Theta}Z_n(\vartheta),$$
where the likelihood ratio function is
$$Z_n(\vartheta)=\frac{L\bigl(\vartheta,X^{(n)}\bigr)}{L\bigl(\vartheta_1,X^{(n)}\bigr)},\qquad\vartheta\in\Theta.$$
The GLRT is
$$\hat\phi_n\bigl(X^{(n)}\bigr)=\mathbb 1_{\bigl\{\ln Q_n\,>\,z_\varepsilon^2/2\bigr\}},$$
where
$$Q_n=Q\bigl(X^{(n)}\bigr)=\sup_{\vartheta>\vartheta_1}Z_n(\vartheta).$$
The Wald test is based on the MLE $\hat\vartheta_n$ and is defined as follows:
$$\phi_n^\circ\bigl(X^{(n)}\bigr)=\mathbb 1_{\bigl\{\sqrt{nI(\vartheta_1)}\,(\hat\vartheta_n-\vartheta_1)\,>\,z_\varepsilon\bigr\}}.$$
The properties of these tests are given in the following Proposition.
Proposition 2.
The tests $\hat\phi_n$ and $\phi_n^\circ$ belong to $\mathscr K_\varepsilon$, their power functions $\beta\bigl(\hat\phi_n,u\bigr)$ and $\beta\bigl(\phi_n^\circ,u\bigr)$ converge to $\beta(u)=\mathbf P\{\zeta>z_\varepsilon-u\}$, and therefore they are LAUMP.
Proof.
Let us put and denote . We have
According to Theorem 1 (with $u^*=0$), we have the weak convergence (under $\mathscr H_1$) of the measures of the stochastic processes $Z_n$ to that of the process $Z$. This provides us with the convergence of the distributions of all functionals that are continuous in the uniform metric. Hence
which yields (we suppose that $\varepsilon<1/2$, so that $z_\varepsilon>0$)
Using the same weak convergence, we obtain the asymptotic normality of the MLE (see [6] or [8]):
$$\sqrt{nI(\vartheta_1)}\,\bigl(\hat\vartheta_n-\vartheta_1\bigr)\Longrightarrow\mathcal N(0,1),$$
and hence $\phi_n^\circ\in\mathscr K_\varepsilon$. So both $\hat\phi_n$ and $\phi_n^\circ$ belong to $\mathscr K_\varepsilon$.
Now, let us fix some $u^*>0$ and study the limit behavior of the power functions of the tests.
Using the weak convergence of the likelihood ratio process under the alternative $\vartheta_{u^*}$, we have
Hence (we suppose again that $\varepsilon<1/2$),
Similarly we have
Therefore the tests are LAUMP. ∎
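To make the MLE-based constructions concrete, the following sketch maximizes the log-likelihood numerically for the family $\lambda(\vartheta,t)=2+\vartheta\cos t$, $0<\vartheta<1$ (an illustrative family of our own choosing, not taken from the paper). For this family $\int_0^{2\pi}\lambda(\vartheta,t)\,\mathrm dt=4\pi$ does not depend on $\vartheta$, so up to a constant the log-likelihood is $\sum_i\ln\lambda(\vartheta,t_i)$ over the event times; it is concave in $\vartheta$, so a golden-section search suffices:

```python
import math
import random

def log_lik(theta, event_times):
    # For lam(theta, t) = 2 + theta*cos(t) on [0, 2*pi] the compensator
    # int_0^{2*pi} lam dt = 4*pi is theta-free, so up to a constant the
    # log-likelihood is the sum of log-intensities at the event times.
    return sum(math.log(2.0 + theta * math.cos(t)) for t in event_times)

def mle(event_times, lo=0.0, hi=0.999, tol=1e-6):
    """Golden-section search for the maximizer of log_lik on [lo, hi]."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if log_lik(c, event_times) > log_lik(d, event_times):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

def simulate(theta, n, rng):
    """n periods of events sampled by thinning (here lam <= 2 + theta < 3)."""
    events = []
    for _ in range(n):
        t = 0.0
        while True:
            t += rng.expovariate(3.0)
            if t > 2.0 * math.pi:
                break
            if rng.random() * 3.0 < 2.0 + theta * math.cos(t):
                events.append(t)
    return events

rng = random.Random(3)
theta_true = 0.5
theta_hat = mle(simulate(theta_true, 200, rng))
# The Wald statistic is then sqrt(n * I(theta_1)) * (theta_hat - theta_1),
# with I(theta_1) evaluated by quadrature as in Section 2.
```

With $n=200$ periods the estimator should land close to the true value, consistently with the asymptotic normality of the MLE stated above.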
Example 1. As the family of measures is LAN, the problem is asymptotically equivalent to the corresponding hypothesis testing problem for a Gaussian model, and we propose here a similar test for Gaussian observations.
Suppose that the random variable