# On Bandlimited Spatiotemporal Field Sampling with Location and Time Unaware Mobile Sensors

Sudeep Salgia and Animesh Kumar
###### Abstract

Sampling of physical fields has been studied extensively in the literature, but the study has largely been restricted to a small class of fields, like temperature or pollution, that are essentially modelled by the standard second-order partial differential equation for diffusion. A large number of sampling techniques have been studied, from sensor networks to mobile sampling, under a variety of conditions, such as known and unknown sensor or sampling locations, and with samples affected by measurement and/or quantization noise. Certain works have also addressed time-varying fields by incorporating the differences in the known timestamps of the obtained samples.

It would be of great interest to explore fields modelled by more general constant coefficient linear partial differential equations, so as to address a larger class of fields with a more complex evolution. Accordingly, this work addresses an extremely general and challenging problem, in which such a field is sampled using an inexpensive mobile sensor for which both the locations and the timestamps of the samples are unknown. The locations and timestamps of the samples are assumed to be realizations of two independent unknown renewal processes, and the samples are further corrupted by measurement noise. In this challenging setup, the mean squared error between the original and the estimated field is shown to decrease as $O(1/n)$, where $n$ is the average sampling density of the mobile sensor.

Index Terms: Additive white noise, partial differential equations, nonuniform sampling, signal reconstruction, signal sampling, time-varying fields

## I Introduction

Sampling of smooth spatiotemporally varying fields is a problem that has been addressed in the literature for multiple reasons. Often the aim has been to estimate the sources in a diffusion field, while some papers have addressed the problem of estimating the field itself from its samples. Classical approaches have generally involved samples from distributed sensor networks corrupted with measurement noise, and often assume that the time instants of the measurements are precisely known or controlled. Sampling using a mobile sensor has received attention of late; it has been well studied for temporally fixed fields, with samples taken at locations ranging from precisely known to unknown. Certain works do consider time variation of the field, and have generally addressed it assuming known sampling locations and known timestamps.

Extensive research has been done on sampling fields that can be modelled using the diffusion equation. In fact, almost all models of spatiotemporal fields assume that the field evolves according to the standard diffusion equation. However, to the best knowledge of the authors, there has been no work on sampling fields governed by an arbitrary linear partial differential equation (PDE) with constant coefficients. It is important to note that not all PDEs are good models for physical fields; the PDEs under consideration here are those that can plausibly model a physical field. An important criterion is that the energy has to be finite, and typically decreasing with time, due to finite support considerations and the inherent "diffusive" nature of physical fields. This will be quantified in a later section of this work. Furthermore, sampling of time-varying fields has generally been studied in specific, constrained environments: uniform sampling (in space, time, or both); nonuniform sampling with precisely known spatial locations and timestamps; or unknown locations, for either a very slowly varying field or samples taken at known time instants. The primary motivation of this work is to analyze sampling of any physical field in a highly generalized setup. Such scenarios are rather common in the real world; an example is sampling of a pollution field using an inexpensive device. Adding precise knowledge of location and time to a sampling system often leads to a considerable increase in cost, and hence an inexpensive device, which cannot record the locations or the timestamps of the samples, is often used. Modelling such realistic scenarios of sampling fields without knowledge of spatial locations or time instants is another important motivation behind this work.

This work considers a very general model of a smooth field with finite support that evolves according to a known linear partial differential equation with constant coefficients. For mathematical tractability, the field is assumed to be one dimensional in space, evolving with time according to the known PDE. The smoothness of the field is modelled by its spatial bandlimitedness. However, the field need not be bandlimited in time, which in fact is often the case. It is important to note that if a field and a certain number of its temporal derivatives are known to be spatially bandlimited at $t = 0$, then it can be concluded that the field will always remain spatially bandlimited as it evolves according to the given PDE. The required number of temporal derivatives depends on the degree of the time derivative in the PDE, and this result is proven in this work (Appendix A). It is further assumed that the locations and timestamps of all samples are unknown and are realizations of two independent unknown renewal processes. The samples are also assumed to be corrupted by measurement noise which is independent of all other processes, has zero mean, and has finite variance; there are no other assumptions on the nature of the noise or its statistics. The primary way of reducing error in this setup is oversampling, as in any other setup involving spatial sampling with unknown locations. It is important to note, however, that the oversampling is only in the spatial domain and not in the time domain: the number of samples is governed solely by the spatial sampling density.

The main result of this paper is that if a spatially bandlimited field evolving according to a constant coefficient linear PDE is sampled such that the sampling locations and instants are unknown and are obtained from two independent unknown renewal processes, then the mean squared error between the field estimated from the noisy samples and the original field at $t = 0$ decreases as $O(1/n)$, where $n$ is the average sampling density, that is, the expected number of samples over the support of the field.

Prior Art: Estimating a spatiotemporally varying field, or the sources in a diffusive field, is a problem that has been addressed in the literature. Classically, the problem involved estimating sources in a diffusive field from distributed sensor networks, which often requires solving an inverse problem, i.e., inferring certain characteristics of the field, like its distribution at a given instant, from a small number of samples of the field. What makes such problems difficult is the fact that inverse problems involving the diffusion equation are known to be severely ill-conditioned [28]. Several works have thus looked at different ways to regularize the problem. Nehorai et al. [1] invoked spatial sparsity of sources and studied the detection and localization of a single vapor-emitting source using a maximum likelihood estimator. Two different approaches to the reconstruction of sparse source distributions, one involving spatial super-resolution [2] and the other an adaptive spatiotemporal sampling scheme [3], were introduced by Lu and Vetterli. Another method [4] based on Prony's method has been proposed. Ranieri et al. [5] employed compressed sensing on a discrete grid to estimate the field, which has been extended to the real line [6]. Apart from the standard diffusion PDE, the Poisson PDE has also been studied, and solutions using finite elements have been proposed [7], [8]. The scenario where spatial sparsity is not realistic has also been studied [9]. Sampling using a mobile sensor has been a topic of recent interest [10], [11]. Estimating fields using mobile sampling is a well-studied problem. It reduces to the classical sampling and interpolation problem (as described in [12], [13], [14]) if the samples are collected at precisely known locations in the absence of any measurement noise. A more generic version with precisely known locations, in the presence of both measurement and quantization noise, has also been addressed (refer [15] - [20]).
Sampling and reconstruction of bandlimited signals from samples taken at unknown locations has also been studied under a number of variations of the sampling model (see [21] - [26]). This work differs from all previous works in the following ways: (i) the field is considered to be evolving according to a known constant coefficient linear PDE, which need not be the diffusion equation; it can be any linear PDE as long as it is a feasible model for a physical field; (ii) both the sampling locations and the timestamps of the samples are unknown, unlike previous works where either the field is considered to be temporally fixed or the timestamps are assumed to be known.

Notation: The spatiotemporally varying field will be denoted by $g(x,t)$. The gradient of $g$ is defined as $\nabla g = (\partial g/\partial x, \partial g/\partial t)$. $n$ denotes the average sampling density, while $M$ is the random variable which denotes the number of samples taken over the support of the field. All vectors will be denoted in bold. The norm of a vector $\mathbf{v}$ will be denoted by $||\mathbf{v}||$. The expectation operator will be denoted by $\mathbb{E}[\cdot]$; the expectation is over all the random variables within the arguments. The trace of a matrix $Y$ will be denoted by $\mathrm{tr}(Y)$. The sets of natural numbers, integers, reals and complex numbers will be denoted by $\mathbb{N}, \mathbb{Z}, \mathbb{R}$ and $\mathbb{C}$ respectively. Also, $j = \sqrt{-1}$ and $\mathcal{R}(z)$ denotes the real part of $z \in \mathbb{C}$.

## II Field, Sampling and Noise Model, Distortion Criterion

### II-A Field Model

The field is considered to be spatially smooth over a finite support, one dimensional in space, and evolving with time according to a partial differential equation (PDE). The PDE is linear with constant coefficients and is assumed to be known, given by

$$\sum_{i=0}^{m} p_i \frac{\partial^i}{\partial t^i} g(x,t) = \sum_{i=0}^{m'} q_i \frac{\partial^i}{\partial x^i} g(x,t) \tag{1}$$

where $p_i, q_i \in \mathbb{R}$. This equation will be represented throughout the paper in terms of polynomials. Define two polynomials as

$$p(z) = \sum_{i=0}^{m} p_i z^i\ ; \quad q(z) = \sum_{i=0}^{m'} q_i z^i \tag{2}$$

where the coefficients are the same as in the differential equation. If, for notational purposes, $p(\partial/\partial t)$ and $q(\partial/\partial x)$ denote the differential operators obtained by substituting the partial derivative operators for $z$ in $p$ and $q$, then the original equation can be written as

$$p\!\left(\frac{\partial}{\partial t}\right) g(x,t) = q\!\left(\frac{\partial}{\partial x}\right) g(x,t) \tag{3}$$

To incorporate the smoothness of the field, the field is assumed to be spatially bandlimited. It is important to ensure that the field remains bandlimited as it evolves with time. Intuitively, this condition should hold if we know the field is spatially bandlimited at $t = 0$, because time evolution is unlikely to affect spatial bandlimitedness. Formally, if the degree of the polynomial $p(z)$ is $m$, and the first $m-1$ temporal partial derivatives of $g$, along with $g$ itself, are spatially bandlimited at $t = 0$, then the field will remain spatially bandlimited as it evolves according to the given PDE. A detailed proof of this is given in Appendix A. We will assume that such a condition holds for the field under consideration; this ensures that the field is always bandlimited. Since the field has finite support, assumed WLOG to be $[0, 1]$, it can be represented as

$$g(x,t) = \sum_{k=-b}^{b} a_k(t) \exp(j2\pi kx)\ ; \quad a_k(t) = \int_0^1 g(x,t) \exp(-j2\pi kx)\, dx \tag{4}$$

The field is also assumed to be bounded, that is, $\sup_{x,t} |g(x,t)| < \infty$.
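As a concrete illustration of the representation in equation (4), the following sketch (illustrative only, not part of the paper; it assumes a bandwidth $b = 2$ and randomly generated coefficients) evaluates a bandlimited field at a fixed time from its Fourier coefficients and recovers the coefficients by a Riemann sum over a uniform grid on $[0, 1)$:

```python
import numpy as np

def eval_field(a, x):
    """Evaluate g(x) = sum_{k=-b}^{b} a_k exp(j 2 pi k x) at fixed t,
    given the coefficient array a indexed by k = -b..b."""
    b = (len(a) - 1) // 2
    k = np.arange(-b, b + 1)
    return np.sum(a[:, None] * np.exp(2j * np.pi * k[:, None] * x[None, :]), axis=0)

def fourier_coeffs(g_vals, x, b):
    """Recover a_k = int_0^1 g(x) exp(-j 2 pi k x) dx by a Riemann sum
    on a uniform grid x over [0, 1)."""
    k = np.arange(-b, b + 1)
    dx = x[1] - x[0]
    return (g_vals[None, :] * np.exp(-2j * np.pi * k[:, None] * x[None, :])).sum(axis=1) * dx

b = 2
rng = np.random.default_rng(0)
a_true = rng.standard_normal(2 * b + 1) + 1j * rng.standard_normal(2 * b + 1)
x = np.arange(0, 1, 1 / 512)           # uniform grid on [0, 1)
a_est = fourier_coeffs(eval_field(a_true, x), x, b)
print(np.max(np.abs(a_est - a_true)))  # tiny: the Riemann sum is exact for bandlimited g
```

The uniform-grid Riemann sum is exact here because the integrand is a trigonometric polynomial of degree far below the grid size.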

### II-B Distortion Criterion

To measure the distortion, we will use the mean squared error between the estimated field and the actual field. All errors will be evaluated at $t = 0$, i.e., the mean squared error will be computed between the estimated field at $t = 0$ and the actual field at $t = 0$. Let the estimated field be $\hat{G}(x,t)$ and its Fourier coefficients be $\hat{A}_k(t)$; then the distortion criterion is defined as

$$D[\hat{G}, g] = \mathbb{E}\left[\int_0^1 |\hat{G}(x,t) - g(x,t)|^2\, dx\right]\bigg|_{t=0} \tag{5}$$

$$= \mathbb{E}\left[\sum_{k=-b}^{b} |\hat{A}_k(t) - a_k(t)|^2\right]\bigg|_{t=0} \tag{6}$$

$$= \sum_{k=-b}^{b} \mathbb{E}\left[|\hat{A}_k(0) - a_k(0)|^2\right] \tag{7}$$
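The equality between (5) and (6) is Parseval's theorem for the Fourier basis on $[0, 1]$. A small numerical check of this step (illustrative only; the coefficient vectors below are randomly generated stand-ins for $\hat{A}_k(0)$ and $a_k(0)$):

```python
import numpy as np

rng = np.random.default_rng(5)
b = 3
k = np.arange(-b, b + 1)
A_hat = rng.standard_normal(2 * b + 1) + 1j * rng.standard_normal(2 * b + 1)
a0 = rng.standard_normal(2 * b + 1) + 1j * rng.standard_normal(2 * b + 1)

# Evaluate the error field on a fine grid over [0, 1) and integrate |G_hat - g|^2.
x = np.arange(0, 1, 1 / 4096)
E = np.exp(2j * np.pi * np.outer(k, x))
err_field = (A_hat - a0) @ E
integral = np.mean(np.abs(err_field) ** 2)    # Riemann sum of the integral in (5)
coeff_sum = np.sum(np.abs(A_hat - a0) ** 2)   # the coefficient-domain form (6)
print(integral, coeff_sum)                    # the two agree
```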

### II-C Sampling Model

The sampling model in this case is a renewal process based sampling model, similar to the one in [27]. Let $X_1, X_2, \ldots$ denote the intersample distances and $N_1, N_2, \ldots$ denote the intersample time intervals. The sampling model treats the spatial and the temporal separations as realizations of two independent renewal processes. In other words, $X_1, X_2, \ldots$ are i.i.d. random variables with a common distribution, $N_1, N_2, \ldots$ are i.i.d. random variables with a (possibly different) common distribution, and $X_i$ and $N_j$ are independent random variables for all values of $i$ and $j$. Using these intersample distances, the sampling locations $S_n$ are given by $S_n = X_1 + X_2 + \cdots + X_n$. The sampling is done over the interval $[0, 1]$, the support of the field, and $M$ is the random number of samples that lie in this interval, i.e., it is defined such that $S_M \le 1$ and $S_{M+1} > 1$. Thus $M$ is a well defined measurable random variable [30]. Note that the number of samples in the interval depends only on the spatial sampling density and not on its temporal counterpart.

For ease of analysis and tractability, the supports of the distributions of the $X_i$ and the $N_i$ are taken to be finite and inversely proportional to the average sampling density $n$, with $\mathbb{E}[X_i] = 1/n$. Hence, it is assumed that

$$0 \le X_i \le \frac{\lambda}{n}, \qquad 0 \le N_i \le \frac{\mu}{n} \tag{8}$$

where $\lambda$ and $\mu$ are parameters that characterize the supports of the distributions. Both are finite numbers, independent of the average sampling density, and much smaller than $n$, i.e., $\lambda, \mu \ll n$. These parameters govern the constant of proportionality in the expected error of the estimate. Furthermore, $\lambda$ is also an important factor in determining the threshold on the minimum number of samples. Applying Wald's identity [30] on $S_{M+1}$,

$$\mathbb{E}[M+1]\,\mathbb{E}[X] = \mathbb{E}[S_{M+1}] \tag{9}$$

$$(\mathbb{E}[M]+1)\frac{1}{n} = \mathbb{E}[S_{M+1}] \implies \mathbb{E}[M] = n\,\mathbb{E}[S_{M+1}] - 1 \tag{10}$$

By definition, $S_M \le 1$ and $S_{M+1} > 1$. Since $X_{M+1} \le \lambda/n$, therefore $S_{M+1} \le 1 + \lambda/n$. Using these inequalities with equations (9)-(10), we obtain

$$n - 1 < \mathbb{E}[M] \le n + \lambda - 1 \tag{11}$$

Also, the bound on each $X_i$, along with $S_{M+1} > 1$, gives

$$(M+1)\frac{\lambda}{n} > 1 \quad \text{or} \quad M > \frac{n}{\lambda} - 1 \tag{12}$$

Note that all the bounds on the number of samples, a random variable, are characterized solely in terms of $\lambda$, the parameter that defines the support of the spatial renewal process; as expected, the temporal renewal process is not involved. The only temporal assumption on the sampling model is that all samples are collected within some known time $T_0$; that is, $T_M \le T_0$ and $T_0$ is known. Note that this value can vary with the setup. Its knowledge is important in sampling scenarios, especially for time-varying fields, because if $T_0$ is too large then the field has likely decayed to a very small value, which can lead to erroneous results. Since $N_i \le \mu/n$ and $M$ is of the order of $n$, we expect that $T_0$ is of the order of $\mu$.
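The bounds (11) and (12) can be checked empirically. The sketch below (illustrative; it assumes uniform intersample distances $X_i \sim U[0, \lambda/n]$, so that $\mathbb{E}[X_i] = 1/n$ with $\lambda = 2$) simulates the spatial renewal process and counts the samples falling in $[0, 1]$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 100, 2.0   # average sampling density; support parameter (uniform => lam = 2)

def num_samples():
    """Draw X_i ~ U[0, lam/n] until the running sum S exceeds 1;
    M is the number of sampling locations inside [0, 1]."""
    s, m = 0.0, 0
    while True:
        s += rng.uniform(0, lam / n)
        if s > 1:
            return m
        m += 1

Ms = np.array([num_samples() for _ in range(2000)])
print(Ms.mean())    # lies in (n - 1, n + lam - 1], matching (11)
print(Ms.min())     # deterministic bound (12): M > n/lam - 1
```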

### II-D Measurement Noise Model

It will be assumed that the obtained samples are corrupted by additive noise that is independent both of the samples and of both the renewal processes. For simplicity, the noise is considered to vary only spatially; that is, at all time instants the distribution of the noise remains the same, and the distribution itself is unknown in this work. Hence, $W(x,t) \equiv W(x)$. The samples obtained are thus sampled versions of $g(x,t) + W(x)$, where $W(x)$ is the noise. Also, since the measurement noise is independent across locations, for any set of measurements at distinct points $x_1, x_2, \ldots, x_M$, the noise samples $W(x_1), \ldots, W(x_M)$ are independent and identically distributed random variables. The sampling instants do not appear here because the noise distribution is temporally static. The only statistics known about the noise are that it has zero mean and finite variance $\sigma^2$.

## III Field Estimation from the Obtained Samples

This section deals mainly with the estimation of the field from samples whose locations and timestamps come from two unknown independent renewal processes. Before that, it is essential to analyse the evolution of the field under the differential equation. Using the linearity of the Fourier series in its coefficients, combining equations (4) and (1), and using the orthogonality of the Fourier basis, we can write

$$\sum_{i=0}^{m} p_i \frac{\partial^i}{\partial t^i}\left(\sum_{k=-b}^{b} a_k(t)\, e^{j2\pi kx}\right) = \sum_{i=0}^{m'} q_i \frac{\partial^i}{\partial x^i}\left(\sum_{k=-b}^{b} a_k(t)\, e^{j2\pi kx}\right)$$

$$\sum_{k=-b}^{b}\left(\sum_{i=0}^{m} p_i \frac{\partial^i a_k(t)}{\partial t^i}\right) e^{j2\pi kx} \stackrel{(a)}{=} \sum_{k=-b}^{b} a_k(t)\, q(j2\pi k)\, e^{j2\pi kx}$$

$$\stackrel{(b)}{\Longrightarrow} \quad \sum_{i=0}^{m} p_i \frac{\partial^i a_k(t)}{\partial t^i} - q(j2\pi k)\, a_k(t) = 0 \quad \forall\, k = -b, \ldots, b \tag{13}$$

where (a) follows from (2) and (b) uses the orthogonality property of the Fourier basis. This gives a linear constant coefficient ordinary differential equation for each $a_k(t)$. To solve for $a_k(t)$, the general method is adopted and the solution is assumed to be of the form $Ae^{rt}$. For each $k$, this leads to the polynomial equation

$$\sum_{i=0}^{m} p_i \frac{\partial^i (Ae^{rt})}{\partial t^i} - q(j2\pi k)\, Ae^{rt} = 0 \tag{14}$$

$$\left(\sum_{i=0}^{m} p_i r^i - q(j2\pi k)\right) Ae^{rt} = 0 \tag{15}$$

$$p(r) - q(j2\pi k) = 0 \tag{16}$$

The solution for $a_k(t)$ is thus of the form $Ae^{rt}$, where $r$ is a root of the above polynomial equation and $A$ is a constant independent of $t$. Let the roots of the above polynomial be $r_1(k), r_2(k), \ldots, r_m(k)$. Note that the roots are indexed by $k$ as well, implying there is a set of $m$ roots for each value of $k$. It is essential to realise that if the field is a physically feasible one, then $\mathcal{R}(r_i(k)) \le 0$, that is, all roots have a non-positive real part. Generally, for physical fields the real part has to be strictly less than $0$, but we are allowing the possibility of fields with sustained oscillations in time. Furthermore, for simplicity of analysis, all the roots are considered to be distinct for a given $k$.¹ However, it is possible that roots coincide across different values of $k$. The distinctness assumption is realistic enough, since for physical fields $m$ is generally very small and the chance of repeated roots is correspondingly small. The condition $\mathcal{R}(r_i(k)) \le 0$ is similar to the one obtained in control theory, where the poles of the closed loop system are required to lie in the left half plane; criteria like the Routh-Hurwitz condition can thus be used to ensure that the roots have non-positive real parts in our case.

¹If there is a repeated root $r$, then the solution also contains terms of the form $te^{rt}$, which makes the analysis considerably more complicated. Such cases can be treated in a manner similar to the one described in this paper. Note that a repeated root at $r = 0$ cannot occur for a physical field, because the corresponding solution grows linearly in $t$, which diverges and hence cannot model a physical field. Such nuances have been omitted to simplify the description of the process.

In fact, $a_k(t)$ can thus be written as a linear combination over these roots: $a_k(t) = \sum_{i=1}^{m} a_{k_i}(0)\, e^{r_i(k)t}$. The coefficients have been represented this way to maintain consistency of the representation of $a_k$ as a function of time. Also, the $a_{k_i}(0)$ are finite constants independent of everything else. Let $\alpha_k := \max_i |a_{k_i}(0)|$ and let $R := \max_{i,k} |r_i(k)|$.
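For a concrete instance, the roots $r_i(k)$ can be computed numerically. The sketch below (illustrative; `temporal_roots` is a hypothetical helper, and the example instantiates the standard diffusion equation $\partial g/\partial t = \partial^2 g/\partial x^2$, i.e., $p(z) = z$, $q(z) = z^2$) solves $p(r) - q(j2\pi k) = 0$ and confirms that the roots have non-positive real parts:

```python
import numpy as np

def temporal_roots(p, q, k):
    """Roots of p(r) - q(j 2 pi k) = 0, where p and q hold the PDE
    coefficients p_0..p_m and q_0..q_m' in increasing order of degree."""
    qval = np.polyval(q[::-1], 2j * np.pi * k)   # q(j 2 pi k); polyval wants decreasing degree
    coeffs = np.array(p, dtype=complex)
    coeffs[0] -= qval                            # constant term becomes p_0 - q(j 2 pi k)
    return np.roots(coeffs[::-1])                # np.roots also wants decreasing degree

# Diffusion equation dg/dt = d^2 g / dx^2: p(z) = z, q(z) = z^2, so m = 1.
for k in range(-3, 4):
    r = temporal_roots([0.0, 1.0], [0.0, 0.0, 1.0], k)
    print(k, r)   # single root r = -(2 pi k)^2, with non-positive real part
```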

$$\left|\frac{\partial}{\partial t} g(x,t)\right| = \left|\frac{\partial}{\partial t}\sum_{i=1}^{m} a_{k_i}(0)\, e^{r_i(k)t}\, e^{j2\pi kx}\right| \tag{17}$$

$$= \left|\sum_{i=1}^{m} a_{k_i}(0)\, r_i(k)\, e^{r_i(k)t}\, e^{j2\pi kx}\right| \tag{18}$$

$$\le \sum_{i=1}^{m} \left| a_{k_i}(0)\, r_i(k)\, e^{r_i(k)t}\, e^{j2\pi kx}\right| \tag{19}$$

$$\le \sum_{i=1}^{m} |a_{k_i}(0)|\,|r_i(k)| \tag{20}$$

$$\le m\,\alpha_k R \tag{21}$$

$$\left|\frac{\partial}{\partial x} g(x,t)\right| = \left|\frac{\partial}{\partial x}\sum_{k=-b}^{b} a_k(t)\, e^{j2\pi kx}\right| \tag{22}$$

$$= \left|\sum_{k=-b}^{b} a_k(t)\, j2\pi k\, e^{j2\pi kx}\right| \tag{23}$$

$$\le \sum_{i=1}^{m} \left|a_{k_i}(0)\, j2\pi k\, e^{r_i(k)t}\, e^{j2\pi kx}\right| \tag{24}$$

$$\le \sum_{i=1}^{m} 2b\pi\, |a_{k_i}(0)| \tag{25}$$

$$\le m\,\alpha_k\, 2b\pi \tag{26}$$

The third step follows from the triangle inequality, the fourth step uses the facts that $|e^{j2\pi kx}| = 1$ and $|e^{r_i(k)t}| \le 1$ (since $\mathcal{R}(r_i(k)) \le 0$), and the bound on $|r_i(k)|$ uses Rouché's theorem. The value of $R$ can be expressed in terms of the $p_i$'s and $q_i$'s, which are finite, and so the upper bound is finite. Using the above expression for $a_k(t)$ in equation (4) to obtain the value at $(S_n, T_n)$, we get

$$g(S_n, T_n) = \sum_{k=-b}^{b}\sum_{i=1}^{m} a_{k_i}(0)\, \exp(r_i(k)T_n)\, \exp(j2\pi kS_n) \tag{27}$$

The above equation can be written in vector notation. Let $e_{k,i}(x,t) := \exp(r_i(k)t + j2\pi kx)$. Define

$$\mathbf{e}_k(x,t) := [e_{k,1}(x,t),\ e_{k,2}(x,t), \ldots, e_{k,m}(x,t)]$$

$$\mathbf{a}_k := [a_{k_1}(0),\ a_{k_2}(0),\ a_{k_3}(0), \ldots, a_{k_m}(0)]$$

$$\mathbf{a} = [\mathbf{a}_{-b}, \ldots, \mathbf{a}_{-1}, \mathbf{a}_0, \mathbf{a}_1, \ldots, \mathbf{a}_b]^T$$

$$\mathbf{e}(x,t) = [\mathbf{e}_{-b}(x,t), \ldots, \mathbf{e}_b(x,t)]^H \tag{28}$$

Observe that $\mathbf{a}$ and $\mathbf{e}(x,t)$ are column vectors, while $\mathbf{a}_k$ and $\mathbf{e}_k(x,t)$ are row vectors. Since $\mathcal{R}(r_i(k)) \le 0$, we have $|e_{k,i}(x,t)| \le 1$. This implies

$$||\mathbf{e}_k(x,t)||^2 = \sum_{i=1}^{m} |\exp(r_i(k)t + j2\pi kx)|^2 \le \sum_{i=1}^{m} 1 \le m$$

$$||\mathbf{e}(x,t)||^2 = \sum_{k=-b}^{b} ||\mathbf{e}_k(x,t)||^2 \le \sum_{k=-b}^{b} m = m(2b+1) \tag{29}$$

Therefore, using the definitions in equation (28), equation (27) can be rewritten as

$$g(S_n, T_n) = \mathbf{e}^H(S_n, T_n)\, \mathbf{a} \tag{30}$$

Recall that the sampling locations $S_n$ and their respective timestamps $T_n$ are given by

$$S_1 = X_1,\quad S_2 = X_1 + X_2,\ \ldots,\ S_n = \sum_{i=1}^{n} X_i$$

$$T_1 = N_1,\quad T_2 = N_1 + N_2,\ \ldots,\ T_n = \sum_{i=1}^{n} N_i$$

where the $X_i$ and $N_i$ are all unknown, and the $X_i$'s and $N_i$'s are drawn from independent distributions. The obtained samples are the values of the field at these locations and instants, corrupted with noise; the observed values are thus $g(S_i, T_i) + W(S_i)$. Define two vectors

$$\mathbf{g} = [g_1, g_2, \ldots, g_M]^T \quad \text{and} \quad \mathbf{w} = [w_1, w_2, \ldots, w_M]^T \tag{31}$$

$$\text{where } g_i = g(S_i, T_i) \text{ and } w_i = W(S_i) \text{ for } i = 1, 2, \ldots, M. \tag{32}$$

The motivation behind this is to continue with the matrix-vector notation, and hence all the samples have been stacked into a single vector. The vector obtained on stacking the observed samples is $\mathbf{g}_s = \mathbf{g} + \mathbf{w}$. Combining equations (30) and (31), we can write

$$\mathbf{g} = \begin{bmatrix} \mathbf{e}^H(S_1, T_1) \\ \mathbf{e}^H(S_2, T_2) \\ \vdots \\ \mathbf{e}^H(S_M, T_M) \end{bmatrix} \mathbf{a} = Y\mathbf{a}\ ; \quad \text{where } Y = \begin{bmatrix} \mathbf{e}^H(S_1, T_1) \\ \mathbf{e}^H(S_2, T_2) \\ \vdots \\ \mathbf{e}^H(S_M, T_M) \end{bmatrix} \tag{33}$$

The main idea behind the reconstruction of the field is that the sampling locations and time instants are "near" the locations and time instants that would have been obtained by sampling uniformly, both in time and space, at $M$ points. To incorporate this into the formulation, let

$$s_i = \frac{i}{M} \quad \text{and} \quad t_i = \frac{iT_0}{M} \quad \text{for } 1 \le i \le M \tag{34}$$

$$Y_0 = \begin{bmatrix} \mathbf{e}^H(s_1, t_1) \\ \mathbf{e}^H(s_2, t_2) \\ \vdots \\ \mathbf{e}^H(s_M, t_M) \end{bmatrix} \tag{35}$$

where $\mathbf{g}_0 := [g(s_1, t_1), g(s_2, t_2), \ldots, g(s_M, t_M)]^T$. This implies $\mathbf{g}_0 = Y_0\mathbf{a}$. Note that $Y_0$ has a Vandermonde structure [31]. Now, since we expect the actual sampling locations to be "near" these grid points, we can estimate the Fourier coefficients by pretending that the samples were obtained by multiplying the Fourier coefficient vector by $Y_0$ instead of $Y$. The best estimate of the Fourier coefficients, $\hat{\mathbf{a}}$, is then

$$\hat{\mathbf{a}} = \arg\min_{\mathbf{b}}\ ||\mathbf{g}_s - Y_0\mathbf{b}||^2 \tag{36}$$

It is important to note that instead of $\mathbf{g}_0$, we have used $\mathbf{g}_s$, since that is the best knowledge we have about the field values. Since the estimation of the field relies on oversampling, the sampling density will generally be very large, so that $M > m(2b+1)$, making this a standard overdetermined least squares problem. The solution to this problem is well known and uses the pseudoinverse of the matrix. Therefore,

$$\hat{\mathbf{a}} = (Y_0^HY_0)^{-1}Y_0^H\mathbf{g}_s \tag{37}$$

$$\mathbf{a} = (Y_0^HY_0)^{-1}Y_0^H\mathbf{g}_0 \tag{38}$$

The second equation is obtained in a similar manner. However, it is important to realize that the first equation is a least-squares estimate, because of the unknown locations and the noise, while the second equation is an exact solution (since $\mathbf{g}_0 = Y_0\mathbf{a}$). Having defined the above quantities, we can estimate the error using the distortion criterion of the previous section.
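The estimator (37) can be exercised on a toy instance. The sketch below (illustrative; it assumes the standard diffusion equation so that each $k$ has a single temporal mode $r(k) = -(2\pi k)^2$, i.e., $m = 1$, and all names and parameter values are hypothetical) builds $Y_0$ on the uniform grid (34)-(35), verifies that noiseless grid samples $\mathbf{g}_0 = Y_0\mathbf{a}$ are inverted exactly as in (38), and then applies the same least squares to noisy samples:

```python
import numpy as np

rng = np.random.default_rng(2)

b, M, T0, sigma = 1, 200, 0.01, 0.01
ks = np.arange(-b, b + 1)
roots = -(2 * np.pi * ks) ** 2          # diffusion: r(k) = -(2 pi k)^2, m = 1

def rows(x, t):
    """Stack the rows e^H(x_i, t_i) into a matrix (here Y0)."""
    return np.exp(np.outer(t, roots) + 2j * np.pi * np.outer(x, ks))

# Uniform grid (34) and the matrix Y0 of (35).
i = np.arange(1, M + 1)
Y0 = rows(i / M, i * T0 / M)

a_true = rng.standard_normal(2 * b + 1) + 1j * rng.standard_normal(2 * b + 1)
g0 = Y0 @ a_true                        # noiseless grid samples: g0 = Y0 a

# (38): least squares on g0 recovers a exactly; (37): on noisy g_s it is an estimate.
a_exact, *_ = np.linalg.lstsq(Y0, g0, rcond=None)
g_s = g0 + sigma * rng.standard_normal(M)
a_hat, *_ = np.linalg.lstsq(Y0, g_s, rcond=None)
print(np.linalg.norm(a_exact - a_true), np.linalg.norm(a_hat - a_true))
```

With samples at the unknown renewal locations $(S_i, T_i)$, the same least squares against $Y_0$ is exactly the estimator (37), and the residual error is what the theorem below bounds.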

$$\sum_{k=-b}^{b} \mathbb{E}\left[|\hat{A}_k(0) - a_k(0)|^2\right] = \sum_{k=-b}^{b} \mathbb{E}\left[\left|\sum_{i=1}^{m}\hat{A}_{k_i}(0) - \sum_{i=1}^{m} a_{k_i}(0)\right|^2\right]$$

$$= \sum_{k=-b}^{b} \mathbb{E}\left[\left|\sum_{i=1}^{m}\left(\hat{A}_{k_i}(0) - a_{k_i}(0)\right)\right|^2\right]$$

$$\le \sum_{k=-b}^{b} \mathbb{E}\left[m\sum_{i=1}^{m}\left|\hat{A}_{k_i}(0) - a_{k_i}(0)\right|^2\right]$$

$$= m\sum_{k=-b}^{b} \mathbb{E}\left[||\hat{\mathbf{a}}_k - \mathbf{a}_k||^2\right] \le m\,\mathbb{E}\left[||\hat{\mathbf{a}} - \mathbf{a}||^2\right] \tag{39}$$

This expresses the distortion criterion in terms of the error in the estimated Fourier coefficient vector. We will now establish the bound on this estimation error as the main result of this work.

###### Theorem 1.

Let $\mathbf{a}$ and $\hat{\mathbf{a}}$ be as defined in equations (38) and (37). Under the sampling model discussed and the corruption by measurement noise, the following result holds:

$$\mathbb{E}\left[||\hat{\mathbf{a}} - \mathbf{a}||^2\right] \le \frac{C'}{n}$$

where $n$ is the average sampling density and $C'$ is a positive constant independent of $n$. It depends on the bandwidth $b$ of the signal, the support parameters $\lambda$ and $\mu$ of the renewal processes, $T_0$, the coefficients of the PDE, and the noise variance $\sigma^2$. The dependence on $\lambda$, $\mu$ and $\sigma^2$ is such that if these constants increase, the proportionality constant increases, worsening the bound. The dependence on the coefficients of the PDE is highly nonlinear, through the roots of the polynomial equations whose coefficients are determined by these values. Correspondingly, using (39), the distortion can be bounded as $D[\hat{G}, g] \le mC'/n$.

###### Proof.

We will upper bound $\mathbb{E}[||\hat{\mathbf{a}} - \mathbf{a}||^2]$ to obtain a bound on the distortion. Letting $A := (Y_0^HY_0)^{-1}Y_0^H$, we can write

$$\mathbb{E}[||\hat{\mathbf{a}} - \mathbf{a}||^2] = \mathbb{E}\left[||(Y_0^HY_0)^{-1}Y_0^H\mathbf{g}_s - \mathbf{a}||^2\right] \tag{40}$$

$$= \mathbb{E}\left[||(Y_0^HY_0)^{-1}Y_0^H(\mathbf{g} + \mathbf{w}) - (Y_0^HY_0)^{-1}Y_0^H\mathbf{g}_0||^2\right] \tag{41}$$

$$= \mathbb{E}\left[||A(\mathbf{g} + \mathbf{w} - \mathbf{g}_0)||^2\right] \tag{42}$$

$$\le 2\,\mathbb{E}\left[||A(\mathbf{g} - \mathbf{g}_0)||^2\right] + 2\,\mathbb{E}\left[||A\mathbf{w}||^2\right] \tag{43}$$

$$\le 2\,\mathbb{E}\left[\lambda^A_{\max}\,||\mathbf{g} - \mathbf{g}_0||^2\right] + 2\,\mathbb{E}\left[||A\mathbf{w}||^2\right] \tag{44}$$

where the second step follows from equation (38) and the definition of $\mathbf{g}_s$, the fourth step from the Cauchy-Schwarz inequality (in the form $||u+v||^2 \le 2||u||^2 + 2||v||^2$), and $\lambda^A_{\max}$ is the largest eigenvalue of $A^HA$. The two terms, along with the bound on $\lambda^A_{\max}$, in the last step will be analyzed separately to obtain the bound on the error. Now,

$$||\mathbf{g} - \mathbf{g}_0||^2 = \sum_{i=1}^{M} |g(S_i, T_i) - g(s_i, t_i)|^2 \tag{45}$$

$$\le \sum_{i=1}^{M} C_0^2\left[(S_i - s_i)^2 + (T_i - t_i)^2\right] \tag{46}$$

The second step follows from the fact that for a smooth function $f$, $|f(\mathbf{u}) - f(\mathbf{v})| \le \sup ||\nabla f||\ ||\mathbf{u} - \mathbf{v}||$, together with the fact that $||\nabla g||$ is upper bounded, which follows from the bounds on the partial derivatives. From (17) and (22), we can write

$$||\nabla g||_2^2 = \left(\frac{\partial}{\partial x}g(x,t)\right)^2 + \left(\frac{\partial}{\partial t}g(x,t)\right)^2 \le (m\alpha_k\, 2b\pi)^2 + (m\alpha_k R)^2 \le C_0^2 \tag{47}$$

for some finite constant $C_0 > 0$. Next, consider $\lambda^A_{\max}$:

$$\lambda^A_{\max} \stackrel{(a)}{\le} \mathrm{tr}(A^HA) \tag{48}$$

$$= \mathrm{tr}(AA^H) \tag{49}$$

$$= \mathrm{tr}\left((Y_0^HY_0)^{-1}Y_0^HY_0(Y_0^HY_0)^{-1}\right) \tag{50}$$

$$= \mathrm{tr}\left((Y_0^HY_0)^{-1}\right) \tag{51}$$

Here, (a) follows from the fact that the trace of a matrix is the sum of its eigenvalues; since $A^HA$ is Hermitian and positive semidefinite, all its eigenvalues are non-negative, and therefore their sum is at least the largest eigenvalue. For the second term in equation (40), the structure of the noise model can be exploited to simplify the expression. Note that, using the assumptions on the noise model, $\mathbb{E}[\mathbf{w}] = \mathbf{0}$ and $\mathbb{E}[\mathbf{w}\mathbf{w}^T] = \sigma^2 I$, where $I$ is the $M \times M$ identity matrix.

$$\mathbb{E}[||A\mathbf{w}||^2] = \mathbb{E}\left[\mathbf{w}^T(A^HA\mathbf{w})\right] \tag{53}$$

$$\stackrel{(a)}{=} \mathbb{E}\left[\mathrm{tr}\left(\mathbf{w}^T(A^HA\mathbf{w})\right)\right] \tag{54}$$

$$\stackrel{(b)}{=} \mathbb{E}\left[\mathrm{tr}\left((A^HA\mathbf{w})\mathbf{w}^T\right)\right] \tag{55}$$

$$\stackrel{(c)}{=} \mathrm{tr}\left(\mathbb{E}\left[A^HA\,\mathbf{w}\mathbf{w}^T\right]\right) \tag{56}$$

$$\stackrel{(d)}{=} \mathrm{tr}\left(\mathbb{E}[A^HA]\,\mathbb{E}[\mathbf{w}\mathbf{w}^T]\right) \tag{57}$$

$$= \mathrm{tr}\left(\mathbb{E}[A^HA]\,\sigma^2 I\right) \tag{58}$$

$$= \mathbb{E}\left[\mathrm{tr}(A^HA\,\sigma^2 I)\right] \tag{59}$$

$$= \sigma^2\,\mathbb{E}\left[\mathrm{tr}(A^HA)\right] \tag{60}$$

$$= \sigma^2\,\mathbb{E}\left[\mathrm{tr}\left((Y_0^HY_0)^{-1}\right)\right] \tag{61}$$

where (a) uses the fact that $\mathbf{w}^TA^HA\mathbf{w}$ is a scalar and hence equals its trace, (b) follows from $\mathrm{tr}(BC) = \mathrm{tr}(CB)$, (c) uses the linearity of the expectation and trace operators, and (d) is a result of the independence of the noise and the sampling processes. Thus, from equations (40), (48) and (53), it is clear that characterizing a bound on $\mathbb{E}[\mathrm{tr}((Y_0^HY_0)^{-1})]$ is required, and will also suffice for the purpose.
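The chain (53)-(61) predicts $\mathbb{E}[||A\mathbf{w}||^2] = \sigma^2\,\mathrm{tr}((Y_0^HY_0)^{-1})$ for any fixed $Y_0$ and any zero-mean noise with variance $\sigma^2$. A Monte Carlo sanity check (illustrative; a random real matrix stands in for $Y_0$, with Gaussian noise as one admissible noise distribution):

```python
import numpy as np

rng = np.random.default_rng(4)
M, d, sigma = 50, 4, 0.3
Y0 = rng.standard_normal((M, d))            # stand-in for a real-valued Y0
A = np.linalg.inv(Y0.T @ Y0) @ Y0.T         # A = (Y0^H Y0)^{-1} Y0^H

exact = sigma**2 * np.trace(np.linalg.inv(Y0.T @ Y0))
mc = np.mean([np.linalg.norm(A @ (sigma * rng.standard_normal(M)))**2
              for _ in range(20000)])
print(mc, exact)                            # close agreement
```

The identity $\mathrm{tr}(AA^H) = \mathrm{tr}((Y_0^HY_0)^{-1})$ used in (49)-(51) also holds exactly for this stand-in matrix.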

Let $\lambda_1, \lambda_2, \ldots, \lambda_{m(2b+1)}$ be the eigenvalues of $Y_0^HY_0$. Since the matrix is Hermitian and positive semidefinite, $\lambda_i \ge 0$. By the properties of eigenvalues, the eigenvalues of $(Y_0^HY_0)^{-1}$ are $1/\lambda_i$. Let $\lambda_{\max}$ and $\lambda_{\min}$ be the maximum and minimum eigenvalues of $Y_0^HY_0$; therefore, $1/\lambda_{\min}$ and $1/\lambda_{\max}$ are the maximum and minimum eigenvalues of $(Y_0^HY_0)^{-1}$. Applying the Polya-Szego inequality [32] to the sequences formed by the eigenvalues of $Y_0^HY_0$ and those of $(Y_0^HY_0)^{-1}$, we can write

$$\frac{\sum_{i=1}^{m(2b+1)}\lambda_i\ \sum_{i=1}^{m(2b+1)}(1/\lambda_i)}{\left(\sum_{i=1}^{m(2b+1)}\sqrt{\lambda_i\cdot 1/\lambda_i}\right)^2} \le \frac{1}{4}\left(\sqrt{\frac{\lambda_{\max}}{\lambda_{\min}}} + \sqrt{\frac{\lambda_{\min}}{\lambda_{\max}}}\right)^2 \tag{63}$$

Noting that $\sum_{i=1}^{m(2b+1)}\sqrt{\lambda_i\cdot 1/\lambda_i} = m(2b+1)$ and that $\kappa := \lambda_{\max}/\lambda_{\min}$ is the condition number of the matrix $Y_0^HY_0$, this can be written as

$$\mathrm{tr}(Y_0^HY_0)\ \mathrm{tr}\left((Y_0^HY_0)^{-1}\right) \le \frac{m^2(2b+1)^2}{4}\left(\sqrt{\kappa} + \frac{1}{\sqrt{\kappa}}\right)^2$$

$$\implies \mathrm{tr}\left((Y_0^HY_0)^{-1}\right) \le \frac{m^2(2b+1)^2}{4\,\mathrm{tr}(Y_0^HY_0)}\left(\sqrt{\kappa} + \frac{1}{\sqrt{\kappa}}\right)^2 \tag{64}$$
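Inequality (64) is deterministic and can be checked directly. The sketch below (illustrative; a random Gram matrix stands in for $Y_0^HY_0$, with $d$ playing the role of $m(2b+1)$) evaluates both sides:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6                                   # plays the role of m(2b+1)
G = rng.standard_normal((20, d))
B = G.T @ G                             # Hermitian positive definite, like Y0^H Y0
lam = np.linalg.eigvalsh(B)
kappa = lam[-1] / lam[0]                # condition number lambda_max / lambda_min
lhs = np.trace(B) * np.trace(np.linalg.inv(B))
rhs = d**2 / 4 * (np.sqrt(kappa) + 1 / np.sqrt(kappa))**2
print(lhs, rhs)                         # lhs <= rhs, as (64) asserts
```

The Cauchy-Schwarz inequality gives the matching lower bound $\mathrm{tr}(B)\,\mathrm{tr}(B^{-1}) \ge d^2$, so the bound (64) is tight up to the condition-number factor.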

Consider,

$$\mathrm{tr}(Y_0^HY_0) = \mathrm{tr}\left(\left[\mathbf{e}(s_1,t_1),\, \mathbf{e}(s_2,t_2), \ldots, \mathbf{e}(s_M,t_M)\right]\begin{bmatrix}\mathbf{e}^H(s_1,t_1)\\ \mathbf{e}^H(s_2,t_2)\\ \vdots\\ \mathbf{e}^H(s_M,t_M)\end{bmatrix}\right)$$

$$= ||\mathbf{e}(s_1,t_1)||^2 + ||\mathbf{e}(s_2,t_2)||^2 + \cdots + ||\mathbf{e}(s_M,t_M)||^2$$

$$\stackrel{(a)}{=} \sum_{i=1}^{M}\sum_{k=-b}^{b}||\mathbf{e}_k(s_i,t_i)||^2 \stackrel{(b)}{=} \sum_{i=1}^{M}\sum_{k=-b}^{b}\sum_{j=1}^{m}|e_{k,j}(s_i,t_i)|^2$$

$$= \sum_{i=1}^{M}\sum_{k=-b}^{b}\sum_{j=1}^{m}\exp\left(2\mathcal{R}(r_j(k))\,t_i\right) = \sum_{j=1}^{m}\sum_{k=-b}^{b}\sum_{i=1}^{M}\exp\left(2\mathcal{R}(r_j(k))\,\frac{iT_0}{M}\right) \tag{65}$$

Both (a) and (b) follow directly from equation (28). For given values of $j$ and $k$, the inner sum over $i$ can be viewed as a scalar multiple of a Riemann sum approximation of the integral $\int_0^{T_0}\exp(2\mathcal{R}(r_j(k))t)\,dt$, with the partition chosen uniformly over the interval. The only difference is that the terms in the sum are not multiplied by the partition width, which here is the same for all subintervals and hence can be taken out as a scalar. The value of this integral can be used to bound the sum because $\exp(2\mathcal{R}(r_j(k))t)$ is non-increasing in $t$, since $\mathcal{R}(r_j(k)) \le 0$. The bound is obtained as

$$\int_0^{T_0}e^{2\mathcal{R}(r_j(k))t}\,dt = \sum_{i=1}^{M}\int_{(i-1)T_0/M}^{iT_0/M}e^{2\mathcal{R}(r_j(k))t}\,dt$$

$$\stackrel{(a)}{\le} \sum_{i=1}^{M}\int_{(i-1)T_0/M}^{iT_0/M}e^{2\mathcal{R}(r_j(k))(i-1)T_0/M}\,dt$$

$$= \sum_{i=1}^{M}e^{2\mathcal{R}(r_j(k))\,iT_0/M}\,e^{-2\mathcal{R}(r_j(k))\,T_0/M}\,\frac{T_0}{M}$$

$$\le e^{-2\mathcal{R}(r_j(k))}\,\frac{T_0}{M}\sum_{i=1}^{M}e^{2\mathcal{R}(r_j(k))\,iT_0/M} \tag{66}$$

where (a) follows from the non-increasing nature of $e^{2\mathcal{R}(r_j(k))t}$ and the last step uses the fact that $T_0/M \le 1$. This implies

$$\sum_{i=1}^{M}\exp\left(2\mathcal{R}(r_j(k))\,\frac{iT_0}{M}\right) \ge \frac{M\,e^{2\mathcal{R}(r_j(k))}}{T_0}\int_0^{T_0}e^{2\mathcal{R}(r_j(k))t}\,dt = MC_{jk} \tag{67}$$

where $C_{jk}$ is a finite positive constant for each $j$ and $k$. Using equation (67) in equation (65), we get

$$\mathrm{tr}(Y_0^HY_0) \ge \sum_{j=1}^{m}\sum_{k=-b}^{b}MC_{jk} = MC_3 \tag{68}$$

where $C_3$ is a finite deterministic constant given by