
# Regression and Classification by Zonal Kriging

Jean Serra, Jesús Angulo, B Ravi Kiran
January 2018
###### Abstract

Consider a family $\{(x_i, y_i),\ i = 1 \ldots N\}$ of pairs of vectors $x_i$ and scalars $y_i$, whose value we aim to predict for a new sample vector $x_0$. Kriging models $y$ as the sum of a deterministic function $m$, a drift which depends on the point $x$, and a random function $z$ with zero mean. The zonality hypothesis interprets $y$ as a weighted sum of random functions of a single independent variable each, each of which is a kriging with a quadratic drift, expressed in terms of variograms.

We can therefore construct an unbiased estimator of $y(x_0)$ with minimal variance, with the help of the known training set points. We give an explicit closed form for this estimator without having to compute the inverse of $N \times N$ matrices.

## 1 Introduction

This study follows in the line of work of [Kiran and Serra, 2017]. It was motivated by work done at the Centre de Morphologie Mathématique in Nov 2016 [Decencière, 2016, Ferhi, 2016, Alais, 2016]. We study two aspects of statistical learning theory, for which we propose two solutions.

1. The methods used to construct decision trees [Breiman et al., 1984] frequently consist of recursively partitioning the parameter space into half-spaces, where each binary decision optimizes a measure such as the entropy or the Gini coefficient. For possibly regionalized random variables, these methods do not take this intrinsic structure into account: two parameter families with different variograms are treated in exactly the same way by the same algorithm.

2. Regression and classification are treated as the same problem in this decision tree framework, and classification can be achieved by thresholding the regressed variable. Though this holds in the case of binary classification, other cases remain, such as the multi-class classification problem.

We propose to take the regionalization of the predictors into account by applying kriging to perform regression in the parameter space. The theory of kriging, proposed by Georges Matheron, interpolates the value attributed to each point of the space where the regionalized variable is unknown, using the neighbourhood and the spatial correlation [Matheron, 1969, Matheron, 1971]. This yields not only an estimator but also an estimation of the variance, which is pertinent when testing hypotheses. In conclusion, we propose a method that performs regression directly, as opposed to Breiman's method, which achieves the same by partitioning the space.

Kriging, like the support vector machine (SVM) [Schölkopf et al., 2002], is formulated as the optimization of a quadratic function, but unlike the SVM its kernel is an extension of the variances, namely the covariance. It has been applied in geostatistics to the cartography of mineral ores, to the study of sea beds, to atmospheric pressure, etc. In the present case, its use in multivariate statistics differs from geostatistics. First, the space is no longer geographical, but a set of parameters. We consider a family of pairs of parameters $x_i$ and measures $y_i$, $i = 1 \ldots N$.

The $x_i$'s are the inputs and the $y_i$'s the outputs. Every value $x_i$ is a vector consisting of $d$ scalar parameters, and we would like to predict the measure at a new test point $x_0$ not in the family. Once the estimation is done using the training set, we use another sample set, the test set, to evaluate the performance of the estimation.

The second difference lies in the size of the training set. The orders of $N$ and $d$ vary, but are sometimes quite large. A family of average size may associate thousands of parameters with thousands of samples; in other situations, $N$ could reach millions. But the kriging estimator at a point $x_0$ of the space requires solving a linear system of $N$ variables. Could we avoid inverting a matrix that is around $N \times N$, i.e. with $N^2$ elements?

##### Motivation

This study's primary goal is to express the estimator $y^*(x_0)$ over all points $x_0$ without inverting an $N \times N$ matrix, while using efficient algorithms. Following this goal, we shall decompose the estimator into $d$ one-dimensional scalar estimators with independent scalar variables. The second objective consists in finding a solution for classification using vector regressions, following a similar decomposition.

## 2 Reminder on kriging

Kriging is an optimal prediction of a regionalized variable $y$ defined on a metric space. To estimate $y$ at a new test point $x_0$, we use a linear function of the training values $y(x_i)$, while taking into account the interdependence of the training samples.

The idea is to interpret $y$ as a random function with mean value $m(x)$ and residue $z(x)$:

$$\begin{cases} y(x) = m(x) + z(x) = a_0 + a_1 x + a_2 x^2 + z(x) \\ E[y(x)] = m(x); \qquad E[z(x)] = 0 \end{cases} \tag{1}$$

The mean, or drift, $m(x)$ is a deterministic function, though one that depends on the point $x$, while $z(x)$ is a stationary random function with zero mean and covariance $\sigma$. We say that we *estimate* the deterministic but unknown value of the drift, and that we *krige* the random values $z(x_0)$ and $y(x_0)$ (and not their mean values). The final objective remains the kriging of $y$, but an intermediate objective is to estimate the drift and to krige the residue. The functions $m$ and $\sigma$ are not arbitrary: the terms of the drift should be linearly independent, and the covariance positive definite.

In the present case four enumerations appear: the parameters, from $1$ to $d$, with running index $s$; the samples, from $1$ to $N$, with running indices $i$ and $j$; and the coefficients of the drift, from $0$ to $2$. For the problem of classification, which we will touch upon later, it is also required to take into account the enumeration over the classes. However, four indexes lead to unreadable equations, so we successively treat the estimation of the drift $m$, and then the kriging of $y$. These two steps are presented for scalar regression; classification will later be interpreted as a vector kriging.

Regarding notations, Matheron uses the Einstein convention for implicit sums, while the more recent book by Chilès & Delfiner avoids it and writes the linear systems in a more compact matrix representation, which is the norm in the machine learning community. In this study we keep the older notations.

### 2.1 Estimation of the drift in x0

Given the space of parameters, we would like to estimate the value of the drift $m$ at the point $x_0$, as introduced in expression (1). First we give a model for the drift, assumed to be quadratic:

$$m(x) = a_0 + a_1 x + a_2 x^2, \tag{2}$$

We then form the linear combination:

$$m^*(x_0) = \sum_i \mu_i \, y(x_i) \tag{3}$$

and we require that the estimator be universal, that is, whatever $a_0$, $a_1$, $a_2$, no bias is introduced, i.e.:

$$E[m^*(x_0)] = m(x_0). \tag{4}$$

This gives us three constraints:

$$\sum_i \mu_i = 1, \qquad \sum_i \mu_i x_i = x_0, \qquad \sum_i \mu_i x_i^2 = x_0^2. \tag{5}$$

The estimation variance of $m^*(x_0)$ is given by:

$$E[m(x_0) - m^*(x_0)]^2 = E\Big\{ m(x_0) - \sum_i \mu_i \big( z(x_i) + m(x_i) \big) \Big\}^2$$

which, taking account of the universality conditions (5), simplifies to

$$E[m(x_0) - m^*(x_0)]^2 = \sum_j \sum_i \mu_j \mu_i \, \sigma(i,j) \tag{6}$$

where $\sigma(i,j)$ is the covariance between the residues $z(x_i)$ and $z(x_j)$. We obtain the coefficients $\mu_i$ by minimizing this estimation variance under the three constraints (5). The classical Lagrangian approach yields three multipliers $\rho_0$, $\rho_1$, $\rho_2$, and leads to the system of $N+3$ equations:

$$\begin{cases} \sum_j \mu_j \, \sigma(i,j) = \rho_0 + \rho_1 x_i + \rho_2 x_i^2, & 1 \le i \le N \\ \sum_i \mu_i = 1, \quad \sum_i \mu_i x_i = x_0, \quad \sum_i \mu_i x_i^2 = x_0^2 \end{cases} \tag{7}$$

The $\mu_i$'s are the coefficients of the optimal estimator (3). We obtain the estimation variance by multiplying the first relation of the system by $\mu_i$, summing over $i$, and then using the three constraints, which finally gives:

$$E[m(x_0) - m^*(x_0)]^2 = \rho_0 + \rho_1 x_0 + \rho_2 x_0^2. \tag{8}$$
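The drift system (7) is small enough to be solved directly as one block linear system. The sketch below is our own illustration (the function name and the exponential covariance model are assumptions, not from the paper); it returns $m^*(x_0)$ together with the estimation variance (8).

```python
import numpy as np

def drift_estimate(x, y, x0, cov):
    """Estimate the quadratic drift m*(x0) = sum_i mu_i y(x_i) by system (7).

    x, y : training abscissae and values (1-D arrays of length N)
    x0   : estimation point
    cov  : covariance model of the residues, cov(xi, xj) -> sigma(i, j)
    """
    N = len(x)
    S = cov(x[:, None], x[None, :])            # N x N covariance matrix
    F = np.stack([np.ones(N), x, x ** 2], 1)   # drift basis 1, x, x^2
    f0 = np.array([1.0, x0, x0 ** 2])          # universality targets (5)
    # Block system:  S mu - F rho = 0  and  F^T mu = f0
    A = np.block([[S, -F], [F.T, np.zeros((3, 3))]])
    b = np.concatenate([np.zeros(N), f0])
    sol = np.linalg.solve(A, b)
    mu, rho = sol[:N], sol[N:]
    # Estimation variance (8): rho0 + rho1 x0 + rho2 x0^2
    return mu @ y, rho @ f0
```

By construction the weights $\mu_i$ satisfy the universality conditions (5), so the estimator reproduces any quadratic drift exactly, whatever the covariance model.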

#### 2.1.1 Estimation of coefficients of the drift

We can refine the estimate of the drift by detailing the estimations of the three coefficients $a_0$, $a_1$, $a_2$, since we have

$$m^*(x) = a_0^* + a_1^* x + a_2^* x^2$$

The method consists of representing each of these three terms as a weighted sum of the data and minimizing its variance following the Lagrangian method, as usual. We do not indicate the steps, which can be found in [Matheron, 1971], though we shall give the results for the case of zonal kriging.

These coefficient estimators are also useful for reduced drifts, limited to the first term or to the first two: the constant drift is then estimated by $a_0^*$, and the linear drift by $a_0^* + a_1^* x$.

### 2.2 Kriging of y in x0

We follow the same procedure as before. To krige $y$ at $x_0$ we form a linear estimator of the $y(x_i)$, i.e.

$$y^*(x_0) = \sum_i \lambda_i \, y(x_i) \tag{9}$$

and we minimize the variance of the error conditionally on the three constraints (5), introducing three Lagrange multipliers $\tau_0$, $\tau_1$, $\tau_2$, which gives the $N+3$ equations:

$$\begin{cases} \sum_j \lambda_j \, \sigma(i,j) = \sigma(i,0) + \tau_0 + \tau_1 x_i + \tau_2 x_i^2, & 1 \le i \le N \\ \sum_i \lambda_i = 1, \quad \sum_i \lambda_i x_i = x_0, \quad \sum_i \lambda_i x_i^2 = x_0^2 \end{cases} \tag{10}$$

and the minimal variance

$$E[y(x_0) - y^*(x_0)]^2 = \sigma^2(y(x_0)) - \sum_i \lambda_i \, \sigma(i,0) + \tau_0 + \tau_1 x_0 + \tau_2 x_0^2.$$

Instead of attacking the system (10) head-on, it is in general simpler to combine the estimation of the drift with the kriging of the residue at $x_0$. For this we use the linear estimator:

$$z^*(x_0) = \sum_i \nu_i \, z(x_i) \tag{11}$$

of $z(x_0)$ from the $z(x_i)$. In this case there is no universality constraint, and it is sufficient to minimize the variance of the error to obtain the system

$$\sum_j \nu_j \, \sigma(i,j) = \sigma(i,0), \qquad 1 \le i \le N \tag{12}$$

of $N$ equations in $N$ unknowns, which gives the coefficients $\nu_i$. The corresponding kriging variance is

$$E[z^*(x_0) - z(x_0)]^2 = \sigma(0,0) - \sum_i \nu_i \, \sigma(i,0) \tag{13}$$

However, we cannot directly construct the estimator (11), since we only know, for each $i$, the value $y(x_i)$ and the optimal estimate $m^*(x_i)$ of the drift. But we can use the theorem of kriging additivity [Matheron, 1971], which states that we can krige the estimated residues as if they were the true residues. Consequently the estimator

$$z^{**}(x_0) = \sum_i \nu_i \big[ y(x_i) - m^*(x_i) \big] \tag{14}$$

is optimal when the $\nu_i$'s are solutions of the system (12); it estimates $z(x_0)$ with the kriging variance

$$E[z^*(x_0) - z(x_0)]^2 = \sigma(0,0) - \sum_i \nu_i \, \sigma(i,0). \tag{15}$$

The kriging of $y$ at $x_0$ and its variance are therefore given by:

$$y^*(x_0) = \sum_i \big[ (\mu_i + \nu_i) \, y(x_i) - \nu_i \, m^*(x_i) \big] = \sum_i \lambda_i \, y(x_i) \tag{16}$$

and

$$E[y(x_0) - y^*(x_0)]^2 = \rho_0 + \rho_1 x_0 + \rho_2 x_0^2 + \sigma(0,0) - \sum_i \nu_i \, \sigma(i,0)$$

where the $\lambda_i$'s are the weights attributed to the $y(x_i)$ in equation (9), but now calculated easily from the $\mu_i$'s and the $\nu_i$'s. The two systems (7) and (12), for the estimation of the drift and of the residue, are regular, by convexity of the variance; each admits a unique solution.
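The two-step route of this subsection can be condensed into a short numerical sketch (our illustration, under an assumed covariance model): estimate the drift by system (7) at every required point, solve the residue system (12), and combine the two by the additivity theorem (16).

```python
import numpy as np

def krige(x, y, x0, cov):
    """Kriging of y at x0: drift system (7), residue system (12),
    combined by the additivity theorem (16)."""
    N = len(x)
    S = cov(x[:, None], x[None, :])
    F = np.stack([np.ones(N), x, x ** 2], 1)
    A = np.block([[S, -F], [F.T, np.zeros((3, 3))]])

    def m_star(p):  # drift estimate at point p, system (7)
        b = np.concatenate([np.zeros(N), [1.0, p, p ** 2]])
        return np.linalg.solve(A, b)[:N] @ y

    nu = np.linalg.solve(S, cov(x, x0))                    # residue system (12)
    z_star = nu @ (y - np.array([m_star(p) for p in x]))   # estimator (14)
    return m_star(x0) + z_star                             # additivity (16)
```

Re-solving the drift system at every point is wasteful; a single factorization of `A` would serve all right-hand sides, but the sketch keeps the correspondence with the text explicit.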

### 2.3 Covariance or Variogram?

We have modeled the residue $z$ as a stationary random function with zero mean and covariance $\sigma$, which implies a finite variance $\sigma(0,0)$. Now this model of the residue does not always fit reality. We observe in the geosciences certain phenomena, such as the distribution of minerals in mines or rainfall, whose capacity of dispersion appears infinite: the larger the period or the region we study, the larger their variance, without the appearance of any limit in the order of the measured values. The covariance, whose value at the origin is the variance, is then undefined, and we must avoid it in the formalism. Fortunately, the variance of the increments is still finite for finite distances between $x$ and $x'$, and the tool which expresses it is the variogram

$$\gamma(x, x') = \frac{E[y(x) - y(x')]^2}{2} = \sigma(0,0) - \sigma(x, x'). \tag{17}$$

Moving from covariance to variogram in the kriging problem is equivalent to replacing the model (1) of the function $y$ by the following:

$$\begin{cases} y(x) = m(x) + z(x) = a_1 x + a_2 x^2 + z(x) \\ E[z(x)] = a_0; \qquad E[y(x)] = m(x) + a_0 \end{cases} \tag{18}$$

where the mean no longer contains the constant term, and the residue admits $\gamma$ for variogram. When we replace the covariances $\sigma(i,j)$ by $-\gamma(i,j)$ in the system (7) and in equation (12), the constant terms get eliminated: we find the same solutions for the $\mu_i$'s as before, and also for the multipliers, up to a change of sign. Concerning the residue, one should ensure that the two members of the kriging (11) have the same expectation, which implies a universality condition on the $\nu_i$'s and leads to the system:

$$\begin{cases} \sum_j \nu_j \, \gamma(i,j) = \gamma(i,0) - \tau_0, & 1 \le i \le N \\ \sum_i \nu_i = 1 \end{cases} \tag{19}$$

We obtain the kriging of $y$ at $x_0$ by applying the theorem of additivity to the sum

$$y^*(x_0) = a_1^* x_0 + a_2^* x_0^2 + z^*(x_0) \tag{20}$$

where the term $a_0$ has disappeared, which shows that kriging involves increments only. The associated variance is

$$E[y(x_0) - y^*(x_0)]^2 = \sum_i \lambda_i \, \gamma(i,0) + \rho_0 + \rho_1 x_0 + \rho_2 x_0^2.$$

The term $a_0$ is useless for the kriging of $y$, and has no significance when the neighbourhood is the whole space. It becomes meaningful when the data occupy a bounded portion of the space, or when we work in sliding neighbourhoods, with a bounded support centered at $x_0$, to map the drift of $y$. The estimator of $a_0$ which results from the equations (7), in their variogram version, just provides the drift at $x_0$, since the terms in $x_0$ and $x_0^2$ cancel at the origin.
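In the variogram formulation, the residue system (19) takes the classical ordinary-kriging form, with no covariance required. Below is a minimal sketch under an assumed variogram model; the Lagrange multiplier carries the opposite sign convention to $\tau_0$ in (19), which changes nothing in the weights.

```python
import numpy as np

def ordinary_krige(x, y, x0, gamma):
    """Kriging with a variogram, system (19): the weights nu sum to 1.

    gamma(h) is the variogram model, e.g. the linear one gamma(h) = C h.
    """
    N = len(x)
    G = gamma(np.abs(x[:, None] - x[None, :]))      # gamma(i, j)
    A = np.block([[G, np.ones((N, 1))],
                  [np.ones((1, N)), np.zeros((1, 1))]])
    b = np.concatenate([gamma(np.abs(x - x0)), [1.0]])
    sol = np.linalg.solve(A, b)
    # Estimate, and kriging variance (sum_i nu_i gamma(i,0) plus the multiplier)
    return sol[:N] @ y, sol @ b
```

With the linear variogram, the solution exhibits the screen effect of section 4.1: only the two points framing $x_0$ receive non-zero weights, so the kriging is the linear interpolation between them.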

## 3 Kriging in the presence of zonal anisotropy

The two systems (7) & (12), combined in (16), answer the question of the optimal estimation of $m(x_0)$ and of $y(x_0)$, but at the price of inverting matrices of size $N \times N$, and they require working in the full parameter space. This poses computational constraints for large training sets, which we shall now try to remove using zonal anisotropy. Zonality will allow us to decompose the multidimensional kriging of $y$ into $d$ univariate krigings, such that each parameter separately generates one term of the estimator of $y$ at the point $x_0$.

### 3.1 Zonality hypothesis

In geostatistics, zonality is a form of anisotropy where the regionalized variable of the 3-dimensional physical space varies in one direction only [Matheron, 1971, Jean-Paul Chilès, 2015, Wackernagel, 2003]. The classical example is that of strata in vertical layers of sediments, which vary very little horizontally. The zonality is then in the vertical direction, but it can be combined with other anisotropies, for example two other zonalities in the horizontal directions.

When we place ourselves in the space of parameters, the hypothesis of zonal anisotropy is formulated by interpreting $y$, in each neighbourhood where the kriging will next be applied, as a weighted sum of random functions of only one parameter each, and whose cross variations are independent:

$$y(x) = w_1 y_1(x_1) + \ldots + w_s y_s(x_s) + \ldots + w_d y_d(x_d) \tag{21}$$

The linearity of the components is a simple extension of the linearity of the regression model. The independence of the components is a strong assumption, which will be discussed in section 5. When we pass on to kriging, every $y_s$ is modeled as the sum of a drift and a residue, with

$$y_s(x_s) = m_s(x_s) + z_s(x_s)$$

so that

$$y(x) = \sum_s w_s \, m_s(x_s) + \sum_s w_s \, z_s(x_s) = m(x) + z(x).$$

Let us now see how the zonality hypothesis simplifies the kriging of the residue and of $y$ at the point $x_0$.

### 3.2 Zonal kriging of the residue

The zonal residue can be written as a sum of stationary, independent random functions $z_s$, each with zero mean:

$$z(x) = \sum_s w_s \, z_s(x_s) \tag{22}$$

Let us denote by $\sigma_s(i,j)$ the covariance of $z_s$ at the abscissas $x_{i,s}$ and $x_{j,s}$, and by $\sigma(i,j)$ that of the values of $z$, or of $y$, at the points $x_i$ and $x_j$, i.e.

$$E[z_s(x_{i,s}) \, z_s(x_{j,s})] = \sigma_s(i,j) \quad \text{and} \quad E[z(x_i) \, z(x_j)] = E[y(x_i) \, y(x_j)] = \sigma(i,j)$$

The cross independence of the functions $z_s$ implies that

$$E[z_s(x_{i,s}) \, z_t(x_{j,t})] = 0 \qquad \forall \, s, t \in [1 \ldots d], \; s \ne t \tag{23}$$

whence

$$\sigma(i,j) = \sum_s w_s^2 \, \sigma_s(i,j) \tag{24}$$

In the same way, the covariance between the function $z$ at the point $x_i$ and the component $w_s z_s$ at the point $x_j$, when account is taken of the zonal anisotropy, is equal to

$$E[z(x_i) \, w_s z_s(x_{j,s})] = E[w_s z_s(x_{i,s}) \, w_s z_s(x_{j,s})] = w_s^2 \, \sigma_s(i,j) \tag{25}$$

The coefficients $\nu_i$ of the kriging

$$z^*(x_0) = \sum_i \nu_i \, z(x_i) \tag{26}$$

of $z(x_0)$ with the help of the $N$ points $x_i$ satisfy the system of equations (12). It is not certain that the estimator thus obtained, even if optimal, satisfies the zonal decomposition (22). However, we can decompose the value $z^*(x_0)$ into zonal terms:

$$z^*(x_0) = \sum_j \nu_j \sum_s w_s \, z_s(x_{j,s})$$

The corresponding estimation variance can then be written, given the expressions (24) and (25) of the covariances:

$$E[z^*(x_0) - z(x_0)]^2 = E[z(x_0)]^2 - 2 \sum_i \nu_i \sum_s w_s^2 \, \sigma_s(i,0) + \sum_j \sum_i \nu_j \nu_i \sum_s w_s^2 \, \sigma_s(i,j).$$

This variance is minimal when all its derivatives with respect to the $\nu_i$ are null, which leads to the system of $N$ equations:

$$\sum_j \nu_j \sum_s w_s^2 \, \sigma_s(i,j) = \sum_s w_s^2 \, \sigma_s(i,0), \qquad 1 \le i \le N \tag{27}$$

On the other hand, one can write the system for the kriging of the component $z_s$ of $z$ at the point $x_{0,s}$ with the help of the components at the known points $x_{i,s}$. It suffices to form the estimator:

$$z_s^*(x_{0,s}) = \sum_i \nu_{is} \, z_s(x_{i,s}) \tag{28}$$

of estimation variance

$$E[z_s^*(x_{0,s}) - z_s(x_{0,s})]^2 \tag{29}$$

whose minimization leads to the $N$ equations, analogous to (12):

$$\sum_j \nu_{js} \, \sigma_s(i,j) = \sigma_s(i,0), \qquad 1 \le i \le N \tag{30}$$

The kriging of $z$ and those of its various components are linked. The sum over $s$ of the estimation variances (29), weighted by the coefficients $w_s^2$, is equal to

$$\sum_s w_s^2 \, E[z_s^*(x_{0,s}) - z_s(x_{0,s})]^2 = E\Big\{ \sum_s w_s \big[ z_s^*(x_{0,s}) - z_s(x_{0,s}) \big] \Big\}^2 \tag{31}$$

because the increments of the components are mutually independent. When each of the one-dimensional variances is minimal, both members are minimal, which implies

$$z^*(x_0) = \sum_s w_s \, z_s^*(x_{0,s}). \tag{32}$$

Since we do not know the terms $z_s(x_{i,s})$ that intervene in the sum (28), the decomposition (32) is not sufficient to determine the coefficients of $z^*(x_0)$ with respect to the data $z(x_i)$. The set of the $z_s(x_{i,s})$ contains, moreover, more information than their sums $z(x_i)$. However, if we add the systems (30), weighted by the $w_s^2$, we obtain:

$$\sum_j \sum_s \nu_{js} \, w_s^2 \, \sigma_s(i,j) = \sum_s w_s^2 \, \sigma_s(i,0) \tag{33}$$

So we see that if we set:

$$\nu_j(i) = \frac{\sum_s \nu_{js} \, w_s^2 \, \sigma_s(i,j)}{\sum_s w_s^2 \, \sigma_s(i,j)}, \tag{34}$$

then the $\nu_j(i)$ satisfy the equation of the system (27) relative to the parameter $i$, but not necessarily the others. However, we can remove the dependence on $i$ by introducing the term $\tau_s(j) = \sum_i w_s^2 \, \sigma_s(i,j)$, which appears when we sum the equations (34) over $i$. It leads to the new coefficient

$$\nu_j^* = \frac{\sum_s \nu_{js} \, \tau_s(j)}{\sum_s \tau_s(j)}, \tag{35}$$

which can be interpreted as a weighted average over $s$ of the $\nu_{js}$. These coefficients suggest the new estimator:

$$z^{**}(x_0) = \sum_i \nu_i^* \, z(x_i) \tag{36}$$

whose coefficients are no longer those of the estimator (26). On the other hand, it satisfies the condition of zonality, and its coefficients involve only 1-D krigings. So the knowledge of the coefficients $\nu_{is}$ relating to the 1-D residues, and of their covariances, is sufficient to determine the weights that estimate the residue in the parameter space.
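The weights $\nu_j^*$ of (35) only require the $d$ one-dimensional systems (30). The sketch below follows our reading of (34)-(35), in which $\tau_s(j) = w_s^2 \sum_i \sigma_s(i,j)$; both this formula and the per-axis covariance models are assumptions of the illustration.

```python
import numpy as np

def zonal_residue_weights(X, x0, covs, w):
    """Zonal kriging weights nu*_j of (35), from d 1-D systems (30).

    X       : (N, d) sample coordinates; x0: (d,) target point
    covs[s] : 1-D covariance model on axis s, covs[s](a, b) -> sigma_s
    w       : (d,) zonal weights w_s
    """
    N, d = X.shape
    num = np.zeros(N)
    den = np.zeros(N)
    for s in range(d):
        xs = X[:, s]
        Ss = covs[s](xs[:, None], xs[None, :])         # sigma_s(i, j)
        nus = np.linalg.solve(Ss, covs[s](xs, x0[s]))  # 1-D system (30)
        tau = w[s] ** 2 * Ss.sum(0)                    # tau_s(j), our reading
        num += nus * tau
        den += tau
    return num / den                                   # nu*_j of (35)
```

For $d = 1$ the weighted average is trivial and the function reduces to the ordinary 1-D residue kriging, which is a useful sanity check.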

### 3.3 Estimation of the drift

Let us first treat the case where the drift is reduced to a constant term $a_0$. This allows us to start with a simpler notation, and the situation is very instructive by itself. We thus suppose that $y$ is a stationary random function whose mean $a_0$ we do not know. As in the general theory, we form the estimator

$$m^*(x_0) = \sum_i \mu_i \, y(x_i),$$

which is unbiased when $\sum_i \mu_i = 1$. The zonal decomposition of $m^*(x_0)$ is written as:

$$m^*(x_0) = \sum_i \mu_i \sum_s w_s \, y_s(x_{i,s}) \tag{37}$$

Every $y_s$ is a stationary random function of mean $a_{0,s}$, with

$$a_0 = \sum_s w_s \, a_{0,s}. \tag{38}$$

The minimization of the system of equations (7) reduces to

$$\begin{cases} \sum_j \mu_j \sum_s w_s^2 \, \sigma_s(i,j) = \rho_0, & 1 \le i \le N \\ \sum_j \mu_j = 1 \end{cases} \tag{39}$$

where $\rho_0$ is the Lagrange parameter and where $\sum_s w_s^2 \sigma_s(i,j)$ is the covariance between $y(x_i)$ and $y(x_j)$. Similarly, we form on each axis the estimator

$$m_s^*(x_{0,s}) = \sum_i \mu_{is} \, y_s(x_{i,s}) \tag{40}$$

whose Lagrangian minimization gives

$$\begin{cases} \sum_j \mu_{js} \, \sigma_s(i,j) = \rho_{0,s}, & 1 \le i \le N \\ \sum_j \mu_{js} = 1. \end{cases} \tag{41}$$

By summation over $s$, weighted by the $w_s^2$, we find for the first $N$ equations

$$\sum_s \sum_j \mu_{js} \, w_s^2 \, \sigma_s(i,j) = \sum_s w_s^2 \, \rho_{0,s}, \qquad 1 \le i \le N$$

If we put

$$\rho_0 = \sum_s w_s^2 \, \rho_{0,s} \tag{42}$$

and if we suppose the systems (41) solved for all $s$, then we deduce the $\mu_j$'s:

$$\mu_j = \frac{\sum_s \mu_{js} \, w_s^2 \, \sigma_s(i,j)}{\sum_s w_s^2 \, \sigma_s(i,j)} = \frac{\sum_s \mu_{js} \, \tau_s(j)}{\sum_s \tau_s(j)} \tag{43}$$

i.e. the solutions of the first $N$ equations of (39). As the estimates (40) are unbiased, we find, by taking mathematical expectations in (37):

$$E[m^*(x_0)] = \sum_i \mu_i \sum_s w_s \, E[y_s(x_{i,s})] = \sum_i \mu_i \sum_s w_s \, a_{0,s} = a_0 \sum_j \mu_j$$

The sum of the $\mu_j$'s of (43) is therefore 1, so that they are the solution of the whole system (39).

If the drift is now quadratic, the same reasoning shows that the coefficients $\mu_j$ are still expressed in terms of the $\mu_{js}$ by the equations (43), but the $\mu_{js}$ are no longer the same as for a constant drift. When the coefficients of the one-dimensional drifts verify the three conditions (5), then the $\mu_j$ still satisfy them. The approach extends, besides, to any drift of the type $\sum_l a_l f_l(x)$, where the $f_l$ are linearly independent functions (to ensure the regularity of the systems of equations).

### 3.4 Zonal kriging of y

As for the kriging of $y$, it suffices to apply the additivity theorem, which gives:

$$y^*(x_0) = m^*(x_0) + \sum_i \nu_i^* \big[ y(x_i) - m^*(x_i) \big] \tag{44}$$

where the coefficients of the estimators $m^*$ are those of the equations (43), and the $\nu_i^*$ those of (35). This result is fundamental. Each coefficient $\mu_j$ (resp. $\nu_j^*$) appears in (43) (resp. (35)) as the average of 1-D coefficients, weighted by the covariances $\tau_s(j)$. We further determine these $\mu_{js}$ and $\nu_{js}$ by formal computation on the line, valid even for a very large $N$. This requires a covariance model or an associated variogram. We will choose the linear variogram, with or without a nugget effect, because it is the most common. On the other hand, we do not need to estimate the zonal decomposition of $y$ into its components of (21), which is a considerable advantage.

We have introduced the weights $w_s$ into the expression (21) of the zonality of $y$ because the parameters are not physically homogeneous. One can designate a temperature, another a chemical content, a third an orientation, etc. Since their variation depends on the choice of their units, the weights allow us to take this into account. They can be used, for example, to calibrate the parameters among themselves by requiring them to have the same variance.

### 3.5 Why the zonal hypothesis?

As in decision trees, where the euclidean space is replaced by a product of $d$ one-dimensional spaces, the hypothesis of zonality presents four significant advantages:

1. The dimensions of the geographical space are each physically equivalent, whereas the same cannot be said of a parameter space, even if the parameters are eventually correlated. In more technical terms, in the space of parameters the only balls that have a physical sense are the rectangular parallelepipeds parallel to the axes. It is only relative to the corresponding axis-wise distances that we can build variograms and krige, which is in accordance with the zonal decomposition.

2. Random forests manipulate subspaces of parameters drawn at random, which lead to decision subtrees. Zonal independence allows the kriging of each parameter for any such subspace.

3. In the earth sciences, when the number of samples is too high, it is no longer trivial to solve the kriging system. The difficulty is overcome by means of sliding neighborhoods, which locally reduce the volume of data. But the notion of neighborhood implies that the function is sampled fairly homogeneously throughout the space, and this pseudo-geographical regularity makes little sense in a parameter space: the point to be predicted can be very close to a known point along one coordinate, but very distant from the same known point along another. By producing systems of one-dimensional equations, zonal kriging allows moving-window neighborhoods, since they are then independent from one axis to the other.

4. While zonality drastically reduces the dimension of the parameter space, it does not affect the number of samples. On the other hand, we manage to solve the kriging systems of equations formally in some cases, because they are posed on the line, as we will see now.

## 4 1-D regression by kriging

On the line, in fact, we know how to solve the equations of kriging for the standard variograms. G. Matheron calculated the solutions for a certain number of them (the texts are available at the website http://cg.ensmp.fr/bibliotheque/cgi-bin/public/bibli_index.cgi): universal kriging [Matheron, 1969], *La théorie des variables régionalisées et ses applications* [Matheron, 1971], and the *Note de géostatistique n° 106, Exercices sur le krigeage universel* [Matheron, 1970]. We review some of these cases in this section, without proofs, and with discrete calculations.

In this section, the space is always the line, and the regionalized scalar variable $y$ is supposed to be known at $N$ sample points $x_1, \ldots, x_N$. The calculations bring us to three cases: linear variogram, linear variogram with nugget effect, and large grids. Figure 1 illustrates the variogram for these cases. The first is modeled by a linear variogram of the type

$$\gamma(x - x') = C \, |x - x'|.$$

The second case is also of the linear model, but with a jump at the origin, which is called the nugget effect. This additional variance testifies to a regionalization at small scale, or possibly to errors of measurement, because it shows that the mean quadratic increment between two points, even very close ones, does not tend to zero. The third case describes a purely random situation, without spatial correlation.

The first two cases are formulated in terms of variograms, relative to a random function of which only the increments are defined, and stationary. For the third case, that of the large grids, where covariance and variogram are equivalent, we formulate the results in terms of covariances. The estimators obtained are always linear functions of the known values, whose coefficients are found by minimizing the estimation variance. For the three models studied below, we express at first the kriging of the residue. In a second step, we give ourselves a drift of the form $a_1 x + a_2 x^2$ for the first two cases, and $a_0 + a_1 x + a_2 x^2$ for large grids, and we estimate the parameters of the drift. Thirdly, we calculate the kriging at the point $x_0$ by the additivity theorem. All estimators are given with their estimation variances. Finally, if we assume a linear drift, i.e. if $a_2 = 0$, the estimator of $a_1$ is the same as that found for the quadratic drift.

### 4.1 Linear Variogram

##### Kriging the residue z(x0)

Let us consider on the line the random function $z$ representing the residue, with the linear variogram $\gamma(h) = C|h|$, where $h$ is the distance between $x$ and $x'$. We suppose a realization of $z$ known at two points $x_i$ and $x_{i+1}$, and at a certain number of points outside the interval $[x_i, x_{i+1}]$. We want to krige the value at a point $x_0$ comprised between $x_i$ and $x_{i+1}$. Putting $\varepsilon = (x_0 - x_i)/(x_{i+1} - x_i)$, the kriging at $x_0$ is given by:

$$z^*(x_0) = \varepsilon \, z(x_{i+1}) + (1 - \varepsilon) \, z(x_i), \qquad x_i \le x_0 \le x_{i+1} \tag{45}$$

which only involves the residues at the two abscissae that frame $x_0$. The Lagrange factor is null, and the calculation of the kriging variance gives:

$$\sigma_K^2 = 2C \, \frac{(x_{i+1} - x_0)(x_0 - x_i)}{x_{i+1} - x_i} = 2C \, \varepsilon (1 - \varepsilon)(x_{i+1} - x_i).$$
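The closed form (45) needs no linear solve at all. A sketch, with the slope $C$ of the variogram made explicit in the variance (an assumption consistent with $\gamma(h) = C|h|$ above):

```python
import numpy as np

def krige_residue_linear(x, z, x0, C=1.0):
    """Closed-form residue kriging (45) for the linear variogram C|h|.

    Only the two abscissae framing x0 intervene (screen effect).
    x must be sorted and x0 strictly inside (x[0], x[-1]).
    """
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    i = np.searchsorted(x, x0) - 1            # so that x[i] <= x0 <= x[i+1]
    eps = (x0 - x[i]) / (x[i + 1] - x[i])
    z_star = eps * z[i + 1] + (1 - eps) * z[i]
    var = 2 * C * eps * (1 - eps) * (x[i + 1] - x[i])
    return z_star, var
```

The estimate is simply the linear interpolation between the two neighbours, and the variance vanishes at the data points, as kriging requires.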
##### Estimation of the drift

Let there be $N$ points at arbitrary distances from each other. We look for an optimal estimator of the drift $a_1 x + a_2 x^2$ when the variogram is $\gamma(h) = |h|$. Let us define:

$$p_1 = \frac{x_2 - x_1}{2}, \qquad p_N = \frac{x_N - x_{N-1}}{2}, \qquad p_i = \frac{x_{i+1} - x_{i-1}}{2} \quad (1 < i < N)$$

It is convenient to take as origin the midpoint of the extreme points, so that $x_1 = -x_N$; this is also the origin adapted to local neighborhoods. We then define

$$\frac{1}{c_2} = \frac{x_1^2 + x_N^2}{2} \, (x_N - x_1) - \sum_i p_i x_i^2$$

The expressions of $a_1^*$ and of $a_2^*$ are then

$$a_1^* = \frac{y_N - y_1}{x_N - x_1}$$

$$a_2^* = c_2 \left[ (x_N - x_1) \, \frac{y_N + y_1}{2} - \sum_i p_i y_i \right]$$
##### Kriging at point x0

By application of the additivity theorem, the expression of the kriging at the point $x_0$ is obtained by replacing, in (45), $z(x_i)$ and $z(x_{i+1})$ by their estimations:

$$y^*(x_0) = \varepsilon \big[ y(x_{i+1}) - a_1^* x_{i+1} - a_2^* x_{i+1}^2 \big] + (1 - \varepsilon) \big[ y(x_i) - a_1^* x_i - a_2^* x_i^2 \big] + a_1^* x_0 + a_2^* x_0^2,$$

giving, after simplifications:

$$y^*(x_0) = \varepsilon \, y(x_{i+1}) + (1 - \varepsilon) \, y(x_i) - a_2^* \, \varepsilon (1 - \varepsilon)(x_{i+1} - x_i)^2.$$

The coefficient $a_1^*$ of the linear term of the drift disappears, and the variance of $y^*(x_0)$ is the sum of the kriging variance of the residue and of the estimation variance of the drift:

$$\sigma_U^2 = \sigma_K^2 + \sigma_D^2 = 2 \varepsilon (1 - \varepsilon)(x_{i+1} - x_i) + c_2 \big[ \varepsilon (1 - \varepsilon)(x_{i+1} - x_i)^2 \big]^2$$

When the abscissae are regularly spaced at distance $h$ from each other, the term $c_2$ simplifies into

$$c_2 = \frac{6}{N(N^2 - 1)} \, \frac{1}{h^3},$$

and the kriging of $y$ becomes

$$y^*(x_0) = \varepsilon \, y(x_{i+1}) + (1 - \varepsilon) \, y(x_i) - a_2^* \, h^2 \, \varepsilon (1 - \varepsilon),$$

of variance

$$\sigma_U^2 = 2 \varepsilon (1 - \varepsilon) \, h + \frac{6 \, \varepsilon^2 (1 - \varepsilon)^2}{N(N^2 - 1)} \, h.$$
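The formulas of this subsection combine into a few lines. This is our illustration: we compute $c_2$ from its general expression rather than the regular-grid simplification, and we take the origin at the midpoint of $x_1$ and $x_N$ as in the text.

```python
import numpy as np

def krige_linear_drift_quadratic(x, y, x0):
    """Universal kriging on the line: linear variogram, quadratic drift.

    Returns (y*(x0), a1*, a2*).  Assumes x sorted, origin at the midpoint
    of the extreme points (x[0] == -x[-1]) and x[0] < x0 < x[-1].
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    p = np.empty(len(x))                      # weights p_i of the drift
    p[0] = (x[1] - x[0]) / 2
    p[-1] = (x[-1] - x[-2]) / 2
    p[1:-1] = (x[2:] - x[:-2]) / 2
    a1 = (y[-1] - y[0]) / (x[-1] - x[0])
    c2 = 1.0 / ((x[0] ** 2 + x[-1] ** 2) / 2 * (x[-1] - x[0]) - p @ x ** 2)
    a2 = c2 * ((x[-1] - x[0]) * (y[-1] + y[0]) / 2 - p @ y)
    i = np.searchsorted(x, x0) - 1            # bracketing interval
    eps = (x0 - x[i]) / (x[i + 1] - x[i])
    y_star = (eps * y[i + 1] + (1 - eps) * y[i]
              - a2 * eps * (1 - eps) * (x[i + 1] - x[i]) ** 2)
    return y_star, a1, a2
```

On data that are exactly quadratic, the estimator is exact: both $a_1^*$ and $a_2^*$ are unbiased, and the correction term removes the curvature bias of the linear interpolation.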
##### Example

Figure 2 represents the same data, modeled on the left by a linear variogram with a linear drift, and on the right by a pure nugget effect. The drift is represented by solid lines. On the left, it connects the two extreme points; on the right, it minimizes the squares of the vertical projections of the experimental points onto it. The kriging is indicated on the left by dashed lines: it is formed of straight segments connecting neighboring experimental points. Extrapolation beyond the extreme points prolongs the drift. On the right, kriging and drift coincide, and are obtained by the least-squares estimator.

### 4.2 Linear Variogram with nugget effect

##### Kriging the residue z(x0)

For the kriging of the residue we consider $N$ samples regularly distributed. The system to be solved is (19), for the variogram:

$$\gamma(i,j) = \gamma(x_i - x_j) = |x_i - x_j| = |h| \quad \text{if } x_i \ne x_j; \qquad \gamma(x_i - x_j) = -C \quad \text{if } x_i = x_j.$$

The simplest method here is to gather the data into three groups, taking into account only the values at the two points which enclose $x_0$, plus the mean of the samples. We then have a system of three equations with three unknowns, easy to solve. When the nugget effect tends to zero, we recover the previous linear case, and when it becomes very large, the case of the large grids presented a little further on.

##### Estimation of the drift

For the calculation of the drift we will assume the data to be regularly spaced. The nugget effect makes this assumption acceptable, and it can be used as an approximation for irregular data. $y$ is supposed to be known at the points of abscissas:

$$x_i = \left( i - \frac{n}{2} \right) h$$

(with the origin at the data center). The variogram is the sum of a linear term and of a nugget-effect variance $C$:

$$\gamma(x_i - x_j) = |i - j| \, h, \quad x_i \ne x_j; \qquad \gamma(x_i - x_j) = -C, \quad x_i = x_j$$
##### Linear Drift

Consider first the case where the drift is reduced to the term and where we estimate the factor par . Let and the two roots of the equation

 α2−2(1+hC)α+1=0

the estimator is

 a∗1=∑iλi1y(xi)

with

 λi1=αi−n/2−βi−n/22Ch

and the variance of evaluates to

 σ2D=α1+n/2+α−n/2(1−α)Ch

If now the drift has the form $a_1 x + a_2 x^2$, the term $a_1^*$ of the optimal estimator is the same as above, and $a_2^*$ is written as:

$$a_2^* = \sum_i \lambda_{i2} \, y(x_i)$$

with $\lambda_{i2}$ of the form

$$\lambda_{i2} = f \left( \alpha^{\,i - n/2} + \beta^{\,i - n/2} \right) + g$$

where

$$f = \frac{1}{2h^2 \left( E - \dfrac{n(n+2)}{12} B \right)}, \qquad g = -\frac{2B}{n+1} \, f$$

with

$$B = \sum_i \alpha^{\,i - n/2}, \qquad E = \sum_i \left( \frac{n}{2} - i \right)^2 \alpha^{\,i - n/2}$$

The variance of $a_2^*$ is

$$D^2(a_2^*) = \frac{g}{h}$$

### 4.3 Large grids and least squares

As previously, we know the values of $y$ at the training points, situated anywhere on the line, and we want to krige at a point $x_0$ falling between two known points $x_i$ and $x_{i+1}$.

##### Kriging the residues

The residue is modeled by a stationary random function with unknown mean and a finite variance with small range (the range is the distance beyond which the data become independent). In such a case the variogram and the covariance