Fast Online Generalized Multiscale Finite Element Method using Constraint Energy Minimization


Eric T. Chung Department of Mathematics, The Chinese University of Hong Kong (CUHK), Hong Kong SAR. Email: tschung@math.cuhk.edu.hk. The research of Eric Chung is supported by Hong Kong RGC General Research Fund (Project 14317516).    Yalchin Efendiev Department of Mathematics & Institute for Scientific Computation (ISC), Texas A&M University, College Station, Texas, USA. Email: efendiev@math.tamu.edu.    Wing Tat Leung Department of Mathematics, Texas A&M University, College Station, TX 77843
Abstract

Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage, with a specific input parameter, to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced in which multiscale basis functions are adaptively constructed in selected regions to reduce the error significantly. In multiscale methods, it is desirable that only one or two such iterations are needed to bring the error below a desired threshold. Using the Generalized Multiscale Finite Element framework [10], it was shown that, by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve this error reduction. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting the oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm, which enriches the space in selected regions with large residuals. For our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.

1 Introduction

Many multiscale problems are prohibitively expensive to solve due to scale disparity and high contrast. These problems are often solved using some type of reduced-order model. Examples include numerical homogenization [7, 20], multiscale finite element methods [13, 11, 12], heterogeneous multiscale methods [8], variational multiscale methods [15], mortar multiscale methods [19], localized orthogonal decomposition [16], and so on. The main idea behind local model reduction techniques is to compute multiscale basis functions in each coarse block; these basis functions are computed using solutions of local problems.

Various approaches have been developed for designing multiscale basis functions. Some of the earlier works [11, 13, 12, 2] use harmonic extensions of standard finite element basis functions to compute multiscale basis functions. Because of the "homogeneous" traces imposed along coarse boundaries, these approaches can have large errors due to the mismatch between the fine-grid solution and the multiscale solution along the edges of coarse blocks. These approaches have been generalized by using oversampling ideas [13, 12], where one uses larger regions and solves local problems there. The solutions of these local problems are then used to construct boundary conditions for the multiscale basis functions, which reduces the errors due to artificial boundary conditions.

In later works [9, 10, 2], the authors showed that, in the presence of high contrast, one needs multiple basis functions per coarse region. In [10, 2], the Generalized Multiscale Finite Element Method (GMsFEM) is introduced, where the authors propose a systematic way of computing multiscale basis functions. The multiscale basis functions are computed by solving spectral problems in each coarse patch and selecting the dominant eigenvectors: the eigenvalues are ordered in increasing order, and the eigenvectors corresponding to small eigenvalues are selected. A spectral convergence rate proportional to 1/Λ has been derived for these approaches, where Λ is the smallest eigenvalue (across all coarse blocks) whose corresponding eigenvector is not included in the coarse space. In [4], using oversampling ideas and localization ideas [18, 16, 17, 14], the authors propose an approach which provides both mesh-size-dependent convergence and spectral convergence. The main idea of this approach, called CEM-GMsFEM, is to (1) compute some GMsFEM basis functions and (2) use constrained energy minimization in oversampling domains to construct multiscale basis functions. As a result, one obtains a minimal number of basis functions and can show a convergence rate depending on the coarse mesh size and Λ.

The above approaches can be classified as offline methods because the construction of the multiscale basis functions does not take into account the right-hand side. Offline methods can be tuned in various ways to achieve smaller errors; however, the error decay slows down once a certain number of basis functions have been selected, due to the slow decay of the eigenvalues beyond a certain point. To improve this, in [3, 6, 1, 5], the authors propose an online approach. The main idea of online approaches is to add multiscale basis functions using the residual information obtained after computing the coarse-grid solution. These online multiscale basis functions are computed adaptively, are chosen to decrease the error the most, and are solutions of local problems. Our analysis in [3, 6] shows that the per-iteration error decay depends on the eigenvalue Λ through a constant independent of scales and contrast; the error is not reduced unless Λ is sufficiently bounded away from zero, i.e., unless we have a sufficient number of offline basis functions. This was demonstrated analytically and numerically in our papers [3, 6]. Since the online procedure can be costly, our goal is to perform only one or two iterations.

In this paper, we investigate online approaches for CEM-GMsFEM and show that one can significantly improve the existing online approaches in some cases. First, we present an online approach which differs from our previous approach since CEM-GMsFEM uses oversampling; in particular, the online basis functions are formulated in the oversampled regions. Secondly, we present an analysis of the proposed method. Our analysis shows that the error decay obtained by adding online basis functions can be significantly better than that of the online GMsFEM. The error decay factor can be made close to zero (i.e., we obtain a very accurate approximation in one iteration) by choosing larger oversampling regions, provided we have a sufficient number of offline basis functions. To the best of our knowledge, this is the first result of this kind. Moreover, the online approach can be made adaptive, and adaptive error indicators can be derived.

We present numerical results, in which we consider high-contrast permeability fields and place the source term in different locations. All results show that the error drops by three orders of magnitude, which is much better than the previous online GMsFEM. We also present numerical results using adaptivity, which show that, by selecting only a few regions, one can achieve a significant error decay. Moreover, our adaptive algorithm allows one to input a parameter which specifies a desired convergence rate.

The paper is organized as follows. In Section 2, we present some preliminaries. In Section 3, we present the construction of offline basis functions, and in Section 4, we present our online adaptive enrichment algorithm. Section 5 is devoted to numerical results. The analysis of our method is presented in Section 6. Finally, we present some concluding remarks in Section 7.

2 Preliminaries

In this paper, we consider a class of multiscale problems of the form

 −div(κ(x)∇u) = f in Ω, (1)

subject to the homogeneous Dirichlet boundary condition on ∂Ω, where Ω is the computational domain. We assume that κ(x) is a heterogeneous coefficient with multiple scales and very high contrast. In solving (1), it is desirable to construct multiscale basis functions that can be computed locally and give a coarse-mesh convergence rate independent of the heterogeneities and the contrast. In [4], such an approach has been developed. When the source term f cannot be resolved well by the offline space, or when one needs to obtain solutions with more refined accuracy, it is desirable to construct online basis functions that capture properties unresolvable by the offline basis functions. This is the purpose of this paper.
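For concreteness, a fine-grid reference discretization of (1) can be sketched as follows. This is an illustrative finite-difference sketch on a unit square with hypothetical nodal arrays `kappa` and `f`; it is not the solver used in the paper, whose experiments employ a finite element fine grid.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_fine(kappa, f, n):
    """Solve -div(kappa grad u) = f on the unit square with u = 0 on the
    boundary, using a 5-point finite-difference scheme on an (n+1) x (n+1)
    nodal grid.  kappa and f are (n+1, n+1) arrays of nodal values."""
    h = 1.0 / n
    idx = lambda i, j: i * (n + 1) + j
    rows, cols, vals, b = [], [], [], np.zeros((n + 1) ** 2)
    for i in range(n + 1):
        for j in range(n + 1):
            k = idx(i, j)
            if i in (0, n) or j in (0, n):      # Dirichlet boundary node
                rows.append(k); cols.append(k); vals.append(1.0)
                continue
            b[k] = f[i, j]
            # harmonic-mean edge coefficients handle the heterogeneous kappa
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ke = 2.0 / (1.0 / kappa[i, j] + 1.0 / kappa[i + di, j + dj])
                rows += [k, k]; cols += [k, idx(i + di, j + dj)]
                vals += [ke / h**2, -ke / h**2]
    A = sp.csr_matrix((vals, (rows, cols)), shape=(b.size, b.size))
    return spla.spsolve(A, b).reshape(n + 1, n + 1)
```

A fine solve of this kind plays the role of the reference solution against which the multiscale errors in Section 5 would be measured.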

Next, we introduce the notion of fine and coarse grids. We let T^H be a usual conforming partition of the computational domain Ω into finite elements (triangles, quadrilaterals, tetrahedra, etc.), and let H be the mesh size of T^H. We refer to this partition as the coarse grid and assume that each coarse element is partitioned into a connected union of fine-grid blocks. The fine-grid partition will be denoted by T^h and is by definition a refinement of the coarse grid T^H; we use h to denote the mesh size of T^h. In Figure 1, we give an illustration of the fine grid, the coarse grid, and an oversampling domain. In the figure, the coarse grid consists of a union of rectangular coarse elements, denoted generically by K. Each coarse element is a union of finer rectangular elements. Moreover, for each coarse grid node xi, we define the coarse neighborhood ωi as the union of coarse elements having the vertex xi, and we define ω+i as an oversampled region containing ωi. Finally, we let Nc denote the number of coarse grid vertices.

We let V = H10(Ω). The weak solution u of problem (1) satisfies

 a(u,v)=(f,v),∀v∈V, (2)

where a(u,v) = ∫Ω κ∇u⋅∇v and (f,v) = ∫Ω fv. We would like to find a multiscale solution ums in a subspace of V, denoted Vms, satisfying

 a(ums,v)=(f,v),∀v∈Vms. (3)

We will introduce the construction of the multiscale finite element space Vms in the next section. We remark that the multiscale space consists of two components: an offline part and an online part. For the offline part, we construct multiscale basis functions in the offline stage, that is, before solving problem (3); these basis functions are independent of the source term f. For the online part, we construct multiscale basis functions in the online stage using the residual of an approximate solution; these basis functions depend on the source term f. In Section 3, we present the construction of the offline basis functions. In Section 4, we present the construction of the online basis functions and an online adaptive enrichment algorithm.

3 Offline basis functions

In this section, we present the construction of the offline multiscale finite element space, following the framework in [4]. To construct the multiscale space, we first construct an auxiliary space by solving a local spectral problem on each coarse element Ki. Next, we construct the multiscale basis functions by solving local minimization problems involving the auxiliary basis functions. Our multiscale finite element space is then the span of these multiscale basis functions. We now discuss the construction of both the auxiliary space and the multiscale space in detail.

3.1 Auxiliary basis functions

Now, we present the construction of the auxiliary basis functions. For each coarse element Ki, we define V(Ki) as the restriction of V to Ki, and solve the local spectral problem: find eigenpairs (λ(i)j, ϕ(i)j) such that

 ai(ϕ(i)j,v)=λ(i)jsi(ϕ(i)j,v),∀v∈V(Ki),

where ai(u,v) = ∫Ki κ∇u⋅∇v, si(u,v) = ∫Ki ~κ uv with ~κ = κ ∑j |∇χj|2, and {χj} is a set of partition of unity functions with respect to the coarse grid. We remark that one can take the χj to be the standard multiscale basis functions or the standard piecewise linear functions. We assume that the eigenfunctions satisfy the normalization condition si(ϕ(i)j, ϕ(i)j) = 1, and that the eigenvalues are sorted in ascending order, that is, λ(i)1 ≤ λ(i)2 ≤ ⋯. We then choose the first Ji eigenfunctions, corresponding to the smallest eigenvalues, and define the local auxiliary space as the span of these eigenfunctions, which is

 Vaux(Ki)=span{ϕ(i)j|1≤j≤Ji}.

Notice that, by construction, Vaux(Ki) ⊂ V(Ki). We define

 Λ=min1≤i≤Nλ(i)Ji+1.

Finally, the auxiliary space Vaux is defined as the sum of these local auxiliary spaces, that is, Vaux = ⊕Ni=1 Vaux(Ki).
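In matrix form, the local spectral problem above is a generalized eigenvalue problem. A minimal sketch follows, assuming the local matrices `A_i` (for a_i) and `S_i` (for s_i) have already been assembled on the fine grid; these names are illustrative, not the paper's code.

```python
import numpy as np
from scipy.linalg import eigh

def auxiliary_basis(A_i, S_i, J_i):
    """Given the local stiffness matrix A_i (discretizing a_i) and the
    weighted mass matrix S_i (discretizing s_i) on a coarse element K_i,
    return the J_i auxiliary basis vectors (eigenvectors with smallest
    eigenvalues) and the eigenvalue lambda_{J_i+1} entering Lambda."""
    lam, phi = eigh(A_i, S_i)      # generalized eigenvalues, ascending
    # eigh returns S_i-orthonormal eigenvectors: phi.T @ S_i @ phi = I,
    # which matches the normalization s_i(phi_j, phi_j) = 1
    return phi[:, :J_i], lam[J_i]
```

Taking the minimum of the returned `lam[J_i]` over all coarse elements gives the discrete analogue of Λ.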

Using the auxiliary space, we define a projection operator π : V → Vaux by

 π(v)=N∑i=1Ji∑j=1si(v,ϕ(i)j)ϕ(i)j,∀v∈V.

We also denote the kernel of the operator π as

 ~V={w∈V|π(w)=0}.
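Discretely, once the auxiliary eigenvectors are collected as s-orthonormal columns of a matrix, π becomes a projection matrix. A small sketch under that assumption (the names `Phi` and `S` are illustrative):

```python
import numpy as np

def make_projection(Phi, S):
    """Given columns of Phi that are s-orthonormal auxiliary basis vectors
    (Phi.T @ S @ Phi = I), return the matrix realizing
    pi(v) = sum_j s(v, phi_j) phi_j on fine-grid vectors v."""
    return Phi @ Phi.T @ S
```

Vectors annihilated by this matrix form the discrete counterpart of the kernel space ~V.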

3.2 Multiscale basis functions

Now we present the construction of the offline multiscale basis functions using the auxiliary space. For each coarse element Ki, we define an oversampled region K+i by extending Ki by ℓ coarse grid layers. See Figure 2 for an illustration of an oversampled region K+i.

For each auxiliary basis function ϕ(i)j, we construct a multiscale basis function ψ(i)j,ms whose support is K+i. Using the results in [4], the multiscale basis function is constructed by solving the following local problem: find ψ(i)j,ms ∈ V0(K+i) such that

 a(ψ(i)j,ms,v)+s(π(ψ(i)j,ms),π(v))=s(ϕ(i)j,π(v)),∀v∈V0(K+i), (4)

where V0(K+i) is the subspace of V consisting of functions vanishing on ∂K+i, and s(u,v) = ∑Ni=1 si(u,v). We remark that the above problem is posed in the continuous space; in our numerical simulations, we solve it on the fine mesh defined in K+i using an appropriate finite element method. Finally, the multiscale finite element space is defined as the span of these multiscale basis functions, namely, Vms = span{ψ(i)j,ms}. This method is called the CEM-GMsFEM.
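On the fine grid of an oversampled patch, problem (4) reduces to a single symmetric linear system. A hedged sketch, assuming pre-assembled patch matrices `A` and `S` (with boundary degrees of freedom already eliminated) and s-orthonormal auxiliary columns `Phi`; these names are illustrative:

```python
import numpy as np

def cem_basis(A, S, Phi, j):
    """Discrete analogue of problem (4) on an oversampled patch:
    find psi with  a(psi, v) + s(pi(psi), pi(v)) = s(phi_j, pi(v))
    for all patch test vectors v.  Phi satisfies Phi.T @ S @ Phi = I."""
    # With pi(v) having coefficients Phi.T @ S @ v, one gets
    # s(pi(u), pi(v)) = (Phi.T S u) . (Phi.T S v), so the system is:
    B = A + S @ Phi @ Phi.T @ S
    rhs = S @ Phi[:, j]          # realizes s(phi_j, pi(v))
    return np.linalg.solve(B, rhs)
```

The constraint-energy-minimization structure appears here as the penalization term S Φ Φᵀ S added to the stiffness matrix.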

We remark that our multiscale basis functions approximate related global basis functions. The global basis function ψ(i)j is defined by solving the following problem: find ψ(i)j ∈ V such that

 a(ψ(i)j,v)+s(π(ψ(i)j),π(v))=s(ϕ(i)j,π(v)),∀v∈V. (5)

The global space is defined by Vglo = span{ψ(i)j}. We note that these global basis functions have an exponential decay property (see [4]), which motivates the definition of the multiscale basis functions with local supports. One important property of the global space is the orthogonal decomposition V = Vglo ⊕ ~V with respect to the inner product a(⋅,⋅). This global space will be used in the analysis of the convergence of our online adaptive enrichment method.

4 Online basis functions and adaptive enrichment

In this section, we introduce an online enrichment method for the CEM-GMsFEM. We first present the construction of the online basis functions, and then give an error estimate for the online enrichment method. By online basis functions, we mean basis functions constructed in the online stage using the residual of the current solution, which contains the information of the source term. We construct the online basis functions in an iterative process. We remark that the error decays rapidly, so that it falls within an acceptable range after the first one or two iterations.

To begin, we define a residual functional r : V → ℝ. Let ums be a numerical solution computed by solving (3). The residual functional is defined by

 r(v)=a(ums,v)−∫Ωfv,∀v∈V.

For each coarse neighborhood ωi, we define the local residual functional ri by

 ri(v)=r(χiv),∀v∈V.

The local residual ri gives a measure of the error in the coarse neighborhood ωi.
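Algebraically, the residual functional corresponds to the fine-grid residual vector, and the localization by the partition of unity can be mimicked by a nodal mask. A simplified sketch (the nodal multiplication by `chi_i` is an illustrative stand-in for the true finite element product χiv):

```python
import numpy as np

def residual_vector(A, b, u_ms):
    """Algebraic residual of the coarse solution on the fine grid:
    r(v) = a(u_ms, v) - (f, v)  corresponds to  A @ u_ms - b."""
    return A @ u_ms - b

def local_residual(r, chi_i):
    """Local functional r_i(v) = r(chi_i * v).  With nodal weights chi_i,
    r_i acting on a vector v is (chi_i * r) . v, i.e. the masked residual
    (a simplification of the finite-element product chi_i v)."""
    return chi_i * r
```

The size of each masked residual is what drives the adaptive selection of regions in the algorithm below.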

The construction of the online basis functions is based on the local residuals ri. Using ri, we construct an online basis function β(i)ms whose support is the oversampled region ω+i. More precisely, the online basis function β(i)ms ∈ V0(ω+i) is obtained by solving

 a(β(i)ms,v)+s(π(β(i)ms),π(v))=ri(v),∀v∈V0(ω+i), (6)

where V0(ω+i) is the subspace of V consisting of functions vanishing on ∂ω+i. We can perform the above construction for each i = 1, …, Nc, or for some selected i (with i ∈ I for an index set I) based on an adaptive criterion. We remark that the above online basis function is obtained in the local region ω+i; this is justified by a localization result for the corresponding global online basis function β(i)glo defined by

 a(β(i)glo,v)+s(π(β(i)glo),π(v))=ri(v),∀v∈V. (7)

After constructing the online basis functions, we enrich our multiscale space by adding these online basis functions to it, namely, Vms ← Vms + span{β(i)ms}. Using this enriched multiscale finite element space, we compute a new numerical solution by solving equation (3). We can repeat the process until the residual norm is smaller than a given tolerance. Next, we present the precise online adaptive enrichment algorithm.

We first choose an initial space V(0)ms, namely the space spanned by the offline basis functions constructed in Section 3. We also choose a real number θ with 0 < θ < 1; this number determines how many online basis functions are added in each online iteration. Then, we generate a sequence of spaces V(m)ms and a sequence of multiscale solutions u(m)ms obtained by solving (3).

For each m ≥ 0, we assume that V(m)ms is given. We perform the following procedure to obtain the new multiscale space V(m+1)ms.

1. Find the multiscale solution in the current space. That is, find u(m)ms ∈ V(m)ms such that

 a(u(m)ms,v)=(f,v),for allv∈V(m)ms. (8)
2. Compute the local residual functionals zi, where

 zi(v)=a(u(m)ms,v)−(f,v),∀v∈V0(ωi).

Define δi = ∥zi∥, the norm of the functional zi on V0(ωi) with respect to ∥⋅∥a(ωi). We re-enumerate the indices so that δ1 ≥ δ2 ≥ ⋯. Choose the smallest number k of regions so that

 N∑i=k+1δ2i<θN∑i=1δ2i. (9)
3. Compute the local online basis functions. For each i with 1 ≤ i ≤ k and coarse region ω+i, find β(i)ms ∈ V0(ω+i) such that

 a(β(i)ms,v)+s(π(β(i)ms),π(v))=r(m)i(v)∀v∈V0(ω+i)

where r(m)i(v) = a(u(m)ms, χiv) − (f, χiv).

4. Enrich the multiscale space. Let

 V(m+1)ms=V(m)ms+span1≤i≤k{β(i)ms}.
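The steps above can be sketched in a few lines: the selection rule implements criterion (9), and `coarse_solve` is the Galerkin solve (8). The matrices `A`, `b`, and the basis matrix `V` are assumed pre-assembled and are illustrative names, not the paper's code.

```python
import numpy as np

def select_regions(delta, theta):
    """Criterion (9): sort the local residual norms delta_i in decreasing
    order and keep the smallest k regions whose complement carries less
    than a theta-fraction of the total squared residual."""
    order = np.argsort(delta)[::-1]     # indices with delta descending
    d2 = delta[order] ** 2
    total = d2.sum()
    tail = total
    for k in range(len(delta)):
        tail -= d2[k]                   # residual left after k+1 regions
        if tail < theta * total:
            return order[: k + 1]
    return order

def coarse_solve(A, b, V):
    """Galerkin solve (8): the columns of V span the current multiscale
    space; enrichment appends new online basis columns to V."""
    c = np.linalg.solve(V.T @ A @ V, V.T @ b)
    return V @ c
```

After each iteration, the online basis vectors for the selected regions are appended as new columns of `V`, which is the algebraic counterpart of V(m+1)ms = V(m)ms + span{β(i)ms}.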

In Section 6, we will analyze the convergence rate of this online adaptive enrichment method. In particular, we will prove the following theorem.

Theorem 1.

Let u be the solution of (1) and let {u(m)ms} be the sequence of multiscale solutions obtained by our online adaptive enrichment algorithm. Then we have

 ∥u−u(m+1)ms∥2a≤3(1+Λ−1)(C(ℓ+1)dE+2M2θ)∥u−u(m)ms∥2a

where E is the localization factor from Lemma 2, M is the maximum number of overlapping subdomains, and C is a generic constant.

Remark 1.

We note that the convergence rate depends on the two terms C(ℓ+1)dE and 2M2θ. By using a sufficient number of oversampling layers ℓ, the term C(ℓ+1)dE tends to zero; thus, the factor 2M2θ dominates the convergence rate. One can therefore choose θ to obtain a desired convergence rate. We will also confirm this by numerical examples. This is an improvement over the online method in [3], whose convergence rate depends on the eigenvalue Λ and cannot be made arbitrarily small.

5 Numerical Result

In this section, we present some numerical results to demonstrate the convergence of our proposed method. The computational domain Ω and the medium parameter κ in equation (1) are as shown in Figure 3; note that the medium contains high-contrast inclusions and channels. The fine mesh size h and the coarse mesh size H are fixed in this example, and in all our results we use a fixed number of oversampling layers ℓ. We illustrate the performance of our method using two different source terms. We test the performance by considering uniform enrichment and by using the online adaptive enrichment algorithm presented in Section 4.

In Table 1, we present the L2 error and the energy error for the case with uniform enrichment, that is, online basis functions are added in every coarse neighborhood. From the first two online iterations, we observe very fast convergence of the method. Next, we consider some adaptive results for this case. In Table 2, we present the error decay obtained by our online adaptive enrichment algorithm for one choice of θ; that is, we only add basis functions for the regions which account for the dominant part of the residual. From the table, we observe that the convergence rate in the energy norm is close to the chosen θ. This result confirms our assertion that the convergence rate can be controlled by the user-defined parameter θ. We remark that the reported convergence rate is computed by taking the maximum over all iterations. In Table 3, we present the adaptive result for another choice of θ; again, the observed convergence rate in the energy norm is close to θ, which confirms our prediction. Moreover, we note that the adaptive approach reduces the error substantially while adding only a few online basis functions.

Now, we consider the second source term. In Table 4, we present the error decay using uniform enrichment, and we observe very fast decay of the error. Next, we test the performance using adaptivity. In Tables 5 and 6, we present the error decays for two choices of θ. We observe that the convergence rates in these two cases are close to the respective values of θ, which confirms that the user-defined parameter θ is useful in controlling the convergence rate of our adaptive method. Moreover, we note that the adaptive approach reduces the error substantially while adding only a few online basis functions. Furthermore, in Figure 4, we show the number of online basis functions added in the computational domain. For the larger θ, a small number of basis functions is added in each iteration, and we observe that the basis functions are added near the singularity of the source and along the high-contrast channel. For the smaller θ, more basis functions are added throughout the domain, with a faster convergence rate; we still observe that more basis functions are added near the singularity of the source and along the high-contrast channels.

Finally, we present a test case with a more singular source term, shown in Figure 5, where the reference solution is also presented. In Table 7, we present the error decay with uniform enrichment and observe the same type of exponential decay as in the earlier examples. We also observe that the error is relatively large when no online basis functions are used. In Table 8, we present the results obtained with the online adaptive enrichment algorithm; we see that the numerically computed convergence rate is close to the chosen parameter θ.

6 Convergence analysis

In this section, we analyze the convergence of the online adaptive enrichment algorithm presented in Section 4. First, we need some notation. We define two norms on the space V: the a-norm, ∥v∥2a = a(v,v), and the s-norm, ∥v∥2s = s(v,v). For a given subdomain ω ⊂ Ω, we define the local norms ∥v∥a(ω) and ∥v∥s(ω) by restricting the corresponding integrals to ω.

Next, we will recall a few theoretical results from [4] that are useful for our analysis. The first result is Lemma 1.

Lemma 1.

There is a constant D such that, for all vaux ∈ Vaux, there exists a function v ∈ V such that

 π(v)=vaux, ∥v∥2a≤D∥vaux∥2s, supp(v)⊂supp(vaux).

The second result is a localization result, saying that the global basis function ψ(i)j defined in (5) decays exponentially outside an oversampled region. This result motivates the local multiscale basis functions defined in (4).

Lemma 2.

We consider the oversampled domain K+i obtained from Ki by extending ℓ coarse grid layers. Let ϕ(i)j be a given auxiliary multiscale basis function. We let ψ(i)j,ms be the multiscale basis function obtained in (4) and let ψ(i)j be the global multiscale basis function obtained in (5). Then we have

 ∥ψ(i)j−ψ(i)j,ms∥2a+∥π(ψ(i)j−ψ(i)j,ms)∥2s≤E(∥ψ(i)j∥2a+∥π(ψ(i)j)∥2s)

where E is a factor that decays exponentially with respect to the number of oversampling layers ℓ.

Next, we will need the following lemma in our analysis. The proof is given in the Appendix.

Lemma 3.

Assume the same conditions in Lemma 2. We have

 ∥N∑i=1Ji∑j=1c(i)j(ψ(i)j−ψ(i)j,ms)∥2a+∥π(N∑i=1Ji∑j=1c(i)j(ψ(i)j−ψ(i)j,ms))∥2s ≤ C(1+Λ−1)(ℓ+1)dN∑i=1(∥Ji∑j=1c(i)j(ψ(i)j−ψ(i)j,ms)∥2a+∥π(Ji∑j=1c(i)j(ψ(i)j−ψ(i)j,ms))∥2s).

In the next lemma, we see that the same localization result holds for the online basis function defined in (6). The proof of the following lemma is the same as that for Lemma 2 and Lemma 3, and is therefore omitted.

Lemma 4.

We consider the oversampled domain ω+i obtained from ωi by extending ℓ coarse grid layers. We let β(i)ms be the online multiscale basis functions obtained in (6) and let β(i)glo be the global online multiscale basis functions obtained in (7). Then we have

 ∥β(i)glo−β(i)ms∥2a+∥π(β(i)glo−β(i)ms)∥2s≤E(∥β(i)glo∥2a+∥π(β(i)glo)∥2s)

where E is the same exponentially decaying factor as in Lemma 2. Furthermore, we have

 ∥Nc∑i=1(β(i)glo−β(i)ms)∥2a+∥π(Nc∑i=1(β(i)glo−β(i)ms))∥2s ≤ C(1+Λ−1)(ℓ+1)dNc∑i=1(∥β(i)glo−β(i)ms∥2a+∥π(β(i)glo−β(i)ms)∥2s).

Finally, we define a constant C0 as

 C0=supv∈V∥π(v)∥2s∥v∥2a.

We remark that C0 can be bounded in terms of a weighted Poincaré constant on the coarse elements.

Now we are ready to prove Theorem 1.

6.1 Proof of Theorem 1

First, by using the Galerkin orthogonality, we have

 ∥u−u(m+1)ms∥a≤∥u−w∥a,∀w∈V(m+1)ms. (10)

The proof is based on a suitable choice of w ∈ V(m+1)ms, and consists of four steps.

Step 1:

In this step, we give a representation of the error u − u(m)ms. For each i = 1, …, Nc, we construct a global online basis function β(i)glo ∈ V such that

 a(β(i)glo,v)+s(π(β(i)glo),π(v))=r(m)i(v),∀v∈V.

Summing over all i, we have

 a(Nc∑i=1β(i)glo,v)+s(π(Nc∑i=1β(i)glo),π(v)) =Nc∑i=1r(m)i(v).

By the definition of r(m)i, we have

 Nc∑i=1r(m)i(v)=Nc∑i=1(a(u(m)ms,χiv)−∫Ωfχiv)=Nc∑i=1(a(u(m)ms,χiv)−a(u,χiv))=a(u(m)ms−u,v).

Therefore, we have

 a(u−u(m)ms+Nc∑i=1β(i)glo,v)=s(−π(Nc∑i=1β(i)glo),π(v)),∀v∈V. (11)

From the above relation, we see that

 a(u−u(m)ms+Nc∑i=1β(i)glo,v)=0,∀v∈~V.

Using the orthogonal decomposition V = Vglo ⊕ ~V, we have

 u−u(m)ms+Nc∑i=1β(i)glo∈~V⊥=Vglo.

Hence, we obtain the representation

 u−u(m)ms+Nc∑i=1β(i)glo=N∑i=1Ji∑j=1c(i)jψ(i)j (12)

where the c(i)j are some coefficients. We will use this representation in the next steps. We remark that, in Step 2 and Step 3, we will localize the terms ψ(i)j and β(i)glo, respectively, and estimate the resulting errors.

Step 2:

In this step, we will localize each ψ(i)j in (12) and estimate the error. In particular, we will estimate ∥∑Ni=1∑Jij=1 c(i)j(ψ(i)j − ψ(i)j,ms)∥a. We define η = ∑Ni=1∑Jij=1 c(i)jψ(i)j. Using (12), we have

 a(η,v)+s(π(η),π(v)) = a(Nc∑i=1Ji∑j=1c(i)jψ(i)j,v)+s(Nc∑i=1Ji∑j=1c(i)jπ(ψ(i)j),π(v)) (13)
 = Nc∑i=1Ji∑j=1c(i)js(ϕ(i)j,π(v)), ∀v∈V, (14)

where the last equality follows from (5). We let v(i)aux = ∑Jij=1 c(i)jϕ(i)j. By Lemma 1, there exists a function q(i) with supp(q(i)) ⊂ Ki such that π(q(i)) = v(i)aux and

 ∥q(i)∥2a≤D∥v(i)aux∥2s.

Taking v = q(i) in (14), we have

 ∥v(i)aux∥2s =a(η,q(i))+s(π(η),π(q(i))) ≤(∥η∥2a(Ki)+∥π(η)∥2s(Ki))12((1+D)∥v(i)aux∥2s)12.

Thus, by the orthogonality of the eigenfunctions and the normalization condition si(ϕ(i)j, ϕ(i)j) = 1, we have

 N∑i=1Ji∑j=1(c(i)j)2=N∑i=1∥Ji∑j=1c(i)jϕ(i)j∥2s≤(1+D)(∥η∥2a+∥π(η)∥2s)≤(1+D)(C0+1)∥η∥2a

where the last inequality follows from the definition of the constant C0. Recalling the definition of η, we have

 N∑i=1Ji∑j=1(c(i)j)2 ≤(1+D)(C0+1)∥u−u(m)ms+Nc∑i=1β(i)glo∥2a ≤2(1+D)(C0+1)(∥u−u(m)ms∥2a+∥Nc∑i=1β(i)glo∥2a) ≤4(1+D)(C0+1)∥u−u(m)ms∥2a

where the last inequality follows from (11). Finally, using Lemma 2 and Lemma 3, we have

 ∥N∑i=1Ji∑j=1c(i)jψ(i)j−N∑i=1Ji∑j=1c(i)jψ(i)j,ms∥2a≤C(ℓ+1)dE(1+D)(C0+1)∥u−u(m)ms∥2a.

Step 3:

In this step, we will derive an estimate for ∥∑Nci=1(β(i)glo − β(i)ms)∥a. By Lemma 4, we have

 ∥β(i)glo−β(i)ms∥2a+∥π(β(i)glo−β(i)ms)∥2s ≤E(∥β(i)glo∥2a+∥π(β(i)glo)∥2s).

Using the equation (7),

 ∥β(i)glo∥2a+∥π(β(i)glo)∥2s =a(u−u(m)ms,χiβ(i)glo) ≤∥u−u(m)ms∥a(ωi)∥χiβ(i)glo∥a(ωi) ≤√2∥u−u(m)ms∥a(ωi)(∥β(i)glo∥2a(ωi)+∥β(i)glo∥2s(ωi))12.

Since π is an orthogonal projection onto the space spanned by the eigenfunctions ϕ(i)j for 1 ≤ j ≤ Ji and 1 ≤ i ≤ N, we have

 ∥β(i)glo∥2s(ωi) ≤∥π(β(i)glo)∥2s(ωi)+∥(I−π)(β(i)glo)∥2s(ωi) ≤∥π(β(i)glo)∥2s(ωi)+1Λ∥β(i)glo∥2a(ωi).

Therefore, we have

 Nc∑i=1(∥β(i)glo−β(i)ms∥2a+∥π(β(i)glo−β(i)ms)∥2s)≤2E(1+Λ−1)Nc∑i=1∥u−u(m)ms∥2a(ωi)≤2ME(1+Λ−1)∥u−u(m)ms∥2a.

Step 4:

In this final step, we prove the required convergence. Let I be the index set of the selected regions. From the adaptive enrichment algorithm, we add the online basis functions β(i)ms for i ∈ I. We take w in (10) as

 w=u(m)ms−∑i∈Iβ(i)ms+N∑i=1Ji∑j=1c(i)jψ(i)j,ms∈V(m+1)ms.

Using (10) and (12), we have

 ∥u−u(m+1)ms∥2a ≤∥u−u(m)ms+∑i∈Iβ(i)ms−N∑i=1Ji∑j=1c(i)jψ(i