Fast Online Generalized Multiscale Finite Element Method using Constraint Energy Minimization


Eric T. Chung Department of Mathematics, The Chinese University of Hong Kong (CUHK), Hong Kong SAR. Email: tschung@math.cuhk.edu.hk. The research of Eric Chung is supported by Hong Kong RGC General Research Fund (Project 14317516).    Yalchin Efendiev Department of Mathematics & Institute for Scientific Computation (ISC), Texas A&M University, College Station, Texas, USA. Email: efendiev@math.tamu.edu.    Wing Tat Leung Department of Mathematics, Texas A&M University, College Station, TX 77843
Abstract

Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desirable to use only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Method (GMsFEM) framework [10], it was shown that, by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve this. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than that of our previous methods. We also develop an adaptive algorithm that enriches only selected regions with large residuals. For our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.

1 Introduction

Many multiscale problems are prohibitively expensive to solve due to scale disparity and high contrast. These problems are often solved using some type of reduced-order model. Such models include numerical homogenization [7, 20], multiscale finite element methods [13, 11, 12], heterogeneous multiscale methods [8], variational multiscale methods [15], mortar multiscale methods [19], localized orthogonal decomposition [16], and so on. The main idea behind local model reduction techniques is to compute multiscale basis functions in each coarse block. These basis functions are computed using solutions of local problems.

Various approaches have been developed for designing multiscale basis functions. Some of the earlier works [11, 13, 12, 2] use harmonic extensions of standard finite element basis functions to compute multiscale basis functions. Because of "homogeneous" traces along coarse boundaries, these approaches can have large errors due to the mismatch between the fine-grid solution and the multiscale solution along the edges of coarse blocks. These approaches have been generalized by using oversampling ideas [13, 12], where one uses larger regions and solves local problems there. The solutions of these local problems are then used to construct boundary conditions for the multiscale basis functions. These approaches reduce the errors due to boundary conditions.

In later works [9, 10, 2], the authors showed that in the presence of high contrast, one needs multiple basis functions per coarse region. In [10, 2], the Generalized Multiscale Finite Element Method (GMsFEM) is introduced, where the authors propose a systematic way of computing multiscale basis functions. The multiscale basis functions are computed by solving spectral problems in each coarse patch and selecting the dominant eigenvectors. In particular, the eigenvalues are ordered in increasing order and the eigenvectors corresponding to small eigenvalues are selected. A spectral convergence rate has been derived for these approaches in terms of the smallest eigenvalue (across all coarse blocks) whose corresponding eigenvector is not included in the coarse space. In [4], using oversampling ideas and localization ideas [18, 16, 17, 14], the authors propose an approach which provides both mesh-size dependent convergence and spectral convergence. The main idea of this approach, called CEM-GMsFEM, is to (1) compute GMsFEM auxiliary basis functions and (2) use constrained energy minimization in oversampled domains to construct multiscale basis functions. As a result, a minimal number of basis functions is needed and a coarse-mesh convergence rate can be shown.

The above approaches can be classified as offline methods because the construction of the multiscale basis functions does not take into account the right-hand side. The offline methods can be tuned in various ways to achieve smaller errors; however, the error decay slows down once a certain number of basis functions has been selected. This slowdown is due to the slow decay of the eigenvalues beyond a certain index. To improve this, in [3, 6, 1, 5], the authors propose an online approach. The main idea of online approaches is to add multiscale basis functions using residual information after computing the coarse-grid solution. These online multiscale basis functions are computed adaptively and are chosen to decrease the error the most. They are solutions of local problems. Our analysis in [3, 6] shows that the error decay per iteration is governed by a constant (independent of scales and contrast) together with the eigenvalues of the offline spectral problems. This indicates that the error is not reduced unless we have a sufficient number of offline basis functions. This was demonstrated analytically and numerically in our papers [3, 6]. Since the online procedure can be costly, our goal is to perform only 1-2 iterations.

In this paper, we investigate online approaches for the CEM-GMsFEM and show that one can significantly improve the existing online approaches in some cases. First, we present an online approach, which differs from our previous approach since the CEM-GMsFEM uses oversampling; in particular, the online basis functions are formulated in oversampled regions. Secondly, we present an analysis of the proposed method. Our analysis shows that the error decay obtained by adding online basis functions can be significantly better compared to that of the online GMsFEM. The error decay factor can be made close to zero (i.e., we obtain a very accurate approximation in one iteration) by choosing larger oversampling regions, provided we have a sufficient number of offline basis functions. To the best of our knowledge, this is the first result of this kind. Moreover, the online approaches can be made adaptive, and adaptive error indicators can be derived.

We present numerical results. In our numerical results, we consider high-contrast permeability fields and place the source term in different locations. All results show that the error drops by three orders of magnitude, which is much better compared to the previous online GMsFEM. We also present numerical results using adaptivity, which show that by selecting only a few regions one can achieve a significant error decay. Moreover, our adaptive algorithm allows one to input a parameter which specifies a desired convergence rate.

The paper is organized as follows. In Section 2, we present some preliminaries. In Section 3, we present the construction of offline basis functions, and in Section 4, we present our online adaptive enrichment algorithm. Section 5 is devoted to numerical results. The analysis of our method is presented in Section 6. Finally, we present some concluding remarks in Section 7.

2 Preliminaries

In this paper, we consider a class of multiscale problems of the form

$-\nabla \cdot (\kappa \nabla u) = f$ in $\Omega$,     (1)

subject to the homogeneous Dirichlet boundary condition $u = 0$ on $\partial\Omega$, where $\Omega$ is the computational domain. We assume that $\kappa$ is a heterogeneous coefficient with multiple scales and very high contrast. In solving (1), it is desirable to construct multiscale basis functions that can be computed locally and give a coarse-mesh convergence rate independent of the heterogeneities and the contrast. In [4], such an approach has been developed. When the source term $f$ is less regular or when one needs to obtain solutions with more refined accuracy, it is desirable to construct online basis functions that capture features unresolved by the offline basis functions. This is the purpose of this paper.

Next, we introduce the notion of fine and coarse grids. We let $\mathcal{T}^H$ be a usual conforming partition of the computational domain $\Omega$ into finite elements (triangles, quadrilaterals, tetrahedra, etc.), and let $H$ be the mesh size of $\mathcal{T}^H$. We refer to this partition as the coarse grid and assume that each coarse element is partitioned into a connected union of fine grid blocks. The fine grid partition will be denoted by $\mathcal{T}^h$, and is by definition a refinement of the coarse grid $\mathcal{T}^H$. Here, we use $h$ to denote the mesh size of $\mathcal{T}^h$. In Figure 1, we give an illustration of the fine grid, coarse grid, and oversampling domain. In the figure, the coarse grid consists of a union of rectangular coarse elements, denoted generically by $K$. Each coarse element is a union of finer rectangular elements. Moreover, for each coarse grid node $x_i$, we define the coarse neighborhood $\omega_i$ as the union of coarse elements having the vertex $x_i$. We also define $\omega_i^+$ as an oversampled region for $\omega_i$, obtained by enlarging $\omega_i$ by several coarse grid layers. Finally, we define $N$ as the number of coarse grid vertices.
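To make the grid notation concrete, the following short Python sketch (purely illustrative; it is not part of the method in this paper, and all names are our own choices) indexes coarse elements of a structured rectangular coarse grid, collecting the coarse elements that form the neighborhood of a node and the oversampled region obtained by extending a coarse element by m coarse layers.

def oversampled_coarse_elements(ix, iy, m, Nx, Ny):
    """Indices (jx, jy) of the coarse elements in the oversampled region
    obtained by extending the coarse element (ix, iy) by m coarse layers,
    clipped to an Nx-by-Ny structured coarse grid."""
    xs = range(max(ix - m, 0), min(ix + m + 1, Nx))
    ys = range(max(iy - m, 0), min(iy + m + 1, Ny))
    return [(jx, jy) for jy in ys for jx in xs]

def coarse_neighborhood(ix_node, iy_node, Nx, Ny):
    """Coarse elements sharing the coarse grid node (ix_node, iy_node),
    i.e. the coarse neighborhood of that node (up to 4 elements in 2D)."""
    elems = []
    for jx in (ix_node - 1, ix_node):
        for jy in (iy_node - 1, iy_node):
            if 0 <= jx < Nx and 0 <= jy < Ny:
                elems.append((jx, jy))
    return elems

# Example: on a 10-by-10 coarse grid, extending element (4, 4) by 2 layers
# gives a 5-by-5 block of 25 coarse elements.
print(len(oversampled_coarse_elements(4, 4, 2, 10, 10)))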

Figure 1: Illustration of the coarse grid, fine grid and oversampling domain.

We let $V = H_0^1(\Omega)$. The weak solution $u \in V$ of the problem (1) satisfies

$a(u, v) = (f, v)$ for all $v \in V$,     (2)

where $a(u, v) = \int_\Omega \kappa \nabla u \cdot \nabla v \, dx$ and $(f, v) = \int_\Omega f v \, dx$. We would like to find a multiscale solution $u_{ms}$ in a subspace of $V$, denoted by $V_{ms}$, satisfying

$a(u_{ms}, v) = (f, v)$ for all $v \in V_{ms}$.     (3)

We will introduce the construction of the multiscale finite element space $V_{ms}$ in the next section. We remark that the multiscale space consists of two components, the offline part and the online part. For the offline part, we construct multiscale basis functions in the offline stage, that is, before solving the problem (3). Note that these basis functions are independent of the source term $f$. For the online part, we construct multiscale basis functions in the online stage using the residual of an approximate solution. Note that these basis functions depend on the source term $f$. In Section 3, we present the construction of the offline basis functions. In Section 4, we present the construction of the online basis functions and an online adaptive enrichment algorithm.
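Throughout, once a multiscale space is available, problem (3) is a standard Galerkin projection. The following Python sketch (illustrative only; the names A, b, and P are our own assumptions, with the columns of P holding the fine-grid coefficient vectors of the multiscale basis functions) shows how such a coarse problem is assembled and solved; the same routine is reused below whenever a solve of (3) in an enriched space is needed.

import numpy as np

def coarse_galerkin_solve(A, b, P):
    """Galerkin projection of the fine-scale problem onto the span of the
    columns of P (the multiscale basis), cf. (3).

    A : (n, n) fine-scale stiffness matrix (sparse or dense), from a(., .)
    b : (n,)   fine-scale load vector, from (f, .)
    P : (n, N) dense matrix; columns are fine-grid coefficient vectors
               of the multiscale basis functions
    Returns the fine-grid representation of the multiscale solution.
    """
    AP = A @ P                    # n x N
    Ac = P.T @ AP                 # coarse (N x N) stiffness matrix
    bc = P.T @ b                  # coarse load vector
    uc = np.linalg.solve(Ac, bc)  # coarse-grid coefficients
    return P @ uc                 # multiscale solution on the fine grid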

3 Offline basis functions

In this section, we present the construction of the offline multiscale finite element space. The construction of our offline basis functions follows the framework in [4]. To construct the multiscale space, we first construct an auxiliary space by solving a local spectral problem for each coarse element $K_i$. Next, we construct the multiscale basis functions by solving local constrained energy minimization problems involving the auxiliary basis functions. Our multiscale finite element space is then the span of these multiscale basis functions. Next, we discuss the construction of both the auxiliary space and the multiscale space in detail.

3.1 Auxiliary basis functions

Now, we present the construction of the auxiliary basis functions. For each coarse element $K_i$, we define $V(K_i) = H^1(K_i)$, and solve the spectral problem: find eigenpairs $(\lambda_j^{(i)}, \phi_j^{(i)}) \in \mathbb{R} \times V(K_i)$ such that

$a_i(\phi_j^{(i)}, w) = \lambda_j^{(i)} s_i(\phi_j^{(i)}, w)$ for all $w \in V(K_i)$,

where $a_i(v, w) = \int_{K_i} \kappa \nabla v \cdot \nabla w \, dx$, $s_i(v, w) = \int_{K_i} \tilde{\kappa} v w \, dx$ with $\tilde{\kappa} = \kappa \sum_{r=1}^{N} |\nabla \chi_r|^2$, and $\{\chi_r\}$ is a set of partition of unity functions with respect to the coarse grid. We remark that one can take $\chi_r$ to be the standard multiscale basis functions or the standard piecewise linear functions. We assume that the eigenfunctions satisfy the normalization condition $s_i(\phi_j^{(i)}, \phi_j^{(i)}) = 1$. We also assume that the eigenvalues are sorted in ascending order, that is, $\lambda_1^{(i)} \le \lambda_2^{(i)} \le \cdots$. We then choose the first $L_i$ eigenfunctions, corresponding to the smallest eigenvalues, and define the local auxiliary space as the span of these eigenfunctions, that is,

$V_{\mathrm{aux}}^{(i)} = \operatorname{span}\{ \phi_j^{(i)} : 1 \le j \le L_i \}$.

Notice that, by construction, $V_{\mathrm{aux}}^{(i)} \subset V(K_i)$. We define $s(v, w) = \sum_i s_i(v, w)$ for $v, w \in V$.

Finally, the auxiliary space is defined as the sum of these local auxiliary spaces, that is, $V_{\mathrm{aux}} = \bigoplus_i V_{\mathrm{aux}}^{(i)}$.
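In practice, the local spectral problem is a small generalized eigenvalue problem per coarse element. The sketch below is illustrative: it assumes that the local matrices Ai and Si, discretizing the bilinear forms a_i and s_i on the fine grid restricted to the coarse element, have already been assembled, and it keeps the eigenvectors associated with the smallest eigenvalues.

from scipy.linalg import eigh

def local_auxiliary_basis(Ai, Si, num_modes):
    """Solve the local generalized eigenvalue problem Ai phi = lam Si phi
    on a coarse element and keep the eigenvectors associated with the
    smallest eigenvalues as the auxiliary basis.

    Ai : (ni, ni) symmetric matrix discretizing a_i(., .) on the element
    Si : (ni, ni) symmetric positive definite matrix discretizing s_i(., .)
    num_modes : number L_i of auxiliary basis functions to keep
    """
    lam, phi = eigh(Ai, Si)          # eigenvalues returned in ascending order
    lam, phi = lam[:num_modes], phi[:, :num_modes]
    # eigh normalizes so that phi.T @ Si @ phi = I, i.e. s_i-orthonormality
    return lam, phi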

Using the auxiliary space, we define a projection operator $\pi : V \to V_{\mathrm{aux}}$ by

$\pi(v) = \sum_i \sum_{j=1}^{L_i} s_i(v, \phi_j^{(i)}) \, \phi_j^{(i)}$ for $v \in V$.

We also denote the kernel of the operator $\pi$ by $\tilde{V} = \{ v \in V : \pi(v) = 0 \}$.

3.2 Multiscale basis functions

Now we present the construction of the offline multiscale basis functions using the auxiliary space. For each coarse element $K_i$, we define an oversampled region $K_i^+$ by extending $K_i$ by a few coarse grid layers. See Figure 2 for an illustration of a coarse element $K_i$ and its oversampled region $K_i^+$.

Figure 2: A coarse element $K_i$ and its oversampled region $K_i^+$.

For each auxiliary basis function $\phi_j^{(i)}$, we will construct a multiscale basis function $\psi_j^{(i)}$ whose support is $K_i^+$. Using the results in [4], the multiscale basis function is constructed by solving the following local constrained energy minimization problem: find $\psi_j^{(i)} \in V_0(K_i^+)$ such that

(4)

where $V_0(K_i^+) = H_0^1(K_i^+)$. We remark that the above problem is posed in a continuous function space. In our numerical simulations, we solve the above problem using the fine mesh defined on $K_i^+$ and an appropriate finite element method. Finally, the multiscale finite element space is defined as the span of these multiscale basis functions, namely, $V_{ms} = \operatorname{span}\{ \psi_j^{(i)} \}$. This method is called the CEM-GMsFEM.
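Since the precise form of the local problem (4) is given in [4] and not reproduced here, the sketch below shows one common realization of constraint energy minimization, namely the relaxed form, in which one solves a(psi, v) + s(pi psi, pi v) = s(phi_j, pi v) for all v in $V_0(K_i^+)$. This is an assumption for illustration and may differ in detail from (4). Here A and S are the fine-grid stiffness and weighted mass matrices restricted to the interior degrees of freedom of the oversampled region, the columns of Phi are the (zero-extended) auxiliary basis vectors of all coarse elements contained in the oversampled region, assumed s-orthonormal, and j indexes the auxiliary function being localized.

import numpy as np

def cem_basis_relaxed(A, S, Phi, j):
    """Relaxed constraint-energy-minimizing basis on an oversampled region.

    Solves (A + S Phi Phi^T S) psi = S Phi e_j, which is the matrix form of
    a(psi, v) + s(pi psi, pi v) = s(phi_j, pi v) for all test vectors v,
    when the columns of Phi are s-orthonormal (Phi^T S Phi = I).

    A   : (n, n) stiffness matrix for a(., .) on interior dofs of K_i^+
    S   : (n, n) weighted mass matrix for s(., .) on the same dofs
    Phi : (n, L) auxiliary basis vectors on the coarse elements in K_i^+
    j   : column index of the auxiliary function being localized
    """
    SPhi = S @ Phi                    # n x L
    M = A + SPhi @ SPhi.T             # symmetric operator of the relaxed problem
    rhs = SPhi[:, j]                  # right-hand side s(phi_j, pi v)
    return np.linalg.solve(M, rhs)    # fine-grid vector of the multiscale basis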

We remark that our multiscale basis functions approximate related global basis functions. The global basis function $\psi_j^{(i),\mathrm{glo}}$ is defined by solving the following global problem: find $\psi_j^{(i),\mathrm{glo}} \in V$ such that

(5)

The global space is defined by $V_{\mathrm{glo}} = \operatorname{span}\{ \psi_j^{(i),\mathrm{glo}} \}$. We note that these global basis functions have an exponential decay property (see [4]), which motivates the definition of multiscale basis functions with local supports. One important property of the global space is the orthogonal decomposition $V = V_{\mathrm{glo}} \oplus \tilde{V}$ with respect to the inner product $a(\cdot, \cdot)$. This global space will be used in the analysis of the convergence of our online adaptive enrichment method.

4 Online basis functions and adaptive enrichment

In this section, we introduce an online enrichment method for the CEM-GMsFEM. We first present the construction of the online basis functions. Then, we give an error estimate for the online enrichment method. By online basis functions, we mean basis functions constructed in the online stage using the residual of the current solution, which contains the information of the source term. We construct the online basis functions in an iterative process. We remark that the error decays rapidly, so that it falls within an acceptable range after one or two iterations.

To begin, we define a residual functional. Let $u_{ms}$ be a numerical solution computed by solving (3). The residual functional $r$ is defined by

$r(v) = (f, v) - a(u_{ms}, v)$ for $v \in V$.

For each coarse neighborhood $\omega_i$, we define the local residual functional $r_i$ by restricting $r$ to functions in $V_0(\omega_i) = H_0^1(\omega_i)$.

The local residual gives a measure of the error in the coarse neighborhood $\omega_i$.

The construction of the online basis functions is based on the local residuals. Using the local residual $r_i$, we will construct an online basis function $\beta_i$ whose support is the oversampled region $\omega_i^+$. More precisely, the online basis function is obtained by solving

(6)

where $\beta_i \in V_0(\omega_i^+) = H_0^1(\omega_i^+)$. We can perform the above construction for each coarse neighborhood $\omega_i$, or only for some selected neighborhoods (with indices in an index set $I$) chosen by an adaptive criterion. We remark that the above online basis function is obtained in the local region $\omega_i^+$. This is justified by a localization result for the corresponding global online basis function, defined by

(7)

After constructing the online basis functions, we can enrich our multiscale space by adding these online basis functions to it. Using the enriched multiscale finite element space, we can compute a new numerical solution by solving equation (3). We can repeat this process to enrich our multiscale space until the residual norm is smaller than a given tolerance. Next, we present the precise online adaptive enrichment algorithm.
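Before stating the algorithm precisely, the following sketch (with assumed names; not the authors' code) illustrates the construction of a single online basis function. It evaluates the fine-grid residual of the current multiscale solution, restricts it to the interior degrees of freedom of an oversampled coarse neighborhood, and solves a local problem; it assumes the online problem has the same relaxed CEM structure as the offline sketch above, with the local residual as right-hand side, whereas the exact form of (6) may differ.

import numpy as np

def online_basis(A, b, u_ms, dofs_plus, A_loc, S_loc, Phi_loc):
    """Construct one online basis function from the current residual.

    A, b      : global fine-grid stiffness matrix and load vector
    u_ms      : current multiscale solution on the fine grid
    dofs_plus : interior fine-grid dofs of the oversampled region omega_i^+
    A_loc, S_loc, Phi_loc : local matrices on dofs_plus, as in the offline
                            sketch cem_basis_relaxed()
    """
    r = b - A @ u_ms                      # fine-grid residual: r(v) = (f, v) - a(u_ms, v)
    r_loc = r[dofs_plus]                  # restriction to omega_i^+
    SPhi = S_loc @ Phi_loc
    M = A_loc + SPhi @ SPhi.T             # same relaxed CEM operator as offline
    beta_loc = np.linalg.solve(M, r_loc)  # local online basis coefficients
    beta = np.zeros_like(u_ms)
    beta[dofs_plus] = beta_loc            # extend by zero outside omega_i^+
    return beta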

Online adaptive enrichment algorithm

We first choose an initial space $V_{ms}^0$, namely the space spanned by the offline basis functions constructed in Section 3. We also choose a real number $\theta \in (0, 1]$. This number determines how many online basis functions are added in each online iteration. Then, we generate a sequence of spaces $V_{ms}^m$ and a sequence of multiscale solutions $u_{ms}^m$ obtained by solving (3).

For each $m \ge 0$, we assume that $V_{ms}^m$ is given. We perform the following procedures to obtain the new multiscale space $V_{ms}^{m+1}$; a Python sketch of the full loop is given after the list.

  1. Find the multiscale solution in the space $V_{ms}^m$. That is, find $u_{ms}^m \in V_{ms}^m$ such that

    $a(u_{ms}^m, v) = (f, v)$ for all $v \in V_{ms}^m$.     (8)
  2. Compute the local residual norms: for each coarse neighborhood $\omega_i$, let $R_i$ denote the norm of the local residual functional $r_i$.

    Re-order the indices so that $R_1 \ge R_2 \ge \cdots \ge R_N$. Choose the smallest number $k$ of regions, determined by the parameter $\theta$, so that the selected regions account for a sufficiently large portion of the total residual:

    (9)
  3. Compute the local online basis functions. For each $i = 1, \ldots, k$ and the corresponding oversampled coarse region $\omega_i^+$, find the online basis function $\beta_i \in V_0(\omega_i^+)$ by solving the local problem (6).

  4. Enrich the multiscale space. Let $V_{ms}^{m+1} = V_{ms}^m + \operatorname{span}\{ \beta_i : 1 \le i \le k \}$.

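Putting the pieces together, the following Python sketch shows one possible realization of the online adaptive enrichment loop described above; it reuses the hypothetical helper routines coarse_galerkin_solve and online_basis from the earlier sketches and is not the authors' implementation. The selection rule shown, which takes the smallest set of neighborhoods whose residuals account for a theta-fraction of the total squared residual, is one natural reading of criterion (9), and the local residual norms are measured by a simple Euclidean surrogate; the paper's exact rule may differ.

import numpy as np

def adaptive_online_enrichment(A, b, P, neighborhoods, theta, n_iter, local_data):
    """One possible realization of the online adaptive enrichment loop.

    A, b          : fine-grid stiffness matrix and load vector
    P             : (n, N) matrix of offline (CEM) basis vectors, one per column
    neighborhoods : list of index arrays; interior fine-grid dofs of each
                    oversampled neighborhood omega_i^+
    theta         : user parameter controlling how many regions are enriched
    n_iter        : number of online iterations to perform
    local_data    : dict i -> (A_loc, S_loc, Phi_loc) used by online_basis()
    """
    for m in range(n_iter):
        # Step 1: coarse Galerkin solve in the current multiscale space.
        u_ms = coarse_galerkin_solve(A, b, P)

        # Step 2: local residual norms (Euclidean surrogate) and selection.
        r = b - A @ u_ms
        R = np.array([np.linalg.norm(r[dofs]) for dofs in neighborhoods])
        order = np.argsort(R)[::-1]              # largest residual first
        cum = np.cumsum(R[order] ** 2)
        k = int(np.searchsorted(cum, theta * cum[-1])) + 1

        # Steps 3-4: compute online basis functions and enrich the space.
        new_cols = []
        for i in order[:k]:
            A_loc, S_loc, Phi_loc = local_data[i]
            beta = online_basis(A, b, u_ms, neighborhoods[i],
                                A_loc, S_loc, Phi_loc)
            new_cols.append(beta)
        P = np.column_stack([P] + new_cols)

    # Final solve in the fully enriched space.
    return coarse_galerkin_solve(A, b, P), P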
In the next section, we will analyze the convergence rate for this online adaptive enrichment method. In particular, we will prove the following theorem.

Theorem 1.

Let $u$ be the solution of (1) and let $\{u_{ms}^m\}$ be the sequence of multiscale solutions obtained by our online adaptive enrichment algorithm. Then we have

where , is the maximum number of overlapping subdomains, and is a constant.

Remark 1.

We note that the convergence rate depends on two factors: one involving the user-defined parameter $\theta$ and one coming from the localization error of the oversampling. By using a sufficiently large number of oversampling layers, the localization term tends to zero. Thus, the factor involving $\theta$ dominates the convergence rate, and one can choose $\theta$ to obtain a desired convergence rate. We will also confirm this by some numerical examples. This is an improvement over the online method in [3], whose convergence rate is limited by the number of offline basis functions.

5 Numerical Result

In this section, we present some numerical results to demonstrate the convergence of the proposed method. We take the computational domain . The medium parameter $\kappa$ in equation (1) is chosen to be the function shown in Figure 3. We note that the medium contains high-contrast inclusions and channels. The fine mesh size is taken to be , while the coarse mesh size in this example is . In all our results, we take the number of oversampling layers . We will illustrate the performance of our method using two different source terms. We test the performance by considering uniform enrichment and by using the online adaptive enrichment algorithm presented in Section 4.

Figure 3: The medium for the test case .

In Table 1, we present the error and the energy error for the case with uniform enrichment, that is, online basis functions are added in every coarse neighborhood. From the first two online iterations, we observe very fast convergence of the method. Next, we consider some adaptive results for this case. In Table 2, we present the error decay obtained with our online adaptive enrichment algorithm. That is, we only add basis functions for the regions which account for the largest portion of the residual. From the table, we observe that the convergence rate in the energy norm is close to the expected value. These results confirm our assertion that the convergence rate can be controlled by the user-defined parameter $\theta$. We remark that the convergence rate is computed by taking the maximum over all iterations. In Table 3, we present the adaptive result with a different value of $\theta$. That is, we add basis functions for the regions which account for the largest portion of the residual. From the table, we observe that the convergence rate in the energy norm is again close to the expected value. This result also confirms our prediction. Moreover, we note that the adaptive approach adds only a few online basis functions to reduce the error substantially.

Number of offline basis | online iteration | oversampling layers | error | energy error
3 | 0 | 2 | 0.37% | 4.71%
3 | 1 | 2 | 6.75e-05% | 1.28e-03%
3 | 2 | 2 | 1.57e-08% | 2.64e-08%
Table 1: Using source term and uniform enrichment.
Number of offline basis | DOF | oversampling layers | error | energy error
3 | 300 | 2 | 0.37% | 4.71%
3 | 311 | 2 | 0.14% | 2.21%
3 | 339 | 2 | 0.073% | 1.12%
3 | 368 | 2 | 0.033% | 0.57%
Table 2: Using source term and online adaptivity with . Convergence rate is .
Number of offline basis | DOF | oversampling layers | error | energy error
3 | 300 | 2 | 0.37% | 4.71%
3 | 341 | 2 | 0.073% | 1.09%
3 | 407 | 2 | 0.014% | 0.21%
3 | 470 | 2 | 2.93e-03% | 0.051%
Table 3: Using source term and online adaptivity with . Convergence rate is .

Now, we consider the second source term. In Table 4, we present the error decay using uniform enrichment. We observe very fast decay of the error in this table. Next, we test the performance using adaptivity. In Tables 5 and 6, we present the error decays for two different values of $\theta$. We observe that the convergence rates in these two cases are close to the expected values. This confirms that the user-defined parameter $\theta$ is useful in controlling the convergence rate of our adaptive method. Moreover, we note that the adaptive approach adds only a few online basis functions to reduce the error substantially. Furthermore, in Figure 4, we show the number of online basis functions added in the computational domain. For the first value of $\theta$, a small number of basis functions is added in each iteration. We observe that the basis functions are added near the singularity of the source and along the high-contrast channel. For the second value of $\theta$, more basis functions are added throughout the domain, with a faster convergence rate. We still observe that more basis functions are added near the singularity of the source and along the high-contrast channel in the medium.

Number of offline basis | online iteration | oversampling layers | error | energy error
3 | 0 | 2 | 1.06% | 11.70%
3 | 1 | 2 | 6.43e-05% | 1.51e-03%
3 | 2 | 2 | 1.57e-08% | 4.25e-08%
Table 4: Using source term and uniform enrichment.
Number of offline basis | DOF | oversampling layers | error | energy error
3 | 300 | 2 | 1.06% | 11.70%
3 | 309 | 2 | 0.13% | 1.95%
3 | 324 | 2 | 0.062% | 0.99%
3 | 347 | 2 | 0.031% | 0.51%
Table 5: Using source term and online adaptivity with . Convergence rate is .
Number of offline basis | DOF | oversampling layers | error | energy error
3 | 300 | 2 | 1.06% | 11.70%
3 | 391 | 2 | 9.27e-03% | 0.13%
3 | 513 | 2 | 3.65e-04% | 6.44e-03%
3 | 578 | 2 | 5.31e-05% | 9.89e-04%
Table 6: Using source term and online adaptivity with . Convergence rate is .
Figure 4: Number of online basis functions for the source : Left: . Right: .

Finally, we present a test case with a more singular source term, shown in Figure 5, where the reference solution is also presented. In Table 7, we present the error decay with uniform enrichment and observe the same type of exponential decay as in the earlier examples. We also observe that the error is relatively large when no online basis functions are used. In Table 8, we present the results obtained with the online adaptive enrichment algorithm. We see that the numerically computed convergence rate is close to the value determined by the parameter $\theta$.

Figure 5: Left: Reference solution for the source . Right: the source term .
Number of offline basis | online iteration | oversampling layers | error | energy error
3 | 0 | 2 | 30.01% | 82.57%
3 | 1 | 2 | 0.0066% | 0.0030%
3 | 2 | 2 | 4.45e-07% | 1.22e-07%
Table 7: Using source term and uniform enrichment.
Number of offline basis | DOF | oversampling layers | error | energy error
3 | 300 | 2 | 30.01% | 82.57%
3 | 356 | 2 | 8.68% | 22.06%
3 | 378 | 2 | 4.87% | 5.41%
3 | 392 | 2 | 4.46% | 1.50%
Table 8: Using source term and online adaptivity with . Convergence rate is .

6 Convergence analysis

In this section, we analyze the convergence of the online adaptive enrichment algorithm presented in Section 4. First, we need some notation. We will use two different norms on the space $V$: the $a$-norm, defined by $\| v \|_a^2 = \int_\Omega \kappa |\nabla v|^2 \, dx$, and the $s$-norm, defined by $\| v \|_s^2 = \int_\Omega \tilde{\kappa} v^2 \, dx$. For a given subdomain $\Omega_i \subset \Omega$, we define the local $a$-norm and $s$-norm by $\| v \|_{a(\Omega_i)}^2 = \int_{\Omega_i} \kappa |\nabla v|^2 \, dx$ and $\| v \|_{s(\Omega_i)}^2 = \int_{\Omega_i} \tilde{\kappa} v^2 \, dx$, respectively.

Next, we will recall a few theoretical results from [4] that are useful for our analysis. The first result is Lemma 1.

Lemma 1.

There is a constant such that for all there exists a function such that

The second result is a localization result, saying that the global basis function defined in (5) has an exponential decay outside an oversampled region. This result motivates the local multiscale basis functions defined in (4).

Lemma 2.

We consider the oversampled domain obtained from by extending coarse grid layers with . Let be a given auxiliary multiscale basis function. We let be the multiscale basis functions obtained in (4) and let be the global multiscale basis functions obtained in (5). Then we have

where .

Next, we will need the following lemma in our analysis. The proof is given in the Appendix.

Lemma 3.

Assume the same conditions as in Lemma 2. We have

In the next lemma, we see that the same localization result holds for the online basis function defined in (6). The proof of the following lemma is the same as that for Lemma 2 and Lemma 3, and is therefore omitted.

Lemma 4.

We consider the oversampled domain obtained from by extending coarse grid layers with . We let be the online multiscale basis functions obtained in (6) and let be the global online multiscale basis functions obtained in (7). Then we have

where . Furthermore, we have

Finally, we define a constant as

We remark that , where is the Poincaré constant defined by for .

Now we are ready to prove Theorem 1.

6.1 Proof of Theorem 1

First, by using the Galerkin orthogonality, we have

$\| u - u_{ms}^{m+1} \|_a \le \| u - v \|_a$ for all $v \in V_{ms}^{m+1}$.     (10)

The proof is based on a suitable choice of $v$ in (10), and consists of four steps.
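For completeness, we recall the standard Galerkin orthogonality argument behind (10); this is the generic best-approximation property of conforming Galerkin methods, written here in our notation, rather than a reproduction of the paper's exact estimate. Since $V_{ms}^{m+1} \subset V$ and $u_{ms}^{m+1}$ solves (3) in $V_{ms}^{m+1}$,

\begin{align*}
a(u - u_{ms}^{m+1}, w) &= 0 \quad \text{for all } w \in V_{ms}^{m+1}, \\
\| u - u_{ms}^{m+1} \|_a^2 &= a\big(u - u_{ms}^{m+1},\, u - v\big) \le \| u - u_{ms}^{m+1} \|_a \, \| u - v \|_a \quad \text{for all } v \in V_{ms}^{m+1}, \\
\| u - u_{ms}^{m+1} \|_a &\le \inf_{v \in V_{ms}^{m+1}} \| u - v \|_a .
\end{align*}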

Step 1:

In this step, we will give a representation of the error . For each , we construct a global online basis function such that

Summing over all , we have

By the definition of , we have

Therefore, we have

(11)

From the above relation, we see that

Using the decomposition , we have

Hence, we obtain the representation

(12)

for some coefficients. We will use this representation in the next steps. We remark that, in Steps 2 and 3, we will localize the terms in this representation and estimate the errors.

Step 2:

In this step, we will localize each in (12) and estimate the error. In particular, we will estimate . We define . Using (12), we have

(13)
(14)

where the last equality follows from (5). We let . By Lemma 1, there exists a function such that and

Taking in (14), we have

Thus, by the orthogonality of the eigenfunctions and the normalization condition , we have

where the last inequality follows from the definition of the constant . Recalling the definition of , we have

where the last inequality follows from (11). Finally, using Lemma 2 and Lemma 3, we have

Step 3:

In this step, we will derive an estimate for . By Lemma 4, we have

Using the equation (7),

Since is an orthogonal projection onto the space spanned by the eigenfunctions for and , we have

Therefore, we have

Step 4:

In this final step, we will prove the required convergence. Let . From the adaptive enrichment algorithm, we add the online basis functions for . We will take $v$ in (10) as

Using (10) and (12), we have