Conformal higher-order remeshing schemes for implicitly defined interface problems
Abstract
A new higher-order accurate method is proposed that combines the advantages of the classical FEM on body-fitted meshes with embedded domain methods. A background mesh composed of higher-order Lagrange elements is used. Boundaries and interfaces are described implicitly by the level set method and may lie within elements. In the elements cut by the boundaries or interfaces, an automatic decomposition into higher-order accurate subelements is realized. Therefore, the zero level sets are detected and meshed in a first step, which is called reconstruction. Then, based on the topological situation in the cut element, higher-order subelements are mapped to the two sides of the boundary or interface. The quality of the reconstruction and the mapping largely determines the properties of the resulting, automatically generated conforming mesh. It is found that optimal convergence rates are possible although the resulting subelements are not always well-shaped.
keywords:
p-FEM, higher-order elements, level-set method, fictitious domain method, embedded domain method, XFEM, interface problems

1 Introduction
Numerical simulations in all fields of computational mechanics often involve complex-shaped, possibly curved domains and/or interfaces with arbitrary geometry. Examples and applications are numerous and include, in solid mechanics, the modelling of composite materials with complex microstructure Moes_2003a (), material inhomogeneities Sukumar_2001c (), solidification Chessa_2002a () and topology optimization Belytschko_2003b (); Wang_2004a (). Further applications can be found in fluid mechanics, e.g. interfaces in multiphase flows Glowinski_1999a (), or in multiphysics problems. Such applications feature non-smooth solutions and involve differential equations with discontinuities in the coefficients Babuska_1970a () or even different PDEs across the interfaces. It is well-known that the accuracy of FEM approximations is severely degraded if meshes are used with elements that do not conform to the boundaries and interfaces.
Therefore, over the years a lot of attention has been drawn to the development of techniques to handle interface problems. Several different approaches have been proposed and, in terms of mesh representation, the methods can roughly be divided into two categories. The first category consists of body-fitted finite element methods, where all interfaces and boundaries align with element edges. The second category comprises the large spectrum of embedded domain methods or fictitious domain methods that use a non-conforming (background) mesh and enforce boundary and interface conditions by tailored approaches, e.g. Lagrange multipliers, penalty methods or Nitsche’s method.
In body-fitted finite element methods, the computational meshes are usually generated in two different ways. The first approach is to create a mesh with straight edges and then transform this linear discretization into higher-order elements, see Sherwin_2002a (); Shephard_2005a (); Luo_2004a (); Lu_2014a (). The other possibility is to construct a higher-order mesh directly by creating curved boundaries and using them to discretize the interior of the domain, see Sherwin_2002a (). Because the interface can be arbitrarily complex, curved or even in motion, the generation of interface-conforming meshes can be very costly from a computational point of view. Even worse, a large amount of human interaction may be required to generate suitable meshes, in particular with higher-order elements. However, the outstanding advantage is that once a conforming mesh is created, basically any finite element solver based on the isoparametric concept can process it. No specialized treatment of the boundary conditions is needed and standard Gaussian quadrature rules are used for the numerical integration of the weak form. All these advantages are also present in the approach proposed herein.
The second category, i.e. embedded domain methods, often uses a structured background mesh as a starting point, which does not account for the domain boundaries. The location of the interfaces may, for example, be described via the level set method, see Osher_2003a (); Osher_2001a (); Belytschko_2003c (). Methods characterized as embedded domain methods are e.g. the immersed boundary method Peskin_1972a (); Cottet_2004a (), the cut finite element method Burman_2010a (); Burman_2012a (); Burman_2014a (); Hansbo_2002a (), the finite cell method Parvizian_2007a (); Duester_2008a (); Schillinger_2012a (); Schillinger_2014a (), unfitted finite element methods Bastian_2009a (); Lehrenfeld_2016a () and the parametric finite element method Frei_2014a (). Even the extended finite element method (XFEM) can be cast in the realm of embedded domain methods for some specific applications, see e.g. Fries_2009b (), or in a higher-order context Byfut_2009b (); Legrain_2013a (). In all these methods, the boundaries or interfaces lie within elements so that the application of boundary and interface conditions as well as the numerical integration in cut elements is quite involved. Furthermore, the conditioning of the system of equations may be degraded due to largely different supports of the involved shape functions.
Herein, based on the method introduced in Noble_2010a () and Kramer_2014a () in a linear, two-dimensional context, we propose a higher-order accurate method which is located somewhere in between the two aforementioned methodologies. The method uses a background mesh composed of higher-order elements; the mesh does not have to be Cartesian nor structured. Elements that are cut by the implicitly defined boundaries and interfaces are automatically decomposed into conforming subelements. This decomposition is based on a two-step procedure where, firstly, the zero levels of level set functions are identified and meshed. Then, secondly, tailored mappings are introduced which define conforming, higher-order subelements on the two sides of the interface. Due to the similarities to the low-order method proposed in Noble_2010a (), the introduced approach is labeled a higher-order conformal decomposition finite element method (CDFEM). Similar approaches based on mesh subdivision and mesh adaptation, sometimes labelled cut-cell methods, are found in the context of higher-order discontinuous Galerkin (DG) methods in Fidkowski_2007a ().
The organization of this paper is as follows. Section 2 describes some model problems and gives a short overview of the proposed method. Section 3 shows how to identify and discretize the approximated zero level set by means of higher-order one-dimensional interface elements. This is further referred to as reconstruction. The employed elementwise subdivision technique is discussed in detail in Section 4 and valid topologies for triangular and quadrilateral elements are defined. Furthermore, coordinate mappings are discussed that define the location of the internal nodes of the generated higher-order subelements. Section 5 contains numerical examples illustrating the accuracy and the stability of the proposed method. All results are in excellent agreement with the analytical solutions and exhibit optimal higher-order convergence rates. We conclude in Section 6 and give an overview of open topics as well as an outlook on future work.
2 Proposed technique in a nutshell
Let $\Omega$ be a domain embedded in the Euclidean space $\mathbb{R}^2$. The domain is bounded by the external interface $\Gamma = \partial\Omega$. Inside $\Omega$, there may be internal interfaces $\Gamma^{\mathrm{int}}$ separating, for example, regions with different material properties or the two sides of a crack path. The interfaces $\Gamma$ and $\Gamma^{\mathrm{int}}$ are sharp and possibly curved.
For the solution of a BVP in such a domain with the classical FEM, it is crucial to construct a conforming mesh in order to achieve optimal convergence properties. Thus, the elements must respect the position of the interfaces, i.e. element edges must align with them. Furthermore, hanging nodes are typically not accepted. A mesh for some schematic domain is shown in Fig. 1(a). It is not surprising that the generation of such a mesh and its maintenance for possibly evolving interfaces are non-trivial tasks, especially with higher-order elements.
Therefore, the aim is to use simple higher-order meshes, further on referred to as background meshes, which do not have to account for any of the (external and internal) interfaces. These are composed of standard Lagrange elements which may be triangles or quadrilaterals. Such background meshes are simple to construct, yet the proposed approach is not restricted to Cartesian or structured grids. We propose a procedure to automatically decompose elements in the background mesh that are cut by an arbitrary interface into isoparametric, higher-order subelements. This leads to meshes as shown in Fig. 1(b). It will be seen later that approximations with optimal accuracy are possible on these meshes, although the element shapes are often quite awkward.
Herein, it is assumed that the position of the interfaces is defined by the level set method. That is, a scalar function $\phi(\boldsymbol{x})$ is introduced whose zero level set defines the position of the interfaces. In the present work we restrict ourselves to the case of one level set function, which however enables the definition of several unconnected interfaces. The level set values are prescribed at the nodes of the higher-order background mesh and interpolated by the finite element shape functions in between. The discretized level set function is further on referred to as $\phi^h(\boldsymbol{x})$.
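As a minimal one-dimensional illustration of this interpolation (the function names and the nodal layout are illustrative, not from the paper), the discretized level set is simply the Lagrange interpolant of the nodal values:

```python
import numpy as np

def lagrange_basis(xi, nodes):
    """Evaluate all 1D Lagrange basis functions at xi for given nodal points."""
    n = len(nodes)
    N = np.ones(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                N[i] *= (xi - nodes[j]) / (nodes[i] - nodes[j])
    return N

def interp_level_set(xi, nodes, phi_nodal):
    """Discretized level set: phi^h(xi) = sum_i N_i(xi) * phi_i."""
    return lagrange_basis(xi, nodes) @ np.asarray(phi_nodal)
```

For a quadratic element the interpolant reproduces any quadratic level set exactly, which is the property exploited by the higher-order reconstruction later on.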
The procedure for the automatic decomposition of cut elements is sketched as follows: Following the work of Fries_2015a (), the zero level set of $\phi^h$ is first detected and meshed by higher-order interface elements. This is done in the reference background elements as depicted by the blue line in Fig. 2 and is called reconstruction. Here, we moderately modify the procedure to achieve the goals of this work. The second step comprises the decomposition into higher-order subelements which align with the reconstructed level set. Therefore, customized mappings for quadrilaterals and triangles, respectively, are introduced from the reference elements to the subelements in the cut reference background element. Finally, the reference background element is mapped to the physical domain using the isoparametric mapping.
3 Reconstruction of the zero level sets
The aim of this section is to approximate the zero level set of the level set function by means of higher-order finite elements, called interface elements herein. As this paper is restricted to two dimensions, the discretized zero level set is composed of line elements. The topic of reconstructing zero level sets is discussed in more detail in Fries_2015a (), including the extension to the three-dimensional case.
The strategy has a local character, that is, the reconstruction of the zero isoline advances element by element. The choice of Lagrange elements enforces continuity between the elements. The first step is to transfer the level set data, given in the physical domain, to the corresponding reference domain, as the reconstruction is executed in the reference element. This is particularly simple for Lagrange elements as the nodal values in global numbering are taken as nodal values at the corresponding local nodes of the reference element. Based on this information, elements that are cut by the zero level set are detected. Once the cut elements are identified, it must be ensured that the level set data is valid in the sense that a unique subdivision of the element is possible. For topologically valid cut elements, a scalar root-finding algorithm is provided, giving any desired number of points on the interface. These points serve as nodes of the higher-order interface elements approximating the zero level set.
3.1 Detection of cut elements
An element is considered cut when the data at a finite number of points indicates that the approximated level set function $\phi^h$ crosses zero within the element domain. To decide whether the element is cut, a sample grid with a user-defined resolution is used in the parent element, see Fig. 4. The resolution of this auxiliary grid should depend on the number of element nodes; for the calculations performed in this work, three points between element nodes in each dimension were chosen. Using the element shape functions and the nodal level set values, the level set is interpolated at all grid points. The set of grid points assigned to an element is denoted $P$. The check for cut elements is then done by an inspection of the signs of the grid level set values. The considered element is cut if
$$\exists\, \boldsymbol{r}_1, \boldsymbol{r}_2 \in P: \quad \phi^h(\boldsymbol{r}_1)\,\phi^h(\boldsymbol{r}_2) < 0. \qquad (1)$$
It is important to note that the nodal values alone are not sufficient for this check because higher-order Lagrange shape functions are not positive everywhere, so the interpolant may change sign even if the nodal values do not. Hence, a sample grid with suitable resolution is crucial for the check.
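The sign check itself can be sketched in a few lines (a minimal sketch, assuming the level set has already been interpolated at the sample grid points; the function name is illustrative):

```python
import numpy as np

def is_cut(phi_grid):
    """Flag an element as cut if the interpolated level set changes
    sign anywhere on the sample grid, cf. Eq. (1)."""
    phi = np.asarray(phi_grid, dtype=float).ravel()
    return bool(phi.min() < 0.0 and phi.max() > 0.0)
```

Passing the grid values rather than the nodal values implements exactly the remark above: a sign change between grid points is detected even when all nodal values share the same sign.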
3.2 Valid level set data
Arbitrary level set data in a higher-order element may result in very complex patterns of zero level sets and of the implied regions where $\phi^h$ is positive or negative, respectively. Therefore, it is crucial to impose restrictions on the complexity of the level set data. Herein, we call the level set data in a cut element according to Eq. (1) valid if (i) the element boundary is cut exactly two times and (ii) the zero level set divides the element domain into two subdomains. However, even valid level set data has to meet some restrictions regarding the curvature of the zero level set, which is discussed in more detail in Section 4.
Fig. 3(a) shows valid level set data subdividing the reference element into two subdomains. Inclusions as seen in Fig. 3(b) violate the conditions, hence, the level set data is not valid. A situation violating the second condition is shown in Fig. 3(c). Here, the level set data changes sign more than two times along the element boundary, producing more than two separate domains. If there is locally a high curvature, see Fig. 3(d), the level set data is valid according to the conditions above; however, the reconstruction may still fail (which is automatically detected). If an element is regarded as cut but features invalid level set data, we note that a (locally) refined background mesh always produces valid level set data that meets the conditions from above.
3.3 Reconstruction of the zero level set by a Newton-Raphson procedure
Assuming that the element is cut and the level set data is valid, the task is now to locate the element nodes of an interface element of user-defined order on the zero level set. The process starts with finding two roots on the element boundary. Then, a number of points, which depends on the desired element order, is evenly distributed between the two roots. These points serve as the starting guesses for the iterative root finding inside the element.
3.3.1 Root search on the element boundary
The procedure starts with finding the two intersections of the zero level set with the element boundary. Therefore, the sample grid from above is considered on the boundary, see Fig. 4. Based on the signs of the level set data at two adjacent grid points, intervals containing zeros of $\phi^h$ can be located immediately. The zeros are found by applying an iterative root-finding algorithm along the edge. The midpoint of the bracketing interval is chosen as the starting guess and the Newton-Raphson iteration along the edge coordinate $r$ is given by
$$r^{k+1} = r^k - \frac{\phi^h(r^k)}{\partial \phi^h / \partial r\,(r^k)}. \qquad (2)$$
A cut element edge has either one or two roots, depending on the topological situation. In this section we restrict ourselves to the case of an edge having only one root. As elaborated further in Section 4, this leads to a topological case labeled as local topology, which is the starting point of the subdivision scheme. The configuration with two roots on one element edge, designated as global topology, is also described in Section 4. In this case the approach is different, but the reconstruction of the zero level set reduces to the same procedure as described here.
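The edge root search can be sketched as follows (illustrative helper names; the level set restricted to the edge and its derivative are passed as callables, and the bracketing interval comes from the sample grid):

```python
def edge_root(phi, dphi, a, b, tol=1e-12, maxit=50):
    """Newton-Raphson root search along an element edge, started from
    the midpoint of the bracketing interval [a, b], cf. Eq. (2)."""
    r = 0.5 * (a + b)                 # midpoint as starting guess
    for _ in range(maxit):
        dr = phi(r) / dphi(r)         # Newton increment
        r -= dr
        if abs(dr) < tol:
            return r
    raise RuntimeError("Newton iteration did not converge")
```

In practice a safeguard keeping the iterate inside the bracket can be added; for the well-behaved edge data assumed here, the plain update suffices.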
3.3.2 Root search inside an element
For a detailed discussion concerning the root search inside an element, with different choices of start points and search directions, we refer to Fries_2015a (). Once the root search along the boundary is done, the two roots are connected by a straight line. On this line, a number of nodes, associated with the desired order of the interface element, is distributed equally spaced. This gives a rough approximation of the zero isoline and is further referred to as “linear reconstruction”, depicted by the green line in Fig. 5. The root search is then executed along a straight path through each start point $\boldsymbol{r}^0$, with the direction given by the gradient of the level set function there, i.e. $\boldsymbol{d} = \nabla\phi^h(\boldsymbol{r}^0)$. Applying a Newton-Raphson approach along this path, similar to Eq. (2), yields
$$\alpha^{k+1} = \alpha^k - \frac{\phi^h\!\left(\boldsymbol{r}^0 + \alpha^k \boldsymbol{d}\right)}{\nabla\phi^h\!\left(\boldsymbol{r}^0 + \alpha^k \boldsymbol{d}\right) \cdot \boldsymbol{d}}. \qquad (3)$$
After the residuum of Eq. (3) falls below a user-specified convergence tolerance (a fixed small tolerance was used in the performed calculations), the newly found coordinates of the roots are set as nodal coordinates of the higher-order interface element, see the red line in Fig. 5. This interface element will finally serve as a new (curved) edge for higher-order triangular or quadrilateral elements on the two sides of the interface.
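A sketch of the directional search of Eq. (3), again with illustrative names and callables for the level set and its gradient:

```python
import numpy as np

def root_along_direction(phi, grad_phi, r0, tol=1e-12, maxit=50):
    """Newton-Raphson search along the straight path through the start
    point r0 with direction given by the level-set gradient at r0."""
    r0 = np.asarray(r0, dtype=float)
    d = grad_phi(r0)
    d = d / np.linalg.norm(d)         # fixed search direction
    alpha = 0.0
    for _ in range(maxit):
        rk = r0 + alpha * d
        res = phi(rk)
        if abs(res) < tol:            # residuum below tolerance
            return rk
        alpha -= res / np.dot(grad_phi(rk), d)
    raise RuntimeError("Newton iteration did not converge")
```

For a circular level set the iterates move radially outward from the start point onto the zero isoline, which mirrors the situation in Fig. 5.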
4 Conforming remeshing of a background element
At this point we assume that a general, higher-order cut background element is given and the zero level set is successfully approximated by the reconstructed interface element, see Section 3. The aim is, firstly, to identify all topological cases leading to low-order subcells with possibly one higher-order side coinciding with the interface element and, secondly, to find a map for the element nodes of higher-order elements into these subcells, which is not trivial as seen below. It is noted that the resulting decomposition features triangular and quadrilateral elements.
There are several mappings and transformations involved and Fig. 6 shows the complete process in one diagram. Emanating from the physical element, in step 1 the level set values are transferred to the parent domain, where the reconstruction of the zero level set is realized. In step 2 the subdivision of the reference element takes place: the element is subdivided according to a few predefined topologies depending on the cut situation. The only missing link for a working computational mesh is a node distribution within the element patch that preserves optimal accuracy for higher-order elements. This is achieved by means of a customized mapping procedure, sketched in step 3 of Fig. 6, where the approaches for triangular and quadrilateral elements differ significantly. Once the nodal coordinates are determined, a conforming higher-order element patch is obtained. At last, in step 4, the data is projected back to the physical domain via the isoparametric mapping.
Regarding the nomenclature of the following topology classifications, some abbreviations are introduced. The first letter describes the type of element: either $T$ for triangular or $Q$ for quadrilateral elements, followed by a combination of subscripts and/or superscripts. The subscripts give the information which edges are cut, whereas the superscripts describe which nodes are cut. So $Q_{ij}$ denotes a quadrilateral with a zero level set which cuts edge $i$ and edge $j$. Another example is $T_i^j$, describing a triangular element where one edge and one node are cut. The edge and node numbering are evident from Fig. 7. Note that only the vertex nodes of a higher-order element are relevant for the subdivision scheme.
The introduced notation has the advantage that it directly indicates the number of possible topologies. As mentioned in Section 3, a valid cut element has exactly two intersections of the zero level set along the whole element boundary. As a result, all possible cut situations for triangles are described by $T_{ij}$, $T_i^j$ and $T^{ij}$, with $i, j \in \{1, 2, 3\}$. The same holds for quadrilaterals, $Q_{ij}$, $Q_i^j$ and $Q^{ij}$, with $i, j \in \{1, \ldots, 4\}$. It is worth recalling that a symmetry in the indices holds, e.g. $T_{ij} = T_{ji}$, which changes only the orientation of the normal vector but not the topological situation itself. Consequently, the permutations of $i$ and $j$ for subscripts and superscripts cover all possible topological situations. The only invalid situation is one node that is cut twice by the level set. In the proposed notation that would result in $T^{ij}$ and $Q^{ij}$ with $i = j$.
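The bookkeeping behind this notation is straightforward to implement (a small sketch with illustrative function names; the label strings follow the convention above in plain text):

```python
def topology_label(elem_type, cut_edges=(), cut_nodes=()):
    """Compose a topology label: element letter ('T' or 'Q'), then
    subscripts for cut edges and superscripts for cut nodes,
    e.g. 'Q_13' or 'T_1^2' in plain-text form."""
    label = elem_type
    if cut_edges:
        label += "_" + "".join(str(e) for e in sorted(cut_edges))
    if cut_nodes:
        label += "^" + "".join(str(n) for n in sorted(cut_nodes))
    return label

def is_valid_cut(cut_edges, cut_nodes):
    """A valid cut element has exactly two boundary intersections,
    and a node may not be cut twice."""
    two_cuts = len(cut_edges) + len(cut_nodes) == 2
    no_double_node = len(set(cut_nodes)) == len(cut_nodes)
    return two_cuts and no_double_node
```

Note that an edge cut twice is valid here; it simply triggers the nonlocal subdivision described below.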
4.1 Subdivision strategy for triangles
A configuration is defined as “local” if the element can be subdivided without influencing neighbour elements. For triangular elements there are two local configurations that affect only the considered element. Another three nonlocal configurations exist, influencing the subdivision of one neighbour element. The nonlocal scheme works successively, which means that the nonlocal subdivision first produces a local configuration. Once this is done, the local subdivision algorithm yields the desired element patches where some element nodes coincide with the reconstructed interface element.
4.1.1 Local subdivisions
The subdivisions which do not affect the neighbour elements, or do not need any information from the neighbour elements to produce a unique subdivision, are referred to as local subdivisions. The configuration in Fig. 8(a) is the first local topology. It consists of a triangular element divided into a triangle and a quadrilateral by two cut edges; all situations $T_{ij}$ with $i \neq j$ are included in this topological case. The other standard topology occurring for triangular elements is given in Fig. 8(b), where a vertex and the opposite edge are cut; again, the basic topology covers all combinations $T_i^j$ with node $j$ opposite edge $i$. For this topological case, the element patch is easily subdivided into two triangular elements. This case is particularly important when distinguished points occur, e.g. a sudden change in the geometry (e.g. corners) or of the boundary conditions. This is considered in detail in a follow-up paper.
4.1.2 Nonlocal subdivisions
Some topologies need a specialized treatment as they influence the neighbour elements. These cases occur in connection with a glancing intersection of the level set with one element edge. Often the problem could also be resolved by a refinement of the background mesh. However, from a computational point of view, it is still preferable to proceed as follows. We go through all elements of the mesh and find the intersections of the level set function with the element boundaries. For edges that are cut twice, the adjacent elements are subdivided as in Fig. 9(a). The element is subdivided at the point in the middle of the interval defined by the two roots on the edge. Collectively, all topological combinations with one edge cut twice are covered by this subdivision.
The two other cases are depicted in Fig. 9(b), where a node is cut together with a neighbouring edge, and Fig. 9(c), where two nodes are cut. The subdivision strategy is similar as before: both cases can be subdivided into two triangles, with the new point located in the middle between the intersections of the zero level set with the element boundary.
Once the nonlocal subdivisions are executed, all topological cases can be treated by the local subdivision strategy described before.
4.2 Subdivision strategy for quadrilaterals
Starting with a mesh composed of quadrilateral elements, one approach would be to convert the cut elements into triangles and apply the scheme for triangles from above. However, this would obviously lead to (slightly) different zero level sets compared to those implied by the interpolation functions in a quadrilateral element. Therefore, we rather develop schemes that work without prior conversion. A total of four local subdivisions are introduced, depending on the cut situation. As before, it is also necessary to consider cases where the neighbour elements are influenced, which are again named nonlocal subdivisions.
4.2.1 Local subdivisions
All situations featuring two cut adjacent edges are topologically equivalent, see Fig. 10(a). The subdivision for this topological case produces four triangular elements, and all rotations of the depicted situation are covered as well. Another case arises when two opposite edges are cut, as depicted in Fig. 10(b); this subdivision creates two quadrilaterals, and again the rotated situation is covered. The first topology also considering nodal cuts is depicted in Fig. 10(c). In that case, a node and a non-adjacent edge are cut, which subdivides the original quadrilateral into a quadrilateral and a triangle. The last local subdivision is depicted in Fig. 10(d). This topology features two cut opposite nodes, leading to two triangles.
4.2.2 Nonlocal subdivisions
To deal with glancing intersections, as depicted in Fig. 11(a), it is necessary to subdivide the elements adjacent to the considered edge. In total, as for the triangular elements, three different topological cases arise. All situations have in common that one edge is cut twice. In the first case, see Fig. 11(a), the element is subdivided into three triangles, whereby the triangles share the point in the middle of the two roots. The other topological cases are similar to the former one when placing one root, as in Fig. 11(b), or both roots on the vertex nodes, see Fig. 11(c). These cover the situations with one and with two cut vertices, respectively.
4.3 Coordinate mappings
The task is now to construct a map for all points from a higher-order reference element to the previously defined low-order subcells with potentially one higher-order side. The mapping is constructed so that the following properties are fulfilled: (i) the element resulting from the coordinate transformation yields optimal interpolation properties and (ii) one element edge aligns with the (curved) higher-order interface element.
As seen from Fig. 12, several element types and corresponding coordinate systems are involved. Central to the whole procedure are the low-order subcells (with one higher-order side) in the cut reference background element, as defined above. They define the outer boundary of the higher-order elements resulting from the automatic conformal decomposition. Nevertheless, the mapping for inner points in the subcells is not unique, so that inner nodes of the resulting higher-order elements may be placed differently, which has important implications on the convergence properties. The higher-order interface element representing the discretized zero level set is initially defined in its own reference coordinate system and is mapped to the cut reference background element. Finally, there are higher-order reference subelements, defined in their own coordinate system, which are used to define the initial location of the nodes before the mapping to the cut reference background element.
It is useful to introduce the following definitions related to nodal sets and corresponding shape functions of the higher-order subelements in their reference coordinate system. For each subelement, the set of corner nodes is defined: three nodes for a triangular and four for a quadrilateral element. From these nodes, either one node or two nodes lie on the linear reconstruction when mapped to the cut reference background element, see Fig. 10(a). For simplicity, we choose a node numbering in the subelements such that edge 2 is the one that, after the mapping, will coincide exactly with the interface element. The set of nodes on edge 2 and its two corner nodes are distinguished. Related to these nodes are the nodal coordinates before and after the mapping to the cut reference background element; note that the corner nodes of edge 2 superpose in both coordinate systems. We associate the following sets of shape functions with the above-mentioned nodes: the low-order shape functions of the reference subelement at the corner nodes; the higher-order shape functions on the special side of the subelement; and the low-order shape functions on the special side, which coincide with the trace of the corner-node shape functions on edge 2.
The aim is to construct a map for all nodal points from the reference subelement to the cut reference background element, based on the known parametrization of the edges (note that the interface element is included in some 2D elements as a curved edge). The mapping transforms the low-order subcells into higher-order elements with one curved edge. Therefore, we adapt a transfinite interpolation technique as described e.g. in Gordon_1973a (); Gordon_1973b (); Szabo_2004a (); Solin_2003a (). The scheme yields the blending function method for the quadrilaterals Gordon_1973a (); Gordon_1973b () and a slightly modified technique for the triangular elements. We start with the mapping for elements whose edges are all straight (or, alternatively, under the assumption that the interface element is straight). In this case the mapping is defined completely by the vertices and is given by the standard isoparametric mapping for bilinear elements as
$$\boldsymbol{x}(\boldsymbol{r}) = \sum_{i \in I_{\mathrm{corner}}} N_i(\boldsymbol{r})\, \boldsymbol{x}_i. \qquad (4)$$
The coordinates $\boldsymbol{x}_i$ are the nodal coordinates of the corner nodes of a subelement in the cut reference background element, and $N_i$ are the corresponding low-order shape functions in the reference coordinate system of the subelement. To include the more elaborate case of a curved higher-order interface element, an additional term must be added to the standard isoparametric mapping, defined as
$$\boldsymbol{x}(\boldsymbol{r}) = \sum_{i \in I_{\mathrm{corner}}} N_i(\boldsymbol{r})\, \boldsymbol{x}_i + \boldsymbol{\Delta}(\boldsymbol{r}). \qquad (5)$$
Equations (4) and (5) are valid for any point inside . Note that Eq. (5) must include Eq. (4) as a special case of an element with only straight edges. Therefore the additional vector function must be defined such that it vanishes for elements with only straight edges. It is interesting to note that the definition of the geometric mapping (5) formally matches enriched approximations such as those used in the XFEM Belytschko_1999a (); Moes_1999a (); Fries_2009b (); Babuska_2002a (); Babuska_2004a ().
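The straight-edge case of Eq. (4) can be sketched for a quadrilateral subelement as follows (a minimal sketch with illustrative names, using the reference square $[-1,1]^2$ as an assumed convention):

```python
import numpy as np

def bilinear_map(r, s, corners):
    """Standard bilinear isoparametric map from the reference square
    [-1,1]^2 to a quadrilateral given by its four corner nodes,
    ordered counter-clockwise, cf. Eq. (4)."""
    N = 0.25 * np.array([(1 - r) * (1 - s), (1 + r) * (1 - s),
                         (1 + r) * (1 + s), (1 - r) * (1 + s)])
    return N @ np.asarray(corners, dtype=float)
```

The enriched mapping of Eq. (5) reduces to this map whenever the additional vector function vanishes, i.e. for subelements with only straight edges.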
For the definition of the additional vector function $\boldsymbol{\Delta}$, we impose two important constraints:

1. The corner nodes of the higher-order interface element match the corner nodes of the special side of the low-order element in the cut reference background element.

2. Those edges of the low-order elements that are not on the special side shall remain straight after the mapping.

This leads to the requirement that
$$\boldsymbol{\Delta}(\boldsymbol{r}) = \boldsymbol{0} \quad \text{on all edges that are not on the special side.} \qquad (6)$$
We find that this can be achieved when splitting $\boldsymbol{\Delta}$ into the product of a blending function $F$ and a deviation term $\boldsymbol{d}$, i.e. $\boldsymbol{\Delta} = F\,\boldsymbol{d}$, with the definition of $\boldsymbol{d}$ as

$$\boldsymbol{d} = \sum_{i} \left[ M_i - \tilde{M}_i \right] \hat{\boldsymbol{x}}_i, \qquad (7)$$

where $M_i$ are the higher-order shape functions on the special side, $\tilde{M}_i$ are the low-order shape functions on the special side (set to zero for the non-corner nodes), and the sum extends over the nodes on edge 2.
The coordinates $\hat{\boldsymbol{x}}_i$ in Eq. (7) are the element nodes of the reconstructed interface element in the cut reference background element. This function corresponds geometrically to subtracting the linear connection of the corner nodes from the curved higher-order side. The blending function is defined differently for triangular and quadrilateral elements and describes how the curved edge is blended into an element with only straight edges. However, note that this blending function is not unique and there are many conceivable alternatives to it.
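The geometric content of Eq. (7), subtracting the straight connection of the end points from the curved side, can be sketched directly (illustrative names; the curved side is passed as a callable parametrized over [0, 1]):

```python
import numpy as np

def edge_deviation(t, curve):
    """Deviation of the curved interface edge from the straight
    connection of its two end points; it vanishes identically
    when the edge is straight, cf. Eq. (7)."""
    a, b = curve(0.0), curve(1.0)          # corner nodes of the edge
    return curve(t) - ((1.0 - t) * a + t * b)
```

By construction the deviation is zero at both end points, which is exactly what makes the blended mapping conforming across subelement boundaries.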
As theoretically devised by Ciarlet_1972b () for 2D simplex elements and extended by Lenoir_1986a () and Bernardi_1989a () to higher dimensions, certain smoothness properties have to be fulfilled by the desired coordinate transformation. In general, it can be stated that the geometrical mapping that is defined by the shape functions in the reference domain should also be polynomial in the real space. Further explanations and extensions to transfinite mappings are found in Solin_2003a (); Trangenstein_2013a ().
4.3.1 Mapping for curved triangular elements
It remains to define the blending functions for triangles and quadrilaterals. The parametrization of edge 2 is done by means of vertex functions. The choice of the blending for triangles is then given by Solin_2003a () as
$$\boldsymbol{\Delta}_T(\lambda_1, \lambda_2, \lambda_3) = (\lambda_1 + \lambda_2)\; \boldsymbol{d}\!\left(\frac{\lambda_2}{\lambda_1 + \lambda_2}\right), \qquad (8)$$

where $\lambda_i$ are the vertex (barycentric) functions of the triangle, the curved side connects vertices 1 and 2, and $\boldsymbol{d}$ is the deviation of Eq. (7) parametrized along the special side.
The mapping is valid for all points with the exception of the nodal point(s) where a division by zero arises. For these points, however, the mapping of the vertices is defined by the standard isoparametric mapping. Also alternative choices of the blending function are conceivable, e.g. using the ramp function, which is in fact the mapping of choice for the quadrilaterals. Another choice is to use mappings such as in Ciarlet_1972b (); Scott_1973a (); Lenoir_1986a (). In this case, intersections of the boundary nodes are taken as internal nodes; this is further on designated as the Lenoir-mapping. An issue with this approach is that the extension to 3D is not trivial. Differences in the mappings are plotted in Fig. 13. In the following section, we investigate the influence of the choice of the blending function on the convergence rates. Note that in the case of straight edges the enrichment function vanishes and, consequently, a standard bilinear mapping is obtained.
In Fig. 13, the inner node distributions are shown for different versions of . The edge is hereby parametrized by equally spaced nodes on a quarter circle. Note that the mapping of the boundary nodes coincides for all choices of but differs inside. As shall be seen below, the node distribution inside the elements is of great importance for the achieved convergence properties.
4.3.2 Mapping for curved quadrilateral elements
For quadrilateral elements, the optimal mapping is found using the blending function method. Introduced by Gordon_1973b () for transfinite (in general non-polynomial) mappings, it can also be used for isoparametric edge approximations. In this case, a scalar vertex or ramp function is defined by
(9) 
The enrichment function for quadrilaterals is then defined as the linearly blended map
For quadrilateral elements, the parametrization of edge 2 is given by . It is seen that for and, hence, also for the nodes in . Furthermore, for , i.e. for all remaining nodes in . Again, if edge 2 is straight, the enrichment function vanishes, and a standard low-order isoparametric mapping is obtained.
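To illustrate the idea, the linearly blended map for a quadrilateral with one curved edge can be sketched as follows. This is a minimal example in the spirit of the blending function method described above; the corner ordering, the concrete ramp function and the edge parametrization `curve` are our own illustrative assumptions, not the paper's notation.

```python
import numpy as np

def blended_quad_map(xi, eta, verts, curve):
    """Map reference coordinates (xi, eta) in [-1, 1]^2 to physical space
    for a quadrilateral whose edge from verts[1] to verts[2] is curved.

    verts : (4, 2) array of corner coordinates, counter-clockwise.
    curve : callable eta -> (2,) point on the curved edge, with
            curve(-1) == verts[1] and curve(+1) == verts[2].
    """
    # standard bilinear (straight-edged) part
    N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                         (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])
    x = N @ verts
    # ramp function: 1 on the curved edge (xi = +1), 0 on the opposite one
    ramp = 0.5 * (1 + xi)
    # straight chord between the two corners of the curved edge
    chord = 0.5 * ((1 - eta) * verts[1] + (1 + eta) * verts[2])
    # add the deviation of the curve from its chord, linearly blended
    return x + ramp * (curve(eta) - chord)
```

On the curved edge (xi = +1) the map reproduces `curve` exactly, on the opposite edge it reduces to the bilinear map, and for a straight edge the correction term vanishes, recovering the standard low-order mapping.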
4.4 Invalid topologies
As mentioned in Section 3, the assessment as a valid or invalid topology is based on level set data on the boundary and does not take into account the shape of the level set function inside the element. However, in some valid configurations the reconstruction may fail if the curvature of the level set function is too high. A topological situation leading to such a case is shown in Fig. 15(a). The reconstruction, see Fig. 15(b), clearly fails; the Runge phenomenon is also apparent. However, as the mesh is successively refined, see Fig. 15(c) and Fig. 15(d), the situation is finally resolved. For the validity check, a finite number of points is chosen and the Jacobian is evaluated. Here, the points are chosen as the integration points of the elements that emerge from the subdivision scheme, but other choices are possible. The subelements resulting from the decomposition are considered valid if the Jacobian is strictly positive at all discrete points.
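Such a pointwise validity check can be sketched as follows, here for a 6-node quadratic triangle; the node ordering and the choice of check points are illustrative assumptions.

```python
import numpy as np

def jacobians_positive(nodes, check_pts):
    """Check det(J) > 0 for a quadratic (6-node) triangle at the given
    reference points (r, s). Node order: vertices v0, v1, v2, then the
    mid-edge nodes on edges v0-v1, v1-v2, v2-v0."""
    ok = True
    for (r, s) in check_pts:
        t = 1.0 - r - s
        # derivatives of the quadratic shape functions w.r.t. (r, s)
        dN = np.array([
            [1 - 4 * t, 1 - 4 * t],   # N0 = t(2t - 1)
            [4 * r - 1, 0],           # N1 = r(2r - 1)
            [0, 4 * s - 1],           # N2 = s(2s - 1)
            [4 * (t - r), -4 * r],    # N3 = 4rt
            [4 * s, 4 * r],           # N4 = 4rs
            [-4 * s, 4 * (t - s)],    # N5 = 4st
        ], dtype=float)
        J = nodes.T @ dN              # 2x2 Jacobian of the mapping
        ok &= np.linalg.det(J) > 0.0
    return ok
```

A straight-sided element with mid-edge nodes at the edge midpoints passes the check everywhere, whereas pulling a mid-edge node far off the edge produces a negative Jacobian at some check points.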
4.5 Influence of the subdivision scheme on the condition number
An obvious consequence of the proposed methodology is the presence of awkward element shapes: very small inner angles may occur and the size of neighbouring elements can vary strongly. This may lead to poorly conditioned system matrices, see e.g. Frei_2014a (). Possible remedies are preconditioning schemes or simple mesh manipulation procedures that prevent very small angles. The influence on the condition number is shown here using the example of a plate with a hole, which is investigated later in Section 5. The mesh consists of elements in total, and the element order is varied between and . The circular inclusion is moved vertically and for each case the condition number is computed and shown in Fig. 16(b).
Note that the peaks representing a very high condition number of the system of equations (resulting from an FE analysis of the plate with a hole) emerge only for very few geometries in a small range of the relative displacement. It is not difficult to identify the nodes in the background mesh that cause these high condition numbers, and several techniques are available to remedy the situation. Among them are node manipulation schemes as in Loehnert_2014a (); Rangarajan_2014a (); Labelle_2007a () and stabilization methods as in Loehnert_2014a (). Herein, direct solvers are used and none of these techniques is applied.
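How a single tiny cut element degrades the conditioning can already be seen in 1D. The following hypothetical sketch assembles the stiffness matrix of the 1D Laplacian on a mesh whose last element shrinks, mimicking a cut far from the background nodes:

```python
import numpy as np

def stiffness_1d(nodes):
    """Assemble the 1D Laplace stiffness matrix for linear elements on
    the given node positions, with a homogeneous Dirichlet BC at x = 0."""
    n = len(nodes)
    K = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    return K[1:, 1:]   # eliminate the constrained dof

# shrinking the last ("cut") element blows up the condition number
for eps in (1e-1, 1e-3, 1e-5):
    nodes = np.array([0.0, 0.25, 0.5, 0.75, 1.0 - eps, 1.0])
    print(f"eps = {eps:.0e}: cond = {np.linalg.cond(stiffness_1d(nodes)):.2e}")
```

The condition number grows roughly like the inverse of the smallest element size, which is exactly the behaviour the remedies above are designed to counter.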
5 Numerical results
In order to illustrate the convergence properties and the stability of the proposed method, we present four examples with meshes involving non-fitting interfaces. In the test cases, we focus on (i) the quality of the interface reconstructions, (ii) the approximation properties in the context of inner-element boundaries and interfaces, and (iii) the conditioning of the system of equations. The first example is a geometry approximation using a smooth level set function defining an interface. The second example investigates an approximation problem (projection) of a smooth function on a remeshed, implicitly defined circular domain. Finally, the third and fourth examples are boundary value problems in linear elasticity. The third example is a bimaterial problem and is representative of methods dealing with (weak) discontinuities such as the XFEM. The fourth example demonstrates the capability of the procedure for a problem typical of fictitious domain methods. Both BVPs are often used as benchmarks for the performance of the p-version of the FEM.
5.1 Interface reconstruction of a flowershaped inclusion
Given a reasonably well-graded background mesh, this example shows that fairly complicated domains can be remeshed automatically using the proposed reconstruction scheme. The example is inspired by Li1998_a (); Lehrenfeld_2016a (), with the analytical level set function (which could, e.g., describe the boundary of an inclusion) given by
(10) 
where is the location of a considered point in , the angle of the line connecting the origin and , and a set of variables determining the shape of the inclusion. For the considered example, these variables are chosen as . Fig. 17(a) shows a structured Cartesian background mesh and Fig. 17(b) a deformed background mesh. In the figures, quadrilateral elements of order are shown, together with the interface, which is the zero level set of Eq. (10).
We study the error at integration points distributed inside the reconstructed interface elements and evaluate the analytical level set function at these points. Because the interface elements provide an approximation of the zero level set, a suitable error measure is defined as
(11) 
There, are the integration points in the reconstructed interface and is the analytical level set function evaluated at . The level set function is evaluated at the nodes of the background mesh and interpolated inside the domain using the finite element functions. Note that the Newton-Raphson procedure from Section 3 ensures that the element nodes are “exactly” on the zero level set of (with a tolerance of ). However, for the implied interface element in between, and consequently also for the integration points, a slight deviation from the zero level set of occurs. The error norm in Eq. (11) accounts for the error in compared to and for the error of the interface elements in approximating the zero level set of .
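Since the concrete expressions of Eq. (10) and Eq. (11) are not reproduced here, the following sketch uses a representative flower-shaped level set function (radius, amplitude and petal count are assumed parameters) and an RMS-type analogue of the error measure evaluated at sample points on the reconstructed interface:

```python
import numpy as np

def phi(x, y, r0=0.5, a=0.2, k=5):
    """Representative flower-shaped level set function: the zero level set
    is a circle of radius r0 modulated by k 'petals' of amplitude a."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return r - r0 - a * np.cos(k * theta)

def interface_error(points):
    """RMS of the level set values at points sampled on a reconstructed
    interface; zero iff all points lie exactly on the zero level set."""
    vals = phi(points[:, 0], points[:, 1])
    return np.sqrt(np.mean(vals ** 2))

# points generated from the exact parametrization have (near-)zero error,
# while slightly perturbed points yield a measurable deviation
theta = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
r = 0.5 + 0.2 * np.cos(5 * theta)
exact = np.column_stack((r * np.cos(theta), r * np.sin(theta)))
print(interface_error(exact))          # close to machine precision
print(interface_error(exact * 1.01))   # perturbed points: clearly nonzero
```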
In Fig. 18, the automatically generated conformal meshes are visualized. It is emphasized once more that, in general, a mixed mesh consisting of triangular and quadrilateral elements results. For clarity, the element nodes are not plotted in the meshes.
The analysis is carried out using a series of embedded meshes, indicated by the characteristic element length , measured in the background mesh (before the decomposition). As shown in Fig. 19(a) and Fig. 19(b), the convergence properties of the method are optimal after the automatic remeshing, both for the Cartesian and for the deformed background mesh. Note that no node moving was applied and some of the elements look quite awkward. Nevertheless, all elements resulting from the subdivision procedure are topologically valid. However, for higher curvatures inside the elements, it can be advisable to apply some mesh manipulations before the subdivision scheme is applied.
5.2 Approximation problem with circular inclusion
The second example shows that the automatic decomposition of the background mesh retains the optimal approximation properties of the standard projection. The convergence properties of the different mappings for triangular elements introduced in Section 4 are also investigated. Again, a Cartesian and a deformed background mesh, respectively, are automatically decomposed to align with a circular inclusion, see Fig. 21(a) and Fig. 21(b). The inclusion has radius and is centred at the origin. The function to be interpolated is given as and is therefore smooth over the whole domain. Fig. 20(a) and Fig. 20(b) show plots of the function over the background meshes.
The error is measured in the standard norm as well as in the norm, to show that in both error measures optimal convergence rates are obtained.
To begin with, we consider the approximation error of in the norm defined by
As depicted in Fig. 22, optimal convergence rates are achieved. The gradient approximation error is investigated in the same way; it is measured in the energy norm
The results in Fig. 22 and Fig. 23 show that the remeshing strategy indeed achieves asymptotically optimal convergence properties, in both the norm and the norm.
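The qualitative behaviour of these two error norms can be reproduced in a tiny 1D model problem. The following sketch, under the assumption of piecewise-linear interpolation of a smooth function, shows the expected rates: halving the element size reduces the L2 error by a factor of about 4 and the energy-norm error by a factor of about 2.

```python
import numpy as np

def interp_errors(n):
    """L2- and energy-norm errors of the piecewise-linear interpolant of
    f(x) = sin(pi x) on a uniform mesh of n elements over [0, 1],
    evaluated with a 2-point Gauss rule per element."""
    f = lambda x: np.sin(np.pi * x)
    df = lambda x: np.pi * np.cos(np.pi * x)
    nodes = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # Gauss points on [-1, 1]
    e_l2 = e_en = 0.0
    for i in range(n):
        xl, xr = nodes[i], nodes[i + 1]
        x = 0.5 * (xl + xr) + 0.5 * h * gp
        uh = f(xl) + (f(xr) - f(xl)) * (x - xl) / h   # linear interpolant
        duh = (f(xr) - f(xl)) / h                     # its (constant) slope
        w = 0.5 * h                                   # both Gauss weights
        e_l2 += w * np.sum((f(x) - uh) ** 2)
        e_en += w * np.sum((df(x) - duh) ** 2)
    return np.sqrt(e_l2), np.sqrt(e_en)

# halving h reduces the L2 error by ~2^2 and the energy error by ~2^1
for n in (8, 16, 32):
    print(n, interp_errors(n))
```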
For the sake of completeness, the error for the other mappings of the element nodes to the subcells discussed in Section 4, namely the Lenoir mapping and the blending function mapping, is also shown for the Cartesian background mesh. The Lenoir mapping gives optimal approximation properties (but is more difficult to implement than the mapping actually applied). The blending function mapping, on the contrary, gives clearly suboptimal results and degrades the convergence rates significantly. This is due to the properties of the Jacobian and is well known in the meshing community.
5.3 Bimaterial boundary value problem
As a third example, a plate with a circular inclusion is considered to illustrate the performance for the approximation of general boundary value problems with a curved (weak) discontinuity. The example is a well-known benchmark problem for higher-order extended finite element procedures, see e.g. Cheng_2009a (); Sukumar_2001c (). The error in the displacements (measured in the norm), the error in the stresses (measured in the norm) and the condition number of the resulting system of equations are investigated. Regarding the error measures employed in the convergence studies for this and the next example: using the analytical solution and the finite element solution , the norm on is defined as
(12) 
To investigate the convergence behaviour of the function derivatives, we use the strain energy , defined as
(13) 
with being the elastic stiffness tensor and and the strain fields of the exact solution and the finite element approximation, respectively. We use relative versions of these two norms, i.e. and . Assuming that a smooth function is approximated on a regular domain using polynomials of degree and with a discretization aligned with the interface, optimal convergence rates are expected, see Szabo_1991a (); Strang_2008a (). Under these assumptions, the convergence rates in the error norms defined above are
In the case of unfitted interfaces, e.g. when a piecewise straight approximation of curved boundaries or material interfaces crossing the elements is used, the approximation quality is significantly degraded. The convergence rates are then independent of the ansatz order of the finite element functions and are limited by the smoothness of the approximated function or the accuracy of the geometry description.
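In the convergence studies below, the observed rates are estimated from the errors on successively refined meshes. A small helper, shown here with hypothetical error data, makes the computation explicit:

```python
import math

def observed_rates(h, err):
    """Observed convergence rates between consecutive refinement levels,
    rate_i = log(err_i / err_{i+1}) / log(h_i / h_{i+1})."""
    return [math.log(err[i] / err[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(h) - 1)]

# hypothetical error data behaving like err ~ C * h^3
# (the expected L2 rate p + 1 for elements of order p = 2)
h = [0.1, 0.05, 0.025]
err = [1.0e-3, 1.25e-4, 1.5625e-5]
print(observed_rates(h, err))   # both entries close to 3
```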
The problem statement is an infinite domain consisting of material with a circular inclusion of material . As before, two different background meshes are chosen, both depicted in Fig. 25(a) and Fig. 25(b). The circle with radius is centred at the origin. On the entire outer domain boundary, represented by red circles in Fig. 25, Dirichlet boundary conditions are imposed using the analytical solution given in Eqs. (14) and (15).
The solution for the displacement components and in polar coordinates is given by Sukumar_2001c (); Fries_2006b () as
(14)  
(15) 
with the parameter defined as
Here, are the standard polar coordinates, the traction in -direction, and the shear modulus. For our example, plane strain conditions are considered; the Kolosov constant is therefore defined as κ = 3 - 4ν, with ν being the Poisson ratio. For the numerical computations, the material constants for material and material are chosen as and , respectively. For all following examples, the mesh is sequentially subdivided into quadrilateral elements per side. The characteristic element length is given as . A standard displacement-based finite element formulation is used to solve the equations.
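For reference, the Kolosov constant used above depends on the plane-stress versus plane-strain assumption; a minimal sketch (function names are our own):

```python
def kolosov_plane_strain(nu):
    """Kolosov constant for plane strain: kappa = 3 - 4*nu."""
    return 3.0 - 4.0 * nu

def kolosov_plane_stress(nu):
    """For comparison, plane stress: kappa = (3 - nu) / (1 + nu)."""
    return (3.0 - nu) / (1.0 + nu)

print(kolosov_plane_strain(0.3))   # 1.8
```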
The first convergence study measures the displacement error in the norm, , as introduced in Eq. (12). The dotted lines show the theoretically optimal convergence rates. Fig. 26 displays the error as a function of the characteristic element size . The results validate the proposed approach and show excellent agreement between the optimal and the calculated convergence rates.
The error in the strain energy is also calculated, using the analytical solution as presented in Sukumar_2001c (); Fries_2006b (). The results in Fig. 27 clearly show optimal convergence rates for the strain energy norm as well.
Plotting the condition number of the system stiffness matrix over the characteristic element length yields Fig. 28(a) for the Cartesian background mesh and Fig. 28(b) for the deformed background mesh. In this example, the condition number is not excessively high; therefore, no effort was made to manipulate the node positions in the background mesh.
5.4 Infinite plate with a circular hole
The last example is chosen to illustrate the performance of the proposed higher-order CDFEM in the context of fictitious domain methods, where the (curved) domain boundary is described implicitly using the level set function. The example is, as before, a benchmark problem for higher-order finite element procedures, see e.g. Cheng_2009a (); Daux_2000b (). The error in the displacements (measured in the norm), the error in the displacement derivatives (measured in the norm) and the condition number of the system of equations are investigated.
The problem statement is an infinite domain with a circular hole and unidirectional tension in -direction. As before, two different background meshes are chosen, a Cartesian and a deformed one, both depicted in Fig. 29(a) and (b). The circular hole with radius is centred at the origin. Instead of traction boundary conditions, Dirichlet boundary conditions are imposed on the entire outer domain boundary, represented by red circles in Fig. 29, using the analytical solution given in Eq. (16) and Eq. (17).
The solution for the displacement components and in Cartesian coordinates is given in Szabo_1991a () as
(16)  
(17) 
with being the standard polar coordinates, the traction in -direction, and the shear modulus. For our example, plane strain conditions are considered; the Kolosov constant is therefore given as κ = 3 - 4ν, with ν being the Poisson ratio. For the following numerical computations, the material constants are chosen as and .
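Since Eqs. (16) and (17) are not reproduced above, the classical closed-form displacement field for this benchmark (in the form given e.g. by Szabó and Babuška) can be coded as follows; the default values for the hole radius, remote traction, shear modulus and Kolosov constant are placeholders, not the paper's actual parameters.

```python
import numpy as np

def exact_displacement(x, y, a=1.0, T=1.0, mu=1.0, kappa=1.8):
    """Displacement field for an infinite plate with a circular hole of
    radius a under remote uniaxial tension T in x-direction; kappa is
    the Kolosov constant (kappa = 1.8 corresponds to plane strain with
    nu = 0.3)."""
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    c = T * a / (8.0 * mu)
    ux = c * ((r / a) * (kappa + 1.0) * np.cos(th)
              + (2.0 * a / r) * ((1.0 + kappa) * np.cos(th) + np.cos(3.0 * th))
              - (2.0 * a ** 3 / r ** 3) * np.cos(3.0 * th))
    uy = c * ((r / a) * (kappa - 3.0) * np.sin(th)
              + (2.0 * a / r) * ((1.0 - kappa) * np.sin(th) + np.sin(3.0 * th))
              - (2.0 * a ** 3 / r ** 3) * np.sin(3.0 * th))
    return ux, uy
```

A quick consistency check: far away from the hole, the field approaches the homogeneous uniaxial state, and on the x-axis the vertical displacement vanishes by symmetry.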
As before, the mesh is sequentially subdivided into quadrilateral elements per side. The automatically generated elements inside the hole are simply neglected when integrating the weak form of the partial differential equation. The first convergence study measures the displacement error in the norm, as introduced in Eq. (12). The dotted lines show the theoretically optimal convergence rates. Fig. 30 displays the error as a function of the characteristic element length . The results again show excellent agreement between the optimal and the computed convergence rates.
The exact stresses and strains are given e.g. in Cheng_2009a (); Daux_2000b (). The error in the strain energy is calculated using the error measure in Eq. (13). The results are shown in Fig. 31 and are again optimal. The condition numbers for the Cartesian and the deformed background mesh are plotted in Fig. 32(a) and Fig. 32(b).
6 Conclusions, Limitations and Perspectives
In this work, a higher-order conformal decomposition finite element method (CDFEM) has been proposed. It combines the advantages of the FEM on body-fitted meshes (application of boundary and interface conditions, numerical integration) with those of embedded domain methods (no explicit manual mesh generation). The key aspect is the automatic decomposition of cut background elements into higher-order conforming subelements.
To this end, a two-step procedure is suggested: in the first step, the implicit interfaces described by the level set method are detected and meshed. Then, based on the topological situation in the cut element, higher-order elements are mapped to the two sides of the interface. It is shown that the location of the inner element nodes, which depends on the concrete mapping, has important implications for the achieved convergence rates.
The reconstruction and remeshing scheme is executed in the reference element, therefore non-structured background meshes can also be used. Under restrictions on the curvature of the level set function, all topological situations can be covered. Used in an FEM simulation, the automatically determined subelements may lead to an ill-conditioning of the system matrix. Several solution strategies will be investigated in future work. Herein, none of these techniques has been employed and, nevertheless, excellent results have been obtained with direct solvers.
The proposed approach shows excellent agreement between the optimal and the computed convergence rates. It is very flexible with respect to the employed element types and not limited to structured background meshes. It shows stable behaviour even for geometrically complex interfaces. The method is able to automatically remesh a broad class of problems with a sharp, implicit interface representation. A number of extensions can be conceived, for example to evolving interfaces as they occur in plasticity or two-phase flows. An extension to three-dimensional problems is currently in progress. It is also planned to combine the approach with adaptive refinement, such that even more complex geometrical situations can be covered automatically.
References
 [1] I. Babuška. The finite element method for elliptic equations with discontinuous coefficients. Computing, 5(3):207–213, 1970.
 [2] I. Babuška, U. Banerjee, and J.E. Osborn. Meshless and generalized finite element methods: A survey of some major results. In M. Griebel and M.A. Schweitzer, editors, Meshfree Methods for Partial Differential Equations, volume 26 of Lecture Notes in Computational Science and Engineering. Springer, Berlin, 2002.
 [3] I. Babuška, U. Banerjee, and J.E. Osborn. Generalized finite element methods: Main ideas, results and perspectives. Int. J. Comp. Meth., 1:67 – 103, 2004.
 [4] P. Bastian and C. Engwer. An unfitted finite element method using discontinuous Galerkin. Internat. J. Numer. Methods Engrg., 79(12):1557–1576, 2009.
 [5] T. Belytschko and T. Black. Elastic crack growth in finite elements with minimal remeshing. Internat. J. Numer. Methods Engrg., 45:601 – 620, 1999.
 [6] T. Belytschko, C. Parimi, N. Moës, N. Sukumar, and S. Usui. Structured extended finite element methods for solids defined by implicit surfaces. Internat. J. Numer. Methods Engrg., 56:609 – 635, 2003.
 [7] T. Belytschko, S.P. Xiao, and C. Parimi. Topology optimization with implicit functions and regularization. Internat. J. Numer. Methods Engrg., 57:1177 – 1196, 2003.
 [8] C. Bernardi. Optimal finiteelement interpolation on curved domains. SIAM Journal on Numerical Analysis, 26(5):1212–1240, 1989.
 [9] E. Burman, S. Claus, P. Hansbo, M.G. Larson, and A. Massing. CutFEM: Discretizing geometry and partial differential equations. Internat. J. Numer. Methods Engrg., 2014.
 [10] E. Burman and P. Hansbo. Fictitious domain finite element methods using cut elements: I. a stabilized Lagrange multiplier method. Comp. Methods Appl. Mech. Engrg., pages 2680 – 2686, 2010.
 [11] E. Burman and P. Hansbo. Fictitious domain finite element methods using cut elements: II. A stabilized Nitsche method. Applied Numerical Mathematics, 62:328 – 341, 2012.
 [12] A. Byfut, A. Schröder, and C. Carstensen.  adaptive extended finite element method. In T.P. Fries and A. Zilian, editors, Proceedings of the ECCOMAS Thematic Conference XFEM 2009, Aachen, Germany, 2009.
 [13] K.W. Cheng and T.P. Fries. Higher-order XFEM for curved strong and weak discontinuities. Internat. J. Numer. Methods Engrg., 82:564 – 590, 2010.
 [14] J. Chessa, P. Smolinski, and T. Belytschko. The extended finite element method (XFEM) for solidification problems. Internat. J. Numer. Methods Engrg., 53:1959 – 1977, 2002.
 [15] P.G. Ciarlet and P.A. Raviart. Interpolation theory over curved elements, with applications to finite element methods. Computer Methods in Applied Mechanics and Engineering, 1(2):217 – 249, 1972.
 [16] G.H. Cottet and E. Maitre. A levelset formulation of immersed boundary methods for fluidstructure interaction problems. Comptes Rendus Mathematique, 338:581 – 586, 2004.
 [17] C. Daux, N. Moës, J. Dolbow, N. Sukumar, and T. Belytschko. Arbitrary branched and intersecting cracks with the extended finite element method. Internat. J. Numer. Methods Engrg., 48:1741 – 1760, 2000.
 [18] A. Düster, J. Parvizian, Z. Yang, and E. Rank. The finite cell method for three-dimensional problems of solid mechanics. Comp. Methods Appl. Mech. Engrg., 197:3768 – 3782, 2008.
 [19] K. J. Fidkowski and D. L. Darmofal. A triangular cut-cell adaptive method for high-order discretizations of the compressible Navier-Stokes equations. J. Comput. Phys., 225(2):1653–1672, August 2007.
 [20] S. Frei and T. Richter. A locally modified parametric finite element method for interface problems. SIAM Journal on Numerical Analysis, 52(5):2315–2334, 2014.
 [21] T.P. Fries and S. Omerović. Higher-order accurate integration of implicit geometries. Internat. J. Numer. Methods Engrg., 2015. Article nme.5121.
 [22] T.P. Fries and T. Belytschko. The intrinsic XFEM: A method for arbitrary discontinuities without additional unknowns. Internat. J. Numer. Methods Engrg., 68:1358 – 1385, 2006.
 [23] T.P. Fries and T. Belytschko. The extended/generalized finite element method: An overview of the method and its applications. Internat. J. Numer. Methods Engrg., 84:253 – 304, 2010.
 [24] R. Glowinski, T.W. Pan, T.I. Hesla, D.D. Joseph, and J. Périaux. A distributed Lagrange multiplier/fictitious domain method for flows around moving rigid bodies: Application to particulate flow. Int. J. Numer. Methods Fluids, 30:1043 – 1066, 1999.
 [25] W.J. Gordon and C.A. Hall. Construction of curvilinear coordinate systems and applications to mesh generation. Internat. J. Numer. Methods Engrg., 7:461 – 477, 1973.
 [26] W.J. Gordon and C.A. Hall. Transfinite element methods: blending function interpolation over arbitrary curved element domains. Numer. Math., 21:109 – 129, 1973.
 [27] A. Hansbo and P. Hansbo. An unfitted finite element method, based on Nitsche’s method, for elliptic interface problems. Comp. Methods Appl. Mech. Engrg., 191:5537 – 5552, 2002.
 [28] R. M. J. Kramer and D. R. Noble. A conformal decomposition finite element method for arbitrary discontinuities on moving interfaces. Internat. J. Numer. Methods Engrg., 100(2):87–110, 2014.
 [29] F. Labelle and J. Shewchuk. Isosurface stuffing: Fast tetrahedral meshes with good dihedral angles. ACM Transactions on Graphics, 26:57.1 – 57.10, 2007.
 [30] G. Legrain. A nurbs enhanced extended finite element approach for unfitted cad analysis. Comput. Mech., 52(4):913–929, 2013.
 [31] C. Lehrenfeld. High order unfitted finite element methods on level set domains using isoparametric mappings. Comp. Methods Appl. Mech. Engrg., 300:716 – 733, 2016.
 [32] M. Lenoir. Optimal isoparametric finite elements and error estimates for domains involving curved boundaries. Society of Industrial and Applied Mathematics, 23:562 – 580, 1986.
 [33] Z. Li. A fast iterative algorithm for elliptic interface problems. SIAM J. Numer. Anal., 35(1):230–254, 1998.
 [34] S. Loehnert. A stabilization technique for the regularization of nearly singular extended finite elements. Comput. Mech., 54(2):523–533, 2014.
 [35] Q. Lu, M. S. Shephard, S. Tendulkar, and M. W. Beall. Parallel mesh adaptation for high-order finite element methods with curved element geometry. Engineering with Computers, 30(2):271–286, 2014.
 [36] X.J. Luo, M. S. Shephard, R. O’Bara, R. Nastasia, and M. W. Beall. Automatic p-version mesh generation for curved domains. Engineering with Computers, 20(3):273–285, 2004.
 [37] N. Moës, M. Cloirec, P. Cartraud, and J.F. Remacle. A computational approach to handle complex microstructure geometries. Comp. Methods Appl. Mech. Engrg., 192:3163–3177, 2003.
 [38] N. Moës, J. Dolbow, and T. Belytschko. A finite element method for crack growth without remeshing. Internat. J. Numer. Methods Engrg., 46:131 – 150, 1999.
 [39] D. R. Noble, E. P. Newren, and J. B. Lechman. A conformal decomposition finite element method for modeling stationary fluid interface problems. Int. J. Numer. Methods Fluids, 63(6):725–742, 2010.
 [40] S. Osher and R.P. Fedkiw. Level set methods: an overview and some recent results. J. Comput. Phys., 169:463 – 502, 2001.
 [41] S. Osher and R.P. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer, Berlin, 2003.
 [42] J. Parvizian, A. Düster, and E. Rank. Finite cell method: h- and p-extension for embedded domain problems in solid mechanics. Comput. Mech., 41:121 – 133, 2007.
 [43] C. S. Peskin. Flow patterns around heart valves: a digital computer method for solving the equations of motion. PhD thesis, Albert Einstein College of Medicine, Yeshiva University, 1972.
 [44] R. Rangarajan and A. Lew. Universal meshes: A method for triangulating planar curved domains immersed in nonconforming meshes. Internat. J. Numer. Methods Engrg., 98(4):236–264, 2014.
 [45] D. Schillinger, A. Düster, and E. Rank. The hp-d-adaptive finite cell method for geometrically nonlinear problems of solid mechanics. Internat. J. Numer. Methods Engrg., 89:1171 – 1202, 2012.
 [46] D. Schillinger, M. Ruess, N. Zander, Y. Bazilevs, A. Düster, and E. Rank. Small and large deformation analysis with the p and Bspline versions of the Finite Cell Method. Comput. Mech., 50:445 – 478, 2012.
 [47] L. R. Scott. Finite element techniques for curved boundaries. PhD thesis, Massachusetts Institute of Technology, 1973.
 [48] M. S. Shephard, J. E. Flaherty, K. E. Jansen, X. Li, X. Luo, N. Chevaugeon, J.F. Remacle, M. W. Beall, and R. M. O’Bara. Adaptive mesh generation for curved domains. Applied Numerical Mathematics, 52(2-3):251 – 271, 2005. ADAPT ’03: Conference on Adaptive Methods for Partial Differential Equations and Large-Scale Computation.
 [49] S. J. Sherwin and J. Peiró. Mesh generation in curvilinear domains using high-order elements. Internat. J. Numer. Methods Engrg., 53(1):207–223, 2002.
 [50] P. Šolín, K. Segeth, and I. Doležel. Higher-order finite element methods. CRC Press, Boca Raton, FL, 2003.
 [51] G. Strang and G. Fix. An Analysis of the Finite Element Method. WellesleyCambridge Press, 2008.
 [52] N. Sukumar, D.L. Chopp, N. Moës, and T. Belytschko. Modeling holes and inclusions by level sets in the extended finiteelement method. Comp. Methods Appl. Mech. Engrg., 190:6183 – 6200, 2001.
 [53] B. Szabó and I. Babuška. Finite Element Analysis. John Wiley & Sons, Chichester, 1991.
 [54] B. Szabó, A. Düster, and E. Rank. The p-Version of the Finite Element Method, chapter 5, pages 119 – 139. John Wiley & Sons, Chichester, 2004.
 [55] J. A. Trangenstein. Numerical Solution of Elliptic and Parabolic Partial Differential Equations. Cambridge University Press, 2013. Cambridge Books Online.
 [56] X. Wang, M.Y. Wang, and D. Guo. Structural shape and topology optimization in a level-set-based framework of region representation. Structural and Multidisciplinary Optimization, 27:1 – 19, 2004.