
# Algebraic multilevel preconditioners for the graph Laplacian based on matching in graphs

## Abstract.

This paper presents estimates of the convergence rate and complexity of an algebraic multilevel preconditioner based on piecewise constant coarse vector spaces applied to the graph Laplacian. A bound is derived on the energy norm of the projection operator onto any piecewise constant vector space, which results in an estimate of the two-level convergence rate in the case where the coarse level graph is obtained by matching. The two-level convergence of the method is then used to establish the convergence of an Algebraic Multilevel Iteration that uses the two-level scheme recursively. On structured grids, bounds on the convergence rate and complexity of each cycle are proven, where n denotes the number of unknowns in the given problem. Numerical results of the algorithm applied to various graph Laplacians are reported. It is also shown that all the theoretical estimates derived for matching can be generalized to the case of aggregates containing more than two vertices.

###### 1991 Mathematics Subject Classification:
65N30, 65N15
This work was supported in part by the National Science Foundation, DMS-0810982, OCI-0749202, and by the Austrian Science Fund, Grants P19170-N18 and P22989-N18.

## 1. Introduction

Algebraic Multigrid (AMG) attempts to mimic the main components of Geometric Multigrid in an algebraic fashion, that is, by using information from the coefficient matrix only to construct the multilevel solver. The basic algorithm uses a setup phase to construct a nested sequence of coarse spaces that are then used in the solve phase to compute the solution. The two main approaches to the AMG setup algorithm are classical AMG [3, 4] and (smoothed) aggregation AMG [13, 17, 9, 18, 11], which are distinguished by the type of coarse variables used in the construction of AMG interpolation.

In the classical AMG algorithm, the coarse variables are chosen using a coloring algorithm which is designed to find a suitable maximal independent subset of the fine variables. Then, given the coarse degrees of freedom, a row of interpolation is constructed for each fine point from its neighboring coarse points. In contrast, the aggregation-based AMG setup algorithm partitions the fine variables into disjoint subdomains, called aggregates. Then, a column (or several columns as in [18]) of interpolation is associated to each aggregate, which has nonzero entries only for the unknowns belonging to this aggregate. The focus of this paper is on the development and, in particular, the analysis of the latter aggregation-type methods.

The idea of aggregating unknowns to coarsen a system of discretized partial differential equations dates back to work by Leont’ev in 1959 [12]. Simon and Ando developed a related technique for aggregating dynamic systems in 1961 [13], and a two-grid aggregation-based scheme was considered in the context of solving Markov chain systems by Takahashi in 1975 [15]. Aggregation-based methods have been studied extensively since, and numerous algorithms and theoretical results have followed [9, 18, 11]. Vaněk introduced an extension of these methods known as smoothed aggregation multigrid, in which smoothing steps are applied to the columns of the aggregation-based interpolation operator to accelerate two-level convergence; a modification of this two-level algorithm with overcorrection is presented in [17]. A multilevel smoothed aggregation algorithm and its convergence analysis are found in [16] and, in [19], an improved convergence theory of the method is presented. The latter theory is then extended to allow for aggressive coarsening, provided an appropriate polynomial smoother is used [8]. A further generalization known as adaptive smoothed aggregation is developed in [7]. Variants of the above approaches continue to be developed for use in scientific computing, for example for higher order partial differential equations [18], convection-diffusion problems [11], Markov chains [14, 2], and the Dirac equation in quantum chromodynamics [5].

In this paper, an aggregation-based Algebraic Multigrid method for the graph Laplacian is presented. The approach constructs the sequence of coarse graphs recursively using a pair-wise aggregation, or matching, form of interpolation. However, it is demonstrated here that the convergence rate of a two-level method based on such a construction is uniformly bounded for the graph Laplacian on general graphs and, thus, the two-level method can be used within an Algebraic Multilevel Iteration (AMLI) [1, 19] as a preconditioner for the Conjugate Gradient iteration to obtain a nearly optimal solver. A noteworthy feature of the approach is its simplicity, which makes it possible to analyze the convergence and complexity of the method with few assumptions and without any geometric information.

The remainder of the paper is organized as follows. In Section 2, we introduce the graph Laplacian problem and discuss some of its applications. In Section 3, we introduce a graph matching algorithm and demonstrate that the energy norm of the projection onto the coarse space is a key quantity in deriving convergence and complexity estimates of the method. Additionally, we introduce an approach for computing an approximation of the energy norm of this projection operator. In Section 4, we present an analysis of the two-level method for the graph Laplacian operator. In Section 5, we consider the convergence and complexity of the resulting AMLI method, and in Section 6 we provide numerical results and address some practical issues of the method.

## 2. Problem formulation and notation

Consider an unweighted connected graph G = (V, E), where V denotes the set of vertices and E denotes the set of edges of G. The variational problem considered here is as follows: Given an f ∈ Rn satisfying (f, 1) = 0, where 1 is the constant vector, find a u ∈ Rn, where n = |V| denotes the cardinality of the set of vertices V, such that

 (2.1) (Au,v)=(f,v),∀v∈Rn,

where

 (2.2) (Au,v)=∑k=(i,j)∈E(ui−uj)(vi−vj),(f,v)=∑i∈Vfivi , (f,1)=∑i∈Vfi.

Define the discrete gradient operator B: R|V| → R|E| such that

 (Bu)k=ui−uj, k=(i,j)∈E, i<j,

so that (Bu)k = (ei−ej, u), where ei and ej are standard Euclidean basis vectors. The operator A is called the graph Laplacian since

 A=BTB.

The operator A is symmetric and positive semi-definite, and its kernel is the space spanned by the constant vector. These properties can also be verified from the matrix form of A, defined in the following way:

 (A)ij = ⎧ di,  i=j;
         ⎨ −1,  i≠j, (i,j)∈E;
         ⎩ 0,   i≠j, (i,j)∉E;

where di is the degree of the i-th vertex.
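To make these definitions concrete, the following NumPy sketch (an illustrative example of ours, not from the original text) builds the discrete gradient B and the Laplacian A = BTB for a small graph and checks the stated properties:

```python
import numpy as np

def incidence_and_laplacian(n, edges):
    """Build the discrete gradient B (|E| x |V|) and the graph Laplacian
    A = B^T B for an unweighted graph with vertices 0..n-1."""
    B = np.zeros((len(edges), n))
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = 1.0, -1.0   # (Bu)_k = u_i - u_j
    return B, B.T @ B

# A small 5-vertex example graph
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]
B, A = incidence_and_laplacian(5, edges)

# A is symmetric positive semi-definite with kernel spanned by the constants
assert np.allclose(A, A.T)
assert np.all(np.linalg.eigvalsh(A) > -1e-12)
assert np.allclose(A @ np.ones(5), 0.0)
# Diagonal entries are vertex degrees; off-diagonal entries are -1 on edges
assert A[0, 0] == 2 and A[0, 1] == -1
```

The same construction extends to the weighted case below by inserting a diagonal weight matrix between the two incidence factors.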

Efficient multilevel graph Laplacian solvers are important in numerous application areas, including finite element and finite difference discretizations of elliptic partial differential equations (PDEs), data mining, clustering in images, and preconditioning of weighted graph Laplacians. Moreover, the theory developed here for multilevel aggregation solvers applied to graph Laplacians should provide insights on how to design a solver for more general weighted graph Laplacians, which also cover anisotropic diffusion problems. Two generalizations of the graph Laplacian systems are as follows.

• Weighted graph Laplacians: Assume that the graph is weighted and the k-th edge is assigned a weight wk; then the corresponding bilinear form of A is

 (Au,v)=∑k=(i,j)∈Ewk(ui−uj)(vi−vj).

Define D as the diagonal matrix whose k-th diagonal entry is equal to wk; then the matrix A can be decomposed as

 A=BTDB.

Finite element and finite difference discretizations of elliptic PDEs with Neumann boundary conditions result in such weighted graph Laplacians.

• Positive definite matrices: Assume that A is defined in terms of the bilinear form

 (Au,v)=∑k=(i,j)∈E(ui−uj)(vi−vj)+∑i∈Vuivi=(Asu,v)+(Atu,v).

By introducing a Lagrange multiplier y, the system can be rewritten as an augmented linear system

 ( As+At   −At1 ) ( u )   (  f   )
 ( −1TAt  1TAt1 ) ( y ) = ( −1Tf ).

These graph Laplacians with lower order terms are similar to discretized PDEs with Dirichlet boundary conditions. A solution of this augmented linear system directly yields the solution of the original problem.

The present paper focuses on designing a multilevel preconditioner that is constructed by recursively applying a space decomposition based on graph matching. The aim is to analyze the matching AMLI solver for the graph Laplacian in detail as a first step in gaining an in-depth understanding of multilevel solvers for elliptic PDEs. The extension of the proposed algorithm to general graph problems is also a subject of current research.

## 3. Space decomposition based on matching

In this section, an outline of the basic idea of matching is provided, a commutative diagram which can be used to estimate the energy norm of the projection onto the piece-wise constant coarse vector space resulting from a matching in a graph is given, and some auxiliary results that are needed later on in the convergence analysis are discussed.

### 3.1. Subspaces by graph partitioning and graph matching

A graph partitioning of G = (V, E) is a set of subgraphs Gi = (Vi, Ei) such that

 ∪iVi=V,Vi∩Vj=∅,i≠j.

In this paper, all subgraphs are assumed to be nonempty and connected. The simplest nontrivial example of such a graph partitioning is a matching, i.e., a collection (subset M) of edges in E such that no two edges in M are incident.

For a given graph partitioning, subspaces of R|V| are defined as

 Vc={v∈R|V| | v is constant on each Vi }.

Note that each vertex in the coarse graph corresponds to a connected subgraph of G and every vertex of G belongs to exactly one such subgraph. The vectors from Vc are constant on these connected subgraphs. Of importance is the orthogonal projection on Vc, which is denoted by Q and defined as follows:

 (3.1) (Qv)i=1|Vk|∑j∈Vkvj,∀i∈Vk.

Given a graph partitioning, the coarse graph Gc = (Vc, Ec) is defined by assuming that all vertices in a subgraph form an equivalence class, and that Vc and Ec are the quotient sets of V and E under this equivalence relation. That is, any vertex in Vc corresponds to a subgraph in the partitioning of G, and an edge (k,l) exists in Ec if and only if the k-th and l-th subgraphs are connected in the graph G. Figure 1 is an example of a matching of a graph and the resulting coarse graph.
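The projection (3.1) is simply an aggregate-wise averaging. The following small NumPy sketch (our own toy example: a path of four vertices matched into two pairs) illustrates it:

```python
import numpy as np

# Matching on a path 0-1-2-3: aggregates {0,1} and {2,3}
n = 4
aggregates = [[0, 1], [2, 3]]

# Orthogonal projection Q onto the piecewise constant space, formula (3.1):
# (Qv)_i = average of v over the aggregate containing vertex i
Q = np.zeros((n, n))
for agg in aggregates:
    for i in agg:
        Q[i, agg] = 1.0 / len(agg)

v = np.array([1.0, 3.0, 5.0, 9.0])
assert np.allclose(Q @ v, [2.0, 2.0, 7.0, 7.0])  # constant on each aggregate
assert np.allclose(Q @ Q, Q)                     # Q is a projection
assert np.allclose(Q, Q.T)                       # and it is orthogonal
```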

As mentioned above, the reason to focus on matching is that it simplifies the computation of several key quantities used in the upcoming estimates, which are derived for a perfect matching; a matching which is not perfect can be analyzed in a similar way.

### 3.2. Commutative diagram

Let B be the discrete gradient of the graph Laplacian A defined in (2.1), and let Q be defined as in (3.1). Assume that there exists an operator Π: R|E| → R|E| such that BQ = ΠB, that is, the following commutative diagram holds true:

 R|V|  --B-->  R|E|
   |Q            |Π
   v             v
  Vc   --B-->  R|E|

The existence of such an operator Π is proven later on. From the commutative relation it follows that

 (3.2) |Qv|2A=∥BQv∥2=∥ΠBv∥2≤∥Π∥2|v|2A.

Thus, an estimate of the A-semi-norm of Q amounts to an estimate of the norm of Π. In the next subsection, an explicit form of Π is constructed and an estimate of its norm is derived.

###### Remark 3.1.

A more general approach for weighted graph Laplacians is to include the weight matrix D, in which case the bound on the norm becomes

 |Qv|2A=(DBQv,BQv)=(DΠBv,ΠBv)≤∥D1/2ΠD−1/2∥2|v|2A,

where D can have some negative weights, which results in a matrix D1/2 that is complex valued. A detailed analysis in such a setting and the application of this idea to anisotropic diffusion problems are discussed in [6].

### 3.3. Construction of Π in case of piece-wise constant spaces

Here, we proceed with an explicit construction and a norm estimate of the operator Π.

For any graph partitioning in which the subgraphs are connected, a given edge belongs either to the set of “internal edges”, whose vertices belong to the same subgraph, or to the set of “external edges”, whose vertices belong to two distinct subgraphs. For example, let V1 and V2 denote the subgraphs 1 and 2 in Fig. 2; then k1 is an internal edge and k2 is an external edge.

Since the vector Qv has the same value at the two endpoints of the internal edge k1, we have that (BQv)k1 = 0. Accordingly, all entries in (Π)k1, the k1-th row of Π, are set to zero:

 (ΠBv)k1=(Π)k1Bv=0.

For the external edge k2, it follows that (Π)k2 must satisfy

 (3.3) (Π)k2(Bv)=(BQv)k2=(1/|V1|)∑i1∈V1vi1−(1/|V2|)∑i2∈V2vi2,

for every v ∈ R|V|. The following lemma is useful in computing explicitly the entries of (Π)k2.

###### Lemma 3.2.

Let A be a symmetric positive semidefinite operator on Rn and let {χi} be a basis of Rn. Assume that the null space of A is one dimensional, namely, there exists a nonzero vector s such that As = 0, and that for every i we have (s,χi) ≠ 0. We then have:

1. For any i, the operator ˜A = A + χiχTi is invertible.

2. The following identity holds for all u:

 (u,s)/(s,s) − (u,χi) = (˜A−1s,Au)/(s,s).
###### Proof.

To establish (i) it suffices to show that ˜Av = 0 implies v = 0. Assuming that ˜Av = 0 for some v, it follows that:

 0=(˜Av,v)=(Av,v)+(χi,v)2.

Note that both terms on the right side of the above identity are nonnegative and, hence, their sum can be zero if and only if both terms are zero. Since A is positive semidefinite by assumption with a one dimensional null space, from (Av,v) = 0 we conclude that v = αs for some scalar α. For the second term we have that (χi,v) = α(χi,s), and since (s,χi) ≠ 0 for all i, it follows that α = 0 and hence v = 0. This proves (i).

Now, applying (i), the result (ii) follows:

 (˜A−1s,Au)/(s,s) = (˜A−1s,Au+(χi,u)χi)/(s,s) − (˜A−1s,(χi,u)χi)/(s,s)
 = (˜A−1s,˜Au)/(s,s) − (χi,u)(s,˜A−1χi)/(s,s)
 = (s,u)/(s,s) − (χi,u)(s,˜A−1χi)/(s,s)
 = (u,s)/(s,s) − (u,χi).

Here, the equality ˜As = As + (χi,s)χi = (χi,s)χi was used, which implies (s,˜A−1χi) = (s,s)/(χi,s); the last step then uses the normalization (χi,s) = 1, which holds for the Euclidean basis considered below. ∎

###### Remark 3.3.

A special case is given by taking s = 1 and χi = ei, where {ei} denotes the standard Euclidean basis. Then, it follows that

 (3.4) ui=⟨u⟩−1n((A+eieTi)−11,Au)

in which ⟨u⟩ = (u,1)/n denotes the average value of u.
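The identity (3.4) is easy to verify numerically. The following NumPy snippet (our own sanity check on a small path graph, not part of the original analysis) confirms it at every vertex:

```python
import numpy as np

# Graph Laplacian of a path on 4 vertices
A = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
n = A.shape[0]
one = np.ones(n)

rng = np.random.default_rng(0)
u = rng.standard_normal(n)

for i in range(n):
    e = np.zeros(n); e[i] = 1.0
    A_tilde = A + np.outer(e, e)          # rank-one update makes A invertible
    rhs = np.linalg.solve(A_tilde, one)   # \tilde A^{-1} 1
    # identity (3.4): u_i = <u> - (1/n) (\tilde A^{-1} 1, A u)
    assert np.isclose(u[i], u.mean() - rhs @ (A @ u) / n)
```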

Next, denote by Am the graph Laplacian of the restriction of G to a subgraph Vm, and set ˜Am = Am + eleTl. Then, applying (3.4) on the subgraph, ul can be expressed through the subgraph average ⟨u⟩m as in (3.5).

 (3.5) ul=⟨u⟩m−(1/|Vm|)(Bm(Am+eleTl)−11m,BmImu),

where l is the local index of the vertex in the vertex set Vm, ⟨u⟩m and 1m are the averaging operator and the constant vector restricted to the subgraph Vm, Bm is the discrete gradient of the subgraph, and Im restricts a global vector to local indices on Vm.

Applying this formula for V1 and V2 gives the row of the operator Π corresponding to the edge k2 that connects V1 and V2 as follows:

 (3.6) (Π)k2=CT1I1+eTk2−CT2I2.

Here C1 is given by

 C1=(1/|V1|)B1(A1+eieTi)−111,

which makes the summation in (3.6) valid. The vector C2 is defined in a similar way.

Assume that the global indices of the vertices in V1 and V2 are ordered consecutively, with the vertices of V1 preceding those of V2. Then, the k2-th row of Π can be expressed as

 (3.7) (Π)k2=[0,…,0,CT1,1,−CT2,0,…,0],

where the entry 1 is at the k2-th position in this row of Π. Note that from (3.5) it follows that the property (3.3) holds for this construction of Π, since by the definition of Q

 (BQv)k2 = ⟨v⟩1−⟨v⟩2
 = vi−vj+(1/|V1|)(B1(A1+eieTi)−111,B1v1)−(1/|V2|)(B2(A2+ejeTj)−112,B2v2)
 = (ek2,Bv)+(C1,B1v1)−(C2,B2v2)
 = (Π)k2Bv,

where k2 = (i,j), and i ∈ V1 and j ∈ V2, both in local indices, are the incident vertices of k2.

## 4. A two-level method

In this section, the orthogonal projection Q given in (3.1), based on a matching, is proven to be stable assuming the maximum degree of the graph is bounded. Then, a two-level preconditioner is derived and the condition number of the system preconditioned by this two-level method is proven to be uniformly bounded (under the same assumption).

### 4.1. Two-level stability

The construction of Π for a matching M proceeds as follows. First, note that all rows of Π that correspond to an edge k ∈ M are identically zero. On the other hand, if the edge k ∉ M, then it is an external edge and, thus, by (3.7), the k-th row of Π is

 (Π)k = [0,…,0, (1/2)(1,−1)((1 −1; −1 1)+(0;1)(0,1))−1(1;1), 1, −(1/2)(1,−1)((1 −1; −1 1)+(0;1)(0,1))−1(1;1), 0,…,0]
 = [0,…,0, 1/2, 1, −1/2, 0,…,0].

Hence,

 (4.1) (Π)kl = ⎧ 1,    k∉M and l=k;
               ⎨ ±1/2, k∉M, l∈M and l∩k≠∅;
               ⎩ 0,    elsewhere.

Alternatively, the entries of Π can be described column-wise:

 (4.2) (Π)kl = ⎧ 1,    l∉M and k=l;
               ⎨ ±1/2, l∈M, k∉M and k∩l≠∅;
               ⎩ 0,    elsewhere.

Formula (4.1) implies that the k-th row of Π is a zero row if k ∈ M, or a row with 3 non-zero entries if k ∉ M, which results in

 ∥Π∥∞=maxk∑l|Πkl|=1+1/2+1/2=2.

Formula (4.2) implies that the l-th column of Π has exactly 1 non-zero entry if l ∉ M, or nl non-zero entries whose values are ±1/2 if l ∈ M. Here nl is the number of edges k satisfying k ∉ M and k∩l ≠ ∅ for the given l ∈ M; thus nl is bounded by 2d−2, where d is the maximum degree of the graph, since an edge can have at most 2d−2 neighboring edges. This leads to

 ∥Π∥1=maxl∑k|Πkl|=max(1,(2d−2)⋅(1/2))=max(1,d−1).

On a graph whose maximal degree d is larger than or equal to 2, the estimates on the infinity norm and the 1-norm of Π result in the following estimate on ∥Π∥2:

 (4.3) ρ(ΠΠT)=∥Π∥22≤∥Π∥1∥Π∥∞=2d−2.
###### Remark 4.1.

Applying Gerschgorin’s theorem directly to the matrix ΠΠT leads to a sharper estimate.

Formula (4.3) implies directly the following lemma.

###### Lemma 4.2.

On any graph whose maximum degree d equals 2 (e.g., a path), the operator Π defined in (4.1) satisfies the estimate

 |Q|2A≤∥Π∥22≤2d−2=2.

Numerical tests show that this estimate on the semi-norm |Q|A is sharp and that it leads to fast convergent and reliable AMG methods.
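Lemma 4.2 can be checked numerically. The sketch below (our own illustration, assuming NumPy) samples the Rayleigh quotient (AQv,Qv)/(Av,v) on a path graph with a perfect matching and confirms that it never exceeds 2:

```python
import numpy as np

def path_laplacian(n):
    """Graph Laplacian of the path 0-1-...-(n-1)."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0] = A[-1, -1] = 1.0
    return A

n = 8
A = path_laplacian(n)

# Perfect matching {0,1},{2,3},... and the resulting averaging projection Q
Q = np.zeros((n, n))
for k in range(0, n, 2):
    Q[k:k+2, k:k+2] = 0.5

# Sample |Q|_A^2 = sup_v (A Q v, Q v)/(A v, v) over random v orthogonal to 1
rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.standard_normal(n)
    v -= v.mean()                          # remove the kernel component
    lhs = (A @ (Q @ v)) @ (Q @ v)
    assert lhs <= 2.0 * ((A @ v) @ v) + 1e-9   # Lemma 4.2: |Q|_A^2 <= 2
```

Random sampling only gives a lower estimate of the supremum, but it is a useful consistency check of the bound.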

### 4.2. A two-level preconditioner

Here, using an estimate of the stability of the matching projection (i.e., the norm |Q|A, where Q is defined via the matching), two-level convergence is established. Assume that for a graph Laplacian A a perfect matching M is given and consider the matrix P whose k-th column is given by

 (4.4) (P)k=eik+ejk,

where k = 1,…,|M| and (ik,jk) is the k-th edge in M. Further, define Q to be the orthogonal projection from Rn onto the range of P, i.e.,

 Q=P(PTP)−1PT.

Similarly to the definition of P, define Y as the matrix whose columns are given by

 (4.5) (Y)k=eik−ejk,

where k = 1,…,|M| and (ik,jk) is the k-th edge in M. Then, the matrix (Y,P) has mutually orthogonal columns, and the columns of Y and P form a hierarchical basis, which can be used to relate the two-level method to a block factorization as follows.

Given A, P, and Y, define

 ˆA=(Y,P)TA(Y,P)=(YTAYYTAPPTAYPTAP).

A direct calculation then shows that

 ˆA=L(YTAY00S)LT,

where

 (4.6) S=PTAP−PTAY(YTAY)−1YTAP

is the Schur complement and

 (4.7) L=(I0PTAY(YTAY)−1I).
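The factorization above is exact and can be verified on a toy problem. The following NumPy sketch (our own example: a 4-vertex path with the perfect matching {(0,1),(2,3)}) assembles the hierarchical basis, the Schur complement (4.6), and the factor (4.7), and checks the identity:

```python
import numpy as np

# Path graph on 4 vertices, perfect matching M = {(0,1), (2,3)}
A = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

# Hierarchical basis: columns e_i + e_j (P) and e_i - e_j (Y) per matched edge
P = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
Y = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])

YP = np.hstack([Y, P])
A_hat = YP.T @ A @ YP

YAY = Y.T @ A @ Y
S = P.T @ A @ P - P.T @ A @ Y @ np.linalg.inv(YAY) @ Y.T @ A @ P  # (4.6)

L = np.eye(4)
L[2:, :2] = P.T @ A @ Y @ np.linalg.inv(YAY)                      # (4.7)

D = np.zeros((4, 4))
D[:2, :2], D[2:, 2:] = YAY, S
assert np.allclose(A_hat, L @ D @ L.T)   # exact block factorization of A_hat
```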

Next, define Gc as the unweighted coarse graph and denote by Ac the graph Laplacian of Gc. In contrast to most of the existing AMG methods, here Ac ≠ PTAP, except in special cases, e.g., for 1-dimensional problems. Let σ be the positive constant such that

 (4.8) σ=supv:(v,1)=0(APv,Pv)(Acv,v).

Then, the fact that all weights in the weighted coarse graph corresponding to PTAP are larger than or equal to one implies (Acv,v) ≤ (APv,Pv), and

 σ(Acv,v)/(APv,Pv)∈[1,σ], ∀v:(v,1)=0.

Consider the two-level preconditioner ˆG which uses the coarse graph Laplacian:

 ˆG=L(YTAY00σAc)LT.

Let M be a preconditioner for YTAY, and let D be a preconditioner for the coarse graph Laplacian Ac. Then, a two-level preconditioner is defined by

 (4.9) ˆB=˜L(M(M+MT−YTAY)−1MT00σD)˜LT,

where

 ˜L=(I0PTAYM−1I).

As observed in [10] and [19], this gives a block matrix representation of the two-level method:

 I−(Y,P)ˆG†(Y,P)TA = (I−Y(YTAY)−1YTA)(I−P(σAc)†PTA)(I−Y(YTAY)−1YTA),
 I−(Y,P)ˆB†(Y,P)TA = (I−YM−TYTA)(I−P(σD)†PTA)(I−YM−1YTA),

where the pseudo-inverse, denoted by †, is used since the coarse graph Laplacian is semi-definite. The convergence of the two-level method can now be estimated by comparing ˆA and the preconditioner ˆB.

The remainder of this section is dedicated to establishing a spectral equivalence between ˆA and ˆG for the two-level matching algorithm. The proof uses the following lemma.

###### Lemma 4.3.

For any x, the Schur complement S as given in (4.6) satisfies

 (4.10) (Sx,x)=infw(A(Yw+Px),Yw+Px).
###### Proof.

Note that

 (AY(YTAY)−1YTAPx,Px) = (AY(YTAY)−1YTAPx,Y(YTAY)−1YTAPx) = ∥Y(YTAY)−1YTAPx∥2A,

because Y(YTAY)−1YTA is the A-orthogonal projection onto the space spanned by the columns of Y and, thus, Y(YTAY)−1YTAPx minimizes the distance (in A-norm) between Px and this space. Hence,

 (Sx,x) = ∥Px∥2A−∥Y(YTAY)−1YTAPx∥2A = infw(A(Yw+Px),Yw+Px). ∎

Let ˆ1 denote the vector satisfying (Y,P)ˆ1 = 1; with this notation, the following lemma holds.

###### Lemma 4.4.

Let cg = σ|Q|2A, where σ is defined as in (4.8). Then, for any v such that (v,ˆ1) = 0, we have

 (4.11) (ˆGv,v)(ˆAv,v)∈[1,cg].
###### Proof.

By Lemma 4.3 we have

 (APx,Px)≥infw(A(Yw+Px),(Yw+Px)).

Furthermore,

 (APx,Px)/(Sx,x) = (APx,Px)/infw(A(Yw+Px),Yw+Px)
 = supw(APx,Px)/(A(Yw+Px),Yw+Px)
 = supu=Yw+Px(AQu,Qu)/(Au,u) ≤ supu(AQu,Qu)/(Au,u) = |Q|2A,

where the third equality uses that Qu = Px for any u = Yw+Px, since the columns of Y are orthogonal to those of P.

Note that the only difference between the preconditioner ˆG and the factorization of ˆA is that the former uses σAc, whereas the latter uses S, to define the 2-2 block. The spectral equivalence constant between the operators σAc and S is obtained as follows:

 infuσ(Acu,u)(APu,Pu)infv(APv,Pv)(Sv,v)≤σ(Acw,w)(Sw,w)≤supuσ(Acu,u)(APu,Pu)supv(APv,Pv)(Sv,v), ∀w:(w,1)=0,

which implies

 σ(Acw,w)(Sw,w)∈[1,σ|Q|2A],∀w:(w,1)=0.

Hence, for any x and any y satisfying (y,1) = 0,

 (xy)T(YTAY00σAc)(xy)(xy)T(YTAY00S)(xy)=(AYx,Yx)+(APy,Py)(AYx,Yx)+(Sy,y)∈[1,σ|Q|2A],

which is equivalent to (4.11) since (Y,P) is nonsingular. ∎

Since the two-level preconditioner ˆG requires exact solvers for YTAY and the coarse graph Laplacian Ac, the convergence rate of the method based on ˆB, which is defined by replacing these exact solves with approximate ones, is of interest. Combining Lemma 4.4 and the two-level convergence estimate (Theorem 4.2 in [10]) yields the following result.

###### Theorem 4.5.

If the preconditioners M and D are spectrally equivalent to YTAY and Ac, respectively, in the sense that

 ((MT+M−YTAY)−1Mu,Mu)(AYu,Yu)∈[1,κs]and(Dw,w)(Acw,w)∈[1,η],∀u,w:(w,1)=0,

then

 (4.12) (ˆBv,v)(ˆAv,v)∈[1,(κs+ση−1)|Q|2A],∀v:(v,ˆ1)=0.

Note that this estimate reduces to (4.11) when M = YTAY and D = Ac.

### 4.3. Convergence estimate for matching

We now show the sharpness of the estimate given by Theorem 4.5 when the graph Laplacian corresponds to a structured grid and the coarse space is given by an aligned matching.

Define an m-dimensional hypercubic grid as the graph G = (V,E) such that the following conditions are satisfied.

1. A vertex corresponds to an m-dimensional integer vector v = (v1,…,vm) with 1 ≤ (v,ek) ≤ Nk, k = 1,…,m. Here {ek} is the Euclidean basis and N1,…,Nm are given positive integers that represent the numbers of vertices along the dimensions.

2. An edge (u,v) is in the edge set E if and only if u − v = ±ek for some k.

Then the energy semi-norm of Q can be estimated for an aligned matching on a hypercubic grid.

###### Lemma 4.6.

Let G be an m-dimensional hypercubic grid and let k be a fixed dimension. Assume that Nk is an even number. The matching along the k-th dimension is defined as

 M={l=(v,v+ek) | v∈V, and (v,ek) is an odd number}.

Let Q be the projection onto the piecewise constant space resulting from the matching M. Then Q satisfies |Q|2A ≤ 2.

###### Proof.

Define the set Ω to be the collection of all edges along the k-th dimension:

 Ω={l=(u,v) | v−u=ek}.

Also define ¯Ω = E∖Ω, and let AΩ and A¯Ω be the graph Laplacians derived from Ω and ¯Ω, respectively.

The graphs induced by the edge set Ω are paths, whose maximum degree is 2, and M is also a matching on these paths. Therefore, by Lemma 4.2, it is true that

 (4.13) (AΩQu,Qu)≤2(AΩu,u).

On the other hand, the matching M is aligned on the set ¯Ω, meaning that any two matched pairs are connected through 0 or 2 edges in ¯Ω; thus the edges in ¯Ω can be subdivided into sets of edges of the same type, one of which is shown in Fig. 3.

Notice that in this figure, the edges (i,j) and (k,l) are in M, while (i,k) and (j,l) are in ¯Ω. Using the definition of Q, the energy norm of Qu is estimated on the subset of ¯Ω indicated by Fig. 3 by

 2(ui+uj2−uk+ul2)2 = 12((ui−uk)+(uj−ul))2 ≤ (ui−uk)2+(uj−ul)2.

This implies that

 (4.14) (A¯ΩQu,Qu)≤(A¯Ωu,u).

Combining (4.13) and (4.14) results in (AQu,Qu) ≤ 2(Au,u), that is, |Q|2A ≤ 2. ∎
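The bound of Lemma 4.6 can be sampled numerically on a small grid. The following NumPy sketch (our own illustration: a 4x4 grid with an x-aligned matching) checks that (AQu,Qu) ≤ 2(Au,u) for random mean-zero vectors:

```python
import numpy as np

# Graph Laplacian of an nx-by-ny grid, vertices indexed v = ix*ny + iy
nx, ny = 4, 4
n = nx * ny
A = np.zeros((n, n))
for ix in range(nx):
    for iy in range(ny):
        v = ix * ny + iy
        for w in ([v + ny] if ix + 1 < nx else []) + ([v + 1] if iy + 1 < ny else []):
            A[v, v] += 1; A[w, w] += 1; A[v, w] -= 1; A[w, v] -= 1

# Aligned matching along the x-dimension: pair (ix, iy) with (ix+1, iy), ix even
Q = np.zeros((n, n))
for ix in range(0, nx, 2):
    for iy in range(ny):
        pair = [ix * ny + iy, (ix + 1) * ny + iy]
        Q[np.ix_(pair, pair)] = 0.5

rng = np.random.default_rng(2)
for _ in range(1000):
    u = rng.standard_normal(n)
    u -= u.mean()
    assert (A @ (Q @ u)) @ (Q @ u) <= 2.0 * ((A @ u) @ u) + 1e-9  # Lemma 4.6
```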

###### Remark 4.7.

A similar estimate follows for aligned partitionings consisting of line segments of size m. Namely, in this case it can be shown that |Q|2A ≤ m holds. This estimate in turn agrees with properties of Chebyshev polynomials, suggesting the use of an AMLI method equipped with certain Chebyshev polynomials. Comparing this result with the result from Lemma 4.6 suggests that using a more shape-regular partitioning rather than one consisting of lines is more appropriate, since this gives smaller values of the semi-norm |Q|A.

A bound on the constant κs follows by using that YTAY is well conditioned and that its condition number depends on the degree of the graph, but not on its size.

###### Lemma 4.8.

Let M be a perfect matching on a graph whose maximum degree is d, and let Y be defined as in (4.5); then we have

 (AYw,Yw)/(w,w)∈[4,4d], ∀w≠0.

###### Proof.

For the lower bound, keep only the matched edges in the sum defining the A-norm of Yw, and note that (Yw)i − (Yw)j = 2wk for k=(i,j)∈M:

 (AYw,Yw)≥∑k=(i,j)∈M((Yw)i−(Yw)j)2=∑k=(i,j)∈M(2wk)2=4wTw.

For the upper bound we have

 ρ(YTAY)≤∥YTAY∥1≤∥YT∥1∥A∥1∥Y∥1=1⋅(2d)⋅2=4d.∎
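The spectral inclusion stated above can be checked on a small example. The following NumPy sketch (our own toy case: a 4-vertex path, maximum degree d = 2, matching {(0,1),(2,3)}) computes the eigenvalues of YTAY, which turn out to lie in the stated interval [4, 4d]:

```python
import numpy as np

# Path on 4 vertices (maximum degree d = 2), perfect matching M = {(0,1),(2,3)}
A = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
Y = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
d = 2

eigs = np.linalg.eigvalsh(Y.T @ A @ Y)   # here: eigenvalues 4 and 6
assert np.all(eigs >= 4.0 - 1e-12)       # lower bound from the matched edges
assert np.all(eigs <= 4.0 * d + 1e-12)   # upper bound via the norm estimate
```

Note that the largest eigenvalue (6) already exceeds 2d = 4 here, which is why the upper bound must scale like 4d rather than 2d.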

From the Lemma it follows that for any ϵ > 0 there exists a smoother M such that the bound on the constant κs in Theorem 4.5 is

 κs≤1+ϵ.

This result in turn implies that an efficient solver for YTAY can be constructed by applying a few Conjugate Gradient iterations, with an overall cost that is linear with respect to the size of A.

The constant σ in (4.8) can be estimated by inspecting the weights of the weighted coarse graph corresponding to PTAP. Taking any two distinct subgraphs (edges) in the matching, say the k-th and l-th, such that (PTAP)kl ≠ 0, it follows that the corresponding entry is equal in absolute value to the number of exterior edges that connect these subgraphs. For a matching aligned with a fixed dimension of a hypercubic grid, these weights are bounded by 2. For a general graph G, the weights in PTAP are bounded by 4, since there are at most 4 distinct edges that connect any 2 distinct edges. Then, letting Ac denote the unweighted graph Laplacian of the coarse graph defined by M, and noting that all off-diagonal entries of Ac are equal to −1 or 0, it follows that

 σ ≤ ⎧ 2, for an aligned matching on a hypercubic grid of any dimension;
     ⎩ 4, for a given matching on any graph.
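The bound σ ≤ 2 for an aligned matching can be sampled numerically. The sketch below (our own illustration on a 4x4 grid) forms the weighted coarse operator PTAP and the unweighted coarse Laplacian Ac, and checks the quadratic-form inequality (APv,Pv) ≤ 2(Acv,v) for random mean-zero coarse vectors:

```python
import numpy as np

# 4x4 grid Laplacian (vertices v = ix*ny + iy) and an x-aligned matching
nx, ny = 4, 4
n = nx * ny
A = np.zeros((n, n))
for ix in range(nx):
    for iy in range(ny):
        v = ix * ny + iy
        for w in ([v + ny] if ix + 1 < nx else []) + ([v + 1] if iy + 1 < ny else []):
            A[v, v] += 1; A[w, w] += 1; A[v, w] -= 1; A[w, v] -= 1

# P sums the two vertices of each matched pair; the coarse graph is a 2x4 grid
pairs = [[ix * ny + iy, (ix + 1) * ny + iy]
         for ix in range(0, nx, 2) for iy in range(ny)]
P = np.zeros((n, len(pairs)))
for k, (i, j) in enumerate(pairs):
    P[i, k] = P[j, k] = 1.0

# Unweighted coarse Laplacian Ac: aggregates joined by at least one edge
W = P.T @ A @ P                        # weighted coarse Laplacian (weights 1 or 2)
Ac = -(W < -0.5).astype(float)         # adjacency of the coarse graph
np.fill_diagonal(Ac, 0.0)
np.fill_diagonal(Ac, -Ac.sum(axis=1))  # diagonal = coarse vertex degrees

# sigma = sup_{v ⊥ 1} (APv,Pv)/(Acv,v); for an aligned matching sigma <= 2
rng = np.random.default_rng(3)
for _ in range(1000):
    v = rng.standard_normal(len(pairs))
    v -= v.mean()
    assert (W @ v) @ v <= 2.0 * ((Ac @ v) @ v) + 1e-9
```

The inequality holds here because every coarse weight in PTAP is either 1 or 2, so the weighted quadratic form is dominated by twice the unweighted one.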
###### Remark 4.9.

These estimates can be generalized to other subgraph partitionings in a similar way. As an example, consider again the graph for a hypercubic grid of any dimension. Then, for line aggregates of size m (aligned with the grid) the following estimates hold

 |Q|2A≤m, κs≤1+ϵ, η=1, σ≤m.

Such estimates give insight into the design of a nearly optimal multilevel method. Moreover, the bounds are sharp enough that the corresponding multilevel method can be proven to have a nearly optimal convergence rate and complexity.

## 5. Algebraic multilevel iteration (AMLI) based on matching

In this section, a multilevel method that recursively uses the two-level matching method from Section 4.2, in combination with a polynomial stabilization, known as the Algebraic Multilevel Iteration (AMLI) cycle, is analyzed. Here, the focus is on proving a nearly optimal convergence rate, that is, one which is nearly independent of the number of unknowns n, while at the same time maintaining low computational complexity.

### 5.1. Multilevel hierarchy

Assume that A is a given graph Laplacian matrix. For each level, define the matching and the prolongation operator according to (4.4), then compute the graph Laplacian of the coarse graph (Recall that,