On Computing the Number of Short Cycles in Bipartite Graphs Using the Spectrum of the Directed Edge Matrix


Ali Dehghan and Amir H. Banihashemi
Abstract

Counting short cycles in bipartite graphs is a fundamental problem of interest in many fields, including the analysis and design of low-density parity-check (LDPC) codes. There are two computational approaches to count short cycles (cycles of length smaller than $2g$, where $g$ is the girth of the graph) in bipartite graphs. The first approach is applicable to a general (irregular) bipartite graph, and uses the spectrum $\{\eta_i\}$ of the directed edge matrix of the graph to compute the multiplicity $N_k$ of $k$-cycles with $k < 2g$ through the simple equation $N_k = \sum_i \eta_i^k / (2k)$. This approach has a computational complexity of $O(|E|^3)$, where $|E|$ is the number of edges in the graph. The second approach is only applicable to bi-regular bipartite graphs, and uses the spectrum $\{\lambda_i\}$ of the adjacency matrix (graph spectrum) and the degree sequences of the graph to compute $N_k$. The complexity of this approach is $O(|V|^3)$, where $|V|$ is the number of nodes in the graph. This complexity is lower than that of the first approach, but the equations involved in the computations of the second approach are very tedious, particularly for larger values of $k$. In this paper, we establish an analytical relationship between the two spectra $\{\eta_i\}$ and $\{\lambda_i\}$ for bi-regular bipartite graphs. Through this relationship, the former spectrum can be derived from the latter through simple equations. This allows the computation of $N_k$ using the above equation, but with a complexity of $O(|V|^3)$ rather than $O(|E|^3)$.

Index Terms: Counting cycles, short cycles, bipartite graphs, Tanner graphs, low-density parity-check (LDPC) codes, bi-regular bipartite graphs, irregular bipartite graphs, directed edge matrix, girth.

I Introduction

Bipartite graphs appear in many fields of science and engineering to represent systems that are described by local constraints on different subsets of the variables involved in the description of the system. In such a representation, the nodes on one side of the bipartition represent the variables, while the nodes on the other side represent the constraints. One example is the Tanner graph representation of low-density parity-check (LDPC) codes, where variable nodes represent the code bits and the constraints are parity-check equations. In the bipartite graph representation of systems, the cycle distribution of the graph often plays an important role in understanding the properties of the system. For example, the performance of LDPC codes, both in the waterfall and error floor regions, is highly dependent on the distribution of short cycles in the Tanner graph [1]-[10].

Motivated by this, in the coding community, there has been a large body of work on the distribution and counting of cycles in bipartite graphs, see, e.g., [3], [11], [12], [13], [14].

Generally, counting cycles of a given length in a given graph is known to be NP-hard [15]. The problem remains NP-hard even for the family of bipartite graphs [16]. There are, in general, two computational approaches to count the number of short cycles in bipartite graphs. The first approach is applicable to any (irregular) bipartite graph, and is described in the following theorem.

Theorem 1.

[11] Consider a bipartite graph $G$ with the directed edge matrix $A_e$, and let $\{\eta_i\}$ be the spectrum of $A_e$. Then, the number of $k$-cycles in $G$ is given by $N_k = \frac{1}{2k} \sum_i \eta_i^k$, for $k < 2g$, where $g$ is the girth of $G$.

The result of Theorem 1 follows from the property of $A_e$ that the number of tailless backtrackless closed (TBC) walks of length $k$ in $G$ is equal to $\operatorname{tr}(A_e^k)$, where $\operatorname{tr}(\cdot)$ denotes the trace. This, together with the fact that the set of TBC walks of length less than $2g$ coincides with the set of cycles of the same size [12], proves the result. To use Theorem 1, one needs to calculate the eigenvalues of $A_e$. This has a complexity of $O(|E|^3)$, where $|E|$ is the number of edges in the graph [17].
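As a concrete illustration of Theorem 1 (our own example, not taken from [11]), the following Python sketch computes $N_k = \frac{1}{2k}\sum_i \eta_i^k$ from a given multiset of eigenvalues of the directed edge matrix. The hard-coded spectrum is that of a single 4-cycle: its directed edge matrix is a permutation matrix consisting of the two rotation directions of the cycle, so its eigenvalues are the fourth roots of unity, each with multiplicity two.

```python
import numpy as np

def num_cycles(eta, k):
    """Theorem 1: N_k = (1/2k) * sum_i eta_i^k, valid for k < 2g."""
    # Complex eigenvalues occur in conjugate pairs, so the imaginary parts cancel.
    return round(np.real(np.sum(np.asarray(eta, dtype=complex) ** k)) / (2 * k))

# Spectrum of the directed edge matrix of a single 4-cycle (girth g = 4):
# the fourth roots of unity, each with multiplicity 2.
eta_c4 = [1, 1, -1, -1, 1j, 1j, -1j, -1j]

print(num_cycles(eta_c4, 4))  # -> 1 (the 4-cycle itself)
print(num_cycles(eta_c4, 6))  # -> 0 (no 6-cycles; note 6 < 2g = 8)
```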

The second approach, which was introduced by Blake and Lin [14] and extended by Dehghan and Banihashemi [18], uses the spectrum of the adjacency matrix and the degree distribution of the graph. It has a lower complexity of $O(|V|^3)$, where $|V|$ is the number of nodes in the graph, but is only applicable to bi-regular bipartite graphs. One drawback of this approach is that the recursive equations for calculating $N_k$ are tedious, particularly for larger values of $k$. The following theorem describes the general form of the calculation of $N_k$, along with some of its specifics.

Theorem 2.

[18] For a given $(d_v, d_c)$-regular bipartite graph $G$, the number of $k$-cycles, $N_k$, is given by

(1)

where $\{\lambda_i\}$ is the spectrum of $G$, and the remaining two quantities in (1) are the number of closed cycle-free walks of length $k$ and the number of closed walks of length $k$ that contain a cycle in $G$, respectively. For the shortest cycle lengths, we have

and

(2)

where $n$ and $m$ are the number of variable and check nodes in $G$, respectively, and the two functions in (2) represent the number of closed cycle-free walks of a given length from a variable node (respectively, a check node) to itself. (Generating functions are used to compute these functions recursively [14].)

In this work, we investigate the relationship between the above two approaches. In particular, our goal is to find the relationship between the two spectra $\{\eta_i\}$ and $\{\lambda_i\}$ for bi-regular bipartite graphs. We show that the former spectrum includes the eigenvalues $\pm 1$, $\pm\sqrt{(d_v - 1)(d_c - 1)}$, and purely imaginary eigenvalues of the form $\pm\sqrt{-(d_v - 1)}$ and $\pm\sqrt{-(d_c - 1)}$, where $d_v$ and $d_c$ are the variable and check node degrees. The remaining eigenvalues of the directed edge matrix are related to the graph spectrum through simple quadratic equations whose coefficients are determined by the node degrees $d_v$ and $d_c$. This allows one to compute $N_k$ using Theorem 1, but through the calculation of the graph spectrum rather than the direct calculation of $\{\eta_i\}$. As a result, the computational complexity reduces to $O(|V|^3)$ from $O(|E|^3)$, while avoiding the tedious equations of Theorem 2.

The organization of the rest of the paper is as follows: In Section II, we present some definitions and notations. Section III contains our result on the relationship between the two spectra $\{\lambda_i\}$ and $\{\eta_i\}$, and the derivation of the latter from the former. The paper is concluded in Section IV.

II Definitions and Notations

A graph $G = (V, E)$ is a set $V$ of nodes and a multiset $E$ of unordered pairs of nodes, called edges. If $\{u, v\} \in E$, we say that there is an edge between $u$ and $v$ (i.e., $u$ and $v$ are adjacent). We may also use the notation $uv$ or $(u, v)$ for the edge $\{u, v\}$. We say that a graph is simple if it does not have any loop (i.e., no edge of the form $\{u, u\}$) or parallel edges (i.e., no two edges between the same two nodes). A directed graph (digraph) is a set of nodes and a multiset of ordered pairs of nodes, called arcs. For an arc $e = (u, v)$, we define the origin of $e$ to be $u$, and the terminus of $e$ to be $v$. The inverse arc of $e$, denoted by $\hat{e}$, is the arc formed by switching the origin and terminus of $e$. A digraph is called symmetric if, whenever $e$ is one of its arcs, its inverse arc $\hat{e}$ is as well. For each graph $G$, its symmetric digraph is defined by replacing each edge of $G$ with two arcs in opposite directions. See Fig. 1. Thus, there is a simple correspondence between $G$ and its symmetric digraph.

Fig. 1: A graph and its symmetric digraph.

In a graph $G$, the number of edges incident to a node $v$ is called the degree of $v$, and is denoted by $d(v)$. Also, $\Delta(G)$ and $\delta(G)$ are used to denote the maximum and minimum degree of $G$, respectively. For every node $v$, the set $N(v)$ denotes the set of neighbors of $v$ in $G$.

For a graph $G$, a walk of length $k$ is a sequence of nodes $v_0, v_1, \ldots, v_k$ in $G$ such that $\{v_i, v_{i+1}\} \in E$, for all $0 \leq i \leq k-1$. A walk can alternatively be represented by its sequence of edges. A walk is a path if all the nodes are distinct. A walk is called a closed walk if the two end nodes are the same, i.e., if $v_0 = v_k$. Under the same condition, a path is called a cycle. We denote cycles of length $k$, also referred to as $k$-cycles, by $C_k$. We use $N_k$ for the number of $k$-cycles in the graph. The length of the shortest cycle(s) in a graph is called the girth and is denoted by $g$.

Consider a walk of length $k$ represented by the sequence of edges $e_1, e_2, \ldots, e_k$. The walk is backtrackless if $e_{i+1} \neq e_i$, for any $1 \leq i \leq k-1$. Also, the walk is tailless if $e_k \neq e_1$. In this paper, we use the term TBC walk to refer to a tailless backtrackless closed walk.
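To make these definitions concrete, the following short Python sketch (illustrative only; the function names are ours) checks whether a closed walk, given by its sequence of edges, is backtrackless and tailless, i.e., a TBC walk.

```python
def is_backtrackless(edges):
    """No edge of the walk is immediately retraced."""
    return all(edges[i] != edges[i + 1] for i in range(len(edges) - 1))

def is_tailless(edges):
    """The last edge of the closed walk differs from its first edge."""
    return edges[-1] != edges[0]

def is_tbc(edges):
    """Tailless backtrackless closed (TBC) walk, the walk being given by its edges."""
    return is_backtrackless(edges) and is_tailless(edges)

# A 4-cycle traversed once (each edge represented by the set of its two end nodes):
cycle = [frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 1)]]
print(is_tbc(cycle))                  # True: a cycle is a TBC walk
print(is_tbc(cycle[:1] + cycle[:1]))  # False: walking back and forth over one edge
```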

A graph $G$ is connected if there is a path between any two nodes of $G$. A graph $G = (V, E)$ is called bipartite if the node set $V$ can be partitioned into two disjoint subsets $U$ and $W$, i.e., $V = U \cup W$, such that every edge in $E$ connects a node from $U$ to a node from $W$. A graph is bipartite if and only if the lengths of all its cycles are even. Tanner graphs of LDPC codes are bipartite graphs, in which $U$ and $W$ are referred to as variable nodes and check nodes, respectively. Parameters $n$ and $m$ in this case are used to denote $|U|$ and $|W|$, respectively. Parameter $n$ is the code's block length, and the code rate $R$ satisfies $R \geq 1 - m/n$.

A bipartite graph is called bi-regular if all the nodes on the same side of the bipartition have the same degree, i.e., if all the nodes in $U$ have the same degree $d_U$ and all the nodes in $W$ have the same degree $d_W$. In the rest of the paper, we sometimes use the notations $d_v$ and $d_c$ as replacements for $d_U$ and $d_W$, respectively, to follow the notation commonly used in coding to denote variable and check node degrees. It is clear that, for a bi-regular graph, $n\, d_v = m\, d_c = |E|$. A bipartite graph that is not bi-regular is called irregular. A bipartite graph is called complete, and is denoted by $K_{n,m}$, if every node in $U$ is connected to every node in $W$. The degree sequences of a bipartite graph are defined as the two monotonically non-increasing sequences of the node degrees on the two sides of the graph. For instance, the complete bipartite graph $K_{n,m}$ has degree sequences $(m, m, \ldots, m)$ and $(n, n, \ldots, n)$.

The adjacency matrix of a graph $G$ with nodes $v_1, \ldots, v_{|V|}$ is the $|V| \times |V|$ matrix $A = [a_{ij}]$, where $a_{ij}$ is the number of edges connecting the node $v_i$ to the node $v_j$, for all $i$ and $j$. Similarly, the adjacency matrix of a digraph is the matrix $A = [a_{ij}]$, where $a_{ij}$ is one if and only if $(v_i, v_j)$ is an arc of the digraph. The adjacency matrix of a graph is symmetric, and since we assumed that $G$ has no parallel edges, $a_{ij} \in \{0, 1\}$, for all $i$ and $j$. Moreover, since $G$ has no loops, $a_{ii} = 0$, for all $i$.

An eigenvalue of a square matrix $T$ is a number $\lambda$ such that $T\mathbf{x} = \lambda\mathbf{x}$, for some nonzero vector $\mathbf{x}$. (Throughout the paper, all vectors are assumed to be column vectors.) The vector $\mathbf{x}$ is then called an eigenvector of $T$. The set of the eigenvalues of the adjacency matrix of a graph $G$ is called the spectrum of $G$. The determinant $\det(T - xI)$, where $I$ is the identity matrix, is called the characteristic polynomial of $T$ (with variable $x$). The roots of this polynomial are the eigenvalues of $T$. An eigenvalue $\lambda$ of $T$ is said to have multiplicity $\theta$ if, when the characteristic polynomial is factorized into linear factors, the factor $(x - \lambda)$ appears $\theta$ times. If $\lambda$ is an eigenvalue of $T$, then the subspace $\{\mathbf{x} : T\mathbf{x} = \lambda\mathbf{x}\}$ is called the eigenspace of $T$ associated with $\lambda$. The dimension of this eigenspace is at most the multiplicity of $\lambda$.

There are some known results about the eigenvalues and eigenvectors of the adjacency matrix that we review below and use in our work (see, e.g., [19]). (1) If $\lambda$ is an eigenvalue of $A$, then $\lambda^k$ is an eigenvalue of $A^k$. (2) [Perron-Frobenius, Symmetric Case] Let $A$ be the adjacency matrix of a connected graph $G$, and let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{|V|}$ be the spectrum of $G$. Then, $\lambda_1 > \lambda_2$ (i.e., the multiplicity of the largest eigenvalue of $A$ is one). (3) The largest eigenvalue of a $(d_v, d_c)$ bi-regular bipartite graph is $\sqrt{d_v d_c}$ [20]. (4) A graph is bipartite if and only if its spectrum is symmetric about the origin. (5) By Properties (2) and (4), in connected bipartite graphs, the multiplicity of the smallest eigenvalue is also one. (6) By Property (4), for a given bipartite graph $G$, if $\lambda$ is an eigenvalue of $G$ with multiplicity $\theta$, then $-\lambda$ is also an eigenvalue with multiplicity $\theta$. Thus, the spectrum of $G$ has the following form $\{\pm\lambda_1, \pm\lambda_2, \ldots, \pm\lambda_s, 0, \ldots, 0\}$, for some $s$, where $\lambda_1 > \lambda_2 > \cdots > \lambda_s > 0$. (7) The adjacency matrix of $G$ has $|V|$ linearly independent eigenvectors, such that for each $i$, there are as many linearly independent eigenvectors associated with each of the eigenvalues $\lambda_i$ and $-\lambda_i$ as the multiplicity of these eigenvalues.

Another important property of the adjacency matrix is that the number of walks between any two nodes of the graph can be determined using the powers of this matrix. In other words, the entry in the $i$-th row and the $j$-th column of $A^k$, $k \geq 1$, is the number of walks of length $k$ between nodes $v_i$ and $v_j$. Consequently, the total number of closed walks of length $k$ in $G$ is $\operatorname{tr}(A^k)$, where $\operatorname{tr}(\cdot)$ is the trace of a matrix. It is well-known that $\operatorname{tr}(A^k) = \sum_i \lambda_i^k$, and thus the number of closed walks of different lengths in a graph can be obtained using the spectrum of the graph.
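The following numerical check (our own illustration) verifies the identity $\operatorname{tr}(A^k) = \sum_i \lambda_i^k$, together with Properties (3) and (4) above, on the complete bipartite graph $K_{2,3}$, which is bi-regular with degrees $d_v = 3$ and $d_c = 2$.

```python
import numpy as np

# Adjacency matrix of K_{2,3}: 2 nodes of degree 3 and 3 nodes of degree 2.
X = np.ones((2, 3))
A = np.block([[np.zeros((2, 2)), X], [X.T, np.zeros((3, 3))]])

lam = np.linalg.eigvalsh(A)  # spectrum of the graph

# Property (4): the spectrum of a bipartite graph is symmetric about the origin.
assert np.allclose(np.sort(lam), np.sort(-lam))
# Property (3): the largest eigenvalue of a (d_v, d_c) bi-regular graph is sqrt(d_v * d_c).
assert np.isclose(lam.max(), np.sqrt(3 * 2))

# tr(A^k) = sum_i lambda_i^k = total number of closed walks of length k.
for k in range(1, 7):
    assert np.isclose(np.trace(np.linalg.matrix_power(A, k)), np.sum(lam ** k))
print("trace identity and Properties (3)-(4) verified on K_{2,3}")
```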

For a given graph $G$, the directed edge matrix $A_e$ is a $2|E| \times 2|E|$ matrix defined as follows. For each edge $\{u, v\}$ in $G$, we consider the two opposite arcs $(u, v)$ and $(v, u)$, and denote them by $e_i$ and $\hat{e}_i$ (i.e., $\hat{e}_i$ is the inverse arc of $e_i$). We then define

$\big(A_e\big)_{e_i, e_j} = \begin{cases} 1, & \text{if the terminus of } e_i \text{ is the origin of } e_j \text{ and } e_j \neq \hat{e}_i, \\ 0, & \text{otherwise.} \end{cases}$   (3)

In other words, for a given graph $G$, we consider its associated symmetric digraph, and then calculate $A_e$ from it using (3). For example, the directed edge matrix of the graph shown in Fig. 1 can be obtained in this way.


The number of $k$-cycles, $N_k$, in a bipartite graph can thus be obtained from the spectrum of $A_e$ using Theorem 1.
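The following Python sketch (our own illustration of the definition in (3)) constructs the directed edge matrix of a bipartite graph from its adjacency matrix by listing the two opposite arcs of every edge, and then applies Theorem 1. It is run on the complete bipartite graph $K_{2,3}$, which has girth 4 and exactly three 4-cycles.

```python
import numpy as np

def directed_edge_matrix(A):
    """Directed edge matrix per (3): entry (e_i, e_j) is 1 iff the terminus of arc
    e_i equals the origin of arc e_j and e_j is not the inverse arc of e_i."""
    nodes = range(A.shape[0])
    arcs = [(u, v) for u in nodes for v in nodes if A[u, v]]
    Ae = np.zeros((len(arcs), len(arcs)))
    for i, (u, v) in enumerate(arcs):
        for j, (x, y) in enumerate(arcs):
            if v == x and y != u:
                Ae[i, j] = 1
    return Ae

def num_cycles(Ae, k):
    """Theorem 1: N_k = tr(Ae^k) / (2k) = (1/2k) * sum_i eta_i^k, for k < 2g."""
    eta = np.linalg.eigvals(Ae)
    return round(np.real(np.sum(eta ** k)) / (2 * k))

# Adjacency matrix of K_{2,3}.
X = np.ones((2, 3))
A = np.block([[np.zeros((2, 2)), X], [X.T, np.zeros((3, 3))]])

Ae = directed_edge_matrix(A)                       # a 2|E| x 2|E| = 12 x 12 matrix
print([num_cycles(Ae, k) for k in (4, 5, 6, 7)])   # -> [3, 0, 0, 0]
```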

The rank of a matrix $T$, denoted by $\operatorname{rank}(T)$, is the dimension of the vector space generated by its columns. This corresponds to the maximum number of linearly independent columns of $T$. The rank is also the dimension of the space spanned by the rows of $T$. Thus, if $T$ is an $r \times s$ matrix, then

$\operatorname{rank}(T) = \operatorname{rank}(T^{\mathsf{T}}) \leq \min(r, s),$   (4)

where $T^{\mathsf{T}}$ is the transpose of $T$. The kernel (null space) of a matrix $T$ is the set of solutions to the equation $T\mathbf{x} = \mathbf{0}$, where $\mathbf{0}$ is the zero vector. The dimension of the null space of $T$ is called the nullity of $T$ and is denoted by $\operatorname{nullity}(T)$. For an $r \times s$ matrix $T$, we have (Rank-Nullity Theorem):

$\operatorname{rank}(T) + \operatorname{nullity}(T) = s.$   (5)
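A quick numerical illustration of (4) and (5) (ours, not part of the paper):

```python
import numpy as np

T = np.array([[1., 2., 3.],
              [2., 4., 6.]])   # a 2 x 3 matrix whose rows are linearly dependent

rank = np.linalg.matrix_rank(T)
nullity = T.shape[1] - rank    # Rank-Nullity Theorem: rank(T) + nullity(T) = s

assert rank == np.linalg.matrix_rank(T.T)  # (4): rank(T) = rank(T^T)
print(rank, nullity)                       # -> 1 2
```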

III The Relationship between the Spectra of $A_e$ and $A$ for Bi-regular Bipartite Graphs, and the New Method to Count Short Cycles

In [21], it was shown that for a regular graph, the eigenvalues of the directed edge matrix can be computed from those of the adjacency matrix. A key component in the derivations of [21] is the special properties that the adjacency matrix has as a result of the regularity of the graph. For the bi-regular graphs considered in this work, however, such properties do not exist, and thus the derivations are quite different. In this section, we derive the spectrum of $A_e$ from the graph spectrum for bi-regular bipartite graphs, and then use the results to count the short cycles of the graph by Theorem 1.

To derive our results, we first define an auxiliary matrix $M$ as a function of $A$. We then find the eigenvalues of $M^2$, which are related, on the one hand, to those of $A$ and, on the other hand, to those of $A_e$. Through these relationships, we derive the spectrum of $A_e$ from that of $A$. In the following, for simplicity, we use $d_v$ and $d_c$ to denote the variable and check node degrees, respectively.

For a bi-regular bipartite graph $G$, let $M$ be a matrix whose rows and columns are indexed by the ordered pairs of nodes of $G$, and whose entries are given by

$m_{(u,v),(x,y)} = a_{uv}\, a_{xy}\, \delta_{vx} \left(1 - \delta_{uy}\right),$   (6)

where $\delta$ is the Kronecker delta (which is equal to $1$ if its two arguments are equal, and equal to zero, otherwise), and $a_{uv}$ is the entry of the adjacency matrix of $G$ corresponding to the nodes $u$ and $v$. In the rest of the paper, we assume that the rows and columns of $M$ are sorted in the following order: first, the pairs $(u, v)$ with $u \in U$, $v \in W$, and $\{u, v\} \in E$; second, the pairs $(v, u)$ with $\{u, v\} \in E$; and finally, all the other pairs. Note that the union of the first two sets is the set of directed edges (arcs) in the symmetric digraph associated with $G$. Also, by (6), $m_{(u,v),(x,y)} \neq 0$ if and only if we have

  • $a_{uv} = a_{xy} = 1$ (i.e., both $(u, v)$ and $(x, y)$ are arcs of the symmetric digraph),

  • $v = x$ (i.e., the terminus of $(u, v)$ is the origin of $(x, y)$), and

  • $u \neq y$ (i.e., $(x, y)$ is not the inverse arc of $(u, v)$).

Thus, by (3), the matrix $M$ has the following form

$M = \begin{pmatrix} A_e & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix},$   (7)

and by (7), we have the following result.

Lemma 1.

The eigenvalues of $M$ are the same as those of $A_e$, with the addition of a number of zero eigenvalues.
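The following sketch illustrates Lemma 1 numerically on $K_{2,3}$. It assumes the reading of (6) given above, i.e., $m_{(u,v),(x,y)} = a_{uv}\,a_{xy}\,\delta_{vx}(1 - \delta_{uy})$ with rows and columns indexed by all ordered pairs of nodes, and checks that the non-zero eigenvalues of $M$ coincide (in magnitude) with those of the directed edge matrix $A_e$, the remaining eigenvalues of $M$ being zero.

```python
import numpy as np

def adjacency_k23():
    X = np.ones((2, 3))
    return np.block([[np.zeros((2, 2)), X], [X.T, np.zeros((3, 3))]])

def directed_edge_matrix(A):
    nodes = range(A.shape[0])
    arcs = [(u, v) for u in nodes for v in nodes if A[u, v]]
    Ae = np.zeros((len(arcs), len(arcs)))
    for i, (u, v) in enumerate(arcs):
        for j, (x, y) in enumerate(arcs):
            if v == x and y != u:      # consecutive arcs, no backtracking (Eq. (3))
                Ae[i, j] = 1
    return Ae

def auxiliary_matrix(A):
    """Assumed form of (6): m_{(u,v),(x,y)} = a_uv * a_xy * delta(v,x) * (1 - delta(u,y))."""
    n = A.shape[0]
    pairs = [(u, v) for u in range(n) for v in range(n)]
    M = np.zeros((n * n, n * n))
    for i, (u, v) in enumerate(pairs):
        for j, (x, y) in enumerate(pairs):
            if v == x and u != y:
                M[i, j] = A[u, v] * A[x, y]
    return M

A = adjacency_k23()
Ae, M = directed_edge_matrix(A), auxiliary_matrix(A)

abs_Ae = np.sort(np.abs(np.linalg.eigvals(Ae)))
eig_M = np.linalg.eigvals(M)
abs_M_nonzero = np.sort(np.abs(eig_M[np.abs(eig_M) > 1e-8]))

# Lemma 1: the eigenvalues of M are those of Ae, plus additional zero eigenvalues.
# (Magnitudes are compared, which is robust to the ordering of complex eigenvalues.)
assert np.allclose(abs_Ae, abs_M_nonzero)
print(len(eig_M) - len(abs_M_nonzero), "additional zero eigenvalues in M")
```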

Furthermore, since $G$ is bipartite, and based on the labeling of rows and columns (i.e., first are listed the arcs from $U$ to $W$, followed by the arcs from $W$ to $U$), $A_e$ has the following form

$A_e = \begin{pmatrix} \mathbf{0} & P \\ Q & \mathbf{0} \end{pmatrix},$   (8)

where $P$ and $Q$ are $|E| \times |E|$ matrices. As an example, by the ordering just described (the arcs from $U$ to $W$ are listed first, followed by their inverse arcs in the same order), the directed edge matrix of the graph shown in Fig. 1 takes this block form.

From (7) and (8), one can see that the matrix $M^2$ has the following form:

(9)

or equivalently,

(10)

It is easy to see that the entry of $M^2$ corresponding to the row index $(u, v)$ and the column index $(x, y)$ is given by

(11)

We thus have

(12)

Next, we study the structure of the eigenvectors of $M^2$.

Lemma 2.

Consider a number $\mu \neq 0$ and a vector $\mathbf{x}$ whose elements are indexed by the ordered pairs of nodes, and denote the element that corresponds to the pair $(u, v)$ in the vector by $x_{(u,v)}$. Then, $\mathbf{x}$ is an eigenvector of $M^2$ associated with the eigenvalue $\mu$ if and only if, for each pair $(u, v)$, where $u \in U$ and $v \in W$, we have

(13)

and for each pair $(v, u)$, where $v \in W$ and $u \in U$, we have

(14)

and for all the other pairs $(u, v)$, $x_{(u,v)} = 0$.

Proof.

By the definition of eigenvalue/eigenvector and (12), it is clear that, for $\mu \neq 0$, we must have $x_{(u,v)} = 0$ for all cases where the nodes $u$ and $v$ are on the same side of the graph. On the other hand, for each pair $(u, v)$, where $u \in U$ and $v \in W$, by the definition of eigenvalue/eigenvector and (12), we obtain (13).

Equation (14) is derived similarly. ∎

III-A From the non-zero eigenvalues of $A$ to the eigenvalues of $M^2$

Lemma 3.

Let $\lambda$ be a non-zero eigenvalue of the adjacency matrix $A$ of a bi-regular bipartite graph with node degrees $d_v$ and $d_c$. Then the solutions of the quadratic equation $\mu^2 + (d_v + d_c - 2 - \lambda^2)\,\mu + (d_v - 1)(d_c - 1) = 0$ are two eigenvalues of $M^2$.

Proof.

Let $\lambda$ be a non-zero eigenvalue of the adjacency matrix $A$ with a corresponding eigenvector $\mathbf{y}$ (note that the elements of the eigenvector are sorted by listing the elements corresponding to the nodes in $U$ first, followed by those corresponding to the nodes in $W$). By using $\mathbf{y}$, we define a vector $\mathbf{x}$, indexed by the ordered pairs of nodes, in the following way (the element corresponding to the pair $(u, v)$ in $\mathbf{x}$ is denoted by $x_{(u,v)}$):

(15)

where $c_1$ and $c_2$ are constants. Now, we show that, by the proper choice of $c_1$ and $c_2$, the vector $\mathbf{x}$ is an eigenvector of $M^2$, and in the process we find the corresponding eigenvalues $\mu$.

By substituting (15) in (13), we have:

(16)

where in the second and third last steps, we have used the definition of the eigenvalues/eigenvectors of $A$. From (III-A), and considering that $\lambda \neq 0$, we have:

(17)

From (17) and (15), we obtain:

(18)

By solving (18), and noting that $\lambda \neq 0$, we have:

(19)

and

$\mu^2 + (d_v + d_c - 2 - \lambda^2)\,\mu + (d_v - 1)(d_c - 1) = 0.$   (20)

Similarly, by substituting (15) in (14), and taking the same steps as those taken in the derivation of (III-A), we have:

(21)

From (21) and (15), we have:

(22)

By solving (22), and again noting that $\lambda \neq 0$, we obtain:

(23)

and the same equation as in (20).

Therefore, by solving (20), we find the eigenvalues $\mu$ of $M^2$ corresponding to $\lambda$, and then, by substituting the obtained values in (19) and (23), we find the constants $c_1$ and $c_2$. These are then replaced in (15) to obtain the corresponding eigenvectors of $M^2$. ∎
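The quadratic relation of Lemma 3 can be checked numerically. The sketch below (our own illustration; the explicit form of the quadratic is the one given in Lemma 3 and (20), which is consistent with the Ihara-Bass formula for bi-regular bipartite graphs) takes each non-zero eigenvalue $\lambda$ of the adjacency matrix of $K_{2,3}$, solves the quadratic, and confirms that both roots appear among the eigenvalues of $A_e^2$ (equivalently, among the non-zero eigenvalues of $M^2$).

```python
import numpy as np

def directed_edge_matrix(A):
    nodes = range(A.shape[0])
    arcs = [(u, v) for u in nodes for v in nodes if A[u, v]]
    Ae = np.zeros((len(arcs), len(arcs)))
    for i, (u, v) in enumerate(arcs):
        for j, (x, y) in enumerate(arcs):
            if v == x and y != u:
                Ae[i, j] = 1
    return Ae

dv, dc = 3, 2                            # K_{2,3} is bi-regular with degrees 3 and 2
X = np.ones((2, 3))
A = np.block([[np.zeros((2, 2)), X], [X.T, np.zeros((3, 3))]])

Ae = directed_edge_matrix(A)
eig_Ae2 = np.linalg.eigvals(Ae @ Ae)     # spectrum of A_e^2

for lam in np.linalg.eigvalsh(A):
    if abs(lam) < 1e-8:
        continue                         # Lemma 3 is stated for non-zero eigenvalues
    # Roots of mu^2 + (dv + dc - 2 - lam^2) * mu + (dv - 1)(dc - 1) = 0, i.e., Eq. (20).
    for root in np.roots([1.0, dv + dc - 2 - lam**2, (dv - 1) * (dc - 1)]):
        assert np.min(np.abs(eig_Ae2 - root)) < 1e-6
print("both roots of (20) appear in the spectrum of A_e^2 for every non-zero eigenvalue of A")
```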

Next, we discuss how the eigenvalues of $A_e$ can be computed from those of $M^2$.

III-B From the spectrum of $M^2$ to that of $A_e$

Lemma 4.

[11] Let $G$ be a bi-regular bipartite graph and $A_e$ be its directed edge matrix. Then, the eigenvalues of $A_e$ are symmetric with respect to the origin. Moreover, $\mu$ is an eigenvalue of $A_e^2$ if and only if $\pm\sqrt{\mu}$ are eigenvalues of $A_e$.

Lemma 5.

Let $G$ be a bi-regular bipartite graph. Then the spectrum of $A_e$ can be computed from that of $M^2$, i.e., if $M^2$ has a non-zero eigenvalue $\mu$ with multiplicity $2\theta$, then $A_e$ has the eigenvalues $\pm\sqrt{\mu}$, each with multiplicity $\theta$.

Proof.

The proof follows from Lemma 4, (7) and (10). ∎

Using Lemmas 1 and 5, one can obtain the spectrum of $A_e$ from that of $M^2$.

III-C From the spectrum of $A$ to that of $A_e$

Theorem 3.

Let $G$ be a connected bi-regular bipartite graph such that each node in $U$ has degree $d_v$ and each node in $W$ has degree $d_c$, where $d_v \geq 2$ and $d_c \geq 2$. Also, let $n = |U|$ and $m = |W|$, so that $n\, d_v = m\, d_c = |E|$. The eigenvalues of the directed edge matrix $A_e$ of $G$ can then be computed from the eigenvalues of the adjacency matrix $A$ as follows:
Step 1. For each strictly negative eigenvalue $\lambda$ of $A$ other than the smallest eigenvalue $-\sqrt{d_v d_c}$, use Equation (20) to find its two solutions. For each solution $\mu$, the numbers $\pm\sqrt{\mu}$ are eigenvalues of $A_e$, each with the same multiplicity as that of $\lambda$ in the spectrum of $A$.
Step 2. Matrix $A_e$ also has the eigenvalues $\pm 1$ and $\pm\sqrt{(d_v - 1)(d_c - 1)}$. The multiplicity of each of the eigenvalues $\pm 1$ is $|E| - |V| + 1$, and the multiplicity of each of $\pm\sqrt{(d_v - 1)(d_c - 1)}$ is one. (Note that $\pm 1$ and $\pm\sqrt{(d_v - 1)(d_c - 1)}$ are obtained from the solutions of (20) for $\lambda = \pm\sqrt{d_v d_c}$.)
Step 3. Furthermore, matrix $A_e$ has the purely imaginary eigenvalues $\pm\sqrt{-(d_v - 1)}$ and $\pm\sqrt{-(d_c - 1)}$, with multiplicities determined by $n$, $m$, and the multiplicity of the zero eigenvalue of $A$. (Together, Steps 1-3 account for all $2|E|$ eigenvalues of $A_e$.)
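To see the procedure end to end, the sketch below (our own illustration; the explicit constants and multiplicities follow the reconstruction of Steps 1-3 above, which is consistent with the Ihara-Bass formula, and are cross-checked numerically) computes the spectrum of $A_e$ for $K_{2,3}$ from the graph spectrum alone, applies Theorem 1 to obtain $N_k$, and compares the result with a direct eigendecomposition of $A_e$.

```python
import numpy as np

def edge_spectrum_from_graph_spectrum(lam, dv, dc, E, V):
    """Spectrum of A_e of a connected (dv, dc) bi-regular bipartite graph, dv*dc > 4,
    computed from the adjacency spectrum `lam` via Steps 1-3 (as reconstructed here)."""
    eta = []
    # Step 1: each strictly negative eigenvalue other than -sqrt(dv*dc) gives +/-sqrt(mu)
    # for both roots mu of (20).
    for l in lam:
        if l < -1e-8 and not np.isclose(l, -np.sqrt(dv * dc)):
            for mu in np.roots([1.0, dv + dc - 2 - l**2, (dv - 1) * (dc - 1)]):
                eta += [np.sqrt(complex(mu)), -np.sqrt(complex(mu))]
    # Step 2: +/-1 with multiplicity |E|-|V|+1, and +/-sqrt((dv-1)(dc-1)) with multiplicity 1.
    eta += [1, -1] * (E - V + 1)
    eta += [np.sqrt((dv - 1) * (dc - 1)), -np.sqrt((dv - 1) * (dc - 1))]
    # Step 3: purely imaginary eigenvalues tied to the zero eigenvalues of A.
    z = int(np.sum(np.abs(lam) < 1e-8))   # multiplicity of the zero eigenvalue of A
    n, m = E // dv, E // dc               # numbers of degree-dv and degree-dc nodes
    eta += [1j * np.sqrt(dv - 1), -1j * np.sqrt(dv - 1)] * ((n - m + z) // 2)
    eta += [1j * np.sqrt(dc - 1), -1j * np.sqrt(dc - 1)] * ((m - n + z) // 2)
    return np.asarray(eta, dtype=complex)

def num_cycles(eta, k):
    """Theorem 1: N_k = (1/2k) * sum_i eta_i^k, for k < 2g."""
    return round(np.real(np.sum(eta ** k)) / (2 * k))

def directed_edge_matrix(A):
    nodes = range(A.shape[0])
    arcs = [(u, v) for u in nodes for v in nodes if A[u, v]]
    Ae = np.zeros((len(arcs), len(arcs)))
    for i, (u, v) in enumerate(arcs):
        for j, (x, y) in enumerate(arcs):
            if v == x and y != u:
                Ae[i, j] = 1
    return Ae

# K_{2,3}: dv = 3, dc = 2, girth 4, three 4-cycles.
dv, dc = 3, 2
X = np.ones((2, 3))
A = np.block([[np.zeros((2, 2)), X], [X.T, np.zeros((3, 3))]])
E, V = int(A.sum()) // 2, A.shape[0]

eta = edge_spectrum_from_graph_spectrum(np.linalg.eigvalsh(A), dv, dc, E, V)  # O(|V|^3) route
eta_direct = np.linalg.eigvals(directed_edge_matrix(A))                       # O(|E|^3) route

ks = (4, 5, 6, 7)  # valid range: k < 2g = 8
assert [num_cycles(eta, k) for k in ks] == [num_cycles(eta_direct, k) for k in ks]
print([num_cycles(eta, k) for k in ks])   # -> [3, 0, 0, 0]
```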

Proof.

In the following, we find the set of eigenvalues of $M^2$ and their multiplicities, and then use Lemmas 1 and 5 to obtain the set of eigenvalues of $A_e$.

Suppose that the spectrum of $A$ is $\{\pm\lambda_1, \pm\lambda_2, \ldots, \pm\lambda_s, 0, \ldots, 0\}$, for some $s$, where $\lambda_1 > \lambda_2 > \cdots > \lambda_s > 0$ and the eigenvalue $\lambda_i$ has multiplicity $\theta_i$. For each $i$, $1 \leq i \leq s$, there are $\theta_i$ linearly independent eigenvectors of $A$ associated with the eigenvalue $\lambda_i$.

For each eigenvalue $\lambda$ of $A$, let $\mu_1$ and $\mu_2$ be the two eigenvalues obtained from (20) (note that the solutions of (20) for $\lambda$ are the same as those for $-\lambda$). We consider three cases that cover all possible scenarios. Case A: $\lambda \neq 0$ and $\lambda \neq \pm\sqrt{d_v d_c}$; Case B: $\lambda = \pm\sqrt{d_v d_c}$; and Case C: $\lambda = 0$. (Cases A, B and C correspond to Steps 1, 2 and 3 of the derivation of all the eigenvalues of $A_e$. Note that, for each of Cases A, B and C, in the following, we find a lower bound on the multiplicity of the eigenvalues of $M^2$ (or those of $A_e$) that are obtained in those cases. Based on the fact that the sum of the obtained lower bounds is equal to the total number, $2|E|$, of eigenvalues of $A_e$, we conclude that in each case, the multiplicity of the eigenvalues is exactly equal to the lower bound.)
Case A. ($\lambda \neq 0$ and $\lambda \neq \pm\sqrt{d_v d_c}$) In this case, we show that, for each such $\lambda$, the multiplicity of each of $\mu_1$ and $\mu_2$ is at least the multiplicity of $\lambda$ in the spectrum of $A$. (As explained before, this lower bound is tight.)

Consider a set of vectors, each indexed by the ordered pairs of nodes, corresponding to the linearly independent eigenvectors of $A$ associated with the eigenvalue $\lambda$, respectively. Assume that the element corresponding to the pair $(u, v)$ of each such vector is derived from the elements of the corresponding eigenvector of $A$ using the following equation:

(24)

Using simple calculations, one can see that each of these vectors satisfies the eigenvalue equation for $M^2$, and thus they are eigenvectors associated with the eigenvalue $\mu_1$.

Also, consider vectors , each of size , corresponding to eigenvectors