On bipartization of networks
Abstract
Relations between discrete quantities such as people, genes, or streets can be described by networks, which consist of nodes that are connected by edges. Network analysis aims to identify important nodes in a network and to uncover structural properties of a network. A network is said to be bipartite if its nodes can be subdivided into two nonempty sets such that there are no edges between nodes in the same set. It is a difficult task to determine the closest bipartite network to a given network. This paper describes how a given network can be approximated by a bipartite one by solving a sequence of fairly simple optimization problems. We also show how the same procedure can be used to detect the presence of a large anticommunity in a network and to identify it.
Keywords: network analysis, network approximation, bipartization, anticommunity.
AMS subject classifications: 65F15, 05C50, 05C82.
1 Introduction
Networks describe how discrete quantities such as genes, people, proteins, or streets are related. They arise in many applications, including genetics, epidemiology, energy distribution, and telecommunication; see, e.g., [6, 15] for discussions of networks and their applications. Networks are represented by graphs $G = (V, E, W)$, which are determined by a set of vertices (nodes) $V = \{v_1, v_2, \dots, v_n\}$, a set of edges $E$, and a set of positive weights $W$. Here $(v_i, v_j) \in E$ represents an edge from vertex $v_i$ to vertex $v_j$. The weight $w_{ij} > 0$ is associated with the edge $(v_i, v_j)$; a large value of $w_{ij}$ indicates that this edge is important. For instance, in a road network, the weight $w_{ij}$ may be proportional to the amount of traffic on the road that is represented by the edge $(v_i, v_j)$. In this paper, we consider connected undirected graphs without self-loops and multiple edges. In particular, all edges represent “two-way streets,” i.e., if $(v_i, v_j)$ is an edge, then so is $(v_j, v_i)$. The weights associated with these two edges are assumed to be the same. In unweighted graphs all weights are set to one.
We will represent a graph $G$ with $n$ nodes by its adjacency matrix $A = [a_{ij}] \in \mathbb{R}^{n\times n}$, where $a_{ij} = w_{ij}$ if $(v_i, v_j) \in E$, and $a_{ij} = 0$ otherwise.
Since $G$ is undirected and the weights associated with each direction of an edge are the same, the matrix $A$ is symmetric. The largest possible number of edges of an undirected graph with $n$ nodes without self-loops is $n(n-1)/2$, but typically the actual number of edges of such graphs that arise in applications is much smaller. The adjacency matrix $A$, therefore, generally is very sparse.
A graph is said to be bipartite if the set of vertices that make up the graph can be partitioned into two disjoint nonempty subsets $V_1$ and $V_2$ (with $V = V_1 \cup V_2$), such that any edge starting at a vertex in $V_1$ points to a vertex in $V_2$, and vice versa. This, in particular, excludes the presence of self-loops in a bipartite graph.
Bipartivity is an important structural property. It is therefore interesting to determine the best bipartization of a nonbipartite graph. We say that a splitting of the set of vertices of a weighted undirected graph into two disjoint nonempty subsets $V_1$ and $V_2$ (with $V = V_1 \cup V_2$) is a best bipartization of the graph if the sum of the weights associated with edges that connect vertices in $V_i$ ($i = 1, 2$) to vertices in the same set is minimal. We remark that this definition is analogous to the definition of a best bipartization of an undirected unweighted graph proposed by Estrada and Gómez–Gardeñes [7], where the spectral bipartivity index of a network with adjacency matrix $A$ is defined as

(1.1) $\beta_S(A) = \dfrac{\operatorname{trace}(\cosh(A))}{\operatorname{trace}(e^{A})} = \dfrac{\sum_{j=1}^{n} \cosh(\lambda_j)}{\sum_{j=1}^{n} e^{\lambda_j}},$

where $\lambda_1, \dots, \lambda_n$ denote the eigenvalues of $A$. One has $\frac{1}{2} < \beta_S(A) \le 1$, and $\beta_S(A) = 1$ if and only if the graph is bipartite. This measure can also be applied to the weighted graphs considered in the present paper.
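To make the index concrete, here is a minimal Python/NumPy sketch that evaluates the trace(cosh)/trace(exp) ratio of (1.1) from the eigenvalues; the function name and the two small test graphs are ours, chosen for illustration:

```python
import numpy as np

def bipartivity(A):
    # beta_S(A) = trace(cosh(A)) / trace(exp(A)), evaluated via the
    # eigenvalues of the symmetric adjacency matrix A
    lam = np.linalg.eigvalsh(A)
    return np.sum(np.cosh(lam)) / np.sum(np.exp(lam))

# a 4-cycle is bipartite: the index equals 1
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
# a triangle is an odd cycle, hence not bipartite: the index is < 1
K3 = np.ones((3, 3)) - np.eye(3)

print(bipartivity(C4))  # 1.0 (up to rounding)
print(bipartivity(K3))
```

For the bipartite 4-cycle the spectrum is symmetric about the origin, so the odd part of the exponential cancels and the ratio is exactly one.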
It is a computationally difficult problem to determine a best bipartization of a weighted or unweighted undirected graph. In the case of a symmetric bipartite adjacency matrix, the signs of the entries of an eigenvector associated with the smallest eigenvalue can be used to partition the graph, i.e., nodes that correspond to positive entries belong to one set, and nodes that correspond to negative entries belong to the other set; see [19]. If the smallest eigenvalue is multiple, the splitting of the nodes may vary with the vector chosen in the associated eigenspace. A similar problem arises in graph partitioning based on the Fiedler vector; see [4] for a recent discussion.
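The sign-based splitting described above can be sketched as follows; the path graph used here is our own toy example:

```python
import numpy as np

# Adjacency matrix of the path 1-2-3-4, which is bipartite with the
# independent sets {1, 3} and {2, 4} (0-based: {0, 2} and {1, 3}).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

lam, V = np.linalg.eigh(A)   # eigenvalues in ascending order
v = V[:, 0]                  # eigenvector of the smallest eigenvalue

# the signs alternate across every edge, revealing the bipartition
print([v[i] * v[j] < 0 for i, j in [(0, 1), (1, 2), (2, 3)]])
```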
We are interested in developing a numerical method for determining a “good” bipartization, i.e., a splitting into sets $V_1$ and $V_2$ for which the sum of the weights associated with the edges that connect vertices within the same set is fairly small.
As will become clear in the following, the same bipartization method may be used for the identification of large anticommunities. A community is a group of nodes that are highly connected among themselves, but are less connected to the rest of the network, or isolated from it. Conversely, an anticommunity is a node set that is loosely connected internally, but has many external connections [8]; see [9], where a spectral method is used to detect communities and anticommunities. Community and anticommunity detection in networks is an important problem with applications in various fields, including physics, computer science, and social sciences [3, 14, 17, 18, 22].
This paper is organized as follows. Section 2 discusses some properties of bipartite graphs and describes an algorithm for determining a “good” bipartization. An application of the bipartization method to the identification of large anticommunities is discussed in Section 3. Symmetric tridiagonal matrices with nonnegative off-diagonal entries and vanishing diagonal entries are adjacency matrices of particular undirected weighted bipartite graphs. Section 4 discusses some properties of these adjacency matrices and the associated graphs. Finally, Section 5 presents computed examples and a case study, while Section 6 contains concluding remarks.
2 A bipartization method
This section discusses some properties of the adjacency matrix of an undirected bipartite graph, and shows some inequalities that are useful for the design of our bipartization method. The discussion in the first part of this section assumes that the vertices are suitably ordered; subsequently, we describe how to achieve such an ordering. An algorithm for our bipartization method concludes the section.
Assume for the moment that the undirected graph is bipartite, i.e., its vertex set can be split into two disjoint nonempty subsets $V_1$ and $V_2$, with $p$ and $q$ nodes, respectively, such that there are no edges between the nodes in $V_1$ and no edges between the nodes in $V_2$. We may assume that $p \ge q$; otherwise, we interchange the sets $V_1$ and $V_2$.
Let the vertices be ordered so that the first $p$ of them belong to the set $V_1$ and the remaining $q$ vertices belong to the set $V_2$. Then the adjacency matrix for the graph is of the form

(2.1) $A = \begin{bmatrix} O_p & B \\ B^{T} & O_q \end{bmatrix}, \qquad B = [b_{ij}] \in \mathbb{R}^{p\times q},$

where $O_k$ denotes the zero matrix of order $k$, and $b_{ij} > 0$ if the $i$th node in $V_1$ is connected to the $j$th node in $V_2$; otherwise $b_{ij} = 0$.
A graph is bipartite if and only if the spectrum of its adjacency matrix is symmetric with respect to the origin, i.e.,

$\lambda_{n+1-i} = -\lambda_i, \qquad i = 1, 2, \dots, n,$

with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. We assume throughout that the eigenvalues are so ordered.
Let $\lambda \neq 0$ be a nonzero eigenvalue of the matrix (2.1) (for $\lambda = 0$ the result is trivial) and let $w = [u^{T}, v^{T}]^{T}$, with $u \in \mathbb{R}^{p}$ and $v \in \mathbb{R}^{q}$, be an associated eigenvector. Then

(2.2) $Bv = \lambda u, \qquad B^{T}u = \lambda v,$

from which we see that $\lambda$ is a singular value of $B$, while $u$ and $v$ are associated left and right singular vectors, respectively. We note that it is clear from (2.2) that neither of the vectors $u$ and $v$ vanishes. We therefore may scale them to be of unit length, as required for singular vectors.
Since

$A \begin{bmatrix} u \\ -v \end{bmatrix} = \begin{bmatrix} -Bv \\ B^{T}u \end{bmatrix} = -\lambda \begin{bmatrix} u \\ -v \end{bmatrix},$

it follows that $-\lambda$ is an eigenvalue too, and $[u^{T}, -v^{T}]^{T}$ is a corresponding eigenvector.
Conversely, let $B = U\Sigma V^{T}$ be a singular value decomposition of $B$, where $\Sigma \in \mathbb{R}^{p\times q}$ has the diagonal matrix $\Sigma_q = \operatorname{diag}[\sigma_1, \sigma_2, \dots, \sigma_q]$, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_q \ge 0$, as its upper block, and $U \in \mathbb{R}^{p\times p}$ and $V \in \mathbb{R}^{q\times q}$ are orthogonal matrices. Partition $U = [U_1, U_2]$ with $U_1 \in \mathbb{R}^{p\times q}$, and observe that $BV = U_1\Sigma_q$ and $B^{T}U_1 = V\Sigma_q$. Define the diagonal matrix

$\Lambda = \operatorname{diag}[\sigma_1, \dots, \sigma_q, 0, \dots, 0, -\sigma_q, \dots, -\sigma_1] \in \mathbb{R}^{n\times n}, \qquad n = p + q,$

and the orthogonal matrix

(2.3) $W = \begin{bmatrix} \frac{1}{\sqrt{2}} U_1 & U_2 & \frac{1}{\sqrt{2}} U_1 J \\ \frac{1}{\sqrt{2}} V & 0 & -\frac{1}{\sqrt{2}} V J \end{bmatrix},$

where $J \in \mathbb{R}^{q\times q}$ denotes the flip (reverse identity) matrix. Then

(2.4) $A = W\Lambda W^{T}.$

Denote the factors of (2.4), with the eigenvalues so ordered, by $\widehat{W}$ and $\widehat{\Lambda}$. Then the spectral factorization (2.4) takes the form

(2.5) $A = \widehat{W}\widehat{\Lambda}\widehat{W}^{T},$

with $\widehat{\Lambda} = \operatorname{diag}[\hat{\lambda}_1, \dots, \hat{\lambda}_n]$, $\hat{\lambda}_1 \ge \cdots \ge \hat{\lambda}_n$, and $\hat{\lambda}_{n+1-i} = -\hat{\lambda}_i$, $i = 1, \dots, n$. In the special case when $p = q$, the submatrices of (2.3) with $p - q$ columns disappear, and the spectral factorization (2.5) simplifies to

$A = \frac{1}{2} \begin{bmatrix} U & UJ \\ V & -VJ \end{bmatrix} \operatorname{diag}[\sigma_1, \dots, \sigma_q, -\sigma_q, \dots, -\sigma_1] \begin{bmatrix} U & UJ \\ V & -VJ \end{bmatrix}^{T},$

with $U, V \in \mathbb{R}^{q\times q}$ orthogonal.
Let $A$ be an adjacency matrix of an undirected graph. We would like to approximate the graph by a bipartite one and therefore seek to approximate $A$ by a matrix of the form (2.1). We do this in several steps and first show some inequalities that are applicable to diagonal eigenvalue matrices.
Let $\{\theta_i\}_{i=1}^{n}$, with $\theta_1 \ge \theta_2 \ge \cdots \ge \theta_n$, be a nonincreasing real sequence and let $\{\mu_i\}_{i=1}^{n}$ be another real sequence. The distance between these sequences measured in the least-squares sense,

(2.6) $\left( \sum_{i=1}^{n} (\theta_i - \mu_i)^2 \right)^{1/2},$

is minimal if and only if the $\mu_i$ are in nonincreasing order, i.e., if $\mu_1 \ge \mu_2 \ge \cdots \ge \mu_n$.
Assume that both sequences are in nonincreasing order and that the distance can be reduced by changing the order of the $\mu_i$. Consider the pairs $(\theta_j, \mu_j)$ and $(\theta_k, \mu_k)$ with $j < k$. Then

$(\theta_j - \mu_k)^2 + (\theta_k - \mu_j)^2 < (\theta_j - \mu_j)^2 + (\theta_k - \mu_k)^2$

is equivalent to

$(\theta_j - \theta_k)(\mu_j - \mu_k) < 0.$

Since $j < k$ implies $\mu_j \ge \mu_k$, we have $(\theta_j - \theta_k)(\mu_j - \mu_k) \ge 0$, which is a contradiction unless equality holds. If the $\mu_i$ are ordered arbitrarily, then we can reorder these coefficients pairwise until they form a nonincreasing sequence. Each such pairwise swap does not increase (2.6).
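A small numerical check of this rearrangement property (our own illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = -np.sort(-rng.normal(size=8))   # nonincreasing target sequence
mu = rng.normal(size=8)

def dist(a, b):
    # least-squares distance (2.6) between two sequences
    return float(np.linalg.norm(a - b))

# matching mu to theta in nonincreasing order minimizes the distance
best = dist(theta, -np.sort(-mu))
print(best <= dist(theta, mu))  # True
```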
In our application of the proposition above, we let the $\theta_i$ be the eigenvalues of the adjacency matrix $A$. The graph associated with this matrix might not be bipartite. We would like the sequence of eigenvalues of the matrix of the form (2.1) to be close to the sequence $\{\theta_i\}_{i=1}^{n}$ and to vanish or appear in pairs $\{\mu, -\mu\}$. By the proposition, we know that these eigenvalues should be in nonincreasing order. We know from (2.5) that at least $p - q$ of them should be zero.
Let $\{\theta_i\}_{i=1}^{n}$, with $n = p + q$ and $p \ge q$, be a real nonincreasing sequence. Then the sequence $\{\mu_i\}_{i=1}^{n}$ with elements

(2.7) $\mu_i = \tfrac{1}{2}(\theta_i - \theta_{n+1-i}), \quad i = 1, \dots, q; \qquad \mu_i = 0, \quad i = q+1, \dots, n-q; \qquad \mu_i = -\mu_{n+1-i}, \quad i = n-q+1, \dots, n,$

is the closest sequence to $\{\theta_i\}_{i=1}^{n}$ in the least-squares sense consisting of at least $p - q$ zeros and nonvanishing entries appearing in pairs $\{\mu, -\mu\}$.
The sequence $\{\mu_i\}_{i=1}^{n}$ consists of $n - 2q = p - q$ zero values and $q$ pairs $\{\mu_i, -\mu_i\}$. Indeed, we have

$\mu_1 \ge \mu_2 \ge \cdots \ge \mu_q \ge 0 = \mu_{q+1} = \cdots = \mu_{n-q} \ge \mu_{n-q+1} \ge \cdots \ge \mu_n,$

and it follows that the sequence is nonincreasing. It remains to establish that the $\mu_i$ defined by (2.7) are the best possible. Consider the minimization problems

(2.8) $\min_{\mu \in \mathbb{R}} \left\{ (\theta_i - \mu)^2 + (\theta_{n+1-i} + \mu)^2 \right\}, \qquad i = 1, 2, \dots, q.$

Setting the derivative with respect to $\mu$ to zero gives $\mu = \tfrac{1}{2}(\theta_i - \theta_{n+1-i})$, so the solution sequence is given by (2.7). Thus, the $\mu_i$ form a nonincreasing sequence consisting of $p - q$ zero values and $q$ pairs. It is the closest such sequence to the sequence $\{\theta_i\}_{i=1}^{n}$ in the sense that it solves the minimization problems (2.8).
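Assuming the pairing construction described above, the closest paired sequence can be sketched as follows (the function name is ours):

```python
import numpy as np

def closest_paired_sequence(theta, q):
    # Closest sequence, in the least-squares sense, to the
    # nonincreasing sequence `theta` having at least len(theta) - 2*q
    # zeros and nonvanishing entries in +/- pairs; cf. formula (2.7).
    n = len(theta)
    mu = np.zeros(n)
    for i in range(q):
        m = 0.5 * (theta[i] - theta[n - 1 - i])
        mu[i] = m
        mu[n - 1 - i] = -m
    return mu

theta = np.array([3.0, 1.0, 0.2, -0.1, -2.0])
mu = closest_paired_sequence(theta, q=2)
print(mu)  # [ 2.5   0.55  0.   -0.55 -2.5 ]
```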
We would like to determine an approximation of the matrix $A$ by a matrix $\widetilde{A}$ of the form (2.1), where we allow row and column permutations of the latter matrix. Define the spectral factorization

$\widetilde{A} = \widetilde{W}\widetilde{\Lambda}\widetilde{W}^{T},$

where $\widetilde{W}$ is an orthogonal matrix and the eigenvalues $\widetilde{\Lambda} = \operatorname{diag}[\tilde{\lambda}_1, \dots, \tilde{\lambda}_n]$ are ordered according to

$\tilde{\lambda}_1 \ge \cdots \ge \tilde{\lambda}_q, \qquad \tilde{\lambda}_{n+1-i} = -\tilde{\lambda}_i, \quad i = 1, \dots, q, \qquad \tilde{\lambda}_i = 0 \text{ otherwise}.$

We remark that only the first $q$ eigenvalues are ordered as in (2.4).
Let us initially assume that the nonzero eigenvalues are distinct. If the eigenvectors are made unique, e.g., by making their first nonvanishing component positive, a comparison with (2.5) shows that

(2.9) $\widetilde{W} = \begin{bmatrix} \frac{1}{\sqrt{2}} U_1 & U_2 & \frac{1}{\sqrt{2}} U_1 J \\ \frac{1}{\sqrt{2}} V & 0 & -\frac{1}{\sqrt{2}} V J \end{bmatrix},$

where $J$ is the flip matrix

$J = \begin{bmatrix} & & 1 \\ & \iddots & \\ 1 & & \end{bmatrix} \in \mathbb{R}^{q\times q}.$
In the presence of multiple nonzero eigenvalues, the corresponding eigenvectors are not uniquely determined, so the spectral factorization (2.9) is only one of the possible different factorizations.
Let

(2.10) $A = W\Lambda W^{T}$

be a spectral factorization of the given adjacency matrix $A$, with an orthogonal eigenvector matrix $W \in \mathbb{R}^{n\times n}$ and the eigenvalues ordered according to

(2.11) $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n.$

Partition the eigenvector matrix conformally with the eigenvector matrix (2.9) of $\widetilde{A}$, i.e.,

$W = [W_1, W_2, W_3], \qquad W_1, W_3 \in \mathbb{R}^{n\times q}, \quad W_2 \in \mathbb{R}^{n\times(n-2q)}.$
We would like to approximate the eigenvector matrix $W$ of $A$ by the eigenvector matrix $\widetilde{W}$ of $\widetilde{A}$. This suggests that we solve the minimization problem

$\min_{\widetilde{W}} \| W - \widetilde{W} \|_F,$

where $\|\cdot\|_F$ denotes the Frobenius norm and $\widetilde{W}$ is constrained to have the structure (2.9). This problem splits into the three independent problems

(2.12) $\min \| W_1 - \widetilde{W}_1 \|_F,$

(2.13) $\min \| W_2 - \widetilde{W}_2 \|_F,$

(2.14) $\min \| W_3 - \widetilde{W}_3 \|_F,$

where $\widetilde{W}_1$, $\widetilde{W}_2$, and $\widetilde{W}_3$ denote the three column blocks of (2.9). Problem (2.12) can be written as

(2.15) $\min_{Z \in \mathbb{R}^{n\times q},\; Z^{T}Z = I_q} \| W_1 - Z \|_F.$

The following result shows how we can easily solve this problem.
The solution of problem (2.15) can be determined by computing the singular value decomposition of $W_1$ and setting all its singular values to one.
Consider the problem (2.15). Its objective function satisfies

$\| W_1 - Z \|_F^2 = \| W_1 \|_F^2 - 2\operatorname{trace}(Z^{T}W_1) + \| Z \|_F^2.$

The first and last terms are independent of $Z$, since $\| Z \|_F^2 = q$ for every matrix $Z$ with $q$ orthonormal columns. Therefore we obtain the equivalent linear maximization problem

(2.16) $\max_{Z^{T}Z = I_q} \operatorname{trace}(Z^{T}W_1).$

Hence, the problem (2.15) is equivalent to determining the closest matrix with orthonormal columns, in the Frobenius norm, to the matrix $W_1$. The solution is given by setting the singular values in the singular value decomposition of $W_1$ to one; see [12, Theorem 4.1] for a proof of the latter statement.
The minimization problems (2.13) and (2.14) are solved similarly. This gives the eigenvector matrix $\widetilde{W}$ in the spectral factorization (2.5).
Remark.
We note that if $W_1 = \widehat{U}\widehat{\Sigma}\widehat{V}^{T}$ denotes the singular value decomposition of $W_1$, then we can express its polar decomposition by

$W_1 = (\widehat{U}\widehat{V}^{T})(\widehat{V}\widehat{\Sigma}\widehat{V}^{T}).$

The first factor is the minimizer of (2.16), while the deviation of the second factor from the identity matrix measures the quality of the approximation.
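The nearest-orthonormal-columns computation used above can be sketched in a few lines; `closest_orthonormal` is our own name for the polar-factor construction:

```python
import numpy as np

def closest_orthonormal(M):
    # closest matrix with orthonormal columns to M in the Frobenius
    # norm: set all singular values in the SVD of M to one (this is
    # the orthogonal polar factor of M)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 3))
Q = closest_orthonormal(M)

print(np.allclose(Q.T @ Q, np.eye(3)))  # True: orthonormal columns
```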
Remark.
If some of the nonzero eigenvalues of the matrix in (2.10) are multiple, the corresponding columns of $W_1$, $W_2$, and $W_3$ are not uniquely determined. In any case, when approximating these blocks as described above, those columns are replaced by linear combinations of the original ones, so they span the same subspace. The resulting approximations then make the factorization (2.9) valid.
We conclude this section by giving an outline of a spectral bipartization method based on the above results. The method approximates the adjacency matrix of a given connected undirected graph by a matrix of the form (2.1). We also describe procedures to estimate the cardinality of the sets $V_1$ and $V_2$, as well as to suitably order the nodes of the graph.
The first step of our algorithm consists of finding the cardinalities of the two disjoint node sets $V_1$ and $V_2$, that is, the integers $p$ and $q$, unless they are known in advance. We do this by identifying the number of eigenvalues that are approximately zero. We first order the eigenvalues of the starting adjacency matrix by increasing absolute value, and compute the ratios

$r_i = \frac{|\lambda_{i+1}|}{|\lambda_i|}, \qquad i = 1, 2, \dots, n-1.$

Then, for fixed values of two parameters $\tau \in (0, 1)$ and $\kappa > 1$, we consider the index set

$\mathcal{I} = \{\, i \le \tau n : r_i > \kappa \,\}.$

In our experiments both parameters were kept fixed.

If the set $\mathcal{I}$ is empty, then we are not able to identify the partition of the nodes, and we consider the cardinalities of the sets $V_1$ and $V_2$ to be the same. Otherwise, we let $\ell$ be the index defined by

$\ell = \max\{\, i : i \in \mathcal{I} \,\},$

and set

$q = \operatorname{round}\!\left(\frac{n - \ell}{2}\right), \qquad p = n - q,$

where $\operatorname{round}(t)$ denotes the closest integer to the real number $t$.
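The cardinality estimation step can be sketched as follows; the thresholds `tau` and `kappa` are illustrative placeholders, not the values used in the experiments:

```python
import numpy as np

def estimate_pq(A, tau=0.9, kappa=10.0):
    # Estimate the cardinalities p >= q from the number of
    # (numerically) zero eigenvalues of A, located via a large jump
    # in the sorted eigenvalue magnitudes.
    n = A.shape[0]
    lam = np.sort(np.abs(np.linalg.eigvalsh(A)))   # increasing magnitude
    eps = np.finfo(float).eps * n * max(lam[-1], 1.0)
    ratios = lam[1:] / np.maximum(lam[:-1], eps)
    jumps = [i for i in range(min(len(ratios), int(tau * n)))
             if ratios[i] > kappa]
    if not jumps:
        q = n // 2                 # no clear gap: assume equal cardinalities
    else:
        ell = max(jumps) + 1       # number of approximately zero eigenvalues
        q = int(round((n - ell) / 2))
    return n - q, q

# bipartite test matrix with p = 5, q = 2 and a full-rank block B,
# so the adjacency matrix has p - q = 3 zero eigenvalues
B = np.array([[1., 0.], [1., 1.], [0., 1.], [1., 1.], [1., 0.]])
A = np.block([[np.zeros((5, 5)), B], [B.T, np.zeros((2, 2))]])
print(estimate_pq(A))  # (5, 2)
```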
Then the algorithm finds the sets $V_1$ and $V_2$ and reorders the nodes by the following procedure. Assume that the graph is bipartite, but that the adjacency matrix corresponds to a random ordering of the nodes, so that the observed matrix is

$A_P = P A P^{T}$

for a permutation matrix $P$. Obviously, the structure (2.1) is lost.
In this case, the spectral factorization (2.4) becomes

$A_P = (PW)\Lambda(PW)^{T},$

i.e., the rows of the eigenvector matrix are permuted. In order to recover the structure of the eigenvectors, let us partition the eigenvector matrix as in

$PW = [W_1, W_2, W_3], \qquad W_1, W_3 \in \mathbb{R}^{n\times q}, \quad W_2 \in \mathbb{R}^{n\times(n-2q)}.$
Assume first that $p > q$. For (2.9) to be valid, the last $q$ rows of the matrix $W_2$ must vanish. Sorting the 1-norms of the rows of $W_2$ in descending order concentrates the smallest entries in the lower block of $W_2$. Applying this permutation to the rows of $PW$ brings this matrix to the form (2.9), and the adjacency matrix to the form (2.1), with the block $B$ possibly permuted.
When $p = q$ the block $W_2$ is empty, so we consider the matrix $W_1 - W_3 J$, whose first $p$ rows should be exactly zero by (2.9). We therefore sort the 1-norms of its rows in ascending order, and apply the corresponding permutation to the rows of $PW$.
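For the case $p > q$, the reordering by row 1-norms of the eigenvectors of the zero eigenvalues can be sketched as follows (our own small example):

```python
import numpy as np

# Recover the two node sets of a shuffled bipartite graph: in the
# ordered matrix (2.1) with a full-rank block B, the eigenvectors of
# the zero eigenvalue have vanishing last q rows, so sorting row
# 1-norms in descending order restores the block structure.
p, q = 5, 2
B = np.array([[1., 0.], [1., 1.], [0., 1.], [1., 1.], [1., 0.]])
A = np.block([[np.zeros((p, p)), B], [B.T, np.zeros((q, q))]])

rng = np.random.default_rng(2)
perm = rng.permutation(p + q)
Ap = A[np.ix_(perm, perm)]                     # randomly relabeled nodes

lam, W = np.linalg.eigh(Ap)
W2 = W[:, np.abs(lam) < 1e-10]                 # null-space eigenvectors
order = np.argsort(-np.abs(W2).sum(axis=1))    # descending row 1-norms
recovered = Ap[np.ix_(order, order)]

# the reordered matrix has the zero diagonal blocks of (2.1):
print(np.allclose(recovered[:p, :p], 0) and np.allclose(recovered[p:, p:], 0))
```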
To finally obtain an approximation of the matrix (2.1), using the computed eigenvector matrix, we approximate the eigenvalues in the spectral factorization (2.10) by scalars that vanish or appear in pairs, using the proposition above. Specifically, we let the $\theta_i$ in the proposition be the eigenvalues (2.11). The $\mu_i$ defined in the proposition are then the eigenvalues of the matrix in (2.5), in the same order.
Thus, the algorithm determines in the manner described the eigenvectors and eigenvalues of a matrix with the desired block structure

(2.17) $\widetilde{A} = \begin{bmatrix} 0 & \widetilde{B} \\ \widetilde{B}^{T} & 0 \end{bmatrix},$

where the matrix $\widetilde{B} \in \mathbb{R}^{p\times q}$ has real entries. The matrix $\widetilde{B}$ may have a different number of nonvanishing entries than the corresponding block of the original adjacency matrix. In fact, not all nonvanishing entries may be positive. We can handle this issue in several ways:

- Allow $\widetilde{A}$ to be an adjacency matrix for a weighted graph with both positive and negative weights.

- Allow $\widetilde{A}$ to be an adjacency matrix for a weighted graph with positive weights. We achieve this by replacing the matrix $\widetilde{B}$ in (2.17) by the closest matrix in the Frobenius norm with nonnegative entries, which is obtained from $\widetilde{B}$ by setting all negative entries to zero.

- Require $\widetilde{A}$ to represent an unweighted graph. The closest such matrix in the Frobenius norm to the matrix (2.17) is obtained by setting every entry of $\widetilde{B}$ to the closest member of the set $\{0, 1\}$.
The last procedure is the one adopted in the numerical experiments presented in Section 5.
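A one-line sketch of the entrywise rounding used in the third option (the numerical values are made up for illustration):

```python
import numpy as np

# rounding each entry of the computed block to the nearest member of
# {0, 1}: entries >= 1/2 become 1, all others (including negative
# entries) become 0
Bt = np.array([[0.9, -0.2],
               [0.4,  0.7],
               [0.6,  0.1]])
B01 = (Bt >= 0.5).astype(float)
print(B01)
```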
3 Anticommunities
Let us consider a symmetric matrix $A$ of order $n$ with a leading zero square block of order $k$. Then $A$ may be considered the adjacency matrix of a network with an anticommunity of $k$ nodes. The matrix has the form

(3.1) $A = \begin{bmatrix} 0 & B \\ B^{T} & C \end{bmatrix},$

with $B$ of size $k \times (n-k)$ and $C$ a symmetric square matrix of order $n - k$.
Let $A$ be as in (3.1) with $k > n - k$ and let $B$ be of full rank. Then $A$ has $2k - n$ zero eigenvalues, and the last $n - k$ entries of the corresponding eigenvectors vanish.
Let us search for vectors $w$ such that $Aw = 0$. If $w = [x^{T}, y^{T}]^{T}$, with $x \in \mathbb{R}^{k}$ and $y \in \mathbb{R}^{n-k}$, then we have

(3.2) $A \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} By \\ B^{T}x + Cy \end{bmatrix}.$

Since $B$ is of full rank and $k > n - k$, it follows from $By = 0$ that $y = 0$ and, hence, $B^{T}x = 0$. The latter implies that $x$ is in the null space of $B^{T}$, which has dimension $2k - n$. Thus, the matrix $A$ admits the following linearly independent eigenvectors corresponding to the eigenvalue $0$:

$w_i = \begin{bmatrix} u_i \\ 0 \end{bmatrix}, \qquad i = n - k + 1, \dots, k,$

where $u_{n-k+1}, \dots, u_k$ are left singular vectors of $B$ spanning the null space of $B^{T}$. Hence, the eigenvalue $0$ has multiplicity $2k - n$.
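A quick numerical illustration of the theorem (random blocks, our own sketch):

```python
import numpy as np

# With a zero leading k x k block and a full-rank off-diagonal block,
# 2k - n of the eigenvalues vanish. Sketch with k = 4, n = 6.
rng = np.random.default_rng(3)
k, n = 4, 6
B = rng.random((k, n - k))                 # full rank with probability 1
C = rng.random((n - k, n - k))
C = (C + C.T) / 2                          # symmetric trailing block
A = np.block([[np.zeros((k, k)), B], [B.T, C]])

lam = np.linalg.eigvalsh(A)
n_zero = int(np.sum(np.abs(lam) < 1e-10))
print(n_zero)  # 2*k - n = 2
```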
Remark.
Let $A$ be given by (3.1) and assume, as above, that the submatrix $B$ is of full rank. Let $k < n - k$. Then $A$ may or may not have zero eigenvalues. Indeed, for $A$ to have a vanishing eigenvalue, the vector $y$ that appears in the proof of the theorem has to belong to the null space of $B$, which has dimension $n - 2k$. Then, there will be zero eigenvalues if and only if the system

$B^{T}x = -Cy, \qquad 0 \neq y \in \operatorname{null}(B),$

has a solution.
If instead $k = n - k$ and $B$ is nonsingular, then $By = 0$ implies $y = 0$, and subsequently $B^{T}x = 0$ implies $x = 0$. Hence, in this case all the eigenvalues of $A$ are different from zero.
Theorem 3 shows that if a network has a large anticommunity ($k > n - k$), the spectral decomposition of its adjacency matrix contains a block of $2k - n$ eigenvectors, associated with the eigenvalue $0$, whose last $n - k$ entries vanish.
The structures of the eigenvector and eigenvalue matrices are therefore very similar to those in (2.9) and (2.5), respectively. For this reason, the bipartization algorithm described in Section 2 is able to detect the presence of a large anticommunity and to order the nodes so that the adjacency matrix takes the form (3.1). In case a group of nodes is only approximately an anticommunity, the algorithm produces an adjacency matrix that approximates (3.1).
We turn to the case when the submatrix $B$ of $A$ in (3.1) is rank-deficient. The situations that may occur are similar to those of Theorem 3 and Remark 3.
Let $A$ be as in (3.1) with a rank-deficient submatrix $B$ of rank $r$, and let $k > n - k$. Then the equation

(3.3) $A \begin{bmatrix} x \\ y \end{bmatrix} = 0,$

with $x$ a $k$-vector and $y$ an $(n-k)$-vector, has $k - r$ linearly independent solutions with $y = 0$. For particular submatrices $B$ and $C$, there also are solutions of (3.3) with $y \neq 0$.
Consider the right-hand side of (3.2). Let $\mathcal{N}$ denote the null space of $B$, and let $C|_{\mathcal{N}}$ stand for the restriction of the submatrix $C$ to $\mathcal{N}$. Similarly to the discussion in Remark 3, equation (3.3) is equivalent to the system

$By = 0, \qquad B^{T}x = -Cy.$

Let $[x^{T}, y^{T}]^{T}$ be a nontrivial solution of (3.3). When $y = 0$, there has to be a vector $x \neq 0$ with $B^{T}x = 0$. The null space of $B^{T}$ has dimension $k - r$. Hence, there are $k - r$ linearly independent solutions of (3.3) with $y = 0$.
The existence of a solution of (3.3) with a nontrivial subvector $y$ is equivalent to requiring that

$Cy \in \operatorname{range}(B^{T}) \qquad \text{for some } 0 \neq y \in \mathcal{N}.$

This condition does not hold for most matrix pairs $\{B, C\}$.
To summarize, if a network is either bipartite with $p > q$, or contains an anticommunity of $k > n - k$ nodes, its adjacency matrix has zero eigenvalues. The converse is not true, but if $A$ has a multiple zero eigenvalue, then we can recognize the presence of one of the two above structures by observing the structure of the eigenvector matrix. We can also measure the distance of the network from being bipartite or from having a large anticommunity.
4 Symmetric tridiagonal adjacency matrices
Consider the nonnegative symmetric irreducible tridiagonal matrix

(4.1) $T = \begin{bmatrix} d_1 & b_1 & & \\ b_1 & d_2 & \ddots & \\ & \ddots & \ddots & b_{n-1} \\ & & b_{n-1} & d_n \end{bmatrix} \in \mathbb{R}^{n\times n},$

with $d_i \ge 0$, $i = 1, \dots, n$, and $b_i > 0$, $i = 1, \dots, n-1$.
If all the $d_i$ vanish, i.e., if the associated graph has no self-loops, then this matrix is the adjacency matrix of a connected weighted undirected bipartite graph. Indeed, a zero-diagonal matrix of even order of the type (4.1), also known as a Golub–Kahan matrix, is similar (via the perfect shuffle permutation matrix; see [5]) to a matrix of the type (2.17), where the off-diagonal block is nonnegative and lower bidiagonal. If the order is odd, in addition to the eigenvalue pairs $\{\lambda, -\lambda\}$, there is also a zero eigenvalue.
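The perfect-shuffle similarity can be checked numerically; the following is a small sketch for order $n = 6$:

```python
import numpy as np

# A zero-diagonal tridiagonal (Golub-Kahan) matrix of even order is
# similar, via the perfect shuffle permutation, to the block form
# (2.17) with zero diagonal blocks.
n = 6
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
T = np.diag(b, 1) + np.diag(b, -1)

# perfect shuffle: odd-numbered positions first, then even-numbered
shuffle = list(range(0, n, 2)) + list(range(1, n, 2))
P = np.eye(n)[shuffle]
S = P @ T @ P.T

print(np.allclose(S[:3, :3], 0) and np.allclose(S[3:, 3:], 0))  # True
```

The spectrum of such a matrix is symmetric about the origin, in agreement with the bipartivity characterization of Section 2.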
The signs of the entries of an eigenvector associated with the smallest eigenvalue can be used to partition the graph. Notice that the smallest eigenvalue of (4.1) is simple. The following result is well known; see, e.g., [21, p. 194].
The eigenvalues of a symmetric irreducible tridiagonal matrix are simple.
By the Perron–Frobenius theorem for irreducible matrices, the largest eigenvalue of (4.1) is simple and equal to the spectral radius of the matrix. The associated eigenvector can be scaled to have positive entries only. This so-scaled eigenvector is commonly referred to as the Perron vector of the matrix and can be used to identify the best connected node of a network: it is the node associated with the largest entry of the Perron vector; see [6, 15].
We obtain an adjacency matrix that is simple to analyze by projecting (4.1) orthogonally onto the subspace $\mathcal{T}$ of tridiagonal Toeplitz matrices of order $n$. Here we measure the distance between matrices with the Frobenius norm. We are interested in properties of the graph determined by the projected adjacency matrix.
The graph whose adjacency matrix is the orthogonal projection of the matrix (4.1) onto the subspace $\mathcal{T}$, given by

(4.2) $\widehat{T} = \begin{bmatrix} \delta & \beta & & \\ \beta & \delta & \ddots & \\ & \ddots & \ddots & \beta \\ & & \beta & \delta \end{bmatrix} \in \mathbb{R}^{n\times n},$

is connected.
The diagonal entry $\delta$ of (4.2) is the average of the diagonal entries of the matrix (4.1), and the off-diagonal entry $\beta$ is the average of the off-diagonal entries of (4.1); see, e.g., [16]. Since by hypothesis $d_i \ge 0$ and $b_i > 0$ for all $i$, we have $\delta \ge 0$ and $\beta > 0$. Hence, the matrix (4.2) is irreducible and nonnegative.
The Perron vector $z$, suitably scaled, of the nonnegative symmetric irreducible tridiagonal Toeplitz matrix (4.2) has the entries $z_j = \sin\!\left(\frac{j\pi}{n+1}\right)$, $j = 1, \dots, n$. In particular, when $n$ is odd, the largest entry is $z_{(n+1)/2}$, and when $n$ is even, the two largest entries, $z_{n/2}$ and $z_{n/2+1}$, have the same value. Moreover, the eigenvector associated with the smallest eigenvalue has the same odd-indexed components as $z$, whereas its even-indexed components have opposite sign.
Explicit formulas for eigenvectors of tridiagonal Toeplitz matrices can be found in, e.g., [16].
Note that the Perron vector in Proposition 4 is independent of the numerical values of $\delta$ and $\beta$ in (4.2). Moreover, the Perron vector suggests that the node $v_{(n+1)/2}$ (for $n$ odd), and the nodes $v_{n/2}$ and $v_{n/2+1}$ (for $n$ even), are the best connected nodes. This is in agreement with the intuition that the nodes “in the middle” of the network are the best connected ones.
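The eigenvector formulas above can be verified numerically; a sketch for the unweighted case $\delta = 0$, $\beta = 1$:

```python
import numpy as np

# Eigenpairs of the symmetric tridiagonal Toeplitz matrix with zero
# diagonal and unit off-diagonal: eigenvalues 2*cos(k*pi/(n+1)) and
# eigenvector components sin(j*k*pi/(n+1)) (see, e.g., [16]).
n = 7
T_hat = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
lam, Z = np.linalg.eigh(T_hat)

j = np.arange(1, n + 1)
perron = np.sin(j * np.pi / (n + 1))   # Perron vector, unnormalized
perron /= np.linalg.norm(perron)

z = Z[:, -1].copy()                    # eigenvector of the largest eigenvalue
z *= np.sign(z[0])                     # fix the overall sign
print(np.allclose(z, perron))  # True
```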
A nonnegative symmetric irreducible tridiagonal Toeplitz matrix is the adjacency matrix of a connected bipartite graph if and only if its diagonal entries vanish. This happens if and only if the matrix is not primitive, i.e., if and only if the eigenvalue associated with the Perron vector is not the only eigenvalue whose modulus is equal to the spectral radius.
The proof follows straightforwardly from the formulas for the eigenvalues of a tridiagonal Toeplitz matrix (4.2), see, e.g., [16], and by observing that the extreme eigenvalues have the same modulus if and only if $\delta = 0$.
When a symmetric tridiagonal matrix (4.1) is replaced by the closest Toeplitz matrix (4.2) with small relative error, the relative difference in the spectra of (4.1) and (4.2) also is small. This follows from a result by Bhatia [2]: for symmetric matrices, the relative distance, in the Frobenius norm, between the vectors of nonincreasingly ordered eigenvalues of the two matrices is bounded by the relative distance between the matrices themselves; see [2] for the precise definitions and statement.
Let the matrix (4.1) with zero diagonal be the adjacency matrix of a given weighted undirected bipartite graph. Then the adjacency matrix of the associated unweighted graph is the matrix (4.2) with $\delta = 0$ and $\beta = 1$; let $\widehat{T}_1$ denote the latter matrix. It is well known that its eigenvalues are given by

$\lambda_k = 2\cos\!\left(\frac{k\pi}{n+1}\right), \qquad k = 1, \dots, n,$

so that $\lambda_{n+1-k} = -\lambda_k$; see, e.g., [16]. Thus, the (simple) eigenvalues $\lambda_1 > \lambda_2 > \cdots > \lambda_n$ of $\widehat{T}_1$ are in decreasing order.
The eigenpairs of the adjacency matrix $\widehat{T}_1$ are easy to analyze. Indeed, according to Proposition 4, the Perron vector is the eigenvector associated with the eigenvalue $\lambda_1 = 2\cos\!\left(\frac{\pi}{n+1}\right)$, i.e., the vector with entries $\sin\!\left(\frac{j\pi}{n+1}\right)$, $j = 1, \dots, n$, whereas the eigenvector that determines the bipartization is associated with the smallest eigenvalue $\lambda_n = -\lambda_1$, i.e., it is the eigenvector with entries $(-1)^{j+1}\sin\!\left(\frac{j\pi}{n+1}\right)$, $j = 1, \dots, n$.
Turning to the weighted graph associated with the matrix (4.1) with zero diagonal, we note that the bipartization only depends on the zero structure. Since this matrix and $\widehat{T}_1$ have the same zero structure, the graphs associated with these matrices are bipartitioned in the same way. This implies, in particular, that the eigenvectors of the two matrices associated with the smallest eigenvalue have the same sign pattern.
We also note that while the Perron vectors of both matrices have positive entries only, the relative sizes of the entries of the Perron vector of the weighted matrix depend on the weights $b_i$. The best connected nodes in the graph with the weighted adjacency matrix are not necessarily the nodes in the middle of the graph.
5 Computed examples
In the following numerical experiments, we fix the integers $p$ and $q$, and construct a random matrix of the form (2.1), with a sparse block $B$ of given density. The matrix is first perturbed, by replacing its (1,1) and (2,2) blocks by sparse matrices of appropriate size and given density, and then “mixed,” by applying the same random permutation to its rows and columns.
We apply the algorithm of Section 2 to the matrix either by supplying the cardinalities of the two sets $V_1$ and $V_2$ (this approach is referred to as specbip-n), or by letting the method estimate $p$ and $q$ from the data; we refer to the latter approach as specbip. Since the (1,2) block of the matrix returned by the method is generally permuted with respect to the initial test matrix, the rows and columns are reordered according to the original sequence of the nodes. The final reordering allows us to compare the resulting matrix to the test matrix.
Our results are compared to the ones obtained by red-black ordering using the MatlabBGL library [10], a Matlab package implementing graph algorithms. A matrix has a red-black ordering if the corresponding graph is bipartite. To find such an ordering, this software uses a breadth-first search, starting from an arbitrary vertex. The partition of the nodes is determined by forming a group containing all the vertices at even distance from the root, and another group with the vertices at odd distance from the root. This procedure is designed for bipartite networks; it is not meant to produce an approximation when the network is not exactly bipartite.
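The breadth-first red-black coloring just described can be sketched as follows (a minimal Python version of the idea, not the MatlabBGL code):

```python
from collections import deque

def red_black(adj):
    # BFS 2-coloring from an arbitrary root (here node 0): vertices at
    # even distance form one group, vertices at odd distance the other.
    # `adj` is an adjacency list of a connected graph; the result is a
    # valid bipartition only if the graph is bipartite.
    color = {0: 0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in color:
                color[v] = 1 - color[u]
                queue.append(v)
    red = sorted(u for u, c in color.items() if c == 0)
    black = sorted(u for u, c in color.items() if c == 1)
    return red, black

# 6-cycle: bipartite, the two groups alternate along the cycle
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(red_black(adj))  # ([0, 2, 4], [1, 3, 5])
```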
Figure 1 displays the results for a test matrix constructed as described above. In particular, the upper row reports a spy plot of the original test matrix, of the perturbed version with random arcs in the (1,1) and (2,2) blocks, and of the permuted matrix which is fed to the bipartization methods. The bottom row shows the reconstructed networks. The specbip-n approach, which receives the information about the cardinalities of the node sets, produces the matrix closest to the original. The general algorithm estimates the cardinalities $p$ and $q$ according to the number of “small” eigenvalues; see Figure 2, where the absolute values of the eigenvalues are displayed in nondecreasing order. This algorithm produces a slightly less accurate approximation than the previous one, which is anyway much better than the matrix produced by the red-black ordering.
Figure 3 shows the results for a test matrix similar to the previous one, but with a larger perturbation. The estimation of the cardinalities is inaccurate, but the approximation produced by the specbip methods is quite close to the unperturbed matrix, while the red-black ordering matrix is far from it.
Now, let the matrix returned by a bipartization method, after the final reordering, be partitioned as

$\begin{bmatrix} A_{11} & A_{12} \\ A_{12}^{T} & A_{22} \end{bmatrix},$

where $A_{11}$ and $A_{22}$ are square matrices of size $p$ and $q$, respectively, and let $\operatorname{nnz}(\cdot)$ denote the number of nonzero elements of a matrix. To evaluate the quality of the results, we consider three indices. The first two indices measure the distance of the computed matrix from the adjacency matrix of a bipartite graph, the first one being based on (1.1). The third index measures the approximation error with respect to the starting bipartite network (2.1).
Tables 1, 2, and 3 report the average values of the three above quality indices over 10 realizations of the random test networks. Three different pairs $(p, q)$ are considered; each table refers to different densities of the bipartite block and of the perturbation; $T$ stands for the execution time in seconds.
Table 1

(p,q) = (256,128)   specbip-n   specbip     red-black
index 1             6.66e-17    5.55e-17    6.22e-04
index 2             1.28e-03    1.34e-03    4.68e-03
index 3             4.79e-01    4.82e-01    —
T (s)               1.04e-01    1.06e-01    2.42e-03

(p,q) = (512,256)   specbip-n   specbip     red-black
index 1             7.77e-17    2.00e-16    3.81e-03
index 2             1.12e-04    1.52e-04    3.14e-03
index 3             4.27e-02    6.52e-02    —
T (s)               7.35e-01    7.43e-01    9.29e-04

(p,q) = (1024,512)  specbip-n   specbip     red-black
index 1             0           0           4.21e-02
index 2             2.00e-05    1.25e-04    4.23e-03
index 3             2.53e-03    8.69e-02    —
T (s)               6.51e+00    6.57e+00    1.88e-03
Table 2

(p,q) = (256,128)   specbip-n   specbip     red-black
index 1             6.66e-17    4.44e-17    4.00e-04
index 2             3.39e-04    7.11e-04    3.46e-03
index 3             1.72e-01    1.72e-01    —
T (s)               1.06e-01    1.14e-01    1.34e-03

(p,q) = (512,256)   specbip-n   specbip     red-black
index 1             0           0           2.87e-04
index 2             2.04e-04    2.11e-04    2.13e-03
index 3             1.45e-01    1.32e-01    —
T (s)               7.72e-01    7.52e-01    9.17e-04

(p,q) = (1024,512)  specbip-n   specbip     red-black
index 1             0           0           4.85e-03
index 2             7.63e-07    1.03e-05    5.21e-04
index 3             5.17e-04    9.64e-03    —
T (s)               6.75e+00    6.66e+00    1.46e-03
Table 3

(p,q) = (256,128)   specbip-n   specbip     red-black
index 1             0           0           7.65e-02
index 2             0           4.36e-04    1.10e-02
index 3             1.89e-02    3.72e-02    —
T (s)               1.23e-01    1.20e-01    1.54e-03

(p,q) = (512,256)   specbip-n   specbip     red-black
index 1             0           0           1.41e-01
index 2             0           6.66e-04    1.88e-02
index 3             7.28e-03    4.96e-02    —
T (s)               8.41e-01    8.48e-01    1.82e-03

(p,q) = (1024,512)  specbip-n   specbip     red-black
index 1             0           0           2.58e-01
index 2             0           1.69e-03    8.81e-03
index 3             7.27e-03    9.27e-02    —
T (s)               7.18e+00    7.21e+00    7.89e-03
A comparison of the tables shows that the spectral bipartization algorithm is always more accurate than the red-black ordering method. At the same time, it is much slower than the MatlabBGL function, since in our experiments we compute the whole spectrum of the adjacency matrix without exploiting its sparsity. To be competitive with existing methods for large-scale problems, the spectral method should perform its task by suitable iterative methods that take advantage of the structure of the adjacency matrix.
From the tables, it can also be observed that knowing in advance the cardinalities of the two sets $V_1$ and $V_2$ leads in some cases to a substantial improvement in the quality of the results.
5.1 The yeast network
We illustrate the performance of the spectral bipartization algorithm when applied to the detection of anticommunities by the analysis of a case study.
The yeast network describes the protein interaction network for yeast, each edge representing an interaction between two proteins [13, 20]. The data set is available at [1]. In this section we analyze this network, testing for the presence of a bipartite structure or of a large anticommunity.
The yeast network has 2114 nodes. There are 74 self-loops (nodes connected only to themselves) and 268 nodes disconnected from the network. The adjacency matrix resulting from removing both the self-loops and the disconnected nodes has 149 connected components, which were identified by the getconcomp function from the PQser Matlab toolbox [4].
In the case of a reducible adjacency matrix, the spectral bipartization algorithm should treat each connected component separately. Since most of the components in the yeast network are very small, often consisting of just 2 or 3 nodes, we consider the only component with more than 10 nodes, which has 1458 nodes. We process the corresponding adjacency matrix with our bipartization method.
The algorithm determines the number of numerically zero eigenvalues (see Figure 4) and correspondingly identifies two sets of nodes.
The starting adjacency matrix is represented in the top-left spy plot of Figure 5. The top-right plot shows the same matrix after the ordering produced by the spectral bipartization algorithm has been applied to its rows and columns. This plot clearly displays that there is a large group of nodes in the yeast network that do not communicate much among themselves, that is, an anticommunity. In the same plot, the bipartization detected by the algorithm is marked by red lines.
Our algorithm can also be applied by supplying the values of $p$ and $q$, rather than estimating them from the number of zero eigenvalues. If we do this by setting