# An MBO scheme for clustering and semi-supervised clustering of signed networks

###### Abstract

We introduce a principled method for the signed clustering problem, where the goal is to partition a weighted undirected graph whose edge weights take both positive and negative values, such that edges within the same cluster are mostly positive, while edges spanning across clusters are mostly negative. Our method relies on a graph-based diffuse interface model formulation utilizing the Ginzburg–Landau functional, based on an adaptation of the classic numerical Merriman–Bence–Osher (MBO) scheme for minimizing such graph-based functionals. The proposed objective function aims to minimize the total weight of inter-cluster positively-weighted edges, while maximizing the total weight of the inter-cluster negatively-weighted edges. Our method scales to large sparse networks, and can be easily adjusted to incorporate labelled data information, as is often the case in the context of semi-supervised learning. We tested our method on a number of both synthetic stochastic block models and real-world data sets (including financial correlation matrices), and obtained promising results that compare favourably against a number of state-of-the-art approaches from the recent literature.

###### Contents

- 1 Introduction
- 2 Related literature
- 3 Ginzburg–Landau functional
- 4 Ginzburg–Landau functional on signed graphs
- 5 GLF with constraints
- 6 The algorithm
- 7 Numerical experiments for a signed stochastic block model
- 8 Numerical experiments for the Barabási–Albert model
- 9 Image segmentation
- 10 Time series correlation clustering
- 11 US migration
- 12 Summary and future directions
- Appendix

Keywords: MBO, clustering, signed networks, graph Laplacians, spectral methods, time series

## 1 Introduction

Clustering is one of the most widely used techniques in data analysis, and aims to identify groups of nodes in a network that have similar behaviour or features. It is arguably one of the first tools one uses when facing the task of understanding the structure of a network.

Spectral clustering methods have become a fundamental tool with a broad range of applications in applied mathematics and computer science, and tremendous impact on areas including scientific computing, network science, computer vision, machine learning and data mining, such as ranking and web search algorithms. On the theoretical side, understanding the properties of the spectrum (both eigenvalues and eigenvectors) of the adjacency matrix and its various corresponding Laplacians, is crucial for the development of efficient algorithms with performance guarantees, and leads to a very mathematically rich set of open problems.

Variational methods have a long history in mathematics and physics. In a variational model, the aim is to find minimizers of a given functional (which is often called an ‘energy’ in recognition of the field’s origins, which are rooted in physics, even in situations where this terminology is not physically justified). When formulated in a continuum setting, such models are closely related to partial differential equations (PDEs) which arise as the Euler–Lagrange equations or gradient flow equations in the model. These equations are often nonlinear and can have other interesting properties, such as nonlocality. In recent years it has been recognized that such properties can be useful for clustering problems as well [1, 2]. When attempting a hard clustering problem, i.e. a clustering problem in which each node is to be assigned to exactly one cluster, the approach is usually to relax the exact combinatorial problem. For example, in the case of spectral relaxations, one typically leverages the spectrum of certain graph-based matrix operators, in particular their eigenvalues and especially eigenvectors. This introduces the question of how to translate the real-valued outcomes of the relaxed problem into a discrete cluster label. To this end, non-linearities can prove useful as they can be used to drive the real-valued approximate solution away from the decision boundaries. Moreover, what is non-local in a continuum setting, can be made local in a graph setting, by adding edges between nodes that are located far away in continuum space. Furthermore, since the equations which result from graph-based variational methods tend to resemble discretized PDEs, the large existing literature on numerical methods for solving PDEs often provides fast algorithms for solving such equations. Considerations of similar nature have led to a growing interest in the use of variational methods in a graph setting [3, 4, 5, 6].

In certain applications, negative weights can be used to denote dissimilarity or distance between a pair of nodes in a network. The analysis of such signed networks has become an increasingly important research topic in the past decade, and examples include social networks that contain both friend and foe links, shopping bipartite networks that encode like and dislike relationships between users and products [7], and online news and review websites such as Epinions [8] and Slashdot [9] that allow users to approve or denounce others [10]. Such networks may be regarded as signed graphs, where a positive edge weight captures the degree of trust or similarity, while a negative edge weight captures the degree of distrust or dissimilarity. Further efforts for the analysis of signed graphs have led to a wide range of applications including edge prediction [11, 10], node classification [12, 13], node embeddings [14, 15, 16, 17], and node ranking [18, 19]. We refer the reader to [20] for a recent survey on the topic.

Finally, another main motivation for our proposed line of work arises in the area of time series analysis, where clustering time series data is a task of particular importance [21]. It arises in a number of applications in a diverse set of fields ranging from biology (analysis of gene expression data [22]), to economics (time series that capture macroeconomic variables [23]). A particularly widely used application originates from finance, where one aims to cluster the large number of time series corresponding to financial instruments in the stock market [24, 25]. In all the above application scenarios, a popular approach in the literature is to consider the similarity measure given by the Pearson correlation coefficient (and related distances), which measures the linear dependence between pairs of variables and takes values in $[-1, 1]$. Furthermore, the resulting empirical correlation matrix can be construed as a weighted network, where the signed edge weights capture pairwise correlations, which reduces the task of clustering the multivariate time series to that of clustering the underlying signed network. For scalability and robustness reasons, one can further consider thresholding on the p-value associated to each individual sample correlation entry [26], following tests of statistical significance, which renders the network sparse, and thus amenable to larger scale computations. We refer the reader to the seminal work of Smith et al. [27] for a detailed survey and comparison of various methodologies for casting time series data into networks, in the context of fMRI time series. Their overall conclusion that correlation-based approaches are typically quite successful at estimating the connectivity of brain networks from fMRI time series data provides sound motivation for the development of algorithms for analysis of signed networks and correlation clustering.
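As a concrete illustration of the pipeline just described, the sketch below builds a sparse signed affinity matrix from toy multivariate time series, keeping only Pearson correlations that pass a significance test. The group structure, noise level, and the 0.01 p-value cutoff are illustrative choices of ours, not prescriptions from the literature.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy multivariate time series: 6 series of length 200, in two groups.
# Series within a group share a common factor, so they correlate positively.
factor = rng.standard_normal((2, 200))
X = np.vstack([factor[i // 3] + 0.5 * rng.standard_normal(200) for i in range(6)])

n = X.shape[0]
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        r, p = stats.pearsonr(X[i], X[j])
        # Keep an edge only if the correlation is statistically significant;
        # the 0.01 threshold is an arbitrary illustrative choice.
        if p < 0.01:
            W[i, j] = W[j, i] = r

# W is now a (typically sparse) symmetric signed affinity matrix.
```

Entries of `W` corresponding to same-group pairs come out strongly positive, while insignificant cross-group correlations are zeroed out, sparsifying the network.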

Contribution. Our main contribution is to extend the Ginzburg–Landau functional to the setting of clustering signed networks, and show the applicability of the MBO scheme in this context, for different types of constraints that arise in the semi-supervised setting. As a side contribution, we provide a more computationally efficient way to determine node-cluster membership (as an alternative to the traditional projection/thresholding step). In addition, we show extensive numerical experiments on both synthetic and real data, showcasing the competitiveness of our approach when compared to state-of-the-art methods, across various noise and sparsity regimes.

Paper outline. The remainder of this paper is organized as follows. Section 2 is a summary of related work from the signed clustering literature. Section 3 gives a brief overview of the Ginzburg–Landau functional. Section 4 extends the Ginzburg–Landau framework to the setting of signed graphs, by explicitly handling negative weights and modifying the MBO algorithm accordingly. Section 5 pertains to the semi-supervised learning setting, and incorporates various types of a priori information available to the user. Section 6 summarizes the structure of the algorithm we propose. Section 7 contains numerical experiments under a signed stochastic block model, and includes various scenarios in the semi-supervised setting. Section 8 details the outcome of numerical experiments under the Barabási–Albert model. Section 9 considers an application to image segmentation, Section 10 to clustering correlation matrices arising from financial time-series data, and Section 11 to computational social science using a network resulting from migration patterns between US counties. Section 12 summarizes our findings and details future research directions. Finally, we defer to the Appendix the technical details that pertain to the projection/thresholding step.

Notation. We summarize our notation in Table 1.

## 2 Related literature

The problem of clustering signed graphs traces its roots to the work of Cartwright and Harary from the 1950s on social balance theory [28, 29]. Consider, for example, three mutually connected nodes from a signed graph, and define the relationship between two nodes to be positive if the edge in between carries a $+$ sign (they are “friends”), and negative if the edge in between carries a $-$ sign (they are “enemies”). We say that such a triad is balanced if the product of its signs is positive, and that the signed graph is balanced if all of its cycles are positive [29], i.e., each cycle contains an even number of negative edges. Cartwright and Harary proved the so-called structure theorem: if a signed graph is balanced, then the set of vertices can be partitioned into two subsets, called plus-sets, such that all of the positive edges are between vertices within a plus-set and all negative edges are between vertices in different plus-sets.

Davis [30] proposed a weaker notion of balance, weak balance theory, which relaxed the balanced relationship by allowing an enemy of one’s enemy to also be an enemy, and showed that a network with both positive and negative edges is k-weakly balanced if and only if its nodes can be clustered into $k$ groups such that edges within groups are positive and edges between groups are negative. For a $k$-balanced network, the problem of finding the partition of the node set can be viewed as an optimization problem: find the $k$-partition which minimizes the criterion $P$ given by

$$P(C) = \alpha\, \mathcal{P}(C) + (1 - \alpha)\, \mathcal{N}(C) \tag{1}$$

over $\mathcal{C}_k$, the set of all possible $k$-partitions. Here $\mathcal{P}(C)$ is the number of positive arcs between plus-sets, $\mathcal{N}(C)$ is the number of negative arcs within plus-sets, and $\alpha \in [0, 1]$ is a weighting constant [31]. If the graph is $k$-balanced, the set of minimizers of $P$ will not change when $\alpha$ varies; if the graph is not $k$-balanced, the parameter $\alpha$ allows control over which kind of error is penalized more strongly, positive ties between groups or negative ties within groups.

Motivated by this characterization, the $k$-way clustering problem in signed networks amounts to finding a partition into $k$ clusters such that most edges within clusters are positive and most edges across clusters are negative. As an alternative formulation, one may seek a partition such that the number of violations is minimized, i.e., negative edges within clusters and positive edges across clusters. In order to avoid partitions where clusters contain only a few nodes, one often prefers to incentivize clusters of similar large size or volume, leading to balancing factors such as the denominators in (5) or in (6), discussed further below.

Weak balance theory has motivated the development of algorithms for clustering signed networks. Doreian and Mrvar [32] proposed a local search approach in the spirit of the Kernighan–Lin algorithm [33]. Yang et al. [34] introduced an agent-based approach by considering a certain random walk on the graph. Anchuri et al. [35] proposed a spectral approach to optimize modularity and other objective functions in signed networks, and demonstrated its effectiveness in detecting communities from real world networks. In a different line of work, known as correlation clustering, Bansal et al. [36] considered the problem of clustering signed complete graphs, proved that it is NP-complete, and proposed two approximation algorithms with theoretical guarantees on their performance. On a related note, Demaine and Immorlica [37] studied the same problem but for arbitrary weighted graphs, and proposed an $O(\log n)$ approximation algorithm based on linear programming. For correlation clustering, in contrast to k-way clustering, the number of clusters is not given in advance, and there is no normalization with respect to size or volume.

Kunegis et al. [38] proposed new spectral tools for clustering, link prediction, and visualization of signed graphs, by solving a signed version of the 2-way ratio-cut problem via the signed graph Laplacian [39]

$$\bar{L} := \bar{D} - W, \tag{2}$$

where $\bar{D}$ is the diagonal matrix with $\bar{D}_{ii} = \sum_{j=1}^{n} |w_{ij}|$. The authors proposed a unified approach to handle graphs with both positive and negative edge weights, motivated by a number of tools and applications, including random walks, graph clustering, graph visualization and electrical networks. The signed extensions of the combinatorial Laplacian also exist for the random-walk normalized Laplacian

$$\bar{L}_{rw} := I - \bar{D}^{-1} W \tag{3}$$

$$\bar{L}_{sym} := I - \bar{D}^{-1/2}\, W\, \bar{D}^{-1/2}, \tag{4}$$

the latter of which is particularly suitable for skewed degree distributions. Here $I$ denotes the identity matrix. Recent work, by a subset of the authors in our present paper, provided the first theoretical guarantees, under a signed stochastic block model, for the above Signed Laplacian [40].
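The constructions above can be sketched in a few lines of dense linear algebra; the helper name `signed_laplacians` and the four-node toy graph are our own illustrative choices (large real networks would call for sparse matrices instead).

```python
import numpy as np

def signed_laplacians(W):
    """Signed Laplacian and normalized variants in the spirit of Kunegis et al.
    W must be symmetric with zero diagonal; entries may be negative."""
    d_bar = np.abs(W).sum(axis=1)            # signed degrees: sum_j |w_ij|
    L_bar = np.diag(d_bar) - W               # signed Laplacian
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_bar))
    L_sym = D_inv_sqrt @ L_bar @ D_inv_sqrt  # symmetric normalization
    L_rw = np.diag(1.0 / d_bar) @ L_bar      # random-walk normalization
    return L_bar, L_rw, L_sym

# Small signed graph: two "friendly" pairs joined by hostile edges.
W = np.array([[ 0.,  1., -1.,  0.],
              [ 1.,  0.,  0., -1.],
              [-1.,  0.,  0.,  1.],
              [ 0., -1.,  1.,  0.]])
L_bar, L_rw, L_sym = signed_laplacians(W)
```

Unlike the ordinary Laplacian, `L_bar` stays positive semidefinite even with negative weights, which can be confirmed by inspecting its spectrum with `np.linalg.eigvalsh`.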

In related work, Dhillon et al. [41] proposed a formulation based on the Balanced Ratio Cut objective

$$\min_{\{x_c\}_{c=1}^{k} \in \mathcal{X}} \; \sum_{c=1}^{k} \frac{x_c^\top \left(D^{+} - W\right) x_c}{x_c^\top x_c}, \tag{5}$$

and the closely related Balanced Normalized Cut (BNC) objective, for which the balance factors in the denominators are replaced by $x_c^\top \bar{D}\, x_c$,

$$\min_{\{x_c\}_{c=1}^{k} \in \mathcal{X}} \; \sum_{c=1}^{k} \frac{x_c^\top \left(D^{+} - W\right) x_c}{x_c^\top \bar{D}\, x_c}. \tag{6}$$

Here $\mathcal{X}$ denotes the collection of all sets containing the indicator vectors of a $k$-partition of the node set $V$, i.e. if $\{x_1, \dots, x_k\} \in \mathcal{X}$, then each of the vectors $x_c$ has entries

$$\left(x_c\right)_i = \begin{cases} 1, & \text{if node } i \in A_c, \\ 0, & \text{otherwise}, \end{cases} \tag{7}$$

for a collection of disjoint clusters $A_1, \dots, A_k$ which partitions the node set $V$. The same authors claimed that the approach of Kunegis et al. [38] via the Signed Laplacian faces a fundamental weakness when directly extending it to $k$-way clustering, and obtained better results on several real data sets via their proposed algorithm which optimizes the BNC objective function. On the other hand, [40] reported good numerical performance of the Signed Laplacian even for higher values of $k$, aside from providing a robustness analysis of it for $k = 2$.

A rather different approach for signed graph clustering has been investigated by Hsieh et al. [42], who proposed a low-rank model, based on the observation that the problem of clustering a signed network can be interpreted as recovering the missing structure of the network. The missing relationships are recovered via state-of-the-art matrix completion algorithms (MC-SVP), which first complete the network and then recover the clusters via the k-means algorithm using the top $k$ eigenvectors of the completed matrix. An alternative method (MC-MF), scalable to very large signed networks, is based on a low-rank matrix factorization approach, which first completes the network using matrix factorization, runs k-means on each of the two resulting low-rank factors, and selects the clustering which achieves the lower value of the BNC objective function (6).

Both strong and weak balance are defined in terms of local structure at the level of triangles. As observed early in the social balance theory literature, local patterns between the nodes of a signed graph induce certain global characteristics. Chiang et al. [43] exploited both local and global aspects of social balance theory, and proposed algorithms for sign prediction and clustering of signed networks. They defined more general measures of social imbalance based on cycles in the graph, and showed that the classic Katz measure, ubiquitous for unsigned link prediction, also has a balance theoretic interpretation in the context of signed networks. The global structure of balanced networks leads to a low-rank matrix completion approach via convex relaxation, which scales well to very large problems. This approach is made possible by the fact that the adjacency matrix of a complete k-weakly balanced network has rank $1$ if $k \leq 2$, and rank $k$ for all $k > 2$ [43].

The recent work of Mercado et al. [44] presented an extended spectral method based on the geometric mean of Laplacians. In another recent work, a subset of the authors showed in [45] that, for $k = 2$, signed clustering can be formulated as an instance of the group synchronization problem over $\mathbb{Z}_2$, for which spectral, semidefinite programming relaxations, and message passing algorithms have been studied; extensions to the multiplex setting and incorporation of constraints have also been considered. Finally, we refer the reader to [46, 47] for recent surveys on clustering signed and unsigned graphs.

## 3 Ginzburg–Landau functional

The Ginzburg–Landau functional (GLF) was originally introduced in the materials science literature to study phase separation, and has since also found many uses in image processing and image analysis due to its connections with the total variation functional [48, 49]. In this section, we briefly review the main idea behind the GLF together with a description of the (local) minimizer(s) of the functional in terms of solutions of the Allen–Cahn equation (ACE). Then, we will show how this approach can be adapted to the discrete case of clustering over signed graphs.

### 3.1 GLF as potential energy

In this introductory presentation, which is based on [50, 51], we restrict to the one-dimensional case for simplicity. Consider a real-valued function $u$ in the Sobolev space $H^1(\Omega)$, where $\Omega \subset \mathbb{R}$ is an interval. The GLF is defined as

$$\operatorname{GL}(u) := \frac{\epsilon}{2} \int_{\Omega} |u_x|^2 \, dx + \frac{1}{\epsilon} \int_{\Omega} \frac{1}{4} \left(u^2 - 1\right)^2 dx, \tag{8}$$

for some $\epsilon > 0$. The second term has the shape of a double well and is often labelled $W(u) := \frac{1}{4}\left(u^2 - 1\right)^2$. The GLF is interpreted as the potential energy for the physical observable $u$. At this stage, $u$ is also understood as time dependent and we want to find solutions of

$$\frac{\partial u}{\partial t} = -\frac{\delta \operatorname{GL}}{\delta u}, \tag{9}$$

where $\frac{\delta \operatorname{GL}}{\delta u}$ is the functional derivative of $\operatorname{GL}$; such solutions lead in their time evolution to the local minima of the GLF. A common choice for the boundary values of the problem (9) is the Neumann condition

$$u_x = 0 \quad \text{on } \partial\Omega. \tag{10}$$

In this way, calculating the functional derivative on the right-hand side of (9) gives rise to the ACE

$$\frac{\partial u}{\partial t} = \epsilon\, u_{xx} - \frac{1}{\epsilon} \left(u^3 - u\right), \tag{11}$$

where the last term, $\frac{1}{\epsilon}\left(u^3 - u\right) = \frac{1}{\epsilon} W'(u)$, involves the derivative of the double-well potential. Moreover, it is possible to verify by direct calculation that the GLF is decreasing in time along the solution of the ACE, that is

$$\frac{d}{dt} \operatorname{GL}\big(u(\cdot, t)\big) \leq 0, \tag{12}$$

as is expected for the evolution of a physical observable along its energy potential. The meaning of the coefficient $\epsilon$ can be understood as follows. A stationary solution for the ACE, such that $\frac{\partial u}{\partial t} = 0$ in (11), is given by

$$u(x) = \tanh\left(\frac{x}{\sqrt{2}\, \epsilon}\right). \tag{13}$$

By taking the first derivative with respect to the spatial coordinate, we can see that the parameter $\epsilon$ is responsible for the sharpness of the solution (see Figure 1). Thus, given the values of the function at the boundary of its domain, the ACE can be understood as the evolution of the layer profile connecting the two boundary values of the system.
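The stationary layer profile $u(x) = \tanh\!\big(x / (\sqrt{2}\,\epsilon)\big)$ can be checked numerically: plugging it into the right-hand side of the Allen–Cahn equation should give (approximately) zero. The grid resolution and the value of $\epsilon$ below are arbitrary illustrative choices.

```python
import numpy as np

# Verify that u(x) = tanh(x / (sqrt(2) * eps)) makes the stationary ACE
# residual eps * u'' - (u**3 - u) / eps vanish, up to finite-difference error.
eps = 0.3
x = np.linspace(-2.0, 2.0, 2001)
h = x[1] - x[0]
u = np.tanh(x / (np.sqrt(2.0) * eps))

# Second derivative by central finite differences (interior points only).
u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
residual = eps * u_xx - (u[1:-1]**3 - u[1:-1]) / eps
```

The residual is of the order of the finite-difference truncation error, far smaller than the terms it is balancing, confirming the tanh profile as a stationary solution.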

### 3.2 GLF and clustering

Using tools from nonlocal calculus, see Section 2.3 in [1], it is possible to write a version of the GLF where the density function is defined on a network. To fix the notation, we denote by $G = (V, E, W)$ a positive-weighted undirected graph with vertex set $V$, edge set $E$ and affinity matrix $W$, which is a symmetric matrix whose entries $w_{ij}$ are zero if $i = j$ or $i \not\sim j$ (the symbol $i \sim j$, respectively $i \not\sim j$, denotes that nodes $i$ and $j$ are connected, respectively not connected, by an edge), and nonnegative otherwise.
From the affinity matrix, one defines the diagonal degree matrix $D$ as

$$D_{ii} := \sum_{j=1}^{n} w_{ij}, \qquad i = 1, \dots, n, \tag{14}$$

where $n = |V|$ is the cardinality of the vertex set. When the entries of the affinity matrix are positive, the associated network is often called an unsigned graph. In the same way, the discrete equivalent of the density function is now a column vector on the vertex set, that is $u \in \mathbb{R}^n$, written as

$$u = (u_1, u_2, \dots, u_n)^\top. \tag{15}$$

Following [3], it turns out that the network version of (8) for unsigned graphs, also called graph GLF (GGLF), is given by

$$\operatorname{GL}(u) = \frac{\epsilon}{2}\, \langle u, L u \rangle + \frac{1}{4 \epsilon} \sum_{i=1}^{n} \left(u_i^2 - 1\right)^2, \tag{16}$$

where $L := D - W$ is the (combinatorial) graph Laplacian, and $\langle \cdot, \cdot \rangle$ denotes the usual inner product on $\mathbb{R}^n$. It is important to notice that the following expression holds true

$$\langle u, L u \rangle = \frac{1}{2} \sum_{i,j=1}^{n} w_{ij} \left(u_i - u_j\right)^2 \geq 0. \tag{17}$$

The inequality in (17) shows that $L$ is positive semidefinite (i.e., it has non-negative, real-valued eigenvalues, including $0$) and this assures the non-negativity of the GGLF (16). Again, the minimization of the GGLF can shed light on the interpretation of the terms composing the potential. The second term in (16) will force minimizers to have entries which are close to $1$ or $-1$ in order to keep $\operatorname{GL}$ small. The first term in (16) penalizes the assignment to different clusters of nodes that are connected by highly weighted edges. This means that minimizers of $\operatorname{GL}$ are approximate indicator vectors of clusters that have few highly weighted edges between them. In the end, we interpret nodes having different sign as belonging to different clusters.

As before, minimizing the GGLF using the method of gradient descent, we obtain the ACE for networks, which we write component-wise as

$$\frac{d u_i}{d t} = -\epsilon\, (L u)_i - \frac{1}{\epsilon} \left(u_i^3 - u_i\right), \tag{18}$$

where $(L u)_i$ indicates the $i$-th component of the vector $L u$. Instead of the combinatorial graph Laplacian $L$, it is possible to use the random walk Laplacian $L_{rw} := D^{-1} L$ (which has non-negative, real eigenvalues), or the symmetric graph Laplacian $L_{sym} := D^{-1/2} L D^{-1/2}$ (symmetric positive semidefinite). In fact, [52] argues for the use of $L_{sym}$, but no rigorous results are known that confirm this is necessarily the best choice for each application.
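The component-wise gradient flow above can be sketched with a simple explicit Euler loop; the toy graph (two triangles joined by a weak edge), the step size, and the initial condition are illustrative choices of ours, not a tuned implementation.

```python
import numpy as np

def graph_allen_cahn(W, u0, eps=1.0, dt=0.01, steps=2000):
    """Explicit-Euler gradient flow for the graph Allen-Cahn equation:
    du/dt = -eps * L u - (u**3 - u) / eps, with L the combinatorial Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (-eps * (L @ u) - (u**3 - u) / eps)
    return u

# Two triangles joined by a single weak edge between nodes 2 and 3.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1

u0 = np.array([0.3, 0.1, -0.1, -0.3, 0.1, -0.2])
u = graph_allen_cahn(W, u0)
# Entries are driven toward the wells at +1 and -1, nearly constant per triangle.
```

The diffusion term enforces consensus within each densely connected triangle, while the double-well term pushes the entries toward $\pm 1$, so the sign pattern of `u` reads off an approximate two-cluster indicator.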

### 3.3 Numerical solution of the ACE

Various methods have been employed to numerically compute (approximate) minimizers of $\operatorname{GL}$. In [52] a gradient flow approach is used and (18), interpreted as a system of ordinary differential equations (ODEs), is solved. The hope is that the gradient flow dynamics will converge to a local minimizer of $\operatorname{GL}$. Under some assumptions [53], this hope is theoretically justified [54], but numerically the nonconvexity of $\operatorname{GL}$ can cause some issues. In [52] such issues were tackled with a convex splitting scheme.

Another strategy which has been developed to approximately solve the ACE for networks consists in using a variation of the Merriman–Bence–Osher (MBO) scheme (also known as threshold dynamics). This was first introduced in [55, 56] in a continuum setting as a proposed algorithm to approximate flow by mean curvature. Following [1], we first neglect the double-well term and discretize (18) in the following way

$$\frac{u^{n+1/2} - u^{n}}{\tau} = -L\, u^{n+1/2}, \tag{19}$$

where $\tau$ is the length of a small time interval to be tuned and the superscript indicates the step of the iterative procedure ($n + 1/2$ simply specifies an intermediate step between the $n$-th step and the subsequent one, which will be labelled as $n + 1$). The above leads to the MBO equation (MBOE)

$$u^{n+1/2} = \left(I + \tau L\right)^{-1} u^{n}, \tag{20}$$

where $I$ is the identity matrix. After performing the eigendecomposition of the Laplacian, $L = X \Lambda X^{-1}$, (20) becomes

$$u^{n+1/2} = X \left(I + \tau \Lambda\right)^{-1} X^{-1}\, u^{n}, \tag{21}$$

where $X$ is a matrix whose columns are the normalized eigenvectors of $L$ and $\Lambda$ is the relevant eigenvalue diagonal matrix. Since the double-well potential pushes the components of $u$ to be either $1$ or $-1$, its role can be approximated via a thresholding step, in the following way

$$u_i^{n+1} = \begin{cases} 1, & \text{if } u_i^{n+1/2} \geq 0, \\ -1, & \text{if } u_i^{n+1/2} < 0. \end{cases} \tag{22}$$

This procedure can be iterated until a convergence criterion is met. In practice, the diffusion step (20) is repeated $N_\tau$ times before the thresholding step (22). This corresponds to multiplying $u^{n}$ in (20) by $\left(I + \frac{\tau}{N_\tau} L\right)^{-N_\tau}$ (note the division of $\tau$ by $N_\tau$).

The graph version of MBO has been used successfully in recent years in various applications, both applied to $L_{sym}$ [1], its multi-class extension [57], and its signless Laplacian variant [58]. In general, we notice that the choice of $\epsilon$ also poses a problem. Smaller values of $\epsilon$ are expected to lead to minimizers that are closer to being indicator vectors, however the discrete length scales of the problem impose a lower bound on the values of $\epsilon$ that can be resolved. The MBO scheme avoids this problem since the double-well terms are substituted with a thresholding step. In this way, we can fix $\epsilon = 1$. Using the MBO scheme instead of the gradient flow does not solve all problems: it eliminates the terms in the ODEs that originated with the non-convex term in $\operatorname{GL}$ and it removes $\epsilon$ from consideration, but instead a value for $\tau$ has to be chosen. The discrete length scales in the problem disallow values of $\tau$ that are too small, yet $\tau$ has to be small enough to prevent oversmoothing by the diffusion term.
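The two-cluster diffusion-then-threshold iteration described in this section can be sketched as follows; the function name, the toy graph of two cliques joined by a weak edge, and the values of $\tau$ and $N_\tau$ are illustrative assumptions of ours.

```python
import numpy as np

def graph_mbo_two_clusters(W, u0, tau=0.5, n_diff=5, iters=30):
    """Two-cluster graph MBO sketch: n_diff implicit diffusion substeps of
    length tau/n_diff via the eigendecomposition of L, then threshold to +/-1."""
    L = np.diag(W.sum(axis=1)) - W
    lam, X = np.linalg.eigh(L)                   # L = X diag(lam) X^T
    # (I + (tau/n_diff) L)^(-n_diff), assembled in the eigenbasis:
    diff = X @ np.diag((1.0 + (tau / n_diff) * lam) ** (-n_diff)) @ X.T
    u = u0.copy()
    for _ in range(iters):
        u = np.where(diff @ u >= 0, 1.0, -1.0)   # diffusion, then threshold
    return u

# Two 4-cliques joined by a single weak edge.
W = np.kron(np.eye(2), np.ones((4, 4))) - np.eye(8)
W[3, 4] = W[4, 3] = 0.2
u0 = np.array([1., 1., -1., 1., -1., -1., 1., -1.])
u = graph_mbo_two_clusters(W, u0)
```

Starting from a noisy labelling, a single diffusion-and-threshold sweep already repairs the mislabelled nodes in each clique, and the clean two-clique partition is a fixed point of the iteration.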

### 3.4 Generalization to multiple clusters

The Ginzburg–Landau method has been adapted to allow for clustering or classification into $K$ clusters [59, 60, 61, 62]. We briefly review this line of work here, since our proposed approach for clustering signed graphs is for general $K$ as well. This construction relies upon an assignment or characteristic matrix $U$ whose entries $u_{ik}$ correspond to the weight for a node $i$ to belong to a cluster $k$. More precisely, consider the simplex

$$\Sigma_K := \left\{ (p_1, \dots, p_K) \in [0, 1]^{K} : \sum_{k=1}^{K} p_k = 1 \right\}, \tag{23}$$

which corresponds to row vectors whose components sum up to $1$. Then, we can associate to each node $i$ a characteristic vector (we generally refer to the vector whose entries are the “intensity” of belonging to a cluster as a characteristic vector, also when these coefficients are not only $0$ or $1$), written as a column vector

$$u_i = (u_{i1}, u_{i2}, \dots, u_{iK})^\top, \tag{24}$$

which specifies the weight for the node $i$ to belong to each of the $K$ clusters in the network. So, the characteristic matrix of all the $n$ nodes in the system is defined as

$$U = (u_1, u_2, \dots, u_n)^\top \in \mathbb{R}^{n \times K}. \tag{25}$$

In the end, as explained in [61], the generalization of the GLF reads as

$$\operatorname{GL}(U) = \frac{\epsilon}{2}\, \langle U, L U \rangle + \frac{1}{2 \epsilon} \sum_{i=1}^{n} \left( \prod_{k=1}^{K} \frac{1}{4} \left\| u_i - e_k \right\|_1^2 \right), \tag{26}$$

where $\| \cdot \|_1$ is the taxicab metric, $\langle U, L U \rangle := \operatorname{tr}\left(U^\top L U\right)$, and $e_k$ corresponds to the $k$-th canonical basis vector defined as

$$e_k = (0, \dots, 0, 1, 0, \dots, 0)^\top, \tag{27}$$

whose components satisfy $(e_k)_k = 1$ and $(e_k)_j = 0$ for $j \neq k$. In this context, the MBO scheme for the characteristic matrix reads as

$$U^{n+1/2} = \left[ X \left( I + \frac{\tau}{N_\tau} \Lambda \right)^{-1} X^{-1} \right]^{N_\tau} U^{n}, \tag{28}$$

where we have fixed $\epsilon = 1$, as explained at the end of Section 3.3, and we have included the contribution from the $N_\tau$ diffusion iterations directly in the matrix power, as explained below equation (22). Then, following [61], we project each row of $U^{n+1/2}$ on the simplex $\Sigma_K$. We call the corresponding vector

$$u_i^{P} := \operatorname{proj}_{\Sigma_K} \left( u_i^{n+1/2} \right). \tag{29}$$

It is important to notice that in the intermediate step $u_i^{n+1/2}$ need not belong to $\Sigma_K$. The thresholding step consists in determining the vertex of $\Sigma_K$ closest to $u_i^{P}$ (when $u_i^{P}$ has the same distance to multiple vertices, one of those is picked at random with uniform probability), that is

$$u_i^{n+1} = e_{k^*}, \qquad k^* = \operatorname*{argmin}_{k \in \{1, \dots, K\}} \left\| u_i^{P} - e_k \right\|. \tag{30}$$

In the end, we have $u_i^{n+1} \in \{e_1, \dots, e_K\}$ for every node $i$. In Section 4.2, we will use a simplified version of the MBO scheme which reduces the projection (29) and thresholding (30) to a single step (the equivalence of these two versions will be proved in Appendix A.1).
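The equivalence between project-then-threshold and a direct argmax can be illustrated numerically. The sketch below uses the standard sort-based Euclidean projection onto the probability simplex (it is not the algorithm of [67] reproduced verbatim), and checks on random rows that the nearest simplex vertex after projection coincides with the largest component of the unprojected row.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(2)
for _ in range(1000):
    v = rng.standard_normal(5)
    p = project_simplex(v)
    # Nearest simplex vertex e_k to the projected row p ...
    nearest_vertex = int(np.argmin([np.linalg.norm(p - e) for e in np.eye(5)]))
    # ... coincides with the argmax of the unprojected row.
    assert nearest_vertex == int(np.argmax(v))
```

The reason is that the projection only shifts and clips the entries, preserving their order, and the closest vertex of the simplex to any point is determined by that point's largest coordinate.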

## 4 Ginzburg–Landau functional on signed graphs

In this section, we define a new affinity matrix that also incorporates negative weights, and investigate how the degree and Laplacian matrices can be suitably modified. In this way, we also show how negative weights can be interpreted in the context of the GL functional. Moreover, we discuss how the standard MBO algorithm can be simplified.

### 4.1 Signed Laplacians

In the following, we will consider connected weighted undirected simple graphs (hence with symmetric affinity matrices and no self-loops). The corresponding affinity matrix $W$ is a symmetric matrix whose entries $w_{ij}$ are zero if $i = j$ or $i \not\sim j$, and nonzero real otherwise. This can be decomposed into two matrices $W^{+}$ and $W^{-}$ whose components are defined as

$$w_{ij}^{+} := \max(w_{ij}, 0), \qquad w_{ij}^{-} := -\min(w_{ij}, 0), \tag{31}$$

respectively, so that

$$W = W^{+} - W^{-}. \tag{32}$$

In this decomposition, note that $W^{+}$, respectively $W^{-}$, takes into account the affinity, respectively antipathy, between nodes. Along the same lines as [38], we define the signed degree matrix $\bar{D}$ as

$$\bar{D}_{ii} := \sum_{j=1}^{n} |w_{ij}|, \qquad i = 1, \dots, n, \tag{33}$$

where $|w_{ij}|$ is the absolute value of $w_{ij}$. As for the affinity matrix, $\bar{D}$ can be decomposed into $D^{+}$ and $D^{-}$, with components

$$D^{+}_{ii} := \sum_{j=1}^{n} w^{+}_{ij}, \qquad D^{-}_{ii} := \sum_{j=1}^{n} w^{-}_{ij}, \tag{34}$$

so that

$$\bar{D} = D^{+} + D^{-}. \tag{35}$$

Following [38], the signed Laplacian is defined as

$$\bar{L} := \bar{D} - W, \tag{36}$$

together with its variants

$$\bar{L}_{rw} := \bar{D}^{-1} \bar{L} = I - \bar{D}^{-1} W \tag{37}$$

and

$$\bar{L}_{sym} := \bar{D}^{-1/2}\, \bar{L}\, \bar{D}^{-1/2} = \bar{D}^{1/2}\, \bar{L}_{rw}\, \bar{D}^{-1/2}. \tag{38}$$

Kunegis et al. [38] showed that the signed Laplacian $\bar{L}$ is positive semidefinite, and furthermore, unlike the ordinary Laplacian $L$, the signed Laplacian is strictly positive-definite for certain graphs, including most real-world networks. It is known that (see for instance [63, 64]) if a matrix $A$ is positive semidefinite then $B^\top A B$ is also positive semidefinite, provided that $B$ is invertible. We can apply this to show from the definition (38) that $\bar{L}_{sym}$ is also positive semidefinite, since $\bar{D}^{-1/2}$ is diagonal with rank $n$. Positive semidefiniteness of $\bar{L}_{rw}$ (in the sense of non-negative eigenvalues) follows from the second equality in (38), since it shows that $\bar{L}_{rw}$ and $\bar{L}_{sym}$ are similar matrices.

In particular, for (36) we can see that a natural decomposition stemming from (32) and (35) occurs,

$$\bar{L} = \left(D^{+} - W^{+}\right) + \left(D^{-} + W^{-}\right) =: L^{+} + Q^{-}, \tag{39}$$

where we stress that $L^{+} = D^{+} - W^{+}$ is the (combinatorial) Laplacian of $W^{+}$ and $Q^{-} = D^{-} + W^{-}$. The latter is known in the literature as the signless Laplacian [65] and, to the best of our knowledge, first appeared, although without this nomenclature, in [66].

To understand the meaning of the different terms contributing to the signed Laplacian, first consider the following quadratic form

$$\langle u, Q^{-} u \rangle = \langle u, D^{-} u \rangle + \langle u, W^{-} u \rangle, \tag{40}$$

for some $u \in \mathbb{R}^{n}$. Using the definition of $D^{-}$ together with the fact that $W^{-}$ is symmetric with null diagonal, we have for the first term (here, we use the shorthand notation $\sum_{i,j} := \sum_{i=1}^{n} \sum_{j=1}^{n}$, together with $d_i^{-} := D^{-}_{ii}$ and $d_i^{+} := D^{+}_{ii}$)

$$\langle u, D^{-} u \rangle = \sum_{i} d_i^{-}\, u_i^2 = \sum_{i,j} w_{ij}^{-}\, u_i^2 = \frac{1}{2} \sum_{i,j} w_{ij}^{-} \left(u_i^2 + u_j^2\right), \tag{41}$$

and for the second

$$\langle u, W^{-} u \rangle = \sum_{i,j} w_{ij}^{-}\, u_i u_j. \tag{42}$$

Putting all the contributions together gives

$$\langle u, Q^{-} u \rangle = \frac{1}{2} \sum_{i,j} w_{ij}^{-} \left(u_i + u_j\right)^2. \tag{43}$$

The minimum of (43) is achieved when $u_i = -u_j$ for every pair of nodes with $w^{-}_{ij} > 0$. In the simplified case in which the node vector is limited to dichotomic values such as $u_i \in \{-1, 1\}$, one can envision the system to be made up of two clusters and the value of $u_i$ (being only $1$ or $-1$) specifies the node-cluster association. In addition, with this particular choice, (43) counts the number of times, weighted by $w^{-}_{ij}$, two nodes were placed incorrectly in the same cluster (which we want to minimize). We note that choosing $\{0, 1\}$ as dichotomic values would reach a different conclusion, since the result of the minimization would be $u_i = u_j = 0$, that is, the nodes would belong to the same cluster. In regard to $L^{+}$ (being the Laplacian of the affinity matrix $W^{+}$), using either $\{-1, 1\}$ or $\{0, 1\}$ leads to the same interpretation as for (17). A more rigorous point of view based on the TV measure can be found in [58] and references therein.
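The quadratic-form identity for the signless part, $x^\top (D^{-} + W^{-})\, x = \frac{1}{2} \sum_{i,j} w^{-}_{ij} (x_i + x_j)^2$, can be checked numerically on a random signed matrix; the construction below is a minimal sketch of ours, with a dense random affinity matrix for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Random symmetric signed affinity matrix with zero diagonal.
A = rng.standard_normal((7, 7))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

W_minus = np.maximum(-W, 0.0)          # negative part, stored with positive entries
D_minus = np.diag(W_minus.sum(axis=1))
Q_minus = D_minus + W_minus            # signless Laplacian of the negative part

x = rng.standard_normal(7)
quad = x @ Q_minus @ x
# Pairwise form: 1/2 * sum_ij w^-_ij * (x_i + x_j)^2
pair_sum = 0.5 * np.sum(W_minus * np.add.outer(x, x) ** 2)
```

Since the pairwise form is a sum of squares with nonnegative weights, the same computation also confirms that the signless Laplacian is positive semidefinite.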

The elements of the canonical basis, which correspond to pure node-cluster associations, can be suitably transformed to take values in $\{-1, 1\}$, via the following expression

$$\tilde{e}_k := 2\, e_k - \mathbf{1}_K, \tag{44}$$

where $\mathbf{1}_K \in \mathbb{R}^{K}$ denotes the all-ones vector, and the associated simplex is

$$\tilde{\Sigma}_K := \left\{ (p_1, \dots, p_K) \in [-1, 1]^{K} : \sum_{k=1}^{K} p_k = 2 - K \right\}. \tag{45}$$

In addition, since each row of $U$ from (25) is a linear combination of the $e_k$ whose coefficients sum up to $1$, the characteristic matrix in the new representation can be obtained as

$$\tilde{U} = 2\, U - \mathbf{1}_n \mathbf{1}_K^\top. \tag{46}$$

In the end, by plugging $\tilde{U}$ in place of $U$ in (26), we have that the signed GL functional is given by

(47) |

(48) |

where the different normalization of the terms comes from (44) and (46). In other words, through the signed GL functional we want to minimize the presence of positive links between clusters as well as negative links within each one. We can verify that the following equality holds

(49) |

where, on the right-hand side, the Laplacian of the GL functional is the signed Laplacian $\bar{L}$, and we used the fact that $\bar{L}$ is symmetric, together with known properties of the trace of the product of two matrices.

### 4.2 MBO scheme for signed graphs

The MBO scheme arising from the signed GL has the same structure as in (28), that is

$$U^{n+1/2} = \left[ X \left( I + \frac{\tau}{N_\tau} \Lambda \right)^{-1} X^\top \right]^{N_\tau} U^{n} =: B\, U^{n}, \tag{50}$$

together with $X$ and $\Lambda$, which are the eigenvector and eigenvalue matrices of the signed Laplacian, respectively (here we have used a definition different from the one in (28): since the signed Laplacian is real and symmetric, its eigenvector matrix $X$ is orthogonal, that is $X^{-1} = X^\top$).
As explained in Appendix A.1, the succession of projection (which in [61] is performed using the algorithm developed in [67]) and thresholding steps can be reduced to determining the entry of the largest component of each row of $U^{n+1/2}$ (as before, when a row has the same distance to multiple vertices, one of those is picked at random with uniform probability), so that

(51) |

However, we note that the calculation of the argmax in (51) is independent of the representation chosen for the characteristic matrix. In fact, using (46) we can see that

(52) |

Since has constant rows, the location of the largest entry in a row of is the same as in , that is from (51)

(53) |

since is independent of by definition. The factor can also be neglected since it does not change the actual node-cluster association. We indicate this equivalence with the symbol

(54) |

In other words, after applying , the position of the largest entry of , resulting from the argmax in (51), is the same as the one of . In this way, we can keep using a sparse node-cluster association matrix by picking instead of .
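Putting the pieces together, a minimal dense implementation of such an MBO-type scheme might look as follows. This is a sketch in our own notation, not the paper's exact algorithm: we use a full eigendecomposition, break argmax ties by first index rather than at random, and omit refinements such as eigenvector truncation for large sparse graphs.

```python
import numpy as np

def signed_mbo(A_plus, A_minus, labels0, dt=0.5, n_iter=20):
    """Sketch of an MBO-type scheme on a signed graph: alternate a diffusion
    step under the signed Laplacian with a row-wise argmax thresholding of
    the node-cluster characteristic matrix U."""
    K = labels0.max() + 1
    D_bar = np.diag((A_plus + A_minus).sum(axis=1))
    L_bar = D_bar - (A_plus - A_minus)
    # L_bar is real symmetric, so eigh returns an orthogonal eigenvector matrix.
    lam, X = np.linalg.eigh(L_bar)
    U = np.eye(K)[labels0]
    for _ in range(n_iter):
        # Diffusion: U <- exp(-dt * L_bar) U via the spectral decomposition.
        U = X @ (np.exp(-dt * lam)[:, None] * (X.T @ U))
        # Thresholding: each node joins the cluster of its largest row entry.
        U = np.eye(K)[U.argmax(axis=1)]
    return U.argmax(axis=1)

# Toy graph: positive edges within {0,1} and {2,3}, negative edges across.
A_plus = np.array([[0., 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
A_minus = np.array([[0., 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]])
# Start from a deliberately wrong labelling of node 1; the scheme corrects it.
labels = signed_mbo(A_plus, A_minus, np.array([0, 1, 1, 1]))
print(labels)  # → [0 0 1 1]
```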

## 5 GLF with constraints

We now extend the GLF to incorporate three possible types of constraints. We also distinguish between two regimes of enforcement: soft, when the final result of the algorithm does not necessarily satisfy the original constraint, and hard, when the output matches the constraint exactly.

### 5.1 Must-link and cannot-link

In practice, the affinity matrix usually results from measurements of a system’s properties (see Section 9 for an example). However, in the same spirit as in [68], additional information in the form of both positive (must-link) and negative (cannot-link) edge weights may also be available, possibly with a different relative strength with respect to the measurements. In quantitative terms, we embed must-links in and cannot-links in , and generate the new degree matrices and accordingly. To this end, we define

(55) |

where is the must-link matrix, and is a trade-off parameter. The role of is to add links between nodes which should belong to the same cluster, or to increase the weight of an existing link so as to reinforce a node-cluster association. In general, it is a symmetric sparse matrix with positive weights (whose values may differ from one pair of nodes to another). We indicate the resulting degree matrix for , defined in the same fashion as in (33), with . Similarly

(56) |

where is the cannot-link matrix, and the trade-off parameter. In the same fashion as , the matrix can include links between nodes which should not belong to the same cluster, or further decrease the weight of a link so as to push a pair of nodes into different clusters. In this case as well, is a symmetric sparse matrix with positive weights, where the greater the weight, the more the two nodes are incentivised to belong to different clusters. Also here we have as the degree matrix of . In the end, the GLF in the presence of must- and cannot-links is given as

(57) |

where , so that . As in (50), we now need to determine the matrix resulting from the eigendecomposition of .
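A sketch of how must-link and cannot-link information could be absorbed into the signed Laplacian, under the same convention as the toy equations above. The matrix names and trade-off values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Toy signed affinity used for illustration (names are ours).
A_plus = np.array([[0., 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
A_minus = np.array([[0., 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]])

# Hypothetical must-link / cannot-link matrices and trade-off parameters.
gamma_ml, gamma_cl = 2.0, 2.0
M = np.zeros((4, 4)); M[0, 1] = M[1, 0] = 1.0  # nodes 0, 1 should be together
C = np.zeros((4, 4)); C[0, 2] = C[2, 0] = 1.0  # nodes 0, 2 should be apart

# Constraints are absorbed into the positive and negative parts of the
# affinity, and the degree matrix is rebuilt accordingly.
A_plus_ml = A_plus + gamma_ml * M
A_minus_cl = A_minus + gamma_cl * C
D_tilde = np.diag((A_plus_ml + A_minus_cl).sum(axis=1))
L_tilde = D_tilde - (A_plus_ml - A_minus_cl)

# The MBO scheme then runs with the eigendecomposition of the new Laplacian.
lam, X = np.linalg.eigh(L_tilde)
```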

### 5.2 Fidelity and avoidance terms

Soft constraints can also be implemented at the level of the characteristic matrix, mainly in two ways. First, as a fidelity term (see for instance [61]), which attracts the dynamics of the ACE towards a particular characteristic matrix . At the level of the GLF, this translates into a term which increases the overall energy whenever the node-cluster association in is not matched. Second, as an avoidance term, which pushes the ACE dynamics away from particular node-cluster combinations expressed in a characteristic matrix . Again, for the GLF this means an energy increase when the wrong node-cluster association is met. We note that while the first constraint steers the system towards one particular combination, increasing the energy every time this condition is not met, the second does the opposite: only one combination increases the energy, leaving all the others equally possible. The GLF for an affinity matrix with positive edges only reads as

(58) |

where is the Hadamard product and (and ) is a diagonal matrix whose element () is positive if and only if there exists a such that () and is zero otherwise. At the level of the MBOE, following [61] the first step in the procedure is given by

(59) |
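As an illustration of the fidelity mechanism, the sketch below performs one MBO-type iteration in which the diffusion is augmented with a fidelity forcing towards a given characteristic matrix. We use simple explicit Euler sub-steps for the modified diffusion (the scheme in [61] uses a different, more careful discretization), and all names are ours:

```python
import numpy as np

def mbo_step_with_fidelity(L, U, U_hat, gamma, dt=0.2, n_sub=5):
    """One MBO iteration with a fidelity term (a sketch, our notation):
    diffusion dU/dt = -L U - gamma * (U - U_hat), with gamma positive only
    on nodes whose label is known, followed by row-wise thresholding."""
    tau = dt / n_sub
    for _ in range(n_sub):  # a few explicit Euler sub-steps
        U = U - tau * (L @ U + gamma[:, None] * (U - U_hat))
    K = U.shape[1]
    return np.eye(K)[U.argmax(axis=1)]

# Toy signed graph used for illustration; labelled nodes steer the partition.
A_plus = np.array([[0., 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
A_minus = np.array([[0., 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]])
L_bar = np.diag((A_plus + A_minus).sum(axis=1)) - (A_plus - A_minus)

U0 = np.eye(2)[[0, 0, 0, 0]]          # start with every node in cluster 0
U_hat = np.eye(2)[[0, 0, 1, 1]]       # fidelity data: nodes 2, 3 in cluster 1
gamma = np.array([0., 0., 10., 10.])  # zero on unlabelled nodes
U1 = mbo_step_with_fidelity(L_bar, U0, U_hat, gamma)
print(U1.argmax(axis=1))              # → [0 0 1 1]
```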

We can obtain an equivalent expression for the signed GLF using (46). The result is