# A Unified Joint Matrix Factorization Framework for Data Integration

## Abstract

Nonnegative matrix factorization (NMF) is a powerful tool for exploratory data analysis that discovers hidden features and part-based patterns in high-dimensional data. NMF and its variants have been successfully applied in diverse fields such as pattern recognition, signal processing, data mining and bioinformatics. Recently, NMF has been extended to analyze multiple matrices simultaneously. However, a unified framework is still lacking. In this paper, we introduce a sparse multiple relationship data regularized joint matrix factorization (JMF) framework and two adapted prediction models for pattern recognition and data integration. Next, we present four update algorithms to solve this framework. The merits and demerits of these algorithms are systematically explored. Furthermore, extensive computational experiments using both synthetic data and real data demonstrate the effectiveness of the JMF framework and related algorithms for pattern recognition and data mining.

*Keywords:* non-negative matrix factorization, joint matrix factorization, data integration, network-regularized constraint, pattern recognition, bioinformatics.

## 1 Introduction

Nonnegative matrix factorization (NMF) is a powerful matrix factorization technique which typically decomposes a nonnegative data matrix into the product of two low-rank nonnegative matrices [1, 2]. NMF was first introduced by Paatero and Tapper (1994) and has become an active area with much progress both in theory and in practice since the work by Lee and Seung (1999). NMF and its variants have been recognized as valuable exploratory analysis tools. They have been successfully applied in many fields including signal processing, data mining, pattern recognition and bioinformatics [3, 4].

NMF has been shown to be able to generate sparse and part-based representation of data [2]. In other words, the factorization allows us to easily identify meaningful sub-structures underlying the data. In the past decade, a number of variants have been proposed by incorporating various kinds of regularized terms including discriminative constraints [5], network-regularized or locality-preserving constraints [6, 7], sparsity constraints [8, 9], orthogonality constraints [10] and others [11].

However, the typical NMF and its variants in their present forms can only be applied to a single matrix containing one type of variables. Large amounts of multi-view data describing the same set of objects are now available, so data integration methods are urgently needed. Recently, joint matrix factorization based data integration methods have been proposed for pattern recognition and data mining among pairwise or multi-view data matrices. For example, Greene and Cunningham proposed an integration model based on matrix factorization (IMF) to learn the embedded underlying clustering structures across multiple views. IMF is a late integration strategy, which fuses the clustering solutions of each individual view for further analysis [12]. Zhang et al. (2012) proposed a joint nonnegative matrix factorization (jNMF) to decompose a number of data matrices $X_I$ which share the same row dimension into a common basis matrix $W$ and different coefficient matrices $H_I$, such that $X_I \approx W H_I$, by minimizing $\sum_I \|X_I - W H_I\|_F^2$ [13]. This simultaneous factorization can not only detect the underlying part-based patterns in each matrix, but also reveal the potential connections between patterns of different matrices. A further network-regularized version has also been proposed and applied in bioinformatics [14]. Liu et al. (2013) proposed a multi-view clustering method, which factorizes the individual matrices simultaneously and requires the coefficient matrices learnt from the various views to be approximately common [15]. Specifically, it is defined as follows,

$$\min \sum_I \|X_I - W_I H_I\|_F^2 + \sum_I \lambda_I \|W_I - W^*\|_F^2 \quad \mathrm{s.t.}\ \|(H_I)_{i,\cdot}\|_1 = 1\ \forall i,\ W_I \ge 0,\ H_I \ge 0,\ W^* \ge 0,$$

where $\lambda_I$ is a parameter to tune the relative weight among the different views as well as between the two terms. Zitnik and Zupan (2015) proposed a data fusion approach with penalized matrix tri-factorization (DFMF) for simultaneously factorizing multiple relationship matrices in one framework [16]. They also considered incorporating the must-link and cannot-link constraints within each data type into the DFMF model as follows,

$$\min \sum_{R_{ij} \in \mathcal{R}} \big\|R_{ij} - G_i S_{ij} G_j^T\big\|^2 + \sum_{t=1}^{\max(t_i)} \mathrm{Tr}\big(G^T \Theta^{(t)} G\big) \quad \mathrm{s.t.}\ G \ge 0,$$

where $G = \mathrm{diag}(G_1, \ldots, G_r)$, $\Theta^{(t)} = \mathrm{diag}(\Theta_1^{(t)}, \ldots, \Theta_r^{(t)})$, and $t_i$ is the number of data sources for the $i$th object type. $R_{ij}$ represents the relationship data matrix between the $i$th and the $j$th object types (between constraint). DFMF decomposes it into $G_i$, $S_{ij}$ and $G_j$, constrained by $\Theta_i^{(t)}$ (within constraint), which provides relations between objects of the $i$th object type. This method exploits the abstracted relationship data well, but ignores the sample-specific information of the data. In image science, Jing et al. (2012) adopted a supervised joint matrix factorization model to learn a latent basis by factorizing both the region-image matrix and the annotation-image matrix simultaneously and incorporating the label information $c_i$ (where $c_i$ indicates the label index of the $i$th image) [17]. This supervised model for image classification and annotation (SNMFCA) is formulated as follows,

$$\min\ \frac{\lambda}{2}\|X_1 - W_1 H\|_F^2 + \frac{1-\lambda}{2}\|X_2 - W_2 H\|_F^2 + \frac{\eta}{2}\mathrm{Tr}\big(H \Theta H^T\big) \quad \mathrm{s.t.}\ W_1 \ge 0,\ W_2 \ge 0,\ H \ge 0,$$

where $\Theta$ is constructed from the label indices $c_i$ of the images. Obviously, SNMFCA aims to determine the latent basis with known class information. However, this model considers neither the must-link and cannot-link constraints within each data type nor those between data types.

Recently, based on jNMF [13], Stražar et al. (2016) proposed an integrative orthogonality-regularized nonnegative matrix factorization (iONMF) to predict protein-RNA interactions. iONMF extends jNMF by integrating multiple types of data with an orthogonality regularization on the basis matrix [18]. This model learns the coefficient matrices from the training dataset and the basis matrix from the testing dataset, and then predicts the interaction matrix. However, both jNMF and iONMF were originally solved by a multiplicative update method, which might be limited by slow convergence or even non-convergence.

In this paper, we first generalize and introduce a unified joint matrix factorization framework (JMF) based on the classical NMF and jNMF for pattern recognition and data mining, integrating multi-view data on the same objects together with must-link and cannot-link constraints within and between any two data types. In addition, sparsity constraints are also considered. We adopt four update algorithms, including the multiplicative update algorithm (MUR), the projected gradient method (PG), Nesterov's optimal gradient method (Ne), and a novel proximal alternating nonnegative least squares algorithm (PANLS), for solving JMF. Then, JMF is extended to two types of prediction models, one based on the basis matrix $W$ and the other based on the coefficient matrices $H_I$. Finally, we demonstrate the effectiveness of this framework both in revealing object-specific multi-view hidden patterns and in prediction performance through extensive computational experiments.

Compared with existing NMF techniques for pattern recognition and data integration, JMF has the following characteristics:

1. JMF can model multi-view data as well as must-links/cannot-links simultaneously for recognizing object-specific and multi-view associated patterns.

2. Must-links and cannot-links within and between some views can be completely missing, and each within-view or between-view type can be associated with multiple constraint matrices.

3. JMF can be solved with diverse update algorithms, among which PANLS is a representative one for solving JMF with competitive performance in terms of computational accuracy and efficiency.

The rest of the paper is organized as follows. In section 2, we describe the formulation of JMF. In section 3, we present four update methods to solve JMF. In section 4, we propose two prediction models based on JMF. In section 5, we illustrate the experimental results on both synthetic and real datasets. At last, we summarize this study in section 6.

## 2 Problem formulation

Given two nonnegative matrices $X_1$ and $X_2$ of size $M \times N_1$ and $M \times N_2$, the networked relationships are represented by two adjacency matrices $\Theta_1$ and $\Theta_2$ of size $N_1 \times N_1$ and $N_2 \times N_2$, and the between-networked relationship is represented by a bipartite adjacency matrix $R_{12}$ of size $N_1 \times N_2$. In our application, our assumption is that the two matrices $X_1$ and $X_2$ are two different kinds of descriptions of the same set of objects, while the networked relationships $\Theta_1$, $\Theta_2$ and $R_{12}$ describe prior knowledge about the features. The goal of this study is to find a reduced representation by incorporating all the data we have.

To achieve the ultimate goal in one framework, we incorporate three components into the objective function. The first one considers the parts-based data representation of the two matrices $X_1$ and $X_2$. The second and third ones consider the networked relationships $\Theta_1$ and $\Theta_2$ of each type of features, and the between-networked relationship $R_{12}$, by imposing network-regularized constraints, respectively. Finally, we consider incorporating sparsity constraints to obtain a sparse solution.

### 2.1 NMF and its variants

The non-negative matrix factorization (NMF) problem is a matrix factorization model which uses two low-rank non-negative matrices, i.e., one basis matrix and one coefficient matrix, to reconstruct the original data matrix [1, 2]. Its objective function is

$$\min_{W, H \ge 0} \|X - WH\|_F^2,$$

where $W$ and $H$ are the basis matrix and the coefficient matrix of size $M \times K$ and $K \times N$ respectively, and $\|\cdot\|_F$ is the Frobenius norm of a matrix. The non-negativity reflects the intuition that parts are generally combined additively to form a whole; hence, it can be useful for learning part-based representations. Thus, NMF can be a useful technique for deciphering the distinct sub-structures underlying the data. Several approaches for solving NMF have been discussed in [3], and more variants and applications of NMF can be found in a recent review paper [4].
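As a concrete illustration, the classical multiplicative updates for this objective (generalized to JMF in section 3) can be sketched in a few lines of NumPy; the matrix sizes, rank and iteration count below are arbitrary illustrative choices:

```python
import numpy as np

# Minimal sketch of the Lee-Seung multiplicative updates for
# min_{W,H >= 0} ||X - WH||_F^2; sizes, rank and iteration count are illustrative.
rng = np.random.default_rng(0)
M, N, K = 30, 40, 5
X = rng.random((M, N))
W = rng.random((M, K))
H = rng.random((K, N))
eps = 1e-10  # guards against division by zero

err0 = np.linalg.norm(X - W @ H, 'fro')
for _ in range(200):
    W *= (X @ H.T) / (W @ H @ H.T + eps)   # rescaled-gradient step for W
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # rescaled-gradient step for H
err = np.linalg.norm(X - W @ H, 'fro')
```

The elementwise multiplications preserve non-negativity automatically, which is why this scheme needs no projection step.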

Here our goal is to find the linked patterns between two matrices. We assume that there is one common basis matrix $W$ shared by the matrices $X_1$ and $X_2$. So a joint non-negative matrix representation can be derived from the following optimization problem,

$$\min \sum_{I=1,2} \|X_I - WH_I\|_F^2 \quad \mathrm{s.t.}\quad W \ge 0,\ H_I \ge 0. \qquad (1)$$

Ideally, the low-dimensional representations (the coefficient matrices) $H_1$ and $H_2$ for the original matrices $X_1$ and $X_2$, derived from the best approximation, can reveal the linked patterns. However, they are not necessarily accurate due to the incompleteness and noise of the data and other possible factors. In order to improve the accuracy of the patterns, we incorporate the prior networked knowledge $\Theta_1$ and $\Theta_2$ on each type of features, and the bipartite networked knowledge $R_{12}$ between the two types of features.
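The shared-basis factorization of Eq. 1 can be sketched by pooling both views in the $W$ update while updating each $H_I$ against its own view; all sizes and the iteration count below are illustrative:

```python
import numpy as np

# Sketch of the jNMF updates for Eq. 1: a common basis W is shared by X1 and X2,
# so the W update pools both views; all sizes here are illustrative.
rng = np.random.default_rng(1)
M, N1, N2, K = 25, 30, 35, 4
X1, X2 = rng.random((M, N1)), rng.random((M, N2))
W = rng.random((M, K))
H1, H2 = rng.random((K, N1)), rng.random((K, N2))
eps = 1e-10

def loss():
    return (np.linalg.norm(X1 - W @ H1, 'fro') ** 2
            + np.linalg.norm(X2 - W @ H2, 'fro') ** 2)

f0 = loss()
for _ in range(200):
    # W sees the gradient contributions of both data matrices
    W *= (X1 @ H1.T + X2 @ H2.T) / (W @ (H1 @ H1.T + H2 @ H2.T) + eps)
    H1 *= (W.T @ X1) / (W.T @ W @ H1 + eps)
    H2 *= (W.T @ X2) / (W.T @ W @ H2 + eps)
```

Because $W$ is common to both reconstructions, rows of $H_1$ and $H_2$ indexed by the same factor describe linked patterns across the two views.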

#### Networks Regularized Constraints

Let $H_1$, $H_2$ denote the low-rank representations of the original data matrices. To decipher the inherent modular structure in a network, or say the closeness information of the objects, we assume that adjacent nodes should have similar membership profiles. Therefore, we enforce the must-link constraints by maximizing the following objective function for $\Theta_1$ (or similarly for $\Theta_2$):

$$O_1 = \sum_{ij} (\Theta_1)_{ij} (h_{1i})^T h_{1j} = \mathrm{Tr}\big(H_1 \Theta_1 H_1^T\big), \qquad (2)$$

where $h_{1i}$ denotes the $i$th column of $H_1$. Similarly, the between-relationship information $R_{12}$ between the two types of objects can also be adopted in the following objective function:

$$O_2 = \sum_{ij} (R_{12})_{ij} (h_{1i})^T h_{2j} = \mathrm{Tr}\big(H_1 R_{12} H_2^T\big). \qquad (3)$$

The motivation behind the proposed network-regularized constraints is actually quite straightforward. Note that the solution of the problem defined in Eq. 1 is often not unique. We expect to obtain a solution for Eq. 1 which also satisfies the network-regularized constraints well. The limitations of the previous model and the noisiness of real data lead us to consider an integrative framework for jointly handling feature data and networked data simultaneously.
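The equivalence between the elementwise sums and the trace forms in Eqs. 2 and 3 is easy to verify numerically; the matrix sizes below are arbitrary:

```python
import numpy as np

# Numerical check that sum_ij Theta_ij <h_i, h_j> equals Tr(H Theta H^T),
# the identity behind Eqs. 2 and 3; the sizes are arbitrary.
rng = np.random.default_rng(2)
K, N1, N2 = 4, 6, 7
H1, H2 = rng.random((K, N1)), rng.random((K, N2))
Theta1 = rng.random((N1, N1))
R12 = rng.random((N1, N2))

# within-view form of Eq. 2
O1_sum = sum(Theta1[i, j] * H1[:, i] @ H1[:, j]
             for i in range(N1) for j in range(N1))
O1_tr = np.trace(H1 @ Theta1 @ H1.T)

# between-view form of Eq. 3
O2_sum = sum(R12[i, j] * H1[:, i] @ H2[:, j]
             for i in range(N1) for j in range(N2))
O2_tr = np.trace(H1 @ R12 @ H2.T)
```

The trace forms are what the update rules in section 3 differentiate, since matrix derivatives of traces have simple closed forms.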

#### Networks-Regularized jNMF

Here, we incorporate all the data (represented in these five matrices) to discover linked patterns based on $\Theta_1$, $\Theta_2$ and $R_{12}$. Specifically, we combine all the above objective functions to integrate all five matrices in the following optimization problem:

$$\min \sum_{I=1,2} \|X_I - WH_I\|_F^2 - \lambda_1 \sum_{I=1,2} \mathrm{Tr}\big(H_I \Theta_I H_I^T\big) - \lambda_2 \mathrm{Tr}\big(H_1 R_{12} H_2^T\big) \quad \mathrm{s.t.}\ W \ge 0,\ H_1 \ge 0,\ H_2 \ge 0, \qquad (4)$$

where the parameters $\lambda_1$ and $\lambda_2$ weigh the link constraints in $\Theta_1$, $\Theta_2$ and in $R_{12}$, respectively. The first term describes the linked patterns between the two data matrices through a shared basis (or component) matrix $W$; the second term is the summation of the within-variable constraints that decipher the modular structure in the networks $\Theta_1$ and $\Theta_2$; and the third term is the between-variable constraint which deciphers the modular structure in the bipartite network. Here, we can consider the integration of these known networks as graph regularization of the first objective [6], or as a semi-supervised learning problem which aims to enforce the must-link constraints into the framework of pattern recognition, where variables with a 'must-link' constraint shall be forced into the same pattern. This can facilitate pattern search by significantly narrowing down the large search space and improve the reliability of the identified patterns.

### 2.2 A Unified Joint NMF Model (JMF)

One of the important characteristics of NMF is that it usually generates a sparse representation that allows us to discover parts-based patterns [2]. However, several studies have shown that the generation of a parts-based representation by NMF depends on the data and on the algorithm [8]. Several approaches have been proposed to explicitly control the degree of sparseness in the $W$ and/or $H$ factors of NMF [8, 9]. The idea of imposing $L_1$-norm based constraints for achieving sparse solutions has been successfully and comprehensively utilized in various problems [19]. We adopt the strategy suggested by [9] to make the coefficient matrices $H_1$ and $H_2$ sparse. Thus, the sparse network-regularized jNMF can be formulated as follows:

$$\begin{aligned} \min\ & \sum_{I=1,2} \|X_I - WH_I\|_F^2 - \lambda_1 \sum_{I=1,2} \mathrm{Tr}\big(H_I \Theta_I H_I^T\big) - \lambda_2 \mathrm{Tr}\big(H_1 R_{12} H_2^T\big) \\ & + \gamma_1 \|W\|_F^2 + \gamma_2 \Big(\sum_j \|h_{1j}\|_1^2 + \sum_{j'} \|h_{2j'}\|_1^2\Big) \quad \mathrm{s.t.}\ W \ge 0,\ H_I \ge 0, \end{aligned}$$

where $h_{1j}$ and $h_{2j'}$ are the $j$th and $j'$th columns of $H_1$ and $H_2$ respectively. The first term fits the modules to the data profiles, and the second term as well as the third term summarize all the must-link constraints in the first and second profiles, and between the two profiles. The term $\gamma_1 \|W\|_F^2$ is used to control the scale of matrix $W$, and the last term encourages sparsity: the parameter $\gamma_1$ suppresses the growth of $W$, while $\gamma_2$ controls the desired sparsity of $H_1$ and $H_2$.

Naturally, with the emergence of various kinds of multi-view, within-view and between-view data, a unified framework is urgently needed. Therefore, we present a generalized form of the JMF framework (Figure 1) as follows,

$$\begin{aligned} F(W, H_1, \ldots, H_N) = & \sum_{I=1}^{N} \|X_I - WH_I\|_F^2 - \lambda_1 \sum_I \sum_t \mathrm{Tr}\big(H_I \Theta_I^{(t)} H_I^T\big) - \lambda_2 \sum_{I \neq J} \mathrm{Tr}\big(H_I R_{IJ} H_J^T\big) \\ & + \gamma_1 \|W\|_F^2 + \gamma_2 \Big(\sum_I \sum_j \|h_{Ij}\|_1^2\Big), \end{aligned} \qquad (5)$$

where $h_{Ij}$ is the $j$th column of $H_I$, $\Theta_I^{(t)}$ is the $t$th constraint matrix on the $I$th object type, and $R_{IJ}$ is the relationship matrix between the $I$th and $J$th object types.

## 3 Algorithms for JMF

Similar to the classical NMF problem, the objective function in Eq. 5 is not convex in all the variables $W$, $H_I$ ($I = 1, \ldots, N$) jointly. Therefore, it is unrealistic to expect an algorithm to find the global minimum of the proposed optimization problem. The classical NMF problem is convex in one matrix factor when the other is fixed. Therefore, we adopt an alternating update strategy for solving JMF. Specifically, fixing $H_I$ ($I = 1, \ldots, N$), we can obtain $W$ by solving:

$$\min_{W \ge 0} F(W) = \sum_{I=1}^{N} \|X_I - WH_I\|_F^2 + \gamma_1 \|W\|_F^2. \qquad (6)$$

Similarly, fixing $W$, we can update $H_I$ ($I = 1, \ldots, N$) by solving:

$$\min_{H_I \ge 0,\ I=1,\ldots,N} F(H_1, H_2, \ldots, H_N) = \sum_I \|X_I - WH_I\|_F^2 - \lambda_1 \sum_I \sum_t \mathrm{Tr}\big(H_I \Theta_I^{(t)} H_I^T\big) - \lambda_2 \sum_{I \neq J} \mathrm{Tr}\big(H_I R_{IJ} H_J^T\big) + \gamma_2 \Big(\sum_I \sum_j \|h_{Ij}\|_1^2\Big). \qquad (7)$$

We can further update the $H_I$ one by one. For any $I$, given $W$ and $H_J$ ($J \neq I$), the objective function for optimizing $H_I$ is

$$\min_{H_I \ge 0} F(H_I) = \|X_I - WH_I\|_F^2 - \lambda_1 \sum_t \mathrm{Tr}\big(H_I \Theta_I^{(t)} H_I^T\big) - \lambda_2 \sum_{J \neq I} \mathrm{Tr}\big(H_I R_{IJ} H_J^T\big) + \gamma_2 \Big(\sum_j \|h_{Ij}\|_1^2\Big). \qquad (8)$$

Various types of methods have been proposed to solve each subproblem of classical NMF [20, 21, 22, 23, 24, 25]. The most widely used approach is the multiplicative update (MUR) algorithm [20]. This algorithm is easy to implement, but it converges slowly and cannot guarantee convergence to a local minimum. As the resulting matrix factors are nonnegative, Lin treated each subproblem as a bound-constrained optimization problem and used a projected gradient (PG) method to solve it [21]. However, PG is inefficient because the Armijo rule used for searching the step size is very time-consuming. As the low-rank matrices of the classical NMF are desired to be sparse, the active set strategy may be a promising method. Kim and Park adopted an active set (AS) method to solve such subproblems, which divides the variables into an active set and a passive set. In each iteration, AS exchanges only one variable between these two sets [22]. They further used the block pivoting strategy to accelerate the AS method (BP) [23]. Both AS and BP assume that each subproblem is strictly convex, which might bring about numerical instability. As each subproblem is a convex function whose gradient is Lipschitz continuous, Guan et al. solved each subproblem by Nesterov's optimal gradient (Ne) method (NeNMF) [24]. NeNMF converges faster than the previous methods as it has neither a time-consuming line search step nor numerical instability problems. Moreover, NeNMF can be extended to sparse and network-regularized NMF even when the subproblem is not convex. Recently, Zhang et al. proposed a proximal alternating nonnegative least squares (PANLS) method to solve each subproblem, which switches between a constrained PG step and an unconstrained active set step [25]. Luckily, MUR, PG, Ne and PANLS are all suitable for solving JMF, while AS and BP are not directly applicable to network-regularized NMF: as noted in [25], the current code of BP would need to be modified and might not be efficient. In the following subsections, we develop four update methods (MUR, PG, Ne and PANLS) for optimizing JMF in the spirit of the above exploration, and present their corresponding algorithms in Appendix Algorithms 1-4, respectively.

### 3.1 Multiplicative update algorithm

Firstly, we solve JMF with the MUR algorithm, which searches along a rescaled gradient direction with a fixed form of learning rate to guarantee the nonnegativity of the low-rank matrices. The details of MUR are as follows. The Lagrange function is $L = F(W, H_1, \ldots, H_N) - \mathrm{Tr}(\Phi W^T) - \sum_{I=1}^{N} \mathrm{Tr}(\Psi_I H_I^T)$, where $\Phi$ and $\Psi_I$ are the Lagrange multipliers for the constraints $W \ge 0$ and $H_I \ge 0$. The partial derivatives of $L$ with respect to $W$ and $H_I$ are respectively as follows:

$$\frac{\partial L}{\partial W} = -2\sum_I X_I H_I^T + 2\sum_I W H_I H_I^T + 2\gamma_1 W - \Phi, \qquad \frac{\partial L}{\partial H_I} = -2W^T X_I - \lambda_1 \sum_t H_I \big(\Theta_I^{(t)} + (\Theta_I^{(t)})^T\big) - \lambda_2 \sum_{J \neq I} H_J R_{IJ}^T + 2W^T W H_I + 2\gamma_2 e_{K \times K} H_I - \Psi_I. \qquad (9)$$

Based on the KKT conditions $\Phi_{ij} W_{ij} = 0$ and $(\Psi_I)_{ij} (H_I)_{ij} = 0$, we get the following equations for $W_{ij}$ and $(H_I)_{ij}$, respectively (here $e_{K \times K}$ denotes the $K \times K$ all-ones matrix arising from the sparsity term),

$$\begin{cases} \Big[-2\sum_I X_I H_I^T + 2\sum_I W H_I H_I^T + 2\gamma_1 W\Big]_{ij} W_{ij} = 0, \\[6pt] \Big[-W^T X_I - \dfrac{\lambda_1}{2} \sum_t H_I \big(\Theta_I^{(t)} + (\Theta_I^{(t)})^T\big) - \dfrac{\lambda_2}{2} \sum_{J \neq I} H_J R_{IJ}^T + W^T W H_I + \gamma_2 e_{K \times K} H_I\Big]_{ij} (H_I)_{ij} = 0, \quad I = 1, \ldots, N. \end{cases}$$

Then we can get the following update rules:

$$\begin{cases} w_{ij} \leftarrow w_{ij} \dfrac{\big(\sum_I X_I H_I^T\big)_{ij}}{\big(\sum_I W H_I H_I^T + \gamma_1 W\big)_{ij}}, \\[10pt] (h_I)_{ij} \leftarrow (h_I)_{ij} \dfrac{\Big(W^T X_I + \frac{\lambda_1}{2} \sum_t H_I \big[\Theta_I^{(t)} + (\Theta_I^{(t)})^T\big] + \frac{\lambda_2}{2} \sum_{J \neq I} H_J R_{IJ}^T\Big)_{ij}}{\big((W^T W + \gamma_2 e_{K \times K}) H_I\big)_{ij}}. \end{cases} \qquad (10)$$
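The update rules above translate directly into NumPy. The sketch below instantiates them for $N = 2$ views with symmetrized within-view constraint matrices (so that $\Theta + \Theta^T = 2\Theta$), and all sizes and parameter values are illustrative choices:

```python
import numpy as np

# Sketch of the multiplicative update rules of Eq. 10 for N = 2 views.
# Within-view constraints are symmetrized so Theta + Theta^T = 2*Theta;
# all sizes and parameter values are illustrative.
rng = np.random.default_rng(7)
M, N1, N2, K = 20, 25, 30, 4
X1, X2 = rng.random((M, N1)), rng.random((M, N2))
Theta1 = rng.random((N1, N1)); Theta1 = (Theta1 + Theta1.T) / 2
Theta2 = rng.random((N2, N2)); Theta2 = (Theta2 + Theta2.T) / 2
R12 = rng.random((N1, N2))
lam1, lam2, g1, g2 = 0.01, 0.01, 0.1, 0.1
E, eps = np.ones((K, K)), 1e-10   # E is the all-ones matrix from the L1 term

W = rng.random((M, K))
H1, H2 = rng.random((K, N1)), rng.random((K, N2))

def objective():
    return (np.linalg.norm(X1 - W @ H1, 'fro') ** 2
            + np.linalg.norm(X2 - W @ H2, 'fro') ** 2
            - lam1 * (np.trace(H1 @ Theta1 @ H1.T) + np.trace(H2 @ Theta2 @ H2.T))
            - lam2 * np.trace(H1 @ R12 @ H2.T)
            + g1 * np.linalg.norm(W, 'fro') ** 2
            + g2 * (np.sum(H1.sum(axis=0) ** 2) + np.sum(H2.sum(axis=0) ** 2)))

f0 = objective()
for _ in range(100):
    W *= (X1 @ H1.T + X2 @ H2.T) / (W @ (H1 @ H1.T + H2 @ H2.T) + g1 * W + eps)
    # (lam1/2) H (Theta + Theta^T) reduces to lam1 * H @ Theta for symmetric Theta
    H1 *= (W.T @ X1 + lam1 * H1 @ Theta1 + 0.5 * lam2 * H2 @ R12.T) \
          / ((W.T @ W + g2 * E) @ H1 + eps)
    H2 *= (W.T @ X2 + lam1 * H2 @ Theta2 + 0.5 * lam2 * H1 @ R12) \
          / ((W.T @ W + g2 * E) @ H2 + eps)
```

Note how $H_2 R_{12}^T$ appears in the $H_1$ numerator and $H_1 R_{12}$ in the $H_2$ numerator, matching the $H_J R_{IJ}^T$ terms with $R_{21} = R_{12}^T$.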

Note that the usual stopping criterion for MUR is

$$\frac{F(W^t, H_1^t, \ldots, H_N^t) - F(W^{t+1}, H_1^{t+1}, \ldots, H_N^{t+1})}{F(W^1, H_1^1, \ldots, H_N^1) - F(W^{t+1}, H_1^{t+1}, \ldots, H_N^{t+1})} \le \tau, \qquad (11)$$

where $\tau$ is a predefined tolerance, while the usual stopping criterion used in the other three update methods is

$$\big\|\big[\nabla_W F(W^t, H_1^t, \ldots, H_N^t)^T, \ldots, \nabla_{H_N} F(W^t, H_1^t, \ldots, H_N^t)\big]\big\|_F \le \tau \big\|\big[\nabla_W F(W^1, H_1^1, \ldots, H_N^1)^T, \ldots, \nabla_{H_N} F(W^1, H_1^1, \ldots, H_N^1)\big]\big\|_F. \qquad (12)$$

If $\big[\nabla_W F(W^t, H_1^t, \ldots, H_N^t)^T, \ldots, \nabla_{H_N} F(W^t, H_1^t, \ldots, H_N^t)\big]$ is denoted by $\nabla^t$, then Eq. 12 can be represented by $\|\nabla^t\|_F \le \tau \|\nabla^1\|_F$. Note that $\|\nabla^t\|_F$ may vary slowly when the variables are close to a stationary point. Thus, we terminate PG, Ne and PANLS when Eq. 12 or the following Eq. 13 is satisfied,

$$\Big| \|\nabla^{t+1}\|_F - \|\nabla^t\|_F \Big| \le 10^{-3}\,\tau\,\|\nabla^1\|_F. \qquad (13)$$
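The relative-decrease test of Eq. 11 can be packaged as a small helper; the default tolerance below is an illustrative choice:

```python
def mur_converged(f_hist, tau=1e-6):
    """Relative-decrease criterion of Eq. 11: the latest drop in the objective,
    measured against the total decrease achieved since the first iterate."""
    if len(f_hist) < 2 or f_hist[0] == f_hist[-1]:
        return False  # no decrease yet, keep iterating
    rel = (f_hist[-2] - f_hist[-1]) / (f_hist[0] - f_hist[-1])
    return rel <= tau
```

Normalizing by the total decrease (rather than the initial objective value) makes the test insensitive to the scale of the data matrices.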

Generally, MUR is simple and easy to implement, and it quickly decreases the objective value at the beginning. But it does not guarantee convergence to a local minimum because its limit point is not necessarily a stationary point. Even when it converges to a stationary point, it converges slowly. If some rows or columns of $W$ or $H_I$ are close to zero, the result may suffer from numerical problems.

### 3.2 Projected gradient algorithm

We adopt PG to solve each subproblem, which uses the Armijo rule to search for the step size along the projection arc. We take the subproblem in Eq. 6 as an example. The step size satisfies:

$$(1 - \sigma) \big\langle \nabla F(\overline{W}),\ \widetilde{W} - \overline{W} \big\rangle + \frac{1}{2} \big\langle \widetilde{W} - \overline{W},\ Q_W (\widetilde{W} - \overline{W}) \big\rangle \le 0, \qquad (14)$$

where $\sigma \in (0, 1)$ ($\sigma = 0.01$ is used), $\overline{W}$ is the current iterate, $\widetilde{W}$ is the candidate point produced with the current step size, and $Q_W$ is the Hessian matrix of $F(W)$. The gradient function of $F(W)$ is

$$\nabla F(W) = 2\sum_I \big(W H_I H_I^T - X_I H_I^T\big) + 2\gamma_1 W.$$

The Hessian matrix of $F(W)$ is

$$Q_W = 2\Big(\sum_I H_I H_I^T\Big) \otimes I_M + 2\gamma_1 I_K \otimes I_M,$$

where $\otimes$ denotes the Kronecker product. PG is easy to implement, but it can be time-consuming and may suffer from the zigzag phenomenon when approaching a local minimizer if the condition number of $Q_W$ is bad.
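A sketch of projected gradient on the $W$ subproblem of Eq. 6 is given below. For brevity it uses the plain Armijo sufficient-decrease test rather than the Hessian-based condition of Eq. 14, and all sizes and constants are illustrative:

```python
import numpy as np

# Projected gradient for the W subproblem (Eq. 6):
#   min_{W >= 0} sum_I ||X_I - W H_I||_F^2 + g1 ||W||_F^2.
# A plain Armijo sufficient-decrease test replaces the Hessian-based
# condition of Eq. 14; all sizes and constants are illustrative.
rng = np.random.default_rng(3)
M, K = 15, 3
X = [rng.random((M, 20)), rng.random((M, 25))]
H = [rng.random((K, 20)), rng.random((K, 25))]
g1 = 0.1

def f(W):
    return sum(np.linalg.norm(Xi - W @ Hi, 'fro') ** 2
               for Xi, Hi in zip(X, H)) + g1 * np.linalg.norm(W, 'fro') ** 2

def grad(W):
    return sum(2 * (W @ Hi @ Hi.T - Xi @ Hi.T)
               for Xi, Hi in zip(X, H)) + 2 * g1 * W

W = rng.random((M, K))
f0, sigma, beta = f(W), 0.01, 0.5
for _ in range(50):
    G = grad(W)
    alpha = 1.0
    while True:
        W_new = np.maximum(W - alpha * G, 0.0)   # project onto W >= 0
        # Armijo: accept when the decrease matches the linear model
        if f(W_new) - f(W) <= sigma * np.sum(G * (W_new - W)) or alpha < 1e-12:
            break
        alpha *= beta
    W = W_new
```

The inner `while` loop is the backtracking search along the projection arc; its repeated objective evaluations are exactly the cost that makes PG slow in practice.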

### 3.3 Nesterov’s optimal gradient algorithm

Ne updates two sequences recursively to optimize each low-rank matrix. One sequence stores the approximate solutions, which are obtained by a PG step at the search points with step size determined by the Lipschitz constant. The other sequence stores the search points, which are combinations of the latest two approximate solutions. In this setting, the objective function being convex in the variable and its gradient being Lipschitz continuous are the two prerequisites for applying Nesterov's method [26, 27, 28]. NeNMF can be conveniently extended to $L_1$-norm, $L_2$-norm and network-regularized NMF, and can also be extended to JMF. Given $H_I$ ($I = 1, \ldots, N$), the objective function for optimizing $W$ in Eq. 6 is convex, and its gradient satisfies Lipschitz continuity as follows,

$$\big\|\nabla F(W^1) - \nabla F(W^2)\big\|_F = \Big\|2\sum_I (W^1 - W^2) H_I H_I^T + 2\gamma_1 (W^1 - W^2)\Big\|_F \le \Big(2\Big\|\sum_I H_I H_I^T\Big\|_2 + 2\gamma_1\Big) \big\|W^1 - W^2\big\|_F, \qquad (15)$$

where $\nabla F(W)$ is the gradient of $F(W)$. Though the objective function in Eq. 8 for optimizing $H_I$ is nonconvex, the gradient function for $H_I$ satisfies Lipschitz continuity as follows,

$$\begin{aligned} \big\|\nabla F(H_I^1) - \nabla F(H_I^2)\big\|_F &= \Big\|2\big(W^T W + \gamma_2 e_K\big)\big(H_I^1 - H_I^2\big) - \lambda_1 \big(H_I^1 - H_I^2\big) \sum_t \big(\Theta_I^{(t)} + (\Theta_I^{(t)})^T\big)\Big\|_F \\ &\le \Big(2\big\|W^T W + \gamma_2 e_K\big\|_2 + \lambda_1 \Big\|\sum_t \Theta_I^{(t)} + (\Theta_I^{(t)})^T\Big\|_2\Big) \big\|H_I^1 - H_I^2\big\|_F, \end{aligned} \qquad (16)$$

where $\nabla F(H_I)$ is the gradient of $F(H_I)$. Ne indeed decreases the objective function, but it cannot guarantee convergence to a stationary point as the objective function is nonconvex.
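The Nesterov scheme for the convex $W$ subproblem can be sketched as follows: gradient steps of size $1/L$, with $L$ the Lipschitz constant of the gradient, are taken at extrapolated search points. All sizes and parameter values below are illustrative:

```python
import numpy as np

# Nesterov's optimal gradient method for the convex W subproblem (Eq. 6).
# Step size 1/L with L = 2(||sum_I H_I H_I^T||_2 + g1), the Lipschitz
# constant of the gradient; all sizes and parameters are illustrative.
rng = np.random.default_rng(4)
M, K = 15, 3
X = [rng.random((M, 20)), rng.random((M, 25))]
H = [rng.random((K, 20)), rng.random((K, 25))]
g1 = 0.1

def f(W):
    return sum(np.linalg.norm(Xi - W @ Hi, 'fro') ** 2
               for Xi, Hi in zip(X, H)) + g1 * np.linalg.norm(W, 'fro') ** 2

def grad(W):
    return sum(2 * (W @ Hi @ Hi.T - Xi @ Hi.T)
               for Xi, Hi in zip(X, H)) + 2 * g1 * W

L = 2 * (np.linalg.norm(sum(Hi @ Hi.T for Hi in H), 2) + g1)

W = rng.random((M, K))
f0, Y, t = f(W), W.copy(), 1.0
for _ in range(100):
    W_new = np.maximum(Y - grad(Y) / L, 0.0)      # projected gradient step (one sequence)
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2      # momentum schedule
    Y = W_new + ((t - 1) / t_new) * (W_new - W)   # extrapolated search point (other sequence)
    W, t = W_new, t_new
```

Because the step size comes from the Lipschitz constant, no line search is needed, which is the main efficiency advantage over PG.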

### 3.4 Proximal alternating nonnegative least squares algorithm

Inspired by PANLS for solving the typical NMF problem [25], we adopt the same key idea for solving JMF. The subproblems can be transformed as follows,

$$W^{k+1} = \operatorname*{arg\,min}_{W \ge 0}\ \sum_{I=1}^{N} \big\|X_I - W H_I^k\big\|_F^2 + \gamma_1 \|W\|_F^2 + \tau_1 \big\|W - W^k\big\|_F^2, \qquad (17)$$

$$H_I^{k+1} = \operatorname*{arg\,min}_{H_I \ge 0}\ \big\|X_I - W^k H_I\big\|_F^2 - \lambda_1 \sum_t \mathrm{Tr}\big(H_I \Theta_I^{(t)} H_I^T\big) - \lambda_2 \sum_{J \neq I} \mathrm{Tr}\big(H_I R_{IJ} H_J^T\big) + \gamma_2 \Big(\sum_j \big\|h_{Ij}\big\|_1^2\Big) + \tau_2^I \big\|H_I - H_I^k\big\|_F^2. \qquad (18)$$

The Hessian matrices of the objective functions in Eq. 17 and Eq. 18 are:

$$Q(W) = 2\Big(\sum_I H_I H_I^T\Big) \otimes I_M + 2(\gamma_1 + \tau_1) I_K \otimes I_M, \qquad (19)$$

$$Q(H_I) = 2 I_{N_I} \otimes \big(W^T W\big) - \lambda_1 \Big(\sum_t \big(\Theta_I^{(t)}\big)^T + \sum_t \Theta_I^{(t)}\Big) \otimes I_K + 2(\gamma_2 + \tau_2^I) I_{N_I} \otimes I_K. \qquad (20)$$

Thus, the objectives in Eq. 17 and Eq. 18 are strictly convex functions of the variables $W$ and $H_I$ for proper $\tau_1$ and $\tau_2^I$, and each subproblem has a unique minimizer according to the Frank-Wolfe theorem. Therefore, PANLS has a nice convergence property.

## 4 Prediction models based on JMF

NMF and its variants can be used for prediction tasks [18]. JMF can also be extended to a prediction form. Both the basis matrix $W$ and the coefficient matrices $H_I$ can be used for prediction. The prediction based on the basis matrix is denoted by JMF/L, while the prediction based on the coefficient matrices is denoted by JMF/R. Let $X_I$ ($I = 1, \ldots, N$) be the training datasets and $X_I^*$ ($I = 1, \ldots, N$) be the testing datasets. We can obtain the low-rank matrices $W$ and $H_I$ by applying JMF to the training datasets.

For JMF/L, fixing the learned coefficient matrices $H_I$ ($I = 1, \ldots, N$), the predicted factor $W^*$ can be obtained by solving Eq. 6 on the testing datasets. There are two prediction scenarios based on the basis matrix learned from the training data. In scenario I, for class prediction of new samples, the basis matrix $W^*$ is used as the prediction factor, obtained from the testing data and the learned coefficient matrices $H_I$. The predicted class of each sample is given by the maximum value in each row of $W^*$. In scenario II, for predicting one view (e.g., $X_N^*$) from the other views ($X_1^*, \ldots, X_{N-1}^*$), the new basis matrix $W^*$ is computed from the learned coefficient matrices and the testing data $X_1^*, \ldots, X_{N-1}^*$. Then the product of $W^*$ and the learned coefficient matrix $H_N$ is used to predict $X_N^*$. In this paper, we illustrate the second scenario of JMF/L with one real application.

Similarly, for JMF/R, fixing the learned basis matrix $W$, the predicted factors $H_I^*$ can be obtained by solving Eq. 8, and they can be used as prediction factors as in scenario I of JMF/L.
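The JMF/L scenario II workflow can be sketched on a noiseless toy problem: the coefficient matrices play the role of factors learned on training data (here they are simply the known ground truth), the test basis $W^*$ is fitted from the observed test view, and the held-out view is predicted as $W^* H_2$. All sizes are illustrative:

```python
import numpy as np

# Toy sketch of JMF/L scenario II on noiseless synthetic data: with H1, H2
# fixed (standing in for factors learned on training data), the test basis W*
# is fitted from the observed view X1*, and the held-out view X2* is
# predicted as W* H2.  All sizes are illustrative.
rng = np.random.default_rng(5)
M_te, N1, N2, K = 10, 20, 25, 4
W_true = rng.random((M_te, K))
H1, H2 = rng.random((K, N1)), rng.random((K, N2))
X1_te = W_true @ H1        # observed test view
X2_te = W_true @ H2        # held-out view to predict

# fit W* >= 0 with H1 fixed, via multiplicative updates on ||X1* - W H1||_F^2
W = rng.random((M_te, K))
eps = 1e-10
res0 = np.linalg.norm(X1_te - W @ H1, 'fro')
for _ in range(500):
    W *= (X1_te @ H1.T) / (W @ H1 @ H1.T + eps)
res = np.linalg.norm(X1_te - W @ H1, 'fro')

X2_pred = W @ H2           # prediction of the held-out view
```

In a real application the fitted $W^*$ is only identifiable up to the usual NMF ambiguities, so prediction quality depends on how well the training-derived coefficient matrices pin down the factors.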

## 5 Experiments

To demonstrate the performance of JMF, we applied it to four synthetic and three real datasets. Firstly, we evaluated how the parameters influence the performance of JMF in terms of the Area Under the Curve (AUC) on four synthetic datasets. Then we compared the average objective values with respect to iteration numbers or running time of the four update methods on the synthetic datasets. Finally, we applied JMF to three real datasets from diverse fields. We ran the experiments on the synthetic datasets on a machine with an Intel Core i7-4770 CPU @ 3.40GHz ×4 with 16 GB RAM, using MATLAB (R2016a) 64-bit for the general implementation. The real datasets were run on a Windows server with an Intel (R) Xeon (R) E5-2643 v3 CPU @ 3.40GHz ×2 with 768 GB RAM, implemented in MATLAB (R2013a) 64-bit. For the purpose of reproducibility, the data and code are available at: http://page.amss.ac.cn/shihua.zhang/software.html

### 5.1 Synthetic dataset 1

We adopt a simulation strategy similar to that used in [29] to demonstrate the effectiveness of the algorithms for JMF. The ground truth basis matrix $W_0$ of true low rank $K$ was constructed with coph $= 0$ as follows,

$$W_0[j,k] = \begin{cases} 1, & 1 + x_k(10) \le j \le 10 + x_k(10), \\ 0, & \text{otherwise}. \end{cases} \qquad (21)$$

Meanwhile, three ground truth coefficient matrices $H_0^I$ ($I = 1, 2, 3$) were constructed with coph $= 0$,

$$H_0^1[j,k] = \begin{cases} 1, & 1 + x_j(30) \le k \le 30 + x_j(30), \\ 0, & \text{otherwise}. \end{cases} \qquad (22)$$

$$H_0^2[j,k] = \begin{cases} 1, & 1 + x_j(40) \le k \le 40 + x_j(40),\ j \neq 4, \\ 0, & \text{otherwise}. \end{cases} \qquad (23)$$

$$H_0^3[j,k] = \begin{cases} 1, & 1 + x_j(50) \le k \le 50 + x_j(50),\ j \neq 3, \\ 0, & \text{otherwise}. \end{cases} \qquad (24)$$

where $x_j(c)$ determines the offset of the $j$th block of width $c$.

We set the data matrices by $X_I = W_0 H_0^I + \epsilon$ ($I = 1, 2, 3$), where $\epsilon$ was Gaussian noise. The within constraint on each data matrix was simulated as follows,

$$\Theta_I^k[s,t] = \begin{cases} 1, & \text{if } H_I[k,s] = 1 \text{ and } H_I[k,t] = 1, \\ 0, & \text{otherwise}. \end{cases} \qquad (25)$$

We obtained the within constraint matrix $\Theta_I$ on the $I$th source by averaging the matrices $\Theta_I^k$, where $k = 1, \ldots, K$. The between constraint on the $I$th and $J$th data matrices was simulated as follows,

$$R_{IJ}^k[s,t] = \begin{cases} 1, & \text{if } H_I[k,s] = 1 \text{ and } H_J[k,t] = 1, \\ 0, & \text{otherwise}. \end{cases} \qquad (26)$$

The between constraint matrix $R_{IJ}$ was obtained by averaging the matrices $R_{IJ}^k$ over $k$.
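The construction above can be sketched as follows. Since several specifics were lost from the text, the rank $K$, the noise level, and the offset $x_j(c) = (j-1)c$ (contiguous non-overlapping blocks) are assumptions of this sketch:

```python
import numpy as np

# Illustrative generator for synthetic dataset 1 (Eqs. 21-26).  The true
# rank K, the noise level sigma, and the offset x_j(c) = (j-1)*c are
# assumptions; the `skip` argument models the dropped factors of Eqs. 23-24.
K, sigma = 5, 0.1
rng = np.random.default_rng(6)

def block_matrix(k, width, skip=None):
    """k x (k*width) 0/1 matrix whose j-th row holds a contiguous block of ones."""
    A = np.zeros((k, k * width))
    for j in range(k):
        if j != skip:
            A[j, j * width:(j + 1) * width] = 1.0
    return A

W0 = block_matrix(K, 10).T                        # M x K basis of Eq. 21
H0 = [block_matrix(K, 30),
      block_matrix(K, 40, skip=3),                # Eq. 23: factor j = 4 absent
      block_matrix(K, 50, skip=2)]                # Eq. 24: factor j = 3 absent
X = [W0 @ H + sigma * rng.standard_normal((W0.shape[0], H.shape[1])) for H in H0]

# Eqs. 25-26: rank-wise co-membership constraints, averaged over k
Theta = [np.mean([np.outer(H[k], H[k]) for k in range(K)], axis=0) for H in H0]
R12 = np.mean([np.outer(H0[0][k], H0[1][k]) for k in range(K)], axis=0)
```

Because the blocks are non-overlapping, each $\Theta_I$ is block-diagonal and symmetric, which is the modular structure the network-regularized terms are designed to recover.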

### 5.2 Synthetic dataset 2

We simulated a relatively large-scale dataset. Different from dataset 1, the entries of the true basis matrix $W_0$ were deemed to be independent and identically distributed Bernoulli variables with a fixed probability. And we constructed the ground truth coefficient matrices $H_0^I$ in the following manner, with their respective coph values.

$$H_0^1[j,k] = \begin{cases} 1, & 1 + x_j(20) \le k \le 20 + x_j(20), \\ 0, & \text{otherwise}. \end{cases}$$

$$H_0^2[j,k] = \begin{cases} 1, & 1 + x_j(30) \le k \le 30 + x_j(30), \\ 0, & \text{otherwise}. \end{cases}$$

$$H_0^3[j,k] = \begin{cases} 1, & 1 + x_j(50) \le k \le 50 + x_j(50), \\ 0, & \text{otherwise}. \end{cases}$$

We set the data matrices by $X_I = W_0 H_0^I + \epsilon$ ($I = 1, 2, 3$), where $\epsilon$ was Gaussian noise. Similarly, $\Theta_I$ and $R_{IJ}$ were generated as mentioned above.

### 5.3 Synthetic dataset 3

We simulated a dataset with overlap information in the coefficient matrices as well as large noise in the prior networks. We constructed the ground truth basis matrix $W_0$ with coph $= 0$,

$$W_0[j,k] = \begin{cases} 1, & 1 + x_k(100) \le j \le 100 + x_k(100), \\ 0, & \text{otherwise}. \end{cases}$$

The entries of the true coefficient matrices $H_0^I$ ($I = 1, 2, 3$) were regarded as independent and identically distributed Bernoulli variables with a fixed probability. Then we set the data matrices by $X_I = W_0 H_0^I + \epsilon$ ($I = 1, 2, 3$), where $\epsilon$ was Gaussian noise. $\Theta_I$ and $R_{IJ}$ were generated as mentioned in section 5.1.