###### Abstract

With the huge influx of data nowadays, extracting knowledge from them has become an interesting but tedious task for data scientists, particularly when the data are heterogeneous in form and have missing information. Many data completion techniques have been introduced, especially with the advent of kernel methods. However, among the many completion techniques available in the literature, the problem of mutually completing several incomplete kernel matrices has received little attention. In this paper, we present a new method, the Mutual Kernel Matrix Completion (MKMC) algorithm, that mutually infers the missing entries of multiple kernel matrices by combining the notions of data fusion and kernel matrix completion, applied to biological data sets to be used for a classification task. We first introduce an objective function that is minimized by exploiting the EM algorithm, which in turn yields an estimate of the missing entries of the kernel matrices involved. The completed kernel matrices are then combined to produce a model matrix that can be used to further improve the obtained estimates. An interesting result of our study is that the E-step and the M-step are given in closed form, which makes our algorithm efficient in terms of time and memory. After completion, the completed kernel matrices are used to train an SVM classifier to test how well the relationships among the entries are preserved. Our empirical results show that the proposed algorithm outperforms the traditional completion techniques in preserving the relationships among the data points and in accurately recovering the missing kernel matrix entries. MKMC thus offers a promising solution to the problem of mutually estimating a number of relevant incomplete kernel matrices.

Mutual Kernel Matrix Completion

Tsuyoshi Kato, Rachelle Rivero

† Faculty of Science and Engineering, Gunma University, Kiryu-shi, Gunma, 326–0338, Japan. Center for Informational Biology, Ochanomizu University, Bunkyo-ku, Tokyo, 112–8610, Japan.

## 1 Introduction

Analysis of biological data has been an interesting topic among researchers in the field, owing to the challenges presented by biology being a data-rich science: the data are represented in various forms and are painstakingly obtained. Most of the time, the data obtained are noisy and have missing information. The simplest technique for dealing with missing data is to delete the data points with missing entries; however, this reduces the data size and its statistical power [12]. For subsequent vector-based analysis, missing values are sometimes imputed in advance with zeros (zero-imputation) or with the mean (mean-imputation). However, doing so dilutes the relationships among variables, which leads to underestimation of the variance [15, 12]. Other methods estimate the missing values with a regression model or with the EM algorithm. The EM algorithm was introduced by Dempster et al. as a general approach for iteratively computing maximum likelihood estimates from incomplete data sets [4]. Each iteration consists of an expectation step (E-step) and a maximization step (M-step), hence the name EM algorithm.

Aside from missing information, another challenge posed by biological data is their heterogeneous representation. Biological data may come as strings (such as protein and genome sequences), microarrays (such as gene expression data), protein-protein interactions, or metabolic pathways. Simultaneous analysis of such data is difficult unless they are converted into a single form. In Lanckriet et al. [9], each data set is represented by a generalized similarity relationship between pairs of genes or proteins. This similarity relationship is defined by a kernel function, and data sets represented via a kernel function can be combined directly. The resulting matrix whose entries are the relationships defined by a kernel function is called a kernel matrix. A large body of literature on kernel methods and kernel matrices is available: [5, 16, 19, 3].

Lanckriet et al. [9] utilized the simultaneous analysis of kernel matrices in solving the problem of protein classification. They optimally combined multiple kernel representations by formulating the problem as a convex optimization problem that can be solved using semidefinite programming (SDP) techniques [9, 10, 11]. They showed that a statistical learning algorithm performs better when trained on the integrated data than on any single data set alone.

Integrated kernel matrices can also be used in the analysis of a data set with missing entries. For example, a supervised network inference among proteins can be formulated as a kernel matrix completion problem, whose missing entries can be inferred from multiple biological data types such as gene expression, phylogenetic profiles, and amino acid sequences [6]. Moreover, when a data set has missing entries, its corresponding kernel matrix will be incomplete as well. Inferring the missing entries of the corresponding kernel matrix rather than of the data matrix itself has several advantages [8]: there are far fewer missing entries in the kernel matrix than in the actual data, and machine learning methods like support vector machines (SVMs) work directly with kernel matrices, to name a few. The completed kernel matrix, as well as the other relevant kernel matrices, can then be used in protein classification or protein network inference problems. Several kernel matrix completion methods that depend on other complete kernel matrices have been introduced: [18, 6, 7]. However, it is not always the case that only one data set is incomplete. To date, simultaneous analysis and mutual completion of kernel matrices have not been extensively studied, but they are attracting increasing interest among data scientists. Kumar et al. [8] considered the problem of deriving a single consistent similarity kernel matrix (with binary entries) from a set of self-consistent but incomplete similarity kernel matrices. They solved this problem by simultaneously completing the kernel matrices and finding their best linear combination via SVM. However, their algorithm involves an iterative optimization of each of the kernel matrices and weights via a semidefinite programming (SDP) solver, making their method very time- and memory-consuming. Another multiple kernel completion technique is introduced in [2].
In their work, there is only one set of observed objects, but some relationships between pairs of these objects are missing, yielding a kernel matrix with missing rows and columns. They then generated multiple rearrangements of this incomplete kernel matrix, producing multiple incomplete kernel matrices (a multi-view setting), and posed the problem as a multi-view completion problem. Their method completes each kernel matrix with help from the other views of that matrix, until all of the views are completed.

Our study is most similar to the works of [18, 9, 11, 6], where different descriptions of yeast proteins (such as primary protein sequences, protein-protein interaction data, and mRNA expression data) are utilized for subsequent protein classification. Each description constitutes a data set. In contrast with their work, our study assumes that some or all of the descriptions (or data sets) have missing rows and columns, where the set of proteins with missing information in each data set may not be the same. This is illustrated by the different positions of the white rows and columns of the kernel matrices in Fig. 1 (c). We can also see from Fig. 1 how our setting (c) differs from those of Tsuda et al. (a) and Kato et al. (b). The problem thus becomes a multiple kernel matrix completion problem, which our study aims to solve.

In this paper, we present a solution to multiple kernel matrix completion problem by introducing an algorithm, which we call Mutual Kernel Matrix Completion (MKMC) algorithm, that exploits the EM algorithm to minimize the Kullback-Leibler (KL) divergences among the kernel matrices. The E-step and M-step are given in closed form (the derivations of which are given in Appendix A), making our method time- and memory-efficient.

The rest of the paper is organized as follows: The next section describes the setting for our proposed method; Sect. 3 describes the theory behind the proposed algorithm and how to implement it; Sect. 4 reviews the EM framework to show that the MKMC algorithm is an EM algorithm; Sect. 5 expounds the data used and experimental design; Sect. 5.2 discusses the empirical results; and Sect. 6 concludes.

## 2 Problem Setting

Suppose we wish to analyze $\ell$ objects for which $K$ relevant data sources are available. The relationships among these objects in each of the data sources are given as entries of the symmetric kernel matrices $Q^{(1)},\dots,Q^{(K)}$. Suppose also that some or all of the available data sources have missing entries, as can be seen in Fig. 1 (c). The rows and columns with missing entries in each of the kernel matrices are rearranged such that the information is available only for the first samples of each $Q^{(k)}$ and unavailable for the remaining samples. Then the resultant reordered matrix, denoted by $Q^{(k)}_{vh,vh}$, can be partitioned as

$$Q^{(k)}_{vh,vh}=\begin{pmatrix}Q^{(k)}_{v,v}&Q^{(k)}_{v,h}\\Q^{(k)}_{h,v}&Q^{(k)}_{h,h}\end{pmatrix},\tag{1}$$

where the symmetric submatrix $Q^{(k)}_{v,v}$ has visible entries, while the submatrices $Q^{(k)}_{v,h}$, $Q^{(k)}_{h,v}$, and $Q^{(k)}_{h,h}$ have all entries missing.

The goal of this study is to utilize the given information $Q^{(k)}_{v,v}$ to infer the missing information $Q^{(k)}_{v,h}$ (and its transpose $Q^{(k)}_{h,v}$) and $Q^{(k)}_{h,h}$, thus completing the kernel matrices.

## 3 Mutual Kernel Matrix Completion (MKMC) Method

To determine the difference between two kernel matrices, we make use of the KL divergence between the corresponding probability distributions of the kernel matrices. To define the KL divergence, let us relate each kernel matrix to the covariance of a zero-mean Gaussian distribution as follows:

$$q_1:=\mathcal{N}\left(0,Q^{(1)}\right),\;\ldots,\;q_K:=\mathcal{N}\left(0,Q^{(K)}\right),$$

where the $q_k$'s are called the empirical distributions. Also, let us consider a model matrix $M$ and relate it to the covariance of a zero-mean Gaussian distribution $p:=\mathcal{N}(0,M)$, called the model distribution. Then, for $k=1,\dots,K$, the KL divergence of the probability distribution $p$ from $q_k$ is given by

$$\mathrm{KL}(q_k,p)=\int q_k(x)\log\frac{q_k(x)}{p(x)}\,dx,$$

and the KL divergence of the model matrix $M$ from $Q^{(k)}$ is given by

$$\mathrm{KL}\left(Q^{(k)},M\right)=\frac{1}{2}\left[\mathrm{Tr}\left(M^{-1}Q^{(k)}\right)+\log\det M-\log\det Q^{(k)}-\ell\right],$$

where both $Q^{(k)}$ and $M$ are of size $\ell\times\ell$. To infer the missing entries, we minimize the following objective function, which takes the sum of the KL divergences:

$$J(H,M):=\lambda\,\mathrm{KL}(I,M)+\sum_{k=1}^{K}\mathrm{KL}\left(Q^{(k)},M\right),\tag{2}$$

where $H$ is the set of submatrices with missing entries, and $M$ is the model matrix.
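The Gaussian KL divergence above is straightforward to evaluate numerically. Below is a minimal NumPy sketch (the function name `gaussian_kl` is ours, not from the paper):

```python
import numpy as np

def gaussian_kl(Q, M):
    """KL divergence KL(N(0, Q) || N(0, M)) of zero-mean Gaussians:
    0.5 * [Tr(M^{-1} Q) + log det M - log det Q - l], with l the dimension."""
    l = Q.shape[0]
    M_inv = np.linalg.inv(M)
    _, logdet_M = np.linalg.slogdet(M)  # sign ignored: M assumed positive definite
    _, logdet_Q = np.linalg.slogdet(Q)
    return 0.5 * (np.trace(M_inv @ Q) + logdet_M - logdet_Q - l)
```

Note that the divergence vanishes when the two covariance matrices coincide, and is strictly positive otherwise.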

Minimization of the KL divergence can be done by first fixing $M$ and minimizing the objective function with respect to $Q^{(k)}_{v,h}$ and $Q^{(k)}_{h,h}$, and then fixing the obtained solutions to update $M$. Iteratively performing these steps monotonically decreases the KL divergence of $M$ from each $Q^{(k)}$. These steps correspond to the E-step and M-step of an EM algorithm, as detailed in the next section.

Succinctly, the following two steps are repeated until convergence:

1. E-step. Fix the model matrix $M$ to obtain the missing submatrices $H$.

2. M-step. Use $H$ to update $M$.

This algorithm is guaranteed to decrease the objective function monotonically. An outline of the proposed algorithm is given in Algorithm 1. In Step 7 of the algorithm, the model matrix $M$ is reordered and partitioned in the same way as the current $Q^{(k)}$. Since we are taking KL divergences from $M$ to each of the $Q^{(k)}$'s, the reordering of $M$, as well as the sizes of its submatrices, depends on the $Q^{(k)}$ with which it is being compared at the moment. We denote the reordered and partitioned $M$ with respect to $Q^{(k)}$ as

$$M^{(k)}_{vh,vh}:=\begin{pmatrix}M^{(k)}_{v,v}&M^{(k)}_{v,h}\\M^{(k)}_{h,v}&M^{(k)}_{h,h}\end{pmatrix}.\tag{3}$$

The corresponding submatrices of $Q^{(k)}_{vh,vh}$ and $M^{(k)}_{vh,vh}$ have matching sizes: $Q^{(k)}_{v,v}$ and $M^{(k)}_{v,v}$, $Q^{(k)}_{v,h}$ and $M^{(k)}_{v,h}$, and $Q^{(k)}_{h,h}$ and $M^{(k)}_{h,h}$, with the numbers of visible and hidden samples summing to $\ell$. $Q^{(k)}_{h,v}$ and $M^{(k)}_{h,v}$ denote the matrix transposes of the submatrices $Q^{(k)}_{v,h}$ and $M^{(k)}_{v,h}$, respectively.
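The two alternating steps above can be sketched in NumPy as follows. This is an illustrative implementation under assumed interfaces (the conventions that `kernels` holds zero-initialized arrays and `visible[k]` is the index array of observed samples for kernel $k$ are ours): the E-step fills the hidden blocks of each $Q^{(k)}$ from the model matrix, and the M-step recombines the completed kernels into the model matrix.

```python
import numpy as np

def mkmc(kernels, visible, lam=0.1, n_iter=50):
    """Sketch of the MKMC iteration. `kernels`: list of l x l symmetric
    arrays with missing rows/columns zero-initialized; `visible[k]`: index
    array of observed samples for kernel k; `lam`: regularization scalar."""
    K = len(kernels)
    l = kernels[0].shape[0]
    Qs = [Q.copy() for Q in kernels]
    M = (lam * np.eye(l) + sum(Qs)) / (lam + K)  # initial model matrix
    for _ in range(n_iter):
        # E-step: closed-form fill of the hidden blocks of each Q^(k)
        for k in range(K):
            v = visible[k]
            h = np.setdiff1d(np.arange(l), v)
            if h.size == 0:
                continue
            Mvv_inv = np.linalg.inv(M[np.ix_(v, v)])
            Mvh = M[np.ix_(v, h)]
            Qvv = Qs[k][np.ix_(v, v)]
            Qvh = Qvv @ Mvv_inv @ Mvh                        # Q_{v,h} update
            Qhh = (M[np.ix_(h, h)] - Mvh.T @ Mvv_inv @ Mvh
                   + Mvh.T @ Mvv_inv @ Qvv @ Mvv_inv @ Mvh)  # Q_{h,h} update
            Qs[k][np.ix_(v, h)] = Qvh
            Qs[k][np.ix_(h, v)] = Qvh.T
            Qs[k][np.ix_(h, h)] = Qhh
        # M-step: regularized average of the completed kernel matrices
        M = (lam * np.eye(l) + sum(Qs)) / (lam + K)
    return Qs, M
```

The hidden-block updates are the closed forms derived in Appendix A, and the model-matrix update is the regularized average of the completed kernels.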

## 4 Connection to EM Algorithm

In this section, we review the EM framework and then show that MKMC is an EM algorithm.

### 4.1 EM Algorithm Revisited

The EM method is a numerical optimization algorithm for statistical inference of model parameters, such as maximum likelihood estimation. From observed data $V$, maximum likelihood estimation finds the value of the parameter $\Theta$ that maximizes the log-likelihood function defined as

$$L_{\mathrm{ML}}(\Theta;V):=\log p(V\,|\,\Theta),\tag{4}$$

where $p(V\,|\,\Theta)$ is the model. A penalizing term is often added to the log-likelihood function [14]. The penalized log-likelihood function is given as

$$L_{\mathrm{II}}(\Theta;V):=L_{\mathrm{ML}}(\Theta;V)+\log p(\Theta),\tag{5}$$

where $p(\Theta)$ is a predefined prior distribution of the model parameters. With an empirical distribution $q(V)$ [1], a slightly generalized form can be considered:

$$L_{\mathrm{II}}(\Theta;q):=\mathbb{E}_{q(V)}\left[\log p(V\,|\,\Theta)\right]+\log p(\Theta).\tag{6}$$

Given the empirical distribution $q$, the EM framework iteratively maximizes $L_{\mathrm{II}}(\Theta;q)$ with respect to $\Theta$ by repeating the E-step and the M-step alternately. In the EM framework, unobserved data $H$ are considered in addition to the observed data $V$, and it is supposed that the joint distribution, $p(V,H\,|\,\Theta)$, is in the exponential family [20]. Namely, there exist vector-valued or matrix-valued functions, $S(V,H)$ and $G(\Theta)$, such that

$$\log p(V,H\,|\,\Theta)=\left\langle S(V,H),G(\Theta)\right\rangle-A(\Theta),\tag{7}$$

where $A(\Theta)$ is known as the cumulant function

$$A(\Theta):=\log\int\exp\left\langle S(V,H),G(\Theta)\right\rangle\,dV\,dH.\tag{8}$$

The pair of $V$ and $H$ is called the complete data [13]. The model distribution of the observed data is the marginal distribution of the joint distribution:

$$p(V\,|\,\Theta)=\int p(V,H\,|\,\Theta)\,dH.\tag{9}$$

In the E-step of the $t$-th iterate, the expected sufficient statistics

$$S^{(t)}:=\mathbb{E}_{t-1}\left[S(V,H)\right]\tag{10}$$

is computed, where $\Theta^{(t-1)}$ is the value determined in the previous iterate, and $\mathbb{E}_{t-1}[\cdot]$ denotes the expectation taken with the parameter value $\Theta^{(t-1)}$. Then, in the M-step, the so-called Q-function [13] is maximized with respect to $\Theta$ to get the new value $\Theta^{(t)}$, where the Q-function is defined as

$$Q^{(t)}(\Theta):=\mathbb{E}_{t-1}\left[\log p(V,H\,|\,\Theta)\right]+\log p(\Theta)=\left\langle S^{(t)},G(\Theta)\right\rangle-A(\Theta)+\log p(\Theta).\tag{11}$$

If the penalized log-likelihood function is bounded from above, the convergence of the EM algorithm is ensured from the following property:

###### Proposition 4.1.

The EM algorithm produces a non-decreasing series of the penalized log-likelihood:

$$L_{\mathrm{II}}(\Theta^{(t)};q)\ge L_{\mathrm{II}}(\Theta^{(t-1)};q),\quad\text{for }t\in\mathbb{N}.$$

(Proof is given in Appendix C.1.) The proposed algorithm, MKMC, makes the solution converge to a local optimum, just as the EM algorithm does.

### 4.2 MKMC is an EM Algorithm

We are now ready to establish a connection between MKMC and the EM framework. In the setting described below, we can show that MKMC is an EM algorithm. We first describe the setting of the empirical distribution, followed by the definition of the model distribution. Suppose the complete data are expressed as a set of $\ell$-dimensional vectors $x_1,\dots,x_K$, where each entry in $x_k$ corresponds to a row or a column of $Q^{(k)}$. For $k=1,\dots,K$, let $v_k$ and $h_k$ be the sub-vectors of $x_k$, where the entries in $v_k$ correspond to available data in the $k$-th kernel matrix, and the entries in $h_k$ correspond to missing data in the $k$-th kernel matrix. In the context of the EM framework, the observed data is $V:=(v_1,\dots,v_K)$ and the unobserved data is $H:=(h_1,\dots,h_K)$. The second-order moment of the empirical distribution is supposed to be given as

$$\mathbb{E}_{q(V)}\left[v_k v_k^{\top}\right]=Q^{(k)}_{v,v}.\tag{12}$$

Notice that an example that satisfies the assumption in (12) is

$$q(V)=\int q_1(v_1,h_1)\cdots q_K(v_K,h_K)\,dH,\tag{13}$$

(proof is given in Appendix C.2), where $q_1,\dots,q_K$ have been defined in Sect. 3, although the definition of the empirical distribution is not limited to (13) so long as the assumption in (12) holds.

Next, we define the model distribution. In the model, the parameter is $\Theta=M$, and the joint density of the complete data is given by

$$p(V,H\,|\,\Theta)=\prod_{k=1}^{K}\mathcal{N}(x_k\,|\,0,M).\tag{14}$$

Note that this model distribution is in the exponential family by setting

$$S(V,H):=\sum_{k=1}^{K}x_k x_k^{\top}\quad\text{and}\quad G(M)=-\frac{1}{2}M^{-1},\tag{15}$$

the derivations of which are given in Appendix C.3. The prior distribution of $M$ is defined as the following Wishart distribution:

$$p(M)\propto\left(\frac{1}{\det M}\right)^{\lambda/2}\exp\left(-\frac{\lambda}{2}\mathrm{Tr}\,M^{-1}\right).\tag{16}$$

We now apply the EM algorithm to this setting. In the E-step, the expectation of the sufficient statistics

$$\mathbb{E}_{t-1}\left[S(V,H)\right]=\sum_{k=1}^{K}\mathbb{E}_{t-1}\left[x_k x_k^{\top}\right]\tag{17}$$

has to be computed. The symmetric matrices $\mathbb{E}_{t-1}[x_k x_k^{\top}]$ have the following sub-matrices: $\mathbb{E}_{t-1}[v_k v_k^{\top}]$, $\mathbb{E}_{t-1}[v_k h_k^{\top}]$, and $\mathbb{E}_{t-1}[h_k h_k^{\top}]$. The matrix $\mathbb{E}_{t-1}[v_k v_k^{\top}]=Q^{(k)}_{v,v}$ does not depend on the model parameter $M$. Letting

$$M^{(k)}_{h|v}:=M^{(k)}_{h,h}-M^{(k)}_{h,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h},\tag{18}$$

it can be shown that the remaining two expected matrices are given by

$$\mathbb{E}_{t-1}\left[v_k h_k^{\top}\right]=Q^{(k)}_{v,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h},\tag{19}$$

and

$$\mathbb{E}_{t-1}\left[h_k h_k^{\top}\right]=M^{(k)}_{h|v}+M^{(k)}_{h,v}\left(M^{(k)}_{v,v}\right)^{-1}Q^{(k)}_{v,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h},\tag{20}$$

(derivations are given in Appendix C.4), which coincide with the update rules given in Sect. 3. Hence, the $Q^{(k)}$ obtained in the $t$-th iterate of MKMC is equal to $\mathbb{E}_{t-1}[x_k x_k^{\top}]$.

In the M-step, the Q-function is maximized with respect to $M$. Substituting all the settings into (11), the Q-function is expressed as

$$Q^{(t)}(M)=-\frac{1}{2}\left\langle\lambda I+\sum_{k=1}^{K}Q^{(k)},\,M^{-1}\right\rangle-\frac{K+\lambda}{2}\log\det M+\mathrm{const.},\tag{21}$$

which is maximized at

$$M=\frac{1}{\lambda+K}\left(\lambda I+\sum_{k=1}^{K}Q^{(k)}\right).\tag{22}$$

It turns out that the update rules of the E-step and the M-step are the same as those of the MKMC algorithm. In conclusion, MKMC can be interpreted as an EM algorithm.

## 5 Experimental Results

### 5.1 Experimental Settings

The MKMC algorithm is applied to seven kernel matrices derived from three different types of data: four from primary protein sequences, two from protein-protein interaction data, and one from mRNA expression data. These are the data used in [9] to test their statistical framework for genomic data fusion and yeast protein classification. They showed in their experiment that the knowledge obtained from all of the aforementioned data improved protein classification, compared to classifying proteins using only a single type of data. In order to utilize the heterogeneous data for the protein classification task, the data are reduced into the common format of kernel matrices, allowing them to be combined [9]. These kernel matrices are square symmetric matrices whose entries represent similarities among pairs of yeast proteins. Each gene is annotated as membrane, non-membrane, or unknown. In this study we considered only proteins with known annotations, leaving 2,318 yeast proteins out of 6,112.

In our experiment, for each object, one of the seven kernel matrices is picked and the corresponding row and column of that matrix are “removed”, i.e., replaced with undetermined values without changing the size of the matrix. We varied the ratio of the missing entries from 10% up to 90%. First, we randomly chose 10% of the entries to be missing, on which the completion methods were performed. Next, we randomly chose another 10% of the entries to be missing, in addition to the previous 10%, comprising the new 20% of missing entries. This was repeated until we reached 90% missing entries. For comparison, other matrix completion techniques were considered, namely the zero-imputation and mean-imputation methods. Zero-imputation completes a kernel matrix by imputing zeros in its missing entries; this is also how we initialized the kernel matrices in the MKMC algorithm. On the other hand, imputing the mean of the remaining entries to complete a kernel matrix is called completion by mean-imputation. Appendix B describes how each of the kernel matrices was completed using this method.

The above-mentioned matrix completion techniques were individually performed on each of the seven kernel matrices. We then randomly picked data points to be used in training the SVM classifier for the subsequent protein classification task.

In this study, 200 and 1,000 training data points were considered. We first made the division into training and testing data sets for the case of 200 training points, and then for the case of 1,000. For the 200 training data points, we randomly picked 200 out of the 2,318 data points and used them to train SVM classifiers on each of the kernel matrices; the remaining 2,118 data points served as testing data. The 1,000 training data points were obtained by retaining the first 200 training points and adding 800 more randomly picked data points; this time, there were 1,318 testing data points.

As regards the completion methods, the MKMC algorithm gives us the completed kernel matrices $\hat{Q}^{(1)},\dots,\hat{Q}^{(K)}$, where $K=7$ is the number of kernel matrices used. SVM classifiers were then trained on these matrices, and on the combination of these kernel matrices, for the task of classifying a protein as membrane or non-membrane. Meanwhile, after completion by zero-imputation and mean-imputation, the kernel matrices were also combined to obtain the model matrix as in (22), on which the SVM classifier was also trained alongside the seven completed kernel matrices. The zero-imputation and mean-imputation methods, applied with SVM, are referred to as zero-SVM and mean-SVM, respectively.
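Training an SVM directly on a (completed) kernel matrix can be done with a precomputed-kernel SVM. The sketch below uses scikit-learn; the helper name `kernel_svm_roc` and the index-array arguments are our conventions, chosen to show the Gram-matrix slicing involved:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

def kernel_svm_roc(K_full, y, train_idx, test_idx):
    """Train an SVM on a precomputed (completed) kernel matrix and return
    the ROC score on the held-out samples. `K_full` is the full l x l
    kernel matrix; `y` holds the +1/-1 labels."""
    clf = SVC(kernel="precomputed")
    # fit expects the train-by-train Gram matrix
    clf.fit(K_full[np.ix_(train_idx, train_idx)], y[train_idx])
    # prediction expects the test-by-train Gram matrix
    scores = clf.decision_function(K_full[np.ix_(test_idx, train_idx)])
    return roc_auc_score(y[test_idx], scores)
```

The same helper applies unchanged whether `K_full` is one of the completed $\hat{Q}^{(k)}$ or the model matrix $M$.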

### 5.2 Results

In this section, the classification performance and completion accuracy of the methods used are evaluated. The methods’ classification performances were determined in different set-ups: varying the number of training points and varying the ratio of missing entries. How accurately the methods recovered the missing data, on the other hand, was evaluated by taking the mean of the correlation distance between the original matrices and the estimated matrices.

#### 5.2.1 Classification Performance

The classification performance of the three completion methods was measured via ROC scores. The ROC score, or area under the ROC curve, tells us how well a binary classifier can discriminate the data; better classifiers achieve higher ROC scores. The ROC score is also invariant to class-label imbalance, which makes it an appropriate measure of classification performance in our experiment, since our data set is heavily imbalanced: only 497 out of the 2,318 proteins (about 21%) are positively labeled. In this experiment, we can determine how well a completion method preserves the relationships among pairs of proteins by measuring how accurately an SVM classifier can discriminate proteins as membrane or non-membrane.

The ROC scores of the three completion methods on each kernel matrix, for 200 and 1,000 training points, are shown in Fig. 2. Here, a fixed portion of the entries in each of the kernel matrices was missing and had been estimated by the three methods. Our empirical results show that as the number of training data points increases, the SVM classification performance improves. We can also see from the plots that the model matrix $M$, which is the combination of the completed kernel matrices, obtained the highest ROC score; this matches the claim by Lanckriet et al. [9] that a learning algorithm performs better if trained on the integrated data than on a single data set alone. Most importantly, our proposed method obtained the highest ROC scores in all of the classification performance tests, with significant differences compared to zero-SVM and mean-SVM, as assessed using a two-sample t-test. For 200 training data points, MKMC obtained an average ROC score of 0.779 for the model matrix $M$, compared to 0.721 for zero-SVM and 0.745 for mean-SVM. When the number of training data points is 1,000, MKMC achieved an average ROC score of 0.848 for $M$, compared to 0.836 for zero-SVM and 0.832 for mean-SVM.

In addition to the above tests, the effect of the amount of missing values in the kernel matrices on the methods’ performances was also determined. The ratio of the missing entries was artificially varied from 0% (no missing entries, as baseline) to 90%. Fig. 3 demonstrates the performances of MKMC, zero-SVM, and mean-SVM in this set-up. Here, although the classification performance decreased as the number of missing entries increased, MKMC retained the highest ROC scores among the three methods, the gap in the ROC scores being most prominent at intermediate ratios of missing entries.

The experiments show that among the matrix completion techniques used, MKMC best preserves the relationships among the data points.

#### 5.2.2 Completion Accuracy

To test how accurately the missing values were recovered by the three methods, we computed the mean correlation distance between the estimated matrices and their corresponding complete kernel matrices as follows:

$$\frac{1}{K}\sum_{k=1}^{K}\left(1-\frac{\mathrm{Tr}\left(Q^{(k)}\hat{Q}^{(k)}\right)}{\left\|Q^{(k)}\right\|_F\left\|\hat{Q}^{(k)}\right\|_F}\right),$$

where $\|\cdot\|_F$ is the Frobenius norm, and $Q^{(k)}$ and $\hat{Q}^{(k)}$ are the “true” and the estimated kernel matrices, respectively; the computation was done for each ratio of missing entries. A lower correlation matrix distance means better correlation between the matrices. As with the previous experiments, the ratio of the missing entries was artificially varied up to 90%. Fig. 4 shows that MKMC recovered the missing values more accurately than the two imputation methods, except at the highest ratios of missing entries; even in that case, however, the gap between MKMC and mean-imputation is not very large.
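The mean correlation matrix distance above is computed directly from traces and Frobenius norms. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def mean_correlation_distance(true_Qs, est_Qs):
    """Mean correlation matrix distance between true and estimated kernels:
    (1/K) * sum_k [1 - Tr(Q_k Qhat_k) / (||Q_k||_F ||Qhat_k||_F)]."""
    dists = []
    for Q, Qh in zip(true_Qs, est_Qs):
        num = np.trace(Q @ Qh)
        den = np.linalg.norm(Q, "fro") * np.linalg.norm(Qh, "fro")
        dists.append(1.0 - num / den)
    return float(np.mean(dists))
```

For symmetric matrices the distance is zero exactly when each estimate is a positive multiple of its true counterpart.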

## 6 Conclusion

In this study, we introduced a new algorithm, called MKMC, to solve the problem of mutually inferring the missing entries of similarity matrices (or kernel matrices). MKMC works by combining relevant data sources, even when some or all of them are incomplete, and using the combined sources (called the model matrix) to estimate the missing entries of the kernel matrices. The inference is done by minimizing the KL divergence of the model matrix from each of the kernel matrices. The new kernel matrices are then recombined, updating the model matrix, which can be reused to improve the estimates of the entries. The optimal solutions given by MKMC in the E- and M-steps are given in closed form, making MKMC efficient in terms of time and memory. In addition, the results show that MKMC has good to excellent classification performance as the amount of training data and missing entries is varied, a good indication that MKMC preserves the relationships among pairs of data points. We have also shown that when portions of the kernel matrices were artificially removed, MKMC recovered the missing entries more accurately than the other two methods.

## Acknowledgment

This work was supported by JSPS KAKENHI Grant Numbers 26249075 and 40401236.

## References

• [1] Shun-Ichi Amari. Information geometry of the EM and em algorithms for neural networks. Neural Netw., 8(9):1379–1408, December 1995.
• [2] Sahely Bhadra, Samuel Kaski, and Juho Rousu. Multi-view kernel completion, February 2016.
• [3] Christopher M. Bishop. Pattern recognition and machine learning. Springer, 2006.
• [4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1–38, 1977.
• [5] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning – Data Mining, Inference, and Prediction. Springer, 2nd edition.
• [6] Tsuyoshi Kato, Koji Tsuda, and Kiyoshi Asai. Selective integration of multiple biological data for supervised network inference. Bioinformatics, 21(10):2488–2495, February 2005.
• [7] Taishin Kin, Tsuyoshi Kato, and Koji Tsuda. Protein classification via kernel matrix completion. In Kernel Methods in Computational Biology, chapter 3, pages 261–274. The MIT Press, 2004. In B. Schölkopf, K. Tsuda and J.P. Vert (eds).
• [8] Ritwik Kumar, Ting Chen, Moritz Hardt, David Beymer, Karen Brannon, and Tanveer Syeda-Mahmood. Multiple kernel completion and its application to cardiac disease discrimination, April 2013. IEEE 10th International Symposium on Biomedical Imaging.
• [9] Gert R. G. Lanckriet, Tijl De Bie, Nello Cristianini, Michael I. Jordan, and William Stafford Noble. A statistical framework for genomic data fusion. Bioinformatics, 20(16):2626–2635, 2004.
• [10] Gert R. G. Lanckriet, Nello Christianini, Michael I. Jordan, and William Stafford Noble. Kernel-based integration of genomic data using semidefinite programming. In Kernel Methods in Computational Biology, pages 231–259. The MIT Press, 2004. In B. Schölkopf, K. Tsuda and J.P. Vert (eds).
• [11] Gert R. G. Lanckriet, M. Deng, Nello Christianini, Michael I. Jordan, and William Stafford Noble. Kernel-based data fusion and its application to protein function prediction in yeast. To appear in Proceedings of the Pacific Symposium on Biocomputing, 2004.
• [12] Roderick J. A. Little and Donald B. Rubin. Statistical Analysis with Missing Data. John Wiley & Sons, Inc., Hoboken, New Jersey, 2nd edition, 2002. ISBN 0-471-18386-5.
• [13] Geoffrey J. McLachlan and Thriyambakam Krishnan. The EM algorithm and extensions, 2nd Edition. Wiley series in probability and statistics. Wiley, Hoboken, NJ, 2008.
• [14] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
• [15] Philip L. Roth, James E. Campion, and Steven D. Jones. The impact of four missing data techniques on validity estimates in human resource management. Journal of Business and Psychology, 11(1), 1996.
• [16] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. The MIT Press, Cambridge, Massachusetts, 2000.
• [17] J. Schur. Über potenzreihen, die im innern des einheitskreises beschränkt sind. Journal für die reine und angewandte Mathematik, 147:205–232, 1917. ISSN: 0075-4102; 1435-5345/e.
• [18] Koji Tsuda, Shotaro Akaho, and Kiyoshi Asai. The em algorithm for kernel matrix completion with auxiliary data. Journal of Machine Learning Research, 4:67–81, 2003. Chris Williams (eds).
• [19] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer, 2nd edition, 2000.
• [20] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1–305, January 2008.

## Appendix A Derivation of the Closed-Form Solutions

This section shows how the closed-form solutions of the MKMC algorithm were obtained.

To begin with, let us introduce an objective function that takes the sum of the KL divergences:

$$\begin{aligned}J(H,M)&:=\lambda\,\mathrm{KL}(I,M)+\sum_{k=1}^{K}\mathrm{KL}\left(Q^{(k)},M\right)\\&=\frac{\lambda}{2}\mathrm{Tr}\left(M^{-1}\right)+\frac{\lambda}{2}\log\det M-\frac{\lambda}{2}\ell+\frac{1}{2}\sum_{k=1}^{K}\mathrm{Tr}\left(M^{-1}Q^{(k)}\right)+\frac{K}{2}\log\det M-\frac{1}{2}\sum_{k=1}^{K}\log\det Q^{(k)}-\frac{K}{2}\ell\\&=\frac{\lambda}{2}\mathrm{Tr}\left(M^{-1}\right)+\frac{\lambda+K}{2}\log\det M+\frac{1}{2}\sum_{k=1}^{K}\left[\mathrm{Tr}\left(M^{-1}Q^{(k)}\right)-\log\det Q^{(k)}\right]-\frac{(\lambda+K)\ell}{2},\end{aligned}\tag{23}$$

where $Q^{(1)},\dots,Q^{(K)}$ are the data sources represented as symmetric kernel matrices, some or all of which may be incomplete; $M$ is the model matrix; and $\lambda$ is a small positive scalar.

### a.1 E-step.

In this step, we fix $M$ and minimize $J$ with respect to the missing submatrices $Q^{(k)}_{v,h}$ and $Q^{(k)}_{h,h}$.

First, suppose $Q^{(k)}$ and $M$ are reordered and partitioned as in (1) and (3), respectively. By Schur [17], the Schur complement of $M^{(k)}_{vh,vh}$ with respect to $M^{(k)}_{v,v}$ is defined as

$$M^{(k)}_{vh,vh}/M^{(k)}_{v,v}:=M^{(k)}_{h,h}-M^{(k)}_{h,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h},$$

and the determinant of $M^{(k)}_{vh,vh}$ is given by

$$\det M^{(k)}_{vh,vh}=\det M^{(k)}_{v,v}\,\det\left(M^{(k)}_{vh,vh}/M^{(k)}_{v,v}\right).$$

Hence,

$$\log\det M^{(k)}_{vh,vh}=\log\det M^{(k)}_{v,v}+\log\det\left[M^{(k)}_{h,h}-M^{(k)}_{h,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h}\right].\tag{24}$$

Similarly, for each $k=1,\dots,K$,

$$\log\det Q^{(k)}_{vh,vh}=\log\det Q^{(k)}_{v,v}+\log\det\left[Q^{(k)}_{h,h}-Q^{(k)}_{h,v}\left(Q^{(k)}_{v,v}\right)^{-1}Q^{(k)}_{v,h}\right].\tag{25}$$

Let us now represent the inverse of $M^{(k)}_{vh,vh}$ as

$$\left(M^{(k)}_{vh,vh}\right)^{-1}=\begin{pmatrix}S^{(k)}_{v,v}&S^{(k)}_{v,h}\\S^{(k)}_{h,v}&S^{(k)}_{h,h}\end{pmatrix}=:S^{(k)},$$

where, by the standard block-matrix inversion formulas with $M^{(k)}_{h|v}$ as defined in (18),

$$S^{(k)}_{v,v}=\left(M^{(k)}_{v,v}\right)^{-1}+\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h}\left(M^{(k)}_{h|v}\right)^{-1}M^{(k)}_{h,v}\left(M^{(k)}_{v,v}\right)^{-1};$$

$$S^{(k)}_{v,h}=-\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h}\left(M^{(k)}_{h|v}\right)^{-1};$$

$$S^{(k)}_{h,v}=\left(S^{(k)}_{v,h}\right)^{\top};$$

$$S^{(k)}_{h,h}=\left(M^{(k)}_{h|v}\right)^{-1}.$$

Then

$$\mathrm{Tr}\left[\left(M^{(k)}_{vh,vh}\right)^{-1}Q^{(k)}_{vh,vh}\right]=\mathrm{Tr}\left(S^{(k)}_{v,v}Q^{(k)}_{v,v}\right)+2\,\mathrm{Tr}\left(S^{(k)}_{v,h}Q^{(k)}_{h,v}\right)+\mathrm{Tr}\left(S^{(k)}_{h,h}Q^{(k)}_{h,h}\right).\tag{26}$$

Rewriting the objective function in terms of the reordered matrices using (24), (25), and (26), and taking its partial derivative with respect to $Q^{(k)}_{h,h}$, we get

$$\frac{\partial J}{\partial Q^{(k)}_{h,h}}=\frac{1}{2}\frac{\partial\,\mathrm{Tr}\left(S^{(k)}_{h,h}Q^{(k)}_{h,h}\right)}{\partial Q^{(k)}_{h,h}}-\frac{1}{2}\frac{\partial\log\det\left(Q^{(k)}_{h,h}-Q^{(k)}_{h,v}\left(Q^{(k)}_{v,v}\right)^{-1}Q^{(k)}_{v,h}\right)}{\partial Q^{(k)}_{h,h}}=\frac{1}{2}S^{(k)}_{h,h}-\frac{1}{2}\left[\left(Q^{(k)}_{h,h}-Q^{(k)}_{h,v}\left(Q^{(k)}_{v,v}\right)^{-1}Q^{(k)}_{v,h}\right)^{-1}\right]^{\top},$$

for $k=1,\dots,K$. Equating the above to zero and solving for $Q^{(k)}_{h,h}$, we get

$$Q^{(k)}_{h,h}=\left(S^{(k)}_{h,h}\right)^{-1}+Q^{(k)}_{h,v}\left(Q^{(k)}_{v,v}\right)^{-1}Q^{(k)}_{v,h}.\tag{27}$$

Next, we substitute (27) into (26), take the partial derivative of the resulting objective with respect to $Q^{(k)}_{h,v}$, and equate it to zero, for $k=1,\dots,K$. Solving for $Q^{(k)}_{h,v}$ and taking its transpose, we obtain

$$Q^{(k)}_{v,h}=-Q^{(k)}_{v,v}S^{(k)}_{v,h}\left(S^{(k)}_{h,h}\right)^{-1},\tag{28}$$

which we substitute into (27) to get

$$Q^{(k)}_{h,h}=\left(S^{(k)}_{h,h}\right)^{-1}+\left(S^{(k)}_{h,h}\right)^{-1}S^{(k)}_{h,v}Q^{(k)}_{v,v}S^{(k)}_{v,h}\left(S^{(k)}_{h,h}\right)^{-1}.\tag{29}$$

Rewriting (28) and (29) in terms of the submatrices of $M^{(k)}_{vh,vh}$ and $Q^{(k)}_{vh,vh}$, we arrive at

$$Q^{(k)}_{v,h}=Q^{(k)}_{v,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h};$$

$$Q^{(k)}_{h,h}=M^{(k)}_{h,h}-M^{(k)}_{h,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h}+M^{(k)}_{h,v}\left(M^{(k)}_{v,v}\right)^{-1}Q^{(k)}_{v,v}\left(M^{(k)}_{v,v}\right)^{-1}M^{(k)}_{v,h},$$

which gives us the closed-form solutions for the E-step.
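As a numerical sanity check of this rewriting step, one can verify on random positive-definite matrices that the $S$-form solution of (28) coincides with the $M$-form closed form above, using the standard block-inverse identities (the variable names below are ours):

```python
import numpy as np

# Check that -Q_vv S_vh S_hh^{-1} == Q_vv M_vv^{-1} M_vh, where S = M^{-1}
# is partitioned into visible/hidden blocks like M itself.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)        # positive-definite "model" matrix
B = rng.standard_normal((4, 4))
Qvv = B @ B.T + np.eye(4)          # visible block of some kernel matrix
v, h = slice(0, 4), slice(4, 6)    # visible / hidden index ranges
S = np.linalg.inv(M)
lhs = -Qvv @ S[v, h] @ np.linalg.inv(S[h, h])   # S-form of (28)
rhs = Qvv @ np.linalg.inv(M[v, v]) @ M[v, h]    # M-form closed form
assert np.allclose(lhs, rhs)
```

The identity holds because $S_{v,h}=-M_{v,v}^{-1}M_{v,h}\,(M_{h|v})^{-1}$ and $S_{h,h}=(M_{h|v})^{-1}$, so the Schur-complement factors cancel.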

### a.2 M-step.

In this step, $H$ is fixed. Writing $S:=M^{-1}$, we define

$$J(H,S^{-1})=\frac{\lambda}{2}\mathrm{Tr}(S)+\frac{1}{2}\sum_{k=1}^{K}\left[\mathrm{Tr}\left(SQ^{(k)}\right)-\log\det Q^{(k)}\right]-\frac{\lambda+K}{2}\log\det S-\frac{\lambda+K}{2}\ell=:J'(S).$$

$J'(S)$ is minimized when $\partial J'/\partial S=0$, that is, when

$$\frac{\lambda}{2}I+\frac{1}{2}\sum_{k=1}^{K}Q^{(k)}-\frac{\lambda+K}{2}S^{-1}=0.$$

Solving for $S$, we get

$$S=\left(\frac{1}{\lambda+K}\left[\sum_{k=1}^{K}Q^{(k)}+\lambda I\right]\right)^{-1}=\operatorname*{arg\,min}_{S\in\mathbb{S}^{\ell}_{++}}J'(S),\tag{30}$$

where $\mathbb{S}^{\ell}_{++}$ denotes the set of $\ell\times\ell$ positive-definite symmetric matrices. Since the matrix inverse is one-to-one, we can express $M$ as $S^{-1}$; thus, from (30),

$$M=\frac{1}{\lambda+K}\left[\sum_{k=1}^{K}Q^{(k)}+\lambda I\right]$$

minimizes $J$, giving us the closed-form solution for the M-step.

## Appendix B The Mean-Imputation Method

This section describes how each of the kernel matrices is completed using the mean-imputation method.

Let us again partition the kernel matrices $Q^{(k)}$ (for $k=1,\dots,K$) as shown in (1), with the corresponding sizes for the submatrices. Let us also denote the entries of $Q^{(k)}$ by $K(x_i,x_j)=\langle x_i,x_j\rangle$, where $\langle\cdot,\cdot\rangle$ denotes the inner product. The entries of the submatrix $Q^{(k)}_{h,h}$ will then be given by

$$K(\bar{x},\bar{x})=\langle\bar{x},\bar{x}\rangle=\frac{1}{|I_v|^2}\sum_{i\in I_v}\sum_{j\in I_v}\langle x_i,x_j\rangle,$$

where $\bar{x}:=\frac{1}{|I_v|}\sum_{i\in I_v}x_i$, and $I_v$ denotes the index set of the visible entries. On the other hand, the entries of the submatrix $Q^{(k)}_{v,h}$ are given by

$$K(x_j,\bar{x})=\langle x_j,\bar{x}\rangle=\frac{1}{|I_v|}\sum_{i\in I_v}\langle x_j,x_i\rangle,$$

while the submatrix $Q^{(k)}_{h,v}$ is the matrix transpose of $Q^{(k)}_{v,h}$. We have now obtained the values that complete the kernel matrices using the mean-imputation method.
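The rules above amount to filling hidden-visible entries with row means over the visible index set and hidden-hidden entries with the grand mean of the visible block. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def mean_impute(Q, visible):
    """Mean-imputation completion of one kernel matrix.
    Hidden-visible entries get the mean of the corresponding visible row;
    hidden-hidden entries get the grand mean of the visible block."""
    Q = Q.copy()
    l = Q.shape[0]
    hidden = np.setdiff1d(np.arange(l), visible)
    Qvv = Q[np.ix_(visible, visible)]
    row_means = Qvv.mean(axis=1)   # K(x_j, xbar) for each visible j
    grand = Qvv.mean()             # K(xbar, xbar)
    for a, j in enumerate(visible):
        Q[j, hidden] = row_means[a]
        Q[hidden, j] = row_means[a]
    Q[np.ix_(hidden, hidden)] = grand
    return Q
```

The completed matrix stays symmetric, and the visible block is left untouched.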

## Appendix C Remaining Proofs and Derivations

### c.1 Proof of Proposition 4.1

We shall use the inequality:

$$\forall\,\Theta,\quad\mathbb{E}_{t-1}\left[\log p(H\,|\,V,\Theta)\right]\le\mathbb{E}_{t-1}\left[\log p(H\,|\,V,\Theta^{(t-1)})\right],\tag{31}$$

which is immediately derived from the non-negativity of the KL divergence:

$$\mathrm{KL}\left(p(\cdot\,|\,V,\Theta^{(t-1)}),\,p(\cdot\,|\,V,\Theta)\right)\ge 0.\tag{32}$$

Under the assumption that the involved densities are strictly positive, the equality

$$\log p(V\,|\,\Theta)=\log p(V,H\,|\,\Theta)-\log p(H\,|\,V,\Theta)\tag{33}$$

holds for any value of the unobserved data $H$. Taking the expectation with respect to $q(V)$ and $p(H\,|\,V,\Theta^{(t-1)})$, and adding the penalizing term $\log p(\Theta)$, the LHS becomes $L_{\mathrm{II}}(\Theta;q)$. Hence, we get

$$L_{\mathrm{II}}(\Theta;q)=Q^{(t)}(\Theta)-\mathbb{E}_{t-1}\left[\log p(H\,|\,V,\Theta)\right].\tag{34}$$

From the update rule of the M-step, we have $Q^{(t)}(\Theta^{(t)})\ge Q^{(t)}(\Theta^{(t-1)})$. With the help of this and (31), we get

$$L_{\mathrm{II}}(\Theta^{(t)};q)=Q^{(t)}(\Theta^{(t)})-\mathbb{E}_{t-1}\left[\log p(H\,|\,V,\Theta^{(t)})\right]\ge Q^{(t)}(\Theta^{(t-1)})-\mathbb{E}_{t-1}\left[\log p(H\,|\,V,\Theta^{(t-1)})\right]=L_{\mathrm{II}}(\Theta^{(t-1)};q).\tag{35}$$

### c.2 (12) holds with (13)

In the setting of (13), note that

$$q_k(v_k)=\mathcal{N}\left(v_k\,\middle|\,0,Q^{(k)}_{v,v}\right)\tag{36}$$

from the definition of $q_k$. The empirical distribution is expressed as the product of those Gaussians:

$$q(V)=\prod_{k=1}^{K}\int q_k(v_k,h_k)\,dh_k=\prod_{k=1}^{K}q_k(v_k)=\prod_{k=1}^{K}\mathcal{N}\left(v_k\,\middle|\,0,Q^{(k)}_{v,v}\right).\tag{37}$$

From this, we have

$$\mathbb{E}_{q(V,H)}\left[v_k v_k^{\top}\right]=\mathbb{E}_{q(V)}\left[v_k v_k^{\top}\right]=Q^{(k)}_{v,v},$$

which is exactly the assumption in (12).