Transductive Boltzmann Machines
We present transductive Boltzmann machines (TBMs), which are the first to achieve transductive learning of the Gibbs distribution. While exact learning of the Gibbs distribution is impossible for the existing family of Boltzmann machines due to the combinatorial explosion of the sample space, TBMs overcome the problem by adaptively constructing the minimum required sample space from data to avoid unnecessary generalization. We theoretically provide a bias-variance decomposition of the KL divergence in TBMs to analyze their learnability, and empirically demonstrate that TBMs are superior to fully visible Boltzmann machines and the widely used restricted Boltzmann machines in terms of efficiency and effectiveness.
Mahito Sugiyama (National Institute of Informatics), Koji Tsuda (The University of Tokyo; RIKEN AIP; NIMS), Hiroyuki Nakahara (RIKEN Center for Brain Science)
Preprint. Work in progress.
1 Introduction
Transductive learning is a type of machine learning that directly performs inference on the particular given objects without learning unnecessary general rules. Vapnik [2000] advises: "If you are limited to a restricted amount of information, do not solve the particular problem you need by solving a more general problem". Since transductive learning is in principle easier than inductive learning, which requires learning general rules to perform inference on unknown objects, it has drawn considerable attention in the field of machine learning.
However, transductive learning has been largely ignored in learning of the Gibbs distribution, which is a fundamental problem for energy-based models [LeCun et al., 2007]. Although various types of Boltzmann machines have been proposed for the task, including restricted Boltzmann machines (RBMs) [Smolensky, 1986, Hinton, 2002] and deep Boltzmann machines (DBMs) [Salakhutdinov and Hinton, 2009, 2012], none of them allows exact learning. This is because the sample space of the Gibbs distribution is always fixed to the power set 2^V of the set V of variables, so that the probability of any outcome can be inferred for induction, which causes a combinatorial explosion of the sample space.
Nevertheless, inference on the entire space is not always essential. Our motivating application is neural activity analysis [Ganmor et al., 2011, Ioffe and Berry II, 2017, Köster et al., 2014, Watanabe et al., 2013, Yu et al., 2011], where the goal is to investigate (potentially higher-order) interactions among neurons (variables) by fitting energy-based models. Since prediction of unknown neural activity is not particularly interesting here, generalization from a particular dataset to the entire space is optional. Transductive learning, which performs inference on the data only, is therefore desirable. This transductive approach has been implicitly employed by Ganmor et al. [2011].
Here we propose transductive Boltzmann machines (TBMs), which adaptively construct the minimum required sample space of the Gibbs distribution from data and realize transductive learning from particular to particular without solving an unnecessarily general task. The advantages are:
Learning is efficient. The time complexity of computing the gradient of a parameter is independent of the number of variables and linear in the sample size.
Learning is exact. No approximation method such as Gibbs sampling is required.
Learning is optimal. Convergence to the global optimum that maximizes the (log-)likelihood is always guaranteed.
Moreover, we theoretically analyze TBMs and derive a bias-variance decomposition of the Kullback–Leibler (KL) divergence using information geometry [Amari, 2016]. In particular, our method can be viewed as a generalization of the information-geometric hierarchical log-linear model introduced by Amari [2001] and Nakahara and Amari [2002].
This paper is organized as follows. Section 2 introduces transductive Boltzmann machines (TBMs): Section 2.1 formulates TBMs, Section 2.2 gives a learning algorithm, Section 2.3 discusses how to choose parameters, and Section 2.4 provides a bias-variance decomposition. We empirically examine TBMs in Section 3 and conclude the paper in Section 4.
2 Transductive Boltzmann Machines
To introduce our proposal of transductive Boltzmann machines (TBMs), we first prepare the basic notation of Boltzmann machines. Let V = {1, 2, …, n} be the set of (visible) variables. Given a Boltzmann machine [Ackley et al., 1985], which is represented as a graph G = (V, E) with the vertex set V and the edge set E, the energy for a variable configuration x = (x_1, …, x_n) ∈ {0, 1}^n is given by

  Φ(x; b, w) = −∑_{i∈V} b_i x_i − ∑_{{i,j}∈E} w_{ij} x_i x_j,  (1)

where each x_i ∈ {0, 1} is the binary state of a variable i ∈ V, and b_i and w_{ij} are called the bias and the weight, respectively. The Gibbs distribution over the sample space {0, 1}^n with the partition function Z is defined as

  p(x; b, w) = exp( −Φ(x; b, w) ) / Z,  Z = ∑_{x ∈ {0,1}^n} exp( −Φ(x; b, w) ).  (2)
We introduce an alternative set-theoretic notation for Boltzmann machines. We represent each binary vector x as the set of indices of its "1" entries, that is, x = { i ∈ V | x_i = 1 }. The sample space then becomes the power set 2^V of V. We represent biases and weights by a single function θ: B → ℝ, where the domain B = V ∪ E for a Boltzmann machine G = (V, E), identifying each i ∈ V with the singleton {i}. For each x ∈ 2^V, the energy is re-written as

  Φ(x; θ) = −∑_{s∈B, s⊆x} θ(s).  (3)
For the partition function Z, we denote ψ(θ) = log Z. Hence each probability of the resulting Gibbs distribution can be written as

  p(x; θ) = exp( ∑_{s∈B, s⊆x} θ(s) − ψ(θ) ).  (4)
In what follows, we use the symbol ⊥ to represent the empty set ∅. Note that we have p(⊥; θ) = exp(−ψ(θ)).
We use uppercase letters P, Q, … for Gibbs distributions and the corresponding lowercase letters p, q, … for their probabilities.
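Although the paper's reference implementation is in C++, the set-theoretic notation above translates directly into a short sketch. The following illustrative code (function names are ours, not the paper's) represents each outcome as a frozenset and computes the Gibbs distribution of Equation (4) over an explicit sample space:

```python
import math
from itertools import chain, combinations

def powerset(V):
    """All subsets of V as frozensets, i.e., the full sample space 2^V."""
    V = sorted(V)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))]

def gibbs(S, theta):
    """Gibbs distribution p(x) = exp(sum_{s in B, s subset x} theta(s) - psi)
    over an explicit sample space S; theta maps frozensets to reals and
    psi is the log-partition function."""
    energies = {x: sum(t for s, t in theta.items() if s <= x) for x in S}
    psi = math.log(sum(math.exp(e) for e in energies.values()))
    return {x: math.exp(e - psi) for x, e in energies.items()}
```

For instance, with V = {1, 2} and a single bias θ({1}) = log 2, the four outcomes get unnormalized weights 1, 2, 1, 2, so p({1}) = 1/3.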
We present TBMs, which adaptively construct the sample space of the Gibbs distribution from a given dataset D. The dataset is naturally assumed to be a multiset; that is, multiple instances of the same element are allowed.
Let V be the set of variables. Given a domain of parameters B ⊆ 2^V ∖ {⊥} and a dataset D, the sample space S of the Gibbs distribution generated by a transductive Boltzmann machine (TBM) is defined as

  S = {⊥} ∪ B ∪ D

for the parameter function θ: B → ℝ, where B ⊆ S ∖ {⊥} holds.
Note that |S| ≤ |B| + |D| + 1 ≪ |2^V| in general. Since the cardinality |x| of each element x ∈ B coincides with the order of interactions, TBMs can treat higher-order interactions among variables if |x| > 2 for some x ∈ B, which have been considered by Sejnowski [1986] and Min et al. [2014]. The parameter domain B defines the model of a TBM; thus the problem of how to choose B corresponds to how to determine the graph structure in Boltzmann machines, which belongs to the model selection problem. We discuss how to determine the parameter domain in Section 2.3.
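As a concrete illustration (our own sketch, not code from the paper), the transductive sample space of the definition above can be built in a few lines; its size is bounded by |D| + |B| + 1 regardless of the number of variables:

```python
def tbm_sample_space(D, B):
    """Sample space S = {empty set} ∪ B ∪ D of a TBM: only the observed
    data points, the parameter supports, and the empty set are included,
    so |S| <= |D| + |B| + 1 no matter how many variables there are."""
    return ({frozenset()}
            | {frozenset(x) for x in D}    # data points (duplicates collapse)
            | {frozenset(s) for s in B})   # parameter domain
```

For D = [{1,2}, {1,2}, {3}] and B = [{1}, {2}], the space contains only 5 of the 2^3 = 8 possible outcomes.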
Learning of a TBM is achieved by maximizing the log-likelihood or, equivalently, minimizing the KL divergence, which is the same criterion as in existing Boltzmann machines. We introduce η corresponding to the expectation, defined as

  η(x) = ∑_{s∈S, s⊇x} p(s).  (5)
It is clear that η(⊥) = 1 and 0 ≤ η(x) ≤ 1 for every x ∈ S. For a dataset D, we use the empirical distribution P̂ defined as p̂(x) = m(x) / N with N = |D|, where m is the multiplicity function denoting how many times x appears in the multiset D. It follows that ∑_{x∈S} p̂(x) = 1. We denote by η̂ and θ̂ the expectation and the parameters of P̂, respectively.
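The empirical quantities used below are simple counts over the multiset D; a minimal sketch (names are ours):

```python
from collections import Counter

def empirical_distribution(D):
    """p-hat(x) = m(x)/N, with m the multiplicity of x in the multiset D."""
    counts = Counter(frozenset(x) for x in D)
    N = len(D)
    return {x: m / N for x, m in counts.items()}

def eta_hat(s, D):
    """Empirical expectation: the fraction of data points containing s."""
    s = frozenset(s)
    return sum(1 for x in D if s <= frozenset(x)) / len(D)
```

For D = [{1,2}, {1,2}, {2}], the empirical probability of {1,2} is 2/3 and η̂({2}) = 1, since every data point contains variable 2.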
First we show that the log-likelihood L(θ) of a Gibbs distribution is concave with respect to θ(x) for any x ∈ B. This follows from

  L(θ) = ∑_{u∈D} log p(u; θ) = ∑_{u∈D} ∑_{s∈B, s⊆u} θ(s) − Nψ(θ),

since ψ(θ) is convex with respect to any θ(x) from Equation (4).
Next we prove that the log-likelihood is maximized if and only if

  η(x) = η̂(x) for all x ∈ B,  (6)

where η̂ is obtained from Equation (5) by replacing p with p̂; it corresponds to the expectation of the empirical distribution. The gradient of L(θ) with respect to θ(x) with x ∈ B is obtained as

  ∂L(θ)/∂θ(x) = ∑_{u∈D} 1[x ⊆ u] − N ∂ψ(θ)/∂θ(x).

We have ∑_{u∈D} 1[x ⊆ u] = N η̂(x) as D is a multiset, and

  ∂ψ(θ)/∂θ(x) = η(x).

Hence it follows that

  ∂L(θ)/∂θ(x) = N( η̂(x) − η(x) ).
Moreover, the KL divergence satisfies D_KL(P̂, P) = −H(P̂) − L(θ)/N, where H(P̂) is the entropy of P̂ and is independent of θ. Therefore, together with the concavity of L(θ), the Gibbs distribution simultaneously maximizes the log-likelihood and minimizes the KL divergence if and only if Equation (6) is satisfied.
Our result means that TBMs can be optimized by a gradient ascent strategy. We show a gradient ascent algorithm for learning TBMs in Algorithm 1, where ε is the learning rate. The time complexity of each iteration for updating the parameters is O(|S||B|); hence the total time complexity is O(h|S||B|) with h the number of iterations. In standard Boltzmann machines, the sample space is 2^V, which means that the time complexity of computing η(x) to obtain the gradient of a parameter (bias or weight) is O(2^n) and exact computation is infeasible due to the combinatorial explosion. Moreover, computing the partition function involves summing over the entire space 2^V, which again causes a combinatorial explosion. In contrast, the complexity of TBMs is independent of the number of variables and linear in the sample size, since |S| ≤ |D| + |B| + 1. Hence efficient and exact computation is achieved.
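A minimal self-contained sketch of this gradient ascent, based on our reading of Algorithm 1 (the step size, iteration count, and function names are our assumptions, not the paper's values): each iteration computes the exact Gibbs distribution over the small space S and moves each θ(s) along η̂(s) − η(s).

```python
import math

def learn_tbm(D, B, epsilon=0.5, iterations=5000):
    """Exact gradient ascent for a TBM over the transductive sample
    space S = {empty} ∪ B ∪ D; converges to the global optimum where
    eta(s) = eta_hat(s) for all s in B (Equation (6))."""
    D = [frozenset(x) for x in D]
    B = [frozenset(s) for s in B]
    N = len(D)
    S = {frozenset()} | set(D) | set(B)  # transductive sample space

    def gibbs(theta):
        # exact Gibbs distribution over S for the current parameters
        energies = {x: sum(theta[s] for s in B if s <= x) for x in S}
        psi = math.log(sum(math.exp(e) for e in energies.values()))
        return {x: math.exp(e - psi) for x, e in energies.items()}

    eta_hat = {s: sum(1 for x in D if s <= x) / N for s in B}
    theta = {s: 0.0 for s in B}
    for _ in range(iterations):
        p = gibbs(theta)
        for s in B:
            eta = sum(p[x] for x in S if s <= x)      # model expectation
            theta[s] += epsilon * (eta_hat[s] - eta)  # ascent direction
    return theta, gibbs(theta)
```

On the toy dataset D = [∅, {1}, {1}, {2}] with B = {{1}, {2}}, the learned distribution matches the empirical one exactly: p({1}) = 1/2 and p({2}) = p(∅) = 1/4.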
2.3 Parameter Selection
We provide a guideline for choosing the domain B of parameters, which can be selected by the user, although it is hard to find the optimal B without specific domain knowledge of variable connections. This is the same problem as determining the network structure in neural networks.
We propose to set B to the variable combinations that occur frequently enough in a dataset, that is, x with η̂(x) ≥ σ for some threshold σ ∈ (0, 1]. This is because η̂(x) is used as the condition of optimality in Equation (6), and a small η̂(x) implies that x might not contribute to the resulting Gibbs distribution. In addition, it is important to restrict higher-order interactions to examine their contribution. Hence the parameter domain is formulated as

  B(σ, k) = { x ∈ 2^V ∖ {⊥} | η̂(x) ≥ σ and |x| ≤ k },  (7)

and the threshold σ and the upper bound k of the order of interactions are the input parameters to TBMs. Note that k = n is possible, where all elements x such that η̂(x) ≥ σ are added to B.
Interestingly, frequent itemset mining [Agrawal and Srikant, 1994, Aggarwal and Han, 2014], studied in data mining, can exactly solve this problem of finding B(σ, k). This technique enumerates all frequent itemsets, where an itemset is an element x ∈ 2^V and it is said to be frequent if η̂(x) ≥ σ. Moreover, we can include the cardinality constraint |x| ≤ k on each itemset in the enumeration process. This means that the set of frequent itemsets coincides with B(σ, k) introduced in Equation (7). Thus we can simply apply a frequent itemset mining algorithm, e.g., LCM [Uno et al., 2004], known to be the fastest mining algorithm, to efficiently obtain the parameter domain B from a dataset D. It is interesting to note that itemset mining has also been used in neural activity analysis [Picado-Muino et al., 2013] and genome-wide association studies [Zhang et al., 2014a]. We summarize the overall algorithm of TBMs in Algorithm 2. Since frequent itemset mining will find a massive number of itemsets if the threshold σ is small, we recommend starting with a relatively high σ or a small k, and decreasing σ or increasing k if B is empty or too small.
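A brute-force stand-in for the mining step (in practice LCM or another dedicated miner would be used; this exhaustive version with assumed names is only for illustration on small V):

```python
from itertools import combinations

def frequent_itemsets(D, sigma, k):
    """All non-empty itemsets x with |x| <= k whose empirical frequency
    eta_hat(x) is at least sigma, i.e., the parameter domain B(sigma, k).
    Exhaustive enumeration: exponential in |V|, for illustration only."""
    D = [frozenset(x) for x in D]
    items = sorted(set().union(*D)) if D else []
    N = len(D)
    B = set()
    for r in range(1, k + 1):                   # cardinality constraint
        for c in combinations(items, r):
            s = frozenset(c)
            if sum(1 for x in D if s <= x) / N >= sigma:  # frequency test
                B.add(s)
    return B
```

For D = [{1,2}, {1,2}, {1,3}] with σ = 2/3 and k = 2, only {1}, {2}, and {1,2} pass the frequency test.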
In the Gibbs distribution represented by a TBM, p(x) > 0 must be satisfied for all x ∈ S, otherwise θ diverges. Unfortunately, there exist combinations of D and B which cause this situation when the learning condition in Equation (6) is satisfied. For example, let V = {1, 2}, D = {{1}, {2}}, and B = {{1}, {2}}, so that S = {⊥, {1}, {2}}. Then Equation (6) is satisfied only if p({1}) = p({2}) = 1/2 and p(⊥) = 0, which forces θ({1}) and θ({2}) to diverge. This problem is solved if we remove {1} or {2} from B. Hence, in learning TBMs with the gradient ascent algorithm in Algorithm 1, one needs to monitor the behavior of θ and remove the corresponding element from B when it starts diverging. Note that this problem exists not only in TBMs but also in standard Boltzmann machines, although it has not been investigated before.
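One simple way to implement the monitoring described above (the cutoff value is our assumption; the paper does not specify a concrete rule):

```python
def prune_diverging(theta, limit=30.0):
    """Drop parameters whose magnitude exceeds a cutoff; the corresponding
    elements are removed from the domain B so that all p(x) stay positive.
    The cutoff 'limit' is an illustrative choice, not from the paper."""
    kept = {s: t for s, t in theta.items() if abs(t) <= limit}
    removed = set(theta) - set(kept)
    return kept, removed
```

The pruning would be invoked periodically inside the gradient ascent loop, shrinking B whenever a parameter runs away.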
2.4 Bias-Variance Decomposition
To theoretically analyze the learnability of TBMs, we provide a bias-variance decomposition of the KL divergence, which is a fundamental analysis of machine learning methods. Since the sample space S is a subset of 2^V, it is always a partially ordered set (poset) [Gierz et al., 2003] with respect to the inclusion relation ⊆. This directly leads to the following result:
The log-linear model on posets is an extension of the hierarchical model of probability distributions studied in information geometry [Amari, 2001, Nakahara and Amari, 2002], and the following holds from Proposition 1. The set of Gibbs distributions with the sample space S is always a dually flat manifold [Amari, 2009], and the pair of functions (θ, η) is a dual coordinate system of the manifold (Theorem 2 in [Sugiyama et al., 2017]), connected via the Legendre transformation

  θ = ∇φ(η),  η = ∇ψ(θ),

where ψ(θ) is the log-partition function introduced above and φ(η) = −H(P) is the negative entropy.
The dually flat structure of the manifold gives us the Pythagorean theorem. Let us consider two submanifolds

  M_e = { P | θ_P(x) = β(x) for all x ∈ dom(β) },
  M_m = { P | η_P(x) = γ(x) for all x ∈ dom(γ) },

specified by two functions β and γ with dom(β), dom(γ) ⊆ S ∖ {⊥}, where the former submanifold has constraints on θ while the latter has them on η. The submanifolds M_e and M_m are called e-flat and m-flat, respectively [Amari, 2016, Chapter 2.4]. Assume that dom(β) ∪ dom(γ) = S ∖ {⊥} and dom(β) ∩ dom(γ) = ∅. Then the intersection M_e ∩ M_m is always a singleton; that is, the distribution Q satisfying both sets of constraints always uniquely exists, leading to the Pythagorean theorem:

  D_KL(P, R) = D_KL(P, Q) + D_KL(Q, R)

for any P ∈ M_m and R ∈ M_e.
Using the Pythagorean theorem, we achieve a bias-variance decomposition in learning of TBMs. Our idea is to take the expectation of the KL divergence E[D_KL(P*, P̂_B)] from the true (unknown) Gibbs distribution P* to the MLE (maximum likelihood estimate) P̂_B of an empirical distribution learned by a TBM with a fixed parameter domain B, and decompose it using the information-geometric structure. Interestingly, we can obtain a lower bound of the variance, |B| / (2N), using only the number of parameters |B| and the sample size N, which is independent of the sample space S. This theorem also applies to fully visible Boltzmann machines with S = 2^V.
Theorem 1 (Bias-variance decomposition of the KL divergence).
Given a parameter domain B and a sample space S such that B ⊆ S ∖ {⊥}, let P* be the true Gibbs distribution, and let P_B and P̂_B be the MLEs obtained from P* and from an empirical distribution P̂ learned by a TBM, respectively. We have

  E[ D_KL(P*, P̂_B) ] = D_KL(P*, P_B) + E[ D_KL(P_B, P̂_B) ],

where the variance is bounded as

  E[ D_KL(P_B, P̂_B) ] ≥ |B| / (2N),

with the equality holding when the sample size N → ∞; the exact expression of the variance involves the error covariance between θ̂ and η̂ and the Fisher information matrix.
Proof. Consider the e-flat submanifold M_e = { P | θ_P(x) = 0 for all x ∈ (S ∖ {⊥}) ∖ B }, which is the TBM model with parameter domain B, and the m-flat submanifold M_m = { P | η_P(x) = η*(x) for all x ∈ B }, with η* the expectation of the true distribution P*. Their intersection is the singleton {P_B}, and P̂_B ∈ M_e. We apply the Pythagorean theorem, yielding

  E[ D_KL(P*, P̂_B) ] = D_KL(P*, P_B) + E[ D_KL(P_B, P̂_B) ],

where the expectation E[·] is taken with respect to the random draw of the dataset. The second term is the variance.
Since D_KL(P_B, P̂_B) is a function of the estimates η̂(x), x ∈ B, we use the second-order approximation of the Taylor series expansion [Ang and Tang, 2006, Section 4.3.2], which is given for a function f of a random vector X with mean μ as

  E[ f(X) ] ≈ f(μ) + (1/2) ∑_{i,j} (∂²f / ∂X_i ∂X_j)(μ) Cov(X_i, X_j).

Thus E[ D_KL(P_B, P̂_B) ] is approximated by a quadratic form in the covariance matrix of the estimators.
We can use the Cramér–Rao bound [Amari, 2016, Theorem 7.1] on the covariance matrix, since η̂(x) is an unbiased estimator of η(x) for every x ∈ B; an additional remainder term appears because we used the second-order Taylor approximation of the expectation. Finally we have

  E[ D_KL(P_B, P̂_B) ] ≥ |B| / (2N),

with the equality holding when N → ∞.
Since D_KL(P*, P_{B′}) ≤ D_KL(P*, P_B) always holds if B ⊆ B′, as the model with B′ includes the model with B, our result supports an intuitive property of typical machine learning algorithms: if we enlarge the parameter set B, the bias decreases while the variance increases.
We empirically demonstrate the tightness of the lower bound of the variance in this theorem in Figure 1(a). To estimate the variance E[D_KL(P_B, P̂_B)], we first fix a true distribution P* generated from the uniform distribution and obtain P_B learned by a TBM, which gives a reasonable number of parameters |B|; the lower bound is then |B| / (2N). In each trial, we repeatedly generate a sample of size N from P* and learn P̂_B with S and B fixed, to directly estimate the variance (and its standard deviation). In Figure 1(a, top) the sample size N is varied from 100 to 1,000,000 with the number of variables fixed, while in Figure 1(a, bottom) the number of variables is varied from 10 to 1,000 with N fixed. These plots clearly show that our lower bound is tight across all settings. The lower bound exceeds the actual variance in some cases, which is due to the approximation error of the Taylor series expansion or to the fluctuation of random sampling.
3 Experiments
In our simulations, we empirically examine the effectiveness and efficiency of transductive Boltzmann machines (TBMs) compared to two representative energy-based models, fully visible Boltzmann machines (BMs) and restricted Boltzmann machines (RBMs), in the following setting. We used CentOS release 6.9 and ran all experiments on a 2.20 GHz Intel Xeon CPU E7-8880 v4 with 3 TB of memory. All methods (TBMs, BMs, and RBMs) were implemented in C++ and compiled with icpc 18.0.1. We used persistent contrastive divergence with a single step of alternating Gibbs sampling (persistent CD-1) [Hinton, 2002, Tieleman, 2008] in learning BMs and RBMs. We measured the effectiveness of the respective methods by the reconstruction error (smaller is better), defined as the KL divergence from the empirical distribution of a given dataset to the learned model distribution. Since it is infeasible to compute the exact partition function in BMs and RBMs and an additional approximation method such as AIS would be needed [Neal, 2001, Salakhutdinov and Murray, 2008], we consistently normalized the learned energies by the same proxy in all three methods to exclude the approximation error of the partition function and realize a fair comparison. Throughout the experiments, the numbers of iterations for TBMs and for BMs and RBMs were set large enough to ensure convergence. Running time of TBMs includes the itemset mining process by LCM [Uno et al., 2004] used to obtain B.
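The reconstruction error used throughout the experiments is the KL divergence from the empirical distribution to the model distribution; a small sketch (the function name is ours), summing only over outcomes with nonzero empirical mass:

```python
import math

def reconstruction_error(p_emp, p_model):
    """KL divergence D(P-hat, P): sum over outcomes with nonzero empirical
    mass of p-hat(x) * log(p-hat(x) / p(x)); smaller is better."""
    return sum(q * math.log(q / p_model[x])
               for x, q in p_emp.items() if q > 0.0)
```

The error is zero exactly when the model reproduces the empirical distribution on its support, and strictly positive otherwise.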
Results on Synthetic Data.
First we demonstrate the performance of the three methods on synthetic data and show that TBMs can accurately learn Gibbs distributions without overfitting. We randomly generated a dataset D, where we first randomly chose the underlying outcomes without any multiplicity and then sampled data points from them with replacement. We applied TBMs with the cardinality upper bound k = 1, 2, or 3 on each parameter, a BM using the same B as the TBM with k = 2, and an RBM with hidden variables. To visualize the performance of these methods, we plot the learned model distribution against the empirical distribution in Figure 1(b) and the log-likelihood ratio in Figure 1(c). The reconstruction errors are 0.29, 0.26, 0.20, 0.41, and 0.30 for the TBMs (k = 1, 2, 3), the BM, and the RBM, respectively. The results of the TBM with k = 1 and the RBM, which by definition has no second-order interactions between visible variables, are similar, and those of the TBM with k = 2 and the BM using the same parameters are also similar, which means that TBMs do not overly fit to a random dataset. Furthermore, we observe that the TBM achieves more accurate inference when higher-order interactions are included with k = 3.
[Table 2: Reconstruction error (KL divergence) and running time (sec.) for each dataset.]
Results on Real Data.
Next we examine the performance of the three methods on a variety of real datasets. We collected binary datasets from two domains: neural spiking data [Zhang et al., 2014b], originally obtained by Lefebvre et al. [2008], and itemset mining benchmark datasets from the FIMI repository (http://fimi.ua.ac.be/data/). We fixed the cardinality bound k in TBMs and used the same B in both TBMs and BMs, hence the parameter domain is always the same between TBMs and BMs. The number of parameters in an RBM is determined by the numbers of visible and hidden variables, so we set the number of hidden variables to match the number of parameters of TBMs as closely as possible. The resulting numbers of parameters are shown in Table 1. The large difference (e.g., in kosarak and retail) cannot be avoided, as these are the minimum numbers of parameters attainable in RBMs, that is, those with a single hidden variable.
Results are shown in Table 2. TBMs consistently achieve the best reconstruction errors (smallest KL divergence) across all datasets and are the fastest on all datasets except 20080624KO. Since the time complexity of TBMs is independent of the number of variables, they are two orders of magnitude faster than BMs and RBMs for large numbers of variables, such as in kosarak and retail, while the error of TBMs is smaller than that of RBMs despite using a number of parameters that is two orders of magnitude smaller. This is because TBMs avoid unnecessary learning of the Gibbs distribution over the enormous sample space 2^V and fit only the minimum required space S.
4 Conclusion
This paper is the first to show the potential of transductive learning in energy-based models. We have proposed to use transduction in learning of the Gibbs distribution and presented transductive Boltzmann machines (TBMs). TBMs avoid unnecessary generalization by adaptively constructing the sample space, which leads to efficient and exact learning of the Gibbs distribution, whereas exact learning is impossible in the existing Boltzmann machines due to the combinatorial explosion of the sample space. Our experimental results support the superiority of TBMs in terms of efficiency and effectiveness over the inductive approach of the existing Boltzmann machines. This work opens the door of transductive learning to energy-based models.
- Ackley et al. [1985] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.
- Aggarwal and Han [2014] C. C. Aggarwal and J. Han, editors. Frequent Pattern Mining. Springer, 2014.
- Agrawal and Srikant [1994] R. Agrawal and R. Srikant. Fast algorithms for mining association rules. In Proceedings of the 20th International Conference on Very Large Data Bases, pages 487–499, 1994.
- Amari [2001] S. Amari. Information geometry on hierarchy of probability distributions. IEEE Transactions on Information Theory, 47(5):1701–1711, 2001.
- Amari [2009] S. Amari. Information geometry and its applications: Convex function and dually flat manifold. In F. Nielsen, editor, Emerging Trends in Visual Computing: LIX Fall Colloquium, ETVC 2008, Revised Invited Papers, pages 75–102. Springer, 2009.
- Amari [2016] S. Amari. Information Geometry and Its Applications. Springer, 2016.
- Ang and Tang [2006] A. H. S. Ang and W. H. Tang. Probability Concepts in Engineering. Wiley, 2006.
- Ganmor et al. [2011] E. Ganmor, R. Segev, and E. Schneidman. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences, 108(23):9679–9684, 2011.
- Gierz et al. [2003] G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove, and D. S. Scott. Continuous Lattices and Domains. Cambridge University Press, 2003.
- Hinton [2002] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
- Ioffe and Berry II [2017] M. L. Ioffe and M. J. Berry II. The structured ‘low temperature’ phase of the retinal population code. PLOS Computational Biology, 13(10):1–31, 2017.
- Köster et al. [2014] U. Köster, J. Sohl-Dickstein, C. M. Gray, and B. A. Olshausen. Modeling higher-order correlations within cortical microcolumns. PLOS Computational Biology, 10(7):1–12, 2014.
- LeCun et al. [2007] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. J. Huang. A tutorial on energy-based learning. In G. Bakir, T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, and S. V. N. Vishwanathan, editors, Predicting Structured Data. The MIT Press, 2007.
- Lefebvre et al. [2008] J. L. Lefebvre, Y. Zhang, M. Meister, X. Wang, and J. R. Sanes. Gamma-Protocadherins regulate neuronal survival but are dispensable for circuit formation in retina. Development, 135:4141–4151, 2008.
- Min et al. [2014] M. R. Min, X. Ning, C. Cheng, and M. Gerstein. Interpretable sparse high-order Boltzmann machines. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, pages 614–622, 2014.
- Nakahara and Amari [2002] H. Nakahara and S. Amari. Information-geometric measure for neural spikes. Neural Computation, 14(10):2269–2316, 2002.
- Neal [2001] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
- Picado-Muino et al. [2013] D. Picado-Muino, C. Borgelt, D. Berger, G. Gerstein, and S. Gruen. Finding neural assemblies with frequent item set mining. Frontiers in Neuroinformatics, 7(9):1–15, 2013.
- Salakhutdinov and Hinton [2009] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, pages 448–455, 2009.
- Salakhutdinov and Hinton [2012] R. Salakhutdinov and G. E. Hinton. An efficient learning procedure for deep Boltzmann machines. Neural Computation, 24(8):1967–2006, 2012.
- Salakhutdinov and Murray [2008] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872–879, 2008.
- Sejnowski [1986] T. J. Sejnowski. Higher-order Boltzmann machines. In AIP Conference Proceedings, volume 151, pages 398–403, 1986.
- Smolensky [1986] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart, J. L. McClelland, and PDP Research Group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pages 194–281. MIT Press, 1986.
- Sugiyama et al. [2017] M. Sugiyama, H. Nakahara, and K. Tsuda. Tensor balancing on statistical manifold. In Proceedings of the 34th International Conference on Machine Learning, pages 3270–3279, 2017.
- Tieleman [2008] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pages 1064–1071, 2008.
- Uno et al. [2004] T. Uno, T. Asai, Y. Uchida, and H. Arimura. An efficient algorithm for enumerating closed patterns in transaction databases. In Discovery Science, volume 3245 of Lecture Notes in Computer Science, pages 16–31. Springer, 2004.
- Vapnik [2000] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 2000.
- Watanabe et al. [2013] T. Watanabe, S. Hirose, H. Wada, Y. Imai, T. Machida, I. Shirouzu, S. Konishi, Y. Miyashita, and N. Masuda. A pairwise maximum entropy model accurately describes resting-state human brain networks. Nature Communications, 4(1370):1–10, 2013.
- Yu et al. [2011] S. Yu, H. Yang, H. Nakahara, G. S. Santos, D. Nikolić, and D. Plenz. Higher-order interactions characterized in cortical activity. The Journal of Neuroscience, 31(48):17514–17526, 2011.
- Zhang et al. [2014a] Q. Zhang, Q. Long, and J. Ott. AprioriGWAS, a new pattern mining strategy for detecting genetic variants associated with disease through interaction effects. PLoS Computational Biology, 10(6):e1003627, 06 2014a.
- Zhang et al. [2014b] Y.-F. Zhang, H. Asari, and M. Meister. Multi-electrode recordings from retinal ganglion cells. CRCNS.org., 2014b. URL http://dx.doi.org/10.6080/K0RF5RZT.