The Information Sieve

Abstract

We introduce a new framework for unsupervised learning of representations based on a novel hierarchical decomposition of information. Intuitively, data is passed through a series of progressively fine-grained sieves. Each layer of the sieve recovers a single latent factor that is maximally informative about multivariate dependence in the data. The data is transformed after each pass so that the remaining unexplained information trickles down to the next layer. Ultimately, we are left with a set of latent factors explaining all the dependence in the original data and remainder information consisting of independent noise. We present a practical implementation of this framework for discrete variables and apply it to a variety of fundamental tasks in unsupervised learning including independent component analysis, lossy and lossless compression, and predicting missing values in data.

The hope of finding a succinct principle that elucidates the brain’s information processing abilities has often kindled interest in information-theoretic ideas Barlow (1989); Simoncelli & Olshausen (2001). In machine learning, on the other hand, the past decade has witnessed a shift in focus toward expressive, hierarchical models, with successes driven by increasingly effective ways to leverage labeled data to learn rich models Schmidhuber (2015); Bengio et al. (2013). Information-theoretic ideas like the venerable InfoMax principle Linsker (1988); Bell & Sejnowski (1995) can be and are applied in both contexts with empirical success but they do not allow us to quantify the information value of adding depth to our representations. We introduce a novel incremental and hierarchical decomposition of information and show that it defines a framework for unsupervised learning of deep representations in which the information contribution of each layer can be precisely quantified. Moreover, this scheme automatically determines the structure and depth among hidden units in the representation based only on local learning rules.

The shift in perspective that enables our information decomposition is to focus on how well the learned representation explains multivariate mutual information in the data (a measure originally introduced as “total correlation” Watanabe (1960)). Intuitively, our approach constructs a hierarchical representation of data by passing it through a sequence of progressively fine-grained sieves. At the first layer of the sieve we learn a factor that explains as much of the dependence in the data as possible. The data is then transformed into the “remainder information”, which has this dependence extracted. The next layer of the sieve looks for the largest source of dependence in the remainder information, and the cycle repeats. At each step, we obtain a successively tighter upper and lower bound on the multivariate information in the data, with convergence between the bounds obtained when the remaining information consists of nothing but independent factors. Because we end up with independent factors, one can also view this decomposition as a new way to do independent component analysis (ICA) Comon (1994); Hyvärinen & Oja (2000). Unlike traditional methods, we do not assume a specific generative model of the data (i.e., that it consists of a linear transformation of independent sources) and we extract independent factors incrementally rather than all at once. The implementation we develop here uses only discrete variables and is therefore most relevant for the challenging problem of ICA with discrete variables, which has applications to compression Painsky et al. (2014).

After providing some background in Sec. 1, we introduce a new way to iteratively decompose the information in data in Sec. 2, and show how to use these decompositions to define a practical and incremental framework for unsupervised representation learning in Sec. 3. We demonstrate the versatility of this framework by applying it first to independent component analysis (Sec. 4). Next, we use the sieve as a lossy compression to perform tasks typically relegated to generative models including in-painting and generating new samples (Sec. 5). Finally, we cast the sieve as a lossless compression and show that it beats standard compression schemes on a benchmark task (Sec. 6).

1 Information-theoretic learning background

Using standard notation Cover & Thomas (2006), a capital $X_i$ denotes a random variable taking values in some domain $\mathcal{X}_i$ and whose instances are denoted in lowercase, $x_i$. In this paper, the domains of all variables are considered to be discrete and finite. We abbreviate multivariate random variables, $X \equiv (X_1,\ldots,X_n)$, with an associated probability distribution, $p_X(X_1 = x_1, \ldots, X_n = x_n)$, which is typically abbreviated to $p(x)$. We will index different groups of multivariate random variables with superscripts, $X^k$, as defined in Fig. 1. We let $X \equiv X^0$ denote the original observed variables and we often omit the superscript in this case for readability.

Entropy is defined in the usual way as $H(X) \equiv \mathbb{E}_X[\log 1/p(x)]$. We use base two logarithms so that the unit of information is bits. Higher-order entropies can be constructed in various ways from this standard definition. For instance, the mutual information between two groups of random variables, $X^1$ and $X^2$, can be written as the reduction of uncertainty in one variable, given information about the other, $I(X^1 : X^2) = H(X^1) - H(X^1 \mid X^2)$.

The “InfoMax” principle Linsker (1988); Bell & Sejnowski (1995) suggests that for unsupervised learning we should construct $Y$'s to maximize their mutual information with $X$, the data. Despite its intuitive appeal, this approach has several potential problems (see Ver Steeg et al. (2014) for one example). Here we focus on the fact that the InfoMax principle is not very useful for characterizing “deep representations”, even though it is often invoked in this context Vincent et al. (2008). This follows directly from the data processing inequality (a similar argument appears in Tishby & Zaslavsky (2015)). Namely, if we start with the data, $X$, construct a layer of hidden units, $Y^1$, that are a function of $X$, and continue adding layers to a stacked representation so that each $Y^{k+1}$ is a function of $Y^k$ alone, then the information that the $Y^k$'s have about $X$ cannot increase after the first layer, $I(X : Y^{k+1}) \le I(X : Y^k)$. From the point of view of mutual information, $Y^1$ is a copy of $X$ and $Y^2$ is just a copy of a copy. While a coarse-grained copy might be useful, the InfoMax principle does not quantify how or why.

Instead of looking for a $Y$ that memorizes the data, we shift our perspective to searching for a $Y$ so that the $X_i$'s are as independent as possible conditioned on this $Y$. Essentially, we are trying to reconstruct the latent factors that are the cause of the dependence in $X$. To formalize this, we use a measure of multivariate mutual information that was first introduced as “total correlation” Watanabe (1960).

$$TC(X) \;\equiv\; \sum_{i=1}^{n} H(X_i) \;-\; H(X) \qquad\qquad (1)$$

This quantity reflects the dependence in $X$ and is zero if and only if the $X_i$'s are independent. Just as mutual information is the reduction of entropy in $X$ after conditioning on $Y$, we can define the reduction in multivariate information in $X$ after conditioning on $Y$.

$$TC(X; Y) \;\equiv\; TC(X) - TC(X \mid Y) \qquad\qquad (2)$$

That $TC(X)$ can be hierarchically decomposed in terms of short- and long-range dependencies was already appreciated by Watanabe Watanabe (1960) and has been used in applications such as hierarchical clustering Kraskov et al. (2005). This provides a hint about how higher levels of hierarchical representations can be useful: more abstract representations should reflect longer-range dependencies in the data. Our contribution below is to demonstrate a tractable approach for learning a hierarchy of latent factors, $Y_k$, that exactly captures the multivariate information in $X$.
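Both quantities above can be estimated directly from discrete samples by plugging empirical frequencies into Eq. 1 and Eq. 2. The following minimal numpy sketch (our own illustration, not part of the released sieve code) computes naive plug-in estimates of $TC(X)$ and $TC(X;Y)$ for a small number of variables:

  import numpy as np

  def entropy(counts):
      """Entropy in bits of an empirical distribution given by counts."""
      p = counts / counts.sum()
      p = p[p > 0]
      return -np.sum(p * np.log2(p))

  def joint_entropy(samples):
      """Entropy in bits of the rows of a discrete array (each row = one sample)."""
      _, counts = np.unique(samples, axis=0, return_counts=True)
      return entropy(counts.astype(float))

  def total_correlation(samples):
      """TC(X) = sum_i H(X_i) - H(X), estimated from samples (Eq. 1)."""
      marginal_sum = sum(joint_entropy(samples[:, [i]]) for i in range(samples.shape[1]))
      return marginal_sum - joint_entropy(samples)

  def tc_explained(samples, y):
      """TC(X; Y) = TC(X) - TC(X | Y) for a discrete label y per sample (Eq. 2)."""
      tc_given_y = 0.0
      for label in np.unique(y):
          mask = (y == label)
          tc_given_y += mask.mean() * total_correlation(samples[mask])
      return total_correlation(samples) - tc_given_y

  # Toy check: X2 is a copy of X1, X3 is independent noise.
  rng = np.random.default_rng(0)
  x1 = rng.integers(0, 2, size=1000)
  x = np.column_stack([x1, x1, rng.integers(0, 2, size=1000)])
  print(total_correlation(x))        # close to 1 bit
  print(tc_explained(x, y=x[:, 0]))  # Y = X1 explains that bit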

2 Incremental information decomposition

We consider any set of probabilistic functions of some input variables, $X$, to be a “representation” of $X$. Looking at Fig. 1(a), we consider a representation with a single learned latent factor, $Y$. Then, we try to save the information in $X$ that is not captured by $Y$ into the “remainder information”, $\bar X$. The final result is encapsulated in Cor. 4, which says that we can repeat this procedure iteratively (as in Fig. 1(b)) and decompose $TC(X)$ into a sum of non-negative contributions from each latent factor. Note that $\bar X$ includes $Y$, so that latent factors at subsequent layers can depend on latent factors learned at earlier layers.


Figure 1: (a) This diagram describes one layer of the information sieve. In this graphical model, the variables in the top layer (the $X_i$'s) represent (observed) input variables. $Y$ is some function of all the $X_i$'s that is optimized to be maximally informative about multivariate dependence in $X$. The remainder information, $\bar X_i$, depends on $X_i$ and $Y$ and is set to contain information in $X_i$ that is not captured by $Y$. (b) Summary of the variable naming scheme for multiple layers of the sieve. The input variables are in bold and the learned latent factors are in red.
Theorem 1.

Incremental Decomposition of Information  Let $Y$ be some (deterministic) function of $X$ and let $\bar X_i$ be a probabilistic function of $X_i$ and $Y$, for each $i = 1,\ldots,n$. Then the following upper and lower bounds on $TC(X)$ hold:

$$TC(X; Y) + TC(\bar X) - \sum_{i=1}^{n} I(\bar X_i : Y) \;\le\; TC(X) \;\le\; TC(X; Y) + TC(\bar X) + \sum_{i=1}^{n} H(X_i \mid \bar X_i, Y) \qquad (3)$$

A proof is provided in App. B. Note that the remainder information, $\bar X$, includes $Y$. Bounds on $TC(X)$ also provide bounds on $H(X)$ by using Eq. 1. Next, we point out that the remainder information, $\bar X_i$, can be chosen to make these bounds tight.

Lemma 2.

Construction of perfect remainder information  For discrete, finite random variables $X_i, Y$ drawn from some distribution, $p(x_i, y)$, it is possible to define another random variable $\bar X_i$ that satisfies the following two properties:

(i) $I(\bar X_i : Y) = 0$ (the remainder contains no information about $Y$);
(ii) $H(X_i \mid \bar X_i, Y) = 0$ ($X_i$ can be perfectly reconstructed from $\bar X_i$ and $Y$).
We give a concrete construction in App. C. We would like to point out one caveat here. The cardinality of $\bar X_i$ may have to be large to satisfy these equalities. For a fixed number of samples, this may cause difficulties with estimation, as discussed in Sec. 3. With perfect remainder information in hand, our decomposition becomes exact.

Corollary 3.

Exact decomposition  For $Y$, a function of $X$, and perfect remainder information, $\bar X$, as defined in Lemma 2, the following decomposition holds:

$$TC(X) \;=\; TC(\bar X) + TC(X; Y) \qquad\qquad (4)$$

The above corollary follows directly from Eq. 3 and the definition of perfect remainder information. Intuitively, it states that the dependence in $X$ can be decomposed into a piece that is explained by $Y$, namely $TC(X;Y)$, and the remaining dependence in $\bar X$. This decomposition can then be iterated to extract more and more information from the data.

Corollary 4.

Iterative decomposition  Using the variable naming scheme in Fig. 1(b), we construct a hierarchical representation where each $Y_k$ is a function of $X^{k-1}$ and $X^k$ includes the (perfect) remainder information from $X^{k-1}$ according to Lemma 2. Then

$$TC(X) \;=\; TC(X^k) + \sum_{j=1}^{k} TC(X^{j-1}; Y_j) \qquad\qquad (5)$$

It is easy to check that Eq. 5 results from repeated application of Cor. 3. We show in the next section that quantities of the form $TC(X^{j-1}; Y_j)$ can be estimated and optimized over efficiently, despite involving high-dimensional variables. As we add the (non-negative) contributions from optimizing each $TC(X^{j-1}; Y_j)$, the remaining dependence in the remainder information, $TC(X^k)$, must decrease because $TC(X)$ is a data-dependent constant. Decomposing data into independent factors is exactly the goal of ICA, and the connections are discussed in Sec. 4.

3 Implementing the sieve

Because this learning framework contains many unfamiliar concepts, we consider a detailed analysis of a toy problem in Fig. 2 while addressing concrete issues in implementing the information sieve.

Figure 2: A simple example for which we imagine we have samples of $X$ drawn from some distribution.

Step 1: Optimizing $TC(X;Y)$  First, we construct a variable, $Y$, that is some arbitrary function of $X$ and that explains as much of the dependence in the data as possible. Note that we have to pick the cardinality of $Y$; we will always use binary variables. Dropping the layer indices, the optimization can be written as follows.

$$\max_{p(y \mid x)} \; TC(X; Y) \qquad\qquad (6)$$

Here, we have relaxed the optimization to allow for probabilistic functions of $X$, specified by $p(y \mid x)$. If we take the derivative of this expression (along with the constraint that $p(y \mid x)$ should be normalized) and set it equal to zero, the following simple fixed point equation emerges.

$$p(y \mid x) \;=\; \frac{1}{Z(x)}\, p(y) \prod_{i=1}^{n} \frac{p(x_i \mid y)}{p(x_i)}$$

The state space of $X$ is exponentially large in $n$, the number of variables. Fortunately, this fixed point equation tells us that we can write the solution in terms of a linear number of terms, which are just marginal likelihood ratios. Details of this optimization are discussed in Sec. A. Note that the optimization provides a probabilistic function, $p(y \mid x)$, which we round to a deterministic function by taking the most likely value of $y$ for each $x$. The value of $TC(X;Y)$ obtained in the example in Fig. 2 can be verified from Eq. 2.

Surprisingly, we did not need to restrict or parametrize the set of possible functions; the simple form of the solution was implied by the objective. Furthermore, we can also use this function to find labels for previously unseen examples or to calculate $y$'s for data with missing variables (details in Sec. A). A useful byproduct of the procedure is an estimate of the value of the objective, $TC(X;Y)$, which can be obtained even from a small number of samples.
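To make step 1 concrete, the following numpy sketch (function and variable names are ours, not those of the released code) implements the alternating updates implied by the fixed point above for a single latent factor: estimate $p(y)$ and $p(x_i \mid y)$ from the current soft labels, then re-apply $p(y \mid x) \propto p(y) \prod_i p(x_i \mid y)/p(x_i)$ for every sample.

  import numpy as np

  def fit_single_factor(X, k=2, n_iter=100, seed=0, eps=1e-10):
      """One sieve layer: optimize p(y|x) by iterating the fixed-point update.

      X is an (N, n) array of small non-negative integers (discrete data).
      Returns the soft labels p(y|x) per sample and the free-energy estimate
      of the objective TC(X; Y) (cf. Sec. A), in nats.
      """
      rng = np.random.default_rng(seed)
      N, n = X.shape
      cards = X.max(axis=0) + 1                        # cardinality of each X_i
      p_y_given_x = rng.dirichlet(np.ones(k), size=N)  # random soft labels
      tc_estimate = 0.0

      for _ in range(n_iter):
          p_y = p_y_given_x.mean(axis=0) + eps         # current estimate of p(y)
          log_score = np.tile(np.log(p_y), (N, 1))     # accumulates log p(y) + sum_i log-ratios
          for i in range(n):
              # empirical p(x_i, y) under the current soft labels
              p_xi_y = np.zeros((cards[i], k))
              for v in range(cards[i]):
                  p_xi_y[v] = p_y_given_x[X[:, i] == v].sum(axis=0) / N
              p_xi_y += eps
              p_xi = p_xi_y.sum(axis=1, keepdims=True)
              # add log [ p(x_i | y) / p(x_i) ] for the observed value of x_i
              log_score += np.log(p_xi_y / (p_xi * p_y))[X[:, i]]
          # normalize per sample: p(y|x) = exp(score) / Z(x)
          m = log_score.max(axis=1, keepdims=True)
          z = np.exp(log_score - m).sum(axis=1, keepdims=True)
          p_y_given_x = np.exp(log_score - m) / z
          tc_estimate = np.mean(m[:, 0] + np.log(z[:, 0]))  # E[log Z(x)]

      return p_y_given_x, tc_estimate

  # Hard labels: p_y_given_x.argmax(axis=1); convert the estimate to bits with / np.log(2).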

Step 2: Remainder information  Next, the goal is to construct the remainder information, $\bar X_i$, as a probabilistic function of $X_i$ and $Y$, so that conditions (i) and (ii) of Lemma 2 are satisfied: $I(\bar X_i : Y) = 0$ and $H(X_i \mid \bar X_i, Y) = 0$. This can be done exactly and we provide a simple algorithm in Sec. C. Solutions for this example are given in Fig. 2. Concretely, we estimate the marginals, $p(x_i, y)$, from data and then write down a conditional probability table, $p(\bar x_i \mid x_i, y)$, satisfying the conditions. The example in Fig. 2 was constructed so that the remainder information has the same cardinality as the original variables. This is not always possible. While we can always achieve perfect remainder information by letting the cardinality of the remainder information grow, it might then become difficult to estimate marginals of the form $p(\bar x_i, y)$ at subsequent layers of the sieve, as is required for the optimization in step 1. In the results shown below we allow the cardinality of the variables to increase by only one at each level to avoid state space explosion, even if doing so causes $I(\bar X_i : Y) > 0$. We keep track of these penalty terms so that we can report accurate lower bounds using Eq. 3.

Another issue to note is that in general there may not be a unique choice for the remainder information. In the example, one of the variables already satisfies the conditions unchanged, so we keep it as is, although other assignments would also have been valid. Whenever the identity transformation, $\bar X_i = X_i$, satisfies the conditions, we will always choose it.
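Whatever conditional probability table $p(\bar x_i \mid x_i, y)$ one writes down, conditions (i) and (ii) can be checked numerically. A small helper of our own (assuming an empirical joint $p(x_i, y)$ and a candidate table as inputs) computes both diagnostics in bits:

  import numpy as np

  def check_remainder(p_xy, p_bar_given_xy):
      """Diagnostics for remainder information.

      p_xy:            (|X_i|, |Y|) empirical joint distribution of (X_i, Y).
      p_bar_given_xy:  (|X_bar|, |X_i|, |Y|) candidate table p(x_bar | x_i, y).
      Returns (I(X_bar : Y), H(X_i | X_bar, Y)) in bits; both are zero for
      perfect remainder information (Lemma 2).
      """
      p_bxy = p_bar_given_xy * p_xy[None, :, :]     # joint over (x_bar, x_i, y)
      p_by = p_bxy.sum(axis=1)                      # p(x_bar, y)
      p_b = p_by.sum(axis=1, keepdims=True)         # p(x_bar)
      p_y = p_xy.sum(axis=0, keepdims=True)         # p(y)
      with np.errstate(divide="ignore", invalid="ignore"):
          mi = np.nansum(p_by * np.log2(p_by / (p_b * p_y)))
          cond_ent = -np.nansum(p_bxy * np.log2(p_bxy / p_by[:, None, :]))
      return mi, cond_ent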

Step 3: Repeat until the end  At this point we repeat the procedure, putting the remainder information back into step 1 and searching for a new latent factor that explains any remaining dependence. In this case, we can see by inspection that the remainder information is already independent and, using Eq. 5, we recover $TC(X)$ exactly. Generally, in high-dimensional spaces it may be difficult to verify that the remainder information is truly independent. When the remainder information is independent, attempting the optimization in step 1 yields a value of zero. In practice, we stop our hierarchical procedure when the optimization in step 1 stops producing positive results, because it means our bounds are no longer tightening. Code implementing this entire pipeline is available Ver Steeg ().
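Putting the three steps together, the sieve is a loop that trains one factor, measures its contribution, replaces the data by remainder information plus the new label, and stops when the contribution is no longer positive. A schematic of that loop is sketched below; it reuses the fit_single_factor sketch above, and make_remainder is a hypothetical stand-in for the construction of Sec. C, not the interface of the released code.

  import numpy as np

  def train_sieve(X, max_layers=20, tol=1e-3):
      """Stack sieve layers until the objective stops being positive (Eq. 5)."""
      layers, lower_bound = [], 0.0
      data = X.copy()
      for _ in range(max_layers):
          soft_labels, tc_xy = fit_single_factor(data)   # Step 1 (sketch above)
          if tc_xy < tol:                                # bounds no longer tighten
              break
          labels = soft_labels.argmax(axis=1)            # round to a deterministic Y
          # Step 2: make_remainder is hypothetical; it should return the remainder
          # variables and the penalty sum_i I(X_bar_i : Y) from Eq. 3.
          remainder, penalty = make_remainder(data, labels)
          lower_bound += tc_xy - penalty                 # running lower bound on TC(X)
          layers.append(labels)
          # Step 3: the next layer sees the remainder plus the new label
          data = np.column_stack([remainder, labels])
      return layers, lower_bound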

Prediction and compression  Note that condition (ii) for the remainder information, $H(X_i \mid \bar X_i, Y) = 0$, implies that we can perfectly reconstruct each variable from the remainder information at the next layer. Therefore, we can in principle reconstruct the data from the representation at the last layer of the sieve. In the example, the remainder information requires two bits to encode each variable separately, while the data requires three bits to encode each variable separately. The final representation has exploited the redundancy among the $X_i$'s to create a more succinct encoding. A use case for lossy compression is discussed in Sec. 5. Also note that at each layer some variables are almost or completely explained (some variables in the example become constant). Subsequent layers can enjoy a computational speed-up by ignoring these variables, which will no longer contribute to the optimization.

4 Discrete ICA

If $X$ represents observed variables then the entropy, $H(X)$, can be interpreted as the average number of bits required to encode a single observation of these variables. In practice, however, if $X$ is high-dimensional then estimating or constructing this code requires detailed knowledge of $p(x)$, which may require exponentially many samples in the number of variables. Going back at least to Barlow Barlow (1989), it was recognized that if $X$ is transformed into some other basis, $Y$, with the $Y_i$'s independent ($TC(Y) = 0$), then the coding cost in this new basis is $\sum_i H(Y_i) = H(Y)$, i.e., it is the same as encoding each variable separately. This is exactly the problem of independent component analysis: transform the data into a basis for which $TC(Y)$ is zero, or for which it is minimized Comon (1994); Hyvärinen & Oja (2000).
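As a tiny worked example (ours, not from the paper): take two uniform binary variables with $X_2 = X_1$. Then
$$\sum_i H(X_i) = 2 \text{ bits}, \qquad H(X) = 1 \text{ bit}, \qquad TC(X) = 1 \text{ bit}.$$
Transforming to the basis $Y_1 = X_1$, $Y_2 = X_1 \oplus X_2$ (which is identically zero) gives independent components with $\sum_i H(Y_i) = 1 + 0 = 1$ bit $= H(X)$, so encoding each $Y_i$ separately is already optimal.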

Figure 3: On the far left, we consider three independent binary random variables. The vertical position of each signal is offset for visibility. From left to right: independent source data is linearly mixed, and the mixed data is fed into the information sieve. After going through some intermediate representations, the final result is shown on the far right. Consult Fig. 1(b) for the variable naming scheme. The final representation recovers the independent input sources.

While our method does not directly minimize the total correlation of the representation, Eq. 5 shows that, because $TC(X)$ is a data-dependent constant, every increase in the total correlation explained by each latent factor directly implies a reduction in the dependence remaining in the resulting representation, $TC(X^k)$.

Since the terms in the sum are optimized (and always non-negative), the dependence is decreased at each level. That independence could be achieved as a byproduct of efficient coding has been previously considered Hochreiter & Schmidhuber (1999). An approach leading to “less dependent components” for continuous variables has also been demonstrated Stögbauer et al. (2004).

For discrete variables, which are the focus of this paper, performing ICA is a challenging and active area of research. Recent state-of-the-art results lower the complexity of this problem to only a single exponential in the number of variables Painsky et al. (2014). Our method represents a major leap for this problem, as its complexity is only linear in the number of variables; however, we only guarantee extraction of components that are more independent, while the approach of Painsky et al. guarantees a global optimum.

The most commonly studied scenario for ICA is a reconstruction problem in which some (typically continuous) independent source variables are linearly mixed according to some unknown matrix Comon (1994); Hyvärinen & Oja (2000). The goal is to recover the mixing matrix and unmix the components back into their independent sources. Next we demonstrate our discrete independent component recovery on an example reminiscent of traditional ICA examples.

An ICA example  Fig. 3 shows an example of recovering independent components from discrete random variables. The sources are hidden and the observations are a linear mixture of these sources, produced by a fixed mixing matrix.

The information sieve continues to add layers as long as doing so increases the tightness of the information bounds. The intermediate representations at each layer are also shown. For instance, layer 1 extracts one independent component and then removes this component from the remainder information. After three layers, the sieve stops because the remainder information consists of independent variables and therefore the optimization in step 1 yields zero.

In this case, the procedure correctly stops after three latent factors are discovered. Naively, three layers make this a “deep” representation. However, we can examine the functional dependence among the latent factors and inputs by looking at the strength of the mutual information between them, as shown in Fig. 4. This allows us to see that none of the learned latent factors depend on each other, so the resulting model is actually, in some sense, shallow. The example in the next section, in contrast, has a deep structure where latent factors depend on latent factors from previous layers. Note that the structure in Fig. 4 perfectly reflects the structure of the mixing matrix (i.e., if we flipped the arrows and relabeled the latent factors as sources, this would be an accurate representation of the generative model we used).

Figure 4: This visualizes the structure of the learned representation for the ICA example in Fig. 3. The thickness of links is proportional to the mutual information between the connected variables.

While the sieve is guaranteed to recover independent components in some limit, there may be multiple ways to decompose the data into independent components. Because our method does not start with the assumption of a linear mixing of independent sources, even if such a decomposition exists we might recover a different one. While the example we showed happened to return the linear solution that we used to generate the problem, there is no guarantee to find a linear solution, even if one exists.

5 Lossy compression on MNIST digits

The information sieve is not a generative probabilistic model. We construct latent factors that are functions of the data in a way that maximizes the (multivariate) information that is preserved. Nevertheless, because of the way the remainder information is constructed, we can run the sieve in reverse and, if we throw away the remainder information and keep only the latent factors, we get a lossy compression. We can use this lossy compression interpretation to perform tasks that are usually achieved using generative models, including in-painting and generating new examples (the converse, interpreting a generative model as lossy compression, has also been considered Hinton & Salakhutdinov (2006)).

We illustrate the steps for lossy compression and in-painting in Fig. 5. Imagine that we have already trained a sieve model. For lossy compression, we first transform some data using the sieve. The sieve is an invertible transformation, so we can run it in reverse to exactly recover the inputs. Instead, we store only the labels, the $Y_k$'s, throwing the remainder information away. When we invert the sieve, what values should we input for the discarded remainder information? During training, we estimate the most likely value of the remainder for each variable. W.l.o.g., we relabel the symbols so that this value is $0$. Then, for lossy recovery, we run the sieve in reverse using the labels we stored and setting the remainder information to $0$.
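In code, lossy recovery is just the inverse pass of the sieve with the stored labels and with the remainder entries replaced by their most likely (relabeled-to-zero) values. A schematic with hypothetical method names (transform, invert), not the released API:

  import numpy as np

  def lossy_roundtrip(sieve, X):
      """Compress X to the layer labels only, then reconstruct.

      Assumes the hypothetical sieve.transform returns (labels, remainder) and
      sieve.invert maps (labels, remainder) back to the input space exactly.
      """
      labels, remainder = sieve.transform(X)
      # keep only the labels; substitute zeros (the most likely, relabeled value)
      blank = np.zeros_like(remainder)
      return sieve.invert(labels, blank)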

In-painting proceeds in essentially the same way. We take advantage of the fact that we can transform data even in the presence of missing values, as described in Sec. A. Then we replace missing values in the remainder information with zeros and invert the sieve normally.
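Under the same hypothetical interface, in-painting differs only in how the inputs are prepared:

  import numpy as np

  def inpaint(sieve, X_partial):
      """X_partial has NaNs where pixels are missing (sketch only).

      Labels are inferred from the observed pixels alone (Sec. A); missing
      remainder entries are then set to zero before inverting the sieve.
      """
      labels, remainder = sieve.transform(X_partial)   # assumed to handle missing values
      remainder = np.nan_to_num(remainder, nan=0.0)
      return sieve.invert(labels, remainder)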

Figure 5: (a) We use the sieve to transform data into some labels plus remainder information. For lossy recovery, we invert the sieve using only the labels, setting the remainder information to zero. (b) For in-painting, we first transform data with missing values. Then we invert the sieve, again using zeros for the missing remainder information.


Figure 6: (a) We visualize each of the learned components, arranged in reading order. (b) The structural relationships among the latent factors, based on mutual information; the size of a node represents the magnitude of the information it contributes.

For the following tasks, we consider 50k MNIST digits that were binarized by thresholding the normalized grayscale values. We include no prior knowledge about spatial structure or invariance under transformations, through convolutional structure or pooling, for instance. The binarized images are treated as binary vectors in a 784-dimensional space. The digit labels are also not used in our analysis. We trained the information sieve on this data, adding layers as long as the bounds were tightening. This led to a 12-layer representation and a lower bound on $TC(X)$ of about 40 bits. It seems likely that more than 12 layers could be effective, but the growing size of the state space for the remainder information increases the difficulty of estimation with limited data. A visualization of the learned latent factors and the relationships among them appears in Fig. 6. Unlike the ICA example, the latent factors here exhibit multi-layered relationships.

Figure 7: The top row shows randomly selected MNIST digits. In the second row, we compress the digits into the 12 binary variables, $Y_1,\ldots,Y_{12}$, and then attempt to reconstruct the image. In the bottom row, we learn the latent factors using just the pixels in the top half and then recover the pixels in the bottom half.

The middle row of Fig. 7 shows results from the lossy compression task. We use the sieve to transform the original digits into 12 binary latent factors plus remainder information for each pixel, and then we use the latent factors alone to reconstruct the image. In the third row, the latent factors are estimated using only pixels from the top half. Then we reconstruct the pixels on the bottom half from these latent factors. Similar results on test images are shown in Sec. D, along with examples of “hallucinating” new digits.

6 Lossless compression

Given samples of $X$ drawn from $p(x)$, the best compression we can do in theory is to use an average of $H(X)$ bits per sample for our compressed representation Shannon (1948). However, in practice, if $X$ is high-dimensional then we cannot accurately estimate $p(x)$ to achieve this level of compression. We consider alternate schemes and compare them to the information sieve on a compression task.

Benchmark  For a lossless compression benchmark, we consider a set of 60k binarized digits with 784 pixels each, where the order of the pixels has been randomly permuted (the same unknown permutation is applied to each image). Note that we have made this task artificially more difficult than the straightforward task of compressing digits because many compression schemes exploit spatial correlations among neighboring pixels for efficiency. The information sieve is unaffected by this permutation since it does not make any assumptions about the structure of the input space (e.g., the adjacency of pixels). We use 50k digits for training models and report compression results on the 10k test digits.

Naively, these 28 by 28 binary pixels would require 784 bits per digit to store. However, some pixels are almost always zero. According to Shannon, we can compress pixel $X_i$ using just $H(X_i)$ bits on average Shannon (1948). Because the state space of each individual bit is small, this bound is actually achievable (using arithmetic coding Cover & Thomas (2006), for example). Therefore, we should be able to store the digits using $\sum_i H(X_i)$ bits on average (the “bitwise” scheme in Table 1).
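The bitwise bound quoted in Table 1 is easy to reproduce from the binarized training images; a minimal sketch (ours):

  import numpy as np

  def bitwise_bound(images):
      """Sum of per-pixel entropies in bits, for binary images of shape (N, 784)."""
      p1 = images.mean(axis=0)                       # p(pixel = 1) for each pixel
      p = np.clip(p1, 1e-12, 1 - 1e-12)
      h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
      return h.sum()                                 # roughly 297 bits/digit on this benchmark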

We would like to make the data more compressible by first transforming it. We consider a simplified version of the sieve with just one layer. We let $Y$ take $k$ possible values and then optimize it according to our objective. For the remainder information, we use the (invertible) function $\bar X_i = X_i \oplus \tilde x_i(Y)$, where $\tilde x_i(y)$ is the most likely value of $X_i$ for a given value of $Y$; in other words, $\bar X_i$ represents the deviation from the most likely value of $X_i$ for a given value of $Y$. The cost of storing a digit in this new representation will be $\log_2 k + \sum_i H(\bar X_i)$, where $\log_2 k$ bits are used to store the value of $Y$.

For comparison, we consider an analogous benchmark introduced in Gregor & LeCun (2011). For this benchmark, we just choose $k$ random digits as representatives (from the training set). Then for each test digit, we store the identity of the closest representative (by Hamming distance), along with the error, which we will also call $\bar X$, so that we can recover the original digit. Again, the number of bits per digit will just be $\log_2 k$ plus the cost of storing the errors for each pixel according to Shannon.
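A sketch of this baseline (our own rendering of the scheme, with hypothetical variable names) makes the accounting explicit:

  import numpy as np

  def representative_cost(train, test, k=20, seed=0):
      """Bits/digit for the random-representative benchmark (Gregor & LeCun style).

      Stores log2(k) bits for the nearest representative plus the Shannon cost
      of the per-pixel residual errors (XOR with the chosen representative).
      """
      rng = np.random.default_rng(seed)
      reps = train[rng.choice(len(train), size=k, replace=False)]
      # Hamming distance of every test digit to every representative
      dists = (test[:, None, :] != reps[None, :, :]).sum(axis=2)
      nearest = dists.argmin(axis=1)
      errors = test ^ reps[nearest]                  # residuals to be stored
      p = np.clip(errors.mean(axis=0), 1e-12, 1 - 1e-12)
      residual_bits = (-(p * np.log2(p) + (1 - p) * np.log2(1 - p))).sum()
      return np.log2(k) + residual_bits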

Figure 8: This shows, for each value of $Y$, the conditional distribution of each pixel, $X_i$, displayed as an image.

Consider the single-layer sieve described above. After optimizing, Fig. 8 visualizes the learned components of $Y$. As an exercise in unsupervised clustering, the results are somewhat interesting; the sieve basically finds clusters for each digit and for slanted versions of each digit. In Fig. 9 we explicitly construct the remainder information (bottom row), i.e., the deviation between the most likely value of each pixel conditioned on $Y$ (middle row) and the original (top row).

Figure 9: The top row shows the original digit, the middle row shows the most likely values of the pixels conditioned on the label, $Y$, and the bottom row shows the remainder, or residual error, $\bar X$.

The results of our various compression benchmarks are shown in Table 1. For comparison we also show results from two standard compression schemes: gzip, based on Lempel-Ziv coding Ziv & Lempel (1977), and Huffman coding Huffman et al. (1952). We take the better compression result from storing and compressing the data array in column-major or row-major order with these (sequence-based) compression schemes. Note that the sieve and the random-representative benchmark that we described require a codebook of fixed size whose contribution is asymptotically negligible and is not included in the results.

Table 1: Summary of compression results. Results with a “*” are reported based on empirical compression results rather than Shannon bounds.

  Method                            Bits per digit
  Naive                             784
  Huffman* Huffman et al. (1952)    376
  gzip* Ziv & Lempel (1977)         328
  Bitwise                           297
  20 random representatives         293
  50 random representatives         279
  100 random representatives        267
  20 sieve representatives          266
  50 sieve representatives          252
  100 sieve representatives         243

Discussion

First of all, sequence-based compression schemes have a serious disadvantage in this setup. Because the pixels are scrambled, taking advantage of correlations would require longer window sizes than are typical. The random-representative scheme does significantly better: despite the scrambled pixels, it at least uses the fact that the data consist of iid samples of length 784 pixels. However, the sieve leads to much better compression; for instance, 20 sieve representatives are as good as 100 random ones. The idea behind “factorial codes” Barlow (1989) is that if we can transform our data so that the variables are independent, and then (optimally) compress each variable separately, we will achieve a globally optimal compression. The compression results shown here are promising but not state-of-the-art. The reason is that our discovery of discrete independent components comes at the cost of increasing the cardinality of variables at each layer of the sieve. To define a more practical compression scheme, we would have to balance the trade-off between reducing dependence and controlling the size of the state space. We leave this direction for future work.

7 Related work

The idea of decomposing multivariate information as an underlying principle for unsupervised representation learning has been recently introduced Ver Steeg & Galstyan (2015, 2014) and used in several contexts Pepke & Ver Steeg (2016); Madsen et al. (2016). While bounds on $TC(X)$ were previously given, here we provided an exact decomposition. Our decomposition also introduces the idea of remainder information. While previous work required fixing the depth and number of latent factors in the representation, remainder information allows us to build up the representation incrementally, learning the depth and number of factors required as we go. Besides providing a more flexible approach to representation and structure learning, the invertibility of the information sieve makes it more naturally suited to a wider variety of tasks, including lossy and lossless compression and prediction. Another interesting related result showed that positivity of a quantity appearing in our bounds implies that the $X_i$'s share a common ancestor in any DAG consistent with the data Steudel & Ay (2015). A different line of work about information decomposition focuses on distinguishing synergy and redundancy Williams & Beer (2010), though these measures are typically impossible to estimate for high-dimensional systems. Finally, a different approach to information decomposition focuses on the geometry of the manifold of distributions defined by different models Amari (2001).

Connections with ICA were discussed in Sec. 4 and the relationship to InfoMax was discussed in Sec. 1. The information bottleneck (IB) Tishby et al. (2000) is another information-theoretic optimization for constructing representations of data that has many mathematical similarities to the objective in Eq. 6, with the main difference being that IB focuses on supervised learning while ours is an unsupervised approach. Recently, the IB principle was used to investigate the value of depth in the context of supervised learning Tishby & Zaslavsky (2015). The focus here, on the other hand, is to find an information-theoretic principle that justifies and motivates deep representations for unsupervised learning.

8 Conclusion

We introduced the information sieve, which provides a decomposition of multivariate information for high-dimensional (discrete) data that is also computationally feasible. The extension of the sieve to continuous variables is nontrivial but appears to result in algorithms that are more robust and practical Ver Steeg et al. (2016). We established here a few of the immediate implications of the sieve decomposition. First of all, we saw that a natural notion of “remainder information” arises and that this allows us to extract information in an incremental way. Several distinct applications to fundamental problems in unsupervised learning were demonstrated and appear promising for in-depth exploration. The sieve provides an exponentially faster method than the best known algorithm for discrete ICA (though without guarantees of global optimality). We also showed that the sieve defines both lossy and lossless compression schemes. Finally, the information sieve suggests a novel conceptual framework for understanding unsupervised representation learning. Among the many deviations from standard representation learning a few properties stand out. Representations are learned incrementally and the depth and structure emerge in a data-driven way. Representations can be evaluated information-theoretically and the decomposition allows us to separately characterize the contribution of each hidden unit in the representation.

Acknowledgments

GV acknowledges support from AFOSR grant FA9550-12-1-0417 and GV and AG acknowledge support from DARPA grant W911NF-12-1-0034 and IARPA grant FA8750-15-C-0071.


Supplementary Material for “The Information Sieve”

Appendix A Details of the optimization of $TC(X;Y)$

We need to optimize the following objective:

$$\max_{p(y \mid x)} \; TC(X; Y)$$

If we take the derivative of this expression (along with the constraint that $p(y \mid x)$ should be normalized) and set it equal to zero, the following simple fixed point equation emerges:

$$p(y \mid x) \;=\; \frac{1}{Z(x)}\, p(y) \prod_{i=1}^{n} \frac{p(x_i \mid y)}{p(x_i)}$$

Surprisingly, optimizing this objective over all possible (probabilistic) functions $p(y \mid x)$ has a fixed-point solution with a simple form. This leads to an iterative solution procedure that corresponds to a special case of the one considered in Ver Steeg & Galstyan (2015). There it is shown that each iterative update of the fixed-point equation increases the objective and that we are therefore guaranteed to converge to a local optimum of the objective. In short, we consider the empirical distribution over observed samples. For each sample, we start with a random probabilistic label. Then we use these labels to estimate the marginals, $p(y)$ and $p(x_i \mid y)$, then we use the fixed point equation to re-estimate $p(y \mid x)$ for each sample, and so on until convergence.

Also, note that we can estimate the value of the objective in a simple way. The normalization term, $Z(x)$, is computed for each sample by just summing over the two values of $y$, since $Y$ is binary. The expected logarithm of $Z(x)$, or the free energy, is an estimate of the objective $TC(X;Y)$ Ver Steeg & Galstyan (2015).

Algorithmic details

The code implementing this optimization is included as a module in the sieve code Ver Steeg (). The algorithm is described in Alg. 1. Note that we use $\mathbb{1}[\cdot]$ to denote the discrete delta (indicator) function. The complexity scales as $O(n \cdot N \cdot k)$, where $n$ is the number of variables, $N$ is the number of samples, and $k$ is the cardinality of the latent factor, $Y$. Because the solution only depends on estimation of marginals between $Y$ and each $X_i$, the number of samples needed for accurate estimation is small Ver Steeg & Galstyan (2015).

  Input: Data matrix of $N$ samples of $n$ discrete variables.
  Specify $k$: cardinality of $Y$
  Randomly initialize soft labels $p(y \mid x^{(l)})$ for each sample $l$
  repeat
     Estimate marginals: $p(y) \leftarrow \frac{1}{N}\sum_l p(y \mid x^{(l)})$, $\; p(x_i, y) \leftarrow \frac{1}{N}\sum_l \mathbb{1}[x^{(l)}_i = x_i]\, p(y \mid x^{(l)})$
     for $l = 1$ to $N$ do
        $p(y \mid x^{(l)}) \leftarrow \frac{1}{Z(x^{(l)})}\, p(y) \prod_{i=1}^{n} p(x^{(l)}_i \mid y) / p(x^{(l)}_i)$
     end for
  until convergence
Algorithm 1 Optimizing $TC(X;Y)$ for a single layer

Labeling test data

The fixed point equation above essentially gives us a simple representation of the labeling function in terms of some parameters which, in this case, just correspond to the marginal probability distributions. We simply input values of $x$ from a test set into that equation, and then round to the most likely value of $y$ to generate labels.

Missing data

Note that missing data is handled quite gracefully in this scenario. Imagine that some subset of the $X_i$'s is observed. Denote the subset of indices for which we have observed data on a given sample by $O$ and the corresponding subset of random variables by $X_O$. If we solved the optimization problem for this subset only, we would get a form for the solution like this:

$$p(y \mid x_O) \;=\; \frac{1}{Z(x_O)}\, p(y) \prod_{i \in O} \frac{p(x_i \mid y)}{p(x_i)}$$

In other words, we simply omit the contribution from unobserved variables in the product.
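In the numpy sketch of Sec. 3, this simply means skipping the unobserved columns when accumulating the log-ratios. A minimal illustration (hypothetical helper, with observed entries marked by a boolean mask):

  import numpy as np

  def label_with_missing(x, mask, log_p_y, log_ratios):
      """Label one sample with missing entries.

      x:          (n,) discrete values (ignored where mask is False).
      mask:       (n,) True where x_i is observed.
      log_p_y:    (k,) log p(y).
      log_ratios: list of (|X_i|, k) arrays, log [ p(x_i | y) / p(x_i) ].
      """
      score = log_p_y.copy()
      for i in np.flatnonzero(mask):                 # only observed variables
          score += log_ratios[i][x[i]]
      return int(np.argmax(score))                   # most likely label y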

Appendix B Proof of Theorem 1

We begin by adopting a general definition for “representations” and recalling a useful theorem concerning them.

  • The random variables $Y_1,\ldots,Y_m$ constitute a representation of $X$ if the joint distribution factorizes, $p(x, y_1,\ldots,y_m) = p(x) \prod_{j=1}^{m} p(y_j \mid x)$, for all $x \in \mathcal{X}$, $j \in \{1,\ldots,m\}$, and $y_j \in \mathcal{Y}_j$. A representation is defined by the domains of the variables and the conditional probability tables, $p(y_j \mid x)$.

    Theorem (Basic Decomposition of Information; Ver Steeg & Galstyan (2015)).

    If $Y$ is a representation of $X$ and we define,

    (7)

    then the following bound and decomposition holds.

    (8)
Theorem.

Incremental Decomposition of Information

Let $Y$ be some (deterministic) function of $X$ and, for each $i$, let $\bar X_i$ be a probabilistic function of $X_i$ and $Y$. Then the following upper and lower bounds on $TC(X)$ hold.

$$TC(X; Y) + TC(\bar X) - \sum_{i=1}^{n} I(\bar X_i : Y) \;\le\; TC(X) \;\le\; TC(X; Y) + TC(\bar X) + \sum_{i=1}^{n} H(X_i \mid \bar X_i, Y) \qquad (9)$$
Proof.

We refer to Fig. 1(a) for the structure of the graphical model. We include $Y$ as one of the components of $\bar X$ and we will write $\bar X_{-i}$ to pick out all terms except $\bar X_i$. Note that because $Y$ is a deterministic function of $X$, we can view $\bar X_i$ as a probabilistic function of $(X_i, Y)$ or of $X$ (as required by Thm. B). Applying Thm. B, we have

On the LHS, note that , so we can re-arrange to get

(10)

The LHS is the quantity we are trying to bound, so we focus on expanding the RHS and bounding it.

First we expand . Using the chain rule for mutual information we expand the first term.

Rearranging, we take out a term equal to .

We use the chain rule again to write , where with (and ) excluded.

The conditional mutual information, . We expand the first instance of CMI in the previous expression.

Since , the first and fourth terms cancel. Finally, this leaves us with

Now we can replace all of this back in to Eq. 10, noting that the terms cancel.

(11)

First, note that total correlation, conditional total correlation, mutual information, conditional mutual information, and entropy (for discrete variables) are non-negative. Therefore we trivially have the lower bound, $TC(X) \ge TC(X;Y) + TC(\bar X) - \sum_i I(\bar X_i : Y)$. All that remains is to find the upper bound. We drop the negative mutual information term, expand definitions in the first line, and then drop the negative of an entropy in the second line.

The equality in the last line can be seen by just expanding all the definitions of conditional entropies and conditional mutual information. These provide the upper and lower bounds for the theorem. ∎

Figure 10: An illustration of how the remainder information, $\bar X_i$, is constructed from statistics about $X_i$ and $Y$.
Figure 11: The same results as Fig. 7 but using samples from a test set instead of the training set.

Appendix C An algorithm for perfect reconstruction of remainder information

We will use the notation of Fig. 1(a) to construct remainder information for one variable in one layer of the sieve. The goal is to construct the remainder information, $\bar X_i$, as a probabilistic function of $X_i$ and $Y$, so that we satisfy the conditions of Lemma 2, (i) $I(\bar X_i : Y) = 0$ and (ii) $H(X_i \mid \bar X_i, Y) = 0$.

We need to write down a probabilistic function so that, for the observed statistics, $p(x_i, y)$, these conditions are satisfied. There are many ways to accomplish this, and we sketch out one solution here. The actual code we use to generate remainder information for the results in this paper is available Ver Steeg ().

We start with the picture in Fig. 10, which visualizes the conditional probabilities $p(x_i \mid y)$. Note that the order of the values of $x_i$ for each value of $y$ can be arbitrary for this scheme to succeed. For concreteness, we sort the values of $x_i$ for each $y$ in order of descending likelihood. Next, we construct the marginal distribution, $p(\bar x_i)$: every time we see a split in one of the histograms of $p(x_i \mid y)$, we introduce a corresponding split for $\bar x_i$. Now, to construct $p(\bar x_i \mid x_i, y)$, for each $y$ and each $\bar x_i$, we find the unique value of $x_i$ whose histogram bar lies directly above $\bar x_i$, and we set $p(\bar x_i \mid x_i, y) = p(\bar x_i)/p(x_i \mid y)$ for that value (and zero otherwise). Marginalizing over $x_i$ then gives $p(\bar x_i \mid y) = p(\bar x_i)$, ensuring that $I(\bar X_i : Y) = 0$. Visually, it can be seen that $H(X_i \mid \bar X_i, Y) = 0$ by picking a value of $\bar x_i$ and $y$ and noting that it picks out a unique value of $x_i$ in Fig. 10.
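A compact rendering of this construction is sketched below (our own code, not the released implementation); it takes the table of conditionals $p(x_i \mid y)$ and returns the segment lengths $p(\bar x_i)$ together with the table $p(\bar x_i \mid x_i, y)$:

  import numpy as np

  def perfect_remainder(p_x_given_y):
      """Stacked-histogram construction of remainder information (cf. Fig. 10).

      p_x_given_y: (|Y|, |X_i|) array of conditional distributions p(x_i | y).
      Returns the segment lengths p(x_bar) and a table p(x_bar | x_i, y) with
      I(X_bar : Y) = 0 and H(X_i | X_bar, Y) = 0 (cardinality of X_bar may grow).
      """
      k, cx = p_x_given_y.shape
      # sort each row in descending order and record the cumulative split points
      order = np.argsort(-p_x_given_y, axis=1)
      cum = np.cumsum(np.take_along_axis(p_x_given_y, order, axis=1), axis=1)
      cum[:, -1] = 1.0                               # guard against floating-point drift
      # x_bar values are the segments of [0, 1] between all split points
      edges = np.unique(np.concatenate([[0.0], cum.ravel()]))
      seg_len = np.diff(edges)                       # p(x_bar), the same for every y
      table = np.zeros((len(seg_len), cx, k))        # p(x_bar | x_i, y)
      for y in range(k):
          lo = 0.0
          for rank in range(cx):
              xi, hi = order[y, rank], cum[y, rank]
              inside = (edges[:-1] >= lo - 1e-12) & (edges[1:] <= hi + 1e-12)
              if p_x_given_y[y, xi] > 0:
                  table[inside, xi, y] = seg_len[inside] / p_x_given_y[y, xi]
              lo = hi
      return seg_len, table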

Note that the function used to construct $\bar X_i$ is probabilistic. Therefore, when we construct the remainder information at the next layer of the sieve, we have to draw stochastically from this distribution. In the example in Sec. 3 the functions for the remainder information happened to be deterministic. In general, though, probabilistic functions inject some noise to ensure that correlations with $Y$ are forgotten at the next level of the sieve. In Sec. 6 we point out that this scheme is detrimental for lossless compression and we point out an alternative.

Controlling the cardinality of $\bar X_i$  It is easy to imagine scenarios in Fig. 10 where the cardinality of $\bar X_i$ becomes very large. What we would like is to be able to approximately satisfy conditions (i) and (ii) while keeping the cardinality of the variables small (so that we can accurately estimate probabilities from samples of data). To guide intuition, consider two extreme cases. First, imagine setting $\bar X_i$ to a constant, regardless of $x_i$. This satisfies condition (i) but maximally violates (ii). The other extreme is to set $\bar X_i = X_i$. In that case, (ii) is satisfied, but $I(\bar X_i : Y) = I(X_i : Y)$. This is only problematic if $X_i$ is related to $Y$ to begin with. If it is, and we set $\bar X_i = X_i$, then the same dependence can be extracted at the next layer as well (since we pass $X_i$ to the next layer unchanged).

In practice we would like to find the best solution with a cardinality of fixed size. Note that this can be cast as an optimization problem in which the entries of the conditional probability table $p(\bar x_i \mid x_i, y)$ are the variables to optimize over, given the respective cardinalities of the variables. Then we can minimize a nonlinear objective, such as the sum of the violations of conditions (i) and (ii), over these variables. While off-the-shelf solvers will certainly return local optima for this problem, the optimization is quite slow, especially if we let the cardinalities get big.

For the results in this paper, instead of directly solving the optimization problem above to get a representation with cardinality of fixed size, we first construct a perfect solution without limiting the cardinality. Then we modify that solution to let either (i) or (ii) be violated somewhat while reducing the cardinality of $\bar X_i$ to some target. To keep condition (i) satisfied while reducing the cardinality of $\bar X_i$, we just pick the $\bar x_i$ with the smallest probability and merge it with another value of $\bar X_i$. On the other hand, to reduce the cardinality while keeping condition (ii), we again start by finding the $\bar x_i$ with the lowest probability. Then we take its probability mass for each $x_i$ and $y$ and add it to the $\bar x_i$ that already has the highest likelihood for that combination. Note that $I(\bar X_i : Y)$ will no longer be zero after doing so. For both of these schemes (keeping (i) fixed or keeping (ii) fixed) we reduce cardinality until we achieve some target. For the results in this paper we always picked the cardinality of the input variable plus one as the target (cf. Sec. 3) and we always used the strategy where (ii) was satisfied and we let (i) be violated. In cases where perfect remainder information is impractical due to issues of finite data, we have to define “good remainder information” based on how well it preserves the bounds in Thm. 1. The best way to do this may depend on the application, as we saw in Sec. 6.

Appendix D More MNIST results

Fig. 11 shows the same type of results as Fig. 7 but using test data that was never seen in training. Note that no labels were used in any training.

There are several plausible ways to generate new, never-before-seen images using the sieve. Here we chose to draw the variables at the last layer of the sieve randomly and independently according to each of their marginal distributions over the training data. Then we inverted the sieve to recover hallucinated images. Some example results are shown in Fig. 12.

Figure 12: An attempt to generate new images using the sieve.

References

  1. Amari, Shun-ichi. Information geometry on hierarchy of probability distributions. Information Theory, IEEE Transactions on, 47(5):1701–1711, 2001.
  2. Barlow, Horace. Unsupervised learning. Neural computation, 1(3):295–311, 1989.
  3. Bell, Anthony J and Sejnowski, Terrence J. An information-maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129–1159, 1995.
  4. Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
  5. Comon, Pierre. Independent component analysis, a new concept? Signal processing, 36(3):287–314, 1994.
  6. Cover, Thomas M and Thomas, Joy A. Elements of information theory. Wiley-Interscience, 2006.
  7. Gregor, Karol and LeCun, Yann. Learning representations by maximizing compression. arXiv preprint arXiv:1108.1169, 2011.
  8. Hinton, Geoffrey E and Salakhutdinov, Ruslan R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  9. Hochreiter, Sepp and Schmidhuber, Jürgen. Feature extraction through lococode. Neural Computation, 11(3):679–714, 1999.
  10. Huffman, David A et al. A method for the construction of minimum redundancy codes. Proceedings of the IRE, 40(9):1098–1101, 1952.
  11. Hyvärinen, Aapo and Oja, Erkki. Independent component analysis: algorithms and applications. Neural networks, 13(4):411–430, 2000.
  12. Kraskov, Alexander, Stögbauer, Harald, Andrzejak, Ralph G, and Grassberger, Peter. Hierarchical clustering using mutual information. EPL (Europhysics Letters), 70(2):278, 2005.
  13. Linsker, Ralph. Self-organization in a perceptual network. Computer, 21(3):105–117, 1988.
  14. Madsen, Sarah, Ver Steeg, Greg, Daianu, Madelaine, Mezher, Adam, Jahanshad, Neda, Nir, Talia M., Hua, Xue, Gutman, Boris A., Galstyan, Aram, and Thompson, Paul M. Relative value of diverse brain mri and blood-based biomarkers for predicting cognitive decline in the elderly. In SPIE Medical Imaging, 2016.
  15. Painsky, Amichai, Rosset, Saharon, and Feder, Meir. Generalized binary independent component analysis. In Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 1326–1330. IEEE, 2014.
  16. Pepke, Shirley and Ver Steeg, Greg. Multivariate information maximization yields hierarchies of expression components in tumors that are both biologically meaningful and prognostic. bioRxiv, 2016. doi: 10.1101/043257. URL http://www.biorxiv.org/content/early/2016/03/11/043257.
  17. Schmidhuber, Jürgen. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
  18. Shannon, C.E. A mathematical theory of communication. The Bell System Technical Journal, 27:379–423, 1948.
  19. Simoncelli, Eero and Olshausen, Bruno. Natural image statistics and neural representation. Annu. Rev. Neurosci., 24, 2001.
  20. Steudel, Bastian and Ay, Nihat. Information-theoretic inference of common ancestors. Entropy, 17(4):2304–2327, 2015.
  21. Stögbauer, Harald, Kraskov, Alexander, Astakhov, Sergey A, and Grassberger, Peter. Least-dependent-component analysis based on mutual information. Physical Review E, 70(6):066123, 2004.
  22. Tishby, Naftali and Zaslavsky, Noga. Deep learning and the information bottleneck principle. arXiv preprint arXiv:1503.02406, 2015.
  23. Tishby, Naftali, Pereira, Fernando C, and Bialek, William. The information bottleneck method. arXiv:physics/0004057, 2000.
  24. Ver Steeg, Greg. Open source project implementing the discrete information sieve. http://github.com/gregversteeg/discrete_sieve.
  25. Ver Steeg, Greg and Galstyan, Aram. Discovering structure in high-dimensional data through correlation explanation. In Advances in Neural Information Processing Systems (NIPS), 2014.
  26. Ver Steeg, Greg and Galstyan, Aram. Maximally informative hierarchical representations of high-dimensional data. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2015. http://arxiv.org/abs/1410.7404.
  27. Ver Steeg, Greg, Galstyan, Aram, Sha, Fei, and DeDeo, Simon. Demystifying information-theoretic clustering. In International Conference on Machine Learning, 2014.
  28. Ver Steeg, Greg, Gao, Shuyang, Reing, Kyle, and Galstyan, Aram. Sifting common information from many variables. 2016. http://arxiv.org/abs/1606.02307.
  29. Vincent, Pascal, Larochelle, Hugo, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pp. 1096–1103. ACM, 2008.
  30. Watanabe, Satosi. Information theoretical analysis of multivariate correlation. IBM Journal of research and development, 4(1):66–82, 1960.
  31. Williams, P.L. and Beer, R.D. Nonnegative decomposition of multivariate information. arXiv:1004.2515, 2010.
  32. Ziv, Jacob and Lempel, Abraham. A universal algorithm for sequential data compression. IEEE Transactions on information theory, 23(3):337–343, 1977.