Supervised Learning With Quantum-Inspired Tensor Networks
Abstract
Tensor networks are efficient representations of high-dimensional tensors which have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing such networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize models for classifying images. For the MNIST data set we obtain less than 1% test set classification error. We discuss how the tensor network form imparts additional structure to the learned model and suggest a possible generative interpretation.
I Introduction
The connection between machine learning and statistical physics has long been appreciated Hopfield (1982); Amit et al. (1985); Derrida et al. (1987); Amit et al. (1987); Seung et al. (1992); Engel and Van den Broeck (2001); Malzahn and Opper (2005); Hinton et al. (2006); Mezard and Montanari (2009), but deeper relationships continue to be uncovered. For example, techniques used to pre-train neural networks Hinton et al. (2006) have more recently been interpreted in terms of the renormalization group Mehta and Schwab (2014). In the other direction there has been a sharp increase in applications of machine learning to chemistry, materials science, and condensed matter physics Fischer et al. (2006); Hautier et al. (2010); Rupp et al. (2012); Saad et al. (2012); Snyder et al. (2012); Pilania et al. (2013); Arsenault et al. (2014); Amin et al. (2016); Carrasquilla and Melko (2017), which are sources of highly structured data and could be a good testing ground for machine learning techniques.
A recent trend in both physics and machine learning is an appreciation for the power of tensor methods. In machine learning, tensor decompositions can be used to solve non-convex optimization tasks Anandkumar et al. (2014a, b) and make progress on many other important problems Phan and Cichocki (2010); Bengua et al. (2015); Novikov et al. (2015), while in physics, great strides have been made in manipulating large vectors arising in quantum mechanics by decomposing them as tensor networks Bridgeman and Chubb (2016); Schollwöck (2011); Evenbly and Vidal (2011). The most successful types of tensor networks avoid the curse of dimensionality by incorporating only low-order tensors, yet accurately reproduce very high-order tensors through a particular geometry of tensor contractions Evenbly and Vidal (2011).
Another context where very large vectors arise is in non-linear kernel learning, where input vectors x are mapped into a higher-dimensional space via a feature map Φ(x) before being classified by a decision function
f(x) = W · Φ(x)   (1)
The feature vector Φ(x) and weight vector W can be exponentially large or even infinite. One approach to deal with such large vectors is the well-known kernel trick, which only requires working with scalar products of feature vectors, allowing these vectors to be defined only implicitly Müller et al. (2001).
In what follows we propose a rather different approach. For certain learning tasks and a specific class of feature map Φ, we find the optimal weight vector W can be approximated as a tensor network, that is, as a contracted sequence of low-order tensors. Representing W as a tensor network and optimizing it directly (without passing to the dual representation) has many interesting consequences. Training the model scales linearly in the training set size; the cost for evaluating an input is independent of training set size. Tensor networks are also adaptive: dimensions of tensor indices internal to the network grow and shrink during training to concentrate resources on the particular correlations within the data most useful for learning. The tensor network form of W presents opportunities to extract information hidden within the trained model and to accelerate training by using techniques such as optimizing different internal tensors in parallel Stoudenmire and White (2013). Finally, the tensor network form is an additional type of regularization beyond the choice of feature map, and could have interesting consequences for generalization.
One of the best understood types of tensor networks is the matrix product state Östlund and Rommer (1995); Schollwöck (2011), also known as the tensor train decomposition Oseledets (2011). Matrix product states (MPS) have been very useful for studying quantum systems, and have recently been proposed for machine learning applications such as learning features of images Bengua et al. (2015) and compressing the weight layers of neural networks Novikov et al. (2015). Though MPS are best suited for describing one-dimensional systems, they are powerful enough to be applied to higher-dimensional systems as well.
There has been intense research into generalizations of MPS better suited for higher dimensions and critical systems Verstraete and Cirac (2004); Vidal (2007); Evenbly and Vidal (2009). Though our proposed approach could generalize to these other types of tensor networks, as a proof of principle we will only consider the MPS decomposition in what follows. The MPS decomposition approximates an order-N tensor by a contracted chain of N lower-order tensors shown in Fig. 1. (Throughout we will use tensor diagram notation; for a brief review see Appendix A.)
Representing the weights of Eq. (1) as an MPS allows us to efficiently optimize these weights and adaptively change their number by varying locally a few tensors at a time, in close analogy to the density matrix renormalization group algorithm used in physics White (1992); Schollwöck (2011). Similar alternating least squares methods for tensor trains have also been explored in applied mathematics Holtz et al. (2012).
This paper is organized as follows: we propose our general approach then describe an algorithm for optimizing the weight vector in MPS form. We test our approach, both on the MNIST handwritten digit set and on two-dimensional toy data to better understand the role of the local feature-space dimension d. Finally, we discuss the class of functions realized by our proposed models as well as a possible generative interpretation.
Those wishing to reproduce our results can find sample codes based on the ITensor library ITe () at: https://github.com/emstoudenmire/TNML
II Encoding Input Data
The most successful use of tensor networks in physics so far has been in quantum mechanics, where combining independent systems corresponds to taking the tensor product of their individual state vectors. With the goal of applying similar tensor networks to machine learning, we choose a feature map of the form
Φ^{s1 s2 ⋯ sN}(x) = φ^{s1}(x1) ⊗ φ^{s2}(x2) ⊗ ⋯ ⊗ φ^{sN}(xN)   (2)
The tensor Φ^{s1 s2 ⋯ sN} is the tensor product of the same local feature map φ^{sj}(xj) applied to each input component xj, where the indices sj run from 1 to d; the value d is known as the local dimension. Thus each xj is mapped to a d-dimensional vector, which we require to have unit norm; this implies each Φ(x) also has unit norm.
The full feature map Φ(x) can be viewed as a vector in a d^N-dimensional space or as an order-N tensor. The tensor diagram for Φ(x) is shown in Fig. 2. This type of tensor is said to be rank-1 since it is manifestly the product of N order-1 tensors. In physics terms, Φ(x) has the same structure as a product state or unentangled wavefunction.
For a concrete example of this type of feature map, consider inputs which are grayscale images with N pixels, where each pixel value ranges from 0.0 for white to 1.0 for black. If the grayscale pixel value of the jth pixel is xj ∈ [0, 1], a simple choice for the local feature map is
φ^{sj}(xj) = [cos(π/2 xj), sin(π/2 xj)]   (3)
and is illustrated in Fig. 3. The full image is represented as a tensor product of these local vectors. From a physics perspective, φ^{sj}(xj) is the normalized wavefunction of a single qubit where the “up” state corresponds to a white pixel, the “down” state to a black pixel, and a superposition corresponds to a gray pixel.
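As a concrete sketch, the mapping of Eqs. (2) and (3) takes only a few lines of numpy; this is an illustrative reimplementation, not the paper's ITensor code:

```python
import numpy as np

def local_map(x):
    """Local feature map of Eq. (3): pixel value x in [0,1] -> unit 2-vector."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def full_feature_tensor(pixels):
    """Tensor product of local maps, Eq. (2): order-N tensor with 2^N entries,
    here stored flattened via repeated Kronecker products."""
    phi = np.array([1.0])
    for x in pixels:
        phi = np.kron(phi, local_map(x))
    return phi

pixels = np.array([0.0, 1.0, 0.5])   # white, black, gray
phi = full_feature_tensor(pixels)
print(phi.shape)                     # (8,) = 2^3 components
print(np.linalg.norm(phi))           # 1.0: a product of unit-norm local vectors
```

Because each local vector has unit norm, the full feature tensor automatically has unit norm, matching the normalization noted above.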
While our choice of feature map was originally motivated from a physics perspective, in machine learning terms, the feature map Eq. (2) defines a kernel which is the product of local kernels, one for each component of the input data. Kernels of this type have been discussed previously (Vapnik, 2000, p. 193) and have been argued to be useful for data where no relationship is assumed between different components of the input vector prior to learning Waegeman et al. (2012).
Though we will use only the local feature map Eq. (3) in our MNIST experiment below, it would be interesting to try other local maps and to understand better the role they play in the performance of the model and the cost of optimizing the model.
III Multiple Label Classification
In what follows we are interested in multi-class learning, for which we choose a “one-versus-all” strategy, which we take to mean generalizing the decision function Eq. (1) to a set of functions f^ℓ(x) indexed by a label ℓ
f^ℓ(x) = W^ℓ · Φ(x)   (4)
and classifying an input x by choosing the label ℓ for which f^ℓ(x) is largest.
Since we apply the same feature map Φ to all input data, the only quantity that depends on the label ℓ is the weight vector W^ℓ. Though one can view W^ℓ as a collection of vectors labeled by ℓ, we will prefer to view W^ℓ as an order N+1 tensor where ℓ is a tensor index and f^ℓ(x) is a function mapping inputs to the space of labels. The tensor diagram for evaluating f^ℓ(x) for a particular input is depicted in Fig. 4.
IV MPS Approximation
Because the weight tensor W^ℓ_{s1 s2 ⋯ sN} has NL · d^N components, where NL is the number of labels, we need a way to regularize and optimize this tensor efficiently. The strategy we will use is to represent this high-order tensor as a tensor network, that is, as the contracted product of lower-order tensors.
A tensor network approximates the exponentially large set of components of a high-order tensor in terms of a much smaller set of parameters whose number grows only polynomially in the size of the input space. Various tensor network approximations impose different assumptions, or implicit priors, about the pattern of correlation of the local indices when viewing the original tensor as a distribution. For example, a MERA network can explicitly model power-law decaying correlations while a matrix product state (MPS) has exponentially decaying correlations Evenbly and Vidal (2011). Yet an MPS can still approximate power-law decays over quite long distances.
For the rest of this work, we will approximate W^ℓ as an MPS, which has the key advantage that methods for manipulating and optimizing MPS are well understood and highly efficient. Although MPS are best suited for approximating tensors with a one-dimensional pattern of correlations, they can be a powerful approach for decomposing tensors with two-dimensional correlations as well Stoudenmire and White (2012).
An MPS decomposition of the weight tensor has the form
W^ℓ_{s1 s2 ⋯ sN} = Σ_{α} A^{α1}_{s1} A^{α1 α2}_{s2} ⋯ A^{ℓ; αj αj+1}_{sj} ⋯ A^{αN−1}_{sN}   (5)
and is illustrated in Fig. 5. Each “virtual” index αj has a dimension m which is known as the bond dimension and is the (hyper)parameter controlling the MPS approximation. For sufficiently large m an MPS can represent any tensor Verstraete et al. (2004). The name matrix product state refers to the fact that any specific component of the full tensor W^ℓ_{s1 s2 ⋯ sN} can be recovered efficiently by summing over the αj indices from left to right via a sequence of matrix products.
In the above decomposition, the label index ℓ was arbitrarily placed on the jth tensor, but this index can be moved to any other tensor of the MPS without changing the overall W^ℓ tensor it represents. To do this, one contracts the jth tensor with one of its neighbors, then decomposes this larger tensor using a singular value decomposition such that ℓ now belongs to the neighboring tensor—see Fig. 7(b).
In typical physics applications the MPS bond dimension can range from 10 to 10,000 or even more; for the most challenging physics systems one wants to allow as large a bond dimension as possible since a larger dimension means more accuracy. However, when using MPS in a machine learning setting, the bond dimension controls the number of parameters of the model. So in contrast to physics, taking too large a bond dimension might not be desirable as it could lead to overfitting.
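To make the matrix-product structure concrete, the following numpy sketch builds a small random MPS with the label index placed on one interior tensor and evaluates f^ℓ(x) by the left-to-right contractions described above. The shapes, seed, and placement of the label index are illustrative choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m, n_labels = 6, 2, 4, 3   # small illustrative sizes

# MPS tensors: first is (d, m), last is (m, d), interior are (m, d, m);
# the label index l is placed (arbitrarily) on the middle tensor.
j_label = N // 2
mps = []
for j in range(N):
    if j == 0:
        shape = (d, m)
    elif j == N - 1:
        shape = (m, d)
    elif j == j_label:
        shape = (m, d, n_labels, m)
    else:
        shape = (m, d, m)
    mps.append(rng.normal(size=shape) / np.sqrt(m))

def evaluate(mps, local_vectors):
    """Contract the MPS with a product-state input, one site at a time."""
    L = local_vectors[0] @ mps[0]                      # shape (m,)
    for j in range(1, len(mps)):
        A = np.tensordot(mps[j], local_vectors[j], axes=([1], [0]))
        if A.ndim == 3:                                # labeled tensor: (m, n_labels, m)
            L = np.einsum('a,alb->lb', L, A)
        else:                                          # plain matrix product
            L = L @ A
    return L                                           # shape (n_labels,)

x = rng.uniform(size=N)
phis = [np.array([np.cos(np.pi * xi / 2), np.sin(np.pi * xi / 2)]) for xi in x]
scores = evaluate(mps, phis)      # one value f^l(x) per label
predicted = int(np.argmax(scores))
```

The cost of this evaluation is a handful of small matrix products per site, independent of the d^N size of the full weight tensor, which is the efficiency argument made above.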
V Sweeping Algorithm for Optimizing Weights
Inspired by the very successful density matrix renormalization group (DMRG) algorithm developed in physics, here we propose a similar algorithm which “sweeps” back and forth along an MPS, iteratively minimizing the cost function defining the classification task.
For concreteness, let us choose to optimize the quadratic cost
C = (1/2) Σ_{n=1}^{NT} Σ_ℓ (f^ℓ(x_n) − y^ℓ_n)²   (6)
where n runs over the NT training inputs and L_n is the known correct label for training input n. (The symbol y^ℓ_n = 1 if ℓ equals L_n and zero otherwise, and represents the ideal output of the function f^ℓ for the input x_n.)
Our strategy for reducing this cost function will be to vary only two neighboring MPS tensors at a time within the approximation Eq. (5). We could conceivably just vary one at a time but as will become clear, varying two tensors leads to a straightforward method for adaptively changing the MPS bond dimension.
Say we want to improve the tensors at sites j and j+1 which share the jth bond. Assume we have moved the label index ℓ to the MPS tensor at site j. First we combine the MPS tensors A^ℓ_{sj} and A_{sj+1} into a single “bond tensor” B by contracting over the index αj as shown in Fig. 6(a).
Next we compute the derivative of the cost function with respect to the bond tensor B in order to update it using a gradient descent step. Because the rest of the MPS tensors are kept fixed, let us show that to compute the gradient it suffices to feed, or project, each input x_n through the fixed “wings” of the MPS as shown on the left-hand side of Fig. 6(b). Doing so produces the projected, four-index version of the input Φ̃_n shown on the right-hand side of Fig. 6(b). The current decision function can be efficiently computed from this projected input Φ̃_n and the current bond tensor B as
f^ℓ(x_n) = Σ_{αj−1 αj+1} Σ_{sj sj+1} B^{αj−1 ℓ αj+1}_{sj sj+1} (Φ̃_n)^{sj sj+1}_{αj−1 αj+1}   (7)
or as illustrated in Fig. 6(c). Thus the leading-order update to the tensor B can be computed as
ΔB^ℓ = −∂C/∂B^ℓ   (8)
= Σ_{n=1}^{NT} Σ_{ℓ′} (y^{ℓ′}_n − f^{ℓ′}(x_n)) ∂f^{ℓ′}(x_n)/∂B^ℓ   (9)
= Σ_{n=1}^{NT} (y^ℓ_n − f^ℓ(x_n)) Φ̃_n   (10)
Note that the last expression above is a tensor with the same index structure as B^ℓ as shown in Fig. 6(d).
Assuming we have computed the gradient, we use it to compute a small update to B, replacing it with B + η ΔB as shown in Fig. 7(a), where η is a small empirical parameter used to control convergence. (Note that for this step one could also use the conjugate gradient method to improve the performance.) Having obtained our improved B, we must decompose it back into separate MPS tensors to maintain efficiency and apply our algorithm to the next bond. Assume the next bond we want to optimize is the one to the right (bond j+1). Then we can compute a singular value decomposition (SVD) of B, treating it as a matrix with a collective row index (αj−1, sj) and collective column index (ℓ, sj+1, αj+1) as shown in Fig. 7(b). Computing the SVD this way restores the MPS form, but with the ℓ index moved to the tensor on site j+1. If the SVD of B is given by
B^{αj−1 ℓ αj+1}_{sj sj+1} = Σ_{α′j αj} U^{α′j}_{αj−1 sj} S^{αj}_{α′j} (V†)^{ℓ αj+1}_{αj sj+1}   (11)
then to proceed to the next step we define the new MPS tensor at site j to be A_{sj} = U and the new tensor at site j+1 to be A^ℓ_{sj+1} = S V† where a matrix multiplication over the suppressed indices is implied. Crucially at this point, only the m largest singular values in S are kept and the rest are truncated (along with the corresponding columns of U and V) in order to control the computational cost of the algorithm. Such a truncation is guaranteed to produce an optimal approximation of the tensor B; furthermore if all of the MPS tensors to the left and right of B are formed from (possibly truncated) unitary matrices similar to the definition of A_{sj} above, then the optimality of the truncation of B applies globally to the entire MPS as well. For further background reading on these technical aspects of MPS, see Refs. Schollwöck, 2011 and Cichocki, 2014.
Finally, when proceeding to the next bond, it would be inefficient to fully project each training input over again into the configuration in Fig. 6(b). Instead it is only necessary to advance the projection by one site using the MPS tensor set equal to the unitary matrix U obtained from the SVD, as shown in Fig. 7(c). This allows the cost of each local step of the algorithm to remain independent of the size of the input space, making the total algorithm scale only linearly with input space size.
The above algorithm highlights a key advantage of MPS and tensor networks relevant to machine learning applications. Following the SVD of the improved bond tensor B, the dimension of the new MPS bond can be chosen adaptively based on the number of large singular values (defined by a threshold chosen in advance). Thus the MPS form of W^ℓ can be compressed as much as possible, and by different amounts on each bond, while still ensuring an optimal decision function.
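The core of the two-site update, Eqs. (7)-(10), followed by the SVD restoration of Fig. 7, can be sketched in numpy. Here the projected inputs Φ̃_n are random stand-ins for the contracted "wings," and the step size, shapes, and truncation cap are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, n_labels, n_train = 4, 2, 3, 50
eta = 1e-3      # small empirical step size (illustrative)
m_max = 6       # cap on the new bond dimension (illustrative)

# Projected training inputs (wings already applied) and one-hot labels.
phi = rng.normal(size=(n_train, m, d, d, m))
y = np.eye(n_labels)[rng.integers(n_labels, size=n_train)]

# Bond tensor B with indices (alpha_{j-1}, s_j, s_{j+1}, label, alpha_{j+1}).
B = rng.normal(size=(m, d, d, n_labels, m)) * 0.01

def cost(B):
    f = np.einsum('naijb,aijlb->nl', phi, B)       # f^l(x_n), Eq. (7)
    return 0.5 * np.sum((f - y) ** 2)              # quadratic cost, Eq. (6)

# Gradient step, Eqs. (8)-(10): dB = sum_n (y_n - f_n) outer phi_n.
c0 = cost(B)
f = np.einsum('naijb,aijlb->nl', phi, B)
dB = np.einsum('nl,naijb->aijlb', y - f, phi)
B = B + eta * dB                                   # B -> B + eta * dB

# Restore MPS form: SVD with row index (alpha_{j-1}, s_j) and
# column index (s_{j+1}, label, alpha_{j+1}), then truncate.
mat = B.reshape(m * d, d * n_labels * m)
U, S, Vt = np.linalg.svd(mat, full_matrices=False)
m_new = int(min(m_max, np.sum(S > 1e-10 * S[0]))) # keep the large singular values
A_j = U[:, :m_new].reshape(m, d, m_new)           # unlabeled tensor at site j
A_j1 = (np.diag(S[:m_new]) @ Vt[:m_new]).reshape(m_new, d, n_labels, m)
```

The truncated bond dimension `m_new` is chosen from the singular value spectrum, which is the adaptive compression step described in the text.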
The cost of the above algorithm scales polynomially in the bond dimension m and linearly in N, NL, and NT, where recall m is the MPS bond dimension; N the number of input components; NL the number of labels; and NT the number of training inputs. In practice, the cost is dominated by the large number of training inputs NT, so it would be very desirable to reduce this cost. One solution could be to use stochastic gradient descent, but while our experiments at blending this approach with the MPS sweeping algorithm often reached single-digit classification errors, we could not match the accuracy of the full gradient. Mixing stochastic gradient with MPS sweeping thus appears to be non-trivial but we believe it is a promising direction for further research.
Finally, we note that a related optimization algorithm was proposed for hidden Markov models in Ref. Rolfe and Cook, 2010. However, in place of our SVD above, Ref. Rolfe and Cook, 2010 uses a non-negative matrix factorization. In a setting where negative weights are allowed, the SVD is the optimal choice because it minimizes the distance between the original tensor and the product of factorized tensors. Furthermore, our framework for sharing weights across multiple labels and our use of local feature maps has interesting implications for training performance and for generalization.
VI MNIST Handwritten Digit Test
To test the tensor network approach on a realistic task, we used the MNIST data set, which consists of grayscale images of the digits zero through nine Yann LeCun (). The calculations were implemented using the ITensor library ITe (). Each image was originally 28 × 28 pixels, which we scaled down to 14 × 14 by averaging clusters of four pixels; otherwise we performed no further modifications to the training or test sets. Working with smaller images reduced the time needed for training, with the tradeoff being that less information was available for learning.
To approximate the classifier tensors W^ℓ as MPS, one must choose a one-dimensional ordering of the local indices sj. We chose a “zig-zag” ordering shown in Fig. 8, which on average keeps spatially neighboring pixels as close to each other as possible along the one-dimensional MPS path. We then mapped each grayscale image x to a tensor Φ(x) using the local map Eq. (3).
Using the sweeping algorithm in Section V to train the weights, we found the algorithm quickly converged in the number of passes, or sweeps, over the MPS. Typically only two or three sweeps were needed to see good convergence, with test error rates changing only hundredths of a percent thereafter.
Test error rates also decreased rapidly with the maximum MPS bond dimension m. For m = 10 we found both a training and test error of about 5%; for m = 20 the error dropped to only 2%. The largest bond dimension we tried was m = 120, where after three sweeps we obtained a test error of 0.97% (97 misclassified images out of the test set of 10,000 images); the training set error was 0.05% or 32 misclassified images.
VII Two-Dimensional Toy Model
To better understand the modeling power and regularization properties of the class of models presented in Sections II and III, consider a family of toy models where the input space is two-dimensional (N = 2). The hidden distribution we want to learn consists of two distributions, p_A(x) and p_B(x), from which we generate training data points labeled A or B respectively. For simplicity we only consider the square region x1 ∈ [0, 1] and x2 ∈ [0, 1].
To train the model, each training point is mapped to a tensor
Φ^{s1 s2}(x) = φ^{s1}(x1) ⊗ φ^{s2}(x2)   (12)
and the full weight tensors W^A_{s1 s2} and W^B_{s1 s2} are optimized directly using gradient descent.
When selecting a model, our main control parameter is the dimension d of the local indices s1 and s2. For the case d = 2, the local feature map φ^s(x) is chosen as in Eq. (3). For d > 2 we generalize φ^s(x) to be a normalized d-component vector as described in Appendix B.
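Since for d = 2 each W^ℓ is just a 2×2 tensor, this toy optimization fits in a few lines. The sketch below trains on two synthetic clouds of labeled points; the Gaussian centers, widths, learning rate, and iteration count are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def phi2(x):
    """Order-2 feature tensor of Eq. (12) for a 2D input (x1, x2), with the
    d = 2 local map of Eq. (3)."""
    v = lambda t: np.array([np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)])
    return np.outer(v(x[0]), v(x[1]))

# Two Gaussian clouds in the unit square (hypothetical stand-ins for p_A, p_B).
nA = nB = 100
XA = np.clip(rng.normal([0.75, 0.25], 0.15, size=(nA, 2)), 0, 1)
XB = np.clip(rng.normal([0.25, 0.75], 0.15, size=(nB, 2)), 0, 1)
X = np.vstack([XA, XB])
Y = np.array([[1.0, 0.0]] * nA + [[0.0, 1.0]] * nB)   # one-hot labels A, B

Phi = np.array([phi2(x) for x in X])                  # shape (200, 2, 2)
W = np.zeros((2, 2, 2))                               # W[label] is a 2x2 tensor

eta = 0.05
for step in range(500):
    f = np.einsum('nij,lij->nl', Phi, W)              # f^l(x_n) = W^l . Phi(x_n)
    W += eta * np.einsum('nl,nij->lij', Y - f, Phi) / len(X)

pred = np.argmax(np.einsum('nij,lij->nl', Phi, W), axis=1)
acc = np.mean(pred == np.argmax(Y, axis=1))
```

Classification follows the one-versus-all rule of Section III: each point is assigned the label whose f^ℓ is largest.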
VII.1 Regularizing By Local Dimension d
To understand how the flexibility of the model grows with increasing d, consider the case where p_A and p_B are overlapping distributions. Specifically, we take each to be a multivariate Gaussian centered respectively in the lower-right and upper-left of the unit square, and to have different covariance matrices. In Fig. 9 we show the theoretically optimal decision boundary that best separates A points (crosses, red region) from B points (squares, blue region), defined by the condition p_A(x) = p_B(x). To make a training set, we sample 100 points from each of the two distributions.
Next, we optimize the toy model for our overlapping training set for various d. The decision boundary learned by the d = 2 model in Fig. 10(a) shows good agreement with the optimal one in Fig. 9. Because the two sets are non-separable and this model is apparently well regularized, some of the training points are necessarily misclassified—these points are colored white in the figure.
The decision boundary shown in Fig. 10(b), for an intermediate local dimension, begins to show evidence of overfitting. The boundary is more complicated than for d = 2 and further from the optimal boundary. Finally, for a much larger local dimension there is extreme overfitting. The decision boundary is highly irregular and is more reflective of the specific sampled points than the underlying distribution. Some of the overfitting behavior reveals the structure of the model; at the bottom and top of Fig. 10(c) there are lobes of one color protruding into the other. These likely indicate that the finite local dimension still somewhat regularizes the model; otherwise it would be able to overfit even more drastically by just surrounding each point with a small patch of its correct color.
vii.2 NonLinear Decision Boundary
To test the ability of our proposed class of models to learn highly non-linear decision boundaries, consider the spiral-shaped boundary in Fig. 11(a). Here we take p_A and p_B to be non-overlapping, with p_A uniform on the red region and p_B uniform on the blue region.
In Fig. 11(b) we show the result of training a model with a larger local dimension on 500 sampled points, 250 for each region (crosses for region A, squares for region B). The learned model is able to classify every training point correctly, though with some overfitting apparent near regions with too many or too few sampled points.
VIII Interpreting Tensor Network Models
A natural question is which set of functions of the form f^ℓ(x) = W^ℓ · Φ(x) can be realized when using a tensor-product feature map of the form Eq. (2) and a tensor-network decomposition of W^ℓ. As we will argue, the possible set of functions is quite general, but taking the tensor network structure into account provides additional insights, such as determining which features the model actually uses to perform classification.
VIII.1 Representational Power
To simplify the question of which decision functions can be realized for a tensor-product feature map of the form Eq. (2), let us fix ℓ to a single label and omit it from the notation. We will also consider W to be a completely general order-N tensor with no tensor network constraint. Then f(x) is a function of the form
f(x) = Σ_{s1 ⋯ sN} W_{s1 s2 ⋯ sN} φ^{s1}(x1) φ^{s2}(x2) ⋯ φ^{sN}(xN)   (13)
If the functions {φ^s(x)}, s = 1, 2, …, d, form a basis for a Hilbert space of functions over x ∈ [0, 1], then the tensor product basis
φ^{s1}(x1) φ^{s2}(x2) ⋯ φ^{sN}(xN)   (14)
forms a basis for a Hilbert space of functions over x ∈ [0, 1]^N. Moreover, if the basis {φ^s(x)} is complete, then the tensor product basis is also complete and f(x) can be any square integrable function.
Next, consider the effect of restricting the local dimension to d = 2 as in the local feature map of Eq. (3) which was used to classify grayscale images in our MNIST benchmark in Section VI. Recall that for this choice of φ(x),
φ(xj = 0) = [1, 0]   (15)
φ(xj = 1) = [0, 1]   (16)
Thus if x is a black and white image with pixel values of only xj ∈ {0, 1}, then f(x) is equal to a single component W_{s1 ⋯ sN} of the weight tensor. Because each of these components is an independent parameter (assuming no further approximation of W), f(x) is a highly non-linear, in fact arbitrary, function when restricted to these black and white images.
Returning to the case of grayscale images with pixels xj ∈ [0, 1], f(x) cannot be an arbitrary function over this larger space of images for finite d. For example, if one considers the feature map Eq. (3), then when considering the dependence of f(x) on only a single pixel xj (all other pixels being held fixed), it has the functional form a cos(π/2 xj) + b sin(π/2 xj) where a and b are constants determined by the (fixed) values of the other pixels.
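The black-and-white observation above is easy to verify numerically: contracting a general weight tensor with the feature map of Eq. (3) at pixel values of exactly 0 or 1 returns a single tensor component. A numpy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 5, 2
W = rng.normal(size=(d,) * N)           # general order-N weight tensor

def phi_local(x):
    """Local feature map of Eq. (3)."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def f(x):
    """f(x) of Eq. (13): contract W with one local vector per pixel."""
    out = W
    for xi in x:
        out = np.tensordot(out, phi_local(xi), axes=([0], [0]))
    return out

x_bw = np.array([0, 1, 1, 0, 1])        # a "black and white" input
# phi(0) = [1, 0] and phi(1) = [0, 1], so f picks out one component of W:
print(np.isclose(f(x_bw), W[0, 1, 1, 0, 1]))
```

Each binary pixel selects one value of its index sj, so f(x_bw) reads off W at the corresponding multi-index, up to floating-point error.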
VIII.2 Implicit Feature and Kernel Selection
Of course we have not been considering an arbitrary weight tensor W^ℓ but instead approximating the weight tensor as an MPS tensor network. The MPS form implies that the decision function has interesting additional structure. One way to analyze this structure is to separate the MPS into a central tensor, or core tensor C^ℓ on some bond i and constrain all MPS site tensors to be left orthogonal for sites j ≤ i or right orthogonal for sites j ≥ i+1. This means W^ℓ has the decomposition
W^ℓ_{s1 ⋯ sN} = Σ_{α} U^{α1}_{s1} U^{α1 α2}_{s2} ⋯ C^{ℓ}_{αi αi+1} ⋯ V^{αN−2 αN−1}_{sN−1} V^{αN−1}_{sN}   (17)
as illustrated in Fig. 12(a). To say the U and V tensors are left or right orthogonal means that when viewed as matrices U_{(αj−1 sj) αj} and V_{αj (sj+1 αj+1)} these tensors have the property U†U = 1 and V V† = 1 where 1 is the identity; these orthogonality conditions can be understood more clearly in terms of the diagrams in Fig. 12(b). Any MPS can be brought into the form Eq. (17) through an efficient sequence of tensor contractions and SVD operations similar to the steps in Fig. 7(b).
The form in Eq. (17) suggests an interpretation where the decision function acts in three stages. First, an input x is mapped into the exponentially larger feature space defined by Φ(x). Next, the d^N-dimensional feature vector Φ is mapped into a much smaller m²-dimensional space by contraction with all the U and V site tensors of the MPS. This second step defines a new feature map Φ̃(x) with m² components as illustrated in Fig. 12(c). Finally, f^ℓ(x) is computed by contracting Φ̃(x) with C^ℓ.
To justify calling Φ̃(x) a feature map, it follows from the left- and right-orthogonality conditions of the U and V tensors of the MPS Eq. (17) that the indices αi and αi+1 of the core tensor C label an orthonormal basis for a subspace of the original feature space. The vector Φ̃(x) is the projection of Φ(x) into this subspace.
The above interpretation implies that training an MPS model uncovers a relatively small set of important features and simultaneously learns a decision function based only on these reduced features. This picture is similar to popular interpretations of the hidden and output layers of shallow neural network models Nielsen (2015). A similar interpretation of an MPS as learning features was first proposed in Ref. Bengua et al., 2015, though with quite a different scheme for representing data than what is used here. It is also interesting to note that the interpretation of the U and V tensors as combining and projecting features into only the most important combinations can be applied at any bond of the MPS. For example, the U tensor at site j can be viewed as defining a vector of αj features by forming linear combinations of products of the sj features and the αj−1 features defined by the previous U tensor, similar to the contraction in Fig. 7(c).
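The canonical form used above can be produced by a sweep of SVDs, as in Fig. 7(b). The sketch below left-orthogonalizes a random MPS and checks the U†U = 1 condition numerically; the shapes and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, m = 6, 2, 4

# Random MPS tensors with shape (left bond, physical, right bond).
A = [rng.normal(size=(1 if j == 0 else m, d, 1 if j == N - 1 else m))
     for j in range(N)]

# Sweep left to right: SVD each tensor as a (left*physical) x (right) matrix,
# keep U on the site, and push S V^T into the next tensor.
for j in range(N - 1):
    l, dd, r = A[j].shape
    U, S, Vt = np.linalg.svd(A[j].reshape(l * dd, r), full_matrices=False)
    k = U.shape[1]
    A[j] = U.reshape(l, dd, k)
    A[j + 1] = np.tensordot(np.diag(S) @ Vt, A[j + 1], axes=([1], [0]))

# Every tensor except the last now satisfies U^dagger U = identity.
devs = []
for j in range(N - 1):
    k = A[j].shape[2]
    Umat = A[j].reshape(-1, k)
    devs.append(np.linalg.norm(Umat.T @ Umat - np.eye(k)))
print(max(devs))   # close to machine precision
```

The overall tensor represented by the chain is unchanged by this sweep; only the gauge of the virtual indices is fixed, which is what makes the reduced features an orthonormal set.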
VIII.3 Generative Interpretation
Because MPS were originally introduced to represent wavefunctions of quantum systems Östlund and Rommer (1995), it is tempting to interpret a decision function f^ℓ(x) with an MPS weight vector as a wavefunction. This would mean interpreting |f^ℓ(x)|² for each fixed ℓ as a probability distribution function over the set of inputs x belonging to class ℓ. A major motivation for this interpretation would be that many insights from physics could be applied to machine learned models. For example, tensor networks in the same family as MPS, when viewed as defining a probability distribution, can be used to efficiently perform perfect sampling of the distribution they represent Ferris and Vidal (2012).
Let us investigate the properties of Φ(x) and W^ℓ required for a consistent interpretation of |f^ℓ(x)|² as a probability distribution. For |f^ℓ(x)|² to behave like a probability distribution for a broad class of models, we require for some integration measure dμ(x) that the distribution is normalized as
∫ dμ(x) |f^ℓ(x)|² = 1   (18)
no matter what weight vector W^ℓ the model has, as long as the weights are normalized as
Σ_{s1 ⋯ sN} |W^ℓ_{s1 ⋯ sN}|² = 1   (19)
This condition is automatically satisfied for tensor-product feature maps of the form Eq. (2) if the constituent local maps φ^s(x) have the property
∫ dμ(x) φ^{s}(x)* φ^{s′}(x) = δ_{s s′}   (20)
that is, if the components of φ are orthonormal functions with respect to the measure dμ(x). Furthermore, if one wants to demand, after mapping to feature space, that any input x itself defines a normalized distribution, then we also require the local vectors to be normalized as
Σ_s |φ^s(x)|² = 1   (21)
for all x.
Unfortunately neither the local feature map Eq. (3) nor its generalizations in Appendix B meet the first criterion Eq. (20). A different choice that satisfies both the orthogonality condition Eq. (20) and normalization condition Eq. (21) could be
φ(x) = [cos(πx), sin(πx)]   (22)
However, this map is not suitable for inputs like grayscale pixels since it is anti-periodic over the interval x ∈ [0, 1] and would lead to a periodic probability distribution. An example of an orthogonal, normalized map which is not periodic on x ∈ [0, 1] is
φ(x) = [e^{i3πx/2} cos(π/2 x), e^{−i3πx/2} sin(π/2 x)]   (23)
This local feature map meets the criteria Eqs. (20) and (21) if the integration measure is chosen to be dμ(x) = 2 dx.
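These orthogonality and normalization conditions are easy to check by numerical quadrature. The sketch below assumes the measure dμ(x) = 2 dx on [0, 1] and compares the real-valued maps of Eq. (3), which fails the orthogonality condition Eq. (20), and Eq. (22), which satisfies it:

```python
import numpy as np

x = np.linspace(0, 1, 200001)
mu = 2.0   # assumed measure: d mu(x) = 2 dx on [0, 1]

def gram(phi):
    """Gram matrix of a real 2-component local map under the measure:
    identity <=> the orthonormality condition Eq. (20) holds."""
    F = phi(x)                                         # shape (2, npts)
    return np.trapz(mu * F[:, None, :] * F[None, :, :], x, axis=-1)

phi_eq3  = lambda t: np.array([np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)])
phi_eq22 = lambda t: np.array([np.cos(np.pi * t),     np.sin(np.pi * t)])

print(np.round(gram(phi_eq3), 4))    # off-diagonal 2/pi: Eq. (3) not orthogonal
print(np.round(gram(phi_eq22), 4))   # identity: Eq. (22) satisfies Eq. (20)

# Both maps are pointwise normalized, Eq. (21): sum_s |phi^s(x)|^2 = 1.
print(np.allclose((phi_eq3(x) ** 2).sum(0), 1.0))
```

The cross term for Eq. (3) integrates to 2/π rather than zero, which is the failure of Eq. (20) noted in the text; the pointwise normalization Eq. (21) holds for both maps since cos² + sin² = 1.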
As a basic consistency check of the above generative interpretation, we performed an experiment on our toy model of Section VII, using the local feature map Eq. (23). Recall that our toy data can have two possible labels A and B. To test the generative interpretation, we first generated a single, random “hidden” weight tensor. From this weight tensor we sampled data points in a two-step process:

Sample a label ℓ = A or ℓ = B according to the probabilities of each label implied by the hidden weight tensor.

Sample a data point x according to the distribution |f̃^ℓ(x)|² for the chosen ℓ.
For each collection of sampled points we then trained a toy model with weight tensor W^ℓ using the log-likelihood cost function
C = −Σ_{n=1}^{Ns} log |f^{L_n}(x_n)|²   (24)
where recall L_n is the known correct label for training point n.
We repeated this procedure multiple times for various sample sizes Ns, each time computing the Kullback–Leibler divergence of the learned versus exact distribution
D_KL = Σ_ℓ ∫ dμ(x) p̃(ℓ, x) log( p̃(ℓ, x) / p(ℓ, x) )   (25)
where p̃(ℓ, x) = |f̃^ℓ(x)|² and p(ℓ, x) has a similar definition in terms of f^ℓ(x). The resulting average D_KL as a function of the number of sampled training points Ns is shown in Fig. 13 along with a fit with a single fitting parameter. The results indicate that given enough training data, the learning process can eventually recapture the original probabilistic model that generated the data.
IX Discussion
We have introduced a framework for applying quantum-inspired tensor networks to multi-class supervised learning tasks. While using an MPS ansatz for the model parameters worked well even for the two-dimensional data in our MNIST experiment, other tensor networks such as PEPS, which are explicitly designed for two-dimensional systems, may be more suitable and offer superior performance. Much work remains to determine the best tensor network for a given domain.
Representing the parameters of our model by a tensor network has many useful and interesting implications. It allows one to work with a family of non-linear kernel learning models with a cost that is linear in the training set size for optimization, and independent of training set size for evaluation, despite using a very expressive feature map (recall in our setup, the dimension of feature space is exponential in the size of the input space). There is much room to improve the optimization algorithm we described, adapting it to incorporate standard tricks such as mini-batches, momentum, or adaptive learning rates. It would be especially interesting to investigate unsupervised techniques for initializing the tensor network.
Additionally, while the tensor network parameterization of a model clearly regularizes it in the sense of reducing the number of parameters, it would be helpful to understand the consequences of this regularization for specific learning tasks. It could also be fruitful to include standard regularizations of the parameters of the tensor network, such as weight decay or other penalties. We were surprised to find good generalization without using explicit parameter regularization. For issues of interpretability, the fact that tensor networks are composed only of linear operations could be extremely useful. For example, it is straightforward to determine directions in feature space which are orthogonal to (or projected to zero by) the weight tensor W^ℓ.
There exist tensor network coarse-graining approaches for purely classical systems Efrati et al. (2014); Evenbly and Vidal (2015), which could possibly be used instead of our approach. However, mapping the data into an extremely high-dimensional Hilbert space is likely advantageous for producing models sensitive to high-order correlations among features. We believe there is great promise in investigating the power of quantum-inspired tensor networks for many other machine learning tasks.
Note: while preparing our final manuscript, Novikov et al. (2016) published a related framework for parameterizing supervised learning models with MPS (tensor trains).
Acknowledgments
We would like to acknowledge helpful discussions with Juan Carrasquilla, Josh Combes, Glen Evenbly, Bohdan Kulchytskyy, Li Li, Roger Melko, Pankaj Mehta, U. N. Niranjan, Giacomo Torlai, and Steven R. White. This research was supported in part by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development & Innovation. This research was also supported in part by the Simons Foundation Many-Electron Collaboration.
Appendix A Graphical Notation for Tensor Networks
Though matrix product states (MPS) have a relatively simple structure, more powerful tensor networks, such as PEPS and MERA, have such complex structure that traditional tensor notation becomes unwieldy. For these networks, and even for MPS, it is helpful to use a graphical notation. For more complete reviews of this notation and its uses in various tensor networks, see Refs. Bridgeman and Chubb, 2016; Cichocki, 2014.
The basic graphical notation for a tensor is to represent it as a closed shape. Typically this shape is a circle, though other shapes can be used to distinguish types of tensors (there is no standard convention for the choice of shapes). Each index of the tensor is represented by a line emanating from it; an order-N tensor has N such lines. Figure 14 shows examples of diagrams for tensors of order one, two, and three.
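As a concrete counterpart to the diagrams, the order of a tensor is simply its number of indices. A brief NumPy sketch (our own illustration, not part of the paper's framework) makes the correspondence explicit:

```python
import numpy as np

# Order-1 tensor (vector): drawn with one emanating line
v = np.zeros(3)
# Order-2 tensor (matrix): two lines
M = np.zeros((3, 4))
# Order-3 tensor: three lines
T = np.zeros((3, 4, 5))

# The number of lines in the diagram equals the number of array axes
print(v.ndim, M.ndim, T.ndim)  # 1 2 3
```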
To indicate that a certain pair of tensor indices are contracted, they are joined together by a line. For example, Fig. 15(a) shows the contraction of an order-1 tensor with an order-2 tensor; this is the usual matrix-vector multiplication. Figure 15(b) shows a more general contraction of an order-4 tensor with an order-3 tensor.
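Joining lines corresponds to summing over shared indices, which can be written compactly in einsum-style notation. The following NumPy sketch (the index letters and shapes are our own choices) mirrors the two kinds of contraction just described:

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) Order-2 with order-1: contracting the single shared index j
# is ordinary matrix-vector multiplication.
M = rng.random((4, 3))
v = rng.random(3)
w = np.einsum('ij,j->i', M, v)
assert np.allclose(w, M @ v)

# (b) Order-4 with order-3: contracting two shared indices (k, l)
# leaves the unpaired lines i, j, m, i.e. an order-3 result.
A = rng.random((2, 3, 4, 5))   # indices i, j, k, l
B = rng.random((4, 5, 6))      # indices k, l, m
C = np.einsum('ijkl,klm->ijm', A, B)
assert C.shape == (2, 3, 6)
```

Counting the unpaired index letters on the right of each `->` is the algebraic version of counting the leftover lines in a diagram.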
Graphical tensor notation offers many advantages over traditional notation. In graphical form, indices do not usually require names or labels since they can be distinguished by their location in the diagram. Operations such as the outer product, tensor trace, and tensor contraction can be expressed without additional notation; for example, the outer product is just the placement of one tensor next to another. For a network of contracted tensors, the order of the final resulting tensor can be read off by simply counting the number of unpaired lines left over. For example, a complicated set of tensor contractions can be recognized as giving a scalar result if no index lines remain unpaired.
Finally, we note that a related notation for sparse or structured matrices in a direct-sum formalism can be used, and appears extensively in Ref. Fishman and White, 2015.
Appendix B HigherDimensional Local Feature Map
As discussed in Section II, our strategy for using tensor networks to classify input data begins by mapping each component x_j of the input data vector x to a d-component vector \phi(x_j), j = 1, 2, \ldots, N. We always choose \phi(x_j) to be a unit vector in order to apply physics techniques which typically assume normalized wavefunctions.
For the case of d = 2 we have used the mapping
\phi(x_j) = \left[ \cos\!\left(\frac{\pi}{2} x_j\right),\ \sin\!\left(\frac{\pi}{2} x_j\right) \right] (26)
A straightforward way to generalize this mapping to larger d is as follows. Define \theta_j = \frac{\pi}{2} x_j. Because \cos^2\theta_j + \sin^2\theta_j = 1, then also
\left( \cos^2\theta_j + \sin^2\theta_j \right)^{d-1} = 1 (27)
Expand the above identity using the binomial coefficients \binom{d-1}{s-1}:
\sum_{s=1}^{d} \binom{d-1}{s-1} \left(\cos\theta_j\right)^{2(d-s)} \left(\sin\theta_j\right)^{2(s-1)} = 1 (28)
This motivates defining \phi^{s}(x_j) to be
\phi^{s}(x_j) = \sqrt{\binom{d-1}{s-1}}\, \left(\cos\frac{\pi}{2}x_j\right)^{d-s} \left(\sin\frac{\pi}{2}x_j\right)^{s-1} (29)
where recall that s runs from 1 to d. The above definition reduces to the d = 2 case of Eq. (26) and guarantees that \sum_{s} |\phi^{s}(x_j)|^2 = 1 for larger d. (These functions are actually a special case of what are known as spin coherent states.)
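As a check of this construction, the following NumPy sketch (the function name local_feature_map is our own) implements the map of Eq. (29) and verifies both the d = 2 reduction to Eq. (26) and the normalization guaranteed by the binomial identity:

```python
import numpy as np
from math import comb  # binomial coefficient, Python 3.8+

def local_feature_map(x, d):
    """Map a scalar x in [0, 1] to a d-component unit vector:
    phi_s(x) = sqrt(C(d-1, s-1)) cos^{d-s}(pi x/2) sin^{s-1}(pi x/2)."""
    theta = np.pi * x / 2
    return np.array([
        np.sqrt(comb(d - 1, s - 1))
        * np.cos(theta) ** (d - s)
        * np.sin(theta) ** (s - 1)
        for s in range(1, d + 1)
    ])

# d = 2 reduces to the [cos, sin] mapping of Eq. (26) ...
phi2 = local_feature_map(0.3, 2)
assert np.allclose(phi2, [np.cos(np.pi * 0.15), np.sin(np.pi * 0.15)])

# ... and the binomial identity guarantees a unit vector for any d.
for d in (2, 3, 5, 10):
    phi = local_feature_map(0.7, d)
    assert np.isclose(np.sum(phi ** 2), 1.0)
```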
Using the above mapping for larger d allows the product W \cdot \Phi(x) to realize a richer class of functions. One reason is that a larger local dimension d allows the weight tensor W to have many more components. Also, for larger d the components of \phi(x_j) contain ever higher frequency terms of the form \cos(\frac{\pi}{2}x_j), \cos^2(\frac{\pi}{2}x_j), \ldots, \cos^{d-1}(\frac{\pi}{2}x_j), and similar terms replacing \cos with \sin. The net result is that the decision functions realizable for larger d are composed from a more complete basis of functions and can respond to smaller variations in x.
References
 Hopfield (1982) J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” PNAS 79, 2554–2558 (1982).
 Amit et al. (1985) Daniel J. Amit, Hanoch Gutfreund, and H. Sompolinsky, “Spin-glass models of neural networks,” Phys. Rev. A 32, 1007–1018 (1985).
 Derrida et al. (1987) Bernard Derrida, Elizabeth Gardner, and Anne Zippelius, “An exactly solvable asymmetric neural network model,” EPL (Europhysics Letters) 4, 167 (1987).
 Amit et al. (1987) Daniel J. Amit, Hanoch Gutfreund, and H. Sompolinsky, “Information storage in neural networks with low levels of activity,” Phys. Rev. A 35, 2293–2303 (1987).
 Seung et al. (1992) HS Seung, Haim Sompolinsky, and N Tishby, “Statistical mechanics of learning from examples,” Physical Review A 45, 6056 (1992).
 Engel and Van den Broeck (2001) Andreas Engel and Christian Van den Broeck, Statistical mechanics of learning (Cambridge University Press, 2001).
 Malzahn and Opper (2005) Dörthe Malzahn and Manfred Opper, “A statistical physics approach for the analysis of machine learning algorithms on real data,” Journal of Statistical Mechanics: Theory and Experiment 2005, P11001 (2005).
 Hinton et al. (2006) Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh, “A fast learning algorithm for deep belief nets,” Neural Computation 18, 1527–1554 (2006).
 Mezard and Montanari (2009) Marc Mezard and Andrea Montanari, Information, physics, and computation (Oxford University Press, 2009).
 Mehta and Schwab (2014) Pankaj Mehta and David Schwab, “An exact mapping between the variational renormalization group and deep learning,” arxiv:1410.3831 (2014).
 Fischer et al. (2006) Christopher C. Fischer, Kevin J. Tibbetts, Dane Morgan, and Gerbrand Ceder, “Predicting crystal structure by merging data mining with quantum mechanics,” Nat. Mater. 5, 641–646 (2006).
 Hautier et al. (2010) Geoffroy Hautier, Christopher C. Fischer, Anubhav Jain, Tim Mueller, and Gerbrand Ceder, “Finding nature’s missing ternary oxide compounds using machine learning and density functional theory,” Chemistry of Materials 22, 3762–3767 (2010), http://dx.doi.org/10.1021/cm100795d .
 Rupp et al. (2012) Matthias Rupp, Alexandre Tkatchenko, Klaus-Robert Müller, and O. Anatole von Lilienfeld, “Fast and accurate modeling of molecular atomization energies with machine learning,” Phys. Rev. Lett. 108, 058301 (2012).
 Saad et al. (2012) Yousef Saad, Da Gao, Thanh Ngo, Scotty Bobbitt, James R. Chelikowsky, and Wanda Andreoni, “Data mining for materials: Computational experiments with compounds,” Phys. Rev. B 85, 104104 (2012).
 Snyder et al. (2012) John C. Snyder, Matthias Rupp, Katja Hansen, Klaus-Robert Müller, and Kieron Burke, “Finding density functionals with machine learning,” Phys. Rev. Lett. 108, 253002 (2012).
 Pilania et al. (2013) Ghanshyam Pilania, Chenchen Wang, Xun Jiang, Sanguthevar Rajasekaran, and Ramamurthy Ramprasad, “Accelerating materials property predictions using machine learning,” Scientific Reports 3, 2810 (2013).
 Arsenault et al. (2014) Louis-François Arsenault, Alejandro Lopez-Bezanilla, O. Anatole von Lilienfeld, and Andrew J. Millis, “Machine learning for many-body physics: The case of the Anderson impurity model,” Phys. Rev. B 90, 155136 (2014).
 Amin et al. (2016) Mohammad H. Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko, “Quantum Boltzmann machine,” arxiv:1601.02036 (2016).
 Carrasquilla and Melko (2017) Juan Carrasquilla and Roger G. Melko, “Machine learning phases of matter,” Nat Phys 13, 431–434 (2017).
 Anandkumar et al. (2014a) Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky, “Tensor decompositions for learning latent variable models,” Journal of Machine Learning Research 15, 2773–2832 (2014a).
 Anandkumar et al. (2014b) Animashree Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade, “A tensor approach to learning mixed membership community models,” J. Mach. Learn. Res. 15, 2239–2312 (2014b).
 Phan and Cichocki (2010) Anh Huy Phan and Andrzej Cichocki, “Tensor decompositions for feature extraction and classification of high dimensional datasets,” Nonlinear theory and its applications, IEICE 1, 37–68 (2010).
 Bengua et al. (2015) J.A. Bengua, H.N. Phien, and H.D. Tuan, “Optimal feature extraction and classification of tensors via matrix product state decomposition,” in 2015 IEEE Intl. Congress on Big Data (BigData Congress) (2015) pp. 669–672.
 Novikov et al. (2015) Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, and Dmitry Vetrov, “Tensorizing neural networks,” arxiv:1509.06569 (2015).
 Bridgeman and Chubb (2016) Jacob C. Bridgeman and Christopher T. Chubb, “Hand-waving and interpretive dance: An introductory course on tensor networks,” arxiv:1603.03039 (2016).
 Schollwöck (2011) U. Schollwöck, “The density-matrix renormalization group in the age of matrix product states,” Annals of Physics 326, 96–192 (2011).
 Evenbly and Vidal (2011) G. Evenbly and G. Vidal, “Tensor network states and geometry,” Journal of Statistical Physics 145, 891–918 (2011).
 Muller et al. (2001) K. R. Muller, S. Mika, G. Ratsch, K. Tsuda, and B. Scholkopf, “An introduction to kernel-based learning algorithms,” IEEE Transactions on Neural Networks 12, 181–201 (2001).
 Stoudenmire and White (2013) E. M. Stoudenmire and Steven R. White, “Real-space parallel density matrix renormalization group,” Phys. Rev. B 87, 155137 (2013).
 Östlund and Rommer (1995) Stellan Östlund and Stefan Rommer, “Thermodynamic limit of density matrix renormalization,” Phys. Rev. Lett. 75, 3537–3540 (1995).
 Oseledets (2011) I. Oseledets, “Tensor-train decomposition,” SIAM Journal on Scientific Computing 33, 2295–2317 (2011).
 Verstraete and Cirac (2004) F. Verstraete and J. I. Cirac, “Renormalization algorithms for quantum many-body systems in two and higher dimensions,” cond-mat/0407066 (2004).
 Vidal (2007) G. Vidal, “Entanglement renormalization,” Phys. Rev. Lett. 99, 220405 (2007).
 Evenbly and Vidal (2009) G. Evenbly and G. Vidal, “Algorithms for entanglement renormalization,” Phys. Rev. B 79, 144108 (2009).
 White (1992) Steven R. White, “Density matrix formulation for quantum renormalization groups,” Phys. Rev. Lett. 69, 2863–2866 (1992).
 Holtz et al. (2012) Sebastian Holtz, Thorsten Rohwedder, and Reinhold Schneider, “The alternating linear scheme for tensor optimization in the tensor train format,” SIAM Journal on Scientific Computing 34, A683–A713 (2012).
 (37) ITensor Library (version 2.0.7) http://itensor.org .
 Vapnik (2000) Vladimir Vapnik, The Nature of Statistical Learning Theory (Springer-Verlag New York, 2000).
 Waegeman et al. (2012) W. Waegeman, T. Pahikkala, A. Airola, T. Salakoski, M. Stock, and B. De Baets, “A kernel-based framework for learning graded relations from data,” IEEE Transactions on Fuzzy Systems 20, 1090–1101 (2012).
 Stoudenmire and White (2012) E.M. Stoudenmire and Steven R. White, “Studying two-dimensional systems with the density matrix renormalization group,” Annual Review of Condensed Matter Physics 3, 111–128 (2012).
 Verstraete et al. (2004) F. Verstraete, D. Porras, and J. I. Cirac, “Density matrix renormalization group and periodic boundary conditions: A quantum information perspective,” Phys. Rev. Lett. 93, 227205 (2004).
 Cichocki (2014) Andrzej Cichocki, “Tensor networks for big data analytics and large-scale optimization problems,” arxiv:1407.3124 (2014).
 Rolfe and Cook (2010) Jason T. Rolfe and Matthew Cook, “Multifactor expectation maximization for factor graphs,” (Springer Berlin Heidelberg, Berlin, Heidelberg, 2010) pp. 267–276.
 (44) Yann LeCun, Corinna Cortes, and Christopher J.C. Burges, “MNIST handwritten digit database,” http://yann.lecun.com/exdb/mnist/ .
 Nielsen (2015) Michael Nielsen, Neural Networks and Deep Learning (Determination Press, 2015).
 Ferris and Vidal (2012) Andrew J. Ferris and Guifre Vidal, “Perfect sampling with unitary tensor networks,” Phys. Rev. B 85, 165146 (2012).
 Efrati et al. (2014) Efi Efrati, Zhe Wang, Amy Kolan, and Leo P Kadanoff, “Realspace renormalization in statistical mechanics,” Reviews of Modern Physics 86, 647 (2014).
 Evenbly and Vidal (2015) G. Evenbly and G. Vidal, “Tensor network renormalization,” Phys. Rev. Lett. 115, 180405 (2015).
 Novikov et al. (2016) Alexander Novikov, Mikhail Trofimov, and Ivan Oseledets, “Exponential machines,” arxiv:1605.03795 (2016).
 Fishman and White (2015) Matthew T. Fishman and Steven R. White, “Compression of correlation matrices and an efficient method for forming matrix product states of fermionic Gaussian states,” Phys. Rev. B 92, 075132 (2015).