# Entanglement Entropy of Target Functions for Image Classification and Convolutional Neural Network

###### Abstract

The success of deep convolutional neural networks (CNNs) in computer vision, especially in image classification, calls for a new information theory for functions of images, rather than for images themselves. In this article, after establishing a deep mathematical connection between the image classification problem and the quantum spin model, we propose entanglement entropy, a generalization of the classical Boltzmann-Shannon entropy, as a powerful tool to characterize the information needed to represent a general function of an image. We prove that the entanglement entropy of the target function of a reasonable image classification problem obeys a sub-volume-law bound. Target functions of image classification therefore occupy only a small subspace of the whole Hilbert space. As a result, a neural network with a polynomial number of parameters is efficient for representing such target functions. The concept of entanglement entropy is also useful for characterizing the expressive power of different neural networks. For example, we show that to maintain the same expressive power, the number of channels $D$ of a convolutional neural network should scale with the number of convolution layers $n_c$ as $D \sim D_0^{1/n_c}$. Therefore, a deeper CNN with large $n_c$ is more efficient than a shallow one.

## I Introduction

Deep convolutional neural networks have achieved great success in computer vision LeCun et al. (1995, 1990); Krizhevsky et al. (2012); He et al. (2016). However, a complete theoretical understanding of how they work is still absent, despite efforts both numerical Zeiler and Fergus (2014); Yosinski et al. (2015) and analytical Bengio and Delalleau (2011); Eldan and Shamir (2016); Raghu et al. (2016). In particular, even for the simplest black-white image with $n$ pixels, $2^n$ parameters should be necessary to represent a general function of the image. However, in practice, a convolutional neural network with $\mathrm{poly}(n)$ parameters works quite well in image classification problems. The only way to resolve the above paradox is the following: target functions of image classification problems occupy only a small subspace of the whole function space, and CNNs are designed to represent functions in this subspace. Therefore, to further understand which kind of neural network architecture is better, we should first characterize this subspace. There are also other fundamental questions: why does a small convolutional kernel work well? Why is increasing the depth of the CNN more efficient than increasing the number of channels at each layer? In this article, we try to answer these questions using entanglement entropy, one of the most important concepts in modern theoretical physics.

It's well known that functions of images form a Hilbert space. However, it has not been emphasized before that this Hilbert space has a tensor product structure because of the locality of each pixel. Suppose we have a two-dimensional black-white image of size $L \times L$. To preserve locality, we should think of an image as a two-dimensional lattice, instead of a vector with dimension $L^2$. In the lattice representation of an image, as we will show in the main text, the Hilbert space has a tensor product structure $\mathcal{F} = \bigotimes_i h_i$, where $h_i$ can be thought of as a two-dimensional local Hilbert space at each pixel. Besides, we will show an amazing mathematical relation: this Hilbert space of functions of images is exactly isomorphic to the Hilbert space of a quantum spin model Wen (2004) on the same lattice. Basically, up to a normalization factor, any function of an image can be thought of as a wavefunction and then has a one-to-one correspondence with a quantum state of a quantum spin model.

Quantum spin models have been extensively studied over the last several decades, and entanglement entropy has been shown to be a powerful tool to characterize a wavefunction in the Hilbert space Eisert et al. (2010). Although the Hilbert space is exponentially large, a tensor network with $\mathrm{poly}(n)$ parameters is efficient for representing a general ground state wavefunction of a local Hamiltonian. The reason is that the entanglement entropy of these wavefunctions obeys an area-law bound (with log corrections in some cases). It turns out that most functions in the Hilbert space have volume-law entanglement entropy and need exponentially many parameters to represent. However, locality constrains the interesting wavefunctions (ground state wavefunctions) to an exponentially small subspace of the whole Hilbert space. Because of this locality constraint, tensor networks are successful in approximating these wavefunctions Orús (2014).

Because the Hilbert space of the image classification problem is mathematically equivalent to the Hilbert space of a quantum spin model, we expect techniques in one field to also have useful applications in the other. Indeed, the Matrix Product State, a special tensor network widely used in quantum physics numerical simulation, has already been shown to work for the MNIST handwritten-digit recognition problem Stoudenmire and Schwab (2016). Besides, the restricted Boltzmann machine, developed in the computer vision field, has been proposed as a variational ansatz for the wavefunction of a quantum spin model Carleo and Troyer (2017); Deng et al. (2017); Gao and Duan (2017); Huang and Moore (2017). In this article, we try to answer a more fundamental question: can entanglement entropy also be a useful concept for image classification problems and other computer vision problems?

The Boltzmann-Shannon entropy is a key tool for characterizing the information of an image in information theory Shannon (2001). Now, in the new era of artificial intelligence, we need an information theory for functions of images, instead of images themselves. Entanglement entropy, as a generalization of the Boltzmann-Shannon entropy, can be an efficient way to characterize the information needed to represent a function of an image. First, we need to emphasize that the definition of entanglement entropy is not restricted to quantum mechanics. Actually, for any Hilbert space with a local tensor product structure, bipartite entanglement entropy is well defined mathematically. Because functions of images form such a Hilbert space with tensor product structure, we can always define entanglement entropy. The only question is whether this concept is useful or not. In this article, we will show that entanglement entropy is a useful characterization of the difficulty of representing a target function. We will show that the entanglement entropy of target functions of image classification problems is bounded by a sub-volume law (very likely an area law for simple problems). Therefore one pixel only entangles locally with nearby pixels. As a result, a neural network with local connections (like a convolutional kernel) is efficient for representing such a function, and only $\mathrm{poly}(n)$ instead of $2^n$ parameters are needed. Entanglement entropy can also be a powerful tool to study the expressive power of different network architectures. For example, we will argue that the entanglement entropy of a deep convolutional neural network scales as $S \sim n_c \log D$, where $n_c$ is the number of convolution layers and $D$ is the number of hidden channels of each layer. Therefore, to keep the entanglement entropy of the CNN at the same level as that of the target function (and thus keep the same expressive power), the number of channels should scale as $D \sim D_0^{1/n_c}$, where $D_0$ is the number of channels for a shallow CNN with depth $n_c = 1$.

## II Problem Definition and Hilbert Space

In this section we define the image classification problem and discuss the structure of the Hilbert Space of functions of image.

### II.1 Problem Definition of Image Classification

For simplicity we consider the two-class image classification problem. Multi-class classification can be transformed into multiple two-class classification problems. To be specific, we consider the problem of classifying whether an image is a cat. Every image has $n = L \times L$ pixels and every pixel can be either $0$ or $1$. We define $X$ to be the set of all images; $X$ includes $2^n$ images in total. We also define the set of all complex-valued functions of images as $\mathcal{F} = \{f : X \to \mathbb{C}\}$.

We assume that there exists a target function of images $f^*$ defined as follows:

$$f^*(x) = \begin{cases} 1, & x \text{ is a cat} \\ 0, & \text{otherwise.} \end{cases} \tag{1}$$

For supervised learning, $f^*(x)$ is known for all training data. Supervised learning is defined as finding a function $f$ which approximates the target function well by solving the following optimization problem:

$$\min_{f \in \mathcal{F}} C[f] \tag{2}$$

where $C$ is a functional, a function on the function space $\mathcal{F}$, defined as:

$$C[f] = \sum_{x \in X} P(x)\, c\big(f(x), f^*(x)\big) \tag{3}$$

where $c$ is a cost function and $P(x)$ is a probability distribution over images, determined by the specific problem. In supervised learning, $C[f]$ can be approximated by a dataset $\mathcal{D}$:

$$C[f] \approx \frac{1}{|\mathcal{D}|} \sum_{x \in \mathcal{D}} c\big(f(x), f^*(x)\big) \tag{4}$$

It's hard to solve the optimization problem in Eq. 2 directly because $C$ is a functional. Thus we should first represent $f$ in a specific form with a finite number of parameters. In computer vision applications, the deep convolutional neural network (DCNN) shows good performance in representing $f$:

$$f(x) = f(x; \theta) \tag{5}$$

where $f(x; \theta)$ means a function specified by a deep convolutional neural network with a high-dimensional parameter vector $\theta$. In real applications, the dimension of the parameter vector is polynomial in $n$: $\dim \theta \sim \mathrm{poly}(n)$.

Then the optimization problem becomes a minimization problem of a multi-variable function:

$$\min_{\theta}\, C(\theta) \tag{6}$$

which can be solved by gradient descent methods because $C(\theta)$ is differentiable with respect to $\theta$.

### II.2 Hilbert Space

Next we will show that the space of all functions of an image is actually a Hilbert space with dimension $2^n$. First, it's obvious that this function space is a vector space, with addition and scalar multiplication defined as:

$$(f_1 + f_2)(x) = f_1(x) + f_2(x), \qquad (\lambda f)(x) = \lambda\, f(x) \tag{7}$$

where $f_1, f_2 \in \mathcal{F}$ and $\lambda \in \mathbb{C}$.

Then we define the inner product as:

$$\langle f_1, f_2 \rangle = \sum_{x \in X} f_1^*(x)\, f_2(x) \tag{8}$$

where $\langle f_1, f_2 \rangle$ stands for the inner product of $f_1$ and $f_2$.

It's easy to verify that this definition satisfies all the properties of an inner product. Therefore $\mathcal{F}$ is a Hilbert space. Next we show that its dimension is $2^n$. First we define $2^n$ functions (vectors) in $\mathcal{F}$:

$$e_i(x) = \delta_{x, x_i} = \begin{cases} 1, & x = x_i \\ 0, & \text{otherwise} \end{cases} \tag{9}$$

where $i = 1, \dots, 2^n$ and $x_i$ stands for the $i$th image in $X$. These are $2^n$ linearly independent vectors in $\mathcal{F}$, and it's easy to show that any function $f$ is a linear combination of these $e_i$:

$$f = \sum_{i=1}^{2^n} f(x_i)\, e_i \tag{10}$$

Besides, $\langle e_i, e_j \rangle = \delta_{ij}$, so $\{e_i\}$ forms an orthogonal basis of $\mathcal{F}$. Therefore the dimension of $\mathcal{F}$ is indeed $2^n$.
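The construction above can be checked numerically. The following sketch (our own toy example on a $2 \times 2$ binary image, not from the paper) builds the delta-function basis of Eq. 9 and verifies the expansion of Eq. 10 and the orthonormality used in Eq. 8:

```python
import numpy as np
from itertools import product

# Toy setting (our choice): a 2x2 binary image, n = 4 pixels.
n = 4
images = list(product([0, 1], repeat=n))   # the set X of all 2^n images
dim = len(images)                          # dim F = 2^n = 16

# Basis functions of Eq. (9): e_i(x) = 1 if x is the i-th image, else 0.
# As vectors indexed by images, these are the standard basis vectors.
basis = np.eye(dim)

# Any function f: X -> C is just its coefficient vector (f(x_1), ..., f(x_{2^n})).
rng = np.random.default_rng(0)
f = rng.normal(size=dim) + 1j * rng.normal(size=dim)

# Eq. (10): f = sum_i f(x_i) e_i, expansion in the delta-function basis.
f_reconstructed = sum(f[i] * basis[i] for i in range(dim))

# Eq. (8): <f1, f2> = sum_x conj(f1(x)) f2(x); the basis is orthonormal.
gram = basis.conj() @ basis.T
```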

### II.3 Tensor Product Structure of Hilbert Space

We have shown that functions of a size-$L \times L$ image form a $2^n$-dimensional Hilbert space $\mathcal{F}$, with $n = L^2$. Next we will show the tensor product structure of this Hilbert space. Any size-$L \times L$ image can be partitioned into two sub-images $A$ and $B$, as shown in Fig. 1. We label the sets of sub-images for the $A$ and $B$ regions as $X_A$ and $X_B$. Then functions of sub-image $A$ form a Hilbert space $\mathcal{F}_A$ with dimension $2^{N_A}$, where $N_A$ is the number of pixels in $A$. Similarly, functions of sub-image $B$ form a Hilbert space $\mathcal{F}_B$ with dimension $2^{N_B}$. Next we will prove that $\mathcal{F} = \mathcal{F}_A \otimes \mathcal{F}_B$.

As we showed in Section II.2, $\mathcal{F}_A$ has $2^{N_A}$ basis vectors $e^A_a$, each of which corresponds to one sub-image $a \in X_A$, as defined in Eq. 9. Similarly, $\mathcal{F}_B$ has $2^{N_B}$ basis vectors $e^B_b$ with $b \in X_B$.

Each original image $x$ can be uniquely decomposed into two sub-images $a$ and $b$; we label this as $x = (a, b)$. Then, according to Eq. 9, $x$ corresponds to a basis vector in $\mathcal{F}$: $e_x = e^A_a \otimes e^B_b$. It's easy to verify that this definition gives the tensor product structure $\mathcal{F} = \mathcal{F}_A \otimes \mathcal{F}_B$.

We can further decompose $A$ and $B$ following the above procedure. Finally, we can think of one image as composed of $n$ sub-images, each of which is just a single pixel. As shown above, each pixel carries a two-dimensional Hilbert space $h_i$, and the total Hilbert space is a tensor product:

$$\mathcal{F} = \bigotimes_{i=1}^{n} h_i \tag{11}$$

Therefore, the Hilbert space in the image classification problem has a tensor product structure. It can also be shown that this Hilbert space is mathematically equivalent to the Hilbert space of a quantum spin model: $\mathcal{F} \cong \mathcal{H} = \bigotimes_{i=1}^{n} \mathbb{C}^2$. Therefore, each function defined on the image set has a one-to-one correspondence with a wavefunction of a quantum spin model. Details of this equivalence are given in Appendix A. Because of this amazing equivalence, we can use mathematical tools developed in quantum physics to deal with functions of images in computer vision.
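The tensor product structure of Eq. 11 is concrete in code: a function stored as a length-$2^n$ vector can be reshaped into a tensor with one two-dimensional index per pixel, exactly like an $n$-spin wavefunction. A minimal sketch (our own construction, with $n = 4$):

```python
import numpy as np

# Sketch (our construction): the tensor product structure of Eq. (11) means a
# function on 4-pixel binary images can be reshaped into a tensor with one
# two-dimensional index per pixel, like a 4-spin wavefunction.
n = 4
rng = np.random.default_rng(1)
f = rng.normal(size=2**n)            # f(x) listed over all 2^n images

psi = f.reshape((2,) * n)            # one index of dimension 2 per pixel

# Pixel values (x1, x2, x3, x4) address a single amplitude f(x).
x = (1, 0, 1, 1)
index = int("".join(map(str, x)), 2) # position of image x in binary ordering
```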

## III Entanglement Entropy

As shown in Section II, both the computer vision field and the quantum physics field are trying to represent a vector in a $2^n$-dimensional Hilbert space. It is then natural to ask: is it possible to represent a general vector in this $2^n$-dimensional Hilbert space with $\mathrm{poly}(n)$ parameters? The answer is well known in quantum physics: it is not possible to approximate a general vector in this Hilbert space with $\mathrm{poly}(n)$ parameters. We can at best approximate vectors in a subspace whose functions have entanglement entropy smaller than the volume law. In practice, the ground state wavefunction of a local Hamiltonian can be represented efficiently by a tensor network. The reason is that the entanglement entropy of these functions is bounded by an area law, while most functions in the Hilbert space have volume-law entanglement entropy.

Empirically, deep convolutional neural networks are successful in image classification problems. We label the set of target functions of image classification problems as $\mathcal{F}_{\rm target}$. Inspired by quantum physics, we propose that entanglement entropy can also be useful to characterize functions in $\mathcal{F}_{\rm target}$. In particular, we will show that a function $f \in \mathcal{F}_{\rm target}$ satisfies a sub-volume-law bound (very likely an area law) for its entanglement entropy.

### III.1 Density Matrix

To make the following analysis easier, we will choose a simple normalization condition. We choose an image $x_0$ with label $1$ as a benchmark, and let $f(x)/f(x_0)$ represent the likelihood that $x$ also has label $1$. Then, both in image classification and in quantum physics, for a function $f$ we only care about the ratio $f(x)/f(x')$ for any images $x, x'$. Therefore, for simplicity we can always normalize the norm of the function to be $1$:

$$\langle f, f \rangle = \sum_{x \in X} |f(x)|^2 = 1 \tag{12}$$

Then for every $f$, we can define a density matrix Nielsen and Chuang (2002) as:

$$\rho_{ij} = f(x_i)\, f^*(x_j) \tag{13}$$

where $\rho_{ij}$ means the matrix entry with row index $i$ and column index $j$, and $x_i$ means the $i$th image in $X$.

It's convenient to write the density matrix in Dirac notation as:

$$\rho = |f\rangle \langle f| = \sum_{i,j} f(x_i)\, f^*(x_j)\, |x_i\rangle \langle x_j| \tag{14}$$

As we have shown in Eq. 9, each basis vector of the Hilbert space uniquely corresponds to an image, so we use $|x_i\rangle$ to denote this basis vector; $\langle x_j|$ is the corresponding covector. The density matrix in Eq. 14 is thus explicitly a linear transformation on the Hilbert space $\mathcal{F}$. Later we will see that the properties of the density matrix are independent of the basis.

Because of the normalization condition Eq. 12, we can easily prove that

$$\operatorname{Tr} \rho = \sum_i |f(x_i)|^2 = 1 \tag{15}$$

The density matrix can be thought of as a generalization of a probability distribution, with diagonal entries $\rho_{ii} = |f(x_i)|^2$. In computer vision, the ratio of diagonal terms $\rho_{ii}/\rho_{jj}$ can be seen as the ratio of probabilities $P(x_i)/P(x_j)$, where $P(x)$ is the probability that $x$ is a cat.
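As a minimal sanity check (our own toy example), one can build $\rho$ for a random normalized function and verify the trace condition of Eq. 15 along with purity, $\rho^2 = \rho$, which holds for any single function:

```python
import numpy as np

# Sketch (our toy example): build the density matrix of Eqs. (13)-(14) for a
# random normalized function on 4-pixel binary images and check Tr(rho) = 1.
n = 4
rng = np.random.default_rng(2)
f = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
f /= np.linalg.norm(f)               # normalization condition, Eq. (12)

rho = np.outer(f, f.conj())          # rho_ij = f(x_i) f*(x_j)

trace = np.trace(rho).real           # Eq. (15): Tr rho = 1
purity = np.trace(rho @ rho).real    # pure state: rho^2 = rho, so purity = 1
```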

The von Neumann entanglement entropy is defined as a generalization of the Boltzmann-Shannon entropy:

$$S = -\operatorname{Tr}\, \rho \log \rho \tag{16}$$

It can be proven that for any $f$, since $\rho = |f\rangle \langle f|$ is a pure state,

$$S = 0 \tag{17}$$

So this definition of entanglement entropy seems to be meaningless. However, we will show that entanglement entropy should be defined for a bipartite partition of an image. It's actually a measure of entanglement between two sub-images $A$ and $B$ for a target function $f$. One intuitive understanding is that it characterizes the nonlinearity of this function between parts $A$ and $B$.

### III.2 Bipartite Entanglement Entropy

We now define the bipartite von Neumann entanglement entropy. We divide an image into two parts $A$ and $B$, as shown in Fig. 1. Then the total Hilbert space can be decomposed into a tensor product $\mathcal{F} = \mathcal{F}_A \otimes \mathcal{F}_B$, where $\mathcal{F}_A$ and $\mathcal{F}_B$ are the Hilbert spaces defined on sub-images $A$ and $B$, with dimensions $2^{N_A}$ and $2^{N_B}$.

Any basis vector of $\mathcal{F}$ corresponds to a basis vector of $\mathcal{F}_A \otimes \mathcal{F}_B$, so we can label it as $|x\rangle = |a\rangle \otimes |b\rangle$, where $a \in X_A$ and $b \in X_B$. Here each image $x$ is decomposed into two parts $a$ and $b$; $|a\rangle$ and $|b\rangle$ are basis vectors of $\mathcal{F}_A$ and $\mathcal{F}_B$.

As we showed above, the density matrix is basis independent. Therefore it's convenient to work in the following notation:

$$\rho = \sum_{a,b,a',b'} f(a,b)\, f^*(a',b')\, |a\rangle |b\rangle \langle a'| \langle b'| \tag{18}$$

Then we can get a density matrix defined on $\mathcal{F}_A$ by tracing out the $B$ part:

$$\rho_A = \operatorname{Tr}_B\, \rho \tag{19}$$

Or equivalently, after using $\langle b | b' \rangle = \delta_{bb'}$, $\rho_A$ is

$$(\rho_A)_{aa'} = \sum_{b} f(a,b)\, f^*(a',b) \tag{20}$$

The density matrix defined on sub-image $A$ has the property:

$$\operatorname{Tr} \rho_A = 1 \tag{21}$$

But it no longer satisfies $\rho_A^2 = \rho_A$: in general $\rho_A$ is a mixed state.

More importantly, we can also define the entanglement entropy following Eq. 16 as

$$S_A = -\operatorname{Tr}\, \rho_A \log \rho_A \tag{22}$$

We list several important properties of entanglement entropy.

###### Property 1

For any partition $A$ and $B$, $S_A = S_B$.

###### Property 2

If the dimension of the matrix $\rho_A$ is $D_A$, then $S_A \le \log D_A$.

###### Property 3

$S_A$ remains unchanged under the unitary transformation $\rho_A \to U \rho_A U^{\dagger}$, where $U$ is a unitary matrix.

The last property means that entanglement entropy is basis independent. So we can always diagonalize $\rho_A$, because it's Hermitian. In the diagonal form, the eigenvalues $p_\alpha$ of $\rho_A$ form an ordinary probability distribution, and the entanglement entropy is just the Shannon entropy $S_A = -\sum_\alpha p_\alpha \log p_\alpha$. Then we know $S_A \le \log N_{\rm nz}$, where $N_{\rm nz}$ is the number of nonzero eigenvalues of $\rho_A$.
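Putting Eqs. 19-22 together, the bipartite entanglement entropy of any function on a few pixels can be computed directly. A sketch (our own toy example on 4 pixels with a 2+2 bipartition, not from the paper) that also checks Property 1, $S_A = S_B$:

```python
import numpy as np

# Sketch (our toy example): bipartite entanglement entropy of Eqs. (19)-(22)
# for a function on 4 pixels, split into region A (2 pixels) and B (2 pixels).
def entanglement_entropy(f, n_a, n_b):
    """S_A for a normalized function f on n_a + n_b binary pixels."""
    psi = f.reshape(2**n_a, 2**n_b)          # amplitudes f(a, b)
    rho_a = psi @ psi.conj().T               # Eq. (20): sum_b f(a,b) f*(a',b)
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]             # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

rng = np.random.default_rng(3)
f = rng.normal(size=16) + 1j * rng.normal(size=16)
f /= np.linalg.norm(f)

s_a = entanglement_entropy(f, 2, 2)

# Property 1 (S_A = S_B): trace out A instead and compare.
psi = f.reshape(4, 4)
rho_b = psi.T @ psi.conj()                   # sum_a f(a,b) f*(a,b')
ev = np.linalg.eigvalsh(rho_b)
ev = ev[ev > 1e-12]
s_b = float(-np.sum(ev * np.log(ev)))
```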

### III.3 Meaning of Entanglement Entropy

For a general function $f$, $S_A \neq 0$. To continue, we need to first understand the meaning of $S_A$. As a starting point, let's see what type of function has zero entanglement entropy.

###### Theorem 1

Any function $f$, if it can be written in a product form $f = f_A \otimes f_B$, meaning $f(x) = f_A(a)\, f_B(b)$ where $x = (a, b)$, $a \in X_A$ and $b \in X_B$, satisfies $S_A = 0$.

It's easy to prove that $\rho_A = |f_A\rangle \langle f_A|$, so $\rho_A$ is a pure state and thus $S_A = 0$. For this special form of function, parts $A$ and $B$ are totally independent and there is no entanglement between the two parts.

One special case is the famous logistic regression $f(x) \propto e^{\sum_i w_i x_i} = \prod_i e^{w_i x_i}$, where $x_i$ is the value of pixel $i$. From Theorem 1, we know the entanglement between any bipartition $A$ and $B$ is zero for this function. As a result, logistic regression cannot represent any function of an image which has nonzero entanglement between two partitions.
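Theorem 1 can be checked numerically. The sketch below (our own construction; the weights $w_i$ are arbitrary) evaluates a pixel-wise product function on all images of a 4-pixel toy and confirms that its Schmidt rank is 1, hence $S_A = 0$:

```python
import numpy as np
from itertools import product

# Sketch (our toy check of Theorem 1): a pixel-wise product function
# f(x) = prod_i exp(w_i * x_i) has zero entanglement for any bipartition.
n = 4
rng = np.random.default_rng(4)
w = rng.normal(size=n)

# Evaluate f on all 2^n images, then normalize (Eq. (12)).
f = np.array([np.exp(np.dot(w, x)) for x in product([0, 1], repeat=n)])
f /= np.linalg.norm(f)

# Bipartition: first 2 pixels (A) vs last 2 pixels (B). The Schmidt rank of
# the reshaped amplitude matrix is 1 for a product function, so S_A = 0.
psi = f.reshape(4, 4)
svals = np.linalg.svd(psi, compute_uv=False)
p = svals**2
p = p[p > 1e-12]
s_a = float(-np.sum(p * np.log(p)))
```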

A general function $f$ can be written in the Schmidt decomposition form:

$$f = \sum_{\alpha} \lambda_\alpha\, f_A^\alpha \otimes f_B^\alpha \tag{23}$$

where $f_A^\alpha \in \mathcal{F}_A$, $f_B^\alpha \in \mathcal{F}_B$ and $\lambda_\alpha \ge 0$. We also have $\sum_\alpha \lambda_\alpha^2 = 1$. Besides, different $f_A^\alpha$ (and different $f_B^\alpha$) are mutually orthogonal: $\langle f_A^\alpha, f_A^\beta \rangle = \delta_{\alpha\beta}$.

After tracing over region $B$, we get

$$\rho_A = \sum_{\alpha} \lambda_\alpha^2\, |f_A^\alpha\rangle \langle f_A^\alpha| \tag{24}$$

Therefore, in the basis $\{f_A^\alpha\}$, $\rho_A$ is a diagonal matrix with diagonal elements $\lambda_\alpha^2$. We have the following theorem:
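Numerically, the Schmidt decomposition of Eq. 23 is just a singular value decomposition of the amplitude matrix $f(a, b)$, and Eq. 24 says the eigenvalues of $\rho_A$ are the squared singular values. A toy check (our own example):

```python
import numpy as np

# Sketch (our toy example): the Schmidt decomposition of Eq. (23) is an SVD of
# the amplitude matrix f(a, b); the squared singular values are the diagonal
# elements lambda_alpha^2 of rho_A in Eq. (24).
rng = np.random.default_rng(5)
f = rng.normal(size=16)
f /= np.linalg.norm(f)

psi = f.reshape(4, 4)                  # f(a, b) for a 2+2 pixel bipartition
u, lam, vh = np.linalg.svd(psi)        # f = sum_alpha lam_alpha u[:,alpha] vh[alpha,:]

# rho_A = psi psi^T has eigenvalues lam_alpha^2 (descending, like lam).
rho_a = psi @ psi.T
evals = np.sort(np.linalg.eigvalsh(rho_a))[::-1]
```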

###### Theorem 2

For any function $f$ and any partition $A$ and $B$, the entanglement entropy is bounded by a volume law: $S_A \le \min(N_A, N_B)\, \log 2$.

The theorem follows naturally from Property 2 of entanglement entropy, since $\rho_A$ has at most $2^{\min(N_A, N_B)}$ nonzero eigenvalues. We have the following definition.

###### Definition 1

A function $f$ has volume-law entanglement entropy if $S_A \propto N_A$ for a typical bipartite partition into $A$ and $B$, where $N_A$ is the number of pixels in the smaller region $A$.

The volume law can be intuitively understood as follows. For most functions $f$, the two partitions $A$ and $B$ are not independent but entangled. As a result, the restriction of $f$ to part $A$ cannot be generated by one single function: it depends on the state of the pixels in region $B$. In general, part $B$ has $2^{N_B}$ possible states and we need up to $2^{N_B}$ independent functions on $A$ to describe $f$. Thus, in the diagonal form of $\rho_A$, there are of order $2^{\min(N_A, N_B)}$ nonzero diagonal elements, and thus $S_A \sim \min(N_A, N_B)\, \log 2$.

One intuitive picture of entanglement entropy is the range of pixels entangled with any one pixel. In the case of a volume law, one pixel in part $A$ is entangled with every pixel in part $B$, and the total entanglement entropy is proportional to the number of pixels: a volume law. To represent a volume-law function, a fully connected network is necessary; a local connection like a convolutional kernel apparently cannot represent such a function.

In general, $2^{O(n)}$ parameters are necessary to represent a volume-law function. However, for a function with area-law entanglement entropy, $\mathrm{poly}(n)$ parameters may be enough.

###### Definition 2

A function $f$ has area-law entanglement entropy if $S_A \propto L_\partial$ for any bipartite partition into $A$ and $B$, where $L_\partial$ is the length of the boundary between $A$ and $B$.

Area-law entanglement entropy implies that one pixel is only locally entangled with pixels in its neighborhood. Thus the entanglement between parts $A$ and $B$ comes only from the boundary and is proportional to $L_\partial$. Therefore, to represent a function with area-law entanglement entropy, we only need local connections between pixels, such as a convolutional kernel with small width.

### III.4 Examples of Volume-Law and Area-Law Image Classification Problems

In this section we provide two examples of image classification problems. We will show that the target function of one problem has volume-law entanglement entropy, while that of the other has an area law.

#### III.4.1 Volume-Law Example: Random Image Set

Consider the following image classification problem. We randomly generate a set of images $X_1$ and label these images as $1$. Images not in this set are labeled as $0$. The image classification problem is then to learn this set $X_1$ by supervised learning. One can imagine that this task is impossible for any neural network, because the images in $X_1$ have no pattern at all.

Next we give a quantitative statement of "no pattern" by showing that the corresponding target function has volume-law entanglement entropy. The target function, as defined in Eq. 1, can be thought of as a random vector in the Hilbert space $\mathcal{F}$. It has been shown that a randomly chosen vector in the Hilbert space has volume-law entanglement entropy because it corresponds to a thermalized state at almost infinite temperature Page (1993). Because the target function has volume-law entanglement entropy, it's impossible to represent it with a simple locally connected neural network. To represent such a function, long-range connections with an exponentially large number of parameters are necessary.
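Page's volume law is easy to see numerically. The sketch below (our own check; a real Gaussian random vector stands in for a random state, and the sizes are our choice) shows that a random function comes close to the maximal entropy $N_A \log 2$:

```python
import numpy as np

# Sketch (our numerical check of Page's result): a random function on n pixels
# has near-maximal bipartite entanglement, S_A ~ N_A log 2, for N_A << N_B.
def half_entropy(n, n_a, seed):
    rng = np.random.default_rng(seed)
    f = rng.normal(size=2**n)
    f /= np.linalg.norm(f)
    psi = f.reshape(2**n_a, 2 ** (n - n_a))
    p = np.linalg.svd(psi, compute_uv=False) ** 2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

n, n_a = 12, 4                       # 12 "pixels", region A holds 4 of them
s = half_entropy(n, n_a, seed=6)
s_max = n_a * np.log(2)              # volume-law ceiling for region A
```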

#### III.4.2 Area-Law Example: Recognizing Closed Loops

We now give an example of an image classification problem with an area-law entangled target function. The task is closed-loop recognition. If an image contains only closed loops, the label is $1$. If there is any open string in the image, the label is $0$. This task can be efficiently accomplished by training a simple convolutional neural network. The target function of this problem can be analytically proven to have area-law entanglement entropy because it corresponds to the famous quantum loop gas state in the toric code Kitaev (2003). The intuition is that to decide whether a line is a closed loop or an open string, it's only necessary to check some local constraints. As a result, pixels entangle locally, and a small convolutional kernel can be used for this problem.

## IV Sub-Volume-Law Entanglement Entropy of Target Functions for Image Classification

We have already seen that entanglement entropy is a powerful tool to measure the difficulty of representing a function. Functions with volume-law entanglement entropy generally need $2^{O(n)}$ parameters to approximate, while functions with area-law entanglement entropy can be approximated by short-range connected networks with $\mathrm{poly}(n)$ parameters. As CNNs are quite successful in image classification, it's natural to conjecture that target functions in image classification problems are area-law entangled. Next we justify this conjecture.

Suppose we have an image classification problem. The target function is $f(x) = 1$ if $x$ has label $1$, and $f(x) = 0$ otherwise. We have a partition into $A$ and $B$. The boundary $\partial$ is included in region $A$ and the length of the boundary is $L_\partial$. We label the set of images with label $1$ as $X_1$.

Then the density matrix is:

$$\rho = \frac{1}{N_1} \sum_{x, x' \in X_1} |x\rangle \langle x'| = \frac{1}{N_1} \sum_{(a,b),\, (a',b') \in X_1} |a\rangle |b\rangle \langle a'| \langle b'| \tag{25}$$

where $N_1$ is the number of label-$1$ images, $x = (a, b)$ and $x' = (a', b')$; $a, a'$ ($b, b'$) are sub-images in region $A$ ($B$).

Next we trace over $B$:

$$\rho_A = \operatorname{Tr}_B\, \rho = \frac{1}{N_1} \sum_{a, a'} n(a, a')\, |a\rangle \langle a'| \tag{26}$$

where $n(a, a')$ is the number of possible $b$ which can generate a label-$1$ image by combining with both $a$ and $a'$.

Naively, $\rho_A$ is a $2^{N_A} \times 2^{N_A}$ matrix. However, we can organize it into blocks with the following natural assumption.

###### Assumption 1

If two label-$1$ images are exactly the same in region $B$, they must also be the same at the boundary $\partial$.

The assumption follows naturally from the continuity of part $B$ if the image classification problem concerns an object which is locally smooth. With this assumption, we know that $n(a, a') \neq 0$ only if $a$ and $a'$ have the same boundary configuration. There are $2^{L_\partial}$ possible states of the boundary. Therefore the density matrix $\rho_A$ can be organized into $2^{L_\partial}$ blocks, each of which corresponds to one state of the boundary.

However, $n(a, a') \neq 0$ may not hold even if $a$ and $a'$ have the same boundary configuration. We need another assumption about the image classification problem. We label the subregion of width $\xi$ within region $A$ close to the boundary as $\tilde{A}$, as shown in Fig. 1. In the following, we will assume that whether $n(a, a') \neq 0$ only depends on the states of $a$ and $a'$ in this region $\tilde{A}$. For each $a$, we label $X_B(a)$ as the set of $b$ which can extend $a$ to a label-$1$ image $(a, b)$.

###### Assumption 2

If $b \in X_B(a)$ and $b \in X_B(a')$ for some $b$, then $n(a, a') \neq 0$, and whether this holds depends only on the states of $a$ and $a'$ in region $\tilde{A}$.

The above assumption trivially holds for $\tilde{A} = A$. However, we expect $\xi \sim O(1)$ for simple image classification problems with locality. The assumption means that whether two sub-images can extend to label-$1$ images with the same part $b$ depends only on their states in region $\tilde{A}$, not on the inner region of $A$. The assumption is true when the image in part $B$ is a smooth extension of the boundary region. With this assumption, we define the following basis:

$$|\psi_{\tilde{a}}\rangle = \frac{1}{\sqrt{Z_{\tilde{a}}}} \sum_{a\,:\, a|_{\tilde{A}} = \tilde{a}} |a\rangle \tag{27}$$

where $Z_{\tilde{a}}$ is a normalization factor and the sum runs over sub-images $a$ that appear in label-$1$ images and agree with $\tilde{a}$ on $\tilde{A}$.

There are $2^{\xi L_\partial}$ possible $\tilde{a}$. In terms of these orthogonal basis vectors,

$$\rho_A = \sum_{\tilde{a}, \tilde{a}'} (\rho_A)_{\tilde{a}\tilde{a}'}\, |\psi_{\tilde{a}}\rangle \langle \psi_{\tilde{a}'}| \tag{28}$$

so $\rho_A$ is a matrix with dimension $2^{\xi L_\partial}$. Then we know immediately that the entanglement entropy satisfies $S_A \le \xi L_\partial \log 2$.

###### Theorem 3

For an image classification problem satisfying the above two assumptions, the entanglement entropy of the target function is bounded by $S_A \le \xi L_\partial \log 2$. Here $\xi$ is a characterization of the range of entanglement of each image classification problem.

Thus $\xi$ can be thought of as the range of entanglement. For such a target function, a pixel only entangles with pixels within a distance $\xi$. Note that entanglement between pixels is defined for a function $f$. What we mean is that, to approximate such a function $f$, one pixel needs to entangle only with nearby pixels in the network (whatever the network is: a convolutional neural network, a tensor network, or a network not proposed yet).

In summary, we have argued that the entanglement entropy of target functions of image classification problems is bounded by a sub-volume law $S_A \le \xi L_\partial \log 2$. Here $\xi$ can be seen as a characterization of the difficulty of each classification problem. For a simple task like MNIST (hand-written digit recognition), $\xi \sim O(1)$ is a reasonable estimate and the target function should have area-law entanglement entropy. Some complicated tasks may have $\xi \sim L^{\alpha}$ with $\alpha < 1$. But we believe volume-law entanglement entropy with $\xi \sim L$ is very rare because of locality.

It's hard to analytically extract $\xi$ for each image classification problem, but a numerical calculation of the entanglement entropy may be possible. We leave this to future work.
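As a hint of what such a numerical calculation could look like, the toy below (entirely our own construction, not from the paper) builds the exact target function of a tiny 6-pixel problem whose labeling rule is local to the boundary between two 3-pixel halves, and finds $S_A = \log 2$, consistent with a range of entanglement $\xi \sim O(1)$:

```python
import numpy as np
from itertools import product

# Sketch (our construction): estimate the entanglement entropy of a target
# function directly from its label-1 set, following Eqs. (25)-(26).
n_a, n_b = 3, 3
images = list(product([0, 1], repeat=n_a + n_b))

# Labeling rule local to the boundary: the last pixel of region A must equal
# the first pixel of region B.
label_1 = [x for x in images if x[n_a - 1] == x[n_a]]

# Normalized target function: f(x) = 1/sqrt(N_1) on X_1, else 0.
f = np.zeros(2 ** (n_a + n_b))
for x in label_1:
    f[int("".join(map(str, x)), 2)] = 1.0
f /= np.linalg.norm(f)

psi = f.reshape(2**n_a, 2**n_b)
p = np.linalg.svd(psi, compute_uv=False) ** 2
p = p[p > 1e-12]
s_a = float(-np.sum(p * np.log(p)))   # log 2: one bit crosses the boundary
```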

## V Application to Convolutional Neural Network

We have shown that the entanglement entropy of the target function of an image classification problem is bounded by a sub-volume law. Next we use entanglement entropy to characterize the expressive power of different neural network architectures.

Specifically, we consider a deep CNN with pooling layers and $n_c$ convolution layers between pooling layers. The number of channels at each layer is denoted as $D$; for simplicity we assume each layer has the same $D$. The architecture of a CNN is very similar to the Multiscale Entanglement Renormalization Ansatz (MERA) Vidal (2007). It's reasonable that a convolutional neural network is also performing entanglement renormalization, like MERA. The pooling layer of a CNN is similar to a block-spin renormalization group step. For an image classification problem with scale invariance, we need a scale-invariant ansatz with divergent correlation length.

In practice it's also found that a CNN with larger $D$ works better, while the size of the convolution kernel can be small. We can understand the role of the convolutional layer as a disentangler in MERA. Before the pooling layer, which reduces a block of pixels to one, we must extract the most important features of this block and its neighbors to reduce the information lost during the pooling (RG) process. In formal language, we must keep the entanglement between each block and its neighbors. Because the entanglement entropy of the target function is smaller than the volume law (very likely an area law), pixels are only locally entangled. Thus a fully connected network is not necessary and we can use a small convolution kernel. The number of channels $D$ is similar to the bond dimension in MERA. As shown by numerical experiments, each channel represents a feature of the original kernel at the previous layer Zeiler and Fergus (2014). In quantum physics language, each channel represents a disentangled state of the corresponding kernel, which is exactly the role played by the disentangler in MERA. The CNN is trained to extract the most important features of this kernel; in quantitative language, it's trained to change the original basis of this kernel to new vectors, minimizing the entanglement that will be lost during the following pooling process. By analogy with MERA, the entanglement entropy of a CNN scales as $S \sim n_c \log D$ Vidal (2008). We want the entanglement entropy of the CNN to be at the same level as that of the target function. Then $D \sim e^{S/n_c}$ is needed to represent a target function with entanglement entropy $S$; equivalently, $D \sim D_0^{1/n_c}$, where $D_0 = e^{S}$ is the number of channels required at depth $n_c = 1$. In other words, this keeps the expressive power of the CNN. It's then obvious that increasing $n_c$ is much more efficient than increasing $D$.
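The claimed trade-off is simple arithmetic once the MERA-like scaling $S \sim n_c \log D$ is assumed (that scaling, not the code, is the nontrivial input). A sketch with a hypothetical $D_0$:

```python
import math

# Sketch (our illustration, with the scaling S ~ n_c * log D assumed from the
# MERA analogy): to hold the CNN's entanglement capacity fixed, the required
# channel count drops as a power of the depth, D = D_0**(1/n_c).
def channels_needed(d0, n_c):
    """Channels per layer so n_c layers match a depth-1 net with d0 channels."""
    return d0 ** (1.0 / n_c)

d0 = 4096                      # hypothetical channel count for a depth-1 network
depths = [1, 2, 4, 6, 12]
table = {n_c: channels_needed(d0, n_c) for n_c in depths}

# The capacity n_c * log D is the same for every depth in the table.
capacity = {n_c: n_c * math.log(d) for n_c, d in table.items()}
```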

## VI Conclusion

In conclusion, we propose using entanglement entropy to characterize the information needed to represent the target function of an image classification problem. We show that the entanglement entropy is bounded by a sub-volume law (possibly even an area law) for target functions of image classification problems because of locality. Therefore $\mathrm{poly}(n)$ parameters are enough to represent a target function. We can also use entanglement entropy to characterize the expressive power of a neural network architecture. Specifically, we show that the entanglement entropy of a deep CNN scales as $S \sim n_c \log D$. Therefore a deeper CNN with larger $n_c$ is much more efficient than a shallow one.

Many directions are open for future work. First, numerical techniques should be developed to measure the entanglement entropy of each image classification problem and other computer vision problems. Second, as we have shown a deep connection between quantum physics and image classification, ideas and methods in one field may have applications in the other. Finally, this article focuses on problems of images, which have a spatial lattice. Time series data are involved in speech recognition and natural language processing problems. It remains an open question whether we can also characterize functions of time series using entanglement entropy or similar concepts.

## VII Acknowledgements

We would like to thank T. Senthil, Roger G. Melko and Yijia Zhang for previewing the manuscript and for helpful comments. We also thank Liujun Zou, Michael Pretko, Zhehao Dai, Yang Qi, Li Jing, Zheng Ma, Liyang Xiong, and Kang Yang for useful discussions. We especially thank Yan Liu for help with the plot. This research was supported by the Simons Foundation through a Simons Investigator Award to Senthil Todadri.

## References

- LeCun et al. (1995) Y. LeCun, Y. Bengio, et al., The handbook of brain theory and neural networks 3361, 1995 (1995).
- LeCun et al. (1990) Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, in Advances in neural information processing systems (1990) pp. 396–404.
- Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton, in Advances in neural information processing systems (2012) pp. 1097–1105.
- He et al. (2016) K. He, X. Zhang, S. Ren, and J. Sun, in Proceedings of the IEEE conference on computer vision and pattern recognition (2016) pp. 770–778.
- Zeiler and Fergus (2014) M. D. Zeiler and R. Fergus, in European conference on computer vision (Springer, 2014) pp. 818–833.
- Yosinski et al. (2015) J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson, arXiv preprint arXiv:1506.06579 (2015).
- Bengio and Delalleau (2011) Y. Bengio and O. Delalleau, in Algorithmic Learning Theory (Springer, 2011) pp. 18–36.
- Eldan and Shamir (2016) R. Eldan and O. Shamir, in Conference on Learning Theory (2016) pp. 907–940.
- Raghu et al. (2016) M. Raghu, B. Poole, J. Kleinberg, S. Ganguli, and J. Sohl-Dickstein, arXiv preprint arXiv:1606.05336 (2016).
- Wen (2004) X.-G. Wen, Quantum field theory of many-body systems: from the origin of sound to an origin of light and electrons (Oxford University Press on Demand, 2004).
- Eisert et al. (2010) J. Eisert, M. Cramer, and M. B. Plenio, Reviews of Modern Physics 82, 277 (2010).
- Orús (2014) R. Orús, Annals of Physics 349, 117 (2014).
- Stoudenmire and Schwab (2016) E. Stoudenmire and D. J. Schwab, in Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Curran Associates, Inc., 2016) pp. 4799–4807.
- Carleo and Troyer (2017) G. Carleo and M. Troyer, Science 355, 602 (2017).
- Deng et al. (2017) D.-L. Deng, X. Li, and S. D. Sarma, Physical Review X 7, 021021 (2017).
- Gao and Duan (2017) X. Gao and L.-M. Duan, arXiv preprint arXiv:1701.05039 (2017).
- Huang and Moore (2017) Y. Huang and J. E. Moore, arXiv preprint arXiv:1701.06246 (2017).
- Shannon (2001) C. E. Shannon, ACM SIGMOBILE Mobile Computing and Communications Review 5, 3 (2001).
- Nielsen and Chuang (2002) M. A. Nielsen and I. Chuang, “Quantum computation and quantum information,” (2002).
- Page (1993) D. N. Page, Physical review letters 71, 1291 (1993).
- Kitaev (2003) A. Y. Kitaev, Annals of Physics 303, 2 (2003).
- Vidal (2007) G. Vidal, Physical review letters 99, 220405 (2007).
- Vidal (2008) G. Vidal, Physical review letters 101, 110501 (2008).

## Appendix A Hilbert Space of Quantum Spin Model

### A.1 Equivalence between $\mathcal{F}$ and $\mathcal{H}$

Computer vision deals with a Hilbert space $\mathcal{F}$ of dimension $2^n$. In quantum physics, a quantum spin model on $n$ sites is also defined on a $2^n$-dimensional Hilbert space $\mathcal{H}$. A basis vector of $\mathcal{H}$, a spin configuration, can be thought of as an image $x$. In this section we further show that $\mathcal{F}$ and $\mathcal{H}$ are equivalent. In more precise mathematical language,

$$\mathcal{F} \cong \mathcal{H} = \bigotimes_{i=1}^{n} \mathbb{C}^2 \tag{29}$$

The isomorphism is mathematically easy to prove because two vector spaces with the same finite dimension are isomorphic. $\mathcal{F}$ has the orthogonal basis $\{e_i\}$ and $\mathcal{H}$ has the orthogonal basis $\{|x_i\rangle\}$. We can define a linear transformation $T$ as:

$$T(e_i) = |x_i\rangle \tag{30}$$

Under this definition, a vector $f = \sum_i f(x_i)\, e_i$ transforms to a state in $\mathcal{H}$ under $T$:

$$T(f) = \sum_i f(x_i)\, |x_i\rangle \tag{31}$$

We can also show that the inner product doesn't change under $T$:

$$\langle T(f_1), T(f_2) \rangle = \sum_{i,j} f_1^*(x_i)\, f_2(x_j)\, \langle x_i | x_j \rangle = \sum_i f_1^*(x_i)\, f_2(x_i) = \langle f_1, f_2 \rangle \tag{32}$$

where we used the fact $\langle x_i | x_j \rangle = \delta_{ij}$.

So $T$ is indeed an isomorphism, and $\mathcal{F}$ is equivalent to $\mathcal{H}$. Techniques for dealing with one Hilbert space can be directly applied to the other.

As a result of the equivalence between $\mathcal{F}$ and $\mathcal{H}$, the target function of any supervised learning problem in computer vision can be encoded into a quantum state:

$$|\psi_f\rangle = \frac{1}{\sqrt{Z}} \sum_{x \in X} f(x)\, |x\rangle \tag{33}$$

where $Z = \sum_x |f(x)|^2$ is a normalization factor.