Quantum Decimation in Hilbert Space:
Coarse-Graining without Structure
Abstract
We present a technique to coarse-grain quantum states in a finite-dimensional Hilbert space. Our method is distinguished from other approaches by not relying on structures such as a preferred factorization of Hilbert space or a preferred set of operators (local or otherwise) in an associated algebra. Rather, we use the data corresponding to a given set of states, either specified independently or constructed from a single state evolving in time. Our technique is based on principal component analysis (PCA), and the resulting coarse-grained quantum states live in a lower-dimensional Hilbert space whose basis is defined using the underlying (isometric embedding) transformation of the set of fine-grained states we wish to coarse-grain. Physically, the transformation can be interpreted as an “entanglement coarse-graining” scheme that retains most of the global, useful entanglement structure of each state, while needing fewer degrees of freedom for its reconstruction. This scheme could be useful for efficiently describing collections of states whose number is much smaller than the dimension of Hilbert space, or a single state evolving over time.
CALT-TH-2017-045
I Introduction
One of the challenges of doing practical calculations in quantum mechanics is that Hilbert space is very big: the number of dimensions is exponential in the number of degrees of freedom. Furthermore, not all degrees of freedom are created equal; some might be microscopic or high-energy and hard to access, while others may be irrelevant to certain dynamical questions. It is therefore very often useful to coarse-grain, modeling quantum systems defined by states on some Hilbert space $\mathcal{H}$ by states in some lower-dimensional Hilbert space $\tilde{\mathcal{H}}$, under conditions where the coarse-grained dynamics suffices to capture important properties of the system.
In practice, coarse-graining procedures typically rely on the existence of structure in Hilbert space that exists as part of the quantum system under consideration, and use that structure to define a renormalization group (RG) flow [kadanoff1967; fisher1998; wilson1975; fisher1974; cardy1996; maris+kadanoff1978]. For example, there might be a notion of emergent space [cao_etal2017; Cotler:2017abq] and associated locality. We imagine some decomposition of Hilbert space into local factors,
(1) $\mathcal{H} = \bigotimes_{\alpha} \mathcal{H}_{\alpha} \; ,$
where the factors come equipped with some nearest-neighbor structure (specifying when $\mathcal{H}_\alpha$ and $\mathcal{H}_\beta$ are nearby), typically characterized by the form of interactions between factors in the Hamiltonian. Then it makes sense to coarse-grain spatially, grouping together nearby factors, as in the classic block-spin approach to the Ising model [kadanoff1967; migdal1975]. Alternatively, one might appeal to the energy spectrum of the Hamiltonian, constructing an effective theory of low-energy states by integrating out high-energy ones.
In quantum information theory, data compression has received a lot of attention over the last few decades and a considerable amount of work has been done. Much of the motivation for such techniques comes from quantum computation, and many different approaches have been suggested, including but not limited to Schumacher’s data compression [schumacher], one-shot compression techniques [oneshot], the Johnson-Lindenstrauss lemma [johnson_lindenstrauss; RSA:RSA10073] and its limitations in quantum dimensional reduction [harrow_JL], compression by application of elementary quantum gates [plesch_etal], and even considering overlapping qubits [2017arXiv170101062C], amongst others [rozema_etal; qdatecompression_vaccaro_etal] (and references therein).
In this paper we pursue a different road to coarse-graining. We imagine that we are given some particular set of states (or one state as a function of time) in Hilbert space, but no preferred notion of locality or energy, or a preferred factorization into individual degrees of freedom. Our specific motivation comes from quantum gravity and quantum cosmology, where notions of locality and energy are more subtle than in traditional laboratory settings, but the technique might be of wider applicability. Our method represents another technique in the literature on compression of quantum information and coarse-graining, but with an emphasis on the fact that the construction is based solely on the structure of a set of given quantum states, without relying on any additional, preferred structure in Hilbert space. In particular, we use principal component analysis (PCA) [pcaref] on a set of states to define a vector subspace $\mathcal{H}_{\rm PCA} \subset \mathcal{H}$, such that a coarse-graining map onto $\mathcal{H}_{\rm PCA}$ captures the most important information about the original states (in a sense we make precise below). We refer to this procedure as “quantum decimation in Hilbert space”: a scheme where we coarse-grain quantum states by decimating or discarding irrelevant features determined by the structure of the states themselves, without presuming additional structure on Hilbert space. (Our method is distinct from past usage of the term “quantum decimation” in the literature [chen+tuthill1985; castellani1981; matsubara+totsuji+thompson1991], which refers to application of RG ideas to spin chains, Hubbard models and the like.) The idea of the PCA is to express the information contained in the original states in the most efficient way possible, by identifying the basis vectors along which most of the variation occurs, and attaching a systematic notion of the relative importance of different basis vectors.
This helps us identify global, important features of the state (determined by the states themselves); physically, one can relate this to preserving most of the relevant entanglement structure of the states (in any arbitrarily associated tensor factorization of Hilbert space).
The paper is organized as follows. In Section (II), we construct the principal component basis for the set of fine-grained states we wish to coarse-grain, and define a PCA compression map which removes redundancy in the basis used to describe our states. In Section (III), we develop the details of the coarse-graining isometry based on decimation of the PCA expansion, discuss the physical question of “what are we coarse-graining?”, and describe a possible application of the procedure to coarse-graining the time evolution of a system. In Section (IV), we compare with other conventional coarse-graining schemes and data compression techniques in quantum information, and conclude.
II Constructing the Principal Component Basis
II.1 Setting the Stage
Consider a finite-dimensional Hilbert space $\mathcal{H}$ of dimension $D$, equipped with a global basis $\{|e_i\rangle\}$ with $i = 1, 2, \dots, D$. “Global” indicates that the basis spans all of $\mathcal{H}$, and the choice of this basis is left arbitrary at this stage. Typically, this global basis can be identified with a tensor product structure which identifies degrees of freedom corresponding to subsystems. While in any practical setup, such as many-body theory or quantum computation, one typically assumes a highly non-generic and preferred tensor factorization of $\mathcal{H}$ based on the Hamiltonian [Tegmark:2014kka; carroll+singh_toappear], where features like locality and classical emergence might be manifest, we do not assume any such preferred structure. Our technique, at the “data-compression” stage, works even without associating a tensor product structure to Hilbert space, but its interpretation (which we offer in Section III.3) relies on the existence of an arbitrary factorization $\mathcal{H} = \bigotimes_\alpha \mathcal{H}_\alpha$, not necessarily corresponding to a quasiclassical one. A normalized state $|\psi\rangle \in \mathcal{H}$ can be expanded in this global basis,
(2) $|\psi\rangle = \sum_{i=1}^{D} c_i\, |e_i\rangle \; ,$
with $c_i = \langle e_i | \psi \rangle$ and $\sum_{i=1}^{D} |c_i|^2 = 1$.
Now imagine that we are given $N$ states in $\mathcal{H}$, labeled by $|\psi^{(j)}\rangle$, $j = 1, 2, \dots, N$, which we will call the specifying states. Our goal is to harness the structure of these specifying states in $\mathcal{H}$ to construct a coarse-graining procedure that will allow us to project them down to a subspace that preserves as much relevant information as possible. Each $|\psi^{(j)}\rangle$ can be expanded in the chosen global basis,
(3) $|\psi^{(j)}\rangle = \sum_{i=1}^{D} c^{(j)}_i\, |e_i\rangle \; ,$
with $c^{(j)}_i = \langle e_i | \psi^{(j)} \rangle$. It will be convenient to package these components as a column vector, which we call $\vec{\psi}^{(j)}$,
(4) $\vec{\psi}^{(j)} = \left( c^{(j)}_1, c^{(j)}_2, \dots, c^{(j)}_D \right)^{T} \; .$
We now bundle together the coefficients of all of the specifying states into a matrix $\Psi$, of order $D \times N$, which we call our augmented matrix:
(5) $\Psi = \left[\, \vec{\psi}^{(1)} \;\; \vec{\psi}^{(2)} \;\; \cdots \;\; \vec{\psi}^{(N)} \,\right] \; .$
Summary of notation:

$\mathcal{H}$ : Hilbert space of (fine-grained) dimension $D$.
$\{|e_i\rangle\}$ : Global basis of $\mathcal{H}$, $i = 1, 2, \dots, D$.
$|\psi^{(j)}\rangle$ : Set of $N$ specifying states in $\mathcal{H}$, $j = 1, 2, \dots, N$.
$\tilde{\mathcal{H}}$ : Hilbert space of (coarse-grained) dimension $d+1$, with $\tilde{\mathcal{H}} \subset \mathcal{H}_{\rm PCA}$ and $d \leq n$.
$\vec{\psi}^{(j)}$ : Column vector containing coefficients of $|\psi^{(j)}\rangle$ in the global basis.
$\Psi$ : Augmented matrix containing all specifying states as column vectors.
$\bar{\psi}^{(j)}$ : Mean value of the coefficients of $|\psi^{(j)}\rangle$ in the global basis.
$|\chi\rangle$ : Unnormalized, uniform superposition state in $\mathcal{H}$.
$\Delta\Psi$ : Augmented matrix of the deviation of each state from its mean value.
$\{\sigma_k\}$ : Set of $n$ nonzero singular values of $\Delta\Psi$, $k = 1, 2, \dots, n$.
$\hat{B}$ : Unnormalized PCA basis vectors organized as a matrix.
$\hat{A}$ : Unnormalized PCA weights organized as a matrix.
$B$ : Normalized PCA basis vectors organized as a matrix.
$A$ : Normalized PCA weights organized as a matrix.
$|b_k\rangle$ : PCA basis vectors in $\mathcal{H}$ with $k = 0, 1, \dots, n$.
$P$ : Projection map from fine-grained states in $\mathcal{H}$ to coarse-grained states in $\tilde{\mathcal{H}}$.
$T$ : Truncation matrix of order $(d+1) \times (n+1)$ with entries $T_{kk} = 1$.
$R$ : Net coarse-graining transformation defined as $R = T B^{\dagger}$.
The basic idea of coarse-graining is to reduce the effective dimensionality of Hilbert space, thus giving an effective description of the state, while retaining the global or large-scale physics of the state. Our current structure does not assume any notion of space or any associated notion of locality, or indeed any specific Hamiltonian. All we are working with is Hilbert space and an associated global basis. An idea of coarse-graining in such a setup would need to be equipped with an understanding of “What are we coarse-graining?” and “What are we losing under such a transformation?”, since our regular ideas of spatial scales, lattices and locality are not present in the current scheme. This allows us to construct a more general prescription using Hilbert-space ideas, which does not assume any preferred decomposition into subsystems or preferred observables, local or otherwise. These ideas are further discussed in Sections (III.3) and (IV).
We propose to perform principal component analysis (PCA) on the specifying states as a technique to reduce the dimensionality of our Hilbert space, thus resulting in a coarse-grained description for the specifying states. The resulting PCA coarse-graining prescription will be useful to coarse-grain the same set of specifying states only (unless there is some relationship between a separate state and the specifying states). The PCA transforms the input into a set of linearly uncorrelated principal components, thus reducing any redundancy in describing the specifying states. As is common in any PCA application, the first step is to remove the column-wise mean of the matrix $\Psi$, which helps to isolate the sources of variance in the set of specifying states. A mean-subtracted input allows the PCA components to add back variance in reconstruction over and above the mean in a systematic way, where the $k$th component is more important in adding back variance as compared to the $(k+1)$st component. It is worth pointing out at this stage that, as we will see, in our use of the PCA the mean subtraction will be an important step in our physical interpretation of the coarse-graining transformation.
Let us begin by subtracting off the column-wise mean from the structure of our specifying states described by the augmented matrix $\Psi$. Let $\bar{\psi}^{(j)}$ be the mean of the column vector $\vec{\psi}^{(j)}$, which is also the $j$th column of $\Psi$,
(6) $\bar{\psi}^{(j)} = \frac{1}{D} \sum_{i=1}^{D} c^{(j)}_i \; .$
We also define an unnormalized, uniform superposition state $|\chi\rangle$ whose representation in the global basis is the column vector $\vec{\chi}$ with all entries equal to unity,
(7) $|\chi\rangle = \sum_{i=1}^{D} |e_i\rangle \; , \qquad \vec{\chi} = (1, 1, \dots, 1)^{T} \; .$
While this uniform state is basis-dependent, we will argue in Section (III.2) that the relative inner product structure between the specifying states will be invariant under the coarse-graining for any choice of global basis. Each choice of basis lends its own features, which will be taken into account by the coarse-graining prescription, while at the same time keeping the relative structure of the states invariant and offering a uniform interpretation in terms of entanglement coarse-graining for any tensor product structure associated with the chosen basis.
Based on this, one can define the mean augmented matrix $\bar{\Psi}$ as the following $D \times N$ matrix,
(8) $\bar{\Psi} = \vec{\chi} \left( \bar{\psi}^{(1)}, \bar{\psi}^{(2)}, \dots, \bar{\psi}^{(N)} \right) \; ,$
and thus, the $j$th column of $\bar{\Psi}$ is simply,
(9) $\vec{\bar{\psi}}^{(j)} = \bar{\psi}^{(j)}\, \vec{\chi} \; .$
One can now define the deviation of each of the specifying states from their respective means as,
(10) $\Delta\Psi = \Psi - \bar{\Psi} \; ,$
which will serve as a description of our states based on the deviations of the coefficients from the mean of each of the specifying states.
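As a concrete numerical sketch of this mean-subtraction step (Eqs. 5-10): the dimensions, the pseudo-random ensemble of states, and all variable names below are illustrative choices of ours, not fixed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 64, 10  # illustrative: dim(H) and number of specifying states, with N + 1 < D

# Augmented matrix Psi (Eq. 5): each column holds the global-basis
# coefficients of one normalized specifying state.
Psi = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Psi /= np.linalg.norm(Psi, axis=0)

chi = np.ones(D)                   # uniform, unnormalized state |chi> (Eq. 7)
psi_bar = Psi.mean(axis=0)         # column-wise means (Eq. 6)
Psi_bar = np.outer(chi, psi_bar)   # mean augmented matrix (Eq. 8)
dPsi = Psi - Psi_bar               # deviation matrix (Eq. 10)

# By construction, every column of dPsi now has zero mean.
max_residual_mean = np.abs(dPsi.mean(axis=0)).max()
```

The subtraction leaves each column of the deviation matrix with zero mean, which is what later makes the PCA basis vectors orthogonal to the uniform state.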
II.2 Implementing the Principal Component Analysis
Starting with $N$ specifying states in the $D$-dimensional Hilbert space $\mathcal{H}$, we have decomposed them into a set of mean values organized into a matrix $\bar{\Psi}$ and a set of deviations $\Delta\Psi$. In what follows, we focus on the case with $N + 1 < D$ (the “+1” will become clear later), i.e. with fewer states than the dimension of the space they live in. This is usually the relevant case, since state vectors describing physical systems commonly live in very large Hilbert spaces and the number of states one might wish to understand is much smaller. In the other limit, with more states than dimensions, one would generically need the full support of the Hilbert space to describe them, and a PCA-based coarse-graining technique may not be very useful. The matrix $\Delta\Psi$ captures all the information there is in our set of specifying states in the choice of basis, modulo the mean of each state, which just adds a uniform contribution along each of the basis directions. We can think of $\Delta\Psi$ as characterizing the deviation of each state from being a uniform superposition (in the average sense), which, as we will see, will be important in interpreting the technique as an entanglement coarse-graining under any associated tensor product structure.
We now perform a principal component analysis on the matrix $\Delta\Psi$, which is implemented via a singular value decomposition (SVD). While one can directly perform a PCA on the coefficient matrix $\Psi$ and work out the technique along similar lines as described below, we feel that delineating these different contributions makes the process clearer and better physically motivated. We decompose $\Delta\Psi$ as,
(11) $\Delta\Psi = U\, \Sigma\, V^{\dagger} \; ,$
where $U$ (of order $D \times D$) and $V$ (of order $N \times N$) are unitary matrices and $\Sigma$ is a $D \times N$ diagonal matrix with the $n$ nonzero singular values of $\Delta\Psi$ on the diagonal,
(12) $\Sigma = \mathrm{diag}\left( \sigma_1, \sigma_2, \dots, \sigma_n, 0, \dots, 0 \right) \; .$
These nonzero singular values are the square roots of the nonzero eigenvalues of $\Delta\Psi\, \Delta\Psi^{\dagger}$ and $\Delta\Psi^{\dagger}\, \Delta\Psi$. Following standard PCA procedure, we arrange the singular values on the diagonal in $\Sigma$ in descending order, which helps capture the systematic addition of variance by the PCA,
(13) $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n > 0 \; .$
It is most convenient to write the deviations from the mean as
(14) $\Delta\Psi = \hat{B}\, \hat{A} \; ,$
where the $D \times N$ matrix
(15) $\hat{B} = U\, \Sigma$
defines the PCA basis, and the $N \times N$ matrix $\hat{A} = V^{\dagger}$ defines the (unnormalized) PCA weights. The hat symbol here is used to stress that the variable is not normalized. The use of hats to denote operators, whenever used, will be clear from context. Written explicitly,
(16) $\hat{B} = \left[\, \sigma_1 \vec{u}_1 \;\; \sigma_2 \vec{u}_2 \;\; \cdots \,\right] \; , \qquad \hat{a}^{(j)}_k = \left( V^{\dagger} \right)_{kj} \; .$
The columns $\sigma_k \vec{u}_k$ are the components, in the original global basis, of the new PCA basis vectors, and $\hat{a}^{(j)}_k$ is the $k$th unnormalized PCA weight for the specifying state $|\psi^{(j)}\rangle$.
Thus, the deviation from the mean of $|\psi^{(j)}\rangle$ can be reconstructed as,
(17) $\Delta\vec{\psi}^{(j)} = \sum_{k=1}^{n} \sigma_k\, \hat{a}^{(j)}_k\, \vec{u}_k \; .$
The columns of $U$, which we denote as $\vec{u}_k$, form an orthonormal basis for the global Hilbert space $\mathcal{H}$, since $U$ is unitary, while just the first $n$ vectors in $U$ selected by the nonzero singular values are needed to form a complete basis for the specifying states we wish to coarse-grain. This step forms the information compression step: we have chosen a smaller set of $n$ linearly independent vectors which span a vector subspace that includes all of our specifying states. However, the scaling of each of these columns with the singular values to get $\hat{B}$ renders the basis vectors unnormalized. Once this compression step is done, we can normalize our PCA basis states by associating the singular values with the PCA weights, by defining
(18) $a^{(j)}_k = \sigma_k\, \hat{a}^{(j)}_k = \sigma_k \left( V^{\dagger} \right)_{kj} \; .$
This lets us define the normalized PCA basis vectors as simply the first $n$ columns of the unitary $U$,
(19) $\vec{b}_k = \vec{u}_k \; , \qquad k = 1, 2, \dots, n \; .$
Thus, we have mapped the $D$ coefficients of each state to $n$ coefficients of the PCA expansion in the PCA basis as obtained above, in addition to the mean coefficient of each state. To reconstruct the full state $\vec{\psi}^{(j)}$, we add back the mean multiplied by $\vec{\chi}$,
(20) $\vec{\psi}^{(j)} = \bar{\psi}^{(j)}\, \vec{\chi} + \sum_{k=1}^{n} a^{(j)}_k\, \vec{b}_k \; .$
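The SVD-based compression and the exact reconstruction of Eq. (20) can be sketched numerically; the random ensemble and all dimensions are again illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 64, 10
Psi = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Psi /= np.linalg.norm(Psi, axis=0)
psi_bar = Psi.mean(axis=0)
dPsi = Psi - np.outer(np.ones(D), psi_bar)

# SVD of the deviation matrix (Eq. 11); numpy returns the singular
# values already in descending order (Eq. 13).
U, s, Vh = np.linalg.svd(dPsi, full_matrices=False)
n = int(np.sum(s > 1e-12 * s[0]))   # number of nonzero singular values

B = U[:, :n]                        # normalized PCA basis vectors b_k (Eq. 19)
A = s[:n, None] * Vh[:n, :]         # normalized PCA weights a_k^(j) (Eq. 18)

# Reconstruction (Eq. 20): mean term plus PCA expansion recovers each state.
Psi_rec = np.outer(np.ones(D), psi_bar) + B @ A
```

Only $n \leq N$ basis vectors are needed, even though the states live in a $D$-dimensional space; the reconstruction is exact because no truncation has been performed yet.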
In what follows, to avoid clutter in our equations, we drop the explicit use of square brackets to denote matrices, which we have been doing so far.
The $j$th state is normalized; hence we obtain the matrix representation of the normalization condition to be the following,
(21) $\vec{\psi}^{(j)\dagger}\, \vec{\psi}^{(j)} = 1 \; .$
Before we further simplify the normalization condition, consider contracting the state in Eq. (20) with $\vec{\chi}^{\dagger}$,
(22) $\vec{\chi}^{\dagger}\, \vec{\psi}^{(j)} = \bar{\psi}^{(j)}\, \vec{\chi}^{\dagger} \vec{\chi} + \sum_{k=1}^{n} a^{(j)}_k\, \vec{\chi}^{\dagger}\, \vec{b}_k \; .$
One can now use the fact that $\vec{\chi}^{\dagger} \vec{\chi} = D$ and $\vec{\chi}^{\dagger}\, \vec{\psi}^{(j)} = D\, \bar{\psi}^{(j)}$ to get,
(23) $\sum_{k=1}^{n} a^{(j)}_k\, \vec{\chi}^{\dagger}\, \vec{b}_k = 0 \; .$
In addition to this, due to the mean subtraction in each column in Eq. (10), each of the PCA basis vectors has a zero mean, $\vec{\chi}^{\dagger}\, \vec{b}_k = 0$. Hence, not only is the sum in Eq. (23) zero, but each term vanishes separately. The PCA basis vectors are the columns of a unitary matrix, and are therefore orthonormal, $\vec{b}^{\,\dagger}_k \vec{b}_l = \delta_{kl}$. We can therefore use Eq. (23) to get the normalization condition for the $j$th state as,
(24) $D\, |\bar{\psi}^{(j)}|^2 + \sum_{k=1}^{n} |a^{(j)}_k|^2 = 1 \; .$
Thus, we have mapped the $D$ coefficients of each state in the global basis to a mean value $\bar{\psi}^{(j)}$ and $n$ coefficients $a^{(j)}_k$ in the PCA basis $\{\vec{b}_k\}$, thus needing $n+1$ coefficients in this new basis to characterize the state.
At this stage we have captured the full information of each specifying state in the $n+1$ coefficients and the constructed PCA basis. The dimensional reduction is not a result of integrating out small-scale physics; rather, it is simply a smart choice of basis, which minimizes redundancy in the description of our specifying states. We also know that $\vec{\chi}^{\dagger}\, \vec{b}_k = 0$, making $\vec{\chi}$ orthogonal to all of the other PCA basis vectors; it is hence a linearly independent vector whose contribution is needed to reconstruct $\vec{\psi}^{(j)}$ from the $n+1$ coefficients. This motivates us to identify the “zeroth” component of the PCA basis and the corresponding PCA weight with the mean contribution,
(25) $\vec{b}_0 \equiv \frac{1}{\sqrt{D}}\, \vec{\chi} \; , \qquad a^{(j)}_0 \equiv \sqrt{D}\, \bar{\psi}^{(j)} \; .$
The PCA basis now has $n+1$ basis vectors, and each contribution (mean, and otherwise) is treated homogeneously; one can express the basis set as $\{\vec{b}_k\}$, $k = 0, 1, \dots, n$. Thus we have (notice the sum runs from zero now),
(26) $\vec{\psi}^{(j)} = \sum_{k=0}^{n} a^{(j)}_k\, \vec{b}_k \; .$
Notice, we have added a factor of $1/\sqrt{D}$ to keep $\vec{b}_0$ normalized like the other PCA basis vectors. Following Eq. (24), normalization of the state is now simply written as,
(27) $\sum_{k=0}^{n} |a^{(j)}_k|^2 = 1 \; .$
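The normalization identity of Eq. (27) can be verified numerically: because the zero-mean PCA vectors are orthogonal to the uniform vector, the $n+1$ weights of each normalized state sum (in modulus squared) to unity. The setup below is the same illustrative random ensemble as before.

```python
import numpy as np

rng = np.random.default_rng(2)
D, N = 64, 10
Psi = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Psi /= np.linalg.norm(Psi, axis=0)
psi_bar = Psi.mean(axis=0)
dPsi = Psi - np.outer(np.ones(D), psi_bar)

U, s, Vh = np.linalg.svd(dPsi, full_matrices=False)
n = int(np.sum(s > 1e-12 * s[0]))
A = s[:n, None] * Vh[:n, :]        # weights a_k^(j), k = 1 .. n (Eq. 18)
a0 = np.sqrt(D) * psi_bar          # zeroth weight a_0^(j) = sqrt(D) * mean (Eq. 25)

# chi is orthogonal to every zero-mean PCA basis vector (Eq. 23),
# so the cross terms in the norm vanish and Eq. (27) holds.
chi_overlap = np.abs(np.ones(D) @ U[:, :n]).max()
total = np.abs(a0) ** 2 + np.sum(np.abs(A) ** 2, axis=0)
```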
II.3 Mapping onto the PCA Subspace
The PCA procedure discussed above provides us with $n+1$ vectors (the PCA basis) $\{\vec{b}_k\}$, which span and act as a basis in a vector subspace containing our specifying states. Let us denote this subspace as $\mathcal{H}_{\rm PCA}$ with $\dim \mathcal{H}_{\rm PCA} = n + 1$. For each of the PCA basis vectors $\vec{b}_k$ we can identify the corresponding state vector $|b_k\rangle$. This set of PCA vectors forms a complete, orthonormal basis set for $\mathcal{H}_{\rm PCA}$, and our specifying states can be expanded in this basis following Eq. (26),
(28) $|\psi^{(j)}\rangle = \sum_{k=0}^{n} a^{(j)}_k\, |b_k\rangle \; .$
The $k$th basis state $|b_k\rangle$ is embedded in the larger $D$-dimensional space and is connected to its representation in the global basis via its matrix representation of Eq. (19).
Once this subspace has been defined and its basis identified, one can work with the specifying states exclusively in this subspace by mapping the state from the larger space $\mathcal{H}$ to $\mathcal{H}_{\rm PCA}$ by using an operator $P$. To understand the action of $P$ on the specifying states, we first connect the PCA weights $a^{(j)}_k$ with the global expansion coefficients $c^{(j)}_i$. For $k = 0, 1, \dots, n$, by contracting both sides of Eq. (28) with $\langle b_k |$ and using the orthonormality of the PCA basis, we find
(29) $a^{(j)}_k = \langle b_k | \psi^{(j)} \rangle = \sum_{i=1}^{D} \left( b_k \right)^{*}_{i}\, c^{(j)}_i \; .$
Thus, mapping to the space $\mathcal{H}_{\rm PCA}$ is achieved by,
(30) $P = \sum_{k=0}^{n} |b_k\rangle \langle b_k| \; , \qquad P\, |\psi^{(j)}\rangle = \sum_{k=0}^{n} a^{(j)}_k\, |b_k\rangle \; .$
This of course keeps the specifying states unaltered, while mapping them onto the subspace $\mathcal{H}_{\rm PCA}$ with their expansion in the PCA basis, thus compressing the support needed to describe them. Any other vector supported on $\mathcal{H}_{\rm PCA}$ can be similarly mapped down from the $D$-dimensional to the $(n+1)$-dimensional space. While arbitrary states in $\mathcal{H}$ not completely supported on $\mathcal{H}_{\rm PCA}$ can also be mapped to $\mathcal{H}_{\rm PCA}$ using $P$, such a map will non-systematically, and perhaps undesirably, alter the structure of the state.
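Both properties, invariance of the specifying states under $P$ and preservation of their pairwise overlaps in the compressed description, can be checked numerically under the same illustrative assumptions as before.

```python
import numpy as np

rng = np.random.default_rng(3)
D, N = 64, 10
Psi = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Psi /= np.linalg.norm(Psi, axis=0)
psi_bar = Psi.mean(axis=0)
dPsi = Psi - np.outer(np.ones(D), psi_bar)

U, s, _ = np.linalg.svd(dPsi, full_matrices=False)
n = int(np.sum(s > 1e-12 * s[0]))

# Full PCA basis matrix: b_0 = chi / sqrt(D) prepended to b_1 .. b_n.
B = np.column_stack([np.ones(D) / np.sqrt(D), U[:, :n]])

P = B @ B.conj().T        # projector onto H_PCA, P = sum_k |b_k><b_k| (Eq. 30)
A_full = B.conj().T @ Psi  # PCA weights a_k^(j) via Eq. (29)

# P leaves the specifying states unaltered, and the (n+1)-component
# description preserves all pairwise overlaps between them.
invariant = np.allclose(P @ Psi, Psi)
overlaps_match = np.allclose(A_full.conj().T @ A_full, Psi.conj().T @ Psi)
```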
Our focus in this paper is to coarse-grain the specifying states: the PCA map acts as the dimension compression step, which can now be coarse-grained as described in Section (III).
III Coarse-Graining via Decimation
III.1 Truncation of the PCA Expansion and Coarse-Graining
With this technology in hand, we can now explore how to systematically coarse-grain our states to further lower-dimensional Hilbert spaces. With the PCA basis alone, we have already reduced the effective dimensionality of the underlying vector space from $D$ to $n+1$ using the PCA map without any loss in the description of the state, since the PCA simply chooses a smart basis which removes redundancy in their description. We now discuss the decimation prescription, in which we truncate the PCA expansion of Eq. (28) as a method of coarse-graining, explicitly reducing the dimensionality of Hilbert space at the expense of throwing away certain features of the state.
Currently, a state is expanded in the PCA basis $\{|b_k\rangle\}$, as done in Eq. (26) in the matrix representation describing its reconstruction in the global $D$-dimensional space $\mathcal{H}$. The nonzero singular values are arranged in descending order in the diagonal matrix $\Sigma$ in Eq. (11). The PCA endows us with systematic control over the contribution of different PCA components in the reconstruction of the state. Thus, the first component of the PCA, $k = 1$, carries maximum variance in reconstructing the state over and above the zeroth component ($k = 0$), i.e. the state mean. The next orthonormal component, $k = 2$, has lesser variance than the $k = 1$ component, and so on. The $k$th component is more important than the $(k+1)$st component in adding back variance over and above the mean to reconstruct the state.
Since the tailing PCA components contribute little to the reconstruction of the state as compared to the preceding components, one could, depending on the required accuracy of reconstruction, neglect some of these tailing terms in the series to obtain an effective, coarse-grained description of the state. To better understand the relative importance of different PCA components in reconstructing the specifying states, one can look at the fractional contribution/importance $f_k$ of the $k$th PCA component,
(31) $f_k = \frac{\sigma^2_k}{\sum_{k'=1}^{n} \sigma^2_{k'}} \; .$
Thus, in addition to the mean term, one could choose the next $d$ PCA terms with $d \leq n$ in the expansion as a coarse-grained description of the state,
(32) $|\psi^{(j)}\rangle \;\to\; |\psi^{(j)}_{\rm CG}\rangle \propto \sum_{k=0}^{d} a^{(j)}_k\, |b_k\rangle \; ,$
where the contributions of the $(d+1)$th to $n$th components have been truncated and neglected. In the above equation and in what follows, the subscript “CG” indicates that the state has been coarse-grained to a $(d+1)$-dimensional reconstruction. The choice of $d$ can be made depending on the various fractional contributions (Eq. 31) of the PCA basis and the required accuracy of the coarse-grained description. We have thus effectively mapped the $D$ coefficients of the state in the original (global) basis to $d+1$ components in the truncated/coarse-grained PCA basis.
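One simple way to pick $d$ from the fractional contributions of Eq. (31) is sketched below; the cumulative threshold of 0.90, like the random ensemble itself, is an illustrative assumption and not a prescription from the text.

```python
import numpy as np

rng = np.random.default_rng(4)
D, N = 64, 10
Psi = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Psi /= np.linalg.norm(Psi, axis=0)
dPsi = Psi - np.outer(np.ones(D), Psi.mean(axis=0))

_, s, _ = np.linalg.svd(dPsi, full_matrices=False)
n = int(np.sum(s > 1e-12 * s[0]))
sigma2 = s[:n] ** 2

# Fractional contribution f_k of the k-th PCA component (Eq. 31).
f = sigma2 / sigma2.sum()

# Retain the smallest number of components whose cumulative fractional
# contribution reaches the target accuracy.
target = 0.90
d = int(np.searchsorted(np.cumsum(f), target) + 1)
```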
Following the discussion in subsection (II.3), we now construct a $(d+1)$-dimensional vector subspace $\tilde{\mathcal{H}}$ with $\tilde{\mathcal{H}} \subset \mathcal{H}_{\rm PCA}$, which covers the support of the coarse-grained specifying states. The first $d+1$ PCA vectors form an orthonormal basis for $\tilde{\mathcal{H}}$ and can be identified with their corresponding set of basis state vectors $\{|b_k\rangle\}$, $k = 0, 1, \dots, d$. Before we construct the coarse-graining map, it is important to notice that truncating the PCA series renders the states unnormalized. Since we would like our coarse-grained vectors to be good quantum states with probabilities summing to unity, we normalize the states by hand. A coarse-graining map can be constructed which projects and coarse-grains the state to $\tilde{\mathcal{H}}$ and normalizes it as well,
(33) $\Pi_d : \mathcal{H} \to \tilde{\mathcal{H}} \; ,$
(34) $\Pi_d\, |\psi^{(j)}\rangle = \left( \sum_{k=0}^{d} |a^{(j)}_k|^2 \right)^{-1/2} \sum_{k=0}^{d} a^{(j)}_k\, |b_k\rangle \; .$
As before, the basis states $|b_k\rangle$ are embedded in the original space via their matrix representations (Eq. (19)). We see that $d = 1$ is the most coarse-grained description of the specifying states as effective qubits, whereas the other limit $d = n$ takes it back to the full non-coarse-grained, albeit PCA-compressed, description, as discussed in subsection II.3. One can also define a series of nested subspaces
(35) $\tilde{\mathcal{H}}_1 \subset \tilde{\mathcal{H}}_2 \subset \cdots \subset \tilde{\mathcal{H}}_n = \mathcal{H}_{\rm PCA} \; ,$
and a corresponding sequence of maps $\Pi_1, \Pi_2, \dots, \Pi_n$, which progressively coarse-grain from just the PCA compression to a maximally coarse-grained description as an effective qubit.
One can also consider a coarse-graining application where we admit non-normalized coarse-grained states, possibly due to inaccuracies in experimental setups or numerical precision. In that case, we can choose the coarse-grained dimension $d+1$ such that,
(36) $\sum_{k=0}^{d} |a^{(j)}_k|^2 \geq 1 - \epsilon \; ,$
for some $\epsilon$ small enough to not be detected experimentally or to lie within numerical errors.
III.2 The Coarse-Graining Isometry and Expectation Values
Let us recap what we have accomplished so far. We have coarse-grained each of our specifying states from a $D$-dimensional description in $\mathcal{H}$ to a state living in the $(d+1)$-dimensional Hilbert space $\tilde{\mathcal{H}}$ with $d + 1 \leq n + 1 < D$, and identified the expansion coefficients in the (truncated) PCA basis in $\tilde{\mathcal{H}}$. Each of these basis states is connected to the fine-grained $D$-dimensional embedding in $\mathcal{H}$ via its matrix representation, as found in Section (II.2). In this section, we aim to package our results and formally define a transformation that directly relates the coarse-grained coefficients to the fine-grained coefficients.
The PCA compression of $|\psi^{(j)}\rangle$ lives in $\mathcal{H}_{\rm PCA}$ and is described by the $n+1$ coefficients $a^{(j)}_k$. Let us denote this column by $\vec{a}^{(j)}$, which is connected to the fine-grained description of the state via the PCA basis matrix $B = [\, \vec{b}_0 \;\; \vec{b}_1 \;\; \cdots \;\; \vec{b}_n \,]$ following inversion of Eq. (26) as,
(37) $\vec{a}^{(j)} = B^{\dagger}\, \vec{\psi}^{(j)} \; .$
The PCA basis matrix, whose columns form an orthonormal basis in $\mathcal{H}$, defines an isometric embedding, $B^{\dagger} B = \mathbb{1}_{n+1}$, but in general, as expected, $B\, B^{\dagger} \neq \mathbb{1}_{D}$, where $\mathbb{1}_{D}$ is the $D$-dimensional identity. However, $B\, B^{\dagger}$ acts as the identity in the subspace $\mathcal{H}_{\rm PCA}$ where our specifying states reside. This is tantamount to saying that the PCA projection leaves the specifying states invariant,
(38) $B\, B^{\dagger}\, \vec{\psi}^{(j)} = \vec{\psi}^{(j)} \; .$
Before describing truncation of the PCA series as an effective coarse-grained description of the state, it is instructive to understand how inner products of states are related in the two descriptions. Combining Eqs. (37) and (38), it is easily seen that the inner product is preserved while transforming from the global $D$-dimensional to the PCA $(n+1)$-dimensional description,
(39) $\vec{a}^{(j)\dagger}\, \vec{a}^{(l)} = \vec{\psi}^{(j)\dagger}\, B\, B^{\dagger}\, \vec{\psi}^{(l)} = \vec{\psi}^{(j)\dagger}\, \vec{\psi}^{(l)} = \langle \psi^{(j)} | \psi^{(l)} \rangle \; .$
At this stage, one might worry about the basis-dependence of the PCA prescription outlined in Section (II), since the uniform, unnormalized state $|\chi\rangle$ is a basis-dependent construction. Under different choices of global basis that lead to different augmented matrices $\Psi$, one would end up with a different set of PCA basis vectors and weights, with the zeroth vector always identified as the uniform superposition state. However, this is not an issue, since the relative inner product structure of the specifying states is invariant under change of global basis by a unitary transformation. This can be easily verified by using Eqs. (37) and (38) for two different choices of global basis where the coefficients of the specifying states are connected by some unitary transformation. The PCA compression, while preserving overlaps between our set of specifying states in any arbitrary choice of basis, then also preserves the pairwise distances between states,
(40) $\left\| \vec{\psi}^{(j)} - \vec{\psi}^{(l)} \right\| = \left\| \vec{a}^{(j)} - \vec{a}^{(l)} \right\| \; ,$
and under truncation of the PCA expansion of Eq. (32), we preserve these overlaps and pairwise distances up to some error scale determined by the choice of the coarse-grained subspace.
The next step of coarse-graining, truncating the PCA expansion to the first $d+1$ coefficients of $\vec{a}^{(j)}$ as a coarse-grained description of the state, can be achieved by a truncation matrix $T$ that is of order $(d+1) \times (n+1)$ and is a diagonal matrix with ones on the diagonal, $T_{kk} = 1$. Using this truncation matrix, the coefficients of the unnormalized coarse-grained state, which we call $\vec{\tilde{a}}^{(j)}$, can be obtained as,
(41) $\vec{\tilde{a}}^{(j)} = T\, \vec{a}^{(j)} = T\, B^{\dagger}\, \vec{\psi}^{(j)} \equiv R\, \vec{\psi}^{(j)} \; ,$
where we have defined the net coarse-graining transformation as $R = T\, B^{\dagger}$, which satisfies $R\, R^{\dagger} = \mathbb{1}_{d+1}$. This transformation captures both the PCA basis change and the truncation to retain the first $d+1$ components. Normalization of the coarse-grained state can be done by hand, as described in subsection (III.1).
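The net transformation $R = T B^{\dagger}$ and its isometry property can be sketched as follows; the choice $d = 4$ and the random ensemble are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
D, N = 64, 10
Psi = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Psi /= np.linalg.norm(Psi, axis=0)
dPsi = Psi - np.outer(np.ones(D), Psi.mean(axis=0))

U, s, _ = np.linalg.svd(dPsi, full_matrices=False)
n = int(np.sum(s > 1e-12 * s[0]))
B = np.column_stack([np.ones(D) / np.sqrt(D), U[:, :n]])  # D x (n+1)

d = 4                                  # illustrative coarse-graining dimension
T = np.eye(d + 1, n + 1)               # truncation matrix, (d+1) x (n+1)
R = T @ B.conj().T                     # net coarse-graining map R = T B^dagger

a_cg = R @ Psi                         # unnormalized coarse-grained coefficients (Eq. 41)
a_cg_norm = a_cg / np.linalg.norm(a_cg, axis=0)  # normalize by hand

# R maps the (n+1)-dimensional PCA description isometrically onto the
# (d+1)-dimensional coarse-grained space: R R^dagger = identity.
isometry_ok = np.allclose(R @ R.conj().T, np.eye(d + 1))
```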
Another aspect is the behavior of expectation values of Hermitian operators under our coarse-graining transformation. Consider a Hermitian operator $\hat{\mathcal{O}}$, which in the global basis for $\mathcal{H}$ has a matrix representation $O$, whose expectation value in the $j$th state is
(42) $\langle \hat{\mathcal{O}} \rangle^{(j)}_{\rm FG} = \vec{\psi}^{(j)\dagger}\, O\, \vec{\psi}^{(j)} \; ,$
where the subscript FG is to emphasize that we compute this expectation value in the fine-grained, global description in $\mathcal{H}$. One can construct the coarse-grained matrix representation of $\hat{\mathcal{O}}$ using our coarse-graining transformation as follows,
(43) $\tilde{O} = R\, O\, R^{\dagger} \; ,$
whose expectation value is computed with respect to the (normalized) coarse-grained state,
(44) $\langle \hat{\mathcal{O}} \rangle^{(j)}_{\rm CG} = \frac{ \vec{\tilde{a}}^{(j)\dagger}\, \tilde{O}\, \vec{\tilde{a}}^{(j)} }{ \vec{\tilde{a}}^{(j)\dagger}\, \vec{\tilde{a}}^{(j)} } \; .$
Depending on how finely or coarsely we decide to coarse-grain by choosing $d$, and on the details and correlations in the specifying states, the coarse-grained expectation values will differ from the fine-grained ones, though the coarse-grained expectation value approaches the fine-grained value as $d \to n$, and they are equal when $d = n$.
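The equality of coarse-grained and fine-grained expectation values at $d = n$ can be checked directly; the random Hermitian operator and the ensemble below are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(6)
D, N = 64, 10
Psi = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Psi /= np.linalg.norm(Psi, axis=0)
dPsi = Psi - np.outer(np.ones(D), Psi.mean(axis=0))

U, s, _ = np.linalg.svd(dPsi, full_matrices=False)
n = int(np.sum(s > 1e-12 * s[0]))
B = np.column_stack([np.ones(D) / np.sqrt(D), U[:, :n]])

# A random Hermitian operator O in the global basis (illustrative).
M = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
O = (M + M.conj().T) / 2

psi = Psi[:, 0]                            # pick one specifying state
fine = np.real(psi.conj() @ O @ psi)       # fine-grained expectation (Eq. 42)

def cg_expectation(d):
    """Coarse-grained expectation value of O in the state psi at dimension d+1."""
    R = np.eye(d + 1, n + 1) @ B.conj().T  # R = T B^dagger
    a = R @ psi                            # truncated coefficients (Eq. 41)
    O_cg = R @ O @ R.conj().T              # coarse-grained operator (Eq. 43)
    return np.real(a.conj() @ O_cg @ a) / np.real(a.conj() @ a)  # Eq. (44)

# With no truncation (d = n), the coarse-grained and fine-grained values agree.
exact_at_full = np.isclose(cg_expectation(n), fine)
```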
III.3 Decimation and Entanglement
Having developed a coarse-graining prescription based on a PCA transformation and further truncation of the expansion, our next task is to better understand what microscopic information is lost in the course of this transformation. Ours is an unconventional coarse-graining prescription, since it is solely founded on the details of the quantum states given in some global basis. Most coarse-graining schemes assume more structure than this, be it a preferred split of the Hilbert space into tensor factors, a notion of locality in space, or energy modes beyond a certain cutoff that are to be integrated out. All we have is Hilbert space, a notion of a basis and a set of quantum states. A brief comparison of our PCA prescription with other conventional schemes is given in Section (IV).
The basic question we wish to answer in this section is: what are we really losing when we perform the PCA and truncate the state description to retain the first $d+1$ components? What information are we discarding with the remaining $n - d$ components?
To understand this, let us refer to the tensor product structure associated with the global fine-grained Hilbert space $\mathcal{H}$. In most physical applications, one has a notion of subsystems, and correspondingly the global Hilbert space can be factorized preferentially as a tensor product of Hilbert spaces of each such subsystem. In what follows, we minimally assume some arbitrary tensor factorization of $\mathcal{H}$, not necessarily equipped with some preferred decomposition governed by the Hamiltonian [carroll+singh_toappear; Tegmark:2014kka; Piazza:2005wm] that might have notions of emergent space, locality, classical equations of motion, and the like. Our interpretation of the technique as an entanglement coarse-graining just uses the existence of such a tensor product structure, not its being special in any particular way, though since we are working on more general grounds, our method can be adapted to more physically familiar cases.
For concreteness, let us associate a tensor product structure with $\mathcal{H}$ of $D = 2^m$ such that it can be thought of as the Hilbert space of $m$ qubits, $\mathcal{H} = \bigotimes_{q=1}^{m} \mathcal{H}_q$, where $\mathcal{H}_q$ is the Hilbert space of a single qubit. The argument which follows does not hinge on such a qubit factorization, but will work for any arbitrary factorization chosen. Let us write down the reconstruction of the $j$th specifying state by explicitly writing out the mean term, the next $d$ terms being retained, and the truncated terms,
(45) $\vec{\psi}^{(j)} = \underbrace{a^{(j)}_0\, \vec{b}_0}_{\rm mean} \;+\; \underbrace{\sum_{k=1}^{d} a^{(j)}_k\, \vec{b}_k}_{\rm retained} \;+\; \underbrace{\sum_{k=d+1}^{n} a^{(j)}_k\, \vec{b}_k}_{\rm truncated} \; .$
The mean term has, by construction, all the same entries. A state of $m$ qubits with all equal coefficients represents a completely separable (product) state of the qubits, and thus has no entanglement between the constituent subfactors. Thus, the mean state, or the $k = 0$ contribution, sets a baseline state with the property of having no entanglement amongst its components. One can think of a different tensor structure for $\mathcal{H}$ in terms of qudits, but the mean term still represents an unentangled state of the constituent subfactors.
The next $d$ terms in the PCA expansion of Eq. (45) add most of the variance over and above the mean in reconstructing the (resultant, unnormalized) state. Thus, this sum of terms adds most of the relevant entanglement structure of the state in the chosen tensor factorization of $\mathcal{H}$. Of course, one may choose a factorization of $\mathcal{H}$ under which the $j$th specifying state may be unentangled to begin with, and this argument of adding back relevant entanglement would not be particularly useful. But for a generic decomposition, this understanding of entanglement coarse-graining would be a good notion of what our prescription is coarse-graining. The higher-order terms for $k > d$ have a negligible (up to the coarse-graining scale set by the choice of $d$) contribution in adding back variance to reconstruct the state, and hence also add minimal entanglement to the structure of the state in the chosen Hilbert space factorization.
As an example, we numerically constructed a set of $N$ specifying states of dimension $d = 2^n$. Each coefficient of these states was drawn from a pseudorandom distribution, and the states were then normalized. Following this, we performed our PCA decimation procedure and reduced the dimensionality of each state to $d_R$ under the map $\mathcal{H} \to \mathcal{H}_R$, with a final normalization (hence our coarse-grained states are normalized). The coarse-graining dimension was varied from $d_R = 1$, corresponding to retaining only the separable mean term, to $d_R = N$, corresponding to no truncation, only PCA compression. One can now think of each state as composed of $n$ qubits, and quantify the entanglement structure through the von Neumann entanglement entropy of these qubits in each of the specifying states. For instance, in the $j$-th specifying state $|\psi_j\rangle$, one can compute the entanglement entropy of the $m$-th qubit ($m = 1, 2, \ldots, n$, with $n$ the number of qubits) as
$S^{(j)}_m = -\mathrm{Tr}\left(\rho^{(j)}_m \log \rho^{(j)}_m\right)\,, \qquad (46)$
where
$\rho^{(j)}_m = \mathrm{Tr}_{\,k \neq m}\,|\psi_j\rangle\langle\psi_j| \qquad (47)$

is the reduced density matrix of the $m$-th qubit, obtained by tracing out all other qubits.
Figure 1 plots the cumulative von Neumann entanglement entropy of a chosen constituent qubit in one of the constructed states (again, chosen as a representative) as a function of the number of PCA components retained in reconstructing the state. The point of the plot is the saturation of the added entanglement entropy as more PCA components are used to reconstruct the state: only a few components are required to capture most of the entanglement structure, while the higher orders contribute less. By retaining the first $d_R - 1$ components in the expansion, we recover the state within error bounds (determined by the choice of $d_R$) in a way that preserves the global, most relevant entanglement structure of the constituent subfactors, while losing irrelevant entanglement by truncating the higher components.
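The single-qubit entropies of Eqs. (46)-(47) can be computed directly from a state vector; a minimal sketch (the helper name and 0-indexing convention are ours):

```python
import numpy as np

def qubit_entropy(psi, m, n):
    """Von Neumann entanglement entropy (in bits) of qubit m (0-indexed)
    in an n-qubit pure state psi of dimension 2**n, per Eqs. (46)-(47)."""
    amps = np.asarray(psi).reshape([2] * n)        # one tensor axis per qubit
    amps = np.moveaxis(amps, m, 0).reshape(2, -1)  # split: qubit m vs. the rest
    rho = amps @ amps.conj().T                     # reduced density matrix of qubit m
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                   # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))
```

For a Bell pair either qubit carries one bit of entropy, while any product state gives zero, so summing over retained PCA components traces out a curve of the kind shown in Fig. 1.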
The amount of correlation between the specifying states is an important factor in determining how quickly the entanglement curve (as in Fig. 1) saturates. In general, for higher correlations amongst the specifying states, fewer PCA components are required for most of the reconstruction, and one expects a quick saturation in the entanglement buildup. In this sense, our PCA decimation prescription implements an entanglement coarse-graining, in the spirit of ignoring microscopic degrees of freedom and retaining large-scale, global features: in this case, throwing away small, irrelevant entanglement while holding on to the basic large-scale structure of the state.
III.4 Coarse-Grained Time Evolution of a Quantum System
We have based our coarse-graining prescription on very little structure in Hilbert space: equipped only with a global basis and a set of specifying states, our PCA decimation procedure maps states from a $d$-dimensional Hilbert space $\mathcal{H}$ to a $d_R$-dimensional space $\mathcal{H}_R$ while retaining most of the global, relevant entanglement structure of each state (in some associated factorization). It is natural to ask in which setups one can adapt this coarse-graining prescription and put it to use.
One possible application involves coarse-graining the discretized time evolution of a given initial state $|\psi(0)\rangle \in \mathcal{H}$ of dimension $d$, with a global basis. The dynamics of states in $\mathcal{H}$ are governed by some known and specified Hamiltonian $H$. Consider unitarily evolving the initial state through $N$ time steps of some specified/chosen size $\Delta t$, such that the state at the $j$-th time step $t_j = j\,\Delta t$ is given by (we take units in which $\hbar = 1$),
$|\psi(t_j)\rangle = e^{-iHj\Delta t}\,|\psi(0)\rangle\,, \qquad j = 0, 1, \ldots, N-1. \qquad (48)$
We thus have a collection of $N$ states living in the $d$-dimensional Hilbert space $\mathcal{H}$. When the number of time-evolved states satisfies $N \ll d$, these states can act as our set of specifying states, to be coarse-grained by the PCA decimation prescription to a lower-dimensional Hilbert space,
$\{\,|\psi(t_j)\rangle\,\}_{j=0}^{N-1} \subset \mathcal{H} \;\longrightarrow\; \{\,|\tilde{\psi}(t_j)\rangle\,\}_{j=0}^{N-1} \subset \mathcal{H}_R\,. \qquad (49)$
If the Hamiltonian has desirable physical features such as locality, and if the time step $\Delta t$ is not too large, one expects a high degree of correlation amongst the time-evolved states. In this case, only a few PCA basis components are required to reconstruct each state. One can also find a coarse-grained representation of the Hamiltonian in the lower-dimensional space,
$\tilde{H} = V^{\dagger} H V\,, \qquad (50)$
where $H$ is the matrix representation of the Hamiltonian in the global basis of $\mathcal{H}$, and $V$ is the isometric embedding of $\mathcal{H}_R$ into $\mathcal{H}$ defined by the PCA basis. Thus, using our PCA decimation prescription, one can compute a coarse-grained version of the time evolution of the state and use it as a proxy to study time-dependent features of the quantum system under consideration.
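The pipeline just described (evolve, collect, PCA, project the Hamiltonian) can be sketched numerically as follows. The SVD construction of the isometry $V$ and all names are our illustrative assumptions, sketched for a generic Hermitian $H$:

```python
import numpy as np

def evolve_states(H, psi0, N, dt):
    """Columns j = 0..N-1 are psi(t_j) = exp(-i H j dt) psi0, as in Eq. (48),
    computed by diagonalizing the Hermitian matrix H (units with hbar = 1)."""
    e, U = np.linalg.eigh(H)
    phases = np.exp(-1j * np.outer(e, dt * np.arange(N)))
    return U @ (phases * (U.conj().T @ psi0)[:, None])

def coarse_grain(H, states, d_R):
    """Isometric embedding V (d x d_R) spanning the top PCA directions of
    the evolved states, and the coarse-grained Hamiltonian of Eq. (50)."""
    U, s, _ = np.linalg.svd(states, full_matrices=False)
    V = U[:, :d_R]             # orthonormal columns: V^dagger V = identity
    H_R = V.conj().T @ H @ V   # Eq. (50): tilde-H = V^dagger H V
    return V, H_R
```

For a local Hamiltonian and small $\Delta t$, the singular values of the collection decay quickly, so a small $d_R$ already captures the trajectory.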
IV Epilogue and Conclusion
Coarse-graining is a very important theme in understanding the behavior of realistic quantum systems, which live in Hilbert spaces of very large dimension. Many quantum coarse-graining schemes [26, 27, 28, 29, 30, 31, 32, 33] integrate out or eliminate irrelevant degrees of freedom to produce a coarse-grained description of the system. Renormalization group techniques [1, 2, 3, 4, 5, 6] have been the cornerstone of coarse-graining ideas, and have proven to be extremely powerful and useful tools in physics. In particular, popular quantum coarse-graining schemes include the Density Matrix Renormalization Group (DMRG) [26, 34] and Entanglement Renormalization [27], along with their numerical implementations [35, 36, 37, 38, 39, 40, 41, 42, 43]. These, and many other coarse-graining schemes, assume substantial structure on Hilbert space. For instance, techniques like DMRG define an RG flow on the space of density matrices and serve as an effective truncation of the Hilbert space of a strongly correlated quantum many-body system. Focusing on the low-energy properties of a system with a known Hamiltonian, one assumes a notion of spatial locality and factorizability into state spaces on a lattice; numerical implementations further assume a preferred split into a system and an environment, over which the trace is carried out to compute properties at the level of the system.
Similarly, in Entanglement Renormalization and its numerical implementations such as MERA [27], one has a local lattice structure and aims to compute ground-state properties of the system by defining a real-space RG that disposes of short-distance degrees of freedom and entanglement (through disentangling isometries, followed by block-decimation prescriptions). Every coarse-graining scheme comes equipped with an understanding of which global properties of the system one aims to retain, such as observable expectation values, correlation functions, or entanglement between subsystems, and which features are discarded, usually small-scale entanglement, high-energy modes, and the like.
Techniques in quantum information theory for data compression and dimensional reduction also form an interesting set of ideas for coarse-graining quantum information. Such schemes depend on the context at hand: for example, focusing on a typical subspace and ignoring its orthogonal complement without much loss of fidelity, as in Schumacher's noiseless quantum coding theorem, or compressing the quantum information in a collection of qubits using elementary quantum circuit operations. Each technique has a specific aim and contextual validity. The Johnson-Lindenstrauss lemma, for instance, allows us to preserve pairwise distances up to a specified error tolerance; the dimension of the reduced subspace is then determined by that error and the number of points in the data set, not by the dimensionality of the original space. Constructive implementations of the Johnson-Lindenstrauss lemma can be realized via random projection and rely heavily on the Euclidean norm to measure pairwise distances, whereas dimensional reduction using PCA relies on a specification of the dimension of the reduced subspace and projects onto a linear subspace. Thus each technique has its range of validity and can be chosen to suit the physical system at hand.
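As an illustration of the contrast drawn here, a random Gaussian projection in the spirit of the Johnson-Lindenstrauss lemma approximately preserves pairwise Euclidean distances without any reference to the variance structure that PCA exploits (a sketch of ours; the scaling and tolerance are illustrative):

```python
import numpy as np

def jl_project(X, k, rng):
    """Project points (rows of X) from dimension d down to k with a random
    Gaussian map scaled by 1/sqrt(k), so Euclidean pairwise distances are
    approximately preserved (Johnson-Lindenstrauss-style random projection)."""
    d = X.shape[1]
    R = rng.normal(size=(d, k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 2000))   # a few points in a high-dimensional space
Y = jl_project(X, 500, rng)       # reduced to 500 dimensions
# Pairwise distances survive to within a modest relative distortion.
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        orig = np.linalg.norm(X[i] - X[j])
        proj = np.linalg.norm(Y[i] - Y[j])
        assert abs(proj / orig - 1) < 0.2
```

Note the reduced dimension here is set by the number of points and the tolerated distortion, whereas PCA (as in our prescription) fixes the reduced dimension and keeps the directions of largest variance.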
While such methods are very useful, it is interesting to ask how one might coarse-grain a given set of quantum states in a Hilbert space that may not be associated with a Hamiltonian or with the structure usually assumed on the space. As an effort in this direction, motivated by questions in quantum spacetime and emergent classicality, we have developed a coarse-graining prescription which uses Principal Component Analysis to first compress the dimensionality of Hilbert space by identifying a non-redundant basis (the PCA basis), followed by truncation of the last few PCA terms, which contribute very little to the reconstruction of the state. Physically, one can interpret this scheme as an entanglement coarse-graining (in some arbitrary factorization associated with Hilbert space) in which, upon discarding the low-importance terms, one loses only a small amount of irrelevant entanglement structure of the state, while retaining its major features in the reconstruction. One expects similarities between our PCA decimation scheme and other conventional coarse-graining prescriptions upon the addition of appropriate structure. We feel this prescription is of a general nature, developed on a Hilbert space with very little structure, and can serve as a reliable means of first-principles quantum coarse-graining.
Acknowledgments
We would like to thank Ning Bao, ChunJun (Charles) Cao, and Jess Riedel for helpful discussions during the course of this project. We are also thankful to an anonymous reviewer for comments that helped improve the manuscript. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632, as well as by the Walter Burke Institute for Theoretical Physics at Caltech and the Foundational Questions Institute.
References
 (1) L. Kadanoff, W. Gotze, D. Hamblen, R. Hecht, E. Lewis, V. V. Palciauskas, M. Rayl, J. Swift, D. Aspnes, and J. Kane, “Static Phenomena Near Critical Points: Theory and Experiment,” Rev. Mod. Phys. 39 (1967) 395–431.
 (2) M. E. Fisher, “Renormalization group theory: Its basis and formulation in statistical physics,” Rev. Mod. Phys. 70 (1998) 653–681.
 (3) K. G. Wilson, “The Renormalization Group: Critical Phenomena and the Kondo Problem,” Rev. Mod. Phys. 47 (1975) 773.
 (4) M. E. Fisher, “The renormalization group in the theory of critical behavior,” Rev. Mod. Phys. 46 (1974) 597–616. [Erratum: Rev. Mod. Phys.47,543(1975)].
 (5) J. L. Cardy, Scaling and renormalization in statistical physics. 1996.
 (6) H. J. Maris and L. P. Kadanoff, “Teaching the renormalization group,” American Journal of Physics 46 no. 6, (1978) 652–657. http://dx.doi.org/10.1119/1.11224.
 (7) C. Cao, S. M. Carroll, and S. Michalakis, “Space from Hilbert Space: Recovering Geometry from Bulk Entanglement,” Phys. Rev. D95 no. 2, (2017) 024031, arXiv:1606.08444 [hep-th].
 (8) J. S. Cotler, G. R. Penington, and D. H. Ranard, “Locality from the Spectrum,” arXiv:1702.06142 [quant-ph].
 (9) A. A. Migdal, “Gauge Transitions in Gauge and Spin Lattice Systems,” Sov. Phys. JETP 42 (1975) 743. [Zh. Eksp. Teor. Fiz.69,1457(1975)].
 (10) B. Schumacher, “Quantum coding,” Phys. Rev. A 51 (Apr, 1995) 2738–2747. https://link.aps.org/doi/10.1103/PhysRevA.51.2738.
 (11) N. Datta, J. M. Renes, R. Renner, and M. M. Wilde, “Oneshot lossy quantum data compression,” IEEE Transactions on Information Theory 59 no. 12, (Dec, 2013) 8057–8076.
 (12) W. B. Johnson and J. Lindenstrauss, “Extensions of Lipschitz mappings into a Hilbert space,” Conference on Modern Analysis and Probability 26 (1984) 189–206. http://dx.doi.org/10.1090/conm/026/737400.
 (13) S. Dasgupta and A. Gupta, “An elementary proof of a theorem of Johnson and Lindenstrauss,” Random Structures and Algorithms 22 no. 1, (2003) 60–65. http://dx.doi.org/10.1002/rsa.10073.
 (14) A. W. Harrow, A. Montanaro, and A. J. Short, “Limitations on quantum dimensionality reduction,” in Automata, Languages and Programming, L. Aceto, M. Henzinger, and J. Sgall, eds., pp. 86–97. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
 (15) M. Plesch and V. Bužek, “Efficient compression of quantum information,” Phys. Rev. A 81 (Mar, 2010) 032317. https://link.aps.org/doi/10.1103/PhysRevA.81.032317.
 (16) R. Chao, B. W. Reichardt, C. Sutherland, and T. Vidick, “Overlapping qubits,” arXiv:1701.01062 [quant-ph].
 (17) L. A. Rozema, D. H. Mahler, A. Hayat, P. S. Turner, and A. M. Steinberg, “Quantum data compression of a qubit ensemble,” Phys. Rev. Lett. 113 (Oct, 2014) 160504. https://link.aps.org/doi/10.1103/PhysRevLett.113.160504.
 (18) J. A. Vaccaro, Y. Mitsumori, S. M. Barnett, E. Andersson, A. Hasegawa, M. Takeoka, and M. Sasaki, “Quantum data compression,” in Stochastic Algorithms: Foundations and Applications, A. Albrecht and K. Steinhöfel, eds., pp. 98–107. Springer Berlin Heidelberg, Berlin, Heidelberg, 2003.
 (19) J. Shlens, “A tutorial on principal component analysis: Derivation, discussion and singular value decomposition.” https://www.cs.princeton.edu/picasso/mats/PCATutorialIntuition_jp.pdf.
 (20) C. XiYao and G. F. Tuthill, “Quantum decimation for spin-1/2 chains in a magnetic field,” Phys. Rev. B 32 (Dec, 1985) 7280–7289. http://link.aps.org/doi/10.1103/PhysRevB.32.7280.
 (21) C. Castellani, C. Di Castro, and J. Ranninger, “Decimation Approach in Quantum Systems,” Nucl. Phys. B200 (1982) 45–60.
 (22) T. Matsubara, C. Totsuji, and C. J. Thompson, “Quantum cluster decimation,” Journal of Physics A: Mathematical and General 24 no. 19, (1991) 4599. http://stacks.iop.org/03054470/24/i=19/a=023.
 (23) M. Tegmark, “Consciousness as a State of Matter,” Chaos Solitons Fractals 76 (2015) 238–270, arXiv:1401.1219 [quant-ph].
 (24) S. M. Carroll and A. Singh, “Quantum Mereology: Factorizing Hilbert Space Into Subsystems with Quasi-Classical Dynamics,” in preparation.
 (25) F. Piazza, “Glimmers of a pre-geometric perspective,” Found. Phys. 40 (2010) 239–266, arXiv:hep-th/0506124 [hep-th].
 (26) S. R. White, “Density matrix formulation for quantum renormalization groups,” Phys. Rev. Lett. 69 (1992) 2863–2866.
 (27) G. Vidal, “Entanglement Renormalization,” Phys. Rev. Lett. 99 no. 22, (2007) 220405, arXiv:cond-mat/0512165 [cond-mat].
 (28) O. Di Matteo, L. L. Sanchez-Soto, G. Leuchs, and M. Grassl, “Coarse graining the phase space of n qubits,” arXiv:1701.08630 [quant-ph].
 (29) P. Busch and R. Quadt, “Concepts of coarse graining in quantum mechanics,” International Journal of Theoretical Physics 32 no. 12, (1993) 2261–2269. http://dx.doi.org/10.1007/BF00672998.
 (30) R. Quadt and P. Busch, “Coarse graining and the quantum—classical connection,” Open Systems & Information Dynamics 2 no. 2, (1994) 129–155. http://dx.doi.org/10.1007/BF02228961.
 (31) P. Faist, “Quantum coarse-graining: An information-theoretic approach to thermodynamics,” arXiv:1607.03104 [quant-ph].
 (32) Y. S. Teo, J. Řeháček, and Z. Hradil, “Coarse-grained quantum state estimation for noisy measurements,” Phys. Rev. A 88 (Aug, 2013) 022111. http://link.aps.org/doi/10.1103/PhysRevA.88.022111.
 (33) C. Agon, V. Balasubramanian, S. Kasko, and A. Lawrence, “Coarse Grained Quantum Dynamics,” arXiv:1412.3148 [hep-th].
 (34) U. Schollwock, “The densitymatrix renormalization group,” Rev. Mod. Phys. 77 (2005) 259–315.
 (35) S. R. White, “Densitymatrix algorithms for quantum renormalization groups,” Phys. Rev. B48 (1993) 10345–10356.
 (36) S. R. White and D. J. Scalapino, “Density matrix renormalization group study of the striped phase in the 2D t-J model,” Phys. Rev. Lett. 80 (Feb, 1998) 1272–1275. http://link.aps.org/doi/10.1103/PhysRevLett.80.1272.
 (37) S. R. White and D. J. Scalapino, “Energetics of domain walls in the 2D t-J model,” Phys. Rev. Lett. 81 (Oct, 1998) 3227–3230. http://link.aps.org/doi/10.1103/PhysRevLett.81.3227.
 (38) G. Vidal, “Class of Quantum ManyBody States That Can Be Efficiently Simulated,” Phys. Rev. Lett. 101 (2008) 110501.
 (39) G. Vidal, Understanding Quantum Phase Transitions, ch. Entanglement Renormalization: an introduction. Taylor and Francis, 2010. http://www.arxiv.org/abs/0912.1651.
 (40) J. C. Gaite, “Angular quantization and the density matrix renormalization group,” Mod. Phys. Lett. A16 (2001) 1109–1116, arXiv:cond-mat/0106049 [cond-mat].
 (41) J. C. Gaite, “Entanglement entropy and the density matrix renormalization group,” arXiv:quant-ph/0301120.
 (42) J. I. Latorre, E. Rico, and G. Vidal, “Ground state entanglement in quantum spin chains,” Quant. Inf. Comput. 4 (2004) 48–92, arXiv:quant-ph/0304098 [quant-ph].
 (43) T. J. Osborne and M. A. Nielsen, “Entanglement, quantum phase transitions, and density matrix renormalization,” Quantum Information Processing 1 no. 1, (2002) 45–53. http://dx.doi.org/10.1023/A:1019601218492.