Finite representations of continuum environments

Abstract

Understanding dissipative and decohering processes is fundamental to the study of quantum systems. An accurate and generic method for investigating these processes is to simulate both the system and environment, which, however, is computationally very demanding. We develop a novel approach to constructing finite representations of the environment based on the influence of different frequency scales on the system’s dynamics. As an illustration, we analyze a solvable model of an optical mode decaying into a reservoir. The influence of the environment modes is constant for small frequencies, but drops off rapidly for large frequencies, allowing for a very sparse representation at high frequencies that gives a significant computational speedup in simulating the environment. This approach provides a general framework for simulating open quantum systems.

PACS numbers: 02.70.-c, 05.10.Cc, 05.90.+m

How quantum systems behave when in contact with an environment has been the subject of a tremendous amount of research (Weiss, 1993; Leggett et al., 1987; Zurek, 2003). Many interesting physical scenarios can be cast in this form, such as transport through nanoscale systems (Chen and Di Ventra, 2005), interaction between local magnetic moments and conduction electrons in the Kondo problem (Kirchner and Si, 2008), charge transfer in biomolecules (Zwolak and Di Ventra, 2008), decoherence of qubits (Merkli et al., 2007), relaxation and decay (Longhi, 2006), dissipative quantum phase transitions (Meidan et al., 2007), and the quantum-to-classical transition (Zurek, 2003). Techniques to study “open” systems thus have a wide range of applicability and influence.

Analytical techniques to study these systems are generally based on integrating out the environment (Keldysh, 1965; Kadanoff and Baym, 1962; Feynman and Vernon, 1963). However, to find solutions, more often than not one has to resort to a series of uncontrolled approximations. Numerical techniques, then, are the key to accurate results for many physical systems. On the one hand, Monte Carlo simulations can be used to calculate system properties directly from the path-integral representation where the environment has been integrated out (Werner and Troyer, 2005). On the other hand, the numerical renormalization group (NRG) approach is based on simulating the system and environment by choosing a finite representation of the continuum environment (Wilson, 1975; Bulla et al., 2008). This technique uses a logarithmic discretization of the environment’s spectral density (Wilson, 1975; Bulla et al., 2008; Campo and Oliveira, 2005), which enforces a flow of the low energy spectrum as one successively incorporates lower energy degrees of freedom of the environment. However, since a variational matrix-product state (MPS) approach (Verstraete et al., 2005) does not require separating energy scales, one can ask a very fundamental question: how do different fractions of the environment influence the system’s dynamics, and how can this be used to construct efficient representations of the environment?

Figure 1: Schematic of a system connected to an environment. The system (top sphere) is connected to a continuum, or very finely spaced, environment, as shown on the lowest level. However, high frequency modes have less influence over the system’s dynamics, allowing for a very sparse representation by grouping modes together.

We get insight into the answer by developing a novel approach for constructing a finite representation of the environment based on the influence of environment modes (hereafter just referred to as modes) at different frequency scales. We illustrate this strategy by examining a solvable model of an optical cavity mode decaying into a reservoir, where we can obtain exactly the dynamics of the system connected to a discrete environment composed of evenly spaced modes, see Fig. 1. We show that there is a frequency window centered around the system’s frequency where the mode influence is constant. Outside this window, however, the mode influence drops off as $\Delta\omega/\omega^2$, where $\Delta\omega$ is the mode spacing and $\omega$ is the mode frequency. Thus, instead of using evenly spaced modes, we use an alternative, frequency-dependent discretization that enforces the influence of modes at large frequencies to be constant. With this discretization, the computational cost of the simulation is significantly reduced.

We begin with a general description of the problem. A generic Hamiltonian of a system connected to an environment is

$H = H_S + S \sum_k c_k \left( b_k^\dagger + b_k \right) + \sum_k \omega_k b_k^\dagger b_k ,$   (1)

where $H_S$ is the system Hamiltonian and $S$ is some system operator. This represents a system connected to a non-interacting set of environment modes, coupled linearly in the environment operators. For this type of connection, the spectral function

$J(\omega) = \sum_k c_k^2 \, \delta(\omega - \omega_k)$   (2)

completely defines the couplings and modes of the environment. Typically the spectral function is taken as some continuous function of frequency to indicate that, for all practical purposes, the environment is infinite compared to the system. To make simulating the system and environment a viable technique, one needs a controlled procedure for performing the mapping $J(\omega) \to \{ (\omega_k, c_k) \}$, where the set of environment modes $\{\omega_k\}$ is finite and each mode has the associated coupling constant $c_k$. Our starting point for finding an efficient mapping is to define a measure of error for an approximation of the environment. Quite generally one is interested in system properties only; we thus use the error measure

$\epsilon = \max_{0 \le t \le T} \left\| \tilde{\rho}(t) - \rho(t) \right\| ,$   (3)

where $T$ is the simulation time, $\tilde{\rho}$ is the reduced density matrix of the system in the presence of the finite representation (of size $N$) of the environment, and $\rho$ is the exact reduced density matrix of the system in the presence of the continuum environment [1]. We also define a measure of mode influence as

$I(\omega) = \max_{0 \le t \le T} \left\| \rho_{\bar{\omega}}(t) - \rho(t) \right\| ,$   (4)

where $\rho_{\bar{\omega}}$ is the reduced density matrix in the presence of all but one of the modes, the mode at frequency $\omega$ being removed. The intuition behind using Eq. (4) is that modes with a small influence should be removable in a controllable manner [2]. Having a guide such as Eq. (4) is crucial because an efficient choice of modes is going to depend on many factors: whether one wants equilibrium or real-time properties, the time/temperature of the simulation, the initial conditions (such as the initial excitation of the system and the temperature of the environment), the Hamiltonian under consideration, etc. As a case in point, we recently showed that environments which give polynomially decaying memory kernels naturally motivate a logarithmic discretization of the kernel in time, and that a property called increasing smoothness provides a guide for more generally determining an efficient discretization (Zwolak, 2008).
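As a concrete (hypothetical) illustration of the mapping $J(\omega) \to \{ (\omega_k, c_k) \}$, the sketch below discretizes a spectral density on an even grid, assigning each mode the spectral weight of its bin, $c_k^2 = J(\omega_k)\,\Delta\omega$; the function names and the flat spectral density are illustrative assumptions, not part of the original text.

```python
import numpy as np

def discretize_linear(J, omega_max, n_modes):
    """Map a spectral density J(omega) onto n_modes evenly spaced modes.

    Each mode sits at the midpoint of its bin and absorbs the bin's
    spectral weight, c_k^2 = J(omega_k) * delta, so that sum_k c_k^2
    approximates the integral of J up to the cutoff omega_max.
    """
    delta = omega_max / n_modes
    omegas = delta * (np.arange(n_modes) + 0.5)  # bin midpoints
    couplings = np.sqrt(J(omegas) * delta)
    return omegas, couplings

# Flat spectral density, as in the example analyzed in the text.
J_flat = lambda omega: 0.05 * np.ones_like(omega)
omegas, couplings = discretize_linear(J_flat, omega_max=50.0, n_modes=500)
```

Summing $c_k^2$ then reproduces the total spectral weight up to the cutoff, which is the sense in which the finite set represents the continuum.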

To further describe and illustrate the approach, let us analyze an example of a single optical cavity mode (Carmichael, 1993) decaying into a reservoir with the Hamiltonian [3]

$H = \omega_c a^\dagger a + \sum_k c_k \left( a^\dagger b_k + b_k^\dagger a \right) + \sum_k \omega_k b_k^\dagger b_k ,$   (5)

where $a$ ($b_k$) is the annihilation operator of the cavity (reservoir) mode. We consider the initial state defined by the correlation functions $\langle a^\dagger(0) a(0) \rangle = 1$ and $\langle b_k^\dagger(0) b_k(0) \rangle = 0$, and follow the evolution of the number of particles in the system, $N(t) = \langle a^\dagger(t) a(t) \rangle$, which determines the reduced density matrix of the system by $\rho(t) = N(t) \, |1\rangle\langle 1| + \left[ 1 - N(t) \right] |0\rangle\langle 0|$. We also take a constant spectral function, $J(\omega) = J$. The exact solution in the continuum limit is the exponential decay $N(t) = N(0) e^{-\Gamma t}$, with $\Gamma = 2\pi J$. The nice property of this model, however, is that we can solve exactly for the system dynamics in the presence of discrete, evenly spaced modes (Milonni et al., 1983). The solution for the system connected to an infinite set of modes with spacing $\Delta\omega$ is

$N(t) = N(0) \left[ e^{-\Gamma t} + \theta\!\left( t - 2\pi/\Delta\omega \right) C(t) \right] ,$   (6)

where $\theta$ is the step function and $C(t)$ is a sum of correction terms of order one. That is, for an infinite set of evenly spaced modes, the dynamics of the system is exact up to the time $2\pi/\Delta\omega$. Based on this nice property we can now examine two approaches to constructing a finite representation of the environment.

Linear Discretization - Since the simulation is exact for evenly spaced modes with $\Delta\omega \le 2\pi/T$, the error incurred in constructing a finite representation is going to be due to the choice of frequency cutoff $\omega_{\max}$. In the continuum limit, imposing a cutoff gives an error, from Eq. (3), of

$\epsilon \propto \frac{\Gamma}{\omega_{\max}} = \frac{\Gamma T}{2\pi N} ,$   (7)

to highest order in $\Gamma/\omega_{\max}$; the second expression uses the largest spacing possible to get a controllable error, $\Delta\omega = 2\pi/T$. This gives a direct approach to constructing a finite representation of the environment. Given the simulation time $T$, one simply uses a mode spacing of $2\pi/T$ and uses the bandwidth as a control parameter to approach the exact dynamics within the time $T$. Below we will compare this error behavior with that of an alternative, influence discretization (ID).
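The linear-discretization recipe can be checked numerically in the single-excitation sector, where the Hamiltonian (5) reduces to an $(N+1)$-dimensional symmetric matrix acting on the amplitudes (cavity, mode$_1$, ..., mode$_N$). The sketch below uses assumed, illustrative parameter values, and takes the golden-rule rate $\Gamma = 2\pi J$ for the convention $J(\omega) = \sum_k c_k^2 \delta(\omega - \omega_k)$.

```python
import numpy as np

# Single cavity mode at omega_c coupled to evenly spaced reservoir modes on
# [0, 8] with a flat spectral density J; in the single-excitation sector the
# dynamics reduces to an (N+1)x(N+1) symmetric matrix for the amplitudes.
omega_c, J, delta = 4.0, 0.05, 0.05
omegas = delta * (np.arange(int(8.0 / delta)) + 0.5)
couplings = np.full_like(omegas, np.sqrt(J * delta))  # c_k^2 = J * delta

n = len(omegas)
H = np.zeros((n + 1, n + 1))
H[0, 0] = omega_c
H[0, 1:] = H[1:, 0] = couplings
H[1:, 1:][np.diag_indices(n)] = omegas

vals, vecs = np.linalg.eigh(H)
w0 = vecs[0, :]  # overlaps of the eigenstates with the excited-cavity state

def occupation(t):
    """Survival probability N(t) = |<1| exp(-iHt) |1>|^2 of the excitation."""
    return abs(np.sum(w0**2 * np.exp(-1j * vals * t))) ** 2

gamma = 2 * np.pi * J        # golden-rule decay rate (assumed convention)
t_rec = 2 * np.pi / delta    # recurrence time set by the mode spacing
```

Before the recurrence time $2\pi/\Delta\omega$ the computed occupation tracks the exponential decay; pushing the simulation past that time exposes the revivals that force a smaller spacing for longer simulations.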

Influence Discretization - The previous approach treats all of the modes within the bandwidth on an equal footing, i.e., they are all coupled with the same strength to the system. However, it does implicitly recognize that high-frequency modes matter less and can be truncated with a controllable error. Thus, we want to develop a method that explicitly recognizes that some frequency scales matter less, i.e., have a smaller influence on the system dynamics. To do this, let us solve for the influence, Eq. (4). In the presence of an infinite bandwidth of evenly spaced modes, the equation of motion for the cavity operator $a(t)$, valid up to the time $2\pi/\Delta\omega$, is

$\dot{a}(t) = -\left( i\omega_c + \Gamma/2 \right) a(t) .$   (8)

If we remove a single mode at frequency $\omega$, the equation of motion becomes

$\dot{a}(t) = -\left( i\omega_c + \Gamma/2 \right) a(t) + c_\omega^2 \int_0^t dt' \, e^{-i\omega(t-t')} a(t') ,$   (9)

which is again valid up to the time $2\pi/\Delta\omega$, and the constant $c_\omega^2 = J\Delta\omega$ is the square of the coupling constant to the mode at frequency $\omega$. The solution is

$a(t) \approx a(0) \left[ \left( 1 - B_\omega \right) e^{-(z - \delta_\omega) t} + B_\omega \, e^{-(i\omega + \delta_\omega) t} \right] ,$   (10)

where $z = i\omega_c + \Gamma/2$, $\delta_\omega = c_\omega^2/(i\omega - z)$, and $B_\omega = c_\omega^2/(i\omega - z)^2$. Since we have the time evolution of the operator $a(t)$, we can now directly calculate the influence in Eq. (4). We plot this influence as a function of frequency for several times in Fig. 2.

Figure 2: Single mode influence versus frequency for several simulation times. For large frequencies the influence drops off as $1/\omega^2$, and for small frequencies it is constant within some frequency window. The inset shows the size of this frequency window, which drops off as $1/t$ for short times and approaches a constant for long times.
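The influence can also be evaluated numerically, without the perturbative solution, by removing a mode and re-propagating: for this model the trace distance between the reduced density matrices is just $|N(t) - N_{\bar{\omega}}(t)|$. The self-contained sketch below (assumed, illustrative parameters) compares the influence of a near-resonant mode with modes far above resonance.

```python
import numpy as np

def occupation(omega_c, omegas, couplings, times):
    """Cavity occupation N(t) in the single-excitation sector of Eq. (5)."""
    n = len(omegas)
    H = np.zeros((n + 1, n + 1))
    H[0, 0] = omega_c
    H[0, 1:] = H[1:, 0] = couplings
    H[1:, 1:][np.diag_indices(n)] = omegas
    vals, vecs = np.linalg.eigh(H)
    w0 = vecs[0, :]  # overlaps of the eigenstates with the cavity state
    amps = (w0**2 * np.exp(-1j * np.outer(times, vals))).sum(axis=1)
    return np.abs(amps) ** 2

def mode_influence(omega_c, omegas, couplings, times, k):
    """Influence of mode k: the maximal change in N(t) over the time window
    when mode k is removed (the trace distance for this model)."""
    full = occupation(omega_c, omegas, couplings, times)
    keep = np.arange(len(omegas)) != k
    part = occupation(omega_c, omegas[keep], couplings[keep], times)
    return np.max(np.abs(full - part))

# Evenly spaced modes on [0, 20] with a flat spectral density; the cavity
# sits at omega_c = 2, and times stay well below the recurrence time.
delta, J, omega_c = 0.1, 0.05, 2.0
omegas = delta * (np.arange(200) + 0.5)
couplings = np.full_like(omegas, np.sqrt(J * delta))
times = np.linspace(0.0, 20.0, 201)
```

The near-resonant mode dominates, while modes far outside the frequency window have an influence orders of magnitude smaller, consistent with the rapid high-frequency falloff.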

There are two crucial observations: (i) the influence is constant for small frequencies, within a frequency window whose width is proportional to $1/t$ at short times, and (ii) for large frequencies the influence drops off as $1/\omega^2$. This suggests an uneven spacing of modes that is approximately constant at small frequencies and switches over to a spacing proportional to $\omega^2$ for large frequencies. The simplest mode spacing [4] that has this behavior is

$\Delta(\omega) = \Delta_0 \left[ 1 + \left( \omega/\omega_0 \right)^2 \right] .$   (11)

This spacing results in a constant influence, proportional to $\Delta_0$, at large frequencies [5]. It also enables the treatment of both the truncated modes and the other high frequency modes on a similar footing. That is, beyond some frequency one cannot choose any more modes, and thus the parameter $\Delta_0$ defines a natural cutoff: as we decrease $\Delta_0$, we both increase the number of modes that are approximately evenly spaced at low frequency and increase the cutoff. From Eq. (7), the error due to the cutoff, which corresponds to its influence, is also proportional to $\Delta_0$. Thus, the choice of mode spacing (11) enforces the high-frequency modes to be treated equally based on influence rather than on coupling.
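To see what such a frequency-dependent spacing buys, the sketch below generates a mode grid whose spacing is constant at low frequencies and grows as the frequency squared beyond a crossover, in the spirit of Eq. (11); the parameter values are illustrative assumptions.

```python
import numpy as np

def influence_grid(delta0, omega0, omega_max):
    """Mode frequencies with spacing delta0 * (1 + (omega/omega0)^2).

    The spacing is roughly constant (delta0) below the crossover frequency
    omega0 and grows as omega^2 above it, so only a bounded number of modes
    fits below any cutoff omega_max.
    """
    grid = []
    omega = delta0 / 2.0
    while omega < omega_max:
        grid.append(omega)
        omega += delta0 * (1.0 + (omega / omega0) ** 2)
    return np.array(grid)

delta0, omega0, omega_max = 0.1, 5.0, 100.0
uneven = influence_grid(delta0, omega0, omega_max)
n_even = int(omega_max / delta0)  # evenly spaced count to the same cutoff
```

Because the spacing grows as $\omega^2$, the integral of the inverse spacing converges, so the uneven grid covers the same bandwidth with roughly an order of magnitude fewer modes than the even one.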

We can also obtain an expression for the error behavior when choosing a number of modes based on Eq. (11). A spacing given by Eq. (11) results in approximately $(\omega_0/\Delta_0) \arctan(\omega_{\max}/\omega_0)$ modes between $0$ and the cutoff $\omega_{\max}$, compared with $\omega_{\max}/\Delta_0$ evenly spaced modes for the same frequency cutoff. In the process of sparsifying the high frequency modes (see Fig. 1), groupings of modes have to be performed, each with an error proportional to $\Delta_0$, which is of the same order as the truncation error. Starting from the error of the evenly spaced discretization, Eq. (7), one obtains the error of the ID

$\epsilon \propto \frac{\Gamma T}{N^2} .$   (12)

We show the results of simulations in Fig. 3 [6]. We recover the error scalings of Eqs. (7) and (12), and we further see that, except for very short simulation times and very small numbers of modes, the ID needs significantly fewer modes to achieve the same accuracy. In these simulations, the computational cost to simulate an environment of $N$ modes is proportional to $N$. Thus, given a desired error $\epsilon$, the ID reduces the computational cost from $O(1/\epsilon)$ to $O(1/\sqrt{\epsilon})$.

Figure 3: Simulation results for the error (Eq. (3)) versus the number of modes for several simulation times. For all the times, the evenly spaced approach (squares) gives an error that behaves as $1/N$, whereas using the ID (circles) gives $1/N^2$.

Beyond solvable models - The above approach provides a general framework for simulating “open,” many-body Hamiltonians. Ultimately one wants an efficient, finite representation of the environment and a controllable procedure for taking the continuum limit. Therefore, we suggest taking the general Hamiltonian (1) and first examining a solvable version of it, e.g., either its quadratic part or the part that commutes with the system-environment interaction. With this solvable version, one can compute exactly the mode influence under conditions (dynamics/temperatures) similar to those to be studied with the full Hamiltonian. This can then be used to construct a discretization and a continuum-limit procedure for the many-body case [7]. The latter ensures that even if the discretization is not as efficient, there will still be control over the errors. Then, as with NRG, one can use the Wilson-chain construction (Wilson, 1975) directly with MPS simulations (Vidal, 2003, 2004; Zwolak and Vidal, 2004; Verstraete et al., 2004), where one can deal with both evenly and unevenly spaced modes due to the variational nature of the MPS algorithms (Verstraete et al., 2005). This will be discussed in more detail in a later publication.
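As one concrete (assumed) ingredient of such a workflow, a discretized star environment can be mapped onto a Wilson-type chain by Lanczos tridiagonalization, so that the system couples only to the first chain site. The sketch below is a generic implementation of this standard construction, with illustrative inputs; it is not taken from the text.

```python
import numpy as np

def star_to_chain(omegas, couplings, n_chain):
    """Map a star environment (system coupled to every mode) onto a 1D chain
    via Lanczos tridiagonalization of the mode Hamiltonian, starting from the
    normalized coupling vector (the Wilson-chain construction).

    Returns chain on-site energies e_i and hoppings t_i, where t_0 is the
    system-to-chain coupling, t_0 = sqrt(sum_k c_k^2).
    """
    H = np.diag(omegas)
    t0 = np.linalg.norm(couplings)
    v = couplings / t0
    es, ts, basis = [], [t0], [v]
    for i in range(n_chain):
        w = H @ v
        alpha = v @ w
        es.append(alpha)
        # subtract projections onto all previous Lanczos vectors
        # (full reorthogonalization for numerical stability)
        for u in basis:
            w = w - (u @ w) * u
        beta = np.linalg.norm(w)
        if i == n_chain - 1 or beta < 1e-12:
            break
        ts.append(beta)
        v = w / beta
        basis.append(v)
    return np.array(es), np.array(ts)
```

When the chain length equals the number of star modes, the chain is unitarily equivalent to the original discretized environment; shorter chains give systematically truncated representations.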

Conclusions - We have developed a novel approach to constructing finite representations of continuum environments by using the single-mode influence, Eq. (4), as a guide to choosing a distribution of modes. In an illustrative case, we found that the influence drops off as $1/\omega^2$, which suggests a mode spacing growing as $\omega^2$ at high frequencies. This treats the high-frequency modes, including the truncation of modes beyond the artificially imposed cutoff, on a similar footing. For this case [8], we showed that the computational cost to achieve a given error $\epsilon$ is reduced from $O(1/\epsilon)$ to $O(1/\sqrt{\epsilon})$. It may be possible to further increase the computational efficiency by adding Markovian reservoirs to the environmental modes, where in some cases this exactly replicates the continuous environment (Garraway, 1997), or by including classical degrees of freedom in the dynamics (Sergi, 2005, 2007).

Acknowledgements.
We thank W. Zurek, G. Refael, G. Smith, F. M. Cucchietti, and P. Milonni for helpful comments. This research was supported in part by a Gordon and Betty Moore Fellowship at Caltech and by the U.S. Department of Energy through the LANL/LDRD Program.

Footnotes

  1. Alternative measures are appropriate if one is interested in a particular observable or the relative error.
  2. Since we plan to remove many modes, the spectral density of these modes must be included with the remaining modes. Another question one can ask is: if we remove two modes and replace them with one, what error is incurred? This error has similar behavior to the influence in Eq. (4).
  3. We remove the conserved term and take the system frequency to be large.
  4. The spectral function appears together with the mode spacing in the product $J(\omega)\Delta(\omega)$, suggesting a more general spacing $\Delta(\omega) \propto \omega^2/J(\omega)$ at large frequencies.
  5. We set $\Delta_0 \le 2\pi/T$ since low frequency modes will give large recurrence errors if their spacing is larger than $2\pi/T$.
  6. An additional numerical comparison shows that the spacing (11) is more efficient.
  7. We conjecture that including a time-dependent field within the solvable model will mimic the many-body interaction by creating a range of accessible states. Likewise, we stress that the spacing is going to be dependent on the particulars of the system, including how the environment modifies the system, e.g., whether it shifts energies, mixes states, etc.
  8. There will be a corresponding reduction in computational cost for MPS simulations that is harder to assess since the effect on the MPS dimension is unclear.

References

  1. U. Weiss, Quantum Dissipative Systems (World Scientific Publishing, 1993).
  2. A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Rev. Mod. Phys. 59, 1 (1987).
  3. W. H. Zurek, Rev. Mod. Phys. 75, 715 (2003).
  4. Y.-C. Chen and M. Di Ventra, Phys. Rev. Lett. 95, 166802 (2005).
  5. S. Kirchner and Q. Si, Phys. Rev. Lett. 100, 026403 (2008).
  6. M. Zwolak and M. Di Ventra, Rev. Mod. Phys. 80, 141 (2008).
  7. M. Merkli, I. M. Sigal, and G. P. Berman, Phys. Rev. Lett. 98, 130401 (2007).
  8. S. Longhi, Phys. Rev. Lett. 97, 110402 (2006).
  9. D. Meidan, Y. Oreg, and G. Refael, Phys. Rev. Lett. 98, 187001 (2007).
  10. L. V. Keldysh, Sov. Phys. JETP 20, 1018 (1965).
  11. L. P. Kadanoff and G. Baym, Quantum Statistical Mechanics (W. A. Benjamin, Inc., New York, 1962).
  12. R. P. Feynman and F. L. Vernon, Ann. Phys. - New York 24, 118 (1963).
  13. P. Werner and M. Troyer, Phys. Rev. Lett. 95, 060201 (2005).
  14. K. G. Wilson, Rev. Mod. Phys. 47, 773 (1975).
  15. R. Bulla, T. A. Costi, and T. Pruschke, Rev. Mod. Phys. 80, 395 (2008).
  16. V. L. Campo and L. N. Oliveira, Phys. Rev. B 72, 104432 (2005).
  17. F. Verstraete, A. Weichselbaum, U. Schollwöck, J. I. Cirac, and J. von Delft, cond-mat/0504305 (2005).
  18. M. Zwolak, cond-mat/0611412, to appear in Computational Science and Discovery (2008).
  19. H. J. Carmichael, An Open Systems Approach to Quantum Optics (Springer-Verlag, Berlin, 1993).
  20. P. W. Milonni, J. R. Ackerhalt, H. W. Galbraith, and M.-L. Shih, Phys. Rev. A 28, 32 (1983).
  21. G. Vidal, Phys. Rev. Lett. 91, 147902 (2003).
  22. G. Vidal, Phys. Rev. Lett. 93, 040502 (2004).
  23. M. Zwolak and G. Vidal, Phys. Rev. Lett. 93, 207205 (2004).
  24. F. Verstraete, J. J. Garcia-Ripoll, and J. I. Cirac, Phys. Rev. Lett. 93, 207204 (2004).
  25. B. M. Garraway, Phys. Rev. A 55, 4636 (1997).
  26. A. Sergi, Phys. Rev. E 72, 066125 (2005).
  27. A. Sergi, J. Phys. A: Math. Theor. 40, F347 (2007).