Quantum algorithms for simulated annealing
1 Problem Definition
This problem is concerned with the development of quantum methods to speed up classical algorithms based on simulated annealing (SA).
SA is a well-known and powerful strategy to solve discrete combinatorial optimization problems [1, 12]. The search space consists of configurations $\sigma \in \{0,1\}^n$, and the goal is to find an (optimal) configuration $\sigma^{\rm opt}$ that corresponds to the global minimum of a given cost function $E[\sigma]$. Monte Carlo implementations of SA generate a stochastic sequence of configurations via a sequence of Markov processes that converges to the low-temperature Gibbs (probability) distribution $\pi_\beta(\sigma) \propto e^{-\beta E[\sigma]}$. If $\beta$ is sufficiently large, sampling from the Gibbs distribution outputs an optimal configuration with large probability, thus solving the combinatorial optimization problem. The annealing process depends on the choice of an annealing schedule, which consists of a sequence of stochastic matrices (transition rules) $S(\beta_1), S(\beta_2), \ldots, S(\beta_\ell)$. Such matrices are determined, e.g., by using Metropolis-Hastings [2]. The real parameters $0 \le \beta_1 \le \beta_2 \le \cdots \le \beta_\ell$ denote a sequence of “inverse temperatures.” The implementation complexity of SA is given by $\ell$, the number of times that transition rules must be applied to converge to the desired Gibbs distribution (within arbitrary precision). Commonly, the stochastic matrices are sparse, and each list of nonzero conditional probabilities and corresponding configurations, $\{p_{\sigma\sigma'}, \sigma'\}$, can be efficiently computed on input $\sigma$. This implies an efficient Monte Carlo implementation of each Markov process. When a lower bound $\Delta$ on the spectral gap of the stochastic matrices (i.e., the difference between their two largest eigenvalues) is known, one can choose a linear schedule $\beta_k = k\,\delta\beta$, with an increment $\delta\beta$ determined by $\gamma$, an upper bound on the magnitude of the cost function $E$. The constants of proportionality depend on the error probability $\epsilon$, which is the probability of not finding an optimal solution after the $\ell$ transition rules have been applied. These choices result in a complexity for SA that scales with the inverse spectral gap, i.e., $\ell \in \tilde{O}(1/\Delta)$.
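As a concrete (classical) reference point, the Metropolis-Hastings loop described above can be sketched in a few lines of Python. The single-bit-flip proposal, the toy cost function counting 1-bits, and the particular linear schedule are illustrative assumptions, not part of the formal problem definition:

```python
import math
import random

def metropolis_step(sigma, beta, energy, rng):
    """One Metropolis-Hastings transition: propose a single-bit flip and
    accept it with probability min(1, exp(-beta * (E_new - E_old)))."""
    i = rng.randrange(len(sigma))
    proposal = sigma.copy()
    proposal[i] ^= 1
    dE = energy(proposal) - energy(sigma)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return proposal
    return sigma

def simulated_annealing(energy, n, betas, rng):
    """Apply one Metropolis transition per inverse temperature in `betas`,
    starting from a uniformly random n-bit configuration."""
    sigma = [rng.randrange(2) for _ in range(n)]
    for beta in betas:
        sigma = metropolis_step(sigma, beta, energy, rng)
    return sigma

# Toy cost: number of 1-bits; the unique optimum is the all-zeros string.
energy = lambda s: sum(s)
rng = random.Random(7)
n = 8
betas = [0.05 * k for k in range(1, 2001)]  # linear schedule beta_k = k * dbeta
best = simulated_annealing(energy, n, betas, rng)
```

If the schedule is slow enough relative to the spectral gap, the final configuration minimizes the cost with probability close to one.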
Quantum computers can theoretically solve some problems, such as integer factorization, more efficiently than classical computers [4]. This work addresses the question of whether quantum computers can also solve combinatorial optimization problems more efficiently than classical ones. The answer is satisfactory in terms of the scaling of the complexity with the spectral gap (see Sect. 2, Key Results). The complexity of a quantum algorithm is determined by the number of elementary steps needed to prepare a quantum state that allows one to sample from the Gibbs distribution after measurement. Similar to SA, such a complexity is given by the number of times a unitary corresponding to the stochastic matrix is used. For simplicity, we assume that the stochastic matrices are sparse and disregard the cost of computing each list of nonzero conditional probabilities and configurations, as well as the cost of computing $E[\sigma]$. We also assume that the space of configurations is represented by $n$-bit strings. Some assumptions can be relaxed.
INPUT: An objective function $E: \{0,1\}^n \to \mathbb{R}$, sparse stochastic matrices $S(\beta_1), \ldots, S(\beta_\ell)$ satisfying the detailed balance condition, a lower bound $\Delta$ on the spectral gap of the $S(\beta_k)$, and an error probability $\epsilon$.
OUTPUT: A random configuration $\sigma$ such that $\Pr(\sigma \in \Sigma^{\rm opt}) \ge 1-\epsilon$, where $\Sigma^{\rm opt}$ is the set of optimal configurations that minimize $E$.
2 Key Results
The main result is a quantum algorithm, referred to as quantum simulated annealing (QSA), that solves a combinatorial optimization problem with high probability using $\tilde{O}(1/\sqrt{\Delta})$ unitaries corresponding to the stochastic matrices $S(\beta_k)$. The quantum speedup is in the spectral gap, as $1/\sqrt{\Delta} \ll 1/\Delta$ when $\Delta \ll 1$.
Computationally hard combinatorial optimization problems typically manifest themselves in a spectral gap that decreases exponentially fast in $n$, the problem size. The quadratic improvement in the gap is then most significant in hard instances. This QSA is based on ideas and techniques from quantum walks and the quantum Zeno effect, where the latter can be implemented by evolution randomization [6]. Nevertheless, recent results on “spectral gap amplification” allow for other quantum algorithms that result in a similar complexity scaling [7].
2.1 Quantum walks for QSA
The configuration $\sigma_0$ represents a simple configuration, e.g., $0\ldots0$ (the all-zeros $n$-bit string), and $p_{\sigma\sigma'}$ are the entries of the stochastic matrix $S(\beta)$. QSA uses the unitary $B(\beta)$, which acts on two $n$-qubit registers as

$B(\beta)\,|\sigma\rangle|\sigma_0\rangle = |\sigma\rangle \sum_{\sigma'} \sqrt{p_{\sigma\sigma'}}\,|\sigma'\rangle .$  (1)

The other unitary matrices used by QSA are $\Lambda$, the permutation (swap) operator that transforms $|\sigma\rangle|\sigma'\rangle$ into $|\sigma'\rangle|\sigma\rangle$, and $R_0$, the reflection operator over $|\sigma_0\rangle$ in the second register.
The quantum walk is $W(\beta) = B^\dagger(\beta)\,\Lambda\,B(\beta)\,R_0$, and the detailed balance condition, $\pi_\beta(\sigma)\,p_{\sigma\sigma'} = \pi_\beta(\sigma')\,p_{\sigma'\sigma}$, implies

$W(\beta)\,|\psi_\beta\rangle = |\psi_\beta\rangle , \qquad |\psi_\beta\rangle = \sum_\sigma \sqrt{\pi_\beta(\sigma)}\,|\sigma\rangle|\sigma_0\rangle ,$  (2)

where $\pi_\beta(\sigma)$ are the probabilities given by the Gibbs distribution. ($B$, $W$, and $|\psi_\beta\rangle$ also depend on $\beta$ through the $p_{\sigma\sigma'}$.) The goal of QSA is to prepare the corresponding eigenstate $|\psi_\beta\rangle$ of $W(\beta)$ in Eq. 2, within certain precision, for the final inverse temperature $\beta = \beta_\ell$. A projective quantum measurement of the first register of such a state, in the configuration basis, outputs an optimal solution in the set $\Sigma^{\rm opt}$ with probability at least $1-\epsilon$.
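The role of detailed balance in Eq. 2 can be verified numerically on a small example. The following sketch (Python with NumPy; the three-state chain and its energies are made up for illustration) builds a Metropolis transition matrix and checks that $\sqrt{\pi_\beta}$ is the eigenvector with eigenvalue 1 of the symmetric matrix with entries $\sqrt{p_{\sigma\sigma'}\,p_{\sigma'\sigma}}$, the classical counterpart of the walk's fixed point:

```python
import numpy as np

# Metropolis chain on 3 configurations with energies E, at inverse temperature beta.
E = np.array([0.0, 1.0, 2.0])
beta = 1.0
n = len(E)

# Row-stochastic transition matrix: propose uniformly among the other states,
# accept with min(1, exp(-beta * dE)); leftover probability stays put.
S = np.zeros((n, n))
for x in range(n):
    for y in range(n):
        if x != y:
            S[x, y] = (1.0 / (n - 1)) * min(1.0, np.exp(-beta * (E[y] - E[x])))
    S[x, x] = 1.0 - S[x].sum()

pi = np.exp(-beta * E)
pi /= pi.sum()                      # Gibbs distribution pi_beta

# Detailed balance: pi_x S_xy = pi_y S_yx
assert np.allclose(pi[:, None] * S, (pi[:, None] * S).T)

# "Discriminant" matrix D_xy = sqrt(S_xy * S_yx); for a reversible chain,
# sqrt(pi) is its eigenvector with eigenvalue 1, mirroring the fact that the
# quantum walk W(beta) has the coherent Gibbs state as a +1 eigenstate.
D = np.sqrt(S * S.T)
assert np.allclose(D @ np.sqrt(pi), np.sqrt(pi))
```

This symmetric matrix is unitarily similar to $S(\beta)$, which is why the walk inherits the chain's spectral structure.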
2.2 Evolution randomization and QSA implementation
The QSA is based on the idea of adiabatic state transformations [6, 11]. For $\beta = 0$, the initial eigenstate of $W(0)$ is $|\psi_0\rangle = \frac{1}{\sqrt{2^n}}\sum_\sigma |\sigma\rangle|\sigma_0\rangle$, which can be prepared easily on a quantum computer. The purpose of QSA is then to drive this initial state towards the eigenstate $|\psi_{\beta_\ell}\rangle$ of $W(\beta_\ell)$ for the final inverse temperature, within given precision. This is achieved by applying the sequence of unitary operations $W^{m_\ell}(\beta_\ell) \cdots W^{m_2}(\beta_2)\,W^{m_1}(\beta_1)$ to the initial state (Fig. 1). In contrast to SA, the inverse-temperature increments may differ, but the initial and final inverse temperatures are also $0$ and $\beta_\ell$. This implies that the number of different inverse temperatures in QSA is also of order $\ell$, where the constant of proportionality depends on the precision. The nonnegative integers $m_k$ can be sampled randomly according to several distributions [6, 11]. One way is to obtain $m_k$ after sampling multiple (but constant) times from a uniform distribution on integers between $0$ and $M$, where $M \in O(1/\sqrt{\Delta})$. The average cost of QSA, i.e., the average total number of uses of the walk operators, is then of order $\ell/\sqrt{\Delta}$. One can use Markov’s inequality to avoid those (improbable) instances where the cost is significantly greater than the average cost. The QSA and the values of the constants are given in detail in Fig. 1.
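The effect of randomizing the powers $m_k$ can be illustrated with a two-dimensional toy model. In the eigenbasis of the walk at fixed $\beta$, applying $W^m$ multiplies each eigenstate by a phase, so averaging over random $m$ suppresses the coherence between eigenstates and approximates the projective measurement required by the Zeno effect. The phase gap and amplitudes below are arbitrary choices for illustration:

```python
import numpy as np

phi = 0.3                  # phase gap of the walk at this fixed beta (toy value)
M = 200                    # powers m sampled uniformly from {0, ..., M-1}
c = np.array([np.sqrt(0.9), np.sqrt(0.1)])  # amplitudes on the two eigenstates

# Average density matrix after applying W^m with random m; in the eigenbasis
# of W the evolution is diagonal, so only the coherence acquires phases.
rho = np.zeros((2, 2), dtype=complex)
for m in range(M):
    psi = c * np.exp(1j * np.array([0.0, phi]) * m)
    rho += np.outer(psi, psi.conj()) / M

# Randomization dephases the coherence: the off-diagonal term is suppressed
# (roughly by 1/(M*phi)) while the populations are untouched, which acts as
# an effective measurement onto the walk's eigenstates.
assert abs(rho[0, 1]) < 0.05
assert np.isclose(rho[0, 0].real, 0.9)
```

The smaller the phase gap, the larger $M$ must be for the coherence to average out, which is where the $O(1/\sqrt{\Delta})$ cost per temperature originates.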
Analytical properties of $W(\beta)$:
The quantum walk $W(\beta)$ has eigenvalues $e^{\pm i\varphi_k}$, with $\varphi_k = \arccos \lambda_k$ for the eigenvalues $\lambda_k$ of $S(\beta)$, in the relevant subspace. In particular, $\varphi_0 = 0$ for $\lambda_0 = 1$, and $\varphi_1 \ge \sqrt{2\Delta}$ [5, 7, 8, 9]. This implies that the relevant spectral gap for methods based on quantum adiabatic state transformations is of order $\sqrt{\Delta}$. The quantum speedup follows from the fact that the complexity of such methods, recently discussed in [6, 11, 13, 14], depends on the inverse of the relevant gap.
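The quadratic relation between the classical and quantum gaps can be checked directly: if $\lambda_1 = 1 - \Delta$, the phase $\varphi_1 = \arccos(1-\Delta)$ satisfies $\varphi_1 \ge \sqrt{2\Delta}$, and the bound is essentially tight for small $\Delta$. A quick numerical check:

```python
import numpy as np

# Classical gap Delta = 1 - lambda_1 versus quantum phase gap arccos(lambda_1):
# arccos(1 - Delta) >= sqrt(2 * Delta), so the walk's phase gap scales as
# sqrt(Delta) -- the source of the quadratic speedup.
for Delta in [1e-2, 1e-4, 1e-6]:
    phase_gap = np.arccos(1.0 - Delta)
    assert phase_gap >= np.sqrt(2.0 * Delta)
    assert phase_gap <= 1.1 * np.sqrt(2.0 * Delta)  # essentially tight for small Delta
```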
3 Applications
Like SA, QSA can be applied to solve general discrete combinatorial optimization problems [12]. QSA is often more efficient than exhaustive search in finding the optimal configuration. Examples of problems where QSA can be powerful include the simulation of equilibrium states of Ising spin glasses or Potts models, solving satisfiability problems, or solving the traveling salesman problem.
4 Open Problems
Some (classical) Monte Carlo implementations do not require varying an inverse temperature and apply the same (time-independent) transition rule $S$ to converge to the Gibbs distribution. The number of times the transition rule must be applied is the so-called mixing time, which depends on the inverse spectral gap of $S$ [15]. The development of quantum algorithms to speed up this type of Monte Carlo algorithm remains open. Also, the technique of spectral gap amplification [7] outputs a Hamiltonian $H(\beta)$ on input $S(\beta)$. The relevant eigenvalue of such a Hamiltonian is zero, and the remaining eigenvalues are $\pm\sqrt{\delta_k}$, where $\delta_k = 1 - \lambda_k$ are the eigenvalue gaps of $S(\beta)$. This opens the door to a quantum adiabatic version of the QSA, in which $\beta$ is changed slowly and the quantum system remains in an “excited” eigenstate of eigenvalue zero at all times. The speedup is also due to the increase in the eigenvalue gap, from $\Delta$ to $\sqrt{\Delta}$. Nevertheless, finding a different Hamiltonian path with the same gap, where the adiabatic evolution occurs within the lowest-energy eigenstates of the Hamiltonians, is an open problem.
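The role of the spectral gap for a time-independent transition rule can be seen on a small chain: the distance to the stationary distribution decays like $(1-\Delta)^t$, so the mixing time scales with $1/\Delta$ (up to logarithmic factors). A minimal check, with an arbitrary three-state reversible chain chosen for illustration:

```python
import numpy as np

# A small reversible, row-stochastic chain; its stationary distribution is pi.
S = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
pi = np.array([0.25, 0.5, 0.25])
assert np.allclose(pi @ S, pi)

# Spectral gap Delta = 1 - lambda_1 (difference of the two largest eigenvalues).
evals = np.sort(np.linalg.eigvals(S).real)[::-1]
Delta = evals[0] - evals[1]

# Starting from a fixed configuration, the deviation from pi after t steps
# is bounded (up to a constant) by (1 - Delta)**t.
p = np.array([1.0, 0.0, 0.0])
for t in range(30):
    p = p @ S
assert np.max(np.abs(p - pi)) < (1.0 - Delta) ** 30
```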
5 References
1. Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Science 220:671
2. Hastings WK (1970) Biometrika 57(1):97–109
3. Aldous DJ (1982) J Lond Math Soc s2-25:564
4. Shor P (1994) Proc 35th Annual Symp Found Comp Science
5. Somma R, Boixo S, Barnum H, Knill E (2008) Phys Rev Lett 101:130504
6. Boixo S, Knill E, Somma R (2009) Quantum Inf Comp 9:0833
7. Somma R, Boixo S (2013) SIAM J Comp 42:593
8. Ambainis A (2004) Proc 45th IEEE Symp Found Comp Science
9. Szegedy M (2004) Proc 45th IEEE Symp Found Comp Science
10. Magniez F, Nayak A, Roland J, Santha M (2007) Proc 39th Annual ACM Symp Theo Comp
11. Chiang HT, Xu G, Somma R (2014) Phys Rev A 89:012314
12. Cook WJ, Cunningham WH, Pulleyblank WR (1998) Combinatorial Optimization. J. Wiley and Sons, New York
13. Wocjan P, Abeyesinghe A (2008) Phys Rev A 78:042336
14. Boixo S, Knill E, Somma R (2010) arXiv:1005.3034
15. Cf. Levin DA, Peres Y, Wilmer EL, Markov Chains and Mixing Times. Available at: http://research.microsoft.com/en-us/um/people/peres/markovmixing.pdf