Using a Recurrent Neural Network to Reconstruct Quantum Dynamics of a Superconducting Qubit from Physical Observations


E. Flurin emmanuel.flurin@cea.fr    L. S. Martin    S. Hacohen-Gourgy    I. Siddiqi Department of Physics, University of California, Berkeley, CA 94720, U.S.A.
Center for Quantum Coherent Science, University of California, Berkeley CA 94720, USA.
Department of Physics, Technion - Israel Institute of Technology, Haifa 32000 Israel.
Quantronics Group, SPEC, CEA, CNRS, Université Paris-Saclay, CEA-Saclay, 91191 Gif-sur-Yvette, France.
December 29, 2018
Abstract

At its core, quantum mechanics is a theory developed to describe fundamental observations in the spectroscopy of solids and gases. Despite these practical roots, however, quantum theory is infamous for being highly counterintuitive, largely due to its intrinsically probabilistic nature. Neural networks have recently emerged as a powerful tool that can extract non-trivial correlations in vast datasets. They routinely outperform state-of-the-art techniques in language translation, medical diagnosis and image recognition. It remains to be seen if neural networks can be trained to predict stochastic quantum evolution without a priori specifying the rules of quantum theory. Here, we demonstrate that a recurrent neural network can be trained in real time to infer the individual quantum trajectories associated with the evolution of a superconducting qubit under unitary evolution, decoherence and continuous measurement from raw observations only. The network extracts the system Hamiltonian, measurement operators and physical parameters. It is also able to perform tomography of an unknown initial state without any prior calibration. This method has the potential to greatly simplify and enhance tasks in quantum systems such as noise characterization, parameter estimation, feedback and optimization of quantum control.

Figure 1: Recurrent neural network training from the raw data set. a. Schematic of the superconducting qubit dispersively coupled to a microwave cavity monitored by a high quantum efficiency Josephson Parametric Amplifier (JPA). The qubit is simultaneously driven on resonance at a Rabi rate Ω_R and dispersively monitored with a measurement strength k by a tone near the cavity resonance frequency. b. Data collected from the experimental system, comprising preparation, measurement outcomes and the continuous measurement record of the qubit, are directly streamed to an RNN, which provides a prediction of the measurement outcome. The weights of the RNN are updated at each iteration through stochastic gradient descent. c. The stochastic gradient descent aims at minimizing the cross-entropy loss function, which evaluates the distance between the prediction and the measurement outcome.

Quantum mechanics breaks dramatically with classical intuition, contradicting determinism and introducing many highly counterintuitive concepts, such as contextuality, non-classical correlations and the uncertainty principle. Despite its abstract mathematical framework, quantum mechanics can be formulated operationally as an extended information theory Chiribella2011 (), where the physical system is treated as a black box in which preparation and measurement combine to give the probabilities of experimental outcomes. The physical parameters are then estimated by averaging measurement outcomes on a large ensemble.

The time evolution of the state of an isolated quantum mechanical system is governed by the Schrödinger equation. However, realistic systems cannot be isolated perfectly, and the coupling to an environment brings about qualitatively different behavior that cannot be accounted for via the Schrödinger equation alone. If the system is monitored continuously, its dynamics is perturbed by the inevitable back-action induced by measurement. Although the system's evolution under measurement is stochastic, the measurement record faithfully reports the perturbation of the system with respect to the unperturbed coherent evolution. Consequently, the observer's knowledge of the wave-function can be updated using quantum filtering, the extraction of quantum information from a noisy signal. The stochastic time evolution of the wave function is the so-called quantum trajectory. Under certain approximations, this task can be performed by integrating the stochastic quantum master equation, provided that the Hamiltonian, dissipation and measurement operators are precisely calibrated Murch2013 (); Weber2014 (); Hacohen2016 (); Ficheux2018 ().

On the other hand, Recurrent Neural Networks (RNNs) are a powerful class of machine learning tools able to extract hidden correlations from large datasets Schuster1997 (). They are most commonly applied to time-binned data, and as such achieve excellent performance on difficult problems such as language translation Mikolov2010 () and speech recognition Graves2013 (). RNN training is driven by examples and performed without specifying dictionaries or linguistic rules. Interestingly, quantum filtering Bouten2007 () can be seen as a similar task in which noisy experimental signals must be translated into meaningful quantum information. Recently, various neural-network architectures have been used in the realm of quantum physics for predicting the theoretical quantum behavior of strongly correlated phases of matter Torlai2018 (); Wang2016 (); Carrasquilla2017 (); van2017 (); Carleo2017 (), the design of efficient quantum error correction codes Fosel2018 (), the decoding of large topological error-correcting codes Torlai2017 (); Krastanov2017 (); Baireuther2018 () and the optimization of dynamical decoupling schemes for quantum memories August2017 ().

In this Letter, we show that neural networks can be trained to predict stochastic quantum evolution from raw observation without specifying quantum mechanics a priori. We demonstrate that the RNN reproduces the stochastic quantum evolution for a continuously monitored superconducting qubit under a Rabi Hamiltonian. Rather than providing a black-box model, we use the neural network to robustly extract all physical parameters required for quantum filtering. Moreover, while RNNs are temporally oriented, they are routinely trained both in the forward and backward time ordering, so that the network may exploit both past and future information. In the present application, the use of past and future continuous measurement outcomes improves the estimation accuracy of quantum trajectories at a given time through a process called quantum smoothing Guevara2015 (); Tsang2009 (). We train a bidirectional RNN to perform forward-backward analysis of trajectories, enabling quantum smoothing of predictions and the faithful tomography of an unknown initial state. By treating preparation and measurement on the same footing, the RNN structure highlights the time symmetry underlying the stochastic quantum evolution.

I Experimental system

Our experiment consists of a superconducting transmon qubit Koch2007 () dispersively coupled to a superconducting waveguide cavity Paik2011 (). In the interaction picture and rotating wave approximation, our system is described by the Hamiltonian H = H_R + H_int,

H_R = (ħΩ_R/2) σ_x,   (1)
H_int = ħχ a†a σ_z,   (2)

where ħ is the reduced Planck's constant, a† (a) is the creation (annihilation) operator for the cavity mode, and σ_x, σ_y and σ_z are qubit Pauli operators. H_R describes a microwave drive at the qubit transition frequency which induces unitary evolution of the qubit state characterized by the Rabi frequency Ω_R. H_int is the interaction term, characterized by the dispersive coupling rate χ. This term describes a qubit-state-dependent frequency shift of the cavity, which we use to perform quantum state measurement of our qubit. The cavity is coupled to the transmission line at a rate κ. A microwave tone that probes the cavity near its resonance frequency acquires a qubit-state-dependent phase shift. If the measurement tone is very weak, quantum fluctuations of the electromagnetic mode fundamentally obscure this phase shift, resulting in a partial or weak measurement of the qubit state Murch2013 (). We use a near-quantum-limited parametric amplifier Hatridge2011 () to amplify the quadrature of the reflected signal which is proportional to the qubit-state-dependent phase shift. After further amplification, we digitize the signal in discrete time steps, yielding a measurement record V_t.

We begin each run of the experiment by heralding the ground state of the qubit using the above readout technique. We then prepare the qubit along one of the 6 cardinal points of the Bloch sphere by applying a preparation pulse. Next, a measurement tone at the cavity frequency continuously probes the cavity for a variable time, which weakly measures the qubit in the σ_z basis. Concurrently, we apply the Rabi Hamiltonian H_R. Finally, we apply pulses to perform qubit rotations and a projective measurement, yielding a single-shot measurement of a desired qubit operator σ_x, σ_y or σ_z.

Figure 2: RNN prediction of the quantum evolution. a. Blue-scale histogram of the normalized measurement records extracted from the experiment; traces plotted in color show representative instances. b. Red-scale histograms of the RNN prediction for each measurement basis b in the driven case, beginning from the prepared state indicated by the preparation label a; traces plotted in color show representative instances. c. Training validation: ensemble of RNN predictions leading to the same final prediction, indicated by the red marker. d. Comparison of the RNN prediction with the tomography (averaged measurement outcome). Inset: ensemble of projective measurement outcomes for the predicted ensemble.

II Quantum trajectories

To allow the neural network to operate as generally as possible, we formulate system inputs and outputs symmetrically, and avoid passing it objects such as a wave function that encode information about the structure of quantum theory. The role of the wave-function in quantum mechanics is to provide the probability of a measurement outcome given the preparation and evolution of the system at earlier times. In the case of a continuously monitored quantum bit, the preparation and measurement outcome are each a binary variable extracted through a projective readout performed at the initial and final times respectively; the preparation and measurement configurations, labeled a and b, encode microwave pulses performing qubit rotations for state preparation and tomography respectively in the σ_x, σ_y and σ_z bases. The stochastic measurement record V_t is collected with a high quantum efficiency parametric amplifier during the qubit evolution. Quantum trajectory theory describes how an observer's state of knowledge evolves given a measurement record Gambetta2008 (). Therefore, quantum trajectories are specified by P(m | b, a, {V_t'} for t' ≤ t), the probability of measuring the outcome m with the measurement setting b, given the initial preparation setting a and the stochastic measurement record up to a time t. Tracking this quantum evolution can be understood as a translation of the measurement records into a quantum state evolution. Fig. 2a shows the distribution of measurement records obtained for one preparation setting.

Quantum trajectories are typically extracted from continuous measurement by integrating the stochastic master equation (SME) governing the evolution of the density matrix ρ_t

dρ_t = −(i/ħ)[H, ρ_t] dt + k D[σ_z]ρ_t dt + √(2ηk) M[σ_z]ρ_t dW_t,   (3)

where D[σ_z]ρ = σ_z ρ σ_z − ρ is the Lindblad superoperator describing the qubit dephasing induced by the measurement of strength k, M[σ_z]ρ = (σ_z ρ + ρ σ_z)/2 − ⟨σ_z⟩ρ is a measurement superoperator describing the backaction of the measurement on the quantum state for a quantum efficiency η, and dW_t is a Gaussian-distributed variable with variance dt, extracted from the measurement record V_t normalized appropriately using

dW_t = √(2ηk) (V_t − ⟨σ_z⟩_t) dt.   (4)

The probability distribution for the projective outcome is then given by the Born rule P(m|b) = Tr[ρ_t Π_m^b], where Π_m^b is the projector associated with the outcome m in the measurement basis b. The integrated stochastic master equation provides faithful predictions when experimental parameters are precisely known from independent calibration, under the assumption that the cavity decay rate κ is much larger than the qubit measurement rate. Fig. 2a shows two representative trajectories extracted from the measurement records based on the stochastic master equation.
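For concreteness, the following is a minimal numerical sketch of how Eqs. (3)-(4) can be integrated with an Euler-Maruyama step. The function names, parameters (omega_r, k, eta, dt) and normalization convention follow the equations above and are illustrative, not the analysis code used in the experiment.

import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def D(L, rho):
    # Lindblad dissipator D[L]rho
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def M(L, rho):
    # Measurement back-action superoperator for a Hermitian operator L
    return 0.5 * (L @ rho + rho @ L) - np.trace(L @ rho).real * rho

def integrate_sme(record, rho0, omega_r, k, eta, dt):
    # Euler-Maruyama integration of Eq. (3), using the normalized
    # measurement record (one value V per time step) through Eq. (4).
    rho = rho0.copy()
    trajectory = []
    H = 0.5 * omega_r * sx                      # Rabi drive about the x axis (hbar = 1)
    for V in record:
        expect_z = np.trace(sz @ rho).real
        dW = np.sqrt(2 * eta * k) * (V - expect_z) * dt   # innovation, Eq. (4) convention
        drho = (-1j * (H @ rho - rho @ H) * dt
                + k * D(sz, rho) * dt
                + np.sqrt(2 * eta * k) * M(sz, rho) * dW)
        rho = rho + drho
        rho = 0.5 * (rho + rho.conj().T)        # keep Hermitian
        rho = rho / np.trace(rho).real          # renormalize
        trajectory.append(rho.copy())
    return trajectory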

III Recurrent Neural Network

Based solely on a large set of labeled examples directly extracted from the experimental system, we now demonstrate that the network can be trained to predict the probability of observing the measurement outcome given the history of the quantum evolution accessible to the observer, in other words the observer's best knowledge of the qubit wave-function.

We use a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) gers2000 () schematically depicted in Fig. 1b. These typically consist of a layer of neuron-like nodes recurrently updated in time. The state of the neuron layer at a time t is encoded in an N-dimensional vector h_t. It is computed from a weighted linear combination of the neuron-layer state at the previous time step and the measurement record at time t, passed through a non-linear activation function f, such that h_t = f(W h_{t-1} + W' V_t + c), where W, W' are the weights of the connections between the neurons and c the biases, respectively, which are determined during the training stage. The probability of getting the outcome m given the measurement setting b is computed at each time step as a linear combination of the neuron-layer state passed through a final activation function. The preparation settings and the initial qubit state (input bit) are specified in the initial state of the neuron layer. The neural network is trained to minimize a loss function by strengthening or weakening connections between neuron layers encoded in the weight matrices W, as shown in Fig. 1b. The cross-entropy loss function L is minimized when the prediction and the distribution of experimental outcomes for a given measurement setting match. Crucially, the function implemented by the neural network is differentiable, and therefore the weight matrices can be updated at each iteration of the training by differentiating the loss function and applying a gradient-descent minimization step W → W − λ ∂L/∂W, where λ is the learning rate. The training process ends once the weight matrices have converged toward a minimum of the loss function. The effectiveness of neural networks lies in their ability to converge toward a minimum of a very high dimensional non-linear loss landscape through gradient back-propagation, as illustrated in Fig. 1c.
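As an illustration of this architecture, a minimal Keras/TensorFlow sketch is given below. The layer sizes, input shapes and the choice to concatenate the preparation and measurement settings to the input at every time step (rather than encoding them in the initial layer state, as described above) are simplifying assumptions, not the authors' implementation.

import tensorflow as tf

def build_rnn(n_units=64):
    # The measurement record V_t is fed step by step; at each step the
    # network outputs the probability of the projective outcome m for the
    # chosen measurement setting b (one-hot encoded and appended to the input).
    record = tf.keras.Input(shape=(None, 1), name="measurement_record")
    settings = tf.keras.Input(shape=(None, 6), name="prep_and_meas_settings")
    x = tf.keras.layers.Concatenate()([record, settings])
    h = tf.keras.layers.LSTM(n_units, return_sequences=True)(x)
    p = tf.keras.layers.Dense(1, activation="sigmoid")(h)   # P(m = 1 | history)
    model = tf.keras.Model([record, settings], p)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy")                # cross-entropy loss
    return model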

Figure 3: RNN prediction and retrodiction of the quantum evolution. a. Red-scale histograms of the RNN prediction for the measurement basis in the driven case, beginning from the indicated preparation basis; traces plotted in color show representative instances. b. Blue-scale histogram of the normalized measurement records extracted from the experiment; traces plotted in color show representative instances. c. Red-scale histograms of the RNN retrodiction for the same measurement records. d. Comparison of the backward RNN prediction with the tomography (averaged measurement outcome). e. Red-scale histograms of smoothed RNN predictions based on the forward-backward analysis given by Eq. (5) for the same measurement records.

IV Training

The Long Short-Term Memory recurrent neural network comprises 64 neurons with a rectified linear unit activation function. This specific RNN architecture evades the exploding/vanishing gradient problem of standard RNN architectures, improving the learning of long-term dependencies Hochreiter2001 (). The neural network is implemented with the Tensorflow library abadi2016 () developed by Google and optimized for a Graphics Processing Unit (Nvidia Tesla K80 GPU), which speeds up the training. The data are fed to the network in batches, each containing 1024 measurement records, on which a step of the gradient descent is performed using the ADAM optimizer Kingma2014 (). The measurement records are split into two data sets: the bulk of the traces are used for the training, and a randomly chosen subset is held out for the evaluation and displayed in the manuscript. The training data can be re-injected into the network several times in order to improve the model accuracy; each of these training cycles corresponds to a training epoch, and in practice up to 10 training epochs have been performed. At each training epoch, the learning rate is lowered. In order to improve the training robustness, a fraction of the neurons is dropped out randomly during the first epoch, and this fraction is gradually lowered with each subsequent training epoch. This method prevents the network from over-fitting and helps the generalization abilities of the model Srivastava2014 (). Note that the training quality does not strongly depend on the details of these parameters. A key feature of the training is that it can be performed in real time directly from raw data collected from the experimental system: the training time per trace is on par with the experimental repetition time, so that the traces can be produced and fed to the RNN as they are acquired. 6 preparation settings and 6 measurement settings are used. In practice, we perform the preparation and measurement with rotations of the qubit that map the heralded ground state onto the cardinal points of the Bloch sphere; the associated preparation labels a and measurement labels b index these six settings. The total evolution time is varied over 20 values, and the measurement record is acquired during the qubit evolution with a fixed sampling time. Once the training is achieved, the RNN returns the prediction corresponding to the probability of measuring the qubit at a time t along the measurement axes σ_x, σ_y and σ_z.
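A schematic training loop using the hyper-parameters quoted above (batches of 1024 traces, the Adam optimizer, up to 10 epochs, a learning rate lowered at each epoch) might look as follows. The arrays records, settings and outcomes are random placeholders, the decay factor is illustrative, and build_rnn refers to the sketch of the previous section.

import numpy as np
import tensorflow as tf

# Dummy data with illustrative shapes: (traces, time steps, features)
n_traces, n_steps = 4096, 100
records = np.random.randn(n_traces, n_steps, 1).astype("float32")
settings = np.random.randint(0, 2, (n_traces, n_steps, 6)).astype("float32")
outcomes = np.random.randint(0, 2, (n_traces, n_steps, 1)).astype("float32")

model = build_rnn(n_units=64)

# Lower the learning rate at each epoch (decay factor is illustrative)
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.5)

model.fit([records, settings], outcomes,
          batch_size=1024,          # one gradient-descent step per batch of traces
          epochs=10,                # up to 10 training epochs
          validation_split=0.1,     # held-out traces for evaluation
          callbacks=[lr_schedule])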

V Validation

Once the RNN is trained, the predictions of the measurement outcomes form an ensemble of trajectories for each of the measurement settings, as shown in Fig. 2b. The predictions of the neural network are in good agreement with the representative trajectories integrated from the stochastic master equation. In this section, we demonstrate that the remaining discrepancies between the two predictions are in favor of the neural network. The accuracy of the training can be evaluated self-consistently on the evaluation dataset not used during the training. This method has been previously used to benchmark the prediction of the stochastic master equation Murch2013 (); Weber2014 (); Hacohen2016 (); Ficheux2018 (). We select the subset of the trajectories leading to the same prediction P within a small tolerance ε. Fig. 2c displays the agreement between the ensemble of trajectories ending in this subset and the histogram of the final measurement values. If the prediction is accurate, it should agree with the final tomographic measurement averaged over the projective outcomes of the trajectories in the subset. The overall agreement between the predictions and the tomography values can be quantified as a relative error averaged over all predictions. As shown in Fig. 2d, the RNN prediction gives a low relative error for all measurement axes. As a comparison, using the same evaluation data set, the prediction of the stochastic master equation based on the independently calibrated experimental parameters gives a higher relative error along some of the measurement axes. Such a discrepancy can be attributed to small calibration errors and experimental drifts. This self-consistent evaluation demonstrates the prediction power of the trained RNN and its robustness against calibration errors of physical parameters.
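The self-consistency check described above can be sketched as follows: trajectories are grouped by their predicted probability, and the prediction is compared to the empirical frequency of the projective outcomes within each group. The window width eps and the binning are illustrative choices, not the values used in the paper.

import numpy as np

def validation_error(predictions, outcomes, eps=0.02):
    # predictions: RNN outputs in [0, 1] at the final time, one per trajectory.
    # outcomes: corresponding 0/1 projective measurement results.
    errors = []
    for P in np.arange(eps, 1.0, 2 * eps):
        subset = np.abs(predictions - P) < eps      # trajectories predicting ~P
        if subset.sum() == 0:
            continue
        tomographic_avg = outcomes[subset].mean()   # Born-rule frequency in the subset
        errors.append(np.abs(P - tomographic_avg))
    return np.mean(errors)                          # average disagreement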

VI Bidirectional RNN

RNNs are inherently time oriented; the prediction at a time t only depends on the measurement record at earlier times. A common feature used to improve the prediction power of an RNN, for translation applications in particular, is to combine the predictions of two RNNs trained respectively forward and backward in time, exploiting the same data in both directions Schuster1997 (). The forward prediction provides the trajectory given the past measurement record and the preparation setting a, while the backward prediction provides the trajectory given the "future" measurement record played backward and the measurement setting b. As shown in Fig. 3a, the RNN provides an ensemble of backward trajectories. The accuracy of the backward prediction is evaluated using the same validation method as for the forward prediction: the subset of backward trajectories giving the same prediction must agree on average with the preparation measurement. The accuracy of the backward prediction is shown in Fig. 3b; the relative errors obtained for the three preparation settings are comparable to those of the forward prediction, so the overall accuracy is comparable. Remarkably, the backward and forward predictions do not necessarily agree at a given time; indeed these predictions are based on distinct parts of the measurement records. They provide complementary information from the past and future evolution of the system. These predictions can therefore be combined to enhance the knowledge of the quantum state based on the full measurement record. Backward-forward analysis is a well-established postprocessing method with recurrent neural networks Schuster1997 () as well as with hidden Markov chain methods Rabiner1989 (). Time-reversal symmetry underlies quantum evolution and exchanges the roles of state preparation and state measurement Aharonov1964 (). In a sense, backward-forward analysis naturally translates into the quantum regime as the prediction and retrodiction of quantum trajectories Gammelmark2013 (); Campagne2014 (); Tan2015 (). Quantum prediction and retrodiction can be combined based on quantum smoothing techniques Tsang2009 (); Guevara2015 (), enabling an enhancement of physical parameter estimation Rybarczyk2015 (); Tan2016 (). The forward prediction P_f and backward prediction P_b can be combined into a smoothed prediction P_fb by:

P_fb(m) = P_f(m) P_b(m) / [ P_f(m) P_b(m) + (1 − P_f(m))(1 − P_b(m)) ].   (5)

As depicted in Fig. 3c, the smoothed trajectories combine the backward and forward information such that they dismiss the least informative predictions (close to 1/2) and strengthen the most informative ones (close to 0 or 1). By removing ambiguities in the qubit evolution, we access information which is blurred by statistical uncertainties in the standard approach, and we observe an improved temporal resolution of the quantum jumps undergone by the qubit. The forward-backward analysis demonstrates how bidirectional RNNs naturally combine causal and anti-causal correlations hidden in the measurement records.
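A minimal sketch of the forward-backward combination of Eq. (5), applied element-wise to arrays of forward and backward predictions:

import numpy as np

def smooth(p_forward, p_backward):
    # Combine forward and backward predictions for a binary outcome with a
    # flat prior, as in Eq. (5): product of the two conditionals, renormalized.
    num = p_forward * p_backward
    den = num + (1.0 - p_forward) * (1.0 - p_backward)
    return num / den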

VII Initial state estimation

The roles of the preparation and measurement are treated symmetrically in the forward and backward predictions. Hence, while the forward RNN predicts the outcome of the final projective measurement, the backward RNN provides an estimation of the initial state of the system given the measurement record. These predictions can therefore be exploited to perform initial-state tomography; this task is reminiscent of the enhanced readout discrimination by machine learning demonstrated in Ref. magesan2015 (). For the state estimation, we do not specify the final projective measurement and we initialize the backward network with a maximally unknown state (outcome probability 1/2 along σ_x, σ_y and σ_z). Each backward trajectory provides up to 1 bit of information about the initial state Holevo1973 (). Combining this information using maximum-likelihood methods allows for reconstructing the initial state ρ_0. Here, the optimization consists in minimizing the following likelihood function over the initial state, following Ref. Six2016 ():

C(ρ_0) = − Σ_n log Tr[ρ_0 E_n],   (6)

where E_n denotes the retrodicted effect associated with the n-th backward trajectory.

As shown in Fig. 4a, we find agreement between the initial state estimation and the preparation within the confidence interval estimated with a bootstrapping method. This demonstrates that, despite the complicated dynamics, the combination of RNN backward predictions performs as a faithful qubit state tomography.
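The maximum-likelihood reconstruction can be sketched as below, assuming the cost function of Eq. (6): the initial state is parametrized by a Bloch vector constrained to the Bloch ball, and the retrodicted effects E_n are assumed to have been precomputed from the backward predictions. This is an illustrative optimization, not the exact procedure of Ref. Six2016 ().

import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2, dtype=complex)

def rho_from_bloch(r):
    x, y, z = r
    return 0.5 * (identity + x * sx + y * sy + z * sz)

def neg_log_likelihood(r, effects):
    # Cost of Eq. (6): -sum_n log Tr[rho_0 E_n], with E_n the retrodicted
    # effect associated with the n-th backward trajectory (assumed given).
    rho0 = rho_from_bloch(r)
    probs = np.array([np.trace(rho0 @ E).real for E in effects])
    return -np.sum(np.log(np.clip(probs, 1e-12, None)))

def estimate_initial_state(effects):
    # Maximum-likelihood Bloch vector, constrained to the Bloch ball.
    cons = {"type": "ineq", "fun": lambda r: 1.0 - np.dot(r, r)}
    res = minimize(neg_log_likelihood, x0=np.zeros(3), args=(effects,),
                   constraints=[cons])
    return res.x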

Figure 4: Parameter estimation of the quantum master equation and initial state tomography. a. State estimation: estimation of 6 initial state preparations (red circles) using maximum-likelihood estimation on backward RNN predictions initialized from an undetermined projective measurement outcome; the circle radius gives the confidence interval extracted from bootstrapping methods. b. Distribution of the RNN predictions in the σ_y and σ_z measurement bases for all times. c. Average drift of individual trajectories in the Bloch sphere: the vector map of the averaged evolution of RNN predictions in the σ_y and σ_z measurement bases between two consecutive time steps. This map captures the Hamiltonian evolution and the Lindbladian dissipation. d. Average diffusion of individual trajectories in the Bloch sphere: the computed vector map associated with the covariance of the predictions between two consecutive time steps in the σ_y and σ_z measurement bases. This map captures the measurement-induced backaction.

VIII Parameter estimation

The trajectories predicted by the trained RNN can be exploited to estimate physical parameters of the experimental system. In Fig. 4b, we plot the distribution of the forward RNN predictions in the (⟨σ_y⟩, ⟨σ_z⟩) plane for all times. This distribution exhibits a tilted-ellipse shape within the Bloch sphere (white circle); the great axis of the ellipse is along the σ_z axis, showing that the quantum trajectories tend to collapse toward the poles of the Bloch sphere, corresponding to the pointer states of the measurement operator. In the equatorial plane, the distribution is squeezed, indicating that the quantum state experiences a larger dephasing and loses purity. By performing a statistical analysis of the forward RNN predictions, we are able to reconstruct the physical parameters associated with the stochastic master equation describing the quantum evolution under continuous measurement. The stochastic master equation has two main contributions Gambetta2008 (): on one hand the dissipative evolution encodes the Hamiltonian evolution along with the decoherence, while on the other hand the measurement back-action describes the update of the quantum state given the stochastic measurement record. The dissipative evolution can be extracted from the forward prediction of the RNN by evaluating the average drift of individual trajectories. We compute the ensemble-averaged prediction change between two consecutive time steps versus position on the Bloch sphere, depicted in Fig. 4c. We observe a drift vector map in the Bloch sphere describing a rotation of the qubit state about the X-axis of the Bloch sphere, from which we extract the Rabi frequency Ω_R, together with an additional collapse of the state toward the Z-axis corresponding to the measurement-induced dephasing rate. The measurement-induced disturbance can also be extracted from the prediction of the RNN by evaluating the average diffusion of the individual trajectories Hacohen2016 (). We compute the covariance matrix associated with the prediction change between two consecutive time steps. The diffusion vector map is given by the eigenvectors of the covariance matrix weighted by its eigenvalues versus position in the Bloch sphere, as depicted in Fig. 4d. This vector map describes the magnitude and the direction of the disturbance induced by the measurement in the Bloch sphere. We observe that the disturbance is maximal along the equatorial plane of the Bloch sphere and vanishes at the poles. From this map, we extract the measurement rate along the Z-axis of the Bloch sphere. The quantum efficiency of our measurement, defined as the ratio of the measurement-induced dephasing and the measurement rate, follows directly. Note that the quantum efficiency is usually challenging to estimate and requires several steps of calibration. The estimated experimental parameters differ slightly from the independent calibrations, which is attributed to a residual detuning of the Rabi drive with respect to the qubit frequency.
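The drift and diffusion maps of Fig. 4c-d can be estimated with a simple binning analysis of the predicted trajectories, as sketched below; the grid size and the minimum number of samples per bin are arbitrary illustrative choices.

import numpy as np

def drift_and_diffusion(trajs, dt, bins=20):
    # trajs has shape (n_traj, n_time, 2), holding the predicted Bloch
    # components (<sigma_y>, <sigma_z>) of each trajectory at each time step.
    pts = trajs[:, :-1, :].reshape(-1, 2)           # positions at time t
    steps = np.diff(trajs, axis=1).reshape(-1, 2)   # increments to time t + dt
    edges = np.linspace(-1, 1, bins + 1)
    drift = np.full((bins, bins, 2), np.nan)
    diffusion = np.full((bins, bins, 2, 2), np.nan)
    iy = np.digitize(pts[:, 0], edges) - 1
    iz = np.digitize(pts[:, 1], edges) - 1
    for i in range(bins):
        for j in range(bins):
            sel = (iy == i) & (iz == j)
            if sel.sum() < 10:                      # skip poorly sampled bins
                continue
            drift[i, j] = steps[sel].mean(axis=0) / dt     # local drift vector
            diffusion[i, j] = np.cov(steps[sel].T) / dt    # local diffusion tensor
    return drift, diffusion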

IX Conclusion

We demonstrate that a recurrent neural network can be trained to provide a model-independent prediction of the outcomes of fully general quantum evolution based only on raw observations. The ensemble of predictions can be compared to quantum models such as the stochastic master equation to extract physical parameters without additional calibration. By considering causal and retrocausal evolution, we show that initial state tomography can be carried out even for non-trivial quantum evolution. The black-box approach of this work illustrates the fact that quantum mechanics is an operational theory, in which states and measurement outcomes can be predicted from raw observation without the mathematical abstraction of a Hilbert space. The model-agnostic nature of the RNN is therefore readily generalized to larger quantum systems. Such networks could excel at finding efficient state representations for larger systems, which could prove useful for real-time modelling, filtering and parameter estimation. The robust, model-independent nature of the prediction is a promising tool for the calibration of future quantum processors and will enable the characterization of imperfections outside the scope of the usual approximations, such as correlated errors or non-Markovian noise, and may even be suited to identifying and quantifying effects initially unknown to the experimenter.

X Acknowledgements

We acknowledge M. Devoret, V. Ramasesh, J. Colless and M. Blok for helpful discussions. LSM acknowledges funding via an NSF graduate student fellowship. This research is supported in part by the U.S. Army Research Office (ARO) under grant no. W911NF-15-1-0496 and by the AFOSR under grant no. FA9550-12-1-0378.

References

  • (1) G. Chiribella, G. M. D’Ariano, and P. Perinotti, “Informational derivation of quantum theory,” Physical Review A, vol. 84, no. 1, p. 012311, 2011.
  • (2) K. Murch, S. Weber, C. Macklin, and I. Siddiqi, “Observing single quantum trajectories of a superconducting quantum bit,” Nature, vol. 502, no. 7470, p. 211, 2013.
  • (3) S. Weber, A. Chantasri, J. Dressel, A. N. Jordan, K. Murch, and I. Siddiqi, “Mapping the optimal route between two quantum states,” Nature, vol. 511, no. 7511, p. 570, 2014.
  • (4) S. Hacohen-Gourgy, L. S. Martin, E. Flurin, V. V. Ramasesh, K. B. Whaley, and I. Siddiqi, “Quantum dynamics of simultaneously measured non-commuting observables,” Nature, vol. 538, no. 7626, p. 491, 2016.
  • (5) Q. Ficheux, S. Jezouin, Z. Leghtas, and B. Huard, “Dynamics of a qubit while simultaneously monitoring its relaxation and dephasing,” Nature Communications, vol. 9, no. 1, p. 1926, 2018.
  • (6) M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673–2681, 1997.
  • (7) T. Mikolov, M. Karafiát, L. Burget, J. Černockỳ, and S. Khudanpur, “Recurrent neural network based language model,” in Eleventh Annual Conference of the International Speech Communication Association, 2010.
  • (8) A. Graves, A.-r. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6645–6649, IEEE, 2013.
  • (9) L. Bouten, R. Van Handel, and M. R. James, “An introduction to quantum filtering,” SIAM Journal on Control and Optimization, vol. 46, no. 6, pp. 2199–2241, 2007.
  • (10) G. Torlai, G. Mazzola, J. Carrasquilla, M. Troyer, R. Melko, and G. Carleo, “Neural-network quantum state tomography,” Nature Physics, vol. 14, no. 5, p. 447, 2018.
  • (11) L. Wang, “Discovering phase transitions with unsupervised learning,” Physical Review B, vol. 94, no. 19, p. 195105, 2016.
  • (12) J. Carrasquilla and R. G. Melko, “Machine learning phases of matter,” Nature Physics, vol. 13, no. 5, p. 431, 2017.
  • (13) E. P. Van Nieuwenburg, Y.-H. Liu, and S. D. Huber, “Learning phase transitions by confusion,” Nature Physics, vol. 13, no. 5, p. 435, 2017.
  • (14) G. Carleo and M. Troyer, “Solving the quantum many-body problem with artificial neural networks,” Science, vol. 355, no. 6325, pp. 602–606, 2017.
  • (15) T. Fösel, P. Tighineanu, T. Weiss, and F. Marquardt, “Reinforcement learning with neural networks for quantum feedback,” arXiv preprint arXiv:1802.05267, 2018.
  • (16) G. Torlai and R. G. Melko, “Neural decoder for topological codes,” Physical Review Letters, vol. 119, no. 3, p. 030501, 2017.
  • (17) S. Krastanov and L. Jiang, “Deep neural network probabilistic decoder for stabilizer codes,” Scientific Reports, vol. 7, no. 1, p. 11003, 2017.
  • (18) P. Baireuther, T. E. O’Brien, B. Tarasinski, and C. W. Beenakker, “Machine-learning-assisted correction of correlated qubit errors in a topological code,” Quantum, vol. 2, p. 48, 2018.
  • (19) M. August and X. Ni, “Using recurrent neural networks to optimize dynamical decoupling for quantum memory,” Physical Review A, vol. 95, no. 1, p. 012335, 2017.
  • (20) I. Guevara and H. Wiseman, “Quantum state smoothing,” Physical Review Letters, vol. 115, no. 18, p. 180407, 2015.
  • (21) M. Tsang, “Time-symmetric quantum theory of smoothing,” Physical Review Letters, vol. 102, no. 25, p. 250403, 2009.
  • (22) J. Koch, M. Y. Terri, J. Gambetta, A. A. Houck, D. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, “Charge-insensitive qubit design derived from the Cooper pair box,” Physical Review A, vol. 76, no. 4, p. 042319, 2007.
  • (23) H. Paik, D. Schuster, L. S. Bishop, G. Kirchmair, G. Catelani, A. Sears, B. Johnson, M. Reagor, L. Frunzio, L. Glazman, et al., “Observation of high coherence in Josephson junction qubits measured in a three-dimensional circuit QED architecture,” Physical Review Letters, vol. 107, no. 24, p. 240501, 2011.
  • (24) M. Hatridge, R. Vijay, D. Slichter, J. Clarke, and I. Siddiqi, “Dispersive magnetometry with a quantum limited SQUID parametric amplifier,” Physical Review B, vol. 83, no. 13, p. 134501, 2011.
  • (25) J. Gambetta, A. Blais, M. Boissonneault, A. A. Houck, D. Schuster, and S. M. Girvin, “Quantum trajectory approach to circuit QED: Quantum jumps and the Zeno effect,” Physical Review A, vol. 77, no. 1, p. 012112, 2008.
  • (26) F. A. Gers, J. Schmidhuber, and F. Cummins, “Learning to forget: Continual prediction with LSTM,” Neural Computation, vol. 12, no. 10, pp. 2451–2471, 2000.
  • (27) S. Hochreiter, Y. Bengio, P. Frasconi, J. Schmidhuber, et al., “Gradient flow in recurrent nets: the difficulty of learning long-term dependencies,” 2001.
  • (28) M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: a system for large-scale machine learning.,” in OSDI, vol. 16, pp. 265–283, 2016.
  • (29) D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • (30) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • (31) L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
  • (32) Y. Aharonov, P. G. Bergmann, and J. L. Lebowitz, “Time symmetry in the quantum process of measurement,” Physical Review, vol. 134, no. 6B, p. B1410, 1964.
  • (33) S. Gammelmark, B. Julsgaard, and K. Mølmer, “Past quantum states of a monitored system,” Physical Review Letters, vol. 111, no. 16, p. 160401, 2013.
  • (34) P. Campagne-Ibarcq, L. Bretheau, E. Flurin, A. Auffèves, F. Mallet, and B. Huard, “Observing interferences between past and future quantum states in resonance fluorescence,” Physical Review Letters, vol. 112, no. 18, p. 180402, 2014.
  • (35) D. Tan, S. Weber, I. Siddiqi, K. Mølmer, and K. Murch, “Prediction and retrodiction for a continuously monitored superconducting qubit,” Physical Review Letters, vol. 114, no. 9, p. 090403, 2015.
  • (36) T. Rybarczyk, B. Peaudecerf, M. Penasa, S. Gerlich, B. Julsgaard, K. Mølmer, S. Gleyzes, M. Brune, J. Raimond, S. Haroche, et al., “Forward-backward analysis of the photon-number evolution in a cavity,” Physical Review A, vol. 91, no. 6, p. 062116, 2015.
  • (37) D. Tan, M. Naghiloo, K. Mølmer, and K. Murch, “Quantum smoothing for classical mixtures,” Physical Review A, vol. 94, no. 5, p. 050102, 2016.
  • (38) E. Magesan, J. M. Gambetta, A. D. Córcoles, and J. M. Chow, “Machine learning for discriminating quantum measurement trajectories and improving readout,” Physical Review Letters, vol. 114, no. 20, p. 200501, 2015.
  • (39) A. S. Holevo, “Bounds for the quantity of information transmitted by a quantum communication channel,” Problemy Peredachi Informatsii, vol. 9, no. 3, pp. 3–11, 1973.
  • (40) P. Six, P. Campagne-Ibarcq, I. Dotsenko, A. Sarlette, B. Huard, and P. Rouchon, “Quantum state tomography with noninstantaneous measurements, imperfections, and decoherence,” Physical Review A, vol. 93, no. 1, p. 012109, 2016.