Machine learning for discriminating quantum measurement trajectories and improving readout

Easwar Magesan IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA    Jay M. Gambetta IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA    A.D. Córcoles IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA    Jerry M. Chow IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA

High-fidelity measurements are important for the implementation of quantum information protocols. Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that non-linear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity achievable under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with $T_1$ decay processes and show these are the main source of discrepancy between our experimental and achievable fidelities. These error-diagnosis techniques help provide a path forward to improve qubit measurements.

The ability to perform accurate measurements is important for maximizing the information one can extract from a physical system. This is especially true in experimental quantum information processing since quantum systems are highly susceptible to noise effects and error rates of quantum operations and measurements must be small for fault-tolerant quantum computation to be a reality Sho96 (). Our goal is to provide methods for diagnosing measurement errors and increasing fidelities by using classification and clustering algorithms in machine learning (ML). While we apply our methods in a superconducting qubit measurement system, we anticipate that the generality of these techniques can be useful in a broader class of systems.

Superconducting quantum bits (qubits) have become a promising candidate for building a fault-tolerant quantum computer due to their long coherence times Paik2011 (); chang_improved_2013 (); Barends2013 (), high-fidelity multi-qubit gate operations Barends_superconducting_2014 (), and the ability to perform single-shot measurements Mallet09 (); Bergeal10 (); Johnson12 (); Riste12 () in a circuit QED architecture BHW04 (). Remarkable progress has been made in reducing error rates of these operations; however, considerable work is still needed to implement fault-tolerant quantum computation in large networks of qubits. In circuit quantum electrodynamics (cQED) a superconducting anharmonic oscillator, such as a transmon Koch07 (), is coupled to a resonator, producing a state-dependent shift of the resonator frequency. This allows for qubit measurements by driving the resonator and recording the output trajectory gambetta_trajectories_2008 () in phase (I-Q) space. In practice there are significant sources of random noise and systematic effects, such as $T_1$ processes where the system spontaneously decays to its ground state, that can make single-shot trajectories appear complex and difficult to distinguish.

Our experimental system is a single qubit (Q4) in a planar lattice of four superconducting qubits Corcoles2014 (). We show current methods for assigning outcomes to measurement trajectories in this system produce assignment fidelities (defined below) that are much lower than the predicted achievable value derived under ideal noise conditions. We utilize various ML algorithms to obtain deeper insight into the behavior of the trajectories and bring fidelities up closer to our expected values. We find a total increase in assignment fidelity from 0.9586 using current methods to 0.9821 (an increase of about 2.5%) using non-linear ML classifiers. The strong performance of non-linear classifiers indicates systematic effects, such as heating and $T_1$ decay events, could be a significant source of error in our measurements.

To verify this, we use ML clustering methods to group the data into naturally occurring subsets. We find a large cluster consisting of $T_1$ decay events whose size is consistent with the experimentally measured $T_1$ time. Replacing this cluster with random non-$T_1$ events gives assignment fidelities approaching 0.995, which is much closer to the achievable value of 0.9999. Going to higher orders we find a much smaller cluster corresponding to heating of the ground state into the excited state. Before moving on, let us make a few points about using ML methods for trajectory discrimination and improving measurements. First, the methods we present here can be useful in a much broader context: any measurement scheme that produces patterns in a geometric space can potentially benefit from more advanced ML methods, and investigating the applicability to different systems will depend on the details of each situation. Second, these methods are applicable even when trying to improve measurements of higher fidelity than those of this paper; the key is that these methods can be tailored according to the types of noise present. Third, ML methods have also been applied to other problems in quantum information, such as phase estimation Hentschel2010 () and asymptotic state estimation Guta2010 ().

The standard metric for characterizing how well a single-qubit measurement assigns outcomes is the assignment fidelity

$$\mathcal{F}_a = \frac{1}{2}\left[P(0|0) + P(1|1)\right].$$

Here $P(0|0)$ ($P(1|1)$) is the probability of obtaining outcome “0” (“1”) given the system was prepared in $|0\rangle$ ($|1\rangle$). Hence $0 \le \mathcal{F}_a \le 1$ and ideally $\mathcal{F}_a = 1$. Our measurement framework is the dispersive limit of cQED, where observing the resonator output voltage provides a quantum non-demolition qubit measurement. For outcome “0” (“1”) the voltage leaving the cavity represents a coherent state I-Q trajectory, and single-shot trajectories are obtained by amplifying the cavity signal. The main parameters of our system are given in Supp () and complete experimental information is given in Ref. Corcoles2014 ().
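As an aside, the assignment fidelity above can be estimated directly from labeled shots. The following is a minimal Python sketch; the function name and array conventions are ours, not from the paper:

```python
import numpy as np

def assignment_fidelity(pred0, pred1):
    """F_a = [P(0|0) + P(1|1)] / 2 estimated from assigned outcomes.

    pred0: outcomes assigned to shots prepared in |0>
    pred1: outcomes assigned to shots prepared in |1>
    """
    p00 = np.mean(np.asarray(pred0) == 0)  # empirical P(0|0)
    p11 = np.mean(np.asarray(pred1) == 1)  # empirical P(1|1)
    return 0.5 * (p00 + p11)

# A perfect discriminator gives F_a = 1.
print(assignment_fidelity([0, 0, 0, 0], [1, 1, 1, 1]))  # -> 1.0
```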

Our data consists of 51200 single-shot trajectories (shots), half initially prepared in $|0\rangle$ and the other half in $|1\rangle$ (denote these classes by $C_0$ and $C_1$). The trajectories are time-ordered and the first half is used as a training set to predict the second half for classification. The mean trajectories for each class, denoted $\bar{v}_0$ and $\bar{v}_1$, as well as examples of single-shot trajectories, are given in Fig. 1. We see that each shot can be complicated but there are enough shots to ensure smooth, well-separated means. The total measurement time is discretized into 163 time-points, so trajectories are represented by vectors in $\mathbb{R}^{326}$, where the first (last) 163 entries correspond to the real (imaginary) parts of the trajectory.

Figure 1: Mean trajectories and single-shots for $|0\rangle$ (blue) and $|1\rangle$ (red) preparations (color online). The $|0\rangle$ ($|1\rangle$) single-shot trajectory (blue (red) dotted) has an arrow pointing up (down) and to the left (right). The mean trajectories of $|0\rangle$ (blue-solid) and $|1\rangle$ (red-solid) have steady-states of (-0.07,-0.02) and (-0.01,-0.07).

The current method of classifying trajectories ryan_inprep2013 () is to integrate each trajectory against a filter (kernel, weight) function $h$. Formally, if $v$ is a single trajectory then, under the assumption that the covariance matrices $\Sigma_0$ and $\Sigma_1$ of the two classes are equal, Gaussian, and diagonal, the optimal $h$ is equal to the ratio of the difference between the mean trajectories $\bar{v}_0$ and $\bar{v}_1$ to the variance. If $\Sigma$ is the common diagonal covariance matrix then

$$h = \Sigma^{-1}\left(\bar{v}_0 - \bar{v}_1\right).$$

The assignment fidelity using this method is 0.9586.
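Under these diagonal-Gaussian assumptions the filtering step amounts to a weighted projection followed by a threshold. A sketch with synthetic stand-in trajectories (the dimensions, means, and noise level here are illustrative, not the experimental values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the 326-dimensional trajectory vectors.
dim = 326
mu0, mu1 = np.zeros(dim), np.ones(dim)      # class mean trajectories
var = np.full(dim, 4.0)                     # common diagonal variance
shots0 = rng.normal(mu0, np.sqrt(var), size=(500, dim))
shots1 = rng.normal(mu1, np.sqrt(var), size=(500, dim))

# Matched filter: weight each time point by (mu0 - mu1) / variance,
# then threshold the integrated (projected) signal at the midpoint.
h = (mu0 - mu1) / var
thresh = h @ (mu0 + mu1) / 2.0

def classify(v):
    return 0 if h @ v > thresh else 1

errors = sum(classify(v) != 0 for v in shots0) + sum(classify(v) != 1 for v in shots1)
print(errors / 1000.0)  # error rate; far below 1% at this separation
```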

The “achievable” assignment fidelity for our experiment, $\mathcal{F}_a^{\mathrm{ach}}$, is the fidelity we would obtain under ideal noise conditions. By “ideal noise” we mean the noise satisfies the assumptions above for the method of ryan_inprep2013 () to hold. The details of this calculation, along with a brief introduction to measurements and amplifiers, are contained in Supp (). We obtain

$$\mathcal{F}_a^{\mathrm{ach}} = 0.9999,$$

and so there is a large discrepancy between $\mathcal{F}_a = 0.9586$ and $\mathcal{F}_a^{\mathrm{ach}}$ that is due to a wide variety of factors, such as state-preparation errors and non-Gaussian/non-linear effects. This discrepancy provides the motivation for investigating better methods for classifying trajectories.

The idea behind machine learning (ML) classification is to obtain a classifying (discriminating) surface in $\mathbb{R}^{326}$ under constraints such as the form of the noise. For Gaussian noise the optimal discriminator is a quadratic surface Cover65 () (quadratic discriminant analysis, QDA) given by

$$\delta(v) = -\tfrac{1}{2}\, v^{\top}\!\left(\Sigma_0^{-1} - \Sigma_1^{-1}\right) v + \left(\bar{v}_0^{\top}\Sigma_0^{-1} - \bar{v}_1^{\top}\Sigma_1^{-1}\right) v,$$

and the threshold is

$$\kappa = \tfrac{1}{2}\ln\frac{|\Sigma_0|}{|\Sigma_1|} + \tfrac{1}{2}\left(\bar{v}_0^{\top}\Sigma_0^{-1}\bar{v}_0 - \bar{v}_1^{\top}\Sigma_1^{-1}\bar{v}_1\right),$$

where $|\cdot|$ represents the determinant. If $\Sigma_0 = \Sigma_1 = \Sigma$ the quadratic term in Eq. (4) disappears and the surface reduces to a hyperplane Fisher36 () (linear discriminant analysis, LDA),

$$\delta(v) = \left(\bar{v}_0 - \bar{v}_1\right)^{\top}\Sigma^{-1} v.$$
Comparing with Eq. (2) we see the current method of ryan_inprep2013 () is equivalent to LDA with the added assumption of diagonal covariance matrices.

QDA can achieve a more accurate value of $\mathcal{F}_a$ than LDA, as it allows for $\Sigma_0 \neq \Sigma_1$ and thus a quadratic (non-linear) discriminating surface. We computed $\mathcal{F}_a$ using the “fitcdiscr” function in Matlab for four different methods: LDAd, LDA, QDAd, and QDA (“d” denotes a diagonal covariance matrix; LDAd is the method of ryan_inprep2013 ()). The results are in the second column of Table 1. Not surprisingly, we find QDAd improves upon LDAd, and allowing non-diagonal covariance matrices produces higher $\mathcal{F}_a$. The values in Table 1 are the sample means from 100 repetitions; the sample variances are small, indicating stable and reproducible results.
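The paper uses Matlab's “fitcdiscr”; scikit-learn (assumed available) provides analogous estimators. A sketch on synthetic data chosen so that unequal covariances matter, which is precisely the regime where QDA outperforms LDA:

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)

# Two classes with equal means but very different covariances:
# no hyperplane separates them, while a quadratic surface can.
X0 = rng.normal(0.0, 1.0, size=(1000, 2))
X1 = rng.normal(0.0, 3.0, size=(1000, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 1000 + [1] * 1000)

lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y), qda.score(X, y))  # QDA wins when Sigma_0 != Sigma_1
```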

A value of $\mathcal{F}_a$ for QDA was not attainable due to singular covariance matrices, which is a result of overfitting the data (having more variables than required from the correlation time in the trajectories). To remedy this problem, we first perform dimensionality reduction using principal component analysis (PCA) Pearson01 () and find that almost all of the variance in the data can be accounted for in a subspace of much lower dimension. Loosely speaking, this implies a finite correlation time in the trajectories. The results with a PCA pre-processing step (using “princomp” in Matlab) are in the third column of Table 1. A value of $\mathcal{F}_a$ for QDA can now be computed and, as expected, it provides the highest $\mathcal{F}_a$ out of all cases considered.

Method All time-points PCA
LDAd 0.9586 0.9557
LDA 0.9701 0.9586
QDAd 0.9627 0.9648
QDA n/a 0.9712
Table 1: Assignment fidelities for various discriminant analysis methods; “n/a” indicates the value was not attainable due to singular covariance matrices. See text for details.
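The PCA pre-processing step can be sketched in plain numpy via the SVD. The data below is synthetic and the 99% variance cutoff is illustrative, not the fraction used in the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic low-rank "trajectory" data: 1000 shots in 326 dimensions
# whose variance actually lives in a 40-dimensional subspace.
X = rng.normal(size=(1000, 40)) @ rng.normal(size=(40, 326))
X += 0.01 * rng.normal(size=X.shape)       # small isotropic noise

Xc = X - X.mean(axis=0)                    # PCA requires centred data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)

k = int(np.searchsorted(explained, 0.99)) + 1  # smallest subspace holding 99% of the variance
X_red = Xc @ Vt[:k].T                          # reduced-dimension representation
print(k, X_red.shape)
```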

These classification methods have assumed Gaussian noise. More robust methods are needed, as we expect non-Gaussian behavior. We approach this in two ways. The first is via the support vector machine (SVM) Boser92 (); Cortes_Vapkin_95 (), which makes no assumption on the form of the noise and can be extended to extremely general non-linear discriminating surfaces. The second is to utilize “clustering” methods in ML to naturally group the data into clusters, from which we perform multi-class classification. We first describe the SVM method.

The linear SVM is a quadratic program based on maximizing the minimum margins of the data, where the margin of a data point is its distance to a separating hyperplane Supp (). The non-linear SVM is derived from the dual form of the linear SVM by defining a kernel that maps the data to a higher-dimensional space. The key is that the linear SVM in the higher-dimensional space allows for non-linear discrimination in the original space. Due to its generality and simplicity, we chose a radial basis function (Gaussian) kernel. We implemented the SVM using the “fitcsvm” function in Matlab. The classification was repeated 100 times and the mean values with the optimal soft-margin parameter $C$ are contained in Table 2 (see Supp () for details). The sample variances in $\mathcal{F}_a$ are small, indicating stable results. The non-linear SVM produces the highest assignment fidelity out of all methods considered thus far, indicating non-linear effects are present.

Method All time-points PCA
Linear SVM 0.9753 0.9571
Non-linear SVM 0.9821 0.9739
Table 2: Assignment fidelities for SVM methods. See text for details.
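The paper uses Matlab's “fitcsvm”; the linear-versus-RBF comparison can be sketched with scikit-learn (assumed available) on toy data where only a non-linear boundary works:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: no hyperplane separates the classes,
# but an RBF (Gaussian) kernel does.
X, y = make_circles(n_samples=500, noise=0.05, factor=0.5, random_state=0)

linear = SVC(kernel="linear", C=1.0).fit(X, y)
rbf = SVC(kernel="rbf", C=1.0).fit(X, y)   # C is the soft-margin box constraint
print(linear.score(X, y), rbf.score(X, y))
```

In practice `C` would be tuned by cross-validation, as described for the box constraint in the supplement.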

Our second method for implementing a non-linear classifier combines classification and clustering algorithms. Clustering naturally groups the data into subsets and is “unsupervised” since it requires no training data. We utilize k-means clustering Lloyd82 () since it has features that suit our purposes well; however, we anticipate similar results can be obtained with other standard clustering algorithms such as hierarchical methods. For an explicit formulation of the k-means clustering problem see Supp ().
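A minimal k-means (Lloyd iteration) sketch in the same spirit, with synthetic blobs standing in for the trajectory vectors; the seed points and cluster geometry are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Three well-separated synthetic "subclasses" of one preparation.
X = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in ([0, 0], [5, 0], [0, 5])])

def kmeans(X, init, n_iter=50):
    """Plain Lloyd iteration: assign each point to the nearest mean,
    then recompute each mean from its assigned points."""
    mu = init.copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - mu[None]) ** 2).sum(-1), axis=1)
        mu = np.array([X[labels == j].mean(axis=0) for j in range(len(mu))])
    return labels, mu

labels, mu = kmeans(X, init=X[[0, 200, 400]])  # one seed point per blob (illustrative)
print(np.bincount(labels))  # -> [200 200 200]
```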

We used the Matlab “kmeans” function to find clusters in each of $C_0$ and $C_1$. We chose $k=3$ for each class to take into account both variance and systematic effects. The mean trajectories and sizes of the six subclasses are given in Fig. 2. We see $C_0$ is split relatively evenly into three subclasses that capture variance in the trajectories. We do not see a subclass of $C_0$ corresponding to heating of the ground state; however, we implemented k-means for larger $k$ and found a small heating subclass (see Fig. 5).

$C_1$ has strikingly different properties, as one subclass is comprised of $T_1$ decay processes. The other two subclasses are similar in size and capture variance in the trajectories. The key point is that we have found explicit shot indices for $T_1$ events. We verified this subclass is comprised of $T_1$ trajectories by performing k-means with $k=4$. We found that the other subclasses remain virtually the same while the $T_1$ subclass is now split into two according to variance in these trajectories (see Fig. 6). From Fig. 2, the fraction of $|1\rangle$ preparations resulting in a $T_1$ event is consistent with the percentage calculated from the system parameters Supp ().

Figure 2: Subclasses found from the k-means algorithm (color online). $C_0$ and $C_1$ each have three subclasses; the trajectory representing each subclass is the mean over all subclass trajectories. The subclasses of $C_0$ (yellow-dashed, green-solid, light blue-dotted) have paths that initially move up and left. The subclasses of $C_1$ (red-dashed, blue-solid, black-dotted) have paths that initially move down and right. The $T_1$ subclass (blue-solid) of $C_1$ initially moves down and right but abruptly changes its path to move up and left. The legend numbers are subclass sizes.

To perform classification, we lift the $T_1$ subclass to a class of its own, redefine $C_1$ to exclude it, keep $C_0$ as before, and perform multi-class classification on the three resulting classes. We implemented four multi-class algorithms in Matlab: multi-class LDA, multi-class SVM, “TotalBoost”, and “RUSBoost”. The latter two are examples of boosting algorithms, which assemble an ensemble of weak learners (classifiers) in a network to create a final strong learner by iteratively re-weighting data points according to previous results Bishop07 (). The RUSBoost method Seiffert10 () is particularly useful since it is tailored to the case of one class (here the $T_1$ class) being significantly smaller than the rest.
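The lifting step can be sketched as follows: relabel the decayed cluster as a third class, train any multi-class classifier, and map its predictions back to binary outcomes. All positions, sizes, and the choice of an RBF SVM here are illustrative stand-ins for the Matlab routines used in the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Class 0: |0> shots; class 1: non-decay |1> shots;
# class 2: hypothetical "decay" shots lifted out of class 1.
X0 = rng.normal([0, 0], 0.5, size=(300, 2))
X1 = rng.normal([4, 4], 0.5, size=(270, 2))
X2 = rng.normal([1, 3], 0.5, size=(30, 2))
X = np.vstack([X0, X1, X2])
y3 = np.array([0] * 300 + [1] * 270 + [2] * 30)

clf = SVC(kernel="rbf").fit(X, y3)   # SVC handles multi-class one-vs-one
pred = clf.predict(X)

# Decayed shots still started in |1>, so both class 1 and
# class 2 map back to measurement outcome "1".
outcome = np.where(pred == 0, 0, 1)
truth = np.where(y3 == 0, 0, 1)
print(np.mean(outcome == truth))
```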

The results are in Table 3. We again see an increase in assignment fidelities over the discriminant analysis methods of Table 1. Not surprisingly, RUSBoost provides the most significant increase. We repeated the k-means algorithm 50 times with random initializations and found it to be relatively stable (small sample variance). We repeated this using a fixed initialization of the means and obtained a variance of 0.

Out of all methods considered, non-linear SVMs produce the greatest increase in $\mathcal{F}_a$ (0.9586 to 0.9821). We also note all methods are relatively stable with reproducible assignment fidelities (each method was repeated many times; the sample means of $\mathcal{F}_a$ are the table values and the sample variances are small).

Method All time-points PCA
Multi-LDA 0.9768 0.9689
Multi-SVM 0.9784 0.9717
TotalBoost 0.9527 0.9413
RUSBoost 0.9788 0.9723
Table 3: Assignment fidelities from multi-class classification. See text for details.

While we have improved $\mathcal{F}_a$ to 0.9821, we are still far from $\mathcal{F}_a^{\mathrm{ach}}$. It is possible much of the remaining discrepancy comes from $T_1$ events. To investigate this we propose the simple error-diagnosis test of replacing each $T_1$ event found from the k-means algorithm with a random element from the remaining subclasses of $C_1$. This provides a measure of $\mathcal{F}_a$ when $T_1$ decay is negligible. The means of 100 samples for each method are contained in Table 4 (the variances are small). Non-linear SVM produces the highest value of $\mathcal{F}_a$; however, for all methods $\mathcal{F}_a > 0.99$, which is more consistent with $\mathcal{F}_a^{\mathrm{ach}}$. This confirms $T_1$ events are the significant reason for lower $\mathcal{F}_a$ values than expected.
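The replacement test itself is simple array surgery; a sketch with synthetic data and hypothetical flagged indices:

```python
import numpy as np

rng = np.random.default_rng(4)

shots1 = rng.normal(1.0, 0.1, size=(100, 8))   # stand-in |1> shots
t1_idx = np.array([3, 17, 42])                 # indices flagged by clustering (hypothetical)

# Replace each flagged shot with a randomly chosen non-flagged shot.
keep = np.setdiff1d(np.arange(len(shots1)), t1_idx)
repl = shots1.copy()
repl[t1_idx] = shots1[rng.choice(keep, size=len(t1_idx))]

# Every replaced row now matches some row of the non-flagged pool.
pool = shots1[keep]
ok = all(any(np.array_equal(r, p) for p in pool) for r in repl[t1_idx])
print(ok)  # -> True
```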

Method All time-points PCA
LDAd 0.9920 0.9909
LDA 0.9921 0.9928
QDAd 0.9918 0.9908
QDA n/a 0.9927
Linear SVM 0.9936 0.9943
Non-linear SVM 0.9945 0.9949
Table 4: Assignment fidelities with replacement of $T_1$ events; “n/a” indicates the value was not attainable. See text for details.

One attempt to reduce the significance of $T_1$ decay is to shorten the measurement time; however, this implies the trajectories will spend less time near their steady states, and assignment errors due to variance will increase. To observe this, we truncated the trajectories to different measurement times and calculated $\mathcal{F}_a$ using the non-linear SVM. From Fig. 3 we see the current measurement time appears close to optimal. Moreover, a much shorter measurement time (not shown in Fig. 3) suffices to achieve the $\mathcal{F}_a$ obtained from LDA. This is a strong message that better classifiers can allow for shorter measurement times. Measurement times longer than the current one decrease $\mathcal{F}_a$ due to an increase in $T_1$ events.
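Truncating a trajectory vector must respect its [real | imaginary] block layout (163 time points per quadrature, per the text). A small helper sketch:

```python
import numpy as np

N_T = 163  # time points per quadrature, as in the text

def truncate(traj, m):
    """Keep the first m time points of both the real and imaginary blocks."""
    return np.concatenate([traj[:m], traj[N_T:N_T + m]])

v = np.arange(2 * N_T, dtype=float)
short = truncate(v, 100)
print(short.shape)  # -> (200,)
```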

Figure 3: Varying measurement time.

To conclude, we have utilized ML to understand and improve the readout in a superconducting system. We find more sophisticated classification algorithms can potentially allow for shorter measurement times and increase assignment fidelities. Non-linear SVMs provided the largest increase in assignment fidelity, from 0.9586 to 0.9821. Clustering helped diagnose the prevalence of systematic effects by finding clusters in the data corresponding to single-shot identification of heating and $T_1$ effects. We verified $T_1$ events are a significant source of error, as the assignment fidelity increases from 0.9821 to 0.9945 when the $T_1$ cluster is replaced with typical trajectories. This is more consistent with our achievable fidelity, and the remaining discrepancy can be due to effects such as heating and state-preparation errors. Moving forward, we expect these methods will help provide insight for improving readout, especially when non-linear and non-Gaussian effects are present.

We acknowledge support from ARO under contract W911NF-14-1-0124 and IARPA under contract W911NF-10-1-0324. We acknowledge helpful discussions with Oliver Dial, Stefan Filipp, Blake Johnson, Jim Rozen, Colm Ryan, Marcus Silva, and Matthias Steffen.


  • (1) P. Shor, in Proceedings of the 37’th Annual Symposium on Foundations of Computer Science (FOCS) (IEEE Press, Burlington, VT, 1996).
  • (2) H. Paik et al., Phys. Rev. Lett. 107, 240501 (2011).
  • (3) J. B. Chang et al., Applied Physics Letters 103, 012602 (2013).
  • (4) R. Barends et al., Physical Review Letters 111, (2013).
  • (5) R. Barends et al., Nature 508, 500 (2014).
  • (6) F. Mallet et al., Nat Phys 5, 791 (2009).
  • (7) N. Bergeal et al., Nature 465, 64 (2010).
  • (8) J. E. Johnson et al., Phys. Rev. Lett. 109, 050506 (2012).
  • (9) D. Ristè et al., Phys. Rev. Lett. 109, 050507 (2012).
  • (10) A. Blais et al., Phys. Rev. A 69, 062320 (2004).
  • (11) J. Koch et al., Phys. Rev. A 76, 042319 (2007).
  • (12) J. Gambetta et al., Phys. Rev. A 77, 012112 (2008).
  • (13) A. D. Córcoles et al., arXiv:1410.6419 (2014).
  • (14) A. Hentschel and B. C. Sanders, Phys. Rev. Lett. 104, 063603 (2010).
  • (15) M. Guţă and W. Kotłowski, New Journal of Physics 12, 123032 (2010).
  • (16) See Supplemental Material for further details.
  • (17) C. A. Ryan et al., arXiv:1310.6448 (2013).
  • (18) T. M. Cover, Electronic Computers, IEEE Transactions on EC-14, 326 (1965).
  • (19) R. A. Fisher, Annals of Eugenics 7, 179 (1936).
  • (20) K. Pearson, Philosophical Magazine Series 6 2, 559 (1901).
  • (21) C. Cortes and V. Vapnik, Machine Learning 20, 273 (1995).
  • (22) B. E. Boser, I. M. Guyon, and V. N. Vapnik, in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (ACM, New York, NY, USA, 1992), COLT ’92, pp. 144–152.
  • (23) S. Lloyd, Information Theory, IEEE Transactions on 28, 129 (1982).
  • (24) C. Bishop, Pattern Recognition and Machine Learning (Springer, 2007).
  • (25) C. Seiffert et al., Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on 40, 185 (2010).

Appendix A Supplemental Material

A.1 Measurement and linear amplification in circuit QED

In the dispersive regime of circuit QED the resonator frequency depends on the qubit state, so that driving the cavity and observing the output of the cavity corresponds to a quantum non-demolition measurement. For outcome “0” (“1”) the mean voltage leaving the cavity represents a time-dependent coherent state, denoted $\alpha_j(t)$ for qubit state $j \in \{0,1\}$, that is typically small in magnitude. The phase-space evolution of $\alpha_j(t)$ is determined by the deterministic differential equation

$$\dot{\alpha}_j(t) = -i\,\epsilon(t) - i\left[\Delta_r + (-1)^j \chi\right]\alpha_j(t) - \frac{\kappa}{2}\,\alpha_j(t),$$

where $\epsilon(t)$ is the measurement drive, $\Delta_r$ is the detuning of the drive from the bare resonator frequency, $\chi$ is the dispersive shift, and $\kappa$ is the cavity decay rate.

For our experiment, the qubit transition frequency, the readout resonator frequency, and the qubit anharmonicity, together with the energy relaxation ($T_1$) and coherence ($T_2$) times, are reported in Ref. Corcoles2014 (). The dispersive shift $\chi$ and line width $\kappa$ of the readout resonator were also measured independently.

The output mode from the resonator can be related to the field inside the resonator by

$$a_{\mathrm{out}}(t) = \mu(t) + \frac{\beta(t)}{2}\,\sigma_z,$$

where $\beta(t)$ is the separation between the pointer states, $\mu(t)$ is the mean value of the coherent states, and $\sigma_z$ represents the qubit information, with $\langle\sigma_z\rangle = p_0 - p_1$ and $p_0$ and $p_1$ being the probability of the qubit being in state 0 and 1 respectively gambetta_trajectories_2008 (). Typically this field is amplified by a linear phase-preserving amplifier to the output mode $b$,

$$b = \sqrt{G}\left(a_{\mathrm{out}} + h^{\dagger}\right),$$

where $G$ is the power gain and $h$ is the extra noise added by the amplifier. Here we have made the assumption that the bandwidth of the amplifier is constant and larger than the bandwidth of the signal being measured. Since the commutators must satisfy

$$[b, b^{\dagger}] = [a_{\mathrm{out}}, a_{\mathrm{out}}^{\dagger}] = 1,$$

the noise must satisfy

$$[h, h^{\dagger}] = \frac{G-1}{G},$$

which, from the generalized uncertainty principle applied to the operator $h = h_1 + i h_2$,

$$\langle \Delta h_1^2\rangle\,\langle \Delta h_2^2\rangle \ge \frac{1}{4}\left|\langle[h_1, h_2]\rangle\right|^2 = \frac{1}{16}\left(\frac{G-1}{G}\right)^2,$$

implies $\langle|\Delta h|^2\rangle = \langle\Delta h_1^2\rangle + \langle\Delta h_2^2\rangle \ge \frac{G-1}{2G}$. Using Eq. 9 the noise in the output mode is

$$\langle|\Delta b|^2\rangle = G\left[\langle|\Delta a_{\mathrm{out}}|^2\rangle + A\right],$$

where

$$A = \langle|\Delta h|^2\rangle \ge \frac{G-1}{2G}$$

is the added noise normalized by the gain Caves82 (). In our view this is the best number to quantify an amplifier, and at the quantum limit it takes the value 1/2 for a phase-preserving amplifier. Other useful quantities are the instantaneous input and output signal-to-noise ratios of the amplifier, defined by

$$\mathrm{SNR}_{\mathrm{in}} = \frac{|\beta|^2}{\langle|\Delta a_{\mathrm{out}}|^2\rangle}, \qquad \mathrm{SNR}_{\mathrm{out}} = \frac{|\beta|^2}{\langle|\Delta a_{\mathrm{out}}|^2\rangle + A} = \eta\,\mathrm{SNR}_{\mathrm{in}},$$

where $\eta = \langle|\Delta a_{\mathrm{out}}|^2\rangle/\left(\langle|\Delta a_{\mathrm{out}}|^2\rangle + A\right)$ is the efficiency of the amplifier and represents how well the input field is mapped to the output field. Another useful quantity is the noise figure of the amplifier, which is the ratio $\mathrm{SNR}_{\mathrm{in}}/\mathrm{SNR}_{\mathrm{out}} = 1/\eta = 1 + 2A$ (using $\langle|\Delta a_{\mathrm{out}}|^2\rangle = 1/2$ for a coherent input), which is 2 for a quantum-limited amplifier.

In circuit QED the information about the qubit state is contained in a single quadrature. From Eq. 8 this is the quadrature set by the phase of $\beta(t)$, and as a result when we subtract the mean value we obtain


This overestimates the noise as the information is only in one quadrature. Defining the measurement quadrature it is simple to show that


giving an instantaneous SNR


where is the efficiency of measuring information in a single quadrature for a linear phase preserving amplifier. Note this is a factor of two less than the efficiency of the amplifier.

A general linear amplifier can be described by the output mode ,


and from preservation of the commutation relations


we have . Setting results in . To amplify a single quadrature (phase sensitive amplifier) we set giving


where is the power gain. From the generalized uncertainty principle


where is the gain normalized added noise. The instantaneous SNR for the quadrature is


That is, by using a phase-sensitive amplifier tuned to the correct phase, the effective SNR can be a factor of two better than with a phase-preserving amplifier.

A.2 Filtering protocol for ideal noise and the achievable fidelity

In a typical measurement protocol the measurement outcome is the integration of the signal $V(t)$ from the amplifier against a weighting kernel (filter),

$$S = \int_0^{T_m} K(t)\, V(t)\, dt,$$

where $T_m$ is the measurement time and $K(t)$ is the kernel. Under the assumption that the noise is symmetric, a useful measure to quantify the measurement is the separation

$$d = \frac{\left|\langle S\rangle_0 - \langle S\rangle_1\right|}{\sigma_S},$$

where 0 and 1 label the two states of the qubit and $\sigma_S$ is the (common) standard deviation of $S$. In Ref. ryan_inprep2013 () it was shown that the optimal kernel, under the additional assumptions of Gaussian noise and diagonal covariance matrices, is found by maximizing $d$ and is given by

$$K(t) \propto \frac{\left|\beta(t)\right|}{\sigma^2(t)},$$

where $\beta(t)$ is the separation of the pointer states and $\sigma^2(t)$ is the noise variance at time $t$.

The achievable fidelity is the value of the assignment fidelity assuming that the noise in our system is ideal (symmetric, Gaussian, and with diagonal covariance matrices). In reality the noise does not satisfy these properties, so we must fit it to an ideal noise model. To do this, we implemented the above filtering method on our data and plotted the resulting distributions of the integrated outcome for the two preparation classes in Fig. 4. If the noise were ideal we would obtain two Gaussian histograms with identical variance. In reality, we see there are significant non-Gaussian statistics, and so we fit the histograms to double Gaussian distributions of equal variance to obtain the means and standard deviation we would expect in the ideal noise case. The fitted means and common standard deviation give the value of the separation $d$ (defined in Eq. 27) used below.

Figure 4: Projected data and double Gaussian fits of the $|0\rangle$ (blue, left) and $|1\rangle$ (red, right) preparation classes (color online).

We can now compute $\mathcal{F}_a^{\mathrm{ach}}$ by combining the ideal noise assumption with the definition of the assignment fidelity,

$$\mathcal{F}_a = 1 - \frac{1}{2}\left[P(1|0) + P(0|1)\right].$$

Here $P(1|0)$ ($P(0|1)$) is the probability of obtaining outcome “1” (“0”) given the system was prepared in $|0\rangle$ ($|1\rangle$). Since the noise is ideal, the error probability takes the form

$$P(1|0) = P(0|1) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{d}{2\sqrt{2}}\right),$$

where $d$ is defined in Eq. 27. From this expression we obtain

$$\mathcal{F}_a^{\mathrm{ach}} = 0.9999$$
for our system. We note that the separation $d$ in the time-independent limit can be related to the signal-to-noise ratio defined above; this is straightforward to show by direct substitution of the quantities in Eq. 27. The separation $d$ is used here as it is a standard measure of class separation in ML Fisher36 ().
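Under the ideal-noise model (two equal-variance Gaussians thresholded at the midpoint) the achievable fidelity is a closed-form function of the separation. A sketch; the specific value of $d$ used below is illustrative, not the fitted experimental value:

```python
import math

def achievable_fidelity(d):
    """F_a = 1 - P(error) for two equal-variance Gaussians separated
    by d standard deviations, thresholded at the midpoint."""
    return 1.0 - 0.5 * math.erfc(d / (2.0 * math.sqrt(2.0)))

print(achievable_fidelity(0.0))  # -> 0.5 (no separation: coin flip)
print(achievable_fidelity(7.4))  # a separation this large gives ~0.9999
```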

A.3 SVMs and k-means clustering

The quadratic program for the SVM Cortes_Vapkin_95 () is given by

$$\min_{w,\,b}\ \tfrac{1}{2}\|w\|^2$$

subject to

$$y_i\left(w^{\top}x_i + b\right) \ge 1, \quad i = 1, \dots, n,$$

where $y_i$ is the expected outcome (taken to be $-1$ or $1$). This is a quadratic objective function with linear inequality constraints. The soft-margin formalism adds slack variables $\xi_i$, representing the degree of misclassification, to the constraints in Eq. (32) and modifies the objective function to include a mislabelling cost term. The modified quadratic program that includes a soft margin is

$$\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i$$

subject to

$$y_i\left(w^{\top}x_i + b\right) \ge 1 - \xi_i, \quad \xi_i \ge 0.$$

In the dual form of this problem the variables $\xi_i$ vanish and $C$ is a “box constraint” that bounds the Lagrange multipliers. We implemented 10-fold cross-validation on the training set and found a misclassification error of 0.0163, which agrees well with $1 - \mathcal{F}_a$. This implies a large value of $C$ is likely not needed, and varying $C$ between 0 and 100 gave an optimal value close to 1.

k-means clustering Lloyd82 () is formulated as the following optimization problem:

$$\min_{S_1, \dots, S_k}\ \sum_{j=1}^{k}\ \sum_{x \in S_j} \left\|x - \mu_j\right\|^2.$$

Here there are $k$ sets $S_1, \dots, S_k$ with means $\mu_1, \dots, \mu_k$, and so the goal is to partition the data into sets that minimize the within-set distance to the mean $\mu_j$. The k-means algorithm only has guaranteed convergence to a local minimum, and finding the global optimum is an NP-hard problem Aloise09 (). Therefore initialization of the subclass means in Eq. 34 can be important for finding meaningful solutions. Typically initialization is a random process that can lead to small variation in the solutions. This can be circumvented by explicitly defining the initial means to be the average of the output subclass means over many realizations, which helps ensure the algorithm is reproducible and more stable.
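The fixed-initialization trick described above can be sketched with scikit-learn (assumed available): average the centroids of several randomly initialized runs, matching clusters across runs by sorting (which assumes well-separated clusters), then re-run deterministically from that fixed initialization:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in ([0, 0], [5, 0], [0, 5])])

# Average the centroids of several randomly initialized runs,
# matching clusters across runs by lexicographic sort.
runs = [
    np.array(sorted(KMeans(3, n_init=1, random_state=s).fit(X).cluster_centers_.tolist()))
    for s in range(10)
]
init = np.mean(runs, axis=0)

# Re-running with this fixed init is deterministic: zero variance across runs.
a = KMeans(3, init=init, n_init=1).fit(X).labels_
b = KMeans(3, init=init, n_init=1).fit(X).labels_
print((a == b).all())  # -> True
```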

The plot of the heating subclass of $C_0$ found from the k-means algorithm with larger $k$ is given in Fig. 5, and the plot of the four subclasses of $C_1$ found from the k-means algorithm with $k=4$ is given in Fig. 6.

Figure 5: Heating subclass of $C_0$ (green-dotted) found from a k-means algorithm with larger $k$, superimposed on the means of the $C_0$ (blue-dashed) and $C_1$ (red-solid) classes (color online).
Figure 6: Four subclasses of $C_1$ found from a k-means algorithm with $k=4$. There are two subclasses (green-dashed and red-dash-dotted) that split the $T_1$ subclass found with $k=3$. The other subclasses (black-dotted and blue-solid) are comprised of typical trajectories of a $|1\rangle$ state (color online).


  • (26) J. Gambetta et al., Phys. Rev. A 77, 012112 (2008).
  • (27) C. M. Caves, Phys. Rev. D 26, 1817 (1982).
  • (28) C. A. Ryan et al., arXiv:1310.6448 (2013).
  • (29) R. A. Fisher, Annals of Eugenics 7, 179 (1936).
  • (30) C. Cortes and V. Vapnik, Machine Learning 20, 273 (1995).
  • (31) S. Lloyd, Information Theory, IEEE Transactions on 28, 129 (1982).
  • (32) D. Aloise et al., Machine Learning 75, 245 (2009).