The effect of noise correlations on randomized benchmarking
Among the most popular and well-studied quantum characterization, verification and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a gamma distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes – consistent with existing experimental data – highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
Quantum verification and validation protocols are a vital tool for characterizing quantum devices. These take many forms WhiteStateTomography (); WhiteProcessTomography (); Emerson2005 (); Cory2007 (); Pablo2008 (); CoryCorrelations (); Walther2013 (); Magnesan2013 (), but one of the most popular due to its efficiency is randomized benchmarking (RB). In this approach, developed originally by Knill et al. Knill2008 (), and expanded theoretically by various authors Emerson2011 (); MagesanInterleaved (); Flammia2014 (); Epstein2014 (), the average error probability of a quantum gate (e.g. a bit flip) is estimated by implementing a randomly sampled gate sequence from the set of Clifford operations, and measuring the difference between the ideal transformation and the actual result. Averaging over many randomized sequences yields information about the underlying gate fidelity.
RB has become so important in the experimental community Knill2008 (); BiercukQIC (); Porto2010 (); GaeblerPRL2012 (); BrownPRA2011 (); ChowPRL2012 (); MartinisOptimized (); Lucas2014 (); Saffman2015 (); Laflamme2015 (); Morello2015 () that despite the experimental complexity it is now common to simply quote a measured gate error, , resulting from a RB measurement for a particular experimental system, and relate this number to tight bounds such as fault-tolerance error thresholds in quantum error correction NC (). Of late, reported values of have compared favorably with these thresholds, and have been used to justify the scalability of particular experimental platforms. Underlying this entire approach is the subtle but central assumption that errors are uncorrelated Knill2008 (): an assumption which is generally violated by a wide variety of realistic error processes with long temporal correlations BiercukQIC (); Bylander2011 (); Clarke2013 (). Violation of this assumption has been noted previously as a cause of increased variance of outcomes over randomizations, but until now there has not been a quantitative means to understand the impact of such temporal correlations, a detailed physical mechanism to explain why such distortion of RB results can appear, or a clear understanding of the impact of this observation on the applicability of RB.
In this manuscript we examine the impact of relaxing the Markovian-error assumption by studying its effect on the distribution of measurement outcomes over randomizations. We find that while all randomizations (meeting certain criteria) are valid within the RB framework, they exhibit starkly different susceptibility to error when those errors exhibit temporal correlations over multiple individual Clifford operations. We provide a detailed mathematical treatment of error accumulation in RB, using a general model treated in the specific case of dephasing errors. We demonstrate how the reduction of fidelity for a particular sequence may be given a geometric interpretation, taking the form of a random walk in a 3D Cartesian coordinate system. The steps of this walk correspond to the appearance of Pauli errors derived from interleaved dephasing operators in the sequence of Clifford operations, and the overall statistics are determined by the underlying correlations in the noise process. Our treatment includes both extremal cases of uncorrelated (Markovian) and systematic (DC) errors, as well as generic correlations interpolating between these limiting cases and captured through a noise power spectral density. Our results provide simple analytic forms for the probability density functions of fidelity outcomes over RB randomizations as a gamma distribution, building connections to engineering literature on failure analysis. We describe the impact of these observations on the interpretation of RB measurement outcomes in various noise environments and highlight the disconnect between measured RB error and metrics relevant to fault-tolerance such as worst-case errors.
II Randomized Benchmarking Procedure
A common experimental implementation of single-qubit RB involves measuring the fidelity of net operations composed from random sequences of Clifford operations. For a sequence of length , the net operation is written as the product
of Clifford operators (see Table 3) indexed by the sequence , where the are random variables uniformly sampled from the set labelling the elements of the Clifford group. Typically one makes the constraint on that, absent any error processes, the ideal RB sequence performs the identity operation . Of the total sequence combinations for a given , only a small subset of random Clifford sequences is implemented in practice. Each sequence is repeated (with appropriate qubit reinitialization) times to build an ensemble of fidelity measurements sampling and the underlying error process. These measurement outcomes are then averaged together to obtain useful information from projective qubit measurements, with the mean fidelity retained for each random sequence.
We represent the space of measurement outcomes by the () matrix
where element is the measured fidelity when implementing the th Clifford sequence in the presence of the th realization of the error process, denoted by . For clarity, we make the distinction between the measured fidelity (e.g. obtained in experiment via projective qubit measurements) and the calculated fidelity (see Eq. 9) serving as a proxy for in our analytic framework. The process of repeating each and averaging the fidelity outcomes over the finite sample of noise realizations therefore corresponds to a measurement of the “noise-averaged” fidelity for the th sequence, represented by the quantity
where the angle brackets in the second subscript denote averaging over the columns of . Similarly the mean fidelity averaged over both Clifford sequences and errors is denoted
where angle brackets in both subscripts indicate averaging over both rows and columns (that is, all elements) of . The quantity is therefore an estimator of the true mean fidelity , formally obtained as the expectation over all possible fidelity outcomes defined on the support of the random variables and .
In the standard RB procedure, measurements of for increasing are fitted to an exponential decay from which the mean gate error is extracted as the decay constant. Implicit in this procedure, however, is the assumption that, for any random set of Clifford sequences, the resulting distribution of fidelity outcomes represents the underlying error process fairly, and the total mean is reasonably representative of any individual sequence. That is, the distribution of values is symmetric about with small relative variance for any random set .
In this paper we show to be an unbiased and effective estimator only when the error process is truly Markovian. That is, it possesses no temporal correlations. We provide a detailed analytic calculation of the impact of realistic, correlated noise on the distribution of outcomes over different randomizations. To examine this we introduce a physical model compatible with the experimental procedure, and which permits accounting for the presence of noise correlations.
III Physical Model
In this section we develop the mathematical framework used to model and investigate the impact of noise correlations on RB. Our model assumes a unitary error process with temporal correlations, reflecting typical experimental conditions where qubit rotations, e.g. from a fluctuating classical field, dominate. This process is generically described by a power spectral density (PSD) capturing the various correlation timescales. Our error model is fully general and considers universal (multi-axis) errors. For simplicity of technical presentation we treat single-axis dephasing in the main text and provide the full presentation of universal noise in Appendix D.
The main result of this section is the geometric interpretation of error accumulation in RB sequences in terms of a random walk in 3 dimensions (a step taken for each Clifford), with step lengths and directionality inheriting the correlation structure of the error process. This result is independent of any choice of representation of the Clifford group, though our numeric simulations later used to verify it do rely on a particular representation (see Table 3).
III.1 Noise-averaged fidelity
We begin by introducing our metric to quantify the reduction in fidelity for a given and . We employ the trace fidelity
capturing the overlap between ideal, , and noise-affected sequences, , via the Hilbert-Schmidt inner product. For inputs and this provides us with a computational metric corresponding to measurements obtained in experiment. Consequently our proxy for the measured noise-averaged fidelity takes the computational form
where denotes an ensemble average over realizations of , and explicit reference to and in the argument of has been dropped. Henceforth we also drop the subscripts and denote the calculated noise-averaged fidelity simply by . In the following sections we proceed with our main task: deriving the probability density function (PDF) of , and its dependence on the correlation structure of .
III.2 Error model
To reiterate, a Clifford sequence implements the operation , where is the Clifford generator. The random variables are uniformly sampled from the set , and are assumed to be independent and identically distributed (i.i.d.), subject to the technical constraint . Unitary errors are implemented by interleaving with a sequence of stochastic qubit rotations, yielding the noise-affected operation
This approach holds for arbitrary unitaries, , enacting rotations in any Cartesian direction; a full treatment for universal noise models appears in Appendix D. However, in the following we restrict ourselves to the subset of unitaries enacting a sequence of dephasing rotations, , parameterized by the time-series vector . We assume the error process is wide-sense stationary (i.e. the two-point correlation function depends only on the time difference ) and has zero mean.
Temporal noise correlations are established by introducing correlations between the elements of . In this work we treat three distinct cases
Markovian process: Elements are i.i.d. Gaussian-distributed errors with zero mean and variance , and completely uncorrelated between distinct Clifford gates in any sequence (correlation length ).
DC process: Elements are identical over a given sequence (maximally correlated, correlation length ), but are i.i.d. (uncorrelated) Gaussian-distributed errors with zero mean and variance over different instances.
Generically-correlated process: Correlations between elements of separated by a time interval of “ gates” in are generically specified by an autocorrelation function , in terms of which may be described by a power spectral density (PSD) .
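The three correlation structures above can be made concrete with a short numerical sketch (Python with NumPy; the function names and the ceiling-division block layout are our illustrative choices, not part of the formal model):

```python
import numpy as np

rng = np.random.default_rng(42)

def markovian_noise(J, sigma):
    """Markovian limit: i.i.d. zero-mean Gaussian error angles, one per Clifford."""
    return rng.normal(0.0, sigma, size=J)

def dc_noise(J, sigma):
    """DC (quasi-static) limit: one Gaussian draw held constant over the sequence."""
    return np.full(J, rng.normal(0.0, sigma))

def block_correlated_noise(J, sigma, M):
    """Block-correlated: constant within blocks of M gates, independent between blocks."""
    n_blocks = (J + M - 1) // M  # ceiling division
    return np.repeat(rng.normal(0.0, sigma, size=n_blocks), M)[:J]
```

Each function returns one realization of the error time series; repeated calls build the noise ensemble sampled in the RB procedure.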
As we will show, the noise interaction “steers” the operator product away from the identity gate performed by the ideal sequence and reduces the operational fidelity in some way characteristic of the correlation structure of .
III.3 Analytic Expression for Sequence Fidelity
III.3.1 Series Expansion for Sequence Fidelity
We begin by obtaining an approximation for . As RB is intended to estimate errors too small to resolve in a single gate implementation, we assume the noise strength per gate, , is small. Over long gate sequences the accumulation of errors may be quantified by , which can be large for sufficiently large . Assuming , however, we can make analytical progress by expressing the error unitaries in terms of a power series truncated at . The noise-affected sequence Eq. 11 is therefore approximated by
The full expansion of Eq. 12 generates products of order up to . Let denote the product due to cross-multiplying terms like , where . Retaining terms only up to fourth order (), consistent with the order of our original Taylor approximation, we thereby obtain
The first term in this expansion is identical to the ideal Clifford sequence, . Higher-order terms capture successive error contributions and are composed of blocks, or subsequences, within interrupted by one or more errors, indicated by the subscripts. To evaluate Eq. 9 we must obtain expressions for the quantities . Detailed derivations for these terms are given in Appendix C.
III.3.2 Mapping from Clifford Sequence to “Pauli space”
We compute the terms in the power series by relating a given Clifford sequence to a random walk in “Pauli space”. For illustrative purposes, we consider in detail only the quadratic terms:
where the ordered summand runs over , and we have defined the Clifford subsequence operators
We now define the cumulative operators giving the product of the first Clifford operations
where , and is also a Clifford operator (since it is a product of Clifford operators). Any subsequence therefore “factorizes” as
allowing us to rewrite
where the operators on the right hand side are defined by
for , and are always signed Pauli operators (since the Clifford group is the normalizer of the Pauli group). We may therefore express the operators in the basis of Pauli operators as
where subject to the constraint (only one nonzero coefficient). The unit vector defined by
therefore points uniformly at random along one of the principal Cartesian axes , and maps the “direction” of the operator in Pauli space. Thus we have constructed a map from a given random Clifford sequence of length , to a set of unit vectors each oriented at random along the Cartesian axes.
With these insights the error contributions may be recast into more convenient expressions by moving to vector notation. In particular, taking the trace over Eq. 18 and using the cyclic composition properties of the Pauli matrices, we find
Observing that the quantity is invariant under exchange of indices, we may recast the restricted sum over into an unrestricted sum, picking up a residual term (see Appendix C), to obtain
Consequently, is a random walk in “Pauli space”.
This follows from (a) the cyclic property of the trace, (b) the technical constraint , and (c) is either or depending on whether is odd or even respectively. Using Eq. 25 we find and , leading to the simplified expression
Substituting this into Eq. 10 and retaining only terms up to we obtain
To a good approximation may be treated as a small constant correction, with the statistical distribution properties residing in the leading-order term, the random variable . In arriving at these expressions we have used the assumption that the error process has zero mean, from which it follows that only terms with noise random variables raised to even powers, or those summed over terms raised to even powers, survive the ensemble average, with the others reducing to zero. In particular, (see Appendix C).
IV Probability Distribution Functions for RB Outcomes via a Geometric Interpretation of Error Accumulation
In the expressions above we see that the key metric capturing the reduction of fidelity in a randomized benchmarking sequence is
which may be interpreted as a random walk in “Pauli space” generated by adding randomly-oriented steps along the principal Cartesian axes, with step lengths specified by . Walks which terminate far from the origin correspond to sequences with large net infidelities while those which ultimately end near the origin have small infidelities. These observations form a key contribution of this work, as we will show.
This geometric picture provides a unique insight into how error accumulates in RB sequences, and facilitates calculation of the distribution of incorporating the effects of correlations in the error process. In particular, the calculation of sequence fidelities maps onto the distribution of the random walk, with possible correlations manifesting in the random step lengths.
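This random-walk picture can be sketched directly (the function name and NumPy conventions here are ours, offered only as an illustrative simulation of the geometry): each step moves a length set by the error angle along a uniformly random, randomly signed Cartesian axis, and the squared endpoint distance plays the role of the infidelity metric.

```python
import numpy as np

rng = np.random.default_rng(7)

def pauli_walk_sq(deltas):
    """Squared endpoint distance |V|^2 of a Pauli-space walk.

    Step j moves a length deltas[j] along one of the three Cartesian axes
    (chosen uniformly at random) with a random sign, mirroring the uniformly
    random signed Pauli operators generated by the Clifford sequence.
    """
    J = len(deltas)
    axes = rng.integers(0, 3, size=J)        # which axis each step takes
    signs = rng.choice([-1.0, 1.0], size=J)  # random orientation along that axis
    V = np.zeros(3)
    np.add.at(V, axes, signs * deltas)       # accumulate the 3D endpoint
    return float(V @ V)
```

For i.i.d. step lengths the cross terms average to zero, so the expected squared distance is simply the sum of the squared step lengths.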
In the following sections we explore how correlations in affect the distributions of the terms in Eq. 27, and hence the probability density function (PDF) of the noise-averaged fidelity . Our calculations demonstrate that under all error models treated here takes the form of a gamma distribution, which is well known in statistics and provides the significant benefit in that explicit analytic forms are available for both the moments of the distribution and the moment-generating functions (see Table 1). Interestingly the gamma distribution is known to be useful for failure analysis and life-testing in engineering Papoulis () which bears some similarity to the notion of error accumulation in RB.
IV.1 PDF for Markovian Processes
In the Markovian limit (i.e. uncorrelated noise), we assume all noise random variables are i.i.d. Hence corresponds to a -length unbiased random walk with step lengths sampled from the normal distribution . Since these step lengths are symmetrically distributed about zero, the distributions of the components of the walk vector are invariant with respect to the sign of the coefficients in all Cartesian directions . Ignoring the signs we therefore treat the coefficients as binaries , where the zero event simply reduces the number of steps taken in that direction. Let
count the total number of nonzero components in each Cartesian direction over the sequence of walk vectors . Then
where the superscripts in indicate summing only over the subset of for which the coefficients are nonzero. Thus we have
where we have defined for . Since all are i.i.d., the new random variables are also independent and normally distributed, with zero mean and variance . The distribution of the sum of squares of non-identical Gaussians is generally complicated to write down, requiring a generalized chi-square distribution. To avoid this we make the following modest approximation. Since the vectors are uniformly-distributed there is a probability of being parallel to any given Cartesian axis. The probability of finding any particular combination is therefore given by the multinomial distribution
For , however, this is sufficiently peaked around that we may simply regard these values as fixed without significant error. In this case reduce to i.i.d. random variables. The distribution of consequently reduces to a chi-square distribution with 3 degrees of freedom. It is more convenient, however, to express this in more general terms as a member of the two-parameter family of gamma distributions (see Eq. 55), of which the chi-square is a special case. Specifically, we obtain
with shape parameter and scale parameter . An ensemble average over independent noise realizations is therefore specified by
where the are i.i.d. gamma-distributed random variables. But the sample mean over gamma-distributed random variables simply yields a rescaled gamma distribution with and (see Eq. 57). Consequently
with expectation and variance .
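The rescaling property invoked above (the sample mean of n i.i.d. gamma variables is again gamma, with shape multiplied by n and scale divided by n) is easy to verify numerically. In the sketch below the shape k, scale theta, ensemble size n, and trial count are illustrative values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(3)

k, theta, n, N = 1.5, 0.02, 40, 20000  # shape, scale, noise-ensemble size, trials

# N sample means, each averaging n i.i.d. Gamma(k, theta) draws.
samples = rng.gamma(k, theta, size=(N, n)).mean(axis=1)

# Moments predicted by the rescaled gamma with shape n*k and scale theta/n.
pred_mean = (n * k) * (theta / n)      # = k * theta, unchanged by averaging
pred_var = (n * k) * (theta / n)**2    # = k * theta**2 / n, shrinks as 1/n
```

The mean is preserved while the variance contracts by 1/n, which is exactly the behavior of the noise-averaged fidelity as the noise ensemble grows.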
Higher-order contributions may be included by computing the terms in . For full derivations see Appendix E; here we sketch the result. From the known distribution of in Eq. 159, the PDF for is given by a transformation allowing us to compute the expectation and variance. Taking an ensemble average over noise realizations, applying the central limit theorem, and observing the narrowness of the resulting distribution compared to the leading-order term, we make the approximation . The remaining terms yield values and . Substituting these into Eq. 28 we obtain the 4th order correction and the noise-averaged fidelity reduces to
inheriting the gamma distribution of described in Eq. 36. We linearly transform this expression to produce a final PDF for noise-averaged fidelity in the Markovian regime:
where is the gamma function, and the quantities , and are defined in Table 1.
IV.2 PDF for DC (quasi-static) processes
In the DC limit (i.e. quasi-static noise) we assume all noise random variables are identical (maximally correlated) over a given sequence . However over separate instances is sampled from the normal distribution . Then corresponds to a -step unbiased random walk with fixed step length directed along the Cartesian axes. In this case the noise random variables and Clifford-dependent random variables factorize, allowing us to express
where defines an unbiased random walk on a 3D lattice generated by adding unit-length steps. Thus, in contrast with the Markovian case, here the random walk in Pauli space is unaffected (step-by-step) by the noise interactions. Rather, in a given run, the noise variables effectively scale the random walk generated by the Clifford sequence, up to a sign. Since we are interested in the norm square , however, any sign dependence of vanishes. Performing a finite ensemble average over noise randomizations we therefore obtain , and Eq. 27 yields
In this case, , includes a term which is now highly correlated with the leading-order contribution, so we cannot use the expectation value as a proxy for the whole distribution. However corrections from these terms are , so we ignore these terms, and formally study the limit . Numerical evidence indicates that this approximation works well up to .
Since is normally distributed, is chi-square-distributed which is a special case of the gamma distribution, with shape parameter and scale parameter . Taking the sample mean over independent gamma-distributed variables again yields a rescaled gamma distribution with and
with expectation and variance .
The random-walk behavior of (Eq. 39) represents a well-studied problem in diffusion statistics. Let the random variable be the distance from the origin in a symmetric (Bernoulli) 3D random walk after steps. It is straightforward to show that the PDF for is
This expression describes a random walk of unit step length, and is derived assuming that is uniformly and continuously sampled from all directions in ; however, Eq. 42 is a good approximation for the PDF of a walk on a 3D lattice Note1 (). The distribution of the squared distance is then given by the transformation (see Eq. 53)
where and , and is the gamma function; again this is a gamma distribution (see Eq. 55) with shape parameter and scale parameter . Consequently
Thus, to leading order, the PDF for with DC noise is specified by the product of two independent gamma-distributed random variables. The closed-form expression can be calculated by direct integration (see Appendix F); however, for moderate ensemble sizes it is sufficient to approximate as strongly peaked around its mean, , such that . In this case the PDF reduces to a linear-transformed gamma distribution associated with and possesses the same form as Eq. 38 but with different values of the parameters , and , as defined in Table 1. This is a remarkable observation given the substantial differences in error correlations between these extreme limits.
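The gamma approximation to the lattice-walk statistics can be probed numerically. The sketch below (illustrative parameters and function name of our choosing) compares the empirical moments of the squared distance of a unit-step walk along random Cartesian axes against the gamma prediction with shape 3/2 and scale 2J/3:

```python
import numpy as np

rng = np.random.default_rng(11)

def lattice_walk_sq(J):
    """|W|^2 for a J-step unbiased walk with unit steps along random Cartesian axes."""
    axes = rng.integers(0, 3, size=J)
    signs = rng.choice([-1.0, 1.0], size=J)
    W = np.zeros(3)
    np.add.at(W, axes, signs)
    return float(W @ W)

J, N = 100, 20000
w2 = np.array([lattice_walk_sq(J) for _ in range(N)])

# Gamma(shape=3/2, scale=2J/3) prediction: mean J, variance (2/3) * J^2.
pred_mean, pred_var = J, (2 / 3) * J**2
```

The empirical mean matches J exactly on average, and the variance is within a few percent of the gamma form for walks of this length.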
IV.3 PDF for Generically-Correlated Processes
For both limiting cases of Markovian and DC noise correlations treated above we showed is described, to first order, by the gamma distribution with shape and scale parameters dependent on the correlation structure. Assuming continuity of the distribution we also expect to be approximately gamma-distributed for generic, intermediate correlation structures. We can show this formally for a specific class of correlated noise models, in which the noise is block-correlated; i.e. the error random variables are constant over blocks, or subsequences, of Clifford operators of fixed length , and there is no correlation between distinct blocks. The full derivation of the PDF for this case can be found in Appendix G. We find block-correlated noise yields a gamma-distributed fidelity, with parameters , , enabling us to interpolate between the Markovian () and DC () limits for arbitrary correlation length (see Table 1).
While block-correlated noise is not stationary (i.e. it does not have a stationary power spectrum), it simply and explicitly captures the notion of a correlation length, . The correlation length thus manifests itself in the distribution of fidelities. With these insights, for brevity we assume that generic noise correlations also give rise to gamma-distributed infidelities. This is supported by the quantitative calculations and qualitative arguments in Appendix G, and by comparison with Monte Carlo numerics. Consequently, the shape and scale factors can be inferred from the mean and variance of the distribution, which we calculate directly.
For simplicity, we assume the error process is sufficiently non-Markovian that the noise ensemble size and the contributions may be ignored without introducing large errors, as in the DC case. Thus, we write
From the first two moments of the gamma distribution, the expectation and variance of the random-walk variable are and , so the shape and scale parameters are given by
The task of defining the gamma distribution therefore reduces to computing the expectation and variance of , as a function of the generic correlation structure of the error process, .
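Once those two moments are in hand, the method-of-moments inversion to shape and scale is immediate. A short sketch (our notation; the synthetic round-trip data are illustrative only):

```python
import numpy as np

def gamma_from_moments(mean, var):
    """Method-of-moments gamma fit: mean = k * theta and var = k * theta**2
    invert to shape k = mean**2 / var and scale theta = var / mean."""
    theta = var / mean
    return mean / theta, theta

# Round-trip sanity check on synthetic gamma-distributed samples.
samples = np.random.default_rng(0).gamma(2.0, 0.3, size=200_000)
k_hat, theta_hat = gamma_from_moments(samples.mean(), samples.var())
```

The same inversion applied to the mean and variance of the walk variable yields the shape and scale parameters quoted in Table 1.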
An arbitrary time-series may be characterized by an autocorrelation function where the ensemble average is over all , and is the time difference between measurements. In this case, invoking the Wiener-Khintchine theorem, the power spectral density (PSD) is given by the Fourier transform . We make use of these relations by discretizing the time-series. Let the elements of be defined by for where is the time taken to perform a Clifford operation. The underlying error process is thereby discretely “sampled” by the Clifford sequence , and correlations between elements of separated by a time interval of “ gates” are specified by the (discrete) autocorrelation function
In experiment one typically has access to the PSD rather than the autocorrelation function. However this easily maps to our framework via an inverse Fourier transform
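The discrete autocorrelation–PSD correspondence can be sketched with a fast Fourier transform. The exponentially decaying autocorrelation, its correlation length Mc, and the circular-lag convention below are illustrative assumptions of ours, not taken from the analysis above:

```python
import numpy as np

# Hypothetical autocorrelation with correlation length Mc (in units of gates),
# made even in the lag index via the circular distance min(k, J - k).
J, Mc, var = 256, 10, 1.0
k = np.arange(J)
C = var * np.exp(-np.minimum(k, J - k) / Mc)

# Discrete Wiener-Khintchine pair: the PSD is the Fourier transform of C,
# and the inverse transform recovers the autocorrelation.
S = np.fft.fft(C).real
C_back = np.fft.ifft(S).real
```

Because the autocorrelation is even, its transform is real and nonnegative, as a valid PSD must be, and the inverse transform reproduces the correlations the Clifford sequence samples.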
With these expressions we now have a complete analytic representation of the distribution of measured noise-averaged fidelities over different RB sequences. In the next section we verify these results with numeric Monte Carlo simulations and discuss the differences in distribution characteristics depending on noise correlations.
IV.4 Comparing PDFs for Various Noise Correlations
The analytic forms for derived above serve as a tool to analyze the impact of temporal noise correlations on RB experiments. In all cases (Markovian, DC, and intermediate) the PDF is -distributed, with differences captured in the values of the shape parameter and scale parameter . This result is derived from the statistics of a 3D random walk.
Plotting the PDF in Fig. 1 immediately reveals substantial differences in the distribution of outcomes for the two limiting cases. While the distributions in both Markovian and DC cases yield the same value for the mean (the statistic currently used in RB protocols), the higher-order moments diverge significantly. For DC noise is skewed towards high fidelities and possesses a variance significantly larger than that for Markovian errors of equivalent strength, parameterized by the value of . Averaging over a large noise ensemble results in the mode converging to the mean in the Markovian regime, but maintaining a fixed higher value of fidelity for the DC regime. By comparison, the variance and skew for Markovian noise diminish with increasing noise averaging, but remain fixed and nonzero in the DC case.
To compare our analytic PDFs with the true distributions we directly simulate the fidelity outcomes associated with the metric in Eq. 9. For a given , we generate an ensemble of random Clifford sequences. The first elements in each sequence are uniformly and independently sampled from the set , with the final element chosen such that the total operator product performs the identity . For each sequence we then generate an ensemble of random and independent realizations of the error process, where each is a sequence of noise random variables generated by Monte Carlo sampling from the appropriate correlated-error model. For Markovian and DC processes, this is fully described in Section III. For the intermediate case, random sequences with a desired PSD and autocorrelation function may be generated by uniformly-phase-randomized Fourier synthesis as described in SoarePRA2014 () (see Appendix I).
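The phase-randomized Fourier synthesis step can be sketched as follows; the frequency grid and normalization are our own conventions, with only the random-phase construction following the cited method:

```python
import numpy as np

rng = np.random.default_rng(5)

def synthesize_noise(psd, J):
    """One realization of a J-gate noise time series whose Fourier components
    follow a target one-sided PSD, built by summing cosines with uniformly
    random phases (cf. phase-randomized Fourier synthesis)."""
    K = len(psd)                                   # number of tones, K <= J // 2
    phases = rng.uniform(0.0, 2.0 * np.pi, size=K)
    t = np.arange(J)
    x = np.zeros(J)
    for k in range(K):
        # amplitude sqrt(2 * S_k) gives each tone a variance S_k over phases
        x += np.sqrt(2.0 * psd[k]) * np.cos(2.0 * np.pi * (k + 1) * t / J + phases[k])
    return x
```

Averaged over many realizations, the per-gate variance of the synthesized series equals the sum of the PSD weights, and the random phases make distinct realizations statistically independent.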
For each pair and the operator product in Eq. 11 is computed, using Table 3 to determine the () unitary matrix representing each Clifford operator. For each pair the trace fidelity is then calculated using Eq. 9, generating the array shown in Eq. 6, simulating the measured fidelity outcomes. Averaging over columns, as in Eq. 7, the array reduces to a column vector containing noise-averaged fidelities , one for each , which we finally plot as a normalized histogram and compare against our analytic PDFs.
Fig. 1(c) and (d) compares the numerically generated histograms of fidelity against the analytic results, Eq. 38, for Markovian and DC noise respectively. These are in excellent agreement for the different error processes considered. The characteristic long-tailed distribution peaked near high fidelities in the DC limit reproduces key features observed in recent experiments BrownPRA2011 (). Agreement with analytics is good (with no free parameters) for , beyond which higher-order error terms contribute to the distribution.
For correlated noise, similarly good agreement is obtained. We have validated the block-correlated noise model (not shown). For quasi- noise, the fidelities are also well described by the gamma distribution, accounting for correlation length, as shown in Fig. 1(e). The dominance of low-frequency components in this PSD (see inset) skews toward higher fidelities than the mean, but the presence of higher-frequency components (shorter correlation times) reduces the variance and partially restores symmetry. In the limit of power spectra containing substantial high-frequency noise, small shifts due to higher-order terms in Eq. 28 become important.
The underlying physical reason for the differences in the two limiting error models is revealed by examining the filter-transfer functions for the various Clifford randomizations GreenNJP2013 (). The noise-averaged fidelity is given by , quantifying the susceptibility to error over a given frequency band, and has demonstrated experimental applications SoareNatPhys2014 (). Using techniques outlined in Refs. GreenPRL2012 (); GreenNJP2013 () we calculate for random sequences , shown in Fig. 2. In the low-frequency regime we observe variations over several orders-of-magnitude in the vertical offset of and also variations in slope at higher frequencies, indicative of partial error cancellation and hence substantial variations in susceptibility to correlated errors. The corresponding distribution of fidelities agrees well with for DC noise (see inset).
These observations arise from the fact that some RB sequences contain coherent, error-suppressing subsequences. Indeed, RB sequences bear a resemblance Laflamme2009 () to randomized dynamical decoupling protocols known to suppress errors in certain limits of correlated noise ViolaRDD (). This leads to “artificially” small measured errors for correlated noise, thereby increasing the variance and skew in towards high fidelities for sufficiently low-frequency-dominated noise. Furthermore, this explicit link between the form of and underlying symmetries in suggests that one may downselect RB sequences using, e.g., the filter function or appropriate entropy measures on the Clifford sequences, to ensure coherent averaging properties are minimized. To the best of our knowledge this is the first quantitative validation of the mechanism underlying the shifts in for temporally correlated errors, and the first application of the filter-transfer function formalism for quantum control at the algorithmic level.
V Discussion and Conclusion
Our primary observation is that the form of – and in particular the moments of the distribution – can exhibit strikingly different behavior in different error regimes. This is true despite the fact that the expectation approximately converges for the Markovian and DC cases for weak noise (see Table 1). This has important implications for the application and interpretation of RB experiments in quantum information.
For instance, due to the form of for low-frequency noise, the sample estimator obtained from an ensemble of instances of can differ from the true expectation , generally leading to overestimation of the mean fidelity Flammia2014 (); Epstein2014 (). This risks systematically underestimating due to insufficient sampling over . The difference may be formally bounded using moment-generating functions for the gamma distribution (see Supplemental Material), providing a more inclusive bound than previous approximations Flammia2014 (); Epstein2014 (). Ensuring falls within an acceptable confidence interval, say within of (and assuming , or ), requires that randomizations be selected in the Markovian case, but in the DC case. These values increase rapidly as confidence bounds tighten. Such sample sizes are much larger than typically employed in experimental settings, but the requirement may be partially relaxed when calculating by fitting to measurements performed at multiple values of . In the Markovian regime we also find a tradeoff between finite noise sampling (experimentally achieved by repeating a fixed sequence and averaging over the resulting projective measurements) and the required . Increasing the noise averaging, , reduces the required .
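The finite-sampling pitfall can be illustrated numerically under the gamma-distribution result (a hedged sketch: the mean infidelity of 1e-3, shape 3/2, and k = 5 randomizations below are illustrative choices, not values quoted in the text). Because the DC-limit distribution is strongly right-skewed in infidelity, a small ensemble of randomizations more often than not returns an optimistic estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
mean_infid, shape = 1e-3, 1.5        # DC-like gamma with shape 3/2 (assumed)
scale = mean_infid / shape
k = 5                                # randomizations per simulated experiment

# Sampling distribution of the k-sequence estimate of the mean infidelity
estimates = rng.gamma(shape, scale, size=(20000, k)).mean(axis=1)
frac_optimistic = (estimates < mean_infid).mean()
print(f"fraction of {k}-sequence experiments that underestimate the mean "
      f"infidelity (i.e. overestimate fidelity): {frac_optimistic:.2f}")
```

For a symmetric distribution this fraction would be 1/2; the excess above 1/2 is the systematic optimism from insufficient sampling over sequences, which shrinks only slowly as k grows.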
Beyond the question of how well measurements performed in strongly correlated environments estimate , there is uncertainty surrounding the breadth of applicability of this single proxy metric as a general verification tool for quantum information. One key observation is that while the variance and skew of the distribution converge to zero for Markovian errors in the limit of infinite noise averaging (), both remain fixed for DC noise. Accordingly, our results provide direct evidence of the divergence between and parameters relevant to fault tolerance Gutierrez2015 (), such as worst-case errors Sanders2015 (), in noise environments with strong temporal correlations; in the DC limit the worst-case error can be much larger than the average error.
Finally, the fact that some RB sequences are intrinsically “blind” to correlated errors highlights potential shortcomings in performing experimental gate optimization by maximizing at fixed ; an operator may optimize an experimental parameter (to maximize the RB fidelity) in a way that increases systematic errors in individual gates. Such issues may be partially mitigated by selecting subsets of valid RB sequences using the length of the random walk, , in the DC limit, to ensure fidelities are not overestimated in an RB procedure. Future work will explore the use of entropic measures to associate RB sequence structure with without the need to perform full calculations of the random walk.
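The proposed down-selection by random-walk length can be sketched as follows (an illustrative implementation only: the ±1 axis steps standing in for Clifford-frame error rotations and the median-based acceptance window are our assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n_seq, n_gates = 1000, 100

# 3D "Pauli space" walk: each gate maps a DC error onto a random axis/sign
steps = rng.choice([-1.0, 1.0], size=(n_seq, 3, n_gates))
V = np.linalg.norm(steps.sum(axis=2), axis=1)   # walk length per sequence

# Keep only sequences with typical walk length, discarding short-walk
# sequences that are "blind" to DC errors (and unusually long walks)
keep = np.abs(V - np.median(V)) < 0.5 * V.std()
print(f"kept {int(keep.sum())} of {n_seq} sequences; "
      f"V spans [{V.min():.1f}, {V.max():.1f}]")
```

Screening on V requires only the sequence of ideal Clifford frames, not any fidelity measurement, so the filtering can be done before data are taken.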
We conclude that the interpretation and applicability of a measured RB outcome can differ significantly depending on the nature of the underlying errors and the measurement parameters in a real experimental situation. This challenge can be partially mitigated by presenting more complete datasets – specifically for each sequence – to assist readers in comparing reported results. We believe the new insights our calculations have revealed will help bound the utility of in quantum information settings and help experimentalists ensure that measurements are not subject to hidden biases.
Acknowledgements: The authors thank J. M. Chow, J. Gambetta, and A. Yacoby for useful discussions on RB measurements, and L. Viola for discussions on randomized decoupling. Work partially supported by the ARC Centre of Excellence for Engineered Quantum Systems CE110001013, the Intelligence Advanced Research Projects Activity (IARPA) through the ARO, the US Army Research Office under Contracts W911NF-12-R-0012, W911NF-14-1-0098 and W911NF-14-1-0103, and a private grant from H. & A. Harley. STF acknowledges support from an ARC Future Fellowship FT130101744.
- (1) James D F V, Kwiat P G, Munro W J and White A G 2001 Phys. Rev. A 64(5) 052312
- (2) O’Brien J L, Pryde G J, Gilchrist A, James D F V, Langford N K, Ralph T C and White A G 2004 Phys. Rev. Lett. 93(8) 080502
- (3) Emerson J, Alicki R and Życzkowski K 2005 Journal of Optics B: Quantum and Semiclassical Optics 7 S347
- (4) Lévi B, López C C, Emerson J and Cory D G 2007 Phys. Rev. A 75(2) 022314
- (5) Bendersky A, Pastawski F and Paz J P 2008 Phys. Rev. Lett. 100(19) 190403
- (6) López C C, Lévi B and Cory D G 2009 Phys. Rev. A 79(4) 042328
- (7) Barz S, Fitzsimons J F, Kashefi E and Walther P 2013 Nature Physics 9 727–731
- (8) Magesan E and Cappellaro P 2013 Phys. Rev. A 88(2) 022127
- (9) Knill E, Leibfried D, Reichle R, Britton J, Blakestad R B, Jost J D, Langer C, Ozeri R, Seidelin S and Wineland D J 2008 Phys. Rev. A 77(1) 012307
- (10) Magesan E, Gambetta J M and Emerson J 2011 Phys. Rev. Lett. 106(18) 180504
- (11) Magesan E, Gambetta J M, Johnson B R, Ryan C A, Chow J M, Merkel S T, da Silva M P, Keefe G A, Rothwell M B, Ohki T A, Ketchen M B and Steffen M 2012 Phys. Rev. Lett. 109(8) 080505
- (12) Wallman J J and Flammia S T 2014 New Journal of Physics 16 103032
- (13) Epstein J M, Cross A W, Magesan E and Gambetta J M 2014 Phys. Rev. A 89(6) 062321
- (14) Biercuk M, Uys H, VanDevender A, Shiga N, Itano W M and Bollinger J J 2009 Quantum Info & Comp. 9 920–949
- (15) Olmschenk S, Chicireanu R, Nelson K D and Porto J V 2010 New Journal of Physics 12 113007
- (16) Gaebler J P, Meier A M, Tan T R, Bowler R, Lin Y, Hanneke D, Jost J D, Home J P, Knill E, Leibfried D and Wineland D J 2012 Phys. Rev. Lett. 108(26) 260503
- (17) Brown K R, Wilson A C, Colombe Y, Ospelkaus C, Meier A M, Knill E, Leibfried D and Wineland D J 2011 Phys. Rev. A 84(3) 030303
- (18) Chow J M, Gambetta J M, Tornberg L, Koch J, Bishop L S, Houck A A, Johnson B R, Frunzio L, Girvin S M and Schoelkopf R J 2009 Phys. Rev. Lett. 102(9) 090502
- (19) Kelly J, Barends R, Campbell B, Chen Y, Chen Z, Chiaro B, Dunsworth A, Fowler A, Hoi I C, Jeffrey E, Megrant A, Mutus J, Neill C, O’Malley P, Quintana C, Roushan P, Sank D, Vainsencher A, Wenner J, White T, Cleland A and Martinis J M 2014 Phys. Rev. Lett. 112(24) 240504
- (20) Harty T P, Allcock D, Ballance C J, Guidoni L, Janacek H A, Linke N M, Stacey D N and Lucas D M 2014 Phys. Rev. Lett. 113(22) 220501
- (21) Xia T, Lichtman M, Maller K, Carr A W, Piotrowicz M J, Isenhower L and Saffman M 2015 Phys. Rev. Lett. 114(10) 100503
- (22) Lu D, Li H, Trottier D A, Li J, Brodutch A, Krismanich A P, Ghavami A, Dmitrienko G I, Long G, Baugh J and Laflamme R 2015 Phys. Rev. Lett. 114(14) 140505
- (23) Muhonen J T, Laucht A, Simmons S, Dehollain J P, Kalra R, Hudson F E, Freer S, Itoh K M, Jamieson D N, McCallum J C, Dzurak A S and Morello A 2015 Journal of Physics: Condensed Matter 27 154205
- (24) Nielsen M and Chuang I 2010 Quantum Computation and Quantum Information: 10th Anniversary Edn. (Cambridge: Cambridge University Press)
- (25) Bylander J, Gustavsson S, Yan F, Yoshihara F, Harrabi K, Fitch G, Cory D G, Nakamura Y, Tsai J S and Oliver W D 2011 Nat. Phys. 7 565
- (26) Anton S M, Birenbaum J S, O’Kelley S R, Bolkhovsky V, Braje D A, Fitch G, Neeley M, Hilton G C, Cho H M, Irwin K D, Wellstood F C, Oliver W D, Shnirman A and Clarke J 2013 Phys. Rev. Lett. 110(14) 147002
- (27) Papoulis A 1991 Probability, Random Variables, and Stochastic Processes (New York: McGraw Hill)
- (28) actually describes a lattice walk and should strictly be described by a probability mass function (PMF). However this introduces unnecessary complications in the form of combinatorial expressions to account for the directional discretization, and in the interest of analytic simplicity it is entirely appropriate to make this continuous approximation. Doing so simply smooths over the discreteness of the PMF while retaining all essential distribution characteristics.
- (29) Soare A, Ball H, Hayes D, Zhen X, Jarratt M C, Sastrawan J, Uys H and Biercuk M J 2014 Phys. Rev. A 89 042329
- (30) Green T J, Sastrawan J, Uys H and Biercuk M J 2013 New J. Phys. 15 095004
- (31) Soare A, Ball H, Hayes D, Sastrawan J, Jarratt M, McLoughlin J, Zhen X, Green T and Biercuk M 2014 Nat. Phys. 10 1
- (32) Green T J, Uys H and Biercuk M J 2012 Phys. Rev. Lett. 109 020501
- (33) Ryan C A, Laforest M and Laflamme R 2009 New J. Phys 11 013034
- (34) Viola L and Knill E 2005 Phys. Rev. Lett. 94(6) 060502
- (35) Gutiérrez M and Brown K R 2015 Phys. Rev. A 91(2) 022335 (Preprint arXiv:1501.00068)
- (36) Sanders Y R, Wallman J J and Sanders B C 2015 arXiv:1501.04932
Appendix A Mathematical Preliminaries
A.1 Linear transformation
Let $X$ be a continuous, non-negative random variable described by the probability density function $f_X(x)$. If $Y$ is the random variable defined by $Y = aX + b$, where $a \neq 0$ and $b \in \mathbb{R}$, then the probability density function of $Y$ is $f_Y(y) = \frac{1}{|a|} f_X\!\left(\frac{y-b}{a}\right)$.
A.2 Product distribution
Let $X$ and $Y$ be two independent, continuous random variables described by probability density functions $f_X(x)$ and $f_Y(y)$. Then the probability density function of the product $Z = XY$ is $f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y\!\left(\frac{z}{x}\right) \frac{1}{|x|}\, dx$.