Precision-guaranteed quantum metrology
Quantum metrology is a general term for methods to precisely estimate the value of an unknown parameter by actively using quantum resources. In particular, some classes of entangled states can be used to significantly suppress the estimation error. Here, we derive a formula for rigorously evaluating an upper bound for the estimation error in a general setting of quantum metrology with arbitrary finite data sets. Unlike in the standard approach, where lower bounds for the error are evaluated in an ideal setting with almost infinite data, our method rigorously guarantees the estimation precision in realistic settings with finite data. We also prove that our upper bound shows the Heisenberg limit scaling whenever the linearized uncertainty, which is a popular benchmark in the standard approach, shows it. As an example, we apply our result to a Ramsey interferometer, and numerically show that the upper bound can exhibit the quantum enhancement of precision for finite data.
PACS numbers: 03.65.Wj, 03.67.-a, 02.50.Tt, 06.20.Dk
High-precision measurement is one of the most important techniques for developing science and technology. Quantum metrology is a general term for methods to precisely estimate the value of an unknown parameter by actively using quantum resources like entanglement and squeezing Giovannetti et al. (2004, 2011); Demkowicz-Dobrzański et al. (2014). For example, when we use a separable state on an N-partite system in a Ramsey interferometer, the estimation error of the phase θ scales as 1/√N (the standard quantum limit, SQL). On the other hand, when we use an entangled state like a Greenberger-Horne-Zeilinger (GHZ) state with the same number of particles, the error scales as 1/N (the Heisenberg limit, HL). Such quantum enhancement of precision has been experimentally achieved in several quantum systems like quantum optics Rarity et al. (1990), ions Meyer et al. (2001), and atoms Widera et al. (2004).
One of the main goals of quantum metrology theory is to derive a fundamental lower bound on the estimation error. So far, many different benchmarks for the estimation error have been proposed and analyzed Demkowicz-Dobrzański et al. (2014). The most popular benchmark is the root mean squared error (RMSE), and there are two standard approaches for analyzing the RMSE. One is to apply a linear approximation of an estimation method to the RMSE; the approximated RMSE is called a linearized uncertainty (LU). The other is to analyze the classical and quantum Cramér-Rao bounds (CRBs), which are lower bounds on the RMSE for a class of estimation methods. The LU and CRBs show the SQL scaling for separable states and the HL scaling for some entangled states.
From a theoretical viewpoint, the LU and CRBs are both interesting and important quantities. From an experimental viewpoint, however, there are two problems with the use of these quantities. The first problem is their θ-dependency. The LU and CRBs are functions of the parameter to be estimated. The true value of the parameter is unknown in experiments, which is the very reason why we try to estimate it. This means that we cannot know the exact values of the LU and CRBs in experiments, although it is possible to estimate their values from experimental data. The second problem is their invalidity for finite data. In experiments, the amount of available data is finite. A linear approximation is used in the derivation of the LU, while many estimation methods used in quantum metrology, like a maximum-likelihood estimator, are nonlinear functions of data, and the nonlinearity is not negligible for finite data. CRBs are lower bounds on the RMSE, and they are not attainable when the amount of data is finite 111There are two types of CRBs. One is for finite data, and the other is for infinite data. The necessary and sufficient condition for the attainability of the CRBs for finite data is that the probability distribution of measurement outcomes is an affine function of the parameter Amari and Nagaoka (2000). In quantum metrology the probability distribution is not affine in θ, and the CRBs are not attainable for finite data; they become attainable only in the limit of infinite data.. Unattainable lower bounds on an estimation error cannot be used to guarantee an estimation precision. Because the final goal of quantum metrology experiments is a highly precise estimation of an unknown parameter, it is best to rigorously guarantee an estimation precision, if possible. In order to do that, we need an upper bound on the estimation error satisfying two conditions: (1) it must be independent of the unknown parameter θ, and (2) it must be valid for finite data.
In this paper, we derive an upper bound satisfying these two conditions in a general setting of quantum metrology. In Sec. II, we explain the setting, notation, and our approach. In Sec. III, we introduce an estimation method called a least squares estimator and give a theorem about the estimator. The upper bound shown in the theorem makes it possible to rigorously guarantee the estimation precision in experiments with finite data, which is not possible with the standard approach of quantum metrology theory. We sketch the proof, and the details are given in Appendix B. In Sec. IV, we prove that the upper bound shows the same scaling as the LU, which means that the upper bound shows the HL scaling whenever the LU shows it. As an example, we apply our method to a Ramsey interferometer with N atoms and perform Monte Carlo simulations. The numerical results indicate that the upper bound can exhibit the quantum enhancement of precision for finite data. In Sec. V, we discuss how to treat known and unknown systematic errors in our approach. We summarize this paper in Sec. VI.
II.1 Procedures and assumptions
We consider the following procedure of quantum metrology (Fig. 1):
Prepare a known quantum state on a probe system.
The state undergoes a dynamical process with an unknown parameter θ. Our aim is to estimate θ ∈ [θ_min, θ_max], where θ_max and θ_min are upper and lower values of possible θ and are assumed to be known.
After the dynamical process, the state changes to a state that depends on θ. We perform a known measurement on the state and obtain a measurement outcome.
Repeat steps 1 to 3 n times 222Note that N and n are different. N is the number of particles in the probe system used for each measurement trial, and n is the number of measurement trials.. Then we have data consisting of n outcomes, x = (x_1, …, x_n).
Calculate an estimate of the parameter from the data by a data-processing method. This function from data to a real value is called an estimator.
The measurement performed in Step 3 is described by a positive operator-valued measure (POVM), which is not necessarily a projective measurement. We assume that the measurement outcomes are bounded, i.e., that they lie in a known finite interval. This assumption is valid in practice, because there are technical cutoffs on observable values of measurement outcomes in any experiment. The probability distribution of the outcomes is given by the Born rule. Let E(θ) and V(θ) denote the expectation and variance of a measurement outcome, respectively. The expectation is a function of θ. We assume that E is injective and that the derivative dE/dθ does not take the value zero for any θ ∈ [θ_min, θ_max]. Let R denote the range of E. Then the inverse function E⁻¹ exists on R, and we have E⁻¹(E(θ)) = θ. So, if we know the value of the expectation, we can calculate the value of θ.
For given data, we define the sample mean μ_n = (1/n) Σ_{i=1}^{n} x_i, where x_i is the i-th outcome among the n outcomes. The sample mean converges to the expectation E(θ) in the limit of n going to infinity (the law of large numbers). It might seem natural to consider a direct inversion (DI) estimator, θ_DI = E⁻¹(μ_n). In general, however, the direct inversion estimator does not work well, because the sample mean is a random variable and can be out of R, owing to the statistical fluctuation originating from the finiteness of n. The inverse function E⁻¹ may not exist outside R, and we may not be able to calculate θ_DI there. Even if E⁻¹ exists, θ_DI can be out of [θ_min, θ_max].
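This failure mode of direct inversion is easy to reproduce numerically. The following sketch uses a toy model that is our illustrative assumption, not the paper's setting: E(θ) = cos θ on [θ_min, θ_max] = [0.5, 1.0], with binary outcomes ±1 distributed so that the expectation is cos θ. For small n, the sample mean frequently leaves the range of E, and the inverse cannot be applied.

```python
import math, random

theta_min, theta_max = 0.5, 1.0         # assumed parameter range (illustration only)
E = math.cos                             # toy model expectation E(theta) = cos(theta)
R = (math.cos(theta_max), math.cos(theta_min))  # range of E on [theta_min, theta_max]

def sample_mean(theta, n, rng):
    # outcomes are +/-1 with P(+1) = (1 + cos(theta)) / 2, so the mean estimates cos(theta)
    p = (1 + math.cos(theta)) / 2
    return sum(1 if rng.random() < p else -1 for _ in range(n)) / n

rng = random.Random(0)
theta_true, n = 0.75, 10
mu = sample_mean(theta_true, n, rng)
if R[0] <= mu <= R[1]:
    theta_di = math.acos(mu)             # direct inversion: E^{-1} = arccos
    print("DI estimate:", theta_di)
else:
    print("sample mean", mu, "is outside the range of E; DI estimate undefined")
```

With n = 10 in this model, a non-negligible fraction of runs produce a sample mean outside R (e.g. all outcomes equal to +1 gives μ_n = 1), so the DI estimate is undefined there.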
One solution to the problem of the DI estimator mentioned above is a maximum-likelihood estimator (MLE). Unlike the DI estimator, an MLE estimate always exists and takes a value in [θ_min, θ_max]. The MLE has good statistical properties; for example, it attains the Cramér-Rao bound in the limit of n going to infinity Rao (2002). The asymptotic behavior of the MLE as n → ∞ is well known in classical statistics Bahadur (1960), but a rigorous analysis for finite n is an open problem. Instead of the MLE, we consider a different estimator that is easier to analyze rigorously.
II.3 Standard benchmarks
Let us choose an estimator. The estimates depend on data and probabilistically fluctuate. This means that we can observe estimates deviating from the true parameter θ. This deviation is called the estimation error of the estimator. Evaluating the estimation error is an important topic in quantum metrology.
As explained in Sec. I, the most popular benchmark is the root mean squared error (RMSE), which is defined as the square root of the expectation of the squared deviation of the estimates from the true parameter, Δθ_RMSE(θ) = √E[(θ̂(x) − θ)²].
Generally speaking, a direct analysis of the RMSE itself is difficult, since the RMSE is a function of the dynamics and of our choice of initial state, measurement, and estimator. There are two approaches to reducing this difficulty in analyzing the RMSE.
The first approach is to approximate the RMSE for the DI estimator. When the number of repetitions n is sufficiently large, we can approximate the RMSE of the DI estimator as follows:

Δθ_RMSE(θ) ≈ √(V(θ)/n) / |dE(θ)/dθ|.   (2)
In this approximation, the nonlinearity of the DI estimator is ignored. In other words, the R.H.S. of Eq. (2) is the RMSE of the linearized DI estimator, and it is called the linearized uncertainty (LU). The details of this approximation are given in Appendix A.2. An advantage of the LU is that its analysis is easy, because it consists only of the variance and the derivative of the expectation with respect to a single outcome. Estimates of the LU, obtained by substituting sample quantities for V(θ) and dE/dθ, have been calculated in some quantum metrology experiments.
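Such a plug-in estimate of the LU can be sketched as follows. The model E(θ) = cos θ with outcomes in [−1, +1] is our illustrative assumption, not the paper's Eq. (3) or (4); the sample mean stands in for E(θ) and the unbiased sample variance for V(θ).

```python
import math

def lu_plugin(outcomes):
    """Plug-in estimate of the linearized uncertainty for the toy model
    E(theta) = cos(theta) with outcomes in [-1, +1]. Illustration only."""
    n = len(outcomes)
    mu = sum(outcomes) / n                                 # estimates E(theta)
    var = sum((x - mu) ** 2 for x in outcomes) / (n - 1)   # unbiased sample variance
    mu_c = min(max(mu, -0.999), 0.999)  # keep away from E'(theta) = 0 (assumption A4)
    dE = -math.sin(math.acos(mu_c))     # dE/dtheta evaluated at the inverted estimate
    return math.sqrt(var / n) / abs(dE)
```

Note that the clipping step is needed precisely because the sample mean can leave the range of E for finite n, which is one symptom of the finite-data problems discussed above.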
The second approach is to analyze an asymptotic lower bound on the RMSE, which is independent of our choice of estimator. Let us introduce two classes of estimators in statistical estimation theory. When the expectation of an estimator equals the true parameter, the estimator is called unbiased for θ. When the derivative of that expectation converges to one in the limit of n going to infinity, the estimator is called asymptotically unbiased. For any estimator unbiased with respect to θ, the following inequality holds under certain regularity conditions Rao (2002); Helstrom (1976); Holevo (1982):

Δθ_RMSE(θ) ≥ 1/√(n F(θ)) ≥ 1/√(n F_Q(θ)),
where F(θ) and F_Q(θ) are quantities called the classical and quantum Fisher information, respectively. Eqs. (5) and (6) are called the classical and quantum Cramér-Rao inequalities for finite n, respectively. Most estimators in quantum metrology, including the DI estimator and the MLE, are biased for finite n, which originates from the nonlinear parametrization of the probability distributions. This means that the Cramér-Rao inequalities for finite n are not applicable to quantum metrology. However, most “natural” estimators in quantum metrology, again including the DI estimator and the MLE, are asymptotically unbiased. For any asymptotically unbiased estimator, the following inequality holds under certain regularity conditions Rao (2002); Helstrom (1976); Holevo (1982):
where Eqs. (7) and (8) are called the classical and quantum Cramér-Rao inequalities for asymptotic n, respectively. The MLE attains the classical Cramér-Rao bound (CRB) for asymptotic n Rao (2002). An advantage of the CRBs is their generality with respect to our choice of estimator and measurement. The classical CRB is independent of the estimator, and the quantum CRB is additionally independent of the measurement. So, the classical and quantum CRBs are used for evaluating the ultimate performance of a given combination of state and measurement, or of the state alone, respectively.
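For concreteness, the classical Fisher information and the resulting CRB 1/√(n F(θ)) can be computed directly for a toy binary model (an assumption of ours, not the paper's setting): p(+1|θ) = (1 + cos θ)/2, p(−1|θ) = (1 − cos θ)/2.

```python
import math

def fisher_binary(theta, eps=1e-6):
    """Classical Fisher information F(theta) = sum_x (dp/dtheta)^2 / p for the
    toy model p(+1|theta) = (1 + cos(theta))/2, via a central numerical derivative."""
    def p(t):
        return (1 + math.cos(t)) / 2
    dp = (p(theta + eps) - p(theta - eps)) / (2 * eps)
    q = p(theta)
    return dp * dp / q + dp * dp / (1 - q)   # sum over the two outcomes +1 and -1

def crb(theta, n):
    """Classical Cramer-Rao lower bound on the RMSE for n i.i.d. trials."""
    return 1 / math.sqrt(n * fisher_binary(theta))
```

For this particular model F(θ) works out to be constant (equal to 1 away from sin θ = 0), so the CRB reduces to 1/√n; richer models give θ-dependent bounds, which is exactly the θ-dependency problem discussed above.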
From a theoretical viewpoint, the LU and CRBs are interesting and important quantities. However, they are not suitable for rigorously evaluating an estimation error in experiments with finite n, for the following two reasons. (1) The LU and CRBs are functions of the unknown parameter θ, and we cannot know their exact values in experiments. Of course, we can estimate their values by calculating quantities like Eq. (4), but the calculated values are estimates that can differ from the exact values. (2) The LU and CRBs are not valid for finite n, as explained in this subsection. When n is sufficiently large, we may be able to justify their use, but it is unclear which n can be considered sufficiently large. In order to rigorously evaluate an estimation error for finite n, we need another benchmark.
II.4 Confidence intervals
Roughly speaking, confidence intervals are intervals including the true parameter with high probability. We propose the size of a confidence interval as a new benchmark in quantum metrology. When an interval is a function of data and is independent of θ, the function is called an interval estimator. We would like to find an interval estimator such that the interval estimates include θ with high probability. When the interval includes θ with probability at least 1 − δ for any θ, the interval estimator is called a confidence interval with (1 − δ)-confidence level. For example, the whole region [θ_min, θ_max] is a confidence interval with 100%-confidence level. This example is trivial and useless; we need a nontrivial and useful confidence interval. The following two properties are required for a “nontrivial” and “useful” confidence interval in quantum metrology experiments.
Its size converges to zero in the limit of n going to infinity.
Its size can show a quantum enhancement when we use a quantum resource in quantum metrology experiments.
In this paper, we propose a new confidence interval and prove that it has the two properties mentioned above.
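For intuition on what an exact, distribution-free confidence statement looks like, note that Hoeffding's inequality already gives one for the expectation of bounded outcomes, valid at any finite n (this generic sketch is not the paper's construction, which concerns the parameter θ rather than the mean):

```python
import math

def hoeffding_halfwidth(n, delta, a=-1.0, b=1.0):
    """Half-width t such that |sample_mean - E| <= t with probability >= 1 - delta,
    for n i.i.d. outcomes bounded in [a, b] (two-sided Hoeffding inequality):
    P(|mu_n - E| >= t) <= 2 exp(-2 n t^2 / (b - a)^2)."""
    return (b - a) * math.sqrt(math.log(2 / delta) / (2 * n))

# e.g. a 90%-confidence interval for the mean after n = 1000 trials:
t = hoeffding_halfwidth(1000, 0.1)
```

The half-width shrinks as 1/√n and grows only logarithmically as δ decreases, which is the qualitative trade-off between interval size, confidence level, and the number of trials discussed throughout this paper.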
Before moving on to our results, let us note their difference from known results. Confidence intervals and confidence levels are well known concepts in classical statistics, and there are many statistical techniques to calculate them for finite data Lehmann and Romano (2005). Most of these techniques are, however, based on the normal distribution approximation (NDA), and a confidence interval calculated with the NDA is called an approximate confidence interval. The NDA is valid when the number of measurement trials n is sufficiently large (the central limit theorem), but again, it is not clear which n can be considered sufficiently large. Therefore, it is not rigorous to apply approximate confidence intervals to finite data in experiments. In contrast to an approximate confidence interval, a confidence interval calculated without any assumption on the probability distribution is called an exact confidence interval. Our new confidence interval is an exact confidence interval, and to the best of our knowledge, it is the first exact confidence interval for quantum metrology.
An exact confidence region, which is a generalization of the confidence interval to higher dimensional spaces, was proposed for quantum tomography in Sugiyama et al. (2013). The estimation object in quantum tomography is a quantum state, process, or measurement, which involves multiple parameters. Some readers might think that the result in Sugiyama et al. (2013) would be applicable to quantum metrology because quantum metrology is an estimation problem for a quantum process with a single parameter, but this is not correct. Quantum process tomography and quantum metrology are different problems from a statistical viewpoint, and the result for quantum tomography obtained in Sugiyama et al. (2013) is not applicable to quantum metrology. The main difference stems from the parametrization of the probability distribution. In quantum tomography, the probability distribution of measurement outcomes can be linearly parametrized by the estimation object. In quantum metrology, on the other hand, the estimation object is a single parameter, but the parametrization of the probability distribution is nonlinear. In general, statistical estimation problems with linearly and nonlinearly parametrized probability distributions have different statistical properties. For example, the classical CRB for finite data is attainable in the linear case, but not in the nonlinear case Amari and Nagaoka (2000). Indeed, in quantum tomography there exists an unbiased estimator that attains equality in the classical Cramér-Rao inequality for any finite data, but in quantum metrology there exists no unbiased estimator attaining the equality for finite data. This is caused by the difference of the parametrizations. Therefore quantum process tomography and quantum metrology are different problems in statistical estimation.
Additionally, the linearity of the parametrization in quantum tomography is used in the derivation of the exact confidence region in Sugiyama et al. (2013), so the result is not applicable to quantum metrology. In order to derive an exact confidence interval that reflects the quantumness of resources in quantum metrology, we need new mathematical techniques.
We consider the least squares (LS) estimator,

θ_LS(x) = argmin_{θ ∈ [θ_min, θ_max]} (μ_n − E(θ))².
As with the MLE, the LS estimates always exist and take values in [θ_min, θ_max]. Let us define the unbiased sample variance of the data, V_n = (1/(n−1)) Σ_{i=1}^{n} (x_i − μ_n)². Note that V_n satisfies E[V_n] = V(θ). Using the quantities introduced above, we define three functions of the data and a user-specified constant δ.
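Numerically, the LS estimator can be realized by minimization over a parameter grid. The following minimal sketch again uses the illustrative model E(θ) = cos θ on [0.5, 1.0]; the grid resolution is an arbitrary choice of ours.

```python
import math

def ls_estimate(outcomes, E=math.cos, theta_min=0.5, theta_max=1.0, grid=10001):
    """Least squares estimate: the theta in [theta_min, theta_max] minimizing
    (sample_mean - E(theta))^2. The estimate always exists and stays in range,
    even when the sample mean is outside the range of E."""
    mu = sum(outcomes) / len(outcomes)
    thetas = [theta_min + (theta_max - theta_min) * k / (grid - 1) for k in range(grid)]
    return min(thetas, key=lambda t: (mu - E(t)) ** 2)
```

When the sample mean lies outside the range of E, the minimizer simply sits at the nearest boundary of [θ_min, θ_max], so the estimate is always well defined, unlike direct inversion.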
The following theorem guarantees that the deviation of the LS estimates from the true parameter is upper bounded, with high probability, by the last of these functions, defined in Eq. (16).
For any number of measurement trials n, user-specified constant δ ∈ (0, 1), and true parameter θ ∈ [θ_min, θ_max], the deviation |θ_LS(x) − θ| is upper bounded by the quantity defined in Eq. (16) with probability at least 1 − δ.
Theorem 1 means that the quantity defined in Eq. (16) is a rigorous “error bar” on LS estimates for arbitrary finite data. In other words, the LS estimate lies within this error bar of the true parameter with probability at least 1 − δ, where we can choose the value of δ as small as we like. The error bar becomes larger as we choose a smaller δ. This means that, if we require a higher confidence level for a fixed n, the “error bar” becomes larger to stay on the safe side. If we want to raise the confidence level while keeping the error bar small, we need to increase the number of measurement trials.
We sketch the proof of Theorem 1; the details are shown in Appendix B. The LS estimator is a nonlinear function of the sample mean, which is the origin of the main difficulty in the analysis. We use the Taylor expansion up to second order with the remainder, and reduce the problem to an analysis of the deviation of the sample mean from the true expectation, |μ_n − E(θ)|. In the reduction, we use the contractivity of the LS estimator, i.e., |μ_n − E(θ_LS(x))| ≤ |μ_n − E(θ)|, which follows directly from the definition of the LS estimator as a minimizer.
The contractivity is one of the two main keys in this proof, and this is the reason why we choose the LS estimator. After the reduction, we use two inequalities for evaluating |μ_n − E(θ)|. One is Hoeffding’s inequality Hoeffding (1963), which is well known in classical statistics. The other is the empirical Bernstein inequality Maurer and Pontil (2009), a mathematical tool recently developed for finite data analysis in machine learning. The empirical Bernstein inequality is the second key in this proof; it enables us to show a relation to the linearized uncertainty explained later. By combining these inequalities, the contractivity, and the Taylor expansion, we obtain Theorem 1.
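The empirical Bernstein bound of Maurer and Pontil (2009) can be sketched as follows for outcomes rescaled into [0, 1]. The two-sided version here combines the two one-sided tails with a union bound, replacing ln(2/δ) by ln(4/δ); that symmetrization step is our reading, not a statement of the paper's Eq. (16).

```python
import math

def empirical_bernstein_halfwidth(outcomes, delta):
    """With probability >= 1 - delta, |sample_mean - E| is at most this value,
    for i.i.d. outcomes in [0, 1] (Maurer & Pontil 2009; two-sided via a union
    bound over the upper and lower tails, hence ln(4/delta))."""
    n = len(outcomes)
    mu = sum(outcomes) / n
    var = sum((x - mu) ** 2 for x in outcomes) / (n - 1)  # unbiased sample variance
    log_term = math.log(4 / delta)
    return math.sqrt(2 * var * log_term / n) + 7 * log_term / (3 * (n - 1))
```

The point of using this bound rather than Hoeffding's alone is visible in the first term: it is proportional to √(V_n/n), so low-variance data, as produced by good quantum probes, directly tightens the bound. This is the mechanism that lets the error bar inherit the scaling of the linearized uncertainty.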
It is important that the error bar depends only on the data and the user-specified constant δ, and that it is independent of the true parameter θ. (Its probability distribution does depend on θ.) So, we can calculate the error bar without knowing θ. Let us introduce the data-dependent interval consisting of all parameter values within the error bar of the LS estimate.
Theorem 1 guarantees that this interval estimator is an exact confidence interval with (1 − δ)-confidence level. For example, when we choose δ = 0.1, we obtain a confidence interval that includes θ with probability at least 90%. What we do after Step 4 in quantum metrology experiments is to choose a value of δ as we like and to calculate the LS estimate and the error bar from the data obtained. Then we have an estimate of the unknown parameter with a rigorous error bar.
The main purpose of this paper is to propose an exact confidence interval satisfying Properties 1 and 2 explained in Sec. II.4. By the definition of the error bar in Eq. (16), our new exact confidence interval satisfies Property 1. In this section, we theoretically and numerically show that it also satisfies Property 2. In Sec. IV.1 we show relations to the LU and to quantities calculated in experiments. In particular, the relation to the LU indicates that the error bar shows a quantum enhancement for asymptotically large n whenever the LU shows it. In Sec. IV.2, we perform a numerical simulation of a Ramsey interferometer. The result indicates that, even for finite n, the error bar can show the quantum enhancement when a quantum resource is used in quantum metrology.
IV.1 Relation to LU
First, we explain a relation between the error bar and the LU. By definition, the error bar decreases as 1/√n, and the coefficient of the dominant term consists of sample quantities. This coefficient converges to the corresponding true quantities in the limit of n going to infinity, because μ_n and V_n converge to E(θ) and V(θ), respectively. So, we would expect the error bar to have the same scaling as the LU with respect to n and N. Actually we can prove the following inequality:
where the expectation is taken with respect to the data. The proof is shown in Appendix C. The logic mentioned above and Eq. (20) guarantee that, on average, the error bar scales the same as the LU. In particular, the upper bound shows the HL scaling whenever the LU shows it. This is especially important in noisy cases. The quantum enhancement of precision can be suppressed when the dynamical process is noisy Huelga et al. (1997), and there have recently been many proposals for recovering the quantum enhancement with respect to the LU Macchiavello et al. (2000); Preskill (2000); Matsuzaki et al. (2011) and the CRB Chin et al. (2012); Chaves et al. (2013); Dür et al. (2014). Eq. (20) indicates that a recovery method that works for the LU also works for our error bar.
Next, we explain a relation between the error bar and quantities calculated in experiments. As explained in Sec. II.3, estimates of the LU, such as Eq. (4), have been calculated in some quantum metrology experiments. Such a quantity also appears in our error bar. Let us define an estimate of the LU as
Then the dominant part of the error bar can be rewritten in terms of this estimate. This means that the error bar is a sum of an estimate of the LU, multiplied by a coefficient originating from the chosen confidence level, and higher order, i.e., O(1/n), terms. The coefficient and the higher order terms are corrections guaranteeing statistical rigor. Therefore our result is not totally different from the conventional method used in experiments; rather, it is an extension of such a rough method toward rigorously treating finite data.
IV.2 Example: Ramsey interferometer
We apply our result to a Ramsey interferometer with N atoms. For a separable probe state of N atoms, the LU scales as the SQL, 1/√N. On the other hand, for an entangled state like a GHZ state, the LU can scale as the HL, 1/N. We consider two combinations of initial state and measurement. One is a combination of a separable state and the measurement of the total energy, and the other is that of a GHZ state and the measurement of the parity, where the relevant single-atom states are the excited and ground states. In both cases the expectation E(θ) is a known sinusoidal function of θ, with a fringe period N times shorter for the GHZ state.
We performed Monte Carlo simulations for both cases at the 90%-confidence level; the parameter values used are given in Appendix D. In order to analyze typical behaviors of the error bar, we calculated its expectation and compared it to the expectation of the actual deviation of the LS estimates. Figs. 2(a) and 2(b) show the results. In both panels, the vertical axes show expected deviations. Solid and dashed (black) lines are the error bar and the actual deviation for the separable state, respectively. Chained and dotted (red) lines are the error bar and the actual deviation for the GHZ state, respectively. The expectations were calculated by Monte Carlo sampling. In panel (a), the horizontal axis is the number of atoms, N. The plots express the scaling of the expected deviations with respect to N for a fixed number of measurement trials n. The expectations of the error bar are larger than those of the actual deviation, which is consistent with Theorem 1. Panel (a) also indicates that, over the simulated range of N, the expectation of the error bar for the GHZ state follows the HL scaling, while that for the separable state follows the SQL scaling. In panel (b), the horizontal axis is the number of measurement trials, n. The plots express the scaling of the expected deviations with respect to n for a fixed number of atoms N. The expectations of the error bar for both states scale as 1/√n, and the error bar for the GHZ state is, on average, smaller than that for the separable state.
In conclusion of the numerical simulations, the expectations of the error bar with 90%-confidence level are larger than the expectations of the actual deviations for both separable and entangled states, which is consistent with Theorem 1. Furthermore, Fig. 2 indicates that, compared to the separable state, the entangled state gives both a smaller deviation of the estimates and a smaller error bar. Eq. (20) guarantees that the error bar shows the HL scaling for asymptotically large n whenever the LU shows the scaling, and Fig. 2 indicates that the error bar can also show the quantum enhancement of precision for finite n. Note that the Ramsey interferometer is mathematically equivalent to a Mach-Zehnder interferometer Lee et al. (2002), which means that the error bar can show the quantum enhancement of precision in an optical interferometer as well.
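The spirit of these simulations can be reproduced with a compact toy model. The fringe functions, the parameter range [0.2, 0.6] (chosen so both models are injective there), and all numerical values below are our illustrative assumptions, not the simulation parameters of Appendix D.

```python
import math, random

def simulate_deviation(N, n, theta_true, ghz, rng):
    """Absolute deviation of a grid least squares estimate in a toy Ramsey model.
    ghz=False: each atom yields +/-1 with P(+1) = (1 + cos theta)/2; the trial
               outcome is the mean over N atoms, so E(theta) = cos(theta).
    ghz=True : parity outcome +/-1 with P(+1) = (1 + cos(N*theta))/2,
               so E(theta) = cos(N*theta), an N-times faster fringe."""
    def draw():
        if ghz:
            p = (1 + math.cos(N * theta_true)) / 2
            return 1.0 if rng.random() < p else -1.0
        p = (1 + math.cos(theta_true)) / 2
        return sum(1.0 if rng.random() < p else -1.0 for _ in range(N)) / N

    E = (lambda t: math.cos(N * t)) if ghz else math.cos
    lo, hi, grid = 0.2, 0.6, 2001   # range chosen so E is injective for both models
    mu = sum(draw() for _ in range(n)) / n
    thetas = [lo + (hi - lo) * k / (grid - 1) for k in range(grid)]
    est = min(thetas, key=lambda t: (mu - E(t)) ** 2)
    return abs(est - theta_true)
```

Averaging over repetitions, the GHZ-type model gives noticeably smaller deviations at equal N and n, reflecting the steeper fringe slope N sin(Nθ); this mirrors the comparison of the black and red curves in Fig. 2.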
In this section, we discuss how to apply our result to some cases in which unknown systematic errors exist.
V.1 Partially unknown systematic errors
In Theorem 1, it is assumed that we perfectly know θ_min, θ_max, and the functional form of E. This assumption may not be valid when there exists a systematic error in experiments. In the standard approach of quantum metrology theory, a model for the systematic error is introduced, and it is assumed that the model correctly characterizes the error and that we know the value of the noise parameter in the model Huelga et al. (1997); Macchiavello et al. (2000); Preskill (2000); Matsuzaki et al. (2011); Chin et al. (2012); Escher et al. (2011); Demkowicz-Dobrzánski et al. (2012); Chaves et al. (2013); Dür et al. (2014); Arrad et al. (2014); Kessler et al. (2014). Theorem 1 is applicable to such a perfectly known systematic error. Even if the model is correct, however, the value that we think of as the noise parameter may differ from the true value in an experiment. Theorem 1 and the standard approach are not directly applicable to such a partially unknown systematic error. However, we can obtain an exact confidence interval for quantum metrology with a partially unknown systematic error by modifying Theorem 1 based on the worst case of the noise parameter.
Here let us consider the case where the noise is partially unknown, i.e., the noise model is correct, but we do not know the value of the noise parameter in the model. Suppose that there is an imperfection in the preparation of the initial state and that it is characterized by a noise model with a parameter; that the time evolution is characterized by the true parameter of interest θ and a noise parameter; and that there is an imperfection in the measurement apparatus, characterized by a noise model with a parameter (each noise parameter can itself be a multi-parameter). Suppose that the noise parameters are unknown, but that we know a region including the true noise parameters. In this case, the probability distribution of the measurement outcome is given by
We know the functional form of the probability distribution, but we do not know the true values of the noise parameters. The functional forms of E and V then depend on the values of the noise parameters, and so does the error bar. To clarify this noise-dependency, we attach the noise parameters explicitly to the notation for E and for the error bar.
Let us denote by primed symbols the values that we regard as the true values of the noise parameters. In general, these differ from the true values. We want to evaluate the difference between the LS estimate calculated from the data with the incorrect noise parameters and the true parameter θ. We have
where Eq. (24) holds with probability at least 1 − δ. Let us define
We obtain the following lemma.
For any number of measurement trials n, user-specified constant δ ∈ (0, 1), unknown true parameter θ, unknown true noise parameters, and user-specified noise parameters, the deviation of the LS estimate from θ is upper bounded by the R.H.S. of Eq. (26) with probability at least 1 − δ.
Lemma 1 provides an exact confidence interval for quantum metrology with partially unknown noise.
The first term in the R.H.S. of Eq. (26) is the effect of the partially unknown noise; this is a systematic error. The second term in the R.H.S. of Eq. (26) corresponds to the statistical error. When the noise is partially unknown and we choose an incorrect value for the noise parameter, no estimator can converge to the true parameter θ. So, when n goes to infinity, the statistical term converges to zero but the systematic term does not. To avoid this problem when the noise is partially unknown, we need to estimate the parameter of interest and the noise parameters together. This simultaneous estimation of θ and the noise parameters is a theoretically interesting and practically important problem, but it is beyond the scope of this paper.
V.2 Physical vs statistical models
Here, we explain a possible method for treating unknown systematic errors that is different from the approach described in the previous subsection. An experimental setup of quantum metrology is characterized by an initial state, a dynamical process, and a measurement. Let us call this set a physical model of the experiment, and the functional form of the expectation E a statistical model of the experiment. Recall that the calculation of the LS estimate and the error bar requires only the statistical model. So, if we know the statistical model, we can use Theorem 1 even if we do not perfectly know the physical model.
Our strategy is as follows:
Step 1. We perform a pre-experiment before starting a quantum metrology experiment for an unknown θ. We set a known value of θ and run the measurement procedure.
Step 2. We repeat the pre-experiment for many different known values of θ.
Step 3. We estimate the statistical model E from the data obtained in the pre-experiments.
If the number of known θ values and the number of measurement trials for each known θ are sufficiently large, we have a precise estimate of the statistical model E, and we can use the estimate of E instead of the true E in Theorem 1. The method consisting of Steps 1 to 3 is exactly the same as experiments for observing interference fringes in quantum metrology Leibfried et al. (2004); Xiang et al. (2011); Aiello et al. (2012). The precision of estimating E depends on the way of sampling the θ values and on the choice of estimator for E. Establishing a method for rigorously evaluating the total precision of interference fringe observation followed by quantum metrology for an unknown θ is an open problem, which is important for practical quantum metrology.
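Such a pre-experiment can be sketched as a simple fringe fit. The two-parameter model E(θ) = A·cos θ + B and the coarse grid search are illustrative assumptions of ours; real calibrations would use a richer model and a proper fitting routine.

```python
import math

def fit_fringe(thetas_known, sample_means, grid=101):
    """Estimate a statistical model E(theta) = A*cos(theta) + B by least squares
    over a coarse grid of (A, B). Toy calibration step, illustration only."""
    best, best_cost = None, float("inf")
    for i in range(grid):
        A = 2.0 * i / (grid - 1)               # search amplitude A in [0, 2]
        for j in range(grid):
            B = -1.0 + 2.0 * j / (grid - 1)    # search offset B in [-1, 1]
            cost = sum((m - (A * math.cos(t) + B)) ** 2
                       for t, m in zip(thetas_known, sample_means))
            if cost < best_cost:
                best, best_cost = (A, B), cost
    return best
```

The fitted pair (A, B) then defines the statistical model fed into Theorem 1; the open problem noted above is how the uncertainty of this fit propagates into the final confidence interval.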
We considered a general setting of quantum metrology, proposed a least squares estimator, and derived an explicit formula for an exact confidence interval for the estimator with an arbitrary finite number of measurement trials. The explicit formula makes it possible to calculate a rigorous error bar on the least squares estimates in experiments. We showed that the error bar scales the same as the linearized uncertainty, which is a popular benchmark in the standard approach of quantum metrology, for an asymptotically large number of measurement trials. This means that the error bar asymptotically shows the Heisenberg limit scaling whenever the linearized uncertainty shows the scaling. As an example, we applied our results to a Ramsey interferometer with N atoms and performed Monte Carlo simulations. The numerical results indicate that, when a GHZ state is used as the initial state, the error bar shows the Heisenberg limit scaling for finite n. This means that the error bar can also exhibit the quantum enhancement of precision for finite data. To the best of our knowledge, this is the first result that makes it possible to rigorously guarantee an estimation precision in quantum metrology with finite data, and we hope it finds application in the analysis of experimental data.
The author would like to thank Patrick Birchall, Hugo Cable, Jonathan Matthews, Javier Sabines, and Peter S. Turner for helpful discussions about possible applications of our results to quantum metrology experiments on optical systems, and Fuyuhiko Tanaka for useful comments on this paper. The author also thanks Martin B. Plenio for drawing his attention to Refs. Macchiavello et al. (2000); Preskill (2000). This work was supported by JSPS Postdoctoral Fellowships for Research Abroad (H25-32), the German Science Foundation (grant CH 843/2-1), the Swiss National Science Foundation (grants PP00P2-128455, 20CH21-138799 (CHIST-ERA project CQC)), the Swiss National Center of Competence in Research ‘Quantum Science and Technology (QSIT)’ and the Swiss State Secretariat for Education and Research supporting COST action MP1006.
We explain the details of our results. In Sec. A, we give a summary of assumptions and the derivation of the linearized uncertainty. In Sec. B, we give the proof of Theorem 1. In Sec. C, we give the proof of Eq. (20). In Sec. D, we explain the details of the Ramsey interferometer and Monte Carlo simulation mentioned in Sec. IV.2.
Appendix A Notations and Assumptions
In this section, for convenience, we give a summary of the assumptions. We also explain the relation between the RMSE and the LU.
A.1 List of Assumptions
Theorem 1 holds under the following four assumptions.
A1. We know the functional form of the model, i.e., the map from the parameter to the measurement statistics is known.
A2. The measurement outcomes are bounded.
A3. The model function is injective on the parameter region of interest.
A4. The derivative of the model function is non-zero everywhere on the parameter region.
Assumption A1 is a standard assumption not only in quantum metrology but also in statistical parameter estimation generally. Assumption A2 is necessary for the use of Hoeffding’s inequality (Lemma 2) and the empirical Bernstein inequality (Lemma 3) in the proof of Theorem 1. Unbounded outcomes can exist in theory, but outcomes are always bounded in experiments, since there is a technical limit, or cutoff, on the observable range of measurement outcomes; assumption A2 is therefore natural in experiments. Assumption A3 is necessary for the uniqueness of the least squares estimates for any data, and assumption A4 is necessary for avoiding the divergence of the error bounds. In Sec. A.2, we explain that assumptions A3 and A4 are also required in the use of the linearized uncertainty, which means that A3 and A4 are implicitly assumed in the standard approach using the LU.
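For intuition, assumptions A3 and A4 can be checked for a concrete signal. The sketch below uses the textbook Ramsey signal $f(\theta) = (1+\cos\theta)/2$ on $(0, \pi)$ as an illustrative stand-in for the paper's model function; the names and the interval are assumptions of this sketch, not the paper's notation.

```python
import math

def f(theta):
    # Illustrative model function (textbook Ramsey signal, not the paper's notation)
    return (1 + math.cos(theta)) / 2

def f_prime(theta):
    # Its derivative, which assumption A4 requires to be non-zero
    return -math.sin(theta) / 2

# A3: f is strictly decreasing, hence injective, on (0, pi)
thetas = [k * math.pi / 100 for k in range(1, 100)]
values = [f(t) for t in thetas]
assert all(a > b for a, b in zip(values, values[1:]))

# A4: the derivative never vanishes on the open interval (0, pi)
assert all(abs(f_prime(t)) > 0 for t in thetas)
print("A3 and A4 hold for f on (0, pi)")
```

Near the endpoints $\theta = 0, \pi$ the derivative vanishes, which is exactly the situation A4 excludes.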
A.2 Linearized Uncertainty and Assumptions
We explain the relation between the RMSE and the LU (Eq. (35)), which clarifies the role of assumptions A3 and A4 for the LU. The RMSE of an estimator is defined by
Let us choose the DI estimator as the estimator. The DI estimates do not necessarily exist for all data; assumption A3 guarantees their existence only on a restricted region of the data. Outside that region, the DI estimate may not exist: A3 is a necessary condition for existence, but not a sufficient one. However, let us ignore this issue here, i.e., we assume that DI estimates exist for any data. By definition,
holds. We have
We apply the Taylor expansion to ,
and suppose that the number of trials is sufficiently large that the nonlinear terms in the Taylor expansion are negligible. Then
holds. Since holds from assumption A4, we obtain
Eq. (35) means that the linearized uncertainty is an approximated RMSE of the DI estimator, derived by ignoring the existence problem of the estimator and the nonlinearity of the model function. This is why we call it a linearized uncertainty.
In the derivation of Eq. (35), the following two conditions are required in addition to assumptions A3 and A4.
C1. DI estimates exist for any data.
C2. The number of measurement trials is sufficiently large that the nonlinearity of the model function around the true parameter value is negligible.
In the standard approach using the LU, assumptions A1, A3, A4 and conditions C1 and C2 are implicitly assumed. On the other hand, Theorem 1 does not require C1 or C2. Dropping C2 is especially important for analyzing finite data, because it is unclear how large a number of trials can be considered “sufficiently” large in C2.
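To make the linearization concrete, the LU can be computed by standard error propagation: the standard deviation of the sample-mean signal divided by the slope of the model function. The signal $f(\theta) = (1 + \cos\theta)/2$ below is a textbook single-atom Ramsey stand-in, not the paper's notation; note how assumption A4 appears as the division by the derivative.

```python
import math

def linearized_uncertainty(f_prime_theta, outcome_std, n):
    # Linear error propagation: delta_theta ~= std(mean signal) / |f'(theta)|.
    # A4 (non-zero derivative) is what keeps this from diverging.
    return outcome_std / (math.sqrt(n) * abs(f_prime_theta))

# Example: single-atom Ramsey signal f(theta) = (1 + cos theta) / 2
theta, n = math.pi / 2, 1000
f_prime = -math.sin(theta) / 2          # slope of the signal at theta
p = (1 + math.cos(theta)) / 2           # Bernoulli outcome probability
outcome_std = math.sqrt(p * (1 - p))    # per-trial standard deviation
print(linearized_uncertainty(f_prime, outcome_std, n))  # ~ 1/sqrt(1000)
```

The $1/\sqrt{n}$ dependence on the number of trials is generic; the quantum enhancement discussed in the main text concerns the dependence on the number of atoms instead.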
Appendix B Proof of Theorem 1
In this section we give the proof of Theorem 1. We derive an upper bound on the estimation error. It is difficult to analyze this quantity directly, because the estimator is a nonlinear function of the data and is biased. On the other hand, the following two lemmas hold for the sample mean.
Lemma 2 (Hoeffding’s inequality Hoeffding (1963))
Let $X$ be a random variable with $x_{\min} \le X \le x_{\max}$ and $X_1, \ldots, X_n$ be a sequence of i.i.d. random variables distributed as $X$. Then for any $n \ge 1$ and $\delta \in (0, 1)$,
$$
\mathrm{Prob}\left[\,\Bigl|\frac{1}{n}\sum_{i=1}^{n} X_i - \mathbb{E}[X]\Bigr| \le (x_{\max} - x_{\min})\sqrt{\frac{\ln(2/\delta)}{2n}}\,\right] \ge 1 - \delta .
$$
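As a numerical sanity check outside the proof, the standard two-sided Hoeffding bound can be verified by Monte Carlo for outcomes bounded in $[0, 1]$. The Bernoulli outcome model and all parameter values below are illustrative assumptions, not the paper's setting.

```python
import math
import random

def hoeffding_radius(n, delta, lo=0.0, hi=1.0):
    # Two-sided Hoeffding radius: (hi - lo) * sqrt(ln(2/delta) / (2n))
    return (hi - lo) * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

random.seed(0)
n, delta, p = 200, 0.05, 0.3   # Bernoulli(p) outcomes, bounded in [0, 1]
trials, covered = 2000, 0
for _ in range(trials):
    mean = sum(random.random() < p for _ in range(n)) / n
    covered += abs(mean - p) <= hoeffding_radius(n, delta)
print(covered / trials)  # empirical coverage; should be at least 1 - delta = 0.95
```

The observed coverage is well above $1 - \delta$, reflecting the looseness of Hoeffding's data-independent bound when the variance is small.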
Lemma 3 (Empirical Bernstein inequality Maurer and Pontil (2009))
Let $X$ be a random variable with $x_{\min} \le X \le x_{\max}$ and $X_1, \ldots, X_n$ be a sequence of i.i.d. random variables distributed as $X$. Then for any $n \ge 2$ and $\delta \in (0, 1)$,
holds (some coefficients in Eq. (38) differ from the corresponding inequality in Maurer and Pontil (2009), because we have a proof of Eq. (38) but could not prove the original inequality), where
Note that the bound in Hoeffding’s inequality is independent of the data, whereas the bound in the empirical Bernstein inequality depends on the data.
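This difference can be illustrated numerically. The sketch below uses the standard Maurer and Pontil (2009) form of the empirical Bernstein radius (recall that the paper's Eq. (38) has slightly different coefficients) and shows that the data-dependent radius can beat Hoeffding's when the sample variance is small; all names and numbers are illustrative.

```python
import math
import random

def hoeffding_radius(n, delta, width=1.0):
    # Data-independent radius for outcomes in an interval of the given width
    return width * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def empirical_bernstein_radius(xs, delta, width=1.0):
    # Standard Maurer-Pontil (2009) form; the paper's Eq. (38) differs in coefficients.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    return math.sqrt(2.0 * var * math.log(2.0 / delta) / n) \
        + 7.0 * width * math.log(2.0 / delta) / (3.0 * (n - 1))

random.seed(1)
n, delta = 500, 0.05
low_var = [0.5 + 0.01 * (random.random() - 0.5) for _ in range(n)]  # tiny spread
print(hoeffding_radius(n, delta))                   # data-independent radius
print(empirical_bernstein_radius(low_var, delta))   # smaller for low-variance data
```

For high-variance data the ordering can reverse, since the Bernstein radius carries an extra $O(1/n)$ term.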
First, we reduce the analysis of to that of . Let denote the argument of . Using the Taylor expansion of around up to the 2nd order with the remainder in the Lagrange form, we obtain the following inequality.
where is some real number between and . By combining Eq. (42) with and the contractivity, we obtain
By solving this quadratic inequality with , we can show that Eq. (44) is equivalent to
By substituting and into Eq. (45), we obtain
Appendix C Proof of Eq. (20)
Here we show the proof of Eq. (20):
where we used the Cauchy-Schwarz inequality and the equality . From the Taylor expansion, we have
In the limit of an infinite number of measurement trials, the empirical quantity converges to its expectation value because of the contractivity and the law of large numbers. Then
holds, and we obtain
Appendix D Details of Ramsey interferometer simulation
In this section, we explain the details of the Ramsey interferometer and the Monte Carlo simulation. When we use a separable state of $N$ atoms as the initial state, the LU shows the SQL scaling, $\propto 1/\sqrt{N}$. On the other hand, when we use an entangled state, the LU can show the HL scaling, $\propto 1/N$. The procedure of the Ramsey interferometer is as follows.
1. Prepare an initial state of $N$ atoms. Each atom is a two-level system.
2. Let each atom independently undergo a free evolution.
3. After the evolution, perform a $\pi/2$-pulse along an axis set by a user-tuned reference phase.
4. Perform a projective measurement of an observable.
5. Repeat steps 1 to 4 a number of times.
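The steps above can be sketched as a Monte Carlo simulation for independent atoms. The single-atom signal $p(\theta, \delta) = (1 + \cos(\theta + \delta))/2$ and the mean-inversion estimator used here are textbook conventions that may differ from the paper's, so this is an illustrative sketch rather than the paper's simulation.

```python
import math
import random

def ramsey_trial(theta, delta_ref, rng):
    # Steps 1-4 for one atom: 1 if the atom is measured in the excited state
    p_excited = (1 + math.cos(theta + delta_ref)) / 2
    return 1 if rng.random() < p_excited else 0

def estimate_phase(outcomes, delta_ref):
    # Step 5: invert the mean signal (a simple point estimator, not the paper's)
    mean = sum(outcomes) / len(outcomes)
    mean = min(max(mean, 0.0), 1.0)      # clamp so acos stays in its domain
    return math.acos(2 * mean - 1) - delta_ref

rng = random.Random(42)
theta_true, delta_ref, n = 0.8, 0.0, 5000
outcomes = [ramsey_trial(theta_true, delta_ref, rng) for _ in range(n)]
print(estimate_phase(outcomes, delta_ref))  # close to theta_true = 0.8
```

Because the trials are independent single-atom events, the estimate fluctuates at the SQL-like $1/\sqrt{n}$ rate in the number of trials; the entangled-state enhancement concerns the scaling in the atom number instead.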
We consider the following two combinations of the initial state and the measured observable.
A product state and energy measurement
Let us choose a product state,
as the initial state, where and are the excited and ground states, respectively. We observe the total energy,
where . The set of possible measurement outcomes is . In this combination, the probability distribution is given by