Improving spin-based noise sensing by adaptive measurements

Yi-Hao Zhang    Wen Yang wenyang@csrc.ac.cn Beijing Computational Science Research Center, Beijing 100193, China
Abstract

Localized spins in the solid state are attracting widespread attention as highly sensitive quantum sensors with nanoscale spatial resolution and fascinating applications. Recently, adaptive measurements were used to improve the dynamic range for spin-based sensing of deterministic Hamiltonian parameters. Here we explore a very different direction – spin-based adaptive sensing of random noises. First, we identify distinguishing features of the sensing of magnetic noises compared with the estimation of deterministic magnetic fields, such as the different dependences on the spin decoherence, the different optimal measurement schemes, the absence of the modulo-2π phase ambiguity, and the crucial role of adaptive measurements. Second, we perform numerical simulations that demonstrate a significant speed-up of the characterization of the spin decoherence time via adaptive measurements. This paves the way towards adaptive noise sensing and coherence protection.

quantum sensing, adaptive measurement, spin decoherence

I Introduction

Localized electronic spins in the solid state, such as nitrogen-vacancy centers in diamond Rondin et al. (2014), phosphorus donors Pla et al. (2012), silicon vacancy in SiC Widmann et al. (2014), and single rare-earth ion in yttrium aluminium garnet Kolesov et al. (2012), are attracting widespread attention as highly sensitive quantum sensors Degen et al. (2017) with nanoscale spatial resolution Balasubramanian et al. (2008, 2009); Maletinsky et al. (2012); Grinolds et al. (2013, 2014); Muller et al. (2014) and fascinating applications in condensed matter physics, materials science, and biology. The coherent Larmor precession of the spin reveals deterministic magnetic signals Chernobrod and Berman (2005); Degen (2008); Taylor et al. (2008); Maze et al. (2008); Wang et al. (2017a), while the decoherence of the spin reveals random magnetic noises Schoelkopf et al. (2002); de Sousa (2009); Hall et al. (2009, 2010) and other quantum objects Zhao et al. (2011, 2012); Kolkowitz et al. (2012); Taminiau et al. (2012); London et al. (2013); Staudacher et al. (2013); Mamin et al. (2013); Shi et al. (2014). By tracking the noises back to the environment, the localized spin can further reveal the structure and many-body physics of the environments, such as quantum criticality Quan et al. (2006); Chen et al. (2013) and partition functions in the complex plane Wei and Liu (2012); Wei et al. (2014); Peng et al. (2015); Wei et al. (2015); Heyl et al. (2013) and the quantum work spectrum Dorner et al. (2013); Mazzola et al. (2013); Batalhão et al. (2014). In these developments, the key challenge is to improve the sensing precision. For this purpose, dynamical decoupling techniques – originally developed for protecting qubits from decoherence – have been adapted for sensing alternating signals Kotler et al. (2011); de Lange et al. (2011), noises de Lange et al. (2010); Álvarez and Suter (2011); Medford et al. (2012); Bar-Gill et al. (2012); Muhonen et al. 
(2014), and other quantum objects Zhao et al. (2011, 2012); Kolkowitz et al. (2012); Taminiau et al. (2012); London et al. (2013); Staudacher et al. (2013); Mamin et al. (2013); Shi et al. (2014); Zhao et al. (2014); Zhao and Yin (2014); Ma et al. (2015); Casanova et al. (2015); Wang et al. (2016); Xiao and Zhao (2016); Wang et al. (2017b). Other techniques include rotating-frame magnetometry Cai et al. (2013); Yan et al. (2013); Loretz et al. (2013), Floquet spectroscopy Lang et al. (2015), two-dimensional spectroscopy Boss et al. (2016); Ma and Liu (2016), correlative measurements Laraoui et al. (2013), auxiliary quantum memory Zaiser et al. (2016), and compressive sensing Shabani et al. (2011); Boss et al. (2017).

Recently, there has been growing interest in using adaptive measurements to mitigate the modulo-2π phase ambiguity and hence improve the dynamic range of spin-based quantum sensing Sergeevich et al. (2011); Said et al. (2011); Waldherr et al. (2012); Nusran et al. (2012); Bonato et al. (2016); Stenberg et al. (2014). However, previous works focus on deterministic Hamiltonian parameters that drive the unitary evolution of the spin quantum sensor, leaving a large, important family of tasks unexplored – the spin-based quantum sensing of random noises that drive the non-unitary decoherence of the spin. It is important to identify the distinctions of spin-based noise sensing compared with spin-based Hamiltonian parameter estimation, and further to provide feasible methods to improve the key figure of merit – the sensing precision.

In this work, we explore theoretically the role of adaptive measurement in spin-based sensing of magnetic noises. First, our general analysis identifies a series of distinguishing features for sensing a random magnetic field (i.e., magnetic noises) compared with the estimation of a deterministic magnetic field (which is a paradigmatic Hamiltonian parameter), including the different dependences on the spin decoherence, the different optimal measurement schemes, and the absence of the modulo-2π phase ambiguity. Moreover, optimizing noise sensing requires knowledge about the unknown noises to be estimated, so adaptive measurements are crucial for improving the sensing precision. By contrast, in the estimation of deterministic magnetic fields, adaptive measurements are usually alternatives to non-adaptive schemes for mitigating the modulo-2π phase ambiguity and hence improving the dynamic range Said et al. (2011); Waldherr et al. (2012), and non-adaptive measurements can even outperform adaptive ones in some cases Said et al. (2011). Second, we perform numerical simulations and demonstrate that using adaptive measurements can significantly speed up the estimation of the spin decoherence time. These results pave the way towards spin-based adaptive sensing of noises. Since rapid characterization of decoherence allows us to design efficient schemes to suppress the decoherence, these results are also relevant to quantum computation.

The rest of this paper is organized as follows. In Sec. II, we outline the basic steps of a general adaptive measurement, leaving a detailed introduction to every step to Appendices A-E. In Sec. III, we analyze general spin-based sensing of magnetic noises and identify its distinguishing features. In Sec. IV, we perform numerical simulations for the adaptive estimation of the spin decoherence time. In Sec. V, we draw the conclusion.

II Adaptive quantum parameter estimation

Figure 1: General framework of adaptive quantum parameter estimation.

A general parameter estimation protocol using a quantum system to estimate an unknown, real parameter θ consists of three steps (Fig. 1):

  1. The quantum system is prepared into a certain (usually nonclassical) initial state and then undergoes a certain θ-dependent evolution into a final state ρ_θ. This step encodes the information about θ into the final state of the quantum system. The information contained in ρ_θ is quantified by the quantum Fisher information (QFI) F_Q(θ).

  2. The quantum system undergoes a measurement, which produces an outcome x according to a certain probability distribution p(x|θ). In this step, the quantum Fisher information contained in ρ_θ is transferred into the classical information in the measurement outcome. The information contained in each outcome is quantified by the classical Fisher information (CFI) F(θ), which obeys

    F(θ) ≤ F_Q(θ).    (1)
  3. Steps 1-2 are repeated N times and the outcomes x₁, ⋯, x_N are processed to yield an estimator θ_est of the unknown parameter θ. In this step, the total CFI N F(θ) contained in the outcomes is converted into the estimation precision, as quantified by the statistical error of the estimator:

    δθ ≡ √⟨(θ_est − θ)²⟩,    (2)

    where ⟨⋯⟩ denotes the average over many estimators obtained by repeating steps 1-3 many times. For unbiased estimators obeying ⟨θ_est⟩ = θ, the precision is fundamentally limited by the inequality

    δθ ≥ 1/√(N F(θ)),    (3)

    known as the Cramér-Rao bound Helstrom (1976); Braunstein and Caves (1994).

For optimal performance, it is necessary to optimize each step of the above initialization-evolution-measurement cycle. In step 1, the initial state and the evolution process should be optimized to maximize the QFI F_Q(θ). In step 2, appropriate measurements should be designed to convert all the QFI contained in ρ_θ into the CFI contained in the measurement outcome, so that F(θ) attains its maximum value allowed by Eq. (1). In step 3, optimal unbiased estimators should be used to convert all the CFI contained in the outcomes into the useful information contained in the estimator. For example, for large N, the Bayesian estimator and the maximum likelihood estimator are unbiased and can saturate the Cramér-Rao bound Eq. (3).
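As a toy illustration of steps 1-3 (our own minimal example, not from the main text; the binary-outcome model p(±1) = (1 ± w)/2 is an assumption chosen for simplicity), the following sketch checks numerically that the maximum likelihood estimator saturates the Cramér-Rao bound for large N:

```python
import numpy as np

# Toy illustration of steps 1-3 for a binary measurement with outcome
# probabilities p(+1) = (1 + w)/2 and p(-1) = (1 - w)/2 (our assumed model).
# The CFI of one outcome is F(w) = 1/(1 - w^2), so the Cramer-Rao bound
# for N repetitions reads  dw >= 1/sqrt(N F(w)).

def cfi(w):
    """Classical Fisher information of one +/-1 outcome about w."""
    return 1.0 / (1.0 - w**2)

def mle(outcomes):
    """Maximum likelihood estimator: the sample mean of the +/-1 outcomes."""
    return float(np.mean(outcomes))

rng = np.random.default_rng(0)
w_true, N, trials = 0.4, 2000, 400
estimates = []
for _ in range(trials):
    x = rng.choice([1.0, -1.0], size=N, p=[(1 + w_true) / 2, (1 - w_true) / 2])
    estimates.append(mle(x))
rms_error = float(np.sqrt(np.mean((np.array(estimates) - w_true) ** 2)))
crb = 1.0 / np.sqrt(N * cfi(w_true))
# For large N the MLE is unbiased and rms_error approaches crb.
```

Here the sample mean is the exact MLE of w, and the observed root-mean-square error matches the Cramér-Rao bound within statistical fluctuations.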

In steps 1 and 2, F_Q(θ) and F(θ) may depend on θ, so the optimization for maximal F_Q(θ) and F(θ) requires knowledge about the true value (denoted by θ_true) of the unknown parameter θ. A possible solution is adaptive measurements Barndorff-Nielsen and Gill (2000); Fujiwara (2006), i.e., using the measurement outcomes of previous initialization-evolution-measurement cycles to refine our knowledge about θ_true and then using this knowledge to optimize the next cycle. In steps 2-3, the probability distribution of the measurement outcome as a function of the unknown parameter may be periodic, making it impossible to identify a unique estimator. This ambiguity problem is commonly encountered in estimating deterministic Hamiltonian parameters and can be mitigated by using either non-adaptive or adaptive measurements Said et al. (2011). In this work, we explore the spin-based sensing of random noises and show that the ambiguity problem is absent, while the dependence of F_Q and F on θ_true makes adaptive measurements critical for improving the sensing precision.

Our subsequent discussions are based on the general adaptive measurement protocol in Fig. 1, which involves many important concepts and techniques, such as the QFI, the CFI, optimal unbiased estimators (such as the Bayesian estimator and the maximum likelihood estimator), the Cramér-Rao bound, and adaptive measurements. A systematic, self-contained introduction to these concepts (including a simple example) is given in Appendices A-E.

III Adaptive sensing of magnetic noises

The main purpose of this section is to identify the distinguishing features of spin-based noise sensing compared with the estimation of deterministic Hamiltonian parameters. For this purpose, we consider a generic pure-dephasing model

Ĥ(t) = (1/2)[ω + γ b(t)] σ̂_z,    (4)

describing the evolution of a spin-1/2 under a constant magnetic field B and a magnetic noise b(t) along the z axis, where σ̂_z is the Pauli matrix, ω ≡ γB is the Larmor frequency, and γ is the gyromagnetic ratio. This model is relevant to many experiments involving a localized electron spin in solid-state environments (such as semiconductor quantum dots and nitrogen-vacancy centers in diamond), where the dominant magnetic noises come from the surrounding electron spin bath or nuclear spin bath. The former can be modelled by an Ornstein–Uhlenbeck noise de Lange et al. (2010); Dobrovitski et al. (2009); Witzel et al. (2012, 2014), while the latter can be modelled by a quasi-static noise Shulman et al. (2014); Delbecq et al. (2016) (see Ref. Yang et al., 2017 for a review). Next, we follow the standard steps outlined in Sec. II and further detailed in Appendices A-E to discuss the estimation of the noise b(t) in comparison with the estimation of ω – a paradigmatic Hamiltonian parameter.

For step 1, we prepare the spin into a pure initial state

|ψ(0)⟩ = cos(θ/2)|↑⟩ + sin(θ/2)|↓⟩,    (5)

parametrized by the polar angle θ. Next, under the Hamiltonian in Eq. (4), the spin evolves for an interval t into a final mixed state

ρ(t) = cos²(θ/2)|↑⟩⟨↑| + sin²(θ/2)|↓⟩⟨↓| + (1/2) sin θ [e^{−iωt} W(t) |↑⟩⟨↓| + h.c.],    (6)

where W(t) ≡ ⟨e^{−iφ(t)}⟩ is the average of the random phase factor over the noise distribution, with φ(t) ≡ γ∫₀ᵗ b(t′) dt′ the random phase accumulated from the noise. For general noises, W(t) could be complex. Here we assume the distribution of φ(t) is symmetric about zero, so W(t) is real. Usually the fluctuation of the random phase φ(t) grows with the evolution time t, so W(t) decreases with t, corresponding to the decay of the average spin in the x-y plane, or spin decoherence for short Yang et al. (2017). From Eq. (6), we see that all the information about ω is carried by the phase factor e^{−iωt}, while all the information about the noise is carried by the decoherence factor W(t).

For ω and any parameter (denoted by λ) that characterizes the noise b(t), the QFI in the final state ρ(t) can be computed by Eq. (31) as

F_ω = t² W²(t) sin²θ,    (7a)
F_λ = [∂_λ W(t)]² sin²θ / [1 − W²(t)].    (7b)

Here F_ω and F_λ show very different dependences on the decoherence factor W(t). This highlights the first distinguishing feature of noise sensing compared with the estimation of a deterministic magnetic field. For estimating ω (λ), we should maximize F_ω (F_λ) by tuning the controlling parameters θ (In principle, we can also apply time-dependent control on the spin during the evolution to engineer the final state and then maximize F_ω or F_λ in the final state by optimizing this control. However, such optimization for a time-dependent, open quantum system is still an open issue Yuan (2016); Yuan and Fung (2015); Pang and Jordan (2017).) and t. The optimal value of θ is π/2. The optimal value of t should be chosen to maximize F_ω (F_λ). The optimal t depends on the specific form of W(t) as a function of t, which in turn is determined by the details of the noise (to be discussed shortly).
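The scalings in Eq. (7) can be checked numerically. The sketch below (our own; it assumes θ = π/2 and an exponential decoherence factor W(t) = e^{−t/T₂}, which are illustrative choices) evaluates the Bloch-vector QFI formula of Eq. (31) by finite differences and compares it with the closed forms F_ω = t²W² and F_λ = (∂_λW)²/(1 − W²) for λ = T₂:

```python
import numpy as np

# Numerical check (assuming theta = pi/2 and W(t) = exp(-t/T2)): evaluate the
# Bloch-vector QFI  F = |dr|^2 + (r . dr)^2/(1 - |r|^2)  [Eq. (31)] by finite
# differences and compare with F_omega = t^2 W^2 and F_T2 = (dW/dT2)^2/(1-W^2).

def bloch(omega, T2, t):
    """Bloch vector of the dephased spin for theta = pi/2."""
    W = np.exp(-t / T2)
    return np.array([W * np.cos(omega * t), W * np.sin(omega * t), 0.0])

def qfi_numeric(param, omega, T2, t, eps=1e-6):
    """QFI about 'omega' or 'T2' from the Bloch-vector formula."""
    if param == "omega":
        dr = (bloch(omega + eps, T2, t) - bloch(omega - eps, T2, t)) / (2 * eps)
    else:
        dr = (bloch(omega, T2 + eps, t) - bloch(omega, T2 - eps, t)) / (2 * eps)
    r = bloch(omega, T2, t)
    return dr @ dr + (r @ dr) ** 2 / (1.0 - r @ r)

omega, T2, t = 2.0, 1.0, 0.7
W = np.exp(-t / T2)
F_omega_exact = t**2 * W**2
F_T2_exact = (t / T2**2 * W) ** 2 / (1.0 - W**2)   # since dW/dT2 = (t/T2^2) W
```

The finite-difference values agree with the closed forms to numerical precision, confirming the very different dependences of F_ω and F_λ on the decoherence factor.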
For step 2, we consider a general projective measurement on the spin-1/2 along an axis with polar angle Θ and azimuth Φ. This measurement on ρ(t) gives an outcome ±1 according to the probability distribution

p(±1) = (1/2)[1 ± cos Θ cos θ ± sin Θ sin θ W(t) cos(ωt − Φ)].    (8)

Here p(±1) as a function of ω has a period 2π/t, thus the measurement cannot distinguish ω and ω + 2πk/t (k = ±1, ±2, ⋯). This is the commonly encountered modulo-2π ambiguity problem in Hamiltonian parameter estimation. By contrast, p(±1) are usually not periodic in the noise parameters, so the modulo-2π ambiguity is absent. This highlights the second distinguishing feature of noise sensing.

Figure 2: Evolution of a spin-1/2 driven by the Hamiltonian in Eq. (4). The red arrows denote the initial and final spin orientation and the solid (dashed) blue line denotes the optimal measurement axis for estimating ω (any noise parameter λ).

Given the measurement distribution, we can compute the CFI from Eq. (33) and obtain (for θ = Θ = π/2)

F_ω^{(C)} = t² W²(t) sin²(ωt − Φ) / [1 − W²(t) cos²(ωt − Φ)],    (9a)
F_λ^{(C)} = [∂_λ W(t)]² cos²(ωt − Φ) / [1 − W²(t) cos²(ωt − Φ)].    (9b)

We set Θ = π/2 and Φ = ωt ± π/2 (Φ = ωt or ωt + π), so the CFI attains the QFI: F_ω^{(C)} = F_ω (F_λ^{(C)} = F_λ). Namely, the optimal measurement for estimating ω (λ) is along an axis in the x-y plane perpendicular (parallel) to the spin in the final state, as shown in Fig. 2. This highlights the third distinguishing feature of noise sensing.
For step 3, we adopt the maximum likelihood estimator (see Appendix C for details and Appendix D for an example) and leave the detailed numerical simulation to the next section.
Finally, we discuss how to optimize the evolution time t to maximize the QFI F_ω and F_λ. For any classical noise (including static and dynamical noises, Markovian and non-Markovian ones Kropf et al. (2016); Yang et al. (2017)), once the statistics of the noise is given, we can determine W(t) and hence F_ω and F_λ as functions of t, at least in principle. Thus the method described here can be used to sense an arbitrary classical noise, such as the abnormal static noises due to disorder averaging Kropf et al. (2016). Moreover, although the discussions above are restricted to a single spin-1/2 (or equivalently a qubit), the method can also be used to infer the properties of noises on a general quantum system Kropf et al. (2016). Compared with the spin-1/2 case, the difference is that the QFI should be calculated from Eq. (30) and the optimal measurement capable of converting all the QFI contained in the final density matrix of a general quantum system into the CFI is more complicated (see Appendix B).
Here for specificity we consider a widely used noise responsible for spin decoherence in electron spin baths de Lange et al. (2010); Dobrovitski et al. (2009); Witzel et al. (2012, 2014): the Ornstein–Uhlenbeck noise, which is a Gaussian noise characterized by the auto-correlation function

γ²⟨b(t)b(0)⟩ = σ² e^{−|t|/τ_c},

with σ (τ_c) the amplitude (memory time) of the noise. The Wick's theorem for Gaussian noises gives Yang et al. (2017)

W(t) = e^{−χ(t)},  χ(t) = σ²τ_c² (e^{−t/τ_c} + t/τ_c − 1).    (10)

For t ≪ τ_c, the spin decoherence is Gaussian: W(t) ≈ e^{−σ²t²/2}. For t ≫ τ_c, the spin decoherence is exponential on a time scale

T₂ = 1/(σ²τ_c).    (11)
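The two limits above follow directly from the decoherence exponent. A small sketch (our own transcription of the standard Gaussian-noise result χ(t) = σ²τ_c²(e^{−t/τ_c} + t/τ_c − 1); variable names are ours) verifies both regimes:

```python
import numpy as np

# Decoherence exponent chi(t) = -ln W(t) for Ornstein-Uhlenbeck dephasing,
# chi(t) = sigma^2 tau_c^2 (exp(-t/tau_c) + t/tau_c - 1)
# (a standard Gaussian-noise result; variable names are our own).

def chi(t, sigma, tau_c):
    x = t / tau_c
    return sigma**2 * tau_c**2 * (np.exp(-x) + x - 1.0)

sigma, tau_c = 1.0, 1.0
T2 = 1.0 / (sigma**2 * tau_c)   # long-time decay constant, Eq. (11)

# t << tau_c: Gaussian decay, chi(t) -> sigma^2 t^2 / 2
# t >> tau_c: exponential decay, chi(t) -> t/T2 - sigma^2 tau_c^2
```

Evaluating chi at t much smaller (larger) than tau_c reproduces the Gaussian (exponential) limit quoted in the text.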

Substituting Eq. (10) into Eq. (7) (at the optimal θ = π/2) gives the QFI's about ω, σ, and τ_c, respectively:

F_ω = t² W²(t),  F_σ = [∂_σ W(t)]² / [1 − W²(t)],  F_{τ_c} = [∂_{τ_c} W(t)]² / [1 − W²(t)],

where W(t) is given by Eq. (10) and the decoherence 1 − W(t) increases monotonically with t till saturation. With increasing evolution time t, all the QFI's first increase while the spin decoherence is small and then begin to decrease when the spin decoherence becomes significant. For t ≪ τ_c, the QFI's are given by

F_ω ≈ t² e^{−σ²t²},    (12a)
F_σ ≈ σ²t⁴ e^{−σ²t²} / (1 − e^{−σ²t²}),    (12b)
F_{τ_c} ≈ (σ⁴t⁶/36τ_c⁴) e^{−σ²t²} / (1 − e^{−σ²t²}).    (12c)
For t ≫ τ_c, the QFI's are given by

F_ω ≈ t² e^{−2t/T₂},    (13a)
F_σ ≈ (2t/σT₂)² e^{−2t/T₂} / (1 − e^{−2t/T₂}),    (13b)
F_{τ_c} ≈ (t/τ_c T₂)² e^{−2t/T₂} / (1 − e^{−2t/T₂}).    (13c)
Next, we discuss the optimization of the evolution time t to maximize the QFI.
For Markovian noises (i.e., τ_c ≪ T₂), the spin coherence decays exponentially on a time scale T₂, as shown in Fig. 3(a). Thus F_ω is well approximated by Eq. (13a), shown as the black dotted line in Fig. 3(b). With increasing evolution time t, F_ω first increases quadratically and then decays exponentially. At the optimal evolution time t = T₂ it reaches its maximum, as shown in Fig. 3(b). For noise sensing, F_σ and F_{τ_c} as functions of t differ from that of F_ω in that they exhibit three stages [Fig. 3(b)]. For t ≲ τ_c, F_σ and F_{τ_c} are strongly suppressed. For τ_c ≪ t ≪ T₂, F_σ and F_{τ_c} increase linearly with t. For t ≫ T₂, F_σ and F_{τ_c} decay exponentially with t. At the optimal evolution time t ≈ 0.8 T₂, F_σ and F_{τ_c} attain their maxima. For estimating ω, the optimal evolution time is independent of ω, thus adaptive measurements are not necessary. By contrast, for estimating the noise parameter σ (τ_c), the optimal evolution time depends on the parameter σ (τ_c) to be estimated, so adaptive measurements are crucial.
For non-Markovian noises (i.e., τ_c ≳ T₂), the Gaussian decay of the spin coherence and of the QFI's [Eq. (12)] becomes appreciable even in the short-time regime t ≪ τ_c, as shown in Fig. 3(c) and (d). In general, the peak location of the QFI as a function of t depends on both σ and τ_c, thus adaptive measurements are crucial for estimating σ and τ_c, as opposed to the estimation of ω.
The general analysis in this section has identified a series of distinguishing features of the sensing of noises compared with the estimation of deterministic magnetic fields, including the different dependences of the QFI's on the spin decoherence, the absence of the modulo-2π phase ambiguity, the different optimal measurement schemes, and the crucial role of adaptive measurements. In the next section, we perform numerical simulations to demonstrate the feasibility of adaptive measurements for improving the precision of noise sensing.
IV Adaptive estimation of the spin decoherence time

As a concrete demonstration, we consider the adaptive estimation of the spin decoherence time T₂ of a Markovian noise, using either a spin echo protocol or a free evolution protocol. In both protocols, the QFI about T₂ contained in the final state is

F_{T₂}(t) = (t/T₂²)² e^{−2t/T₂} / (1 − e^{−2t/T₂}),    (14)

where we have made its dependence on the evolution time t explicit. This spin echo technique Hahn (1950) can eliminate quasi-static noises (such as those from the surrounding nuclear spins of the host lattice) and single out the decoherence caused by the Markovian noise under consideration. Interestingly, it also eliminates the Larmor frequency ω, so that the final density matrix is independent of ω. Finally, we perform a projective measurement on the spin along the x axis. According to Sec. III, this measurement is optimal. Indeed, it gives an outcome ±1 according to the probability distribution

p(±1|T₂) = (1/2)(1 ± e^{−t/T₂}),    (15)

and the CFI in each outcome attains the QFI:

F(t) = (t/T₂²)² e^{−2t/T₂} / (1 − e^{−2t/T₂}) = F_{T₂}(t),    (16)

as shown in Fig. 4(a).

For free evolution, we simply let the spin evolve under the Hamiltonian in Eq. (4) for an interval t into the final state, i.e., Eq. (6) with θ = π/2 and W(t) = e^{−t/T₂}. This final state differs from that of the spin echo protocol in that it still depends sensitively on the Larmor frequency ω. Consequently, in order to measure T₂ from this free-evolution final state, precise knowledge about ω is usually necessary (to be discussed shortly), although the QFI about T₂ contained in this final state is still given by Eq. (14), i.e., the same as for the spin echo protocol. Finally, we should perform an optimal measurement to convert all the QFI into the CFI. According to Sec. III, the optimal measurement is a projective one along the azimuth Φ = ωt in the x-y plane (dashed blue line in Fig. 2), whose CFI is equal to the QFI in Eq. (14). However, in our adaptive measurement scheme (to be discussed shortly), the evolution time t and hence the measurement axis will vary in different measurement cycles. The frequent change of the measurement axis may complicate its experimental realization. To avoid this problem, we fix the measurement axis to be along the x axis (i.e., we always measure σ_x), then the measurement distribution is

p(±1|T₂) = (1/2)[1 ± e^{−t/T₂} cos(ωt)],    (17)

and the CFI in each outcome,

F(t) = (t/T₂²)² e^{−2t/T₂} cos²(ωt) / [1 − e^{−2t/T₂} cos²(ωt)],    (18)

shows rapid oscillations as a function of t, with its envelope coinciding with the QFI [see Fig. 4(b)]. In other words, the CFI still attains the QFI when ωt is an integer multiple of π, but does not attain the QFI for general t. Fortunately, we can still tune t to maximize the QFI and the CFI simultaneously.

Now we optimize the evolution time t. For spin echo, the optimal t is [see Fig. 4(a)]

t_opt ≈ 0.80 T₂,    (19)

the root of e^{2t/T₂}(1 − t/T₂) = 1. For free evolution, under the realistic assumption ω ≫ 1/T₂, the optimal t is the integer multiple of π/ω closest to 0.80 T₂ [see Fig. 4(b)]:

t_opt = (π/ω) × round(0.80 T₂ ω/π).    (20)

For both the spin echo protocol and the free evolution protocol, choosing t = t_opt gives the same maximal CFI and QFI,

F_max ≈ 0.16/T₂²,    (21)

and hence the same optimal sensing precision

δT₂ = 1/√(N F_max) ≈ 2.5 T₂/√N    (22)

for N repeated measurements. The difference is that in the spin echo (free evolution) protocol, the CFI is independent of (dependent on) the Larmor frequency ω. Specifically, in the free evolution protocol, the rapid oscillation of the CFI as a function of t with a period π/ω [see Fig. 4(b)] requires precise knowledge about ω and high control precision of t on the order of 1/ω to correctly locate the maximum [i.e., Eq. (20)] of the CFI. By contrast, in the spin echo protocol, the CFI as a function of t is independent of ω [see Fig. 4(a)], so it requires no knowledge about ω and only relatively low control precision of t on the order of T₂ (≫ 1/ω) to correctly locate the maximum of the CFI.
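The optimal evolution time and precision quoted above can be reproduced by a direct numerical maximization. The sketch below (our own; it assumes the spin echo CFI takes the reduced form F(t) = (t/T₂²)² e^{−2t/T₂}/(1 − e^{−2t/T₂}) used in this section) recovers t_opt ≈ 0.80 T₂ and F_max ≈ 0.16/T₂²:

```python
import numpy as np

# Locate the optimal evolution time for spin-echo estimation of T2, assuming
# p(+/-1|T2) = (1 +/- exp(-t/T2))/2 so that the CFI is
# F(t) = (t/T2^2)^2 exp(-2t/T2) / (1 - exp(-2t/T2)).  Units of T2 = 1.

def cfi_echo(t, T2=1.0):
    W2 = np.exp(-2.0 * t / T2)
    return (t / T2**2) ** 2 * W2 / (1.0 - W2)

t = np.linspace(1e-3, 5.0, 200001)
F = cfi_echo(t)
i = int(np.argmax(F))
t_opt, F_max = float(t[i]), float(F[i])
# t_opt ≈ 0.80 T2 and F_max ≈ 0.16/T2^2; the Cramer-Rao bound then gives
# dT2 >= 1/sqrt(N F_max) ≈ 2.5 T2/sqrt(N) for N repetitions.
```

The grid maximization agrees with the transcendental-equation root quoted for Eq. (19) to grid accuracy.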

Unfortunately, for both protocols, t_opt depends on the unknown parameter T₂. Due to this dependence, adaptive schemes that update t after each measurement cycle can significantly outperform non-adaptive ones. For each protocol, we consider three different measurement schemes involving different treatments of the evolution time t: repeated measurements, adaptive measurements, and the least-square fitting that is commonly used in experiments.

IV.1 Repeated measurement scheme

The evolution time t is fixed during the entire estimation process. After repeating the initialization-evolution-measurement cycle N times, we get N outcomes x₁, ⋯, x_N. Using these outcomes, we refine our knowledge about T₂ to the posterior distribution

p(T₂|x₁, ⋯, x_N) ∝ [p(+1|T₂)]^{N₊} [p(−1|T₂)]^{N₋},

where N₊ (N₋) is the number of outcomes +1 (−1) and p(±1|T₂) is given by Eq. (15) for spin echo and Eq. (17) for free evolution. Finally, we construct the maximum likelihood estimator T̂₂ and quantify its precision by [cf. Eq. (36)]

δT₂ ≡ [∫ (T₂ − T̂₂)² p(T₂|x₁, ⋯, x_N) dT₂]^{1/2}.    (23)

To analyze the performance of this scheme, we notice that for large N, the maximum likelihood estimator is known to be unbiased and can saturate the Cramér-Rao bound Eq. (3), so the sensing precision can be approximated by

δT₂ ≈ 1/√(N F(t)).    (24)

Here the CFI F(t) is given by Eq. (16) for spin echo and Eq. (18) for free evolution (see Fig. 4). The evolution time t directly determines the sensing precision, e.g., setting t = t_opt would lead to the optimal sensing precision in Eq. (22). However, t_opt is unknown because it depends on the unknown parameter T₂ to be estimated. This makes adaptive measurements crucial for achieving the optimal sensing precision.
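For the spin echo distribution, the maximum likelihood estimator of the repeated measurement scheme even has a closed form, since maximizing p₊^{N₊} p₋^{N₋} amounts to inverting the outcome probability. A minimal sketch (our own, assuming p(+1|T₂) = (1 + e^{−t/T₂})/2):

```python
import numpy as np

# Closed-form MLE for the repeated measurement scheme (spin echo), assuming
# p(+1|T2) = (1 + exp(-t/T2))/2.  Maximizing p+^N+ p-^N- over T2 gives
# exp(-t/T2_hat) = 2 N_plus/N - 1, i.e. the inverted empirical coherence.

def t2_mle(t, n_plus, n_total):
    w_hat = 2.0 * n_plus / n_total - 1.0   # empirical coherence W_hat
    if w_hat <= 0.0:
        return np.inf                      # decay not resolved within N shots
    return -t / np.log(w_hat)

# Example: N = 1000 shots at t = 1 with 800 outcomes +1 gives W_hat = 0.6
# and T2_hat = -1/ln(0.6) ≈ 1.958.
```

The guard against a non-positive empirical coherence reflects the fact that a finite sample may be consistent with no measurable decay at all.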

IV.2 Adaptive measurement schemes

The key idea is to use the outcomes of previous measurements to refine our knowledge about T₂ and then use this knowledge to optimize t. We consider two different adaptive schemes: the CFI-based scheme Olivares and Paris (2009); Brivio et al. (2010); Pang and Jordan (2017) and the locally optimal adaptive scheme Berry and Wiseman (2000); Berry et al. (2001); Said et al. (2011); Sergeevich et al. (2011), as introduced in Appendix E. The former updates the maximum likelihood estimator T̂₂ after every initialization-evolution-measurement cycle and then sets the evolution time to

t = 0.80 T̂₂    (25)

for the spin echo protocol or

t = (π/ω) × round(0.80 T̂₂ ω/π)    (26)

for the free evolution protocol. The latter optimizes t to minimize the expected uncertainty of the estimator at the end of the next cycle (see Appendix E). Suppose at the end of the kth cycle, our knowledge about T₂ is quantified by the distribution p_k(T₂) and the maximum likelihood estimator T̂₂^{(k)} constructed from the outcomes of all the previous cycles. In the (k+1)th cycle with the evolution time t, the measurement distribution p(±1|T₂) [Eq. (15) or Eq. (17)] depends on T₂ and t. If the outcome is ±1, then our knowledge would be refined to p_{k+1}^{(±)}(T₂) ∝ p(±1|T₂) p_k(T₂), which in turn gives the maximum likelihood estimator T̂₂^{(k+1,±)} and its uncertainty [cf. Eq. (36)]

δT₂^{(±)}(t) ≡ [∫ (T₂ − T̂₂^{(k+1,±)})² p_{k+1}^{(±)}(T₂) dT₂]^{1/2}.

Since the probability for the outcome ±1 is estimated as p(±1|T̂₂^{(k)}), we should choose t in the (k+1)th cycle to minimize the expected uncertainty [cf. Eq. (41)]

δT₂(t) ≡ Σ_{x=±1} p(x|T̂₂^{(k)}) δT₂^{(x)}(t).

For k = 0, i.e., the first cycle, there is no prior information, i.e., p₀(T₂) is a constant, so t is chosen randomly.
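The locally optimal scheme described above can be sketched with a grid posterior (our own minimal implementation, assuming the spin echo distribution; the grids, candidate times, and random seed are illustrative choices, not values from the simulations below):

```python
import numpy as np

# Minimal sketch of the locally optimal adaptive scheme (spin echo case),
# assuming p(+1|T2,t) = (1 + exp(-t/T2))/2.  A grid posterior over T2 is
# updated after each outcome, and the next evolution time t is chosen to
# minimize the expected posterior spread.  All grids are illustrative.

T2_grid = np.linspace(0.2, 5.0, 400)
t_candidates = np.linspace(0.05, 5.0, 100)

def p_plus(T2, t):
    """Probability of outcome +1 for the spin echo protocol."""
    return 0.5 * (1.0 + np.exp(-t / T2))

def spread(post):
    """Posterior standard deviation of T2."""
    m = np.sum(post * T2_grid)
    return np.sqrt(np.sum(post * (T2_grid - m) ** 2))

def next_time(post):
    """Pick t minimizing the expected posterior spread after one more outcome."""
    best_t, best_u = t_candidates[0], np.inf
    for t in t_candidates:
        pp = p_plus(T2_grid, t)
        post_p, post_m = post * pp, post * (1.0 - pp)
        wp, wm = post_p.sum(), post_m.sum()
        u = wp * spread(post_p / wp) + wm * spread(post_m / wm)
        if u < best_u:
            best_t, best_u = t, u
    return best_t

rng = np.random.default_rng(1)
T2_true = 1.0
post = np.full(T2_grid.size, 1.0 / T2_grid.size)   # flat prior (first cycle)
u0 = spread(post)
for _ in range(100):
    t = next_time(post)
    outcome_plus = rng.random() < p_plus(T2_true, t)
    post = post * (p_plus(T2_grid, t) if outcome_plus else 1.0 - p_plus(T2_grid, t))
    post /= post.sum()
est = T2_grid[np.argmax(post)]                     # maximum likelihood estimate
```

After each simulated outcome the posterior narrows, and the chosen evolution times drift towards the optimum determined by the current estimate, which is the essence of the adaptive speed-up.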

To analyze the performance, we notice that after a large number of adaptive steps, the estimator T̂₂ would approach the true decoherence time T₂. Consequently, according to Appendix E, the evolution time and hence the sensing precision for these two adaptive schemes would coincide with each other. In addition, the evolution time in Eqs. (25) and (26) would approach t_opt, so the corresponding sensing precision would approach the optimal precision in Eq. (22).

IV.3 Least-square fitting scheme

For a given range of the evolution time, we uniformly discretize it into n grids t_i = iΔt, where i = 1, ⋯, n. For each t_i, we repeat the measurement M times and calculate the average of the outcomes. Then we fit this average as a function of t_i to the theoretical curve e^{−t/T₂} (for the spin echo protocol) or e^{−t/T₂} cos(ωt) (for the free evolution protocol) to obtain an estimator T̂₂ of T₂. Finally, we repeat the procedures above many times to obtain many estimators and determine the uncertainty of a single estimator as the square root of the statistical variance of these estimators, i.e.,

δT₂ = [⟨(T̂₂ − ⟨T̂₂⟩)²⟩]^{1/2},

where ⟨⋯⟩ denotes the average over all the estimators obtained.

According to the Cramér-Rao bound in Eq. (3), the sensing precision of this scheme can be roughly estimated as

δT₂^{LS} ≈ 1/√(N_tot F̄),

where N_tot = nM is the total number of measurements and

F̄ ≡ (1/n) Σ_{i=1}^{n} F(t_i)

is the average of the CFI over the range of evolution times. For spin echo, F(t) in Eq. (16) decays exponentially for large t [see Fig. 4(a)], so extending the fitting range well beyond the peak of F(t) makes F̄ substantially smaller than F_max, thus

δT₂^{LS} ≈ 1/√(N_tot F̄) > 1/√(N_tot F_max)    (27)

degrades monotonically with increasing fitting range. For free evolution, F(t) in Eq. (18) shows rapid oscillations as a function of t with an envelope coinciding with the CFI for spin echo [see Fig. 4(b)]. The average F̄ is then about half that of the spin echo, thus δT₂^{LS} also degrades with increasing fitting range.
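The fitting step itself is elementary for noiseless synthetic averages: since ⟨x⟩(t) = e^{−t/T₂} for spin echo, a linear fit of ln⟨x⟩ versus t yields −1/T₂. A minimal sketch (our own; real data would carry shot noise from the M repetitions per grid point):

```python
import numpy as np

# Least-square fitting sketch (spin echo): the average outcome decays as
# <x>(t) = exp(-t/T2), so a linear fit of ln<x> versus t gives slope -1/T2.
# Here we fit noiseless synthetic averages, which recovers T2 exactly.

T2_true = 1.0
t_grid = np.linspace(0.2, 3.0, 15)       # illustrative grid of evolution times
avg = np.exp(-t_grid / T2_true)          # ideal averages (no shot noise)

slope, intercept = np.polyfit(t_grid, np.log(avg), 1)
T2_fit = -1.0 / slope
```

With shot noise included, the precision of T2_fit is governed by the averaged CFI discussed above, which is why this scheme falls short of the adaptive ones.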

IV.4 Numerical simulations

Figure 5: Numerical simulation for the estimation of the spin decoherence time by spin echo. (a) Estimation precision by least-square fitting (red line), CFI-based adaptive sensing (green line) and locally optimal adaptive sensing (blue line). The black (gray) dotted line indicates the optimal precision in Eq. (22) [the least-square fitting precision in Eq. (27)]. (b) Successive refinement of the evolution time t in CFI-based (green line) and locally optimal (blue line) adaptive sensing.

In the above, we have presented two protocols to measure the spin decoherence time T₂: the spin echo protocol and the free evolution protocol. For each protocol, we consider three kinds of schemes, which involve different treatments of the evolution time t: (i) The repeated measurement scheme uses a fixed t; (ii) The two adaptive measurement schemes update t in every measurement cycle; (iii) The least-square fitting scheme scans t over a fixed range. The spin echo protocol singles out the decoherence from all the other unwanted evolution, so the measurement distribution is independent of the Larmor frequency ω. Consequently, all the schemes applied to the spin echo protocol require no knowledge about ω and a relatively low control precision (on the order of T₂) over t. By contrast, the free evolution protocol leaves the Larmor precession intact, so the measurement distribution [see Eq. (17)] depends sensitively on ω. Consequently, all the schemes applied to this protocol require precise knowledge about ω and a much higher (on the order of 1/ω) control precision over t. Specifically: (i) In the repeated measurement scheme, since the posterior distribution depends on ω, we cannot find the maximum likelihood estimator if ω is unknown; (ii) In the CFI-based adaptive scheme, we cannot set t according to Eq. (26) if ω is unknown; in the locally optimal adaptive scheme, the expected uncertainty depends on both T₂ and ω, so we cannot find its minimum as a function of t if ω is unknown; (iii) In the least-square fitting scheme, it would be difficult to choose the grid spacing and to fit the measurement data to e^{−t/T₂} cos(ωt) to extract T₂ when ω is unknown. Therefore, the spin echo protocol is advantageous over the free evolution protocol if our knowledge about ω is limited or the available control precision over t is low.

In all our numerical simulations, we take the true value of T₂ as the unit of time, i.e., T₂ = 1, and fix the true value of the Larmor frequency ω.

To begin with, we check the sensing precision of the repeated measurement scheme applied to the spin echo protocol and the free evolution protocol. We consider three different choices of the evolution time t. For each case, our numerical simulations show that with increasing number N of repeated measurements, the uncertainty calculated from Eq. (23) gradually approaches the large-N limit [Eq. (24)] for both protocols, i.e., the achieved precision agrees well with the CFI F(t), as shown in Figs. 4(a) and 4(b).

Next, we consider the spin echo protocol and compare the adaptive schemes with the commonly used least-square fitting scheme. As shown in Fig. 5(a), the precision of the least-square fitting is well approximated by Eq. (27), which is significantly worse than the optimal sensing precision in Eq. (22). By contrast, the precision of both the CFI-based adaptive scheme and the locally optimal adaptive scheme approaches the optimal precision after a modest number of measurements. Physically, this is because both adaptive schemes successively adjust the evolution time t [e.g., Eqs. (25) and (26) for the CFI-based adaptive scheme applied to the spin echo protocol and the free evolution protocol] based on the newest knowledge about the unknown parameter T₂ after every measurement. As shown in Fig. 5(b), after a modest number of measurements, the evolution time in both adaptive schemes already approaches the optimal evolution time [Eq. (19)].

Figure 6: Numerical simulation for the estimation of the spin decoherence time by free evolution. (a) Estimation precision by CFI-based adaptive sensing (green line) and locally optimal adaptive sensing (blue line). The black dotted line indicates the optimal sensing precision in Eq. (22). (b) Successive refinement of the evolution time in CFI-based (green lines) and locally optimal (blue lines) adaptive sensing.

Then, we turn to the free evolution protocol, which requires precise knowledge about ω and precise control over t on the order of 1/ω. Since the average spin exhibits rapid oscillations as a function of the evolution time t, the best way to do least-square fitting is to let the grid spacing Δt be an integer multiple of π/ω, so that t_i samples the envelope of the oscillating curve only. This also amounts to sampling the envelope of the rapidly oscillating CFI in Eq. (18), so that F(t_i) attains the corresponding QFI. Therefore, the least-square fitting scheme applied to the free evolution protocol would give the same precision as it does for the spin echo protocol, so we do not simulate this case. For the two adaptive schemes applied to the free evolution protocol, as shown in Fig. 6(a), both the CFI-based one and the locally optimal one approach the optimal sensing precision after a modest number of measurements, similar to the case of the spin echo protocol. As shown in Fig. 6(b), the adaptive schemes successively refine the evolution time [e.g., Eq. (26)] based on the newest knowledge about T₂ after every measurement. After a modest number of measurements, the evolution time in both adaptive schemes approaches the optimal evolution time [Eq. (20)].

V Conclusion

Using localized spins as ultrasensitive quantum sensors is attracting widespread interest. Recently, adaptive measurements were used to improve the dynamic range for the spin-based estimation of deterministic Hamiltonian parameters such as the external magnetic field. Here we explore a very different direction – the use of adaptive measurements in spin-based sensing of random noises. We have performed a general analysis that identifies a series of important differences between noise sensing and the estimation of deterministic magnetic fields, such as the different dependences on the spin decoherence, the different optimal measurement schemes, the absence of the modulo-2π phase ambiguity, and the crucial role of adaptive measurements. We have also performed numerical simulations that clearly demonstrate a significant speed-up of the characterization of the spin decoherence time via adaptive measurements compared with the commonly used least-square fitting method. This work paves the way towards adaptive noise sensing.

Acknowledgements.
This work was supported by the MOST of China (Grant No. 2014CB848700), the National Key R&D Program of China (Grant No. 2017YFA0303400), the NSFC (Grant No. 11774021), and the NSFC program for “Scientific Research Center” (Grant No. U1530401). We acknowledge the computational support from the Beijing Computational Science Research Center (CSRC).

Appendix A State preparation and encoding: quantum Fisher information

The amount of information about an unknown parameter $\theta$ contained in a general $\theta$-dependent quantum state $\rho_\theta$ is quantified by its QFI Braunstein and Caves (1994)

$F_Q(\theta) = \mathrm{Tr}(\rho_\theta L_\theta^2),$ (28)

where $L_\theta$ is the so-called symmetric logarithmic derivative operator: it is a Hermitian operator defined through Helstrom (1976)

$\partial_\theta \rho_\theta = \frac{1}{2}\left(L_\theta \rho_\theta + \rho_\theta L_\theta\right).$

The QFI defined in Eq. (28) remains invariant under any $\theta$-independent unitary transformation, i.e., such transformations conserve the quantum information. For a pure state $\rho_\theta = |\psi_\theta\rangle\langle\psi_\theta|$, we have $L_\theta = 2\partial_\theta \rho_\theta$ and hence

$F_Q(\theta) = 4\left(\langle\partial_\theta\psi_\theta|\partial_\theta\psi_\theta\rangle - |\langle\psi_\theta|\partial_\theta\psi_\theta\rangle|^2\right) = 4(\Delta H)^2,$ (29)

where the last step applies to unitary evolution $|\psi_\theta\rangle = e^{-i\theta H}|\psi\rangle$ and $\Delta H$ is the root-mean-square fluctuation of the generator $H$ in the initial state $|\psi\rangle$. For a general mixed state with the spectral decomposition $\rho_\theta = \sum_n p_n |\psi_n\rangle\langle\psi_n|$, its QFI is Knysh et al. (2011); Zhang et al. (2013); Liu et al. (2013)

$F_Q(\theta) = \sum_n \frac{(\partial_\theta p_n)^2}{p_n} + \sum_n p_n F_Q^{(n)} - \sum_{n\neq m} \frac{8 p_n p_m}{p_n + p_m}\left|\langle\psi_n|\partial_\theta\psi_m\rangle\right|^2,$ (30)

where $p_n$ are the nonzero eigenvalues of $\rho_\theta$, $|\psi_n\rangle$ are the corresponding ortho-normalized eigenstates, and $F_Q^{(n)}$ is the QFI of the pure state $|\psi_n\rangle$ [see Eq. (29)]. This expression shows that the QFI of a non-full-rank state is completely determined by its support, i.e., the subset of $\{|\psi_n\rangle\}$ with nonzero eigenvalues. For a two-level system, its density matrix can always be expressed in terms of the Pauli matrices as $\rho_\theta = (1 + \mathbf{r}_\theta\cdot\boldsymbol{\sigma})/2$, where $\mathbf{r}_\theta$ is the Bloch vector. The QFI for such a state is Dittmann (1999); Zhong et al. (2013); Li et al. (2015)

$F_Q(\theta) = |\partial_\theta\mathbf{r}_\theta|^2 + \frac{(\mathbf{r}_\theta\cdot\partial_\theta\mathbf{r}_\theta)^2}{1 - |\mathbf{r}_\theta|^2},$ (31)

where the second term is absent when $|\mathbf{r}_\theta| = 1$, i.e., when $\rho_\theta$ is a pure state. When $\rho_\theta$ is the direct product state of multiple quantum systems, its QFI is additive: $F_Q = \sum_k F_Q^{(k)}$, where $F_Q^{(k)}$ is the QFI of the $k$th subsystem.
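As a quick numerical illustration of Eq. (31), the sketch below (our own toy example, not from the paper) evaluates the qubit QFI from the Bloch vector of a dephasing spin, assuming an exponential coherence envelope $e^{-t/T_2}$ and taking the decoherence time $T_2$ as the parameter to be estimated; the analytic derivative of the Bloch vector is cross-checked against a finite difference.

```python
import numpy as np

def qfi_bloch(r, dr):
    """Qubit QFI from the Bloch vector r and its derivative dr = dr/dtheta, Eq. (31)."""
    r2 = float(np.dot(r, r))
    F = float(np.dot(dr, dr))
    if r2 < 1.0:                        # the second term is absent for a pure state
        F += float(np.dot(r, dr)) ** 2 / (1.0 - r2)
    return F

# Dephasing spin (assumed model): Bloch vector r = (exp(-t/T2), 0, 0), parameter theta = T2
t, T2 = 5.0, 10.0
r  = np.array([np.exp(-t / T2), 0.0, 0.0])
dr = np.array([(t / T2**2) * np.exp(-t / T2), 0.0, 0.0])   # analytic d r / d T2

# cross-check the analytic derivative with a finite difference
h = 1e-6
dr_fd = (np.array([np.exp(-t / (T2 + h)), 0.0, 0.0]) - r) / h

print(qfi_bloch(r, dr), qfi_bloch(r, dr_fd))   # the two values agree
```

Because the Bloch vector here lies along a single axis, Eq. (31) reduces to $F_Q = (\partial_{T_2} r)^2/(1 - r^2)$, which the function reproduces.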

Physically, the QFI measures the rate of variation of $\rho_\theta$ with the parameter $\theta$; e.g., if we regard $\rho_\theta$ and $L_\theta$ as classical variables, then $L_\theta = \partial_\theta\rho_\theta/\rho_\theta$ and Eq. (28) becomes the average of $(\partial_\theta\ln\rho_\theta)^2$ over the state $\rho_\theta$. Moreover, the Bures distance between two quantum states $\rho_1$ and $\rho_2$ is defined as Bures (1969)

$D_B^2(\rho_1,\rho_2) = 2 - 2F(\rho_1,\rho_2),$

where the second term on the right-hand side contains the so-called Uhlmann fidelity $F(\rho_1,\rho_2) = \mathrm{Tr}\sqrt{\sqrt{\rho_1}\,\rho_2\sqrt{\rho_1}}$ Uhlmann (1976). For neighboring states $\rho_\theta$ and $\rho_{\theta+d\theta}$, the Bures distance reduces to

$D_B^2(\rho_\theta,\rho_{\theta+d\theta}) = \frac{1}{4} F_Q(\theta)\, d\theta^2,$

so the QFI measures the distinguishability between two neighboring states parametrized by $\theta$.

The importance of the QFI for parameter estimation is manifested in the inequalities Eqs. (1) and (3). Namely, given $\rho_\theta$ and hence $F_Q(\theta)$, the precision of any unbiased estimator $\theta_{\rm est}$ from $N$ repetitions of any measurement is limited by the inequality

$\delta\theta^2 \ge \frac{1}{N F_Q(\theta)},$ (32)

known as the quantum Cramér-Rao bound Helstrom (1976); Braunstein and Caves (1994). Saturating this bound requires saturating Eqs. (1) and (3) simultaneously, i.e., using optimal measurements to convert all the QFI into the CFI and using optimal unbiased estimators to convert all the CFI into the precision of the estimator.

Appendix B Measurement: classical Fisher information

A general measurement with discrete outcomes $\mu$ is described by the positive-operator valued measure (POVM) elements $\{\hat{E}_\mu\}$ satisfying the completeness relation $\sum_\mu \hat{E}_\mu = 1$. Given a quantum state $\rho_\theta$, it yields an outcome $\mu$ according to the probability distribution $p(\mu|\theta) = \mathrm{Tr}(\rho_\theta\hat{E}_\mu)$ that depends on $\theta$. The amount of information about $\theta$ contained in each outcome is quantified by the CFI Kay (1993):

$F(\theta) = \sum_\mu \frac{[\partial_\theta p(\mu|\theta)]^2}{p(\mu|\theta)}.$ (33)

For continuous outcomes, we need only replace $\sum_\mu$ by $\int d\mu$ everywhere. Physically, the CFI quantifies the dependence of the measurement distribution $p(\mu|\theta)$ on the parameter $\theta$. Actually, the Wootters' distance Wootters (1981) between two probability distributions $p_1(\mu)$ and $p_2(\mu)$ is

$D_W(p_1,p_2) = \arccos\sum_\mu\sqrt{p_1(\mu)\,p_2(\mu)}.$

For neighboring distributions $p(\mu|\theta)$ and $p(\mu|\theta+d\theta)$, the Wootters' distance reduces to

$D_W^2 = \frac{1}{4} F(\theta)\, d\theta^2,$

so the CFI measures the distinguishability between neighboring measurement distributions parametrized by $\theta$.

Since the probability distribution function is the classical counterpart of the quantum mechanical density matrix, the CFI (Wootters' distance) is the classical counterpart of the QFI (Bures distance). The inequality Eq. (1) expresses the simple fact that no new information about $\theta$ can be generated in the measurement process: optimal (non-optimal) measurements convert all (part) of the QFI into the CFI. Given $\rho_\theta$, the optimal measurement is not unique. The projective measurement on the symmetric logarithmic derivative operator $L_\theta$ has been identified Braunstein and Caves (1994) as an optimal measurement, but $L_\theta$ depends on the parameter $\theta$, which is not known. To circumvent this problem, the simplest way is to find other optimal measurements that do not depend on $\theta$. Another solution Barndorff-Nielsen and Gill (2000) is to approximate $L_\theta$ by $L_{\check{\theta}}$, where $\check{\theta}$ is our best guess of $\theta$, i.e., the optimal unbiased estimator, as we discuss below.
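The statement that optimal (non-optimal) measurements convert all (part) of the QFI into the CFI can be made concrete with a toy dephasing-qubit example (our own illustration, with an assumed exponential coherence envelope, not the paper's protocol): measuring along the Bloch-vector direction ($\sigma_x$ here) attains the QFI of Eq. (31), whereas measuring $\sigma_z$ yields a $T_2$-independent distribution and hence zero CFI.

```python
import numpy as np

t, T2, h = 5.0, 10.0, 1e-6

def coherence(T2):
    """Bloch-vector length along x for a dephasing spin (assumed exponential envelope)."""
    return np.exp(-t / T2)

r = coherence(T2)
dr = (coherence(T2 + h) - coherence(T2)) / h   # d r / d T2 (finite difference)

# QFI from Eq. (31) for a Bloch vector along x: F_Q = (dr)^2 / (1 - r^2)
F_Q = dr**2 / (1 - r**2)

def cfi_binary(p, dp):
    """CFI of a two-outcome measurement, Eq. (33)."""
    return dp**2 / p + dp**2 / (1 - p)

F_x = cfi_binary((1 + r) / 2, dr / 2)   # sigma_x measurement: attains the QFI
F_z = cfi_binary(0.5, 0.0)              # sigma_z measurement: p is T2-independent

print(F_Q, F_x, F_z)
```

Algebraically, $F_x = (dr/2)^2[1/p + 1/(1-p)] = dr^2/(1-r^2) = F_Q$, so the $\sigma_x$ measurement is an optimal, $\theta$-independent measurement for this model.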

Appendix C Data processing: optimal unbiased estimators

Given the measurement distribution $p(\mu|\theta)$ and hence the CFI $F(\theta)$ of each outcome, the precision of any unbiased estimator constructed from the outcomes of $N$ repeated measurements is limited by the Cramér-Rao bound Eq. (3), which expresses the simple fact that no new information about $\theta$ can be generated in the data processing: optimal (non-optimal) unbiased estimators convert all (part) of the CFI into the useful information quantified by the precision $\delta\theta^2$. Finding optimal unbiased estimators is an important step in parameter estimation. In the limit of large $N$, two kinds of estimators are known to be unbiased and optimal: the maximum likelihood estimator and the Bayesian estimator Kay (1993), as we introduce now.

Before any measurements, our prior knowledge about the unknown parameter $\theta$ is quantified by a certain probability distribution $p(\theta)$; e.g., a $\delta$-like distribution corresponds to knowing $\theta$ exactly, a flat distribution corresponds to no knowledge about $\theta$ at all, while a Gaussian distribution with mean $\bar{\theta}$ and standard deviation $\sigma$ corresponds to knowing $\theta$ to be $\bar{\theta}$ with a typical uncertainty $\sigma$.

Upon getting the first outcome $u_1$, our knowledge about $\theta$ is immediately refined from $p(\theta)$ to

$p(\theta|u_1) = \frac{p(u_1|\theta)\, p(\theta)}{\mathcal{N}_1}$

according to the Bayesian rule von Toussaint (2011), where $\mathcal{N}_1$ is a normalization factor ensuring $p(\theta|u_1)$ is normalized to unity: $\int p(\theta|u_1)\, d\theta = 1$. Here $p(\theta|u_1)$ is the posterior probability distribution of $\theta$ conditioned on the outcome of the measurement being $u_1$: its parametric dependence on $u_1$ means that different measurement outcomes lead to different refinements of our knowledge about $\theta$.

Upon getting the second outcome $u_2$, our knowledge is immediately refined from $p(\theta|u_1)$ to

$p(\theta|u_1 u_2) = \frac{p(u_2|\theta)\, p(\theta|u_1)}{\mathcal{N}_2},$

where $\mathcal{N}_2$ is a normalization factor for the posterior distribution $p(\theta|u_1 u_2)$. If we omit the trivial normalization factors, then the measurement-induced knowledge refinement becomes

$p(\theta) \rightarrow p(u_1|\theta)\, p(\theta) \rightarrow p(u_2|\theta)\, p(u_1|\theta)\, p(\theta) \rightarrow \cdots.$

Upon getting $N$ outcomes $u_1, u_2, \cdots, u_N$, our knowledge about $\theta$ is quantified by the posterior distribution

$p(\theta|u_1 u_2 \cdots u_N) \propto p(\theta) \prod_{k=1}^{N} p(u_k|\theta)$

up to a trivial normalization factor, where $p(u_k|\theta)$ is the probability for getting the outcome $u_k$. The posterior distribution completely describes our state of knowledge about $\theta$. Nevertheless, sometimes a single number, i.e., an unbiased estimator, is required as the best guess of $\theta$. There are two well-known estimators: the maximum likelihood estimator Kay (1993)

$\theta_{\rm MLE} = \arg\max_\theta\, p(\theta|u_1 u_2 \cdots u_N)$ (34)

is the peak position of $p(\theta|u_1 u_2 \cdots u_N)$ as a function of $\theta$, while the Bayesian estimator Kay (1993)

$\theta_{\rm BE} = \int \theta\, p(\theta|u_1 u_2 \cdots u_N)\, d\theta$ (35)

is the average of $\theta$ over the posterior distribution. For large $N$, both estimators are unbiased and optimal: $\langle\theta_{\rm est}\rangle = \theta$ and $\delta\theta^2 \rightarrow 1/[N F(\theta)]$, where $\theta_{\rm est} = \theta_{\rm MLE}$ or $\theta_{\rm BE}$, $\langle\cdots\rangle$ denotes the average over a large number of estimators obtained by repeating the $N$-outcome estimation scheme many times, and $\delta\theta^2$ is defined as Eq. (2) or

$\delta\theta^2 \equiv \langle(\theta_{\rm est} - \theta)^2\rangle.$ (36)

For a simple understanding, we consider $N \rightarrow \infty$, so the number of occurrences of a specific outcome $\mu$ approaches $N p(\mu|\theta_{\rm true})$, with $\theta_{\rm true}$ the true value of the parameter. Then, up to a trivial normalization factor, the posterior distribution approaches

$p(\theta|u_1 \cdots u_N) \propto p(\theta) \prod_\mu p(\mu|\theta)^{N p(\mu|\theta_{\rm true})},$

which exhibits a sharp peak at $\theta = \theta_{\rm true}$. For large $N$, $p(\theta|u_1 \cdots u_N)$ is nonzero only in the vicinity of $\theta_{\rm true}$. This justifies a Taylor expansion around $\theta_{\rm true}$, leading to the Gaussian form with a standard deviation $1/\sqrt{N F(\theta_{\rm true})}$.
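This large-$N$ picture can be checked numerically. The sketch below (our own illustration, assuming a simple Bernoulli measurement model $p(1|\theta) = \theta$ and a flat prior; all names are ours) builds the posterior from $N$ simulated outcomes and compares its width with the Cramér-Rao prediction $1/\sqrt{N F(\theta)}$, where $F(\theta) = 1/[\theta(1-\theta)]$ is the CFI of a single outcome for this model.

```python
import numpy as np

rng = np.random.default_rng(0)

theta_true, N = 0.3, 20000               # Bernoulli parameter and number of outcomes
grid = np.linspace(1e-3, 1 - 1e-3, 4000)
dx = grid[1] - grid[0]

# simulate N binary outcomes with p(1|theta_true) = theta_true
k = int((rng.random(N) < theta_true).sum())

# posterior for a flat prior: log p(theta|u_1...u_N) = k log(theta) + (N-k) log(1-theta)
logpost = k * np.log(grid) + (N - k) * np.log(1 - grid)
post = np.exp(logpost - logpost.max())
post /= post.sum() * dx                  # normalize to unity

mean = (grid * post).sum() * dx
std = np.sqrt(((grid - mean) ** 2 * post).sum() * dx)

# Cramér-Rao width: 1/sqrt(N F) with F = 1/[theta(1-theta)] for one Bernoulli outcome
crb = np.sqrt(theta_true * (1 - theta_true) / N)
print(std, crb)                          # the posterior width matches the prediction
```

The posterior is sharply peaked near $\theta_{\rm true}$ and its standard deviation agrees with $1/\sqrt{N F(\theta_{\rm true})}$ to within the statistical fluctuation of the simulated data.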