Towards tests of quark-hadron duality with functional analysis and spectral function data


Diogo Boito
Instituto de Física de São Carlos, Universidade de São Paulo, CP 369, 13560-970, São Carlos, SP, Brazil

Irinel Caprini
Horia Hulubei National Institute for Physics and Nuclear Engineering, P.O.B. MG-6, 077125 Magurele, Romania
Abstract

The presence of terms that violate quark-hadron duality in the expansion of QCD Green's functions is a generally accepted fact. Recently, a new approach was proposed for the study of duality violations (DVs), which exploits the existence of a rigorous lower bound on the functional distance, measured in a certain norm, between a “true” correlator and its approximant calculated theoretically along a contour in the complex energy plane. In the present paper we pursue the investigation of functional-analysis based tests towards their application to real spectral function data. We derive a closed analytic expression for the minimal functional distance based on the general weighted L² norm and discuss its relation to the distance measured in the L∞ norm. Using fake data sets obtained from a realistic toy model, in which we allow for covariances inspired by the publicly available ALEPH spectral functions, we obtain by Monte Carlo simulations the statistical distribution of the strength parameter that measures the magnitude of the DV term added to the usual operator product expansion (OPE). The results show that, if the region with large errors near the end-point of the spectrum in τ decays is excluded, the functional-analysis based tests using either the L² or the L∞ norm are able to detect, in a statistically significant way, the presence of DVs in realistic spectral function pseudodata.


July 20, 2019

1 Introduction

The presence of additional terms in the QCD Green's functions, beyond those generated by the OPE (understood as perturbation theory plus power corrections), is a generally accepted fact, with support both from theory and phenomenology. According to the standard terminology [1, 2, 3], these terms are said to violate quark-hadron duality. We recall that, in its conventional sense, quark-hadron duality assumes that the description in terms of the OPE, valid on the Euclidean axis and at complex energies, can be analytically continued to match the description in terms of hadrons, which live on the Minkowski axis.

DVs are supposed to arise from contributions of internal lines with soft momenta in the Feynman diagrams, which are not included in the OPE. Their calculation from first principles is, at least at present, impossible. Quantitative understanding must be based on realistic models, whose main features have been tested against experimental data. Two types of specific mechanisms have been suggested, one considering quarks in an instanton field [1, 2, 3], the other based on narrow-resonance saturation in the large-N_c limit [3, 4, 5, 6, 7]. Both mechanisms manifest themselves as exponentially suppressed terms on the spacelike axis, which exhibit oscillations when analytically continued to the timelike axis. More formal arguments in favour of the existence of DVs are provided by ideas of resurgence and the associated trans-series [8]. The assumption that the OPE, expected to be a divergent expansion with growing large-order coefficients, is actually an asymptotic series also leads naturally to the presence of additional exponentially suppressed contributions [9]. However, beyond these somewhat general arguments, no detailed dynamical calculation of the additional contributions present in the theoretical expression of the Green's functions is available.

The phenomenological extraction of DVs is far from trivial, because one must detect terms exponentially suppressed as a function of energy, while an infinity of logarithmically and power suppressed terms, larger in principle, are neglected in the standard truncated expansions of the Green's functions. Since these expansions are actually divergent, the magnitude of the neglected terms can be quite substantial. Moreover, as mentioned above, the confrontation between theory and experiment implies an analytic continuation in the complex energy plane, with its known instabilities and pitfalls. Analyticity is usually exploited by means of a Cauchy integral relation along a contour in the complex plane for the QCD polarization amplitude of interest multiplied by a suitable weight. This allows one to build sum rules that relate the integrated theoretical predictions on the contour to weighted integrals over the spectral function data on the positive Minkowski axis. The weight is chosen such as to enhance or to suppress the contribution of various terms in the theoretical expression of the amplitude. The impact of DVs on practical calculations is therefore sensitive to the weights that are employed and varies depending on the quantity of interest. When extracting QCD parameters, for example, different weight functions have been advocated. In some works, DVs are explicitly taken into account by means of realistic parametrizations [10, 11, 12, 13, 14, 15], which allows for a quantitative control of their contribution, while in others DVs are ignored on the basis of their suppression by the weight functions employed [16, 17, 18]. The reliability of the different approaches is still being questioned [18, 19] and, therefore, a better understanding of DVs would certainly contribute to the precision with which QCD parameters can be extracted. This is particularly true for the determination of the strong coupling α_s from the hadronic τ decay spectral functions.

In the recent paper [20], a method based on functional analysis was proposed in order to test the presence of DVs in QCD. The method starts from the obvious remark that the “true” polarization amplitude and its approximate theoretical expression are entirely different functions, with different analytic properties, which cannot coincide in the complex energy plane. Moreover, defining a functional distance, measured in a certain norm, between these two functions along a contour in the complex plane, a rigorous nonzero lower bound on this distance can be shown to exist. In particular, for the functional distances defined in the L² and L∞ norms, the lower bound can be calculated by an explicit algorithm involving the QCD approximant in the complex plane and an infinity of Fourier coefficients (“moments”) obtained from the spectral function measured experimentally on a part of the timelike axis.

As argued in Ref. [20], the minimal distance between the true function and its approximant can be used as a tool for detecting the presence of DVs. In particular, from the variation of the minimal distance with respect to a parameter that measures the strength of the duality-violating contribution, one can infer the optimal value of this parameter. Formulated in this way, the problem becomes analogous to the search for new physics beyond the standard model (SM) in experiments at very high energies, where one tests for the presence of new physics through a “strength parameter” of the signal, while treating the SM as a background. In our case, the “new physics” is represented by the DV terms, while the OPE is the background representing the “known physics”.

The application of these ideas to a toy model proposed in Ref. [6] indicated that the new approach might be useful for detecting the presence of DVs in QCD. The asymptotic expansion of the exact model contains, besides a purely perturbative term and an expansion identified with higher-dimension terms in the OPE, an additional term that can be interpreted as a DV contribution. The minimal functional distance defined in Ref. [20], calculated with the spectral function of the model and a truncated OPE to mimic the physical cases, displayed a sharp minimum at the true value of the strength of the DV term. In particular, the functional distance measured in the L∞ norm proved to be more sensitive to the variation of the parameter than the distance measured in the L² norm. However, the effect of the experimental uncertainties inherent in the spectral function used as input was only barely touched upon in Ref. [20]. A detailed investigation of this aspect is crucial for assessing the usefulness of the method to detect DVs from real data. In the present paper we address precisely this problem.

We consider the same toy model proposed in Ref. [6], assuming now that the spectral function is measured only in a finite number of bins, with uncertainties and correlations similar to those reported in real experiments on hadronic τ decays. It turns out that a statistical interpretation of the minimal distances defined by functional analysis is difficult to establish theoretically. Therefore, we perform an empirical study based on pseudodata, where fake data on the spectral function are generated in a number of bins, with a multivariate Gaussian distribution whose covariances are inferred from the ALEPH covariance matrix for the vector channel [16]. The statistical distribution of the parameter that measures the magnitude of the DV term added to the usual OPE is then derived by Monte Carlo simulations, allowing the extraction of a mean and a standard deviation. The aim is to establish whether the method is able to detect, in a statistically significant way, the presence of DVs from error-affected experimental measurements. We also compare the procedures based on the L² and L∞ norms and establish which is the most efficient tool when the uncertainties of the spectral function are taken into account. In the process, we give closed analytic expressions for the functional distances in a generalized weighted norm that interpolates almost exactly between L² and L∞.

The plan of the paper is as follows. We start, in Sec. 2, with a brief review of the approach proposed in Ref. [20], defining the minimal functional distances in the L² and L∞ norms and presenting the algorithms for their calculation. Sec. 3 contains two new mathematical developments important for applications: in subsection 3.1 we prove that the minimal distance based on the general weighted L² norm can be written down in a closed analytic form, and in subsection 3.2 we derive a suitable approximation of the minimal distance based on the L∞ norm by a class of weighted L² norms. In Sec. 4 we briefly review the toy model and describe the data generation with ALEPH-based covariances. Sec. 5 contains our main results and Sec. 6 is devoted to our conclusions.

2 Theoretical framework

We begin with a short presentation of the work performed in Ref. [20]. The main idea is to quantify the difference, along a contour in the complex plane, between the QCD description Π_th(s) of a correlator of light-quark currents and its true value Π(s). By QCD description one understands the perturbative part, the contribution of the OPE condensates and possible duality violations:

Π_th(s) = Π_OPE(s) + Π_DV(s),   (1)

where Π_OPE(s) encompasses both the purely perturbative (dimension D = 0) contribution and the power corrections.

Figure 1: The contour in the complex s-plane.

For definiteness the contour was taken as the circle |s| = s_0 shown in Fig. 1, where s_0 is sufficiently large such that the QCD approximant is valid. Measuring the distance by the L∞ norm [21], we consider the quantity

δ ≡ sup_{|s| = s_0} |Π(s) − Π_th(s)|,   (2)

where Π(s) is the “true”, physical function, known to be analytic in the complex s-plane cut along the real axis for s ≥ s_th, which satisfies the Schwarz reflection condition Π(s*) = Π(s)*. In addition, its discontinuity, proportional to the spectral function

σ(s) ≡ (1/π) Im Π(s + iε),   (3)

is known experimentally on a limited energy range, s_th ≤ s ≤ s_0.

The exact value of δ cannot be computed in QCD for lack of the true Π(s): the properties stated above do not specify the function uniquely, but define a whole class 𝒜 of admissible functions to which the physical one must belong. If we define

δ_∞ ≡ min_{Π ∈ 𝒜} sup_{|s| = s_0} |Π(s) − Π_th(s)|,   (4)

where the minimization is with respect to all functions in this admissible class, then it follows that

δ ≥ δ_∞.   (5)

As shown in Ref. [20], the quantity δ_∞ can be calculated by applying a duality theorem in functional optimization [21]. For completeness, we present below the main steps of the proof. We make first the simple change of variable

z = s/s_0,   (6)

which maps the domain shown in Fig. 1 onto the unit disk |z| ≤ 1 cut along a segment of the real axis. Various classes of analytic functions have been defined in the canonical domain |z| < 1. In particular, adopting the functional distance (4), we are led naturally to the class H^∞ of functions analytic inside the disk and bounded on the circle |z| = 1, with the L∞ norm defined as the supremum of the modulus along the boundary:

‖g‖_{L∞} = sup_{θ ∈ [0, 2π)} |g(e^{iθ})|.   (7)

We consider also the class H² of functions analytic inside the disk and of finite L² norm on the frontier |z| = 1, where

‖g‖_{L²} = [ (1/2π) ∫₀^{2π} |g(e^{iθ})|² dθ ]^{1/2},   (8)

and the more general class of analytic functions of finite L²_w norm, where L²_w is the more general norm defined as

‖g‖_{L²_w} = [ (1/2π) ∫₀^{2π} w(θ) |g(e^{iθ})|² dθ ]^{1/2},   (9)

in terms of a weight w(θ) > 0 given on the boundary of the unit circle.

As shown in Ref. [20], the problem (4) can be written in the equivalent form

δ_∞ = min_{g ∈ H^∞} ‖F − g‖_{L∞}.   (10)

Here the minimization is performed with respect to all the functions g(z) analytic in the disk and bounded on the frontier, and F is a known complex function defined on the boundary of the unit circle by

(11)

where the arbitrary parameter entering this expression is introduced for technical reasons and does not appear in the final result (see [20] for more explanations).

The solution of the problem (10) has been obtained in Ref. [20] by means of a duality theorem in functional optimization [21]. This theorem reads

δ_∞ = sup_{g,h} | (1/2π) ∫₀^{2π} e^{iθ} F(e^{iθ}) g(e^{iθ}) h(e^{iθ}) dθ |,   (12)

where the functions g and h belong to the unit sphere of H², i.e. are analytic in |z| < 1 and satisfy the conditions

‖g‖_{L²} ≤ 1,  ‖h‖_{L²} ≤ 1,   (13)

where the L² norm is defined in Eq. (8).

We recall that all the functions considered here are real analytic, i.e. they satisfy the reflection property g(z*) = g(z)*. Therefore, if one writes the Taylor expansions

g(z) = Σ_{n≥0} g_n zⁿ,  h(z) = Σ_{n≥0} h_n zⁿ,   (14)

the coefficients g_n and h_n will be real and, due to (13), will satisfy the conditions

Σ_{n≥0} g_n² ≤ 1,  Σ_{n≥0} h_n² ≤ 1.   (15)

The supremum in the right-hand side of Eq. (12) can be calculated by means of a rather simple numerical algorithm, as shown in Ref. [20]. Namely, let H be the Hankel matrix defined as

H_{nm} = c_{n+m+1},  n, m ≥ 0,   (16)

in terms of the real coefficients

c_k = (1/2π) ∫₀^{2π} F(e^{iθ}) e^{ikθ} dθ,  k ≥ 1.   (17)

The coefficients c_k defined in Eq. (17) are actually the negative-frequency Fourier coefficients, which measure the “non-analytic” part, in |z| < 1, of the complex function F defined in Eq. (11). One may recognize in them the moments used in traditional finite-energy sum rules, based on a Cauchy integral relation for the correlator multiplied by a power of s along the contour of Fig. 1. Then δ_∞ is obtained as the spectral norm of H,

δ_∞ = ‖H‖,   (18)

i.e. the square root of the greatest eigenvalue of the positive-semidefinite matrix H†H.

In the numerical calculations, the matrix H is truncated at a finite order, using the fact that the successive approximants tend toward the exact result as the order increases (for a formal proof of convergence see Appendix E of Ref. [22] and for numerical tests see Ref. [23]). By the duality theorem, the initial functional minimization problem (4) is thus reduced to a rather simple numerical computation.
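The numerical procedure just described is compact enough to be summarized in a few lines. The sketch below assumes that the moments c_k of Eq. (17) have already been computed and stored (0-based, so that c[0] = c_1); it is an illustration of the algorithm, not the code used in this work.

```python
import numpy as np

def delta_inf(c):
    """Truncated version of Eq. (18): spectral norm of the Hankel matrix (16).

    `c` is assumed to hold the moments c_1, ..., c_K of Eq. (17), with c[0] = c_1.
    """
    K = len(c)
    N = (K + 1) // 2          # largest N such that H_nm = c_{n+m+1} fits in c
    H = np.array([[c[n + m] for m in range(N)] for n in range(N)])
    # spectral norm: the square root of the greatest eigenvalue of H^T H
    return np.linalg.norm(H, ord=2)

# in practice one enlarges the set of moments until the output stabilizes,
# which mimics the convergence test mentioned in the text
```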

One can define also the minimal functional distance based on the L² norm,

δ_2 ≡ min_{Π ∈ 𝒜} ‖Π − Π_th‖_{L²},   (19)

which can be written in the variable z as

δ_2 = min_{g ∈ H²} ‖F − g‖_{L²},   (20)

for the same function F defined in Eq. (11). The solution of this minimization problem has the simple form [20]

δ_2 = [ Σ_{k≥1} c_k² ]^{1/2},   (21)

in terms of the same coefficients c_k defined in Eq. (17).

More generally, we consider the functional distance based on the more general L²_w norm defined in Eq. (9), when instead of (20) we must solve the problem

δ_{2,w} = min_{g ∈ H²} ‖F − g‖_{L²_w},   (22)

where w(θ) is a suitable weight. It can be shown that, without loss of generality, w can be taken as the boundary value w(θ) = |ω(e^{iθ})|² of an outer function ω(z) [21], i.e. a function analytic and without zeros in |z| < 1. It is easy to show then that the solution of the problem (22) is

δ_{2,w} = [ Σ_{k≥1} d_k² ]^{1/2},   (23)

where the real numbers d_k are the weighted moments

d_k = (1/2π) ∫₀^{2π} ω(e^{iθ}) F(e^{iθ}) e^{ikθ} dθ,  k ≥ 1,   (24)

depending on the function ω. The quantity δ_2 defined in the standard L² norm is obtained from these relations for ω = 1.

In practice, as in the calculation of δ_∞ by means of Eq. (18), the infinite sums in Eqs. (21) and (23) are truncated after a finite number of terms and the convergence towards the values δ_2 and δ_{2,w} is tested numerically. Actually, as we will show in the next section, the infinite summation in the general case (23) can be performed exactly and the minimal distance can be written in a closed analytic form.
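For the L² and weighted L² cases the truncated sums are even simpler than the Hankel-matrix computation. The following sketch, again under the assumption that the moments (c_k of Eq. (17), or the weighted d_k of Eq. (24)) are precomputed, evaluates the partial sums of Eqs. (21) and (23) and illustrates the numerical convergence test; the moments used in the example are purely illustrative, not physical.

```python
import numpy as np

def delta_2(moments, nmax=None):
    """Truncated version of Eqs. (21)/(23): sqrt of the sum of squared moments.

    `moments` holds c_1, c_2, ... of Eq. (17) (or d_1, d_2, ... of Eq. (24)).
    """
    m = np.asarray(moments[:nmax], dtype=float)
    return np.sqrt(np.sum(m ** 2))

# convergence test with illustrative moments only:
c = 0.3 / np.arange(1, 501) ** 2
for nmax in (5, 15, 100, 500):
    print(nmax, delta_2(c, nmax))
```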

3 New mathematical developments

3.1 Closed analytic form of δ_{2,w}

A compact analytic form for the quantity δ_{2,w} can be obtained easily by performing the summation upon k in the expression (23). For convenience we write the real coefficients d_k defined in Eq. (24) as

(25)

where the significance of the terms is obvious. Then we obtain from (23):

(26)

Using the expression of the d_k from (24), the first sum in Eq. (26) can be written immediately as

(27)

The second sum in Eq. (26) is written in a convenient form by using for the coefficients d_k the expression

(28)

which is explicitly real. Using further the reality property of the functions entering this expression, i.e. the relation f(z*) = f(z)*, the integration interval can be reduced to [0, π]. Thus we obtain, after a straightforward calculation,

(29)

We note that the end-point singularities in the integrands of (27) and (29) are logarithmically integrable.

The last sum in Eq. (26) can be written as

(30)

where PV denotes the principal value and

(31)

is the phase of the complex function entering the integrand, on the circle z = e^{iθ}.

For the numerical evaluation it is more convenient to write the second term in the r.h.s. of Eq. (30) in the equivalent form:

(32)

By collecting the terms in Eqs. (27), (29), (30) and (32) we obtain the final expression of the square of δ_{2,w}:

(33)

All the integration intervals can be further reduced to [0, π] by taking into account, as explained above, the reality property of the functions, which implies that their real (imaginary) parts are even (odd) functions of θ.

3.2 Approximation of the L∞ norm by a suitable class of L²_w norms

We show now that it is possible to approximate the minimal distance δ_∞ measured by the L∞ norm by a class of minimal distances defined by the weighted L²_w norms. We follow an argument put forward for the first time in Refs. [24, 25], which is based on the duality theorem Eq. (12) applied for solving the original minimization problem (10).

We note that the r.h.s. of Eq. (12) requires the calculation of the supremum upon two sets of functions, g and h, which are analytic in the unit disk and of L² norm bounded by 1. The idea is to calculate first the supremum upon one class of functions, say h, keeping the other one fixed. We note that the r.h.s. of Eq. (12) can be written as

sup_{g,h} | Σ_{n≥0} h_n d_{n+1} |,   (34)

where the h_n are the Taylor coefficients defined in Eq. (14) and the d_k are the negative-frequency Fourier coefficients of the product g F, given by the weighted moments (24) with ω = g. Then Eq. (12) becomes

δ_∞ = sup_g sup_h | Σ_{n≥0} h_n d_{n+1} |.   (35)

The supremum upon the coefficients h_n, subject to the second condition (15), can be evaluated immediately by the Cauchy-Schwarz inequality, leading to

δ_∞ = sup_g [ Σ_{k≥1} d_k² ]^{1/2},   (36)

where the dependence of the coefficients d_k on the weight is given in Eq. (24). Finally, by using (23), we write this relation as

δ_∞ = sup_w δ_{2,w}.   (37)

We emphasize that this is an exact relation, which states that the minimal distance in the L∞ norm is the largest value from the class of distances in the weighted L² norms, for all the weights w = |g|² subject to the first condition (13).

Of course, the problem is not yet solved: we still have to calculate the supremum in (37). The procedure makes sense if one can find a suitable, simple parametrization of the functions g, such that the maximization upon this limited class approximates well the exact δ_∞. It turns out that such a choice exists [24, 25]: the main observation is that one can obtain approximately the maximum modulus of a function on a certain interval by computing the normalized integral of its modulus squared in a variable that dilates the region where the modulus of the function reaches its maximum. Therefore, one can approximate the L∞ norm (7) of an arbitrary function by an L² norm (8) defined on an integration range distorted by a suitable change of variable. In order to obtain it, we consider the conformal mapping of the unit disk onto itself, achieved by the so-called Blaschke transformation [21]

z̃(z) = (z − λ)/(1 − λ z),   (38)

where λ is an arbitrary parameter with |λ| < 1. Since we consider real analytic functions, one can restrict λ to real values. The transformation (38) maps, in particular, the unit circle onto itself. This change of variable in the norm (8) introduces the Jacobian |dz̃/dz|, which corresponds to a weight function in the weighted norm (9) of the form

w(θ) = (1 − λ²) / |1 − λ e^{iθ}|²,   (39)

where λ is a real parameter in the range −1 < λ < 1. It is easy to check that this function satisfies the first condition (13).

By the above remark, the functional supremum in Eq. (37) is reduced to a maximization with respect to a single real parameter λ. The minimal distance δ_∞ can thus be calculated approximately by a relatively simple algorithm: first one calculates the minimal distance δ_{2,w} given in (23), with the particular choice (39) of the weight. Then the parameter λ is varied in the range (−1, 1) and the largest value of δ_{2,w} is retained. This problem is numerically quite simple, especially since, as shown in the previous subsection, δ_{2,w} for an arbitrary weight can be written in a compact analytic form.
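As a sketch of this algorithm, suppose `delta_2w(lam)` evaluates the closed analytic form of Sec. 3.1 for the Blaschke weight (39) with parameter λ = `lam` (this evaluator is assumed, not shown here). The approximation of δ_∞ is then a one-dimensional scan:

```python
import numpy as np

def delta_inf_from_weights(delta_2w, lambdas=np.linspace(-0.99, 0.99, 199)):
    """Approximate the supremum (37) over the one-parameter family (39):
    evaluate delta_{2,w} on a grid of lambda and keep the largest value."""
    values = np.array([delta_2w(lam) for lam in lambdas])
    i = int(np.argmax(values))
    return values[i], lambdas[i]   # approximate delta_inf and optimal lambda
```

A coarse grid followed by a local refinement around the best λ is usually sufficient, since the dependence on λ is smooth.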

Some hints on the optimal value of the parameter λ are obtained from the specific properties of the input. Thus, we note that for values of λ close to 1 the weight (39) is large near θ = 0, i.e. near the point s = s_0, both on the circle and on the real axis. Therefore, in this case the weighted norm (22) is dominated by the region of the circle near the timelike axis. One can expect that such values of λ would be useful in order to detect DVs that are large only near the timelike axis. We shall test these expectations in the numerical studies reported in Sec. 5.

4 Toy data generation

The main goal of the formal developments presented in the previous sections is to provide tools for the validation of DV models using information from the spectral function data. It is therefore important to test the procedure with toy data sets generated from a realistic model for which the DVs are known exactly. Part of the work described here is an extension of the analytical results of Ref. [20] to a more realistic situation, where the spectral function comes in the form of a binned data set subject to statistical fluctuations. With the application to ALEPH data in mind, we shall consider data sets that are obtained from a realistic covariance matrix. In this section we discuss the central model used for the exercises performed in this work, as well as the construction of our covariance matrix. With these two ingredients, we have full control over the problem and can perform simulations in order to understand how the procedure can be applied to real data.

We start with a brief review of the model that we employ for this exercise (an extended discussion of the model in the present context can be found in Ref. [20]). The model was introduced in Ref. [6], based on previous ideas from Refs. [3, 2, 4]. To be concrete, here we focus on the vector spectral function. The description is based on a “Regge tower” of resonances and, upon including the ρ-meson pole into the tower, the correlator assumes the following exact form

(40)

where we defined

(41)

and ψ is the Euler digamma function. We employ the following set of parameters:

(42)

which provides, for our purposes, a realistic description of the experimentally observed vector spectral function of the QCD correlator.

The asymptotic expansion of the digamma function can be used in order to obtain an OPE-type description of the correlator. Truncated at a finite order, it reads

(43)

The first term corresponds to the “purely perturbative” part and the other terms are power corrections, akin to the condensate contributions of QCD. The explicit expressions of the coefficients that appear in Eq. (43) are

(44)

with B_n(x) representing the Bernoulli polynomials.

The asymptotic expansion of Eq. (43) is not accurate near the timelike axis, as is the case for the OPE in QCD. At sufficiently large |s| the description can be improved by taking the DVs into account. In practice, the DV term can be obtained from the reflection property of the digamma function [6, 20]. The following modified approximant is thus obtained

(45)

valid at sufficiently large |s| in the upper half-plane. The DV contribution is given in the first quadrant (Re s ≥ 0 and Im s ≥ 0) by

(46)

and can be defined in the lower half-plane using the Schwarz reflection Π_DV(s*) = Π_DV(s)*. For Re s < 0 this correction is assumed to vanish.

Comparing the modulus of the exact function, Eq. (40), along the upper semicircle |s| = s_0, Im s ≥ 0, with its approximants, Eqs. (43) and (45), one learns that the truncated OPE-type expansion of Eq. (43) provides an accurate description except close to the timelike axis, as expected in QCD. The addition of the DV term fixes this deficiency, and the approximate description of Eq. (45) becomes excellent also in the vicinity of the timelike axis. (We refer to Ref. [20] for a visual account of this comparison.)

For the numerical exercises described in this work we use the model of Eq. (40) as our central description. Hence, the OPE for the model and the DV contribution are exactly known, and are given by Eqs. (43) and (46), respectively. The values of the vector spectral function used for toy data generation are thus obtained from

(47)

In order to mimic the experimental situation, the energy interval covered by the data is split into bins, and the central value of each bin is obtained from a statistical distribution that fluctuates around the value of Eq. (47) calculated at the center of the bin. We turn now to the issue of the covariance matrix that governs these fluctuations.

Our toy data generation is performed having in mind the application to the ALEPH spectral functions [16]. It is therefore desirable that the covariances used reflect those of the ALEPH data sets. One could simply adopt the ALEPH covariances as such, since they are publicly available [27], and generate toy data sets following a statistical distribution given by this matrix, together with the central values of Eq. (47). The price to pay is that one would have to use the ALEPH binning of the energy interval. In the most recent version of the data sets, due to an improved unfolding procedure, an adaptive binning was used, which results in bins with different widths, notably with larger bins towards the edge of the spectrum. Here we prefer to adopt a fixed bin width, for simplicity, and we choose it such as to have more bins than ALEPH towards the end-point of the spectrum. This allows us to have a finer description at higher energies. The accompanying realistic covariances are obtained from a numerical interpolation of the ALEPH covariance matrix for the vector channel. In this way, we preserve a fixed binning together with a covariance matrix that has all the main properties of ALEPH's, namely, strong correlations between neighbouring bins, larger uncertainties towards the end-point of the spectrum and, of course, uncertainties that are of the same order as those of ALEPH's data.
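A minimal sketch of this interpolation step, assuming the ALEPH covariance matrix is available on its own grid of bin centers (all names are illustrative), could read:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def interpolate_covariance(s_aleph, cov_aleph, s_new):
    """Interpolate a covariance matrix given on bin centers s_aleph onto
    new fixed-width bin centers s_new, restoring afterwards the properties
    a covariance matrix must have."""
    interp = RegularGridInterpolator((s_aleph, s_aleph), cov_aleph,
                                     bounds_error=False, fill_value=None)
    s1, s2 = np.meshgrid(s_new, s_new, indexing="ij")
    pts = np.stack([s1.ravel(), s2.ravel()], axis=-1)
    cov = interp(pts).reshape(len(s_new), len(s_new))
    cov = 0.5 * (cov + cov.T)          # enforce exact symmetry
    # interpolation can break positive semi-definiteness; clip negative modes
    w, v = np.linalg.eigh(cov)
    return (v * np.clip(w, 0.0, None)) @ v.T
```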

Figure 2: An example of a toy data set obtained from the central values of the model given in Eq. (47) with covariances from a numerical interpolation of ALEPH’s covariance matrix for the vector channel [27]. The solid line gives the central value of the model for comparison.

In the present work we adopt a fixed bin width in line with what is used in the experimental analyses [28, 29, 16], the central values of Eq. (47), and the covariance matrix obtained from a numerical interpolation of the ALEPH covariances [27], as described above. Toy data sets can then be generated from a multivariate Gaussian distribution. An example of a data set generated in this way is displayed in Fig. 2, where we show also the central values of Eq. (47) for comparison. In this figure, the strong correlations are clearly visible, mainly towards the end-point of the spectrum, where the uncertainties are also larger.
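Once the central values of Eq. (47) and the interpolated covariance are in hand, each toy data set is one draw from the corresponding multivariate Gaussian. A minimal sketch (function names are ours):

```python
import numpy as np

def generate_toys(sigma_central, cov, n_toys=5000, seed=1):
    """Draw toy spectral-function data sets that fluctuate around the
    model values of Eq. (47). The fixed seed keeps the simulation
    reproducible. No positivity constraint is imposed: rare negative
    values can occur in the last bins, where the uncertainties are large."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(sigma_central, cov, size=n_toys)
```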

When using data sets for the calculation of the functional distances discussed in Secs. 2 and 3, weighted integrals over the spectral function, such as those entering Eq. (17), must be discretized. We adopt integration by rectangles, as is usual when dealing with this type of integrals of the spectral functions [16, 15]. However, weight functions with high powers of the energy variable, appearing for example in Eq. (17), vary strongly within a bin. It is therefore necessary to average the weight function over each bin to improve the numerical result (in the case of the ALEPH data this prescription is sometimes used due to the large widths of the right-most bins [15]). The numerical counterpart of a typical integral then reads

∫ w(s) σ(s) ds ≈ Σ_i σ(s_i) ∫_{bin i} w(s) ds,   (48)

where σ(s_i) is the value of σ at the center of the i-th bin, all bins have the same fixed width, the sum runs up to the index of the last included bin, and we always work with s_0 values that correspond to the right edge of a bin. The same prescription was applied in the calculation of the relevant integrals which appear in the analytic form of δ_{2,w} derived in Sec. 3.1. We have tested that this algorithm provides enough accuracy for the explorations performed in this work.
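In code, the prescription of Eq. (48) amounts to keeping the spectral function constant over each bin while integrating the rapidly varying weight exactly (here numerically, with scipy); the function below is an illustrative sketch.

```python
import numpy as np
from scipy.integrate import quad

def binned_moment(weight, sigma_bins, s_edges):
    """Discretized weighted integral in the spirit of Eq. (48):
    sum_i sigma(s_i) * integral of the weight over bin i."""
    total = 0.0
    for sig_i, lo, hi in zip(sigma_bins, s_edges[:-1], s_edges[1:]):
        w_int, _ = quad(weight, lo, hi)  # exact integral of the weight over the bin
        total += sig_i * w_int
    return total
```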

5 Results

We apply now the functional-analysis based tools to test in practice the description of DVs. To illustrate the potential of the method, a useful approach is to introduce in the approximate description (45) of the correlator a strength parameter ζ that allows one to tune the contribution of the DVs. Formally, we do this by using for Π_th(s) in the formalism presented in Secs. 2 and 3.1, instead of Eq. (1), the more general expression

Π_th(s) = Π_OPE(s) + ζ Π_DV(s),   (49)

where the true value of the strength parameter is ζ = 1. As in Ref. [20], to simulate the situation of the light-quark correlators in QCD, we take as Π_OPE(s) the asymptotic expansion (43) of the exact model, truncated after a finite number of terms. For Π_DV(s) we take the prediction (46) of the model.

In Ref. [20] it was shown, by means of an analytical computation, that δ_∞ has a sharp minimum at the correct value of ζ when one employs the description of DVs that follows from the model used for the correlator. The alternative quantity δ_2 displayed also a minimum at the correct value of ζ, but this minimum was found to be shallower [20]. In this section we investigate the impact of the use of spectral function data with realistic covariances on the above findings. It will be interesting to make use of the weighted L²_w norm, since it permits a continuous and almost exact interpolation between the L² and the L∞ norms, as well as a study of other weighted norms, such as “pinched” norms. The analytic results obtained for δ_{2,w} shall also be instrumental in this analysis.

5.1 Comparison between the L² and L∞ norms

We check first, on the toy model, the approximation of the minimal distance δ_∞ based on the L∞ norm by the distances δ_{2,w} based on the weighted L²_w norms, using the particular class of weights given in (39). In this discussion we use, as in Ref. [20], the exact spectral function of the model, with no errors, and the truncated OPE expansion (43).

From Fig. 3, which shows the modulus of Π_DV as a function of the polar angle θ on the first quadrant of the circle |s| = s_0, one can see that the DV part of the model is strongly peaked towards the Minkowskian axis. Therefore, following the discussion at the end of Sec. 3.2, a weight strongly peaked towards θ = 0 (i.e. λ close to 1) is expected to give the best approximation of the L∞ norm by weighted L² norms for this model. As shown below, this expectation is confirmed.

Figure 3: Modulus of Π_DV as a function of the polar angle θ on the first quadrant of the circle |s| = s_0. The function vanishes in the left half of the complex s-plane.

In Fig. 4 we show the variation with the strength parameter ζ of several functional distances, calculated with the algorithms based on Fourier coefficients, truncated at a finite order. We set in this exercise the radius s_0 of the circle in Fig. 1 to a fixed value, but the results are similar for other choices.

As usual, δ_∞ denotes the minimal distance measured in the L∞ norm, calculated from the spectral norm (18) of the Hankel matrix, Eq. (16). For calculating δ_{2,w} we used the truncated sum (23), with the expression (24) of the moments and a weight of the form (39) with λ close to 1. This weight drastically dilates the region near θ = 0 on the circle, increasing its contribution to the norm. One can see that, for this choice of the weight, the distance δ_{2,w} practically coincides with δ_∞. Both curves are steeper than the standard distance δ_2, which corresponds to the weight w = 1, as already remarked in Ref. [20]. The figure shows also that the minimal distance calculated with a pinched weight of the form (one refers as “pinched” to weight functions that have a zero at s = s_0)

(50)

is much less sensitive to the variation of ζ, which is not surprising, since this type of weight suppresses the region where the DV term is nonzero.

The optimal value of the parameter λ was found empirically, by computing δ_{2,w} for several values of λ close to 1 and keeping the value leading to the best approximation of δ_∞. Of course, the best value achieving the supremum in (36) depends also on the other ingredients of the input. Thus, for a different number of Fourier coefficients taken into account in the calculation of the norms, a slightly different value of λ might yield the best approximation. Also, a slightly different optimal value of λ is expected if the input spectral function is slightly changed. This remark will be useful for understanding the results of the simulations performed below, which take into account the uncertainties on the spectral function.

Figure 4: Variation of the minimal functional distances with ζ, at fixed s_0. δ_∞ is based on the L∞ norm; δ_{2,w}, which coincides practically with δ_∞, is obtained with w of the form (39) for λ close to 1; a third distance is obtained with the pinched weight (50). The calculation of the norms is done with the exact spectral function of the model, using a finite number of Fourier coefficients.
Figure 5: Typical distribution of the strength parameter ζ obtained from the Monte Carlo simulations.

5.2 Stability and comparison with the summed results

We start the simulations by investigating the computation of the distance measured with the L² norm from the truncated version of Eq. (21), which can be viewed as a special case of Eq. (23) with the appropriate choice of the weight. For a given data set, one can compute the value of δ_2 or, more generically, of δ_{2,w}, after truncating the infinite sums at a finite order. The value of the norm can then be minimized numerically with respect to the strength parameter ζ. Due to the statistical fluctuations, each toy data set that is generated yields a different value of ζ. We repeat this procedure for 5,000 different data sets, in a reproducible way, in order to obtain the statistical distribution of the parameter ζ. The final value of ζ can be read off from the distributions. We quote central values given by the medians and uncertainties defined by 68% confidence levels, but the distributions are, to a very good approximation, Gaussian, as illustrated by the histogram shown in Fig. 5.
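Schematically, the Monte Carlo step described above can be organized as follows; `zeta_of_dataset` stands for the minimization of the chosen truncated norm with respect to the strength parameter for one toy data set, and is assumed to be defined elsewhere.

```python
import numpy as np

def zeta_statistics(toys, zeta_of_dataset):
    """Distribution of the strength parameter over the toy data sets;
    returns the median and the (asymmetric) 68% interval."""
    zetas = np.array([zeta_of_dataset(toy) for toy in toys])
    lo, med, hi = np.percentile(zetas, [16.0, 50.0, 84.0])
    return med, med - lo, hi - med
```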

In Table 1, we show the dependence of the best values of ζ on the number of terms included in the truncated version of Eq. (21). In this table we choose a value of s_0 below the end-point of the spectrum, which avoids some of the bins with larger uncertainties (see Fig. 2). One can conclude from this table that the convergence of the results seems to be satisfactory; with a few hundred terms in the sum the results are already stable. Furthermore, the exercise indicates that with a realistic data set the error on ζ is such that we are able to distinguish the presence of DVs, i.e. ζ = 1, from their absence, ζ = 0, in a statistically meaningful way.

5
15
100
150
300
500
Table 1: Values of ζ from the minimization of δ_2, as a function of the number of coefficients included in the truncated sum (21), at fixed s_0. The numerical integrals needed for the moments are computed as in Eq. (48). The central value is obtained from the median of the ζ-distribution.

We now turn to the dependence of the predictions on the choice of s_0. The fact that the last few bins suffer from much larger uncertainties, combined with the decrease of the DV contribution at higher energies, has the consequence that larger choices of s_0 produce less precise determinations of ζ. In Table 2 we compare values of ζ obtained from δ_2 and from δ_∞, i.e. from the truncated versions of Eqs. (21) and (18), respectively, for different values of s_0 (the same number of terms is used in the sums). Two main conclusions can be drawn from this table. First, as expected, the uncertainties are larger when s_0 is chosen closer to the edge of the spectrum. The determination loses statistical significance rather fast when the last few bins are included, and at the largest s_0 one can no longer distinguish in a meaningful way the central value from the absence of DVs. Second, the use of δ_∞ leads to broader ζ-distributions and hence to larger uncertainties. This effect is small for lower s_0 values, where the results are essentially indistinguishable from those obtained using δ_2. At the largest s_0, however, the uncertainty is twice that of the δ_2 counterpart. (In order to obtain the results of Table 2 at the largest s_0 it becomes important to allow for negative central values in the toy data spectral functions. These are rare, but do occur for the last few bins where the uncertainties, following the recent ALEPH reanalysis, are rather large.) We conclude that the deeper minimum of δ_∞ with respect to δ_2, observed in the analytical calculations of Ref. [20] (and seen also above in Fig. 4), does not translate into a narrower ζ-distribution when the errors on the spectral function are taken into account.

s_0      ζ from δ_2      ζ from δ_∞
 GeV
 GeV
 GeV
Table 2: Values of ζ from the minimization of δ_2 and δ_∞, for different values of s_0. The same number of terms is included in the sum (21) and in the Hankel matrix used in (18). The numerical integrals needed for the moments are computed as in Eq. (48). The central value is obtained from the median of the ζ-distribution.

A final validation of the results obtained in Tabs. 1 and 2 can be obtained using the closed analytic form of δ_{2,w} derived in Sec. 3.1. The use of the weighted L²_w norm is particularly convenient, as it allows for an almost exact interpolation between the L² and L∞ norms and, at the same time, is amenable to a fully analytical treatment of the minimization problem.

s_0 = 2.76 GeV²
Table 3: Optimal values of ζ from the minimization of the exact analytic expression of δ_{2,w}, for two values of s_0. Three weights are used: w = 1, corresponding to the standard L² norm; the weight (39) with λ close to 1; and the pinched weight (50). The results are obtained with 5,000 toy data sets, the representative value being the median of the ζ-distributions.

Using the decomposition (49) of Π_th, we can write Eq. (33) as a quadratic polynomial in the strength parameter ζ, of the form

δ²_{2,w}(ζ) = A ζ² + B ζ + C,   (51)

where

(52)

and the calculable coefficients A, B and C can be read off from (33). In particular, it is easy to see that only the quantity defined in (52) depends on the spectral function, the coefficients involving otherwise only the values of the theoretical expressions Π_OPE and Π_DV on the circle |s| = s_0.

The optimal value of ζ, which achieves the minimum of (51), is obtained in a straightforward manner as

ζ_opt = −B/(2A).   (53)

This formulation turns out to be very convenient for numerical simulations, which can be performed directly on the spectral function, avoiding the calculation of many experimental moments and the issue of truncating infinite sums. In practice, the purely theoretical coefficients and functions are calculated only once, being fixed during data generation. This amounts to a considerable reduction of the computational time required by the statistical simulations.
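The practical gain can be seen in a small sketch: with the purely theoretical coefficient of the quadratic term computed once, each toy data set only requires updating the data-dependent coefficient before applying Eq. (53). All names below are illustrative.

```python
def zeta_opt(A, B):
    """Minimum of the quadratic (51), delta^2 = A*zeta^2 + B*zeta + C,
    as in Eq. (53); requires A > 0 (C is irrelevant for the minimum)."""
    return -B / (2.0 * A)

# A is fixed by the theory input and computed once outside the toy loop;
# only the data-dependent coefficient changes with the generated data:
# for toy in toys:
#     B = data_dependent_coefficient(toy)   # assumed helper, not shown
#     zetas.append(zeta_opt(A, B))
```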

In Tab. 3 we display the ranges of ζ obtained from simulations using the analytic result (53), for two values of s_0 and three choices of the weight: w = 1, which corresponds to the standard L² norm with the minimal distance δ_2; the expression (39) with λ close to 1, expected to approximate well the L∞ norm; and the pinched weight (50). We remark the perfect agreement between the results quoted in Table 3 for the weight w = 1 and the values obtained from δ_2 in Table 2 for the same values of s_0. This validates the convergence of the results based on truncated sums of Fourier coefficients. We also remark a very good agreement between the results from δ_∞ and those from its approximate version in the second column of Tab. 3. (We discuss these results further in the next section.)

As seen from Table 3, in all cases the central value of the parameter ζ coincides with the true central value. This result was expected, having in view the precise theoretical input used along the circle in our study. The confirmation of this expectation is a test of the numerical algorithms used in the calculations. In particular, the integral in (53) had to be computed using the improved algorithm described in (48), i.e. the weight-dependent factor of the integrand was integrated exactly over each bin. On the other hand, the uncertainties quoted in the various entries of Table 3 are quite different. The explanation of these results and their relevance for the application of the method to real data will be discussed in the next subsection.

5.3 Discussion

The two weights used in the simulations with the analytic form of δ_{2,w}, besides w = 1, are quite extreme: the expression (39) with λ close to 1 strongly enhances the contribution of the region near the point s = s_0 on the circle shown in Fig. 1. Since, as seen from Fig. 3, the magnitude of Π_DV is strongly peaked near the timelike axis, the corresponding minimal distance is very sensitive to the variation of the strength parameter ζ, as seen from Fig. 4. The same figure shows also that for this weight the L²_w norm approximates well the L∞ norm. On the contrary, the weight (50) suppresses the region near s = s_0, which explains the low sensitivity of the corresponding distance to the variation of ζ, visible in Fig. 4.

The above remarks refer to a fixed spectral function. When this quantity is varied within errors during the simulations, the two extreme weights respond in different ways. The weight (39), which enhances also the region near s = s_0 on the real axis, will be more sensitive to the variation of the input data, since the errors are larger towards the end of the spectrum. For the lower value of s_0, when the errors are still moderate, the effect of the variation of the input data turns out to be comparable to the opposite effect produced by the larger sensitivity to the variation of ζ. As a consequence, the spread of the ζ-distribution for w of the form (39) with λ close to 1 is comparable to that obtained with w = 1, as seen from the second and third columns of the first row of Table 3. On the other hand, the pinched weight ensures a low sensitivity of δ_{2,w} to the variations of the spectral function produced by the errors. However, the low sensitivity of the same quantity to the variation of the strength parameter ζ leads to an overall large spread of the statistical distribution, which explains the larger error quoted in the last column of the first row of Table 3.

For the larger value of s_0, the large errors of the input data in the last bins lead to the large uncertainties on ζ quoted in the second line of Table 3. In this case, a statistically significant detection of DVs from the pseudodata is not possible. For the weight (39) with λ close to 1, the great sensitivity with respect to the input data near the upper end of the spectrum exceeds the opposite large sensitivity to the variation of ζ. The resulting ζ-distribution has a larger uncertainty than that obtained with the standard L² norm. In the case of the pinched weight (50), the suppression of the large errors of the last bins compensates the spread produced by the low sensitivity to the variation of ζ. The overall effect is that at this s_0 the spreads on ζ obtained with the two extreme weights are comparable.

A last remark concerns the relation between the weighted L²_w norms and the L∞ norm. One can see that at the lower s_0 the ζ-distribution quoted in Table 3 for the weight of the form (39) with λ close to 1 coincides with that obtained from the distance measured in the L∞ norm, given in Table 2. However, at the larger s_0, the standard deviation on ζ quoted in the third column of Table 3 is somewhat smaller than that obtained with δ_∞ in Table 2.

To understand this small difference, we recall that the simulations reported in Table 3 were performed with a fixed value of λ in the expression (39). But, as discussed in Sec. 3.2, the optimal choice of λ achieving the supremum in (36) depends also on the input spectral function. At the lower s_0, when the bins with large errors are excluded, this dependence affects the simulations in an almost unobservable way. However, at the larger s_0 the variation of the input can be considerable, due to the large errors in the last bins. In this case the weight (39) with fixed λ is not always the optimal weight leading to the precise approximation of δ_∞ according to Eq. (36). Therefore, the corresponding result presented in Table 3 only illustrates the use of the L²_w norm for a rather extreme weight, inspired by the L∞ norm but not reproducing exactly its results. When the errors are large, the simulations using the L∞ norm must resort to the exact algorithm (18) with the Hankel matrix (16).

6 Summary and conclusions

In the present paper we continued the investigation of the functional-analysis based tools proposed in Ref. [20] for detecting DVs from measurements of the spectral functions of the QCD correlators. The aim was to evaluate the potential of the method when the spectral function comes in the form of binned data with realistic covariances. We performed the analysis still in the context of the toy model for the correlator considered in [20], in which we allowed for uncertainties described by the covariances obtained from the publicly available ALEPH spectral functions. In this way we had full control over the problem and the outcome of the method could be checked against the expected results.

The paper also contains some theoretical developments of the approach proposed in Ref. [20]. In addition to the functional distances based on the L² and L∞ norms, already discussed in Ref. [20], we introduced a general class of weighted L²_w norms, which are instrumental for several reasons. First, as shown in Sec. 3.1, we were able to obtain a closed analytic expression for the minimal functional distance measured in this norm, thereby avoiding truncated sums of Fourier coefficients. Second, these norms provide an interpolation between two extreme cases: the pinched weights familiar from phenomenological works, and the opposite class of weights which, as discussed in Sec. 3.2, provide a good approximation of the functional distance measured in the L∞ norm.

To investigate the potential of the method for the detection of DVs we introduced, in the spirit of Ref. [20], a strength parameter ζ that quantifies the DV contribution to Π_th according to (49), the true value of this parameter being ζ = 1. As in [20], we define the optimal ζ as the value that achieves the minimum of the lower bounds on the functional distances δ_2, δ_∞ or δ_{2,w}, measured in the norms L², L∞ or L²_w, respectively, between the true correlator and its approximant along the circle |s| = s_0 in the complex energy plane.

For want of a theoretical statistical interpretation of the minimal distances defined by functional analysis, we performed an empirical study in which fake data on the spectral function were generated in a number of bins. To mimic the experimental situation, we adopted a multivariate Gaussian distribution with covariances inferred from the ALEPH covariance matrix for the vector channel [16]. By simulations with 5,000 different data sets, we obtained the statistical distributions of the optimal parameter ζ, which were, to a very good approximation, Gaussian.

The main results of these investigations are displayed in Tables 1, 2 and 3, where we quote central values given by the medians and uncertainties defined by 68% confidence levels from the corresponding distributions. One can see that the results based on the truncated computation of the norms converge relatively fast, which makes their practical use feasible. This could be confirmed using the analytical determination of ζ given in Eq. (53), which avoids the necessity of truncating the sums. We investigated in this framework three types of weights: w = 1, which corresponds to the standard L² norm; the expression (39) with λ close to 1, expected to approximate well the L∞ norm; and the pinched weight (50).

We note that in all cases the true value of the strength parameter is obtained with high accuracy. Since the theoretical input we use is quite precise, this result represents a good test of the numerical algorithms adopted. In particular, as discussed in Sec. 5, the refined integration rule (48) for calculating either the moments (17) or the quantity (52) must be used for reaching this level of accuracy. On the other hand, the standard deviations, crucial for the extraction of DVs in a significant way, differ among the various tests. For the lower value of s_0, the tests based on the L² and L∞ norms produce comparable uncertainties, with a successful and statistically significant (by three standard deviations) detection of DVs. The test based on the pinched weight (50) is, however, unable to detect DVs in a significant way even at low s_0. For s_0 near the end-point, due to the large uncertainties towards the edge of the spectrum, a statistically significant determination of ζ is not possible. All the tests have very large uncertainties, although one may note that the performance of the