Simultaneous measurement of forward-backward asymmetry and top polarization in dilepton final states from tt̄ production at the Tevatron
We present a simultaneous measurement of the forward-backward asymmetry and the top-quark polarization in tt̄ production in dilepton final states, using proton-antiproton collisions at √s = 1.96 TeV recorded with the D0 detector. To reconstruct the distributions of kinematic observables we employ a matrix element technique that calculates the likelihood of the possible kinematic configurations. After accounting for the presence of background events and for calibration effects, we obtain a forward-backward asymmetry of and a top-quark polarization times spin analyzing power in the beam basis of , with a correlation of between the measurements. If we constrain the forward-backward asymmetry to its expected standard model value, we obtain a measurement of the top polarization of . If we constrain the top polarization to its expected standard model value, we measure a forward-backward asymmetry of . A combination with the D0 measurement in the lepton+jets final state yields an asymmetry of . Within their respective uncertainties, all these results are consistent with the standard model expectations.
PACS numbers: 14.65.Ha, 12.38.Qk, 13.85.Qk, 11.30.Er
The D0 Collaboration (with visitors from Augustana College, Sioux Falls, SD, USA, The University of Liverpool, Liverpool, UK, DESY, Hamburg, Germany, CONACyT, Mexico City, Mexico, SLAC, Menlo Park, CA, USA, University College London, London, UK, Centro de Investigacion en Computacion - IPN, Mexico City, Mexico, Universidade Estadual Paulista, São Paulo, Brazil, Karlsruher Institut für Technologie (KIT) - Steinbuch Centre for Computing (SCC), D-76128 Karlsruhe, Germany, Office of Science, U.S. Department of Energy, Washington, D.C. 20585, USA, American Association for the Advancement of Science, Washington, D.C. 20005, USA, Kiev Institute for Nuclear Research, Kiev, Ukraine, University of Maryland, College Park, Maryland 20742, USA, and European Organization for Nuclear Research (CERN), Geneva, Switzerland)
In proton-antiproton collisions at √s = 1.96 TeV, top quark pairs are predominantly produced in valence quark-antiquark annihilations. The standard model (SM) predicts this process to be slightly forward-backward asymmetric: the top quark (antiquark) tends to be emitted in the direction of the incoming quark (antiquark), and thus in the direction of the incoming proton (antiproton). The forward-backward asymmetry in tt̄ production is mainly due to positive contributions from the interference between the tree-level and next-to-leading-order (NLO) box diagrams. It receives smaller negative contributions from the interference between initial- and final-state radiation. Interference with electroweak processes further increases the asymmetry. In the SM, the asymmetry is predicted to be Bernreuther and Si (2012); Czakon et al. (2015); Kidonakis (2015). Within the SM, the longitudinal polarizations of the top quark and antiquark are due to parity-violating electroweak contributions to the production process. The polarization is expected to be for all choices of the spin quantization axis Bernreuther et al. (2006); Bernreuther (2008).
Physics beyond the SM could affect the production mechanism and thus both the forward-backward asymmetry and the top quark and antiquark polarizations. In particular, models with a new parity-violating interaction, such as axigluon models Frampton and Glashow (1987); Hall and Nelson (1985); Antunano et al. (2008); Frampton et al. (2010), can induce a large positive or negative asymmetry together with a sizable polarization.
The tt̄ production asymmetry, A_FB, is defined in terms of the difference between the rapidities of the top and antitop quarks, Δy = y_t − y_t̄:

A_FB = [N(Δy > 0) − N(Δy < 0)] / [N(Δy > 0) + N(Δy < 0)],

where N(Δy > 0) [N(Δy < 0)] is the number of events with positive [negative] rapidity difference. By definition, A_FB is independent of effects from the top quark decay such as top quark polarization. However, it requires the reconstruction of the tt̄ system from the decay products, which is especially challenging in dilepton channels.
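As an illustration, the counting asymmetry defined above can be sketched in a few lines of Python; the function name and the toy sample are ours, not part of the D0 analysis code:

```python
def forward_backward_asymmetry(delta_y):
    """Counting asymmetry A_FB = (N+ - N-) / (N+ + N-) from a list of
    rapidity differences delta_y = y_t - y_tbar.  Events with delta_y == 0
    carry no directional information and are ignored."""
    n_fwd = sum(1 for dy in delta_y if dy > 0)
    n_bwd = sum(1 for dy in delta_y if dy < 0)
    return (n_fwd - n_bwd) / (n_fwd + n_bwd)

# Toy sample: 6 forward and 4 backward events -> A_FB = (6 - 4) / 10 = 0.2
sample = [0.3, 1.1, 0.7, 0.2, 0.5, 0.9, -0.4, -0.8, -0.1, -0.6]
afb = forward_backward_asymmetry(sample)
```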
Measurements of have been performed in the lepton+jets channels by the CDF Aaltonen et al. (2013a) and D0 Abazov et al. (2014a) Collaborations. Other asymmetry measurements have been performed using observables based on the pseudorapidity of the leptons from tt̄ decays Aaltonen et al. (2013b, 2014); Abazov et al. (2014b, 2013a). All these measurements agree with the SM predictions. A comprehensive review of asymmetry measurements performed at the Tevatron can be found in Ref. Aguilar-Saavedra et al. (2015).
As top quarks decay before they hadronize, their spin properties are transferred to the decay products. The top (antitop) polarization P+ (P−) along a given quantization axis n̂ affects the angular distribution of the positively (negatively) charged lepton Bernreuther (2008):

(1/σ) dσ/d(cos θ±) = (1 + κ± P± cos θ±)/2,

where θ+ (θ−) is the angle between the positively (negatively) charged lepton in the top (antitop) rest frame and the quantization axis n̂, and κ+ (κ−) is the spin analyzing power of the positively (negatively) charged lepton, which is close to +1 (−1) at the 0.1% level within the SM Bernreuther (2008). The polarization terms κ+P+ (κ−P−) can be obtained as two times the asymmetry of the cos θ+ (cos θ−) distribution.
In the following we use the beam basis, where n̂ is the direction of the proton beam in the zero momentum frame. Since we only use the beam basis, we omit the subscript in the following and define the polarization observable as:
Polarization effects have been studied at the Tevatron in the context of the measurements of the leptonic asymmetries in Ref. Abazov et al. (2013b), but no direct measurement of the polarization has been performed. Measurements of the polarization have been conducted for top pair production in proton-proton collisions at the Large Hadron Collider at √s = 7 TeV. These measurements, performed for different choices of quantization axis, are all consistent with the SM expectations Chatrchyan et al. (2014); Aad et al. (2013).
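The extraction of κP as twice the counting asymmetry of the cos θ distribution can be checked on a toy sample; everything here (names, the injected value, the rejection sampler) is ours and only illustrates the relation quoted above:

```python
import random

def kappa_p_from_cos_theta(cos_theta):
    """Estimate kappa*P as twice the counting asymmetry of cos(theta);
    follows from d(sigma)/d(cos theta) proportional to (1 + kP*cos theta)/2."""
    n_pos = sum(1 for c in cos_theta if c > 0)
    n_neg = sum(1 for c in cos_theta if c < 0)
    return 2.0 * (n_pos - n_neg) / (n_pos + n_neg)

def draw_cos_theta(rng, kp):
    """Rejection-sample cos(theta) from (1 + kp*cos theta)/2, valid for |kp| <= 1
    since the density is bounded by 1 on [-1, 1]."""
    while True:
        c = rng.uniform(-1.0, 1.0)
        if rng.random() < (1.0 + kp * c) / 2.0:
            return c

rng = random.Random(5)
sample = [draw_cos_theta(rng, 0.4) for _ in range(200_000)]
est = kappa_p_from_cos_theta(sample)   # statistically close to the injected 0.4
```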
This article presents a simultaneous measurement of and with the D0 detector in the dilepton decay channel. It is based on the full Tevatron integrated luminosity of , using final states with two leptons: ee, eμ, or μμ. We first reconstruct the Δy, cos θ+, and cos θ− distributions employing a matrix element integration technique similar to that used for the top-quark mass measurement in the dilepton channel Abazov et al. (2011). These distributions are used to extract raw measurements of the asymmetry and polarization in data. The experimental observables are correlated because of acceptance and resolution effects. Using a mc@nlo Frixione and Webber (2002, 2008) simulation, we compute the relation between the raw measurements and the true parton-level asymmetry and polarization to determine calibration corrections. We then extract the final measured values of and . This is the first measurement of the forward-backward asymmetry obtained from the reconstructed Δy distribution in the dilepton channel and the first measurement of the top quark polarization at the Fermilab Tevatron collider.
II Detector and object reconstruction
The D0 detector used for the Run II of the Fermilab Tevatron collider is described in detail in Refs. Abachi et al. (1994); Abazov et al. (2006); Abolins et al. (2008); Angstadt et al. (2010). The innermost part of the detector is composed of a central tracking system with a silicon microstrip tracker (SMT) and a central fiber tracker embedded within a 2 T solenoidal magnet. The tracking system is surrounded by a central preshower detector and a liquid-argon/uranium calorimeter with electromagnetic, fine hadronic, and coarse hadronic sections. The central calorimeter (CC) covers pseudorapidities Not (a) of . Two end calorimeters (EC) extend the coverage to , while the coverage of the pseudorapidity region , where the EC and CC overlap, is augmented with scintillating tiles. A muon spectrometer, with pseudorapidity coverage of , is located outside the calorimetry and comprises drift tubes and scintillation counters, before and after iron toroidal magnets. Trigger decisions are based on information from the tracking detectors, calorimeters, and muon spectrometer.
Electrons are reconstructed as isolated clusters in the electromagnetic calorimeter and required to spatially match a track in the central tracking system. They have to pass a boosted decision tree Abazov et al. (2014c) criterion based on calorimeter shower shape observables, calorimeter isolation, a spatial track match probability estimate, and the ratio of the electron cluster energy to track momentum (). Electrons are required to be in the acceptance of the electromagnetic calorimeter ( or ).
Muons are identified by the presence of at least one track segment reconstructed in the acceptance () of the muon spectrometer that is spatially consistent with a track in the central tracking detector Abazov et al. (2014d). The transverse momentum and charge are measured by the curvature in the central tracking system. The angular distance to the nearest jet, the momenta of charged particles in a cone around the muon track, and the energy deposited around the muon trajectory in the calorimeter, are used to select isolated muons.
III Dataset and Event Selection
The signature of tt̄ production in dilepton final states consists of two high-pT leptons (electrons or muons), two high-pT jets arising from the showering of two b quarks, and missing transverse energy due to the undetected neutrinos. The main backgrounds in this final state arise from Z/γ*+jets production, with Z/γ* → ee, μμ, or ττ, and from diboson production (WW, WZ, ZZ). These backgrounds are evaluated from Monte Carlo (MC) simulated samples as described in section IV.3. Another source of background comes from W+jets and multijet events, in which one or two jets are misreconstructed as electrons or a muon from a jet passes the isolation criteria. The contribution from these backgrounds, denoted as “instrumental background events”, is estimated directly from data as described in section IV.5. Each of the dilepton channels is subject to a different mixture and level of background contamination, in particular for the background arising from the Z/γ* process. We therefore apply slightly different selection requirements. The main selection criteria to obtain the final samples of candidate events are:
We select two high ( ) isolated leptons of opposite charge.
We require that at least one electron passes a single electron trigger condition in the channel ( efficient), and that at least one muon passes a single muon trigger condition in the channel ( efficient). In the channel, we do not require any specific trigger condition, i.e., we use all D0 trigger terms ( efficient).
We require two or more jets of and .
We further improve the purity of the selection by exploiting the significant imbalance of transverse energy due to undetected neutrinos and by exploiting several topological variables:
The missing transverse energy is the magnitude of the missing transverse momentum, obtained from the vector sum of the transverse components of energy deposits in the calorimeter, corrected for the differences in detector response of the reconstructed muons, electrons, and jets.
The missing transverse energy significance, , is the logarithm of the probability to measure under the hypothesis that the true missing transverse momentum is zero, accounting for the energy resolution of individual reconstructed objects and underlying event Schwartzman (2004).
is the scalar sum of transverse momenta of the leading lepton and the two leading jets.
In the channel we require , in the channel , and in the channel and .
We require that at least one of the two leading jets be -tagged, using a cut on the multivariate discriminant described in Ref. Abazov et al. (2014f). The requirement is optimized separately for each channel. The selection efficiencies for these requirements are , , and for the , , and channels, respectively.
The integration of the matrix elements by vegas, described in section V.1, may return a tiny probability if the event is not consistent with the event hypothesis due to numerical instabilities in the integration process. After removing low probability events, we retain signal events in the MC simulation with an efficiency of 99.97%. For background MC, the efficiency is . We remove no data events with this requirement.
IV Signal and background samples
iv.1 Signal samples

To simulate the tt̄ signal, we employ MC events generated with the CTEQ6M1 parton distribution functions (PDFs) Nadolsky et al. (2008) and mc@nlo 3.4 Frixione and Webber (2002, 2008) interfaced to herwig 6.510 Corcella et al. (2001) for showering and hadronization. Alternate signal MC samples are generated to study systematic uncertainties and the shape of the Δy distribution. We use a sample generated with alpgen Mangano et al. (2003) interfaced to pythia 6.4 Sjostrand et al. (2006) for showering and hadronization and a sample generated with alpgen interfaced to herwig 6.510. For both samples we use cteq6l1 PDFs Nadolsky et al. (2008).
The mc@nlo generator is used for the nominal signal sample as it simulates NLO effects yielding a non-zero asymmetry. The value of the asymmetry at parton level without applying any selection requirement is , which is smaller than the SM prediction Czakon et al. (2015) that includes higher-order effects.
The MC events are generated with a top-quark mass of 172.5 GeV. They are normalized to a production cross section of 7.45 pb, which corresponds to the calculation of Ref. Moch and Uwer (2008) for this mass. The generated top mass of 172.5 GeV differs from the Tevatron average mass of Aaltonen et al. (2012). We correct for this small difference in section VI.2.
iv.2 Beyond standard model benchmarks
We also study the five benchmark axigluon models proposed in Ref. Carmona et al. (2014) that modify tt̄ production. For each of the proposed beyond-standard-model (BSM) benchmarks, we produce an MC sample using the madgraph Alwall et al. (2014) generator interfaced to pythia 6.4 for showering and hadronization, and the cteq6l1 PDFs. The boson model proposed in Ref. Carmona et al. (2014) is not considered here since it is excluded by our differential cross-section measurement Abazov et al. (2014g).
iv.3 Background estimated with simulated events
The background samples are generated using the CTEQ6L1 PDFs. The Z/γ*+jets events are generated using alpgen interfaced to pythia 6.4. We normalize the sample to the NNLO cross section Gavin et al. (2011). The transverse momentum distribution of the Z bosons is weighted to match the distribution observed in data Abazov et al. (2008), taking into account its dependence on the number of reconstructed jets. The diboson backgrounds are simulated using pythia and are normalized to the NLO cross section calculation performed with mcfm Campbell and Ellis (1999, 2010).
iv.4 D0 simulation
The signal and background processes except instrumental background are simulated with a detailed geant3-based Brun and Carminati (unpublished) MC simulation of the D0 detector. They are processed with the same reconstruction software as used for data. In order to model the effects of multiple interactions, the MC events are overlaid with events from random collisions with the same luminosity distribution as data. The jet energy calibration is adjusted in simulated events to match the one measured in data. Corrections for residual differences between data and simulation are applied to electrons, muons, and jets for both identification efficiencies and energy resolutions.
iv.5 Instrumental background estimated with data
The normalization of events with jets misidentified as electrons is estimated using the “matrix method” Abazov et al. (2007) separately for the ee and eμ channels. The contribution from jets producing identified muons in the μμ channel is obtained using the same selection criteria as for the sample of candidate events, but demanding that the leptons have the same charge. In the eμ channel, it is obtained in the same way but after subtracting the contribution from events with jets misidentified as electrons.
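A generic two-component matrix method of the kind referenced above can be sketched as follows; the yields and efficiencies below are purely illustrative, not the D0 values:

```python
def matrix_method(n_loose, n_tight, eff_real, eff_fake):
    """Solve the 2x2 system
         n_loose = n_real + n_fake
         n_tight = eff_real * n_real + eff_fake * n_fake
    for the real- and fake-lepton components, and return their yields in the
    tight (analysis) sample: (eff_fake * n_fake, eff_real * n_real)."""
    n_real = (n_tight - eff_fake * n_loose) / (eff_real - eff_fake)
    n_fake = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake, eff_real * n_real

# Illustrative numbers only: 1000 loose and 820 tight candidates,
# with assumed real-lepton and fake-lepton tight efficiencies.
fake_tight, real_tight = matrix_method(1000, 820, eff_real=0.85, eff_fake=0.20)
```

By construction the two returned yields add up to the observed tight count.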
Once the absolute contribution of instrumental background events has been determined, we also need “template samples” that model their kinematic properties. In the ee channel, the template for instrumental background events is obtained with the same selection criteria as for the samples of candidate events, but without applying the complete set of electron selection criteria. For the eμ and μμ channels, the contributions from instrumental background events are negligible and the result is not sensitive to the choice of template. For simplicity, we re-employ the ee template for both the eμ and μμ channels.
iv.6 Comparison of MC simulation to selected data
A comparison between the expected and observed numbers of events at the final selection levels is reported in Table 1. The selected sample is relatively pure with a background fraction varying between 10% and 16% depending on the channel.
A comparison of kinematic distributions between data and expectations at the final selection level is shown in Fig. 1.
V Matrix element method
To reconstruct distributions of kinematic observables describing the events, we use a novel modification of the matrix element (ME) integration developed for the top-quark mass measurements Abazov et al. (2011, 2015) by the D0 Collaboration. In particular, this method is employed to reconstruct the Δy, cos θ+, and cos θ− distributions, from which estimates of the forward-backward asymmetry and top polarization are extracted.
v.1 Matrix element integration
In this expression, is a vector describing the kinematic quantities of the six final-state particles, is the matrix element describing the dynamics of the process, is the 6-body phase space term, and the functions are the PDFs of the incoming partons of momenta and and of different possible flavors. The function , referred to as the transfer function, describes the probability density of a parton state to be reconstructed as . The function describes the distribution of the transverse momentum of the tt̄ system, , while the azimuthal angle of this system, , is assumed to be uniformly distributed over . Finally, is the product of the experimental acceptance and the production cross section. The matrix element is computed at leading order (LO) for qq̄ annihilation only, as it represents the main subprocess of the total tt̄ production. The functions are given by the CTEQ6L1 leading-order PDF set. The transfer function is derived from parton-level simulated events generated with alpgen interfaced to pythia; more details on this function can be found in Ref. Grohsjean (2008). Ambiguities between parton and reconstructed-particle assignments are treated by defining an effective transfer function that sums over all the different assignments. As we consider only the two leading jets in the integration process, there are only two possibilities to assign a given jet to either the b or b̄ parton.
The number of variables to integrate is given by the six three-vectors of the final-state partons (of known mass), the transverse momentum and azimuthal angle of the tt̄ system, and the longitudinal momenta of the two incoming partons. These 22 integration variables are reduced by the following constraints: the lepton and b-quark directions are assumed to be perfectly measured (8 constraints), energy and momentum are conserved between the initial and final state (4 constraints), the ℓ+ν and ℓ−ν̄ systems have a mass of m_W Olive et al. (2014) (2 constraints), and the ℓ+νb and ℓ−ν̄b̄ systems have a mass of m_t (2 constraints). Transfer functions account for muon and jet energies. The transfer functions are the same as used in Ref. Abazov et al. (2015). The electron momentum measurement has a precision of , which is much better than the muon momentum resolution of typically 10% and the jet momentum resolution of typically 20%. We thus consider the electron momenta to be perfectly measured, which gives one additional constraint in the eμ channel and two additional constraints in the ee channel. Thus, we integrate over 4, 5, and 6 variables in the ee, eμ, and μμ channels, respectively. The integration variables are , , the energy of the leading jet, the energy of the sub-leading jet, and the energy of the muon(s) (if applicable).
The integration is performed using the MC-based numerical integration program vegas Lepage (1978, 1980). The interface to the vegas integration algorithm is provided by the GNU Scientific Library (GSL) Galassi et al. (2009). The MC integration consists of randomly sampling the space of integration variables, computing a weight for each of the random points that accounts for both the integrand and the elementary volume of the sampling space, and finally summing all of the weights. The random sampling is based on a grid in the space of integration that is iteratively optimized to ensure fine sampling in regions with large variations of the integrand. For each of the random points, equations are solved to transform these integration variables into the parton-level variables of Eq. (5), accounting for the measured quantities . The Jacobian of the transformation is also computed to ensure proper weighting of the sampling space elementary volume.
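The core idea, importance-sampled Monte Carlo integration, can be illustrated with a toy integrand; the adaptive grid refinement that vegas adds on top is not reproduced here, and all names and numbers are ours:

```python
import random

def mc_integrate(f, draw, pdf, n=200_000, seed=7):
    """Importance-sampled MC integration: draw x ~ pdf and average the
    weights f(x)/pdf(x).  This is the basic estimator underlying vegas,
    which additionally adapts its sampling grid to the integrand."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = draw(rng)
        total += f(x) / pdf(x)
    return total / n

# Toy integrand on (0, 1), peaked at x = 1; its exact integral is 1.
f = lambda x: 3.0 * x * x
# Sampling density p(x) = 2x concentrates points where f is large,
# mimicking the role of the optimized vegas grid.
draw = lambda rng: (1.0 - rng.random()) ** 0.5   # inverse CDF of p(x) = 2x
pdf = lambda x: 2.0 * x
estimate = mc_integrate(f, draw, pdf)            # close to 1.0
```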
v.2 Likelihood of a parton-level observable
For any kinematic quantity reconstructed from the parton momenta, for example Δy, we can build a probability density that measures the likelihood for its parton-level value to give the reconstructed event. This likelihood is obtained by inserting a Dirac delta term into the integrand of Eq. (5) and normalizing the resulting function to unit integral. The probability density is obtained by modifying the vegas integration algorithm: for each reconstructed event and each point in the integration space tested by vegas, the integrand of Eq. (5) and the corresponding value of the kinematic quantity are computed. After the full space of integration has been sampled, we obtain a weighted distribution of the variable that represents the likelihood function up to an overall normalization factor.
For each reconstructed event with observed kinematics , where is an event index, we obtain a likelihood function . By accumulating these likelihood functions over the sample of events, we obtain a distribution that estimates the true distribution of the variable . The performance of this method of reconstruction for parton-level distributions is estimated by comparing the accumulation of likelihood functions to the true parton-level quantities for MC events, as shown in Fig. 2.
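The accumulation of per-event likelihoods into an estimated parton-level distribution can be sketched as follows; binning, values, and weights are toy choices of ours, and each event's likelihood is normalized to unit area so that every event contributes equally:

```python
def accumulate_likelihoods(events, edges):
    """Sum per-event normalized likelihood histograms.
    `events` is a list of (values, weights) pairs: the integration points and
    their weights for one reconstructed event.  Each event's histogram is
    normalized to unit area before accumulation, so the sum over events
    estimates the parton-level distribution (total area = number of events)."""
    nbins = len(edges) - 1
    total = [0.0] * nbins
    for values, weights in events:
        hist = [0.0] * nbins
        for v, w in zip(values, weights):
            for i in range(nbins):
                if edges[i] <= v < edges[i + 1]:
                    hist[i] += w
                    break
        norm = sum(hist)
        if norm > 0:
            for i in range(nbins):
                total[i] += hist[i] / norm
    return total

edges = [-3, -1, 0, 1, 3]
events = [([-0.5, 0.4, 1.2], [1.0, 2.0, 1.0]),
          ([0.2, 0.3], [3.0, 1.0])]
summed = accumulate_likelihoods(events, edges)
```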
v.3 Raw estimate of the asymmetry
We could choose to use the maximum of the likelihood function to estimate the true value of Δy on an event-by-event basis. However, to maximize the use of the available information, we keep the full shape of the likelihood functions and accumulate them over the sample of events to obtain an estimate of the parton-level distribution, which is then used to determine the asymmetry. This method has been verified to perform better than the maximum-likelihood approach. The distribution is shown in Fig. 3(a), after subtracting the background contributions from the data. The raw asymmetry extracted from this distribution is reported in Table 2. Since this distribution is an approximate estimate of the true Δy distribution, the raw asymmetry is an approximation of the true asymmetry, and the measurement therefore needs to be calibrated, as discussed below.
The use of an event-by-event likelihood function allows us to define an asymmetry observable for each event
where the observable averaged over the sample of candidate events is equal to the raw asymmetry. By construction, the event-level observable lies in the interval [−1, 1]. For a perfectly reconstructed event without resolution effects, it would be equal to +1 for Δy > 0 or to −1 for Δy < 0. The use of this observable allows us to determine the statistical uncertainty on the raw asymmetry as the uncertainty on the mean of a distribution.
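The event-level asymmetry and the resulting statistical uncertainty on its mean can be sketched with toy inputs (all names and numbers are ours):

```python
from math import sqrt

def event_asymmetry(dy_values, weights):
    """Event-level asymmetry a = L(dy > 0) - L(dy < 0) for an event
    likelihood given as weighted samples of dy; a lies in [-1, 1]."""
    pos = sum(w for v, w in zip(dy_values, weights) if v > 0)
    neg = sum(w for v, w in zip(dy_values, weights) if v < 0)
    return (pos - neg) / (pos + neg)

def raw_asymmetry(event_asymmetries):
    """Raw asymmetry = mean of the event-level values; the statistical
    uncertainty is the standard error on that mean."""
    n = len(event_asymmetries)
    mean = sum(event_asymmetries) / n
    var = sum((a - mean) ** 2 for a in event_asymmetries) / (n - 1)
    return mean, sqrt(var / n)

a_events = [event_asymmetry([-0.5, 0.4, 1.2], [1.0, 2.0, 1.0]),   # a = +0.5
            event_asymmetry([0.2, -0.3], [1.0, 3.0])]             # a = -0.5
a_raw, a_err = raw_asymmetry(a_events)
```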
v.4 Raw estimate of the polarization
In the same way as in the previous section, we use the accumulation of the likelihoods of cos θ+ and cos θ− to estimate their distributions, which are shown in Figs. 3(b) and 3(c), after subtracting the background contributions from the data. The raw asymmetries and the raw polarization extracted from the data are reported in Table 3. As for the asymmetry, the measurement of the polarization needs to be calibrated to retrieve its parton-level value.
v.5 Statistical correlation between the measurements
We measure the statistical correlation between and in the data, which is needed to determine the statistical correlation between the measurements of and . In the same way as is the average of an event-by-event asymmetry , the raw asymmetries and are the averages of event-by-event asymmetries denoted by and . The correlation between and is identical to the correlation between the observables and . This correlation is determined from the background subtracted data by computing the RMS and mean values of the distributions of , , and :
We report the values measured in data in Table 4.
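The correlation computed from means and RMS values, as described above, is the Pearson coefficient; a sketch with toy event-level observables of our choosing:

```python
from math import sqrt

def correlation(a, b):
    """Pearson correlation of two event-level observables, built from the
    means of a, b, and a*b together with the RMS (standard deviation)
    of a and of b."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    mean_ab = sum(x * y for x, y in zip(a, b)) / n
    rms_a = sqrt(sum(x * x for x in a) / n - mean_a ** 2)
    rms_b = sqrt(sum(y * y for y in b) / n - mean_b ** 2)
    return (mean_ab - mean_a * mean_b) / (rms_a * rms_b)

# Perfectly correlated and perfectly anti-correlated toy observables:
rho_plus = correlation([0.1, 0.4, -0.2, 0.3], [0.2, 0.8, -0.4, 0.6])
rho_minus = correlation([0.1, 0.4, -0.2, 0.3], [-0.1, -0.4, 0.2, -0.3])
```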
VI Results corrected for calibration
The calibration procedure establishes a relation between the raw asymmetry and polarization, obtained after subtracting the background contributions, and the true asymmetry and polarization of tt̄ events. It corrects for dilution effects that arise from the limited acceptance for tt̄ events, the finite resolution of the kinematic reconstruction, and the simplifying assumptions used in the matrix element integration (e.g., leading-order ME, no gg ME, only two jets considered). The relation is inverted to extract measurements of the true asymmetry and polarization from the raw values observed in data.
The nominal calibration is determined using a sample of simulated mc@nlo dilepton events. The procedure is repeated with the samples from the other generators (see sections IV.1 and IV.2) to determine several systematic uncertainties. We normalize the individual ee, eμ, and μμ contributions to have the same proportions as observed in the data samples after subtracting the expected backgrounds.
vi.1 Samples for calibration
We produce test samples from the nominal MC sample by reweighting the events according to the true values of the parton-level Δy, cos θ+, and cos θ−. The reweighting factors are computed as follows.
vi.1.1 Reweighting of lepton angular distributions
The general expression for the double differential lepton angle distribution is Bernreuther (2008)
where is the spin correlation coefficient, which is in the SM. In the beam basis one has . We use this relation to reweight a given MC sample to simulate a target polarization of .
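A toy version of the angular reweighting can be written down using the single-lepton density only; the spin correlation term of the double-differential expression is omitted here, and all names and numbers are ours:

```python
import random

def polarization_weight(cos_theta, kp_gen, kp_target):
    """Per-lepton weight mapping a sample generated with kappa*P = kp_gen
    onto a target kappa*P = kp_target, as the ratio of the single-lepton
    densities (1 + kp*cos theta)/2.  The full reweighting of the
    double-differential distribution also involves the spin correlation
    coefficient, which this toy ignores."""
    return (1.0 + kp_target * cos_theta) / (1.0 + kp_gen * cos_theta)

# Toy: an unpolarized sample (kp_gen = 0, flat cos theta) reweighted to 0.3.
rng = random.Random(11)
cs = [rng.uniform(-1.0, 1.0) for _ in range(100_000)]
ws = [polarization_weight(c, 0.0, 0.3) for c in cs]
pos = sum(w for c, w in zip(cs, ws) if c > 0)
neg = sum(w for c, w in zip(cs, ws) if c < 0)
kp_est = 2.0 * (pos - neg) / (pos + neg)    # statistically close to 0.3
```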
vi.1.2 Reweighting of the Δy distribution
To determine a method of reweighting the Δy distribution, we study its shape using the different MC samples of section IV at the generated level, i.e., before event selection and reconstruction. Inspired by the studies performed for the distribution of the rapidity of the charged leptons in Ref. Hong et al. (2014), we rewrite it as
where is the ratio between the odd and even part of the distribution, also called differential asymmetry as a function of ; we then fit with an empirical odd function
where and are shape parameters, while is a magnitude parameter. The term was not needed in the study of Ref. Hong et al. (2014), but improves the modeling significantly for the case of Δy. The results of the fit for different MC samples are shown in Fig. 4. If we reweight a MC sample so that the even part of the distribution, the term , and the term are preserved, then the forward-backward asymmetry is proportional to .
These considerations yield the following procedure to produce a sample of test asymmetry starting from an MC sample of generated asymmetry . We first fit the differential asymmetry at the generated level with the function of Eq. (10) and determine the parameters , , and . Then we apply weights to the events processed through the D0 simulation
This procedure preserves the even part of the distribution of . It also preserves the original shape of the differential asymmetry, but changes its magnitude to the desired value.
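The effect of rescaling only the odd part of the Δy distribution can be verified on a deterministic toy; the even density and the odd differential asymmetry below are our choices (a simple Gaussian and tanh stand in for the fitted function of Eq. (10)):

```python
import math

def reweight_factor(dy, rho, scale):
    """With dN/d(dy) = E(dy)*(1 + rho(dy)), E even and rho odd, the weight
    w(dy) = (1 + scale*rho(dy)) / (1 + rho(dy)) multiplies the generated
    asymmetry by `scale` while preserving the even part and the shape of
    the differential asymmetry."""
    r = rho(dy)
    return (1.0 + scale * r) / (1.0 + r)

# Deterministic toy: symmetric dy grid, even density E, odd rho.
rho = lambda dy: 0.1 * math.tanh(dy)
grid = [i / 100.0 for i in range(-300, 301) if i != 0]
base = [math.exp(-dy * dy / 2.0) * (1.0 + rho(dy)) for dy in grid]
new = [w * reweight_factor(dy, rho, 2.0) for w, dy in zip(base, grid)]

def asym(dys, ws):
    pos = sum(w for dy, w in zip(dys, ws) if dy > 0)
    neg = sum(w for dy, w in zip(dys, ws) if dy < 0)
    return (pos - neg) / (pos + neg)

a_gen, a_new = asym(grid, base), asym(grid, new)   # a_new = 2 * a_gen exactly
```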
Starting from the nominal mc@nlo sample, we produce test samples using the product of the weights defined in sections VI.1.1 and VI.1.2. We use a grid of values for polarizations of and asymmetries of to obtain 30 samples in addition to the unweighted nominal sample. We apply the method of ME reconstruction to each of the 31 fully simulated samples and extract a raw measurement associated with a given parton-level . A fit to the obtained set of points in the space determines two affine functions that relate the reconstructed quantities to the true quantities: and . The affine functions fit the 31 points well, with residuals . We rewrite the affine relations using a matrix equation:
where is a calibration matrix and is a vector of offset terms. The values of the matrix and offsets are reported in Table 5 for the different dilepton channels. To determine the statistical uncertainties on the calibration parameters, we use an ensemble method: we split the mc@nlo samples into 100 independent ensembles and repeat the calibration procedure for each of them.
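Inverting the affine calibration relation amounts to a 2×2 linear solve; a sketch with illustrative (not the measured) coefficients:

```python
def invert_calibration(raw, M, q):
    """Invert the affine calibration  raw = M @ true + q  for the pair
    (asymmetry, polarization):  true = M^{-1} (raw - q), M a 2x2 matrix."""
    (m11, m12), (m21, m22) = M
    det = m11 * m22 - m12 * m21
    dx = raw[0] - q[0]
    dy = raw[1] - q[1]
    # Cramer's rule for the 2x2 system
    return ((m22 * dx - m12 * dy) / det, (m11 * dy - m21 * dx) / det)

# Illustrative calibration constants and raw values, chosen by us:
M = [[0.55, 0.05], [0.02, 0.45]]
q = [0.01, -0.02]
raw = [0.08, 0.03]
afb, pol = invert_calibration(raw, M, q)
```

Applying the forward relation to the result reproduces the raw inputs, which is a convenient closure check.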
vi.2 Measurement of the asymmetry and polarization after calibration
The calibration relation of Eq. (12) is inverted to retrieve the true partonic asymmetry and the true polarization from the reconstructed and . We obtain a measurement of reported in Table 6 for each dileptonic channel using the calibration coefficients from Table 5 and the raw measurements from Tables 2 and 3.
Two alpgen+pythia samples generated at different top-quark masses are used to estimate the dependence of the measurement on m_t. Considering a top mass of Aaltonen et al. (2012) as reference, the dilepton results reported in Table 6 have to be corrected by and for and , respectively. The corrected combined dilepton results are
VII Systematic uncertainties
We consider three categories of uncertainties. Uncertainties affecting the signal are obtained by deriving calibration coefficients from alternate signal models and propagating them to the final results. Uncertainties affecting the background have an impact on the raw measurements, and , as these observables are obtained after subtracting the background. They are propagated to the final measurement by applying the nominal calibration correction to the modified and . The third category consists of the uncertainties on the calibration method. Since the measurement is performed after background subtraction, the calibration is independent of the normalization of the simulation, and there is no systematic uncertainty due to signal normalization. The uncertainties on and due to the different sources are summarized in Table 7, together with the correlations.
vii.1 Uncertainties on signal
Several sources of systematic uncertainty due to the detector and reconstruction model affect the jets and thus the signal kinematics. We consider uncertainties on the jet energy scale, flavor-dependent jet response, and jet energy resolution Abazov et al. (2014e). We also take into account uncertainties associated with b tagging and vertexing Abazov et al. (2014f).
To estimate the impact of higher-order corrections, we compare the calibration obtained with mc@nlo+herwig to the calibration obtained with alpgen+herwig. To propagate the uncertainty on the simulation of initial- and final-state radiation (ISR/FSR), the amount of radiation is varied by scaling the ktfac parameter by a factor of or in an alpgen+pythia simulation of events Abazov et al. (2015). The hadronization and parton-shower model uncertainty is derived from the difference between the pythia and herwig generators, estimated by comparing alpgen+herwig to alpgen+pythia samples. The different parton-shower models used by various MC generators yield different amounts of ISR between forward and backward events Skands et al. (2012); Winter et al. (2013). The uncertainty on the ISR model is defined as 50% of the difference between the nominal results and the results derived from a mc@nlo simulation in which the dependence of the forward-backward asymmetry on the transverse momentum of the tt̄ system is removed. The uncertainty of 0.94 GeV on m_t Aaltonen et al. (2012) is propagated to the final result using two alpgen+pythia samples generated with different m_t values. We determine PDF uncertainties by varying the 20 parameters describing the CTEQ6M1 PDF Nadolsky et al. (2008) within their uncertainties.
vii.2 Uncertainties on background
The uncertainty on the background level is obtained by varying the instrumental background normalization by 50% and the overall background normalization by 20%. The model of the instrumental background kinematics is varied, using the same method as in Ref. Abazov et al. (2013a). We reweight the reconstructed , , and distributions by a factor of , where is the statistical uncertainty band of the distribution and is chosen to be positive for , , , and negative for , , and <