Measurement of the $W$ boson helicity in top quark decays using 5.4 fb$^{-1}$ of $p\bar{p}$ collision data
We present a measurement of the helicity of the $W$ boson produced in top quark decays using $t\bar{t}$ decays in the $\ell$+jets and dilepton final states selected from a sample of 5.4 fb$^{-1}$ of $p\bar{p}$ collisions recorded with the D0 detector at the Fermilab Tevatron collider. We measure the fractions of longitudinal and right-handed $W$ bosons, $f_0$ and $f_+$, respectively. This result is consistent at the 98% level with the standard model. A measurement with $f_0$ fixed to the value from the standard model is also presented.
pacs:14.65.Ha, 14.70.Fm, 12.15.Ji, 12.38.Qk, 13.38.Be, 13.88.+e
The D0 Collaboration111with visitors from Augustana College, Sioux Falls, SD, USA, The University of Liverpool, Liverpool, UK, SLAC, Menlo Park, CA, USA, ICREA/IFAE, Barcelona, Spain, Centro de Investigacion en Computacion - IPN, Mexico City, Mexico, ECFM, Universidad Autonoma de Sinaloa, Culiacán, Mexico, and Universität Bern, Bern, Switzerland.
The top quark is the heaviest known fundamental particle and was discovered in 1995 cdftopobs (); d0topobs () at the Tevatron proton-antiproton collider at Fermilab. The dominant top quark production mode at the Tevatron is $t\bar{t}$ pair production. Since the time of discovery, over 100 times more integrated luminosity has been collected, providing a large number of $t\bar{t}$ events with which to study the properties of the top quark. In the standard model (SM), the branching ratio for the top quark to decay to a $W$ boson and a $b$ quark is close to 100%. The on-shell $W$ boson from the top quark decay has three possible helicity states, and we define the fractions of $W$ bosons produced in these states as $f_0$ (longitudinal), $f_-$ (left-handed), and $f_+$ (right-handed). In the SM, the top quark decays via the charged weak current interaction, which strongly suppresses right-handed $W$ bosons and predicts the leading-order values of $f_0$ and $f_-$ in terms of the top quark mass ($m_t$), $W$ boson mass ($M_W$), and $b$ quark mass ($m_b$) fval ()
where $x = M_W/m_t$ and $y = m_b/m_t$. With the present measurements of $m_t = 173.3$ GeV/$c^2$ topmass () and $M_W = 80.4$ GeV/$c^2$ wmass (), and taking $m_b$ to be 5 GeV/$c^2$, the SM expected values are $f_0 = 0.698$, $f_- = 0.301$, and $f_+$ of order $10^{-4}$. The absolute uncertainties on these SM expectations, which arise from uncertainties on the particle masses as well as contributions from higher-order effects, are small compared to the experimental precision of this measurement fval ().
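The quoted LO values can be cross-checked numerically in the massless-$b$ limit, where the fractions reduce to a simple closed form. The function name and the $m_b = 0$ simplification below are ours, not the paper's; the small shift relative to the quoted 0.698 comes from the finite $b$ mass and higher-order effects.

```python
# Cross-check of the quoted SM values in the massless-b limit, where the
# leading-order fractions reduce to f0 = m_t^2 / (m_t^2 + 2 M_W^2).
# The function name and the m_b = 0 simplification are ours, not the paper's.

def helicity_fractions_lo(m_top, m_w):
    """Return (f0, f_minus, f_plus) at leading order for a massless b quark."""
    f0 = m_top**2 / (m_top**2 + 2.0 * m_w**2)
    return f0, 1.0 - f0, 0.0  # f_+ vanishes exactly when m_b = 0

f0, fm, fp = helicity_fractions_lo(173.3, 80.4)
print(f"f0 = {f0:.3f}, f- = {fm:.3f}, f+ = {fp:.3f}")  # f0 = 0.699, f- = 0.301, f+ = 0.000
```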
In this paper, we present a measurement of the $W$ boson helicity fractions $f_0$ and $f_+$, constraining $f_-$ through the unitarity requirement $f_0 + f_- + f_+ = 1$. Any significant deviation from the SM expectation would be an indication of new physics, arising from either a deviation from the expected coupling at the $Wtb$ vertex or the presence of non-SM events in the data sample. The most recently published results are summarized in Table 1.
|D0, 1 fb$^{-1}$ prevd0result ()|
|CDF, 2.7 fb$^{-1}$ prevcdfresult ()|
The extraction of the $W$ boson helicities is based on the measurement of the angle $\theta^*$ between the direction opposite to that of the top quark and the direction of the down-type fermion (charged lepton or $d$, $s$ quark) decay product of the $W$ boson, in the $W$ boson rest frame. The dependence of the distribution of $\cos\theta^*$ on the $W$ boson helicity fractions is given by

$\omega(c) = \frac{3}{4}\left(1 - c^2\right) f_0 + \frac{3}{8}\left(1 - c\right)^2 f_- + \frac{3}{8}\left(1 + c\right)^2 f_+,$

with $c = \cos\theta^*$. After selection of a $t\bar{t}$-enriched sample, the four-momenta of the $t\bar{t}$ decay products in each event are reconstructed as described below, permitting the calculation of $\cos\theta^*$. Once the $\cos\theta^*$ distribution is measured, the values of $f_0$ and $f_+$ are extracted with a binned Poisson likelihood fit to the data. The measurement presented here is based on $p\bar{p}$ collisions at a center-of-mass energy $\sqrt{s} = 1.96$ TeV corresponding to an integrated luminosity of 5.4 fb$^{-1}$, five times more than the amount used in the result of Ref. prevd0result ().
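As a sketch of the angular distribution above, the following hypothetical helper evaluates the density and verifies numerically that it is normalized for any set of fractions summing to unity; the function names and fraction values are ours.

```python
# Sketch of the angular density of Eq. 4 and a numerical normalization check.
# Function names are ours; the fractions used below are SM-like values.

def omega(c, f0, fm, fp):
    """Density of c = cos(theta*) for given helicity fractions."""
    return 0.75 * (1 - c * c) * f0 + 0.375 * (1 - c) ** 2 * fm + 0.375 * (1 + c) ** 2 * fp

def integrate(f, a, b, n=10000):
    """Midpoint-rule integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

norm = integrate(lambda c: omega(c, 0.698, 0.301, 0.001), -1.0, 1.0)
print(f"integral over [-1, 1] = {norm:.4f}")  # integral over [-1, 1] = 1.0000
```

Each pure-helicity term integrates to one on its own, so any convex combination of fractions remains normalized.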
The D0 Run II detector d0nim () is a multipurpose detector which consists of three primary systems: a central tracking system, calorimeters, and a muon spectrometer. We use a standard right-handed coordinate system. The nominal collision point is the center of the detector, with coordinates (0,0,0). The direction of the proton beam is the positive $z$ axis. The $x$ axis is horizontal, pointing away from the center of the Tevatron ring, and the $y$ axis points vertically upwards. The polar angle $\theta$ is defined such that $\theta = 0$ is the $+z$ direction. Usually, the polar angle is replaced by the pseudorapidity $\eta = -\ln[\tan(\theta/2)]$. The azimuthal angle $\phi$ is defined such that $\phi = 0$ points along the $x$ axis, away from the center of the Tevatron ring.
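The pseudorapidity definition above can be sketched directly; the helper name is ours.

```python
import math

# Hypothetical helper implementing the pseudorapidity definition above.
def pseudorapidity(theta):
    """eta = -ln(tan(theta / 2)) for polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

# eta vanishes at 90 degrees and grows as the direction approaches the beam axis.
print(pseudorapidity(math.pi / 2))
print(pseudorapidity(0.1))
```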
The silicon microstrip tracker (SMT) is the innermost part of the tracking system and has a six-barrel longitudinal structure, where each barrel consists of a set of four layers arranged axially around the beam pipe. A fifth layer of SMT sensors was installed near the beam pipe in 2006 smtl0 (). The data set recorded before this addition is referred to as the "Run IIa" sample, and the subsequent data set is referred to as the "Run IIb" sample. Radial disks are interspersed between the barrel segments. The SMT provides a spatial resolution of approximately 10 $\mu$m in $r$-$\phi$ and 100 $\mu$m in $z$ (where $r$ is the radial distance in the $x$-$y$ plane). The central fiber tracker (CFT) surrounds the SMT and consists of eight concentric carbon fiber barrels holding doublet layers of scintillating fibers (one axial and one small-angle stereo layer). The solenoid surrounds the CFT and provides a 2 T uniform axial magnetic field.
The liquid-argon/uranium calorimeter system is housed in three cryostats, with the central calorimeter (CC) covering the central pseudorapidity region and two end calorimeters (EC) extending the coverage to large $|\eta|$. The calorimeter is made up of unit cells consisting of an absorber plate and a signal board; liquid argon, the active material of the calorimeter, fills the gap. The inner part of the calorimeter is the electromagnetic (EM) section and the outer part is the hadronic section.
The muon system is the outermost part of the D0 detector and covers $|\eta| \lesssim 2$. It is primarily made of two types of detectors, drift tubes and scintillators, and consists of three layers (A, B, and C). Between layers A and B there is magnetized steel with a 1.8 T toroidal field.
III. Data and Simulation Samples
At the Tevatron, with proton and antiproton bunches colliding at intervals of 396 ns, the collision rate is about 2.5 MHz. Of these beam crossings at D0, only those that produce events identified by a three-level trigger system as having properties matching the characteristics of physics processes of interest are retained, at a rate of about 100 Hz d0nim (); l1cal2b (). This analysis is performed using events collected with the triggers applicable to the $\ell$+jets and dilepton final states between April 2002 and June 2009, corresponding to a total integrated luminosity of 5.4 fb$^{-1}$. Analysis of the Run IIa sample, which totals about 1 fb$^{-1}$, was presented in Ref. prevd0result (). Here we describe the analysis of the Run IIb data sample and then combine our result with the result from Ref. prevd0result () when reporting our measurement from the full data sample.
The Monte Carlo (MC) simulated samples used for modeling the data are generated with alpgen ref:alpgen () interfaced to pythia ref:pythia () for parton shower simulation, passed through a detailed detector simulation based on geant geant (), overlaid with data collected from a random subsample of beam crossings to model the effects of noise and multiple interactions, and reconstructed using the same algorithms that are used for data. For the signal ($t\bar{t}$) sample, we must model the distribution of $\cos\theta^*$ corresponding to any set of values for the $W$ boson helicity fractions, a task that is complicated by the fact that alpgen can only produce linear combinations of $V{-}A$ and $V{+}A$ couplings. Hence, for this analysis, we use samples that are either purely $V{-}A$ or purely $V{+}A$, and use a reweighting procedure (described below) to form models of arbitrary helicity states. alpgen is also used for generating all $V$+jets processes, where $V$ represents the $W$ and $Z$ vector bosons. pythia is used for generating diboson ($WW$, $WZ$, and $ZZ$) backgrounds in the dilepton channels. Background from multijet production is modeled using data.
IV. Event Selection
We expect a priori that our measurement will be limited by statistics, so our analysis strategy aims to maximize the acceptance for $t\bar{t}$ events. The selection proceeds in two steps. In the first step, a loose initial selection using data quality, trigger, object identification, and kinematic criteria is applied to define a sample with the characteristics of $t\bar{t}$ events. Subsequently, a multivariate likelihood discriminant is defined to separate the $t\bar{t}$ signal from the background in the data. We use events in the $\ell$+jets and dilepton decay channels, which are defined below.
In the $\ell$+jets decay channel, $t\bar{t} \to W^+ b\, W^- \bar{b} \to \ell\nu b\, q\bar{q}'\bar{b}$, events contain one charged lepton (where lepton here refers to an electron or a muon), at least four jets with two of them being $b$ quark jets, and significant missing transverse energy $\met$ (defined as the opposite of the vector sum of the transverse energies in each calorimeter cell, corrected for the energy carried by identified muons and for energy added or subtracted by the jet energy calibration described below). The event selection requires at least four jets above a minimum transverse momentum, with a higher $p_T$ threshold on the leading jet. At least one lepton is required, with a minimum $p_T$ and with $|\eta| < 1.1$ (2.0) for electrons (muons). Requirements are also made on the value of $\met$ and on the opening angle between the $\met$ vector and the lepton (to reduce the contribution of events in which mismeasurement of the lepton energy gives rise to spurious $\met$); the thresholds differ between the $e$+jets and $\mu$+jets channels. In addition, for the $\mu$+jets channel, the invariant mass of the selected muon and any other muon in the event is required to be outside a window around the $Z$ boson mass.
For the dilepton decay channel, $t\bar{t} \to W^+ b\, W^- \bar{b} \to \ell^+\nu b\, \ell^-\bar{\nu}\bar{b}$, the signature is two leptons of opposite charge, two $b$ quark jets, and significant $\met$. The event selection requires at least two jets and two leptons (electrons or muons) satisfying minimum $p_T$ requirements. The muons are required to be within the acceptance of the muon system, and the electrons within the fiducial regions of the CC or EC.
Jets are defined using a midpoint cone algorithm jetalg () with radius 0.5. Their energies are first calibrated to be equal, on average, to the sums of the energies of the particles within the jet cone. This calibration accounts for the energy response of the calorimeters, the energy that crosses the cone boundary due to the transverse shower size, and the additional energy from event pileup and multiple interactions in a single beam crossing. The energy added to or subtracted from each jet by this calibration is propagated to the calculation of $\met$. Subsequently, an additional correction for the average energy radiated by gluons outside of the jet cone is applied to the jet energy. Electrons are identified by their energy deposition and shower shape in the calorimeter combined with information from the tracking system. Muons are identified using information from the muon detector and the tracking system. We require the (two) highest-$p_T$ lepton(s) to be isolated from other tracks and calorimeter energy deposits in the $\ell$+jets (dilepton) channel. For all channels, we require a well-reconstructed primary vertex (PV), with the distance in $z$ between this vertex and the point of closest approach of the lepton track being less than 1 cm.
The main sources of background after the initial selection in the $\ell$+jets channel are $W$+jets and multijet production; in the dilepton channels they are $Z$ boson and diboson production as well as multijet and $W$+jets production. Events with fewer leptons than required (multijet events, or $W$+jets events in the dilepton channel) can enter the sample when jets are either misidentified as leptons or contain a lepton from semileptonic $b$ quark decay that passes the electron likelihood or muon isolation criterion. In all cases they are modeled using data with relaxed lepton identification or isolation criteria. The multijet contribution to the $\ell$+jets final states in the initially selected sample is estimated from data following the method described in Ref. matrix (). This method relies on the selection of two data samples, one (the tight sample) with the standard lepton criteria, and the other (the loose sample) with relaxed isolation or identification criteria. The numbers of events in each sample are:

$N_{\rm loose} = N_\ell + N_{\rm MJ},$
$N_{\rm tight} = \varepsilon_\ell N_\ell + \varepsilon_{\rm MJ} N_{\rm MJ}.$

Here the coefficient $\varepsilon_\ell$ is the efficiency for isolated leptons in $W$ or $Z$ boson events to satisfy the standard lepton requirements, while $\varepsilon_{\rm MJ}$ is the efficiency for a jet in multijet events to satisfy those requirements. We measure $\varepsilon_\ell$ in $Z \to \ell\ell$ control samples and $\varepsilon_{\rm MJ}$ in multijet control samples. Inserting the measured values, we solve Eqs. 5 and 6 to obtain the number of multijet events ($N_{\rm MJ}$) and the number of events with isolated leptons ($N_\ell$). In the dilepton channels we model the background due to jets being misidentified as isolated leptons using data events where both leptons have the same charge. This background originates from multijet events with two jets misidentified as leptons and from $W$+jets events with one jet misidentified as a lepton.
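The matrix method amounts to solving two linear equations in two unknowns. A minimal sketch, with illustrative counts and efficiencies (not the measured ones):

```python
# Sketch of the matrix method: solving the two linear equations
#   N_loose = N_lep + N_mj
#   N_tight = eps_lep * N_lep + eps_mj * N_mj
# for the isolated-lepton and multijet yields. All numbers are illustrative.

def matrix_method(n_loose, n_tight, eps_lep, eps_mj):
    """Return (N_lep, N_mj) from the loose/tight counts and the two efficiencies."""
    denom = eps_lep - eps_mj
    n_lep = (n_tight - eps_mj * n_loose) / denom
    n_mj = (eps_lep * n_loose - n_tight) / denom
    return n_lep, n_mj

n_lep, n_mj = matrix_method(n_loose=2000.0, n_tight=1300.0, eps_lep=0.85, eps_mj=0.15)
print(f"isolated-lepton yield: {n_lep:.1f}, multijet yield: {n_mj:.1f}")
```

The method only works when $\varepsilon_\ell$ and $\varepsilon_{\rm MJ}$ are well separated; the solution degenerates as the two efficiencies approach each other.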
To separate the $t\bar{t}$ signal from these sources of background, we define a multivariate likelihood discriminant and retain only events above a certain threshold in its value. The set of variables used in the likelihood and the threshold value are optimized separately for each decay channel. The first step in the optimization procedure is to identify a set of candidate variables that may be used in the likelihood. The set we consider is:
Aplanarity $\mathcal{A}$, defined as 3/2 of the smallest eigenvalue of the normalized momentum tensor for the jets (in the $\ell$+jets channels) or jets and leptons (in the dilepton channels). The aplanarity is a measure of the deviation from flatness of the event, and $t\bar{t}$ events tend to have larger values than background.
Sphericity $\mathcal{S}$, defined as 3/2 of the sum of the two smallest eigenvalues of the normalized momentum tensor for the jets (in the $\ell$+jets channels) or jets and leptons (in the dilepton channels). This variable is a measure of the isotropy of the energy flow in the event, and $t\bar{t}$ events tend to have larger values than background.
Centrality $\mathcal{C}$, defined as $H_T/H$, where $H_T$ is the scalar sum of the jet transverse energies and $H$ is the sum of all jet energies. The centrality is similar to $H_T$ but normalized in a way that minimizes the dependence on the top quark mass.
$K_{T{\rm min}}'$, defined as $\Delta R_{jj}^{\rm min}\, E_T^{\rm min}/E_T^W$, where $\Delta R_{jj}^{\rm min}$ is the distance in $\eta$-$\phi$ space between the closest pair of jets, $E_T^{\rm min}$ is the lower jet $E_T$ in that pair, and $E_T^W$ is the transverse energy of the leptonically decaying $W$ boson (in the dilepton channels $E_T^W$ is the magnitude of the vector sum of the $\met$ and the leading lepton $p_T$). Only the four leading-$p_T$ jets are considered in computing this variable. Jets arising from gluon radiation (as is the case for most of the background) tend to have lower values of $K_{T{\rm min}}'$.
$m_{jj}^{\rm min}$, defined as the smallest dijet mass among pairs of selected jets. This variable is sensitive to gluon radiation and tends to be smaller for background than for signal.
$H_T$, defined as the scalar sum of the transverse energies of all the selected jets and leptons. Jets arising from gluon radiation often have lower energy than jets in $t\bar{t}$ events, and leptons arising from the decay of heavy flavor jets often have lower energy than leptons from $W$ boson decay, so background events tend to have smaller values of $H_T$ than signal.
$\chi^2$, defined as the $\chi^2$ of a kinematic fit of $\ell$+jets final states to the $t\bar{t}$ hypothesis. Signal events tend to have smaller values than background. This variable is not used for dilepton events, for which a kinematic fit is underconstrained.
$\Delta\phi(\ell, \met)$, defined as the angle between the leading lepton and the $\met$ direction. $W$+jets events with $\met$ arising from a mismeasured lepton tend to have $\Delta\phi$ close to 0 or $\pi$.
$b$ jet content of the event. Due to the long lifetime of the $b$ quark, tracks within jets arising from $b$ quarks have different properties (such as larger impact parameters with respect to the PV and the presence of secondary decay vertices) than tracks within light-quark or gluon jets. The consistency of a given jet with the hypothesis that the jet was produced by a $b$ quark is quantified with a neural network (NN) that considers several properties of the tracks contained within the jet cone bidNIM (). In the $\ell$+jets channels, we take the average of the NN outputs of the two most $b$-like jets to form a single variable, and in the dilepton channels we take the NN outputs of the two most $b$-like jets as separate variables (the largest and the second-largest NN value). For top quark events, these variables tend to be close to one, while for events containing only light jets they tend to be close to zero.
$\met$ or $\chi^2_Z$. For the $ee$ and $e\mu$ channels only, $\met$ is considered as a variable in the likelihood discriminant. In the $\mu\mu$ channel, where spurious $\met$ can arise from mismeasurement of the muon momentum, we instead use $\chi^2_Z$, the $\chi^2$ of a kinematic fit to the $Z \to \mu\mu$ hypothesis.
Dilepton mass. Also for the dilepton channels only, the invariant mass of the lepton pair is considered as a variable in the likelihood. The motivation is to discriminate against $Z$ boson production.
We consider all combinations of the above variables to select the optimal set to use for each decay channel. For a given combination of variables, the likelihood ratio is defined as

$L_t = \frac{\exp\left[\sum_{i=1}^{N_{\rm var}} \ln(S/B)_i^{\rm fit}\right]}{\exp\left[\sum_{i=1}^{N_{\rm var}} \ln(S/B)_i^{\rm fit}\right] + 1},$

where $N_{\rm var}$ is the number of input variables used in the likelihood, and $(S/B)_i^{\rm fit}$ is the ratio of the parameterized signal and background probability density functions for variable $i$. We consider all possible subsets of the above variables and scan across all potential selection criteria on the likelihood. For each definition and prospective selection criterion, we compute the following figure of merit (FOM):

${\rm FOM} = \frac{N_S}{\sqrt{N_S + N_B + \sigma_{N_B}^2}},$

where $N_S$ and $N_B$ are the numbers of signal and background events expected to satisfy the selection criterion.
The term $\sigma_{N_B}$ reflects the uncertainty in the background selection efficiency arising from any mismodeling of the input variables in the MC. To assess $\sigma_{N_B}$, we compare each variable in data and MC in background-dominated samples. The background-dominated samples are created by forming a multivariate likelihood ratio (Eq. 7) that does not use the variable under study, nor any variable that is strongly correlated with it, where the criterion is a correlation coefficient between $-0.10$ and 0.10. We select events that have low values of this likelihood, and are therefore unlikely to be $t\bar{t}$ events, such that 95% of MC $t\bar{t}$ events are rejected. Because the $t\bar{t}$ contribution to the selected data sample is negligible, we can directly compare the background model to data. The impact of any mismodeling on the likelihood distribution is assessed by taking the ratio of the observed to the expected distributions as a function of each variable and fitting this ratio to a polynomial. The result is that for each variable $x_i$ we build a function $\rho_i(x_i)$ that encodes the data/MC discrepancies in that variable. For each simulated background event, we reweight the likelihood according to the data/MC differences. For example, for a likelihood that uses $n$ of the possible variables, the likelihood is given a weight

$w = \prod_{i=1}^{n} \rho_i(x_i).$

The quantity $\sigma_{N_B}$ is the difference in the predicted background yield when the unweighted and weighted distributions are used for the background. This uncertainty is propagated through the analysis as one component of the total uncertainty in the background yield.
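The discriminant and figure of merit described above can be sketched as follows; the per-variable signal/background pdf ratios are hypothetical stand-ins for the parameterized fits used in the analysis.

```python
import math

# Sketch of the likelihood-ratio discriminant (Eq. 7) and the figure of merit
# (Eq. 8). The pdf ratios below are hypothetical stand-ins for the fitted
# parameterizations of the analysis.

def likelihood_ratio(sb_ratios):
    """L = exp(sum ln(s/b)) / (exp(sum ln(s/b)) + 1), mapped into (0, 1)."""
    s = math.exp(sum(math.log(r) for r in sb_ratios))
    return s / (s + 1.0)

def figure_of_merit(n_sig, n_bkg, sigma_bkg):
    """FOM = N_S / sqrt(N_S + N_B + sigma_NB^2); larger is better."""
    return n_sig / math.sqrt(n_sig + n_bkg + sigma_bkg**2)

print(likelihood_ratio([2.0, 3.0, 0.5]))   # product of ratios is 3, so L = 3/4
print(figure_of_merit(100.0, 50.0, 10.0))
```

Including $\sigma_{N_B}$ in the denominator penalizes working points whose background estimate is poorly constrained, not just those with large background.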
|Events passing initial selection||1442||1250|
|Variables in best|
|Events passing initial selection||323||3275||5740|
|Variables in optimized||,,,||,,||,,,|
The sets of variables and selection criteria that maximize the FOM defined in Eq. 8 for each final state are shown in Tables 2 and 3. Figures 1-5 show the distributions of the variables in the best likelihood discriminant for the events passing the preselection cuts, where the signal and background contributions are normalized as described below. In addition, we use the likelihood discriminant to determine the signal and background content of the initially selected sample by performing a binned Poisson maximum likelihood fit to its distribution, where the signal and total background normalizations are free parameters. The $W$+jets contribution is determined by this fit, while the multijet component is constrained to be consistent with the value determined from Eqs. 5 and 6. In the dilepton channels the relative contributions of the different background sources are fixed according to their expected yields, but the total background is allowed to float. The signal and background yields in the initially selected sample are listed in Table 2 for the $\ell$+jets channels and in Table 3 for the dilepton channels. Figures 6 and 7 show the distribution of the best likelihood discriminant for each channel, where the signal and background contributions are normalized according to the values returned by the fit. Tables 4 and 5 show the optimal cut value for each channel and the final number of events in data, together with the expected numbers of signal and background events after applying the likelihood requirement.
After the final event selection, $\cos\theta^*$ is calculated for each event using the reconstructed top quark and $W$ boson four-momenta. In the $\ell$+jets decay channel, the four-momenta are reconstructed using a kinematic fit with the following constraints: (i) the invariant mass of two jets must equal the $W$ boson mass (80.4 GeV/$c^2$), (ii) the invariant mass of the lepton and neutrino must equal the $W$ boson mass, (iii) the masses of the reconstructed top and antitop quarks must be 172.5 GeV/$c^2$, and (iv) the $p_T$ of the $t\bar{t}$ system must be opposite that of the unclustered energy in the event. The four highest-$p_T$ jets in each event are used in the fit, and among the twelve possible permutations in the assignment of the jets to the initial partons, the solution with the highest probability is chosen, considering both the $b$-tagging NN values of the four jets and the fit $\chi^2$. This procedure selects the correct jet assignment in 59% of MC events. With the jets assigned, the complete kinematics of the decay products (i.e., including the neutrino) are determined, allowing us to boost to the rest frames of each $W$ boson in the event. We compute $\cos\theta^*$ for the $W$ boson that decays leptonically. The hadronic $W$ boson decay from the other top quark in the event also carries information about the helicity of that boson, but since we do not distinguish between jets formed from up-type and down-type quarks, we cannot identify the down-type fermion needed to calculate $\cos\theta^*$. We therefore calculate only $|\cos\theta^*|$, which is identical for both jets in the rest frame of the hadronically decaying $W$ boson. Left-handed and right-handed $W$ bosons have identical $|\cos\theta^*|$ distributions, but we can distinguish either of those states from longitudinal $W$ bosons, thereby improving the precision of the measurement.
In the dilepton decay channel, the presence of two neutrinos prevents a constrained kinematic fit, but with the assumption that the top quark mass is 172.5 GeV/$c^2$, an algebraic solution for the neutrino momenta can be obtained (up to a two-fold ambiguity in pairing the jets and leptons, and a four-fold solution ambiguity). To account for the lepton and jet energy resolutions, the procedure is repeated 500 times with the energies fluctuated according to their uncertainties, and the average of all the solutions is used as the value of $\cos\theta^*$ for each top quark.
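The leptonic $W$ mass constraint used in these reconstructions reduces to a quadratic equation for the unmeasured neutrino $p_z$. A minimal sketch with illustrative event numbers (not from the analysis):

```python
import math

# Sketch of the W mass constraint: for a massless lepton and neutrino,
# m_W^2 = 2 (E_l E_nu - p_l . p_nu) is quadratic in the neutrino p_z, giving
# up to two real solutions. The event numbers below are illustrative.

M_W = 80.4  # GeV/c^2

def neutrino_pz(lep, met_x, met_y):
    """lep = (px, py, pz, E); return the list of real p_z solutions."""
    px, py, pz, e = lep
    pt2 = px * px + py * py
    mu = 0.5 * M_W**2 + px * met_x + py * met_y
    disc = mu * mu * pz * pz - pt2 * (e * e * (met_x**2 + met_y**2) - mu * mu)
    if disc < 0:
        return []  # no real solution; analyses often rescale the MET instead
    root = math.sqrt(disc)
    return [(mu * pz + root) / pt2, (mu * pz - root) / pt2]

lep = (30.0, 0.0, 10.0, math.sqrt(30.0**2 + 10.0**2))
for pz_nu in neutrino_pz(lep, 20.0, 5.0):
    e_nu = math.sqrt(20.0**2 + 5.0**2 + pz_nu**2)
    m_w = math.sqrt(2.0 * (lep[3] * e_nu - lep[0] * 20.0 - lep[1] * 5.0 - lep[2] * pz_nu))
    print(f"pz(nu) = {pz_nu:8.2f} -> m(l, nu) = {m_w:.2f}")  # both ~ 80.40
```

Both roots reproduce the $W$ mass by construction; additional constraints (here, the assumed top quark mass) are needed to pick between them.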
As mentioned above, the extraction of both $f_0$ and $f_+$ requires comparing the data with MC models in which both of these values are varied. Since alpgen can only produce linear combinations of $V{-}A$ and $V{+}A$ couplings, it is unable to produce non-SM $f_+$ values, and can produce $f_0$ values only in a limited range. We therefore start with alpgen $V{-}A$ and $V{+}A$ samples, and divide the samples into bins of parton-level $\cos\theta^*$. For each bin, we note the efficiency for the event to satisfy the event selection and the distribution of reconstructed $\cos\theta^*$ values. With this information we determine the expected distribution of reconstructed $\cos\theta^*$ values for any assumed helicity fractions, and in particular we choose to derive the distributions expected for purely left-handed, longitudinal, or right-handed $W$ bosons, as shown in Fig. 8. The deficit of entries near $\cos\theta^* = -1$ relative to the expectation from Eq. 4 is due to the lepton $p_T$ requirement imposed when selecting leptons. We verify the reweighting procedure by comparing the generated alpgen samples with the combination of reweighted distributions expected for the corresponding couplings, and find that these distributions agree within the MC statistics. The templates for background samples are obtained directly from the relevant MC or data background samples, and are shown in Fig. 9.
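The reweighting idea can be sketched at parton level: events generated with one set of helicity fractions are weighted by the ratio of the target to the generated angular density. All fractions below are illustrative, and the statistical check uses the analytic mean of Eq. 4.

```python
import random

# Sketch of the template reweighting: events generated with one set of helicity
# fractions are weighted by the ratio of the target to the generated angular
# density, so one sample can model arbitrary (f0, f+). All fractions here are
# illustrative; omega is the density of Eq. 4.

def omega(c, f0, fm, fp):
    return 0.75 * (1 - c * c) * f0 + 0.375 * (1 - c) ** 2 * fm + 0.375 * (1 + c) ** 2 * fp

random.seed(42)
gen = (0.6, 0.3, 0.1)     # generation-time fractions
target = (0.5, 0.2, 0.3)  # target fractions to model

# Rejection-sample parton-level cos(theta*) from the generated density.
events = []
while len(events) < 100_000:
    c = random.uniform(-1.0, 1.0)
    if random.uniform(0.0, 1.6) < omega(c, *gen):
        events.append(c)

weights = [omega(c, *target) / omega(c, *gen) for c in events]
mean = sum(w * c for w, c in zip(weights, events)) / sum(weights)
# Analytically <cos(theta*)> = (f+ - f-) / 2 = 0.05 for the target fractions.
print(f"reweighted mean = {mean:.3f}")
```

The generated sample must populate the full $\cos\theta^*$ range; weights blow up wherever the generation-time density is close to zero, which is why pure samples of each coupling are combined in practice.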
VI. Model-Independent Helicity Fit
The $W$ boson helicity fractions are extracted by computing a binned Poisson likelihood comparing the distribution of $\cos\theta^*$ in the data with the sum of signal and background templates. The likelihood is a function of the $W$ boson helicity fractions $f_0$ and $f_+$, defined as

$\mathcal{L}(f_0, f_+) = \prod_{k=1}^{N_{\rm chan}} \left[\, \prod_{i=1}^{N_{\rm bins}} P(n_{ik}, \mu_{ik}) \prod_{j=1}^{N_{\rm bkg}} G(N_{jk};\, N^0_{jk}, \sigma_{jk}) \right],$
where $P(n, \mu)$ is the Poisson probability for observing $n$ events given a mean expectation value $\mu$, $G$ is a Gaussian constraint, $N_{\rm chan}$ is the number of channels in the fit (a maximum of five in this analysis: $e$+jets, $\mu$+jets, $ee$, $e\mu$, and $\mu\mu$), $N_{\rm bkg}$ is the number of background sources in the channel, $N_{\rm bins}$ is the number of bins in the $\cos\theta^*$ distribution for any given channel (plus the number of bins in the $|\cos\theta^*|$ distribution for hadronic $W$ boson decays in the $\ell$+jets channels), $N^0_{jk}$ is the nominal number of measurements from the $j$th background contributing to the $k$th channel, $\sigma_{jk}$ is the uncertainty on $N^0_{jk}$, $N_{jk}$ is the fitted number of events for this background, $n_{ik}$ is the number of data events in the $i$th bin of $\cos\theta^*$ for the $k$th channel, and $\mu_{ik}$ is the predicted sum of signal and background events in that bin. The $\mu_{ik}$ can be expressed as

$\mu_{ik} = N_{s,k} \left[ f_0\, \tilde{\epsilon}_0\, p^{(0)}_{ik} + f_-\, \tilde{\epsilon}_-\, p^{(-)}_{ik} + f_+\, \tilde{\epsilon}_+\, p^{(+)}_{ik} \right] + \sum_{j=1}^{N_{\rm bkg}} N_{jk}\, p^{(j)}_{ik},$
where $N_{s,k}$ represents the number of measurements from signal events in a given channel, the $p_{ik}$ represent the probabilities for an event from a given source to appear in bin $i$ for channel $k$ (as determined from the templates), the superscripts $0$, $-$, $+$ refer to the templates for events in which the $W$ bosons have zero, negative, or positive helicity, and the superscript $j$ refers to the templates for the $j$th background source. The efficiency for a $t\bar{t}$ event to satisfy the selection criteria depends upon the helicity states of the two $W$ bosons in the event; the relative efficiencies $\tilde{\epsilon}$ are therefore necessary to translate the fractions of events with different helicity states in the selected sample to the fractions that were produced. The quantity $\tilde{\epsilon}_m$ is defined as

$\tilde{\epsilon}_m = \sum_{n\,\in\,\{0,-,+\}} f_n\, \epsilon_{mn},$
where $\epsilon_{mn}$ is the relative efficiency for $t\bar{t}$ events with $W$ bosons in the $m$ and $n$ helicity states to satisfy the selection criteria. The values of $\epsilon_{mn}$ for each decay channel are given in Table 6. While performing the fit, both $f_0$ and $f_+$ are allowed to float freely, and the measured helicity fractions correspond to those yielding the highest likelihood value.
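The structure of the fit can be sketched end to end in a simplified setting: templates built from the analytic angular density, Asimov pseudo-data for assumed fractions, and a grid scan of $(f_0, f_+)$. The per-helicity selection efficiencies are taken to be equal here, and all yields are illustrative.

```python
import math

# Sketch of the binned Poisson template fit: templates from the analytic
# angular density, Asimov pseudo-data for assumed fractions, and a grid scan
# of (f0, f+). The per-helicity selection efficiencies of the text are taken
# to be equal here, and all yields are illustrative.

NBINS = 10
EDGES = [-1.0 + 2.0 * i / NBINS for i in range(NBINS + 1)]

def template(kind):
    """Bin integrals of the pure-helicity cos(theta*) densities."""
    out = []
    for a, b in zip(EDGES[:-1], EDGES[1:]):
        if kind == "0":
            out.append(0.75 * ((b - a) - (b**3 - a**3) / 3.0))
        elif kind == "-":
            out.append(((1 - a) ** 3 - (1 - b) ** 3) / 8.0)
        else:
            out.append(((1 + b) ** 3 - (1 + a) ** 3) / 8.0)
    return out

T0, TM, TP = template("0"), template("-"), template("+")

def expected(f0, fp, n_sig=1000.0):
    fm = 1.0 - f0 - fp
    return [n_sig * (f0 * t0 + fm * tm + fp * tp) for t0, tm, tp in zip(T0, TM, TP)]

data = expected(0.70, 0.10)  # Asimov pseudo-data: no statistical fluctuations

def nll(f0, fp):
    """Poisson negative log-likelihood up to a constant."""
    return sum(mu - n * math.log(mu) for n, mu in zip(data, expected(f0, fp)))

grid = [(i / 100.0, j / 100.0) for i in range(101) for j in range(101) if i + j <= 100]
best = min(grid, key=lambda p: nll(*p))
print(f"best fit: f0 = {best[0]:.2f}, f+ = {best[1]:.2f}")
```

Because the pseudo-data contain no fluctuations, the scan recovers the input fractions exactly; with real data the minimum is found with a numerical minimizer and the spread over ensembles gives the statistical uncertainty.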
We check the performance of the fit using simulated ensembles of events, with all values of $f_0$ and $f_+$ from 0 through 1 as inputs, in increments of 0.1, with the sum of $f_0$ and $f_+$ not exceeding unity. We simulate input data distributions for the various values by combining the pure left-handed, longitudinal, and right-handed templates in the assumed proportions. In these ensembles, we draw a random subset of the simulated events, with the number of events chosen in each channel fixed to the number observed in data. Within the constant total number of events, the numbers of signal and background events are fluctuated binomially around the expected values. Each of these sets of simulated events is passed through the maximum likelihood fit using the standard templates. We find that the average fit output value is close to the input value across the entire range of possible values for the helicity fractions, with the small differences between the input and output values being consistent with statistical fluctuations in the ensembles. As an example, the set of $f_0$ and $f_+$ values obtained when events are drawn in the proportions expected in the SM is shown in Fig. 10.
VII. Systematic Uncertainties
Systematic uncertainties are evaluated using simulated event ensembles in which both changes in the background yield and changes in the shape of the $\cos\theta^*$ templates for signal and background are considered. The simulated samples from which the events are drawn can be either the nominal samples or samples in which the systematic effect under study has been shifted away from its nominal value. In general, the systematic uncertainties assigned to $f_0$ and $f_+$ are determined by averaging the absolute values of the differences in the average fit outputs between the nominal and shifted samples.
The jet energy scale, jet energy resolution, and jet identification efficiency each have relatively small uncertainties that are difficult to observe above fluctuations in the MC samples. To make the effects more visible, we vary these quantities by 5 standard deviations, and then divide the resulting differences in the average fit output by 5. The top quark mass uncertainty corresponds to shifting $m_t$ by 1.4 GeV/$c^2$, which is the sum in quadrature of the uncertainty on the world average (1.1 GeV/$c^2$) and the difference between the world-average value (173.3 GeV/$c^2$) and the value assumed in the analysis (172.5 GeV/$c^2$). We evaluate the contribution of template statistics to the uncertainty by repeating the fit to the data 1000 times, fluctuating the signal and background $\cos\theta^*$ distributions according to their statistics in each fit. The uncertainties due to the modeling of $t\bar{t}$ events are separated into several categories and evaluated using special-purpose MC samples. The uncertainty in the model of gluon radiation is assessed using pythia MC samples in which the amount of gluon radiation is shifted upwards and downwards; the impact of NLO effects is assessed by comparing the default leading-order alpgen generator with the NLO generator mc@nlo mcatnlo (); the uncertainty in the hadronic showering model is assessed by comparing alpgen events showered with pythia and with herwig herwig (); and lastly, the impact of color reconnection effects is assessed by comparing pythia samples where the underlying event model does and does not include color reconnection. The uncertainty due to data and MC differences in the background $\cos\theta^*$ distribution is derived by taking the ratio of the data and the MC distributions for a background-enriched sample (defined by requiring that events have low values of the likelihood discriminant) and then using that ratio to reweight the $\cos\theta^*$ distribution of background MC events that satisfy the standard selection.
The uncertainty in the heavy flavor content of the $W$+jets background is estimated by varying the fraction of background events with heavy flavor jets within its uncertainty. Uncertainties due to the fragmentation of $b$ jets are evaluated by comparing the default fragmentation model, the Bowler scheme bowler () tuned to data collected at the CERN LEP collider, with an alternate model tuned to data collected by the SLD collaboration bfragtuning (). Uncertainties in the parton distribution functions (PDFs) are estimated using the set of errors provided for the CTEQ6M cteq6m () PDF. The analysis consistency uncertainty reflects the typical difference between the input helicity fractions and the average output values observed in fits to simulated event ensembles. Finally, we include an uncertainty corresponding to muon triggers and identification, as control samples indicate some substantial data/MC discrepancies for the loose selection we use. All the systematic uncertainties are summarized in Table 7.
|Source||Uncertainty ($f_0$)||Uncertainty ($f_+$)|
|Jet energy scale||0.007||0.009|
|Jet energy resolution||0.004||0.009|
|Top quark mass||0.011||0.009|
|Heavy flavor fraction||0.011||0.026|
Applying the model-independent fit to the Run IIb data, we find
The comparison between the best-fit model and the data is shown in Fig. 11, and the 68% and 95% C.L. contours in the $(f_0, f_+)$ plane are shown in Fig. 12(a). To account for systematic uncertainties, we perform a MC smearing of the $(f_0, f_+)$ distribution, where the width of the smearing in $f_0$ and $f_+$ is given by the systematic uncertainty on each helicity fraction, and the correlation coefficient between them is taken into account.
To assess the consistency of the result with the SM, we note that the change in the likelihood (Eq. 10) between the best-fit and SM points is 0.24 considering only statistical uncertainties and 0.16 when systematic uncertainties are included. The probability of observing a greater deviation from the SM due to fluctuations in the data is 78% when only the statistical uncertainty is considered and 85% when both statistical and systematic uncertainties are considered.
We have also split the data sample in various ways to check the internal consistency of the measurement. Using $\ell$+jets events only, we find
and when using only dilepton events we find
We also divide the sample into events with only electrons (jets and ) and events with only muons (jets and ). The results for electrons only are
and for muons only are
Finally, we perform fits in which one of the two helicity fractions is fixed to its SM value. Constraining , we find
We also constrain and measure , finding
IX. Combination with Our Previous Measurement
To combine this result with the previous measurement from Ref. prevd0result (), we repeat the maximum likelihood fit with the earlier and current data samples and their respective MC models, treating them as separate channels in the fit. This is equivalent to multiplying the two-dimensional likelihood distributions in and corresponding to the two data sets. We determine the systematic uncertainty on the combined result by treating most uncertainties as correlated (the exception is template statistics) and propagating the uncertainties to the combined result. The results are presented in Table 8.
| Source | Uncertainty (f0) | Uncertainty (f+) |
| Jet energy scale | 0.009 | 0.010 |
| Jet energy resolution | 0.004 | 0.008 |
| Heavy flavor fraction | 0.010 | 0.022 |
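The combination described above, multiplying the two-dimensional likelihoods of the two data sets, is equivalent to adding their -2 ln L maps on a common (f0, f+) grid and reminimizing. A sketch with toy parabolic likelihood surfaces (all centers and widths illustrative, not the measured values):

```python
import numpy as np

# Common (f0, f+) scan grid.
f0_grid = np.linspace(0.0, 1.0, 101)
fp_grid = np.linspace(0.0, 0.5, 51)
F0, FP = np.meshgrid(f0_grid, fp_grid, indexing="ij")

# Toy parabolic -2 ln L surfaces for the earlier and current data sets.
nll_old = ((F0 - 0.62) / 0.10) ** 2 + ((FP - 0.12) / 0.06) ** 2
nll_new = ((F0 - 0.70) / 0.08) ** 2 + ((FP - 0.05) / 0.05) ** 2

# Adding -2 ln L maps multiplies the likelihoods of the two channels.
nll_comb = nll_old + nll_new
i, j = np.unravel_index(np.argmin(nll_comb), nll_comb.shape)
f0_best, fp_best = f0_grid[i], fp_grid[j]
```

For these toy inputs the combined minimum lands between the two single-sample minima, pulled toward the more precise measurement, which is the qualitative behavior expected of the real combination.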
The combined result for the entire 5.4 fb sample is
The combined likelihood distribution is presented in Fig. 12(b). The probability of observing a greater deviation from the SM due to fluctuations in the data is 83% when only statistical uncertainties are considered and 98% when systematic uncertainties are included.
Constraining to the SM value, we find
and constraining to the SM value gives
We have measured the helicity of bosons arising from top quark decay in events using both the jets and dilepton decay channels and find
in a model-independent fit. The consistency of this measurement with the SM values , is 98%. Therefore, we report no evidence for new physics at the decay vertex.
We thank the staffs at Fermilab and collaborating institutions, and acknowledge support from the DOE and NSF (USA); CEA and CNRS/IN2P3 (France); FASI, Rosatom and RFBR (Russia); CNPq, FAPERJ, FAPESP and FUNDUNESP (Brazil); DAE and DST (India); Colciencias (Colombia); CONACyT (Mexico); KRF and KOSEF (Korea); CONICET and UBACyT (Argentina); FOM (The Netherlands); STFC and the Royal Society (United Kingdom); MSMT and GACR (Czech Republic); CRC Program and NSERC (Canada); BMBF and DFG (Germany); SFI (Ireland); The Swedish Research Council (Sweden); and CAS and CNSF (China).
- (1) F. Abe et al. (CDF Collaboration), Phys. Rev. Lett. 74, 2626 (1995).
- (2) S. Abachi et al. (D0 Collaboration), Phys. Rev. Lett. 74, 2632 (1995).
- (3) M. Fischer et al., Phys. Rev. D 63, 031501(R) (2001).
- (4) Tevatron Electroweak Working Group, arXiv:1007.3178[hep-ex] (2010).
- (5) K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010).
- (6) V.M. Abazov et al. (D0 Collaboration), Phys. Rev. Lett. 100, 062004 (2008).
- (7) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 105, 042002 (2010).
- (8) V.M. Abazov et al. (D0 Collaboration), Nucl. Instrum. Methods in Phys. Res. A 565, 463 (2006).
- (9) R. Angstadt et al., Nucl. Instrum. Methods in Phys. Res. A 622, 298 (2010).
- (10) M. Abolins et al., Nucl. Instrum. Methods in Phys. Res. A 584, 75 (2008).
- (11) M.L. Mangano et al., J. High Energy Phys. 07, 001 (2003).
- (12) T. Sjöstrand et al., Comput. Phys. Commun. 135, 238 (2001).
- (13) S. Agostinelli et al., Nucl. Instrum. Methods in Phys. Res. A 506, 250 (2003).
- (14) G.C. Blazey et al., arXiv:hep-ex/0005012 (2000).
- (15) V.M. Abazov et al. (D0 Collaboration), Nucl. Instrum. Methods in Phys. Res. A 620, 490 (2010).
- (16) V.M. Abazov et al. (D0 Collaboration), Phys. Lett. B 626, 45 (2005).
- (17) F. Abe et al. (CDF Collaboration), Phys. Rev. D 50, 2966 (1994).
- (18) S. Abachi et al. (D0 Collaboration), Phys. Rev. Lett. 74, 2422 (1995).
- (19) S. Frixione and B.R. Webber, J. High Energy Phys. 06, 029 (2002).
- (20) G. Corcella et al., J. High Energy Phys. 01, 010 (2001).
- (21) M.G. Bowler, Z. Phys. C 11, 169 (1981).
- (22) Y. Peters et al., FERMILAB-TM-2425-E (2006).
- (23) J. Pumplin et al., J. High Energy Phys. 07, 012 (2002).