
# Measurement of the WW+WZ Production Cross Section Using a Matrix Element Technique in Lepton + Jets Events

###### Abstract

We present a measurement of the WW+WZ production cross section observed in a final state consisting of an identified electron or muon, two jets, and missing transverse energy. The measurement is carried out in a data sample corresponding to up to 4.6 fb⁻¹ of integrated luminosity from pp̄ collisions at √s = 1.96 TeV collected by the CDF II detector. Matrix element calculations are used to separate the diboson signal from the large backgrounds. The measured cross section is in agreement with standard model predictions. A fit to the dijet invariant mass spectrum yields a compatible cross section measurement.

###### pacs:
14.80.Bn, 14.70.Fm, 14.70.Hp, 12.15.Ji

CDF Collaboration, with visitors from University of Massachusetts Amherst, Amherst, Massachusetts 01003; Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, 09042 Monserrato (Cagliari), Italy; University of California Irvine, Irvine, CA 92697; University of California Santa Barbara, Santa Barbara, CA 93106; University of California Santa Cruz, Santa Cruz, CA 95064; CERN, CH-1211 Geneva, Switzerland; Cornell University, Ithaca, NY 14853; University of Cyprus, Nicosia CY-1678, Cyprus; University College Dublin, Dublin 4, Ireland; University of Fukui, Fukui City, Fukui Prefecture, Japan 910-0017; Universidad Iberoamericana, Mexico D.F., Mexico; Iowa State University, Ames, IA 50011; University of Iowa, Iowa City, IA 52242; Kinki University, Higashi-Osaka City, Japan 577-8502; Kansas State University, Manhattan, KS 66506; University of Manchester, Manchester M13 9PL, England; Queen Mary, University of London, London, E1 4NS, England; Muons, Inc., Batavia, IL 60510; Nagasaki Institute of Applied Science, Nagasaki, Japan; National Research Nuclear University, Moscow, Russia; University of Notre Dame, Notre Dame, IN 46556; Universidad de Oviedo, E-33007 Oviedo, Spain; Texas Tech University, Lubbock, TX 79609; IFIC (CSIC-Universitat de Valencia), 56071 Valencia, Spain; Universidad Tecnica Federico Santa Maria, 110v Valparaiso, Chile; University of Virginia, Charlottesville, VA 22906; Yarmouk University, Irbid 211-63, Jordan; and on leave from J. Stefan Institute, Ljubljana, Slovenia.

July 12, 2019

## I Introduction

Measurements of the production cross section of pairs of heavy gauge bosons test the electroweak sector of the standard model (SM). The production cross section can be enhanced by anomalous triple gauge boson interactions hagiwara () or by new particles decaying to pairs of vector bosons.

In this paper, we describe the measurement of the WW+WZ production cross section in events containing a high-pT electron or muon and two hadronic jets. This event topology is expected when one W boson in the event decays to an electron or muon and a neutrino, and the other W or Z boson decays to two quarks. We consider both the WW and WZ processes as signal because our limited detector resolution for hadronic jets makes the separation of W → qq̄′ decays from Z → qq̄ decays impracticable.

The leading-order WW and WZ production diagrams are shown in Fig. 1. The SM production cross sections at the Tevatron have been calculated at next-to-leading order (NLO) VVtheory (). Both production cross sections have been measured previously at the Tevatron in channels in which both gauge bosons decay leptonically diblepCDF (); diblepD0 (), and no deviation between measurement and prediction has been observed.

Hadronic decay modes have higher branching ratios than the leptonic decays, but the corresponding final states are exposed to large backgrounds. The first observation of diboson production at the Tevatron with a hadronic decay was achieved in events with two jets and large missing transverse energy at CDF metjets (). Evidence for, and observation of, the process and decay mode discussed in this paper were previously reported by the D0 d0lvjj () and CDF ourPRL () collaborations. The observation reported by CDF used a matrix element technique, relying on knowledge of the differential cross sections of signal and background processes to separate signal events from the background.

The measurement of WW+WZ production in this final state is relevant to the search for the Higgs boson at the Tevatron. One of the most powerful channels in the search for a Higgs boson with mass below 130 GeV/c² is the one in which the Higgs boson is produced in association with a W boson, the Higgs boson decaying to a pair of b quarks and the W boson decaying leptonically. A matrix element analysis similar to the one presented in this paper is employed in that search at CDF WHME (). A well-established measurement of the WW+WZ channel gives us confidence in the similar techniques used in the search for the Higgs boson, and similar issues in background modeling and systematic uncertainties are relevant for the two analyses. One important difference, however, is that the Higgs boson search uses methods to identify jets originating from b quarks ("b-tagging"), whereas the analysis presented in this paper does not use b-tagging.

This paper presents the details of the matrix element method used in that observation, here applied to a larger data sample corresponding to up to 4.6 fb⁻¹ of integrated luminosity taken with the CDF II detector, and with some changes in the event selection criteria. In particular, the event selection has been made more inclusive so that it more closely resembles that used in the Higgs boson search.

The organization of the rest of this paper is as follows. Section II describes the apparatus used to carry out the measurement, while Section III describes the event selection and backgrounds. The modeling of the signal and background processes is discussed in Section IV. Section V contains the details of the matrix element technique used for the measurement. The systematic uncertainties and results are discussed in Sections VI and VII, respectively. A fit to the dijet invariant mass spectrum, performed as a cross-check, is presented in Section VIII. Finally, we summarize the conclusions in Section IX.

## II CDF II detector

The CDF II detector is a nearly azimuthally and forward-backward symmetric detector designed to study pp̄ collisions at the Tevatron. It is described in detail in Ref. CDFdet (). It consists of a charged particle tracking system surrounded by calorimeters and muon chambers. Particle positions and angles are expressed in a cylindrical coordinate system, with the z axis along the proton beam. The polar angle, θ, is measured with respect to the direction of the proton beam, and φ is the azimuthal angle about the beam axis. The pseudo-rapidity, η, is defined as η = −ln tan(θ/2).
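As a small illustration, the coordinate definition above can be written directly in code (a sketch; the function name is ours, not part of any CDF software):

```python
import math

def pseudorapidity(theta):
    """Pseudo-rapidity eta = -ln(tan(theta/2)) for a polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

# A particle emitted perpendicular to the beam (theta = pi/2) has eta = 0;
# eta grows rapidly as the direction approaches the beam axis.
```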

The momenta of charged particles are measured by the tracking system, which consists of silicon strip detectors surrounded by an open-cell drift chamber, all immersed in a 1.4 T solenoidal magnetic field coaxial with the Tevatron beams. The silicon tracking system SVX () consists of eight layers of silicon covering the radial region from 1.5 cm to 28 cm from the beam axis. The drift chamber, or central outer tracker (COT) COT (), is composed of eight superlayers that alternate between axial and 2° stereo orientations. Each superlayer contains 12 sense wires. The COT covers the radial region from 40 cm to 137 cm and provides good tracking efficiency for charged particles out to |η| ≈ 1.

The tracking system is surrounded by calorimeters, which measure the energies of electrons, photons, and jets of hadronic particles. The electromagnetic calorimeters use a scintillating tile and lead sampling technology, while the hadronic calorimeters are composed of scintillating tiles with steel absorber. The calorimeters are divided into central and plug sections. The central region is composed of the central electromagnetic (CEM) CEM () and the central and end-wall hadronic calorimeters (CHA and WHA) CHAWHA (). The end-plug electromagnetic (PEM) PEM () and end-plug hadronic (PHA) calorimeters extend the coverage into the forward region. The calorimeters include a shower maximum (ShowerMax) ShowerMax () detector located at the depth in the calorimeter at which the electromagnetic shower is expected to be widest. The ShowerMax uses wire chambers and cathode strips to provide a precise position measurement for electromagnetic clusters.

A muon system composed of planar multi-wire drift chambers records hits when charged particles pass through. Four sections of the muon detector are used in the analysis presented here: the central muon detector (CMU) CMU (), the central muon upgrade (CMP), the central muon extension (CMX), and the barrel muon chambers (BMU). In the central region, |η| < 0.6, four layers of chambers located just outside of the calorimeter make up the CMU system; the CMU is surrounded by 60 cm of iron shielding, and another four layers of chambers compose the CMP system. The CMX covers the region 0.6 < |η| < 1.0, while the BMU extends the coverage to |η| ≈ 1.5.

Cherenkov luminosity counters (CLCs) CLC () measure the rate of inelastic collisions, which is converted to an instantaneous luminosity; the integrated luminosity is calculated from the instantaneous luminosity measurements. The CLCs consist of gaseous Cherenkov counters located at high pseudo-rapidity, 3.6 < |η| < 4.6.

The three-level trigger system at CDF is used to reduce the event rate from 1.7 MHz to about 150 Hz. The first level uses hardware, while the second is a mixture of hardware and fast software algorithms XFT (). The software-based third-level trigger makes use of detailed information on the event, very similar to that available offline.

## III Candidate Event Selection and Backgrounds

The event selection can be divided into a baseline selection corresponding to the topology of our signal, and a variety of vetoes that are imposed to remove backgrounds. The baseline selection, the relevant backgrounds, and the vetoes are all described in more detail below.

A few quantities relevant to the event selection are defined here. The transverse momentum of a charged particle is pT = p sin θ, where p is the momentum of the charged-particle track. The analogous quantity measured with calorimeter energies is the transverse energy, ET = E sin θ. The missing transverse energy, E̸T, is defined as the magnitude of the vector −Σi ETi n̂i, where ETi is the transverse energy deposited in the i-th calorimeter tower and n̂i is a unit vector perpendicular to the beam axis and pointing at that tower. The E̸T is corrected for high-energy muons as well as for the factors applied to correct hadronic jet energies. Jets are clustered using a cone algorithm with a fixed cone size, in which the center of the jet is defined as (ηjet, φjet) and the size of the jet cone as ΔR = √((η − ηjet)² + (φ − φjet)²).
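The E̸T and jet-cone definitions above can be sketched as follows (a minimal illustration; the tower-list layout and function names are our choices):

```python
import math

def missing_et(towers):
    """Magnitude of -sum_i ET_i * n_hat_i over calorimeter towers.

    towers: list of (ET, phi) pairs, one per tower.
    """
    mex = -sum(et * math.cos(phi) for et, phi in towers)
    mey = -sum(et * math.sin(phi) for et, phi in towers)
    return math.hypot(mex, mey)

def delta_r(eta1, phi1, eta2, phi2):
    """Cone distance Delta R = sqrt(Deta^2 + Dphi^2), with phi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)
```

Two back-to-back towers of equal ET give zero E̸T, while a single unbalanced deposit gives E̸T equal to its ET.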

### iii.1 Baseline event selection

Figure 2 shows the decay topology considered as signal in this analysis. The final state contains a charged lepton, a neutrino, and two quarks. We focus on events in which the charged lepton is an electron or muon. Events in which the W boson decays to a τ lepton may also be considered part of the signal if a leptonic τ decay results in an isolated electron or muon. The neutrino passes through the detector without depositing energy; its presence can be inferred from significant E̸T. The two quarks hadronize to form collimated jets of hadrons. As a result, our baseline event selection requires events to contain one high-pT electron or muon, significant E̸T, and two jets.

Several triggers are used to collect events for this analysis. Roughly half of the events are selected by a trigger requiring a high-ET central electron in the CEM. Two muon triggers, one requiring hits in both the CMP and CMU and the other requiring hits in the CMX, collect events with central high-pT muons. Finally, a trigger path requiring large E̸T and two jets is used to collect events with muons that are not detected by the central muon triggers. The jet ET and E̸T used in this trigger selection are not corrected for detector or physics effects.

Further selection criteria are imposed offline on triggered events. Electron (muon) candidates are required to exceed a minimum ET (pT) threshold. They must fulfill several other identification criteria designed to select pure samples of high-pT electrons (muons) LepSel (), including an isolation requirement that the energy within a cone around the lepton axis be less than 10% of the ET (pT) of the electron (muon). The jet energies are corrected for detector effects jet_details (). We require the highest-ET jet in the event to pass a higher ET threshold than the second-highest-ET jet. Finally, we require E̸T > 20 GeV.

Some criteria are imposed specifically on events collected by the E̸T plus jets trigger to ensure a high trigger efficiency. We require that the two jets be sufficiently separated in ΔR, that one of the jets be central in pseudo-rapidity, and that the transverse energy of both jets be larger than 25 GeV. Even after these requirements, this trigger path is not fully efficient; this is taken into account by an efficiency correction applied as a function of E̸T.

### iii.2 Backgrounds

The baseline selection is based on the signal topology we are trying to select. However, several backgrounds can result in events with a similar topology.

• W + jets: events in which a W boson is produced in association with quarks or gluons form a background if the W boson decays leptonically. This is the dominant background because of its high production cross section and signal-like properties.

• Z + jets: events in which a Z boson is produced in association with two quarks or gluons may enter our signal sample if the Z boson decays to electrons or muons and one lepton falls outside the fiducial region of the detector, or other mismeasurement leads to significant E̸T.

• QCD non-W: events in which several jets are produced, but no real W boson is present, may form a background if a jet fakes an electron or muon and mismeasurement of the jet energies results in incorrectly assigning a large E̸T to the event.

• tt̄: top quark pair production is a background because top quarks nearly always decay to a W boson and a b quark. If a W boson decays leptonically, tt̄ events may pass our baseline event selection criteria.

• Single top: leading-order production and decay of single top quarks results in an event topology with a W boson and two quarks.

### iii.3 Event vetoes

In order to reduce the backgrounds described above, several vetoes are imposed on events in our sample. Events are required to have no additional electrons, muons, or jets, reducing the Z + jets, QCD non-W, and tt̄ backgrounds. A further Z + jets veto rejects events with a second, loosely identified lepton of charge opposite to the tight lepton if the invariant mass of the tight and loose lepton pair is close to the Z boson mass; such dilepton events are effectively removed by this veto.

A veto developed specifically to reduce the QCD non-W background is also imposed. This veto is more stringent for events containing an electron candidate, since jets fake electrons more often than muons. In electron events, the minimum E̸T is raised to 25 GeV, and the transverse mass of the leptonically decaying W boson candidate, MT(W), is required to be at least 20 GeV/c². A variable called the E̸T significance is also defined:

 \slashed{E}_T^{\rm sig} = \frac{\slashed{E}_T}{\sqrt{\sum_{\rm jets} C_{\rm JES}^2 \cos^2\!\big(\Delta\phi_{{\rm jet},\vec{\slashed{E}}_T}\big)\, E^{\rm raw}_{T,{\rm jet}} + \cos^2\!\big(\Delta\phi_{\vec{E}_{T,{\rm uncl}},\vec{\slashed{E}}_T}\big) \sum E_{T,{\rm uncl}}}} \qquad (1)

where E_T,jet^raw is the raw, uncorrected transverse energy of a jet, C_JES is the correction applied to the jet energy jet_details (), E_T,uncl is the vector sum of the transverse components of calorimeter energy deposits not included in any jet, and ΣE_T,uncl is the total magnitude of the unclustered calorimeter energies. The Δφ terms measure the angular distance between the E̸T and the jets or unclustered energy; the E̸T significance tends to be larger for E̸T stemming from a neutrino than for E̸T stemming from mismeasurement. In events with an electron candidate, we impose minimum requirements on the E̸T significance and on MT(W). In muon events, the QCD veto simply imposes a minimum MT(W) requirement.
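Equation (1) can be sketched numerically as follows (a toy implementation; the tuple layout is our choice):

```python
import math

def met_significance(met, met_phi, jets, uncl_phi, sum_et_uncl):
    """E_T-slash significance of Eq. (1).

    jets: list of (c_jes, raw_et, phi) per jet, with c_jes the jet-energy
    correction factor and raw_et the uncorrected transverse energy;
    uncl_phi: azimuth of the unclustered-energy vector sum;
    sum_et_uncl: scalar sum of unclustered transverse energy.
    """
    denom = sum(c * c * math.cos(phi - met_phi) ** 2 * raw_et
                for c, raw_et, phi in jets)
    denom += math.cos(uncl_phi - met_phi) ** 2 * sum_et_uncl
    return met / math.sqrt(denom)

# MET aligned with a jet is consistent with jet mismeasurement and gives a
# smaller significance than MET pointing away from all energy deposits.
```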

We veto events with additional “loose” jets, defined as jets passing looser ET and η requirements than the two selected jets. This veto is found to improve the agreement between Monte Carlo simulation and data in the modeling of some kinematic variables.

Events consistent with photon conversions or cosmic ray muons are also vetoed stPRD ().

## IV Modeling

Both the normalization (number of events in our sample) and the shapes of signal and background processes must be understood to carry out this analysis.

### iv.1 Models used

The signal processes, and all background processes except the QCD non-W background, are modeled using events generated by a Monte Carlo program and run through the CDF II detector simulation CDFsim (). The Monte Carlo event generators used for each process are listed in Table 1. pythia is a leading-order event generator that uses a parton shower to account for initial- and final-state radiation pythia (). alpgen and madevent are leading-order parton-level event generators alpgen (); madevent (); events generated by alpgen and madevent are passed to pythia, where the parton shower is simulated.

A fixed top quark mass is assumed in the modeling of tt̄ and single top events. The distributions of the longitudinal momenta of the different types of quarks and gluons within the proton, as a function of the momentum transfer of the collision, are given by parton distribution functions (PDFs). The CTEQ5L PDFs are used in generating all Monte Carlo samples in this analysis CTEQ ().

Simulation of the QCD non-W background is difficult: its production cross section is large and the probability for such an event to mimic a W boson signature is small. In addition, the mismeasurements that give QCD non-W events large E̸T may not be simulated well. Therefore, this background is modeled using data rather than simulation. We use events from jet-based triggers containing a jet that deposits most of its energy in the electromagnetic section of the calorimeter, as well as events from single-lepton triggers that fail the standard lepton requirements but pass a looser set of requirements.

### iv.2 Expected event yields

The numbers of events from the signal and from the Z + jets, tt̄, and single top backgrounds entering our sample are estimated from their cross sections (σ), the efficiencies (ε) with which they are selected, and the integrated luminosity (L): N = σεL. The efficiency ε, which includes the detector acceptance, is estimated from the Monte Carlo simulation. σ is taken from NLO calculations for the WW, WZ, tt̄, and single top processes, and from the CDF inclusive Z boson production cross section measurement for the Z + jets background xsections ().
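The yield formula N = σεL can be written out directly; note the unit conversion between pb and fb⁻¹ (the numeric values in the comment are purely illustrative, not the analysis values):

```python
def expected_yield(sigma_pb, efficiency, lumi_fb_inv):
    """Expected event count N = sigma * epsilon * L.

    sigma_pb: cross section in pb; lumi_fb_inv: integrated luminosity in fb^-1.
    1 fb^-1 = 1000 pb^-1, hence the factor of 1000.
    """
    return sigma_pb * efficiency * lumi_fb_inv * 1000.0

# Illustrative only: a 10 pb process selected with 2% efficiency in
# 4.6 fb^-1 would yield 10 * 0.02 * 4.6 * 1000 = 920 events.
```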

As mentioned in the introduction, the WW and WZ cross sections are calculated at NLO VVtheory (). The acceptance measured with respect to the inclusive production cross section is about 2.4% for WW events and about 1.2% for WZ events.

Since neither the production cross section nor the selection efficiency of the QCD non-W background is known, we rely on a data-driven technique to estimate its normalization. The shape of the E̸T spectrum is very different in events with a real W boson than in events from the QCD non-W background, as shown in Fig. 3. The E̸T spectrum observed in data is fit with the sum of all contributing processes, with the QCD non-W normalization and the W + jets normalization as free parameters. The fit is performed over the full E̸T range, meaning the E̸T cut described in the event selection above is removed. An example of the fit is shown in Fig. 3 for events with a central electron. The fraction of QCD non-W events in our signal sample (with the E̸T cut imposed) is estimated from the fit; it is about 5% for events with a central electron, 3% for events with a central muon, and 3% for events in the extended muon category.
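The idea of fitting an observed spectrum to a sum of templates with two free normalizations can be sketched with a least-squares toy (the analysis uses a proper binned fit to the E̸T spectrum; the bin contents below are hypothetical):

```python
def fit_two_normalizations(data, qcd, wjets, fixed):
    """Least-squares fit of data ~ a*qcd + b*wjets + fixed, bin by bin.

    All arguments are equal-length lists of bin contents. Returns (a, b),
    the scale factors minimizing the summed squared residuals, obtained by
    solving the 2x2 normal equations analytically.
    """
    r = [d - f for d, f in zip(data, fixed)]
    sqq = sum(q * q for q in qcd)
    sww = sum(w * w for w in wjets)
    sqw = sum(q * w for q, w in zip(qcd, wjets))
    sqr = sum(q * x for q, x in zip(qcd, r))
    swr = sum(w * x for w, x in zip(wjets, r))
    det = sqq * sww - sqw * sqw
    return ((sqr * sww - swr * sqw) / det,
            (sqq * swr - sqw * sqr) / det)
```

Because the model is linear in the two normalizations, the minimum has a closed form and no iterative minimizer is needed.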

The W + jets normalization is a free parameter in the final likelihood fit used to extract the cross section, described in Section V.C. A preliminary estimate of the W + jets normalization, used in the modeling validation, is derived from the fit described above. Table 2 lists the total expected number of events for signal and background processes. The background normalization uncertainties are described in Sec. VI.

### iv.3 Background shape validation

The kinematics of the background model are validated by comparing the shapes of various kinematic quantities in data to the prediction from the models. Each signal and background process is normalized according to Table 2, and the sum of their shapes for a given quantity is compared to that observed in the data. Some examples of the comparisons are shown in Fig. 4 for the E̸T, the lepton pT, the ET and η of both jets, the ΔR between the two jets, and the pT of the two-jet system. In all of these figures, the integral of the total expectation is set equal to the number of data events, so the figures show shape comparisons. The hatched band is the uncertainty in the shape of the backgrounds due to the jet energy scale and the Q² scale in alpgen, described further in Sec. VI. The modeling of the kinematic quantities generally matches the data well within the uncertainties. For one of these variables the systematic uncertainties do not cover the disagreement between data and Monte Carlo, so an additional mismodeling uncertainty is imposed; this is described further in Sec. VI. The mismodeling uncertainty also affects the modeling of correlated variables, covering the observed disagreement between data and expectation.

## V Measurement technique

The expected number of events from WW+WZ production is small compared to the expected number from W + jets production. Moreover, the uncertainty on the expected number of W + jets events is large, due to uncertainty in the modeling of this process, making it difficult to separate the signal from the W + jets background. We employ a matrix element technique to improve the separation of signal and background: matrix element probabilities are calculated for various processes and then combined to form a single discriminant.

### v.1 Matrix element event probability

The matrix element method defines a likelihood for an event to be due to a given production process, based on the differential cross section of that process. An outline of the procedure is given here; full details can be found in Ref. pdongthesis ().

The differential cross section for an n-body final state produced by two initial-state particles with momenta q⃗1 and q⃗2 and masses m1 and m2 is

 d\sigma = \frac{(2\pi)^4\,|\mathcal{M}|^2}{4\sqrt{(\vec{q}_1 \cdot \vec{q}_2)^2 - m_1^2 m_2^2}}\; d\Phi_n \qquad (2)

where dΦn is a phase-space factor given by

 d\Phi_n = \delta^4\Big(q_1 + q_2 - \sum_{i=1}^{n} p_i\Big) \prod_{i=1}^{n} \frac{d^3 p_i}{(2\pi)^3\, 2E_i} \qquad (3)

and Ei and p⃗i are the energies and momenta of the final-state particles PDG (). M is the matrix element of the process.

We define a probability density for a given process by normalizing the differential cross section to the total cross section:

 P \sim \frac{d\sigma}{\sigma}. \qquad (4)

P is not a true probability, as various approximations are used in the calculation of the differential cross section: leading-order matrix elements are used, there are integrations over unmeasured quantities (described below), and several constants are omitted from the calculation.

We cannot measure the initial-state momenta, and the resolution of the final-state measurements is limited by detector effects. As a result, we weight the differential cross section with parton distribution functions (PDFs) for the proton and antiproton and integrate over a transfer function W(y, x) encoding the relationship between the measured quantities x and the parton-level quantities y. The probability density is then given by

 P(x) = \frac{1}{\sigma} \int d\sigma(y)\, dq_1\, dq_2\, f(q_1)\, f(q_2)\, W(y, x), \qquad (5)

where f(q1) and f(q2) are the PDFs in terms of the fractions of the (anti)proton momentum carried by the interacting partons, and W(y, x) is the transfer function. The PDFs are evaluated using the CTEQ6.1 parameterization CTEQ (). Using Eqs. 2 and 3 and neglecting the masses and transverse momenta of the initial partons, the event probability is given by

 P(x) = \frac{1}{\sigma} \int (2\pi)^4\, |\mathcal{M}|^2\, \frac{f(y_1)}{|E_{q_1}|}\, \frac{f(y_2)}{|E_{q_2}|}\, W(y, x)\, d\Phi_4\, dE_{q_1}\, dE_{q_2}. \qquad (6)

The squared matrix element, |M|², is calculated at tree level using the helas package helas (), with the diagrams for a given process provided by madgraph madevent ().

In the transfer function W(y, x), the lepton energy and angle, as well as the jet angles, are assumed to be measured exactly. The jet energy transfer function is derived by comparing parton energies to the fully simulated jet response in Monte Carlo events. A double-Gaussian parameterization of the difference between the jet and parton energies is used. Three different transfer functions are derived: one for jets originating from b quarks, one for jets originating from other (non-b) quarks, and one for jets originating from gluons. The appropriate transfer function is chosen based on the diagram in the matrix element being evaluated. The measured missing transverse energy is not used in the calculation of the event probability; conservation of momentum is instead used to determine the momentum of the neutrino.
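A double-Gaussian transfer function of the kind described above can be sketched as follows (the parameter values used anywhere with this sketch are hypothetical; in the analysis they are fit to fully simulated Monte Carlo):

```python
import math

def double_gaussian(delta, params):
    """W(delta), with delta = E_jet - E_parton: a narrow core plus a wide tail.

    params = (f, mu1, s1, mu2, s2): fraction f in the core Gaussian
    (mean mu1, width s1) and 1 - f in the tail Gaussian (mean mu2, width s2).
    """
    f, mu1, s1, mu2, s2 = params
    norm = math.sqrt(2.0 * math.pi)
    g1 = math.exp(-0.5 * ((delta - mu1) / s1) ** 2) / (s1 * norm)
    g2 = math.exp(-0.5 * ((delta - mu2) / s2) ** 2) / (s2 * norm)
    return f * g1 + (1.0 - f) * g2
```

With f = 1 the parameterization collapses to a single Gaussian; the second component exists to describe the non-Gaussian tails of the jet response.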

After conservation of energy and momentum has been imposed, the integral used to derive the event probability is three-dimensional: the energies of the two quarks and the longitudinal momentum of the neutrino are integrated over. The integration is carried out numerically, using either an adaptation of the CERNLIB radmul routine radmul () or the faster divonne integration algorithm implemented in the cuba library cuba (). The results of the two integrators were checked against each other and found to be compatible.

### v.2 Event Probability Discriminant

The matrix element event probability is calculated for the WW and WZ signal processes, for single top production, and for several contributions to the W + jets background, corresponding to W boson production in association with gluons (g), light-flavor quarks (q), bottom quarks (b), and charm quarks (c).

No matrix element calculation is carried out for the tt̄, Z + jets, and QCD non-W background processes. All of these backgrounds require additional assumptions, making the matrix element calculation more difficult and computationally intensive. For example, tt̄ events become a background if several jets or a lepton are not detected; incorporating this in the matrix element calculation requires additional integrations which are computationally cumbersome. For the Z + jets background process, a lepton either fakes a jet or escapes detection, two scenarios difficult to describe in the matrix element calculation. Finally, the QCD non-W background would require a large number of leading-order diagrams as well as a description of quarks or gluons faking leptons. The Z + jets and QCD backgrounds look very different from the signal (i.e., they produce no resonance in the dijet mass spectrum), so we expect good discrimination even without explicitly including probabilities for those background processes.

The probabilities for the individual processes described above (Pi, where i runs over the processes) are combined to form a discriminant: a quantity with a different shape for background-like events than for signal-like events. We define the discriminant to be of the form Psignal/(Psignal + Pbackground), so that background-like events have values close to zero and signal-like events have values close to unity. Psignal and Pbackground are essentially the sums of the individual probabilities for signal and background processes, but some additional factors are included to form the event probability discriminant, or EPD.

First, as noted above, various constants are omitted from the calculations of the Pi. We normalize the Pi relative to each other by calculating them for each event in large Monte Carlo samples. We then find the maximal Pi over all Monte Carlo events corresponding to a given process, Pimax. The normalized probabilities are then given by Pi/Pimax.

In addition, we multiply each Pi/Pimax by a coefficient Ci, which weights some probabilities more than others in the discriminant. The coefficients are optimized to achieve the best expected sensitivity based on the models. The full EPD is then given by

 EPD = \frac{\sum_{i=1}^{n_{\rm sig}} C_i \frac{P_i}{P_i^{\rm max}}}{\sum_{i=1}^{n_{\rm sig}} C_i \frac{P_i}{P_i^{\rm max}} + \sum_{j=1}^{n_{\rm BG}} C_j \frac{P_j}{P_j^{\rm max}}}, \qquad (7)

where the summation over signal processes runs over the WW and WZ probabilities, and the summation over background processes runs over the W + jets and single top probabilities described above.
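Equation (7) reduces to a few lines of code; here is a sketch with the per-process inputs passed as tuples (the layout is our choice):

```python
def epd(signal_terms, background_terms):
    """Event probability discriminant of Eq. (7).

    Each argument is a list of (C, P, P_max) tuples, one per process.
    Returns a value in [0, 1]: near 1 for signal-like events and near 0
    for background-like events.
    """
    s = sum(c * p / pmax for c, p, pmax in signal_terms)
    b = sum(c * p / pmax for c, p, pmax in background_terms)
    return s / (s + b)
```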

Figure 5 shows the EPD templates for signal and background processes, normalized to unit area. The background processes all have similar shapes, while the signal distribution falls more slowly. We validate the modeling of the EPD for background events by comparing data and simulation in a background-dominated region of the dijet invariant mass spectrum, away from the W/Z resonance, where we expect very little signal. The result of the comparison is shown in Fig. 6. The agreement between data and simulation is very good.

The effectiveness of the EPD in isolating signal-like events can be seen by plotting the invariant mass of the two jets in bins of EPD, as shown in Fig. 7. This quantity is expected to show a resonance around the W or Z boson mass for signal-like events. The bin with low EPD values (0–0.25), in the top left plot, contains events spanning the full dijet mass range from 20 to 200 GeV/c². In the higher EPD bins, however, the distribution is peaked around the W/Z mass. As the EPD range approaches unity, the expected signal-to-background ratio increases and the dijet mass peak becomes narrower.

### v.3 Likelihood fit

The shape of the EPD observed in data is fit to a sum of the templates shown in Fig. 5 to extract the signal cross section. The events are divided into three channels corresponding to different lepton categories: one channel for central electrons, another for central muons, and a third for events with muons collected by the E̸T plus jets trigger.

A maximum likelihood fitting procedure is used. The likelihood is defined as the product of Poisson probabilities over all bins of the EPD template, over all channels:

 \mathcal{L} = \prod_{i=1}^{n_{\rm bins}} \frac{\mu_i^{n_i}}{n_i!}\, e^{-\mu_i}, \qquad (8)

where ni and μi are the observed and predicted numbers of events in bin i, respectively. The prediction in a bin is the sum of the signal and background predictions:

 \mu_i = \sum_{k=1}^{n_{\rm sig}} s_{ik} + \sum_{k=1}^{n_{\rm bg}} b_{ik} \qquad (9)

with sik (bik) the predicted contribution from signal (background) process k in bin i. nsig is two, corresponding to the WW and WZ processes; nbg is the number of background processes.

The predicted number of events in a bin is affected by systematic uncertainties. The sources of systematic uncertainty are described in detail in Section VI. For each source of uncertainty, a nuisance parameter is introduced whose value changes the predicted contribution of a process to a bin. Each nuisance parameter has a Gaussian probability density function (p.d.f.) with a mean of zero and a width given by the 1σ uncertainty. A detailed mathematical description of the way the nuisance parameters are incorporated in the likelihood is given in Ref. stPRD ().

Finally, with a likelihood that is a function of the observed data, the signal cross section, the predicted signal and background contributions, and the systematic uncertainties with their corresponding nuisance parameters, we extract the cross section. A Bayesian marginalization technique integrates over the nuisance parameters, resulting in a posterior probability density that is a function of the signal cross section. The measured cross section corresponds to the maximum of the posterior probability density, and the 68% confidence interval is the shortest interval containing 68% of the area of the posterior probability density.
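The fit of Eqs. (8)–(9) and the cross-section extraction can be sketched as a one-parameter scan (flat prior, nuisance parameters omitted; bin contents are hypothetical):

```python
import math

def log_likelihood(n_obs, signal, background, mu):
    """Log of Eq. (8), with mu scaling the signal template: mu_i = mu*s_i + b_i."""
    total = 0.0
    for n, s, b in zip(n_obs, signal, background):
        lam = mu * s + b
        total += n * math.log(lam) - lam - math.lgamma(n + 1.0)
    return total

def posterior_maximum(n_obs, signal, background, grid):
    """Maximum of the flat-prior posterior over a grid of signal strengths.

    The analysis marginalizes the nuisance parameters with a Bayesian
    technique; this sketch keeps only the Poisson part of the likelihood.
    """
    return max(grid, key=lambda mu: log_likelihood(n_obs, signal, background, mu))
```

With a flat prior and no nuisance parameters, the posterior maximum coincides with the maximum-likelihood estimate, Σn/Σs for a background-free template.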

The measured cross section is the total cross section of the signal, σ(WW) + σ(WZ). The ratio between the WW and WZ cross sections is assumed to follow the NLO prediction. If the true ratio differs from the NLO prediction, the measurement corresponds to a combination of the two cross sections weighted by their relative acceptances; the individual cross sections themselves are not assumed to follow the NLO predictions. The ratio between the WW and WZ acceptances is predicted from the signal simulations described in Sec. IV.A.

## VI Systematic Uncertainties

Systematic uncertainties affect the normalizations of the background processes, the signal acceptance, and the shape of the EPD for both background and signal processes. The sources of systematic uncertainty, and the aspects of the measurement affected by each, are briefly described in this section. Finally, the expected contributions of the uncertainties to the cross section measurement are explored.

### vi.1 Sources of uncertainty

• Normalization of background processes: The uncertainties in the normalizations of the background processes are summarized in Table 3. The uncertainty on the W + jets normalization is taken to be an arbitrarily large number; the fit to extract the cross section constrains the W + jets normalization to a few percent, so taking a 20% uncertainty is equivalent to allowing the W + jets normalization to float. The uncertainties on the Z + jets, tt̄, and single top backgrounds are derived from the uncertainties in their cross sections and in the efficiency estimates. The 40% uncertainty on the QCD non-W contribution is a conservative estimate based on differences observed between different choices of sample models.

• Jet Energy Scale (JES): As mentioned above, jet energies are corrected for detector effects. The corrections have systematic uncertainties associated with them jet_details (). The size of the 1σ uncertainty depends on the E_T of the jet, ranging from about 3% for a jet E_T of 80 GeV to about 7% for a jet E_T of 20 GeV.

The effect of the JES uncertainty on the measurement is estimated by creating two shifted Monte Carlo samples: one in which the energy of each jet in each event is shifted up by 1σ, and a second in which each jet energy is shifted down by 1σ, taking the E_T dependence of the uncertainty into account. The whole analysis is repeated with the shifted Monte Carlo samples, including the calculation of the matrix elements.

The JES uncertainty has a small effect on the estimated signal acceptance because the efficiency of the jet selection depends on the JES; the size of the acceptance uncertainty is about 1%. In addition, the shapes of the discriminant templates for the signal processes and for the dominant W+jets background process are affected by the JES uncertainty. The change in the background shape is relatively small compared to the change in the signal shape. The signal normalization uncertainty, the signal shape uncertainty, and the background shape uncertainty are incorporated as a correlated uncertainty in the likelihood fit.
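The shifting procedure above can be sketched as follows. The interpolation endpoints (7% at 20 GeV, 3% at 80 GeV) follow the rough figures quoted in the text; the actual CDF JES parametrization is considerably more detailed, so this is illustrative only.

```python
import numpy as np

def jes_uncertainty(jet_et):
    """Illustrative E_T-dependent fractional JES uncertainty: about 7%
    at 20 GeV falling to about 3% at 80 GeV, held constant beyond."""
    return np.interp(jet_et, [20.0, 80.0], [0.07, 0.03])

def shift_jets(jet_et, direction):
    """Shift jet E_T values up (direction=+1) or down (direction=-1)
    by one sigma of the E_T-dependent JES uncertainty."""
    return jet_et * (1.0 + direction * jes_uncertainty(jet_et))

jets = np.array([25.0, 60.0, 100.0])            # jet E_T values in GeV
shifted_up = shift_jets(jets, +1)
shifted_down = shift_jets(jets, -1)
```

The full analysis, including the matrix element calculation, would then be rerun on the shifted samples.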

• Q² scale in alpgen: The factorization and renormalization scale, or Q² scale, is a parameter in the perturbative expansion used to calculate matrix elements in alpgen. Higher-order calculations become less dependent on the choice of scale, but alpgen is a leading-order generator and its modeling is affected by the choice of scale. The scale used in generating our central W+jets samples is Q² = M_W² + Σ m_T², where M_W is the mass of the W boson, m_T is the transverse mass, and the summation is over all final-state partons. alpgen W+jets samples were generated with this central scale doubled and divided by two, and the resulting variations are taken as uncertainties on the shape of the W+jets template.

• Integrated luminosity: The integrated luminosity is calculated based on the inelastic cross section and the acceptance of CDF’s luminosity monitor Lumi (). There is a 6% uncertainty on the calculation, which is included as a correlated uncertainty on the normalization of all processes except the non-W QCD background and the W+jets background, whose normalizations are determined from fits to the data.

• Initial- and final-state radiation: Comparisons between samples simulated with pythia and Drell-Yan data, in which no FSR is expected, are used to determine reasonable uncertainties for the parameters used to tune the initial- and final-state radiation in pythia MtopTemplate (). The signal samples were generated with the levels of ISR and FSR increased and decreased, and the change in the acceptance was estimated. This results in an uncertainty of about 5% on the signal acceptance.

• PDFs: The parton distribution functions (PDFs) used in generating the Monte Carlo samples have an associated uncertainty. The uncertainty on the signal acceptance is estimated in the same way as in Ref. MtopTemplate () and is found to be 2.5%.

• Jet Energy Resolution (JER): A comparison between data and simulation is used to assign an uncertainty on the jet energy resolution for a jet with measured E_T of 40 GeV TopWidth (). The matrix element calculations are repeated for the signal Monte Carlo sample with the jet energy resolution degraded accordingly, and no change in the shape of the discriminant is observed. A small uncertainty on the signal acceptance is assigned.

• W+jets modeling: In addition to the shape uncertainties on the W+jets template due to the JES and the Q² scale, we impose shape uncertainties due to mismodeling of the transverse momentum of the dijet system and of the E_T of the lower-energy jet in the event. We derive the uncertainty due to the mismodeling of these variables by reweighting the W+jets Monte Carlo model to agree with data as a function of either variable. When deriving the weights, we remove events in the dijet-mass region in which we expect most of the signal, to avoid biasing the measurement towards the expected result. The mismodeling of the former variable has a negligible effect on the shape of the discriminant, whereas the mismodeling of the latter has a small effect on its shape.
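The reweighting described in this item amounts to a bin-by-bin data/MC ratio in the chosen control variable. The function below is a simplified stand-in with uniform handling of empty bins; the exclusion of the signal-dominated dijet-mass window is left to the caller, who would mask those events before histogramming. All names here are hypothetical.

```python
import numpy as np

def derive_weights(data_vals, mc_vals, mc_evt_weights, bins):
    """Bin-by-bin data/MC ratio used to reweight a simulated W+jets model
    as a function of a single control variable (e.g. the dijet-system
    transverse momentum).  Events in the signal-dominated dijet-mass
    window should be removed from both inputs before calling this."""
    data_h, _ = np.histogram(data_vals, bins=bins)
    mc_h, _ = np.histogram(mc_vals, bins=bins, weights=mc_evt_weights)
    mc_h = mc_h * data_h.sum() / mc_h.sum()     # compare shapes only
    # Ratio per bin; empty MC bins default to a weight of 1.
    return np.divide(data_h, mc_h, out=np.ones_like(mc_h), where=mc_h > 0)

# Sanity check: identical data and MC give unit weights in every bin
vals = np.array([1.0, 2.0, 2.0, 3.0, 3.0, 3.0])
weights = derive_weights(vals, vals, np.ones_like(vals), bins=[0.5, 1.5, 2.5, 3.5])
```

Each simulated event is then weighted by the ratio in the bin containing its value of the control variable, and the reweighted template serves as the shifted shape.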

• Lepton identification efficiency: There is a 2% uncertainty on the efficiency with which we can identify and trigger on leptons. This uncertainty is assigned in the same way as the uncertainty on the integrated luminosity.

### VI.2 Effect on cross section fit

Pseudoexperiments are carried out to determine the expected uncertainty on the cross section. Each pseudoexperiment is generated by varying the bin contents of each template histogram according to Poisson distributions and by randomly drawing a value for each nuisance parameter from its p.d.f. The likelihood fit is applied to each pseudoexperiment to extract the cross section.
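A single pseudoexperiment of this kind can be sketched as follows, assuming Gaussian priors for nuisance parameters that act on template normalizations; the template contents and prior widths below are placeholders, not the analysis inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

def pseudoexperiment(templates, prior_widths):
    """Generate one pseudo-dataset: draw each nuisance parameter from a
    unit Gaussian, scale the corresponding template normalization by the
    prior width, then Poisson-fluctuate the summed bin contents."""
    total = np.zeros_like(next(iter(templates.values())))
    thetas = {}
    for name, nominal in templates.items():
        thetas[name] = rng.normal()             # nuisance in units of sigma
        total = total + nominal * (1.0 + prior_widths[name] * thetas[name])
    # Clip guards against a (rare) negative expected yield before sampling
    return rng.poisson(np.clip(total, 0.0, None)), thetas

templates = {"signal": np.array([1.0, 4.0, 9.0]),
             "wjets":  np.array([50.0, 20.0, 5.0])}
widths = {"signal": 0.06, "wjets": 0.20}        # e.g. 6% lumi, 20% W+jets prior
data, thetas = pseudoexperiment(templates, widths)
```

Repeating this many times and fitting each pseudo-dataset maps out the expected distribution of measured cross sections, from which the expected uncertainty is read off.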

In order to estimate the effect of individual systematic uncertainties, they are removed from the pseudoexperiments one by one. The expected statistical uncertainty (including the uncertainty on the background normalizations) was found to be 14%, while the total systematic uncertainty is expected to be 16%; the total (statistical plus systematic) uncertainty expected on the cross section is 21%. The largest predicted systematic uncertainties are the JES, Q² scale, and luminosity uncertainties, which contribute 8%, 7%, and 6%, respectively, to the total uncertainty.

Based on the pseudoexperiments, we can also determine which nuisance parameters are constrained in the likelihood fit. The W+jets normalization uncertainty, which has a width of 20% in the prior p.d.f., is constrained on average to 1.8% in the pseudoexperiments. The first few bins of the discriminant, which are dominated by the W+jets contribution, establish this constraint; its effect is to reduce the uncertainty in the W+jets normalization in the bins at high discriminant values, which are most important to the signal extraction.

## VII Results

The likelihood fit is carried out in a data sample corresponding to an integrated luminosity of 4.6 fb⁻¹. The shape of the discriminant observed in data is shown superimposed on the shape expected from Monte Carlo in Fig. 8. The cross section for WW+WZ production is found to be  pb, in agreement with the prediction from NLO calculations of  pb.

The cross section was extracted in each lepton channel separately as a cross-check. The results are listed in Table 4 and are consistent across lepton channels.

## VIII Fit to the dijet invariant mass

A template fit similar to the one described above was carried out using the invariant mass of the two jets rather than the matrix element discriminant, with exactly the same event selection and sources of systematic uncertainty. The dijet invariant mass distribution in data is shown superimposed on the stacked predictions in Fig. 9, and the templates for the fit are shown in Fig. 10. The signal exhibits a resonance, since the two jets are the product of W or Z boson decay, while the backgrounds have very different shapes with no apparent resonance. The shape of the W+jets background is a falling distribution shaped by the event selection cuts.

The expected uncertainty on the cross section extracted by a fit to the dijet invariant mass is about 19%, lower than the expected uncertainty when fitting the discriminant. While the statistical uncertainty is larger when fitting the dijet mass than when fitting the discriminant, the systematic uncertainty is smaller. The dominant systematic uncertainty is expected to be the shape uncertainty on the W+jets background due to the mismodeling described above, while the JES and Q² scale uncertainties are less important than when fitting the discriminant.

The cross section extracted from the fit to the dijet invariant mass is  pb. Based on pseudoexperiments, the expected correlation between the fit to the dijet mass and the fit to the discriminant is about 60%; the cross sections extracted from the two fits therefore differ by about 1.8σ.
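The quoted discrepancy follows from the standard significance of the difference of two correlated measurements, |x₁ − x₂| / σ_Δ with σ_Δ² = σ₁² + σ₂² − 2ρσ₁σ₂. A minimal sketch, with placeholder numerical inputs rather than the measured values:

```python
import math

def discrepancy_sigma(x1, s1, x2, s2, rho):
    """Significance of the difference between two correlated measurements:
    |x1 - x2| divided by the uncertainty on the difference, where rho is
    the correlation coefficient between the two results."""
    sigma_diff = math.sqrt(s1 ** 2 + s2 ** 2 - 2.0 * rho * s1 * s2)
    return abs(x1 - x2) / sigma_diff

# Placeholder inputs: two measurements with a 60% correlation
significance = discrepancy_sigma(12.0, 1.0, 10.0, 1.0, 0.6)
```

Note that a positive correlation shrinks σ_Δ, so the 60% correlation between the two fits makes a given difference in central values more significant than it would be for independent measurements.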

Fitting the dijet mass is presented here as a cross-check of the result from the matrix element technique because it is a less sensitive way of extracting the signal; in other words, the expected probability that the signal can be faked by the background is higher when fitting the dijet mass than when fitting the discriminant. As a result, the first observation of the signal in this channel was provided by the matrix element technique ourPRL (). With the data sample presented in this paper, the expected sensitivity of the matrix element technique is 5.0σ, while it is 4.6σ when fitting the dijet mass. The observed significances are 5.4σ and 3.5σ for the matrix element and dijet-mass analyses, respectively.

## IX Conclusions

We have extracted the cross section for WW+WZ production in the final state with a lepton, two jets, and missing transverse energy using a matrix element technique. The cross section is measured to be  pb, in agreement with the NLO theoretical prediction of  pb. The measurement is limited primarily by systematic uncertainties; the jet energy scale and Q² scale uncertainties both give large contributions to the total uncertainty. Improvements to the cross section measurement could be achieved by reducing the size of the systematic uncertainties via data-driven methods. The effect of systematic uncertainties on the measurement could also be reduced by further optimization of the event selection and of the discriminant.

###### Acknowledgements.
We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Culture, Sports, Science and Technology of Japan; the Natural Sciences and Engineering Research Council of Canada; the Humboldt Foundation, the National Science Council of the Republic of China; the Swiss National Science Foundation; the A.P. Sloan Foundation; the Bundesministerium für Bildung und Forschung, Germany; the Korean Science and Engineering Foundation and the Korean Research Foundation; the Science and Technology Facilities Council and the Royal Society, UK; the Institut National de Physique Nucleaire et Physique des Particules/CNRS; the Russian Foundation for Basic Research; the Ministerio de Ciencia e Innovación, and Programa Consolider-Ingenio 2010, Spain; the Slovak R&D Agency; and the Academy of Finland.

## References

• (1) K. Hagiwara, S. Ishihara, R. Szalapski, and D. Zeppenfeld, Phys. Rev. D 48, 2182 (1993).
• (2) J. M. Campbell and R. K. Ellis, Phys. Rev. D 60, 113006 (1999).
• (3) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 104, 201801 (2010); A. Abulencia et al. (CDF Collaboration), ibid. 98, 161801 (2007).
• (4) V. Abazov et al. (D0 Collaboration), Phys. Rev. Lett. 103, 191801 (2009) and Phys. Rev. D 76, 111104(R) (2007).
• (5) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 103, 091803 (2009).
• (6) V. M. Abazov et al. (D0 Collaboration), Phys. Rev. Lett. 102, 161801 (2009).
• (7) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 104, 101801 (2010).
• (8) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 103, 101802 (2009).
• (9) D. Acosta et al. (CDF Collaboration), Phys. Rev. D 71, 032001 (2005).
• (10) A. Sill et al., Nucl. Instrum. Methods A 447, 1 (2000).
• (11) T. Affolder et al., Nucl. Instrum. Methods A 526, 249 (2004).
• (12) L. Balka et al., Nucl. Instrum. Methods A 267, 272 (1988).
• (13) S. Bertolucci et al., Nucl. Instrum. Methods A 267, 301 (1988).
• (14) M. Albrow et al., Nucl. Instrum. Methods A 480, 524 (2002).
• (15) G. Apollinari et al., Nucl. Instrum. Methods A 412, 515 (1998).
• (16) G. Ascoli et al., Nucl. Instrum. Methods A 268, 33 (1988).
• (17) D. Acosta et al., Nucl. Instrum. Methods A 494, 57 (2002).
• (18) E. J. Thomson et al., IEEE Trans. Nucl. Sci. 49, 1063 (2002).
• (19) A. Abulencia et al., J. Phys. G 34, 2457 (2007).
• (20) A. Bhatti et al., Nucl. Instrum. Methods A 566, 375 (2006).
• (21) T. Aaltonen et al. (CDF Collaboration), arXiv:1004.1181.
• (22) E. Gerchtein and M. Paulini, CHEP03 Conference Proceedings, 2003.
• (23) T. Sjöstrand et al., Comput. Phys. Commun., 135, 238 (2001).
• (24) M. L. Mangano et al., J. High Energy Phys. 07 (2003) 001.
• (25) J. Alwall et al., J. High Energy Phys. 09 (2007) 028.
• (26) J. Pumplin et al., J. High Energy Phys. 07 (2002) 012.
• (27) D. Acosta et al. (CDF collaboration), Phys. Rev. Lett. 94, 091803 (2005); M. Cacciari et al., J. High Energy Phys. 09 (2008) 127; B. W. Harris et al., Phys. Rev. D 66, 054024 (2002).
• (28) P. J. Dong, Ph.D. Thesis, University of California at Los Angeles, 2008, FERMILAB-THESIS-2008-12.
• (29) C. Amsler et al., Phys. Lett. B 667, 1 (2008).
• (30) I. Murayama, H. Watanabe and K. Hagiwara, Tech. Rep. 91-11, KEK (1992).
• (31) A. Genz and A. Malik, J. Comput. Appl. Math. 6, 295 (1980); implemented as CERNLIB algorithm D120, documented at http://wwwasdoc.web.cern.ch/wwwasdoc/shortwrupsdir/d120/top.html.
• (32) T. Hahn, Comput. Phys. Commun. 168, 78 (2005).
• (33) D. Acosta et al., Nucl. Instrum. Methods A 494, 57 (2002).
• (34) A. Abulencia et al. (CDF Collaboration), Phys. Rev. D 73, 032003 (2006).
• (35) T. Aaltonen et al. (CDF Collaboration), Phys. Rev. Lett. 102, 042001 (2009).