
# Combined CDF and D0 Upper Limits on Standard Model Higgs Boson Production with up to 8.2 fb−1 of Data

The TEVNPH Working Group for the CDF and D0 Collaborations (the Tevatron New-Phenomena and Higgs Working Group can be contacted at TEVNPHWG@fnal.gov; more information can be found at http://tevnphwg.fnal.gov/)
###### Abstract

We combine results from CDF and D0’s direct searches for the standard model (SM) Higgs boson (H) produced in pp̄ collisions at the Fermilab Tevatron at √s = 1.96 TeV. The results presented here include those channels which are most sensitive to Higgs bosons with mass between 130 and 200 GeV/c², namely searches targeted at Higgs boson decays to W+W−, although acceptance for decays into τ+τ− and γγ is included. Compared to the previous Tevatron Higgs search combination, more data have been added and the analyses have been improved to gain sensitivity. We use the MSTW08 parton distribution functions and the latest theoretical cross section predictions when testing for the presence of a SM Higgs boson. With up to 7.1 fb⁻¹ of data analyzed at CDF, and up to 8.2 fb⁻¹ at D0, the 95% C.L. upper limit on Higgs boson production is a factor of 0.54 times the SM cross section for a Higgs boson mass of 165 GeV/c². We exclude at the 95% C.L. the region 158 < mH < 173 GeV/c².
Preliminary Results

FERMILAB-CONF-11-044-E

CDF Note 10441

D0 Note 6184

## I Introduction

The search for a mechanism of electroweak symmetry breaking, and in particular for a standard model (SM) Higgs boson, has been a major goal of particle physics for many years, and is a central part of the Fermilab Tevatron physics program and of that of the Large Hadron Collider. Recently, the ATLAS and CMS collaborations at the Large Hadron Collider have released results on searches for the SM Higgs boson decaying to W+W− atlasww (); cmsww (), ZZ atlaszz (), and γγ atlasgammagamma (). Both the CDF and D0 collaborations have performed new combinations cdfHWW (); DZHiggs () of multiple direct searches for the SM Higgs boson. The new searches use larger data samples and incorporate improved analysis techniques compared to previous analyses. The sensitivities of these new combinations significantly exceed those of previous combinations prevhiggs (); WWPRLhiggs ().

In this note, we combine the most recent results of searches for the SM Higgs boson produced in pp̄ collisions at √s = 1.96 TeV. The analyses combined here seek signals of Higgs bosons produced in association with vector bosons (WH or ZH), through gluon-gluon fusion (gg → H), and through vector boson fusion (VBF), corresponding to integrated luminosities of up to 7.1 fb⁻¹ at CDF and up to 8.2 fb⁻¹ at D0. The searches included here target Higgs boson decays to W+W−, although acceptance for decays into τ+τ− and γγ is included in the D0 channels.

The searches are separated into 46 mutually exclusive final states (12 for CDF and 34 for D0; see Tables 1 and 2) referred to as “analysis sub-channels” in this note. The selection procedures for each analysis are detailed in Refs. cdfHWW () through dzHgg (), and are briefly described below.

## II Channels Included in the Combination

Event selections are similar for the corresponding CDF and D0 analyses.

For the H → W+W− analyses, which seek events in which both W bosons decay leptonically, signal events are characterized by large missing transverse energy and two oppositely-signed, isolated leptons. The presence of neutrinos in the final state prevents the accurate reconstruction of the candidate Higgs boson mass. D0 selects events containing large missing transverse energy and electrons and/or muons, dividing the data sample into three final states: e+e−, e±μ∓, and μ+μ−. The searches in the three final states each use 8.1 fb⁻¹ of data, subdivided according to the number of jets in the event: 0, 1, or 2 or more (“2+”) jets. Decays involving tau leptons are included in two orthogonal ways. For the first time, a dedicated analysis (μτhad) using 7.3 fb⁻¹ of data, studying the final state involving a muon and a hadronic tau decay, is included in the Tevatron combination. Final states involving other tau decays and mis-identified hadronic tau decays are included in the e+e−, e±μ∓, and μ+μ− final state analyses. CDF separates the H → W+W− events into five non-overlapping samples, split into “high s/b” and “low s/b” lepton categories and further divided by the number of reconstructed jets: 0, 1, or 2+ jets. The sample with two or more jets is not split into low and high s/b lepton categories due to limited statistics. A sixth CDF channel is the low dilepton mass channel, which accepts events with dilepton mass below 16 GeV. This channel increases the sensitivity of the analyses at low mH, adding roughly 10% additional acceptance. The division of events into jet categories allows the analysis discriminants to separate three different categories of signals from the backgrounds more effectively. The signal production mechanisms considered are gluon-gluon fusion, WH/ZH associated production, and vector-boson fusion. The D0 e+e−, e±μ∓, and μ+μ− final state channels use boosted decision tree outputs as the final discriminants, while the μτhad channel uses neural networks. CDF uses neural-network outputs, including likelihoods constructed from calculated matrix-element probabilities as additional inputs for the 0-jet bin.

D0 includes WH → WWW analyses in which the associated W boson and the W boson from the Higgs boson decay that has the same charge are required to decay leptonically, thereby defining three like-sign dilepton final states (e±e±, e±μ±, and μ±μ±). The combined output of two decision trees, trained against the instrumental and diboson backgrounds respectively, is used as the final discriminant. CDF also includes a separate analysis of events with same-sign leptons and large missing transverse energy, to incorporate additional potential signal from associated production events in which the two leptons (one from the associated vector boson and one from a W boson produced in the Higgs decay) have the same charge. CDF for the first time also incorporates three tri-lepton channels to include additional associated production contributions, where leptons result from the associated W boson and the two W bosons produced in the Higgs decay, or where an associated Z boson decays into a dilepton pair and a third lepton is produced in the decay of either of the W bosons resulting from the Higgs decay. In the latter case, CDF separates the sample into 1-jet and 2+-jet sub-channels to take full advantage of the Higgs mass constraint available in the 2+-jet case, where all of the decay products are reconstructed.

CDF also includes opposite-sign channels in which one of the two lepton candidates is a hadronically decaying tau. Events are separated into e–τhad and μ–τhad channels. The final discriminants are obtained from boosted decision trees which incorporate both hadronic tau identification variables and kinematic event variables as inputs.

D0 includes channels in which one of the W bosons in the H → W+W− process decays leptonically and the other decays hadronically. Electron and muon final states are studied separately, each with 5.4 fb⁻¹ of data. Random forests are used for the final discriminants.

D0 has updated its search for Higgs boson production in which the Higgs boson decays into a pair of photons, to include 8.2 fb⁻¹ of data and to use a boosted decision tree as the final discriminant.

The D0 analysis of the τ+τ− plus jets final state has also been updated to include additional data (4.3 fb⁻¹) and both muonic and electronic tau decays. The output of boosted decision trees is used as the final discriminant.

For both CDF and D0, events from QCD multijet (instrumental) backgrounds are measured in independent data samples using several different methods. For CDF, backgrounds from SM processes with electroweak gauge bosons or top quarks were generated using the pythia pythia (), alpgen alpgen (), mc@nlo MC@NLO (), and herwig herwig () programs. For D0, these backgrounds were generated using pythia, alpgen, and comphep comphep (), with pythia providing parton showering and hadronization for all the generators. These background processes were normalized using either experimental data or next-to-leading order calculations (including mcfm mcfm () for the W + heavy flavor process).

Tables 1 and 2 summarize, for CDF and D0 respectively, the integrated luminosities, the Higgs boson mass ranges over which the searches are performed, and references to further details for each analysis.

## III Signal Predictions and Uncertainties

We normalize our Higgs boson signal predictions to the most recent highest-order calculations available for all production processes considered. The largest production cross section, σ(gg → H), is calculated at next-to-next-to-leading order (NNLO) in QCD with soft gluon resummation to next-to-next-to-leading-log (NNLL) accuracy, and also includes two-loop electroweak effects and handling of the running b-quark mass anastasiou (); grazzinideflorian (). The numerical values in Table 3 are updates grazziniprivate () of these predictions with mt set to 173.1 GeV/c² tevtop10 (), and an exact treatment of the massive top and bottom loop corrections up to next-to-leading-order (NLO) + next-to-leading-log (NLL) accuracy. The factorization and renormalization scale choice for this calculation is μF = μR = mH/2. These calculations are refinements of earlier NNLO calculations of the gg → H production cross section harlanderkilgore2002 (); anastasioumelnikov2002 (); ravindran2003 (). Electroweak corrections were computed in Refs. actis2008 (); aglietti (). Soft gluon resummation was introduced in the prediction of the gg → H production cross section in Ref. catani2003 ().

The gg → H production cross section depends strongly on the gluon parton distribution function and the accompanying value of αs. The cross sections used here are calculated with the MSTW 2008 NNLO PDF set mstw2008 (), as recommended by the PDF4LHC working group pdf4lhc (). The reason to use this PDF parameterization is that it is the only NNLO prescription that results from a fully global fit to all relevant data. We follow the PDF4LHC working group’s prescription to evaluate the uncertainties on the gg → H production cross section due to the PDFs. This prescription is to evaluate the predictions of σ(gg → H) at NLO using the global NLO PDF sets CTEQ6.6 cteq66 (), MSTW08 mstw2008 (), and NNPDF2.0 nnpdf (), and to take the envelope of the predictions and their uncertainties due to PDF+αs for the three sets as the uncertainty range at NLO. The ratio of the NLO uncertainty range to that of MSTW08 alone is then used to scale the NNLO MSTW08 PDF+αs uncertainty, to estimate a larger uncertainty at NNLO. This procedure roughly doubles the PDF+αs uncertainty relative to MSTW08 at NNLO alone. Other PDF sets, notably ABKM09 abkm09 () and HERAPDF1.0 herapdf (), have been proposed bd (); bagliocrit () as alternate PDF sets for use in setting the uncertainties on σ(gg → H). These two PDF sets fit HERA data only in the case of HERAPDF1.0, and HERA, fixed-target deep-inelastic scattering (DIS), fixed-target Drell-Yan (DY), and neutrino-nucleon di-muon data in the case of ABKM09, and do not include Tevatron jet data cdfinclusivejet (); d0inclusivejet (). We note that HERA, fixed-target DIS, fixed-target DY, and neutrino-nucleon di-muon data do not directly probe the gluon PDF at the scales most relevant for our analyses, but are most sensitive to the quark distributions. The high-pT Tevatron jet data contain gluon-initiated processes at the relevant momentum transfer scales.
The PDF constraints from the HERA, fixed-target DIS, fixed-target DY, and di-muon data are extrapolated to make predictions of the high-x gluon density at the relevant energy scales. The reliability of these extrapolations is checked for the HERAPDF1.0 PDF set by showing the NLO prediction fastnlo () of the inclusive jet spectrum as a function of jet transverse momentum and rapidity, as compared with measurements by CDF and D0 as shown in Figures 1 and 2, respectively. The measured inclusive jet cross sections are significantly under-predicted using this PDF set. The corresponding comparisons using the same NLO jet cross section calculation with the MSTW08 PDF set are shown in Figures 49 and 50 of Ref. mstw2008 (). The agreement with the jet data is much better with the MSTW2008 PDF set, which is expected, since those data are included in the MSTW08 PDF fit. Consequently, the high-x gluon component is larger in the MSTW2008 PDF set than in the HERAPDF1.0 PDF set. This forms an important experimental foundation for the prediction of the gluon-fusion Higgs boson production cross section. Recently, it has been pointed out abm2011 () that the gluon PDF in the ABKM09 fit, and thus the predicted values of σ(gg → H), depend on the handling of the data input from the New Muon Collaboration (NMC), specifically whether the differential cross sections measured by NMC are included in the fit or whether the structure function reported by NMC is used instead. It has been verified by the NNPDF Collaboration nnpdfnmc () that the NNPDF2.1 global PDF fit is negligibly impacted by changing the handling of the input NMC data, indicating that the global fit, which includes Tevatron jet data, is more robust and less sensitive to the precise handling of the NMC data. It should be noted, however, that this study was performed only at NLO.
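The PDF4LHC envelope prescription described above can be sketched numerically. In this minimal sketch, all cross section values and uncertainties are hypothetical placeholders, not the published numbers; only the envelope-and-rescale logic follows the prescription in the text.

```python
# Sketch of the PDF4LHC envelope prescription for the PDF+alpha_s
# uncertainty on sigma(gg->H). All numerical inputs are hypothetical
# placeholders, not the published cross sections.

def envelope(predictions):
    """predictions: list of (central, uncertainty) pairs, one per NLO
    PDF set. Returns the (midpoint, half-width) of their envelope."""
    lo = min(c - u for c, u in predictions)
    hi = max(c + u for c, u in predictions)
    return 0.5 * (lo + hi), 0.5 * (hi - lo)

# Hypothetical NLO predictions (pb) with PDF+alpha_s uncertainties,
# standing in for CTEQ6.6, MSTW08, and NNPDF2.0:
nlo_sets = [(0.90, 0.05), (0.95, 0.04), (0.92, 0.06)]
_, nlo_env = envelope(nlo_sets)

# Scale the NNLO MSTW08 PDF+alpha_s uncertainty by the ratio of the
# NLO envelope half-width to the NLO MSTW08 uncertainty alone:
mstw_nlo_unc = 0.04   # hypothetical
mstw_nnlo_unc = 0.05  # hypothetical
nnlo_unc = mstw_nnlo_unc * (nlo_env / mstw_nlo_unc)
```

With these toy inputs, the rescaling inflates the NNLO uncertainty whenever the three-set envelope is wider than the single-set uncertainty, which is the intended effect of the prescription.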

We also include uncertainties on σ(gg → H) due to uncalculated higher-order processes by following the standard procedure of varying the factorization and renormalization scales up and down together by a factor of two. It had been proposed to vary μF and μR independently bd (), but the maximum impact on σ(gg → H) is found when varying the two scales together. At a central scale choice of μ = mH/2, the fixed-order calculations at NLO and NNLO converge rapidly and the scale uncertainties generously cover the differences. Furthermore, the NNLO calculation using a scale choice of μ = mH/2 is very close to the NNLL+NNLO calculation that we use. The authors of the NNLL+NNLO calculation recommend the scale choice μ = mH/2 for the central value grazzinideflorian (). The uncertainty on σ(gg → H) is evaluated by varying μ from a lower value of mH/4 to an upper value of mH, the customary factor-of-two variation, and by evaluating the NNLO+NNLL cross sections at these scales. A larger scale variation has been proposed bd (). The original justification for this proposal bd () was for the LO calculation to cover the NNLO central value. The LO calculation, however, does not have enough scale-dependent terms for its scale uncertainty to be fairly considered: it is missing significant QCD radiation and other terms that increase the scale dependence at NLO compared with that at LO. The second justification bagliocrit () is simply that the higher-order QCD corrections are larger than those of other processes, such as Drell-Yan production. The corrections are large because there are gluons in the initial state, which radiate additional gluons more frequently than the quarks that initiate other processes. The scale dependence of the calculations of σ(gg → H) which include this radiation and other terms is correspondingly larger, and the customary range of scale variation provides uncertainties which are commensurate with the sizes of the corrections.
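The customary factor-of-two variation can be illustrated with a toy cross section. This is a minimal sketch: the functional form and coefficients below are invented for illustration and are not a QCD result; only the μ0/2 to 2μ0 bracketing procedure follows the text.

```python
# Toy illustration of the customary factor-of-two scale variation
# around a central scale mu0 = mH/2.
import math

m_h = 165.0      # GeV, example Higgs boson mass
mu0 = m_h / 2.0  # central scale choice of the NNLL+NNLO calculation

def sigma_toy(mu):
    # Hypothetical residual scale dependence of a truncated perturbative
    # series; the coefficients are invented for illustration only.
    log_ratio = math.log(mu / m_h)
    return 1.0 - 0.15 * log_ratio + 0.03 * log_ratio ** 2  # "pb"

central = sigma_toy(mu0)
low, high = sigma_toy(2.0 * mu0), sigma_toy(mu0 / 2.0)  # mu0/2 .. 2*mu0
scale_unc = (max(low, high) - min(low, high)) / central
```

The spread between the evaluations at μ0/2 and 2μ0, relative to the central value, plays the role of the residual higher-order uncertainty.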
An independent calculation by Ahrens, Becher, Neubert, and Yang Ahrens (), using a renormalization-group-improved resummation of radiative corrections at NNLL accuracy, gives an alternative prediction of σ(gg → H) that converges more rapidly and carries even smaller scale uncertainties. The central values of this calculation are in excellent agreement with those used in our analysis, which are given in Table 3.

Because the analyses separate the data into categories based on the number of observed jets, we assess factorization and renormalization scale and PDF variations separately for each jet category, as evaluated in Ref. anastasiouwebber (). This calculation is at NNLO for H+0 jets, at NLO for H+1 jet, and at LO for H+2 or more jets. A newer, more precise calculation campbell2j () of the H+2 or more jets cross section at NLO is used to evaluate the uncertainties in this category. These scale uncertainties are used instead of the inclusive NNLL+NNLO scale uncertainty because we require them in each jet category, and the uncertainties we use are significantly larger than the inclusive scale uncertainty. The scale choice affects the pT spectrum of the Higgs boson when produced in gluon-gluon fusion, and this effect changes the acceptance of the selection requirements and also the shapes of the distributions of the final discriminants. The effect of the acceptance change is included in the calculations of Ref. anastasiouwebber () and Ref. campbell2j (), as the experimental requirements are simulated in these calculations. The effects on the final discriminant shapes are obtained by reweighting the pT spectrum of the Higgs boson in our Monte Carlo simulation to higher-order calculations. The Monte Carlo signal simulation used by CDF and D0 is provided by the LO generator pythia pythia (), which includes a parton shower and fragmentation and hadronization models. We reweight the Higgs boson pT spectra in our pythia Monte Carlo samples to the prediction of hqt hqt () when making predictions of differential distributions of signal events. To evaluate the impact of the scale uncertainty on our differential spectra, we use the resbos resbos () generator, apply the scale-dependent differences in the Higgs boson pT spectrum to the hqt prediction, and propagate these to our final discriminants as a systematic uncertainty on the shape, which is included in the calculation of the limits.

We treat the scale uncertainties as 100% correlated between jet categories and between CDF and D0, and also treat the PDF+αs uncertainties in the cross section as correlated between jet categories and between CDF and D0. We treat, however, the PDF uncertainty as uncorrelated with the scale uncertainty. The main reason is that the PDF uncertainty arises from experimental uncertainties and PDF parameterization choices, while the scale uncertainty arises from neglected higher-order terms in the perturbative cross section calculations. The PDF predictions do depend on the scale choice, however. The dependence of the prediction of σ(gg → H) on the scale via the PDFs is included as part of the scale uncertainty and not as part of the PDF uncertainty, to ensure that all scale dependence is considered correlated. Furthermore, we have verified anastasiouprivate () that the fractional change in the prediction of σ(gg → H) due to the PDF variation depends negligibly on the value of the scale choice, justifying the treatment of PDF and scale uncertainties as uncorrelated. As described in Section V, systematic uncertainties arising from uncorrelated sources should not be added linearly, but instead should be considered to fluctuate independently of one another. We have investigated the use of a Gaussian prior for the scale uncertainty, using the mH/4-to-mH variation as the ±1 standard deviation interval, and compared it with the use of a flat prior between the lower and upper edges, and have found a negligible impact on our sensitivity and our observed limits. The main reason for this is that while the flat prior has a higher density near the edges, the Gaussian prior has longer tails, and the two effects essentially compensate. The use of a flat prior with edges for the scale uncertainty does not affect the choice of whether or not to add this uncertainty linearly with the PDF uncertainty; that choice is determined by the correlations of the two sources of uncertainty.

Another source of uncertainty in the prediction of σ(gg → H) is the extrapolation of the QCD corrections computed for the heavy top-quark loops to the light-quark loops included as part of the electroweak corrections. Uncertainties at the level of 1-2% are already included in the cross section values we use grazzinideflorian (); anastasiou (). In Ref. anastasiou (), it is argued that the factorization of QCD corrections is known to work well for Higgs boson masses many times in excess of the masses of the loop particles. A 4% change in the predicted cross section is seen when all QCD corrections are removed from the diagrams containing light-flavored quark loops; this is an overly conservative estimate of the associated uncertainty. For the b-quark loop, which is computed separately in Ref. anastasiou (), the QCD corrections are much smaller than for the top loop, giving further confidence that this extrapolation does not introduce large uncertainties.

We include all significant Higgs boson production modes in our searches. Besides gluon-gluon fusion through virtual quark loops (GGF), we include Higgs boson production in association with a W or Z vector boson, and vector boson fusion (VBF). We use the WH and ZH production cross sections computed at NNLO in Ref. djouadibaglio (). This calculation starts with the NLO calculation of v2hv v2hv () and includes NNLO QCD contributions vhnnloqcd (), as well as one-loop electroweak corrections vhewcorr (). We use the VBF cross section computed at NNLO in QCD in Ref. vbfnnlo (). Electroweak corrections to the VBF production cross section, computed with the hawk program hawk (), are small and negative (−2% to −3%) for the Higgs boson mass range considered here. They are not included in this update but will be incorporated in future results.

In order to predict the kinematic distributions of Higgs boson signal events, CDF and D0 use the pythia pythia () Monte Carlo program with the CTEQ5L and CTEQ6L cteq () leading-order (LO) parton distribution functions. The Higgs boson decay branching ratio predictions are calculated with hdecay hdecay () (Version 3.53), and are also listed in Table 3. While the WWH coupling is well predicted, B(H → W+W−) depends on the partial widths of all other Higgs boson decays. The partial width Γ(H → bb̄) is sensitive to mb and αs, Γ(H → cc̄) is sensitive to mc and αs, and Γ(H → gg) is sensitive to αs. The impacts of these uncertainties on B(H → W+W−) depend on mH because the bb̄, cc̄, and gg partial widths become very small for Higgs boson masses above 160 GeV, while they have a larger impact at lower mH. We use the uncertainties on the branching fractions from Ref. bagliodjouadilittlelhc (). At the low end of the mass range considered, the mb, αs, and mc variations each give relative variations in B(H → W+W−) at the percent level; above mH ≈ 160 GeV, all three of these uncertainties are below 0.1% and remain small for higher mH.

## IV Distributions of Candidates

All analyses provide binned histograms of the final discriminant variables for the signal and background predictions, itemized separately for each source, and for the observed data. Because the number of channels combined is large, and the number of bins in each channel is also large, the task of assembling histograms and checking whether the expected and observed limits are consistent with the input predictions and observed data is difficult. We therefore provide histograms that aggregate all channels’ signals, backgrounds, and data together. In order to preserve most of the sensitivity gain that is achieved by binning the data instead of collecting them all together and counting, we aggregate the data and predictions in narrow bins of the signal-to-background ratio, s/b. Data with similar s/b may be added together with no loss in sensitivity, assuming similar systematic uncertainties on the predictions. The aggregate histograms do not show the effects of systematic uncertainties, but instead compare the data with the central predictions supplied by each analysis.

The range of s/b is quite large in each analysis, and so log10(s/b) is chosen as the plotting variable. Distributions of log10(s/b) are shown for mH = 130 and 165 GeV/c² in Figure 3. These distributions can be integrated from the high-s/b side downwards, showing the sums of signal, background, and data for the most pure portions of the selections of all channels added together. These integrals are shown in Figure 4. The most significant candidates are found in the bins with the highest s/b; an excess in these bins relative to the background prediction drives the Higgs boson cross section limit upwards, while a deficit drives it downwards. The lower-s/b bins show that the modeling of the rates and kinematic distributions of the backgrounds is very good.
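The aggregation into bins of log10(s/b) and the integration from the high-s/b side can be sketched as follows. The per-bin yields are hypothetical; only the sorting, binning, and cumulative-sum logic reflects the procedure described above.

```python
# Sketch of aggregating analysis bins from many channels into bins of
# log10(s/b), then integrating from the high-s/b side downwards.
# The (signal, background, data) bin contents are hypothetical.
import math
from collections import defaultdict

analysis_bins = [(0.1, 50.0, 48), (0.5, 10.0, 12), (1.2, 3.0, 2),
                 (0.02, 200.0, 190), (2.0, 1.5, 4)]

width = 0.5  # width of the log10(s/b) aggregation bins
agg = defaultdict(lambda: [0.0, 0.0, 0])
for s, b, n in analysis_bins:
    key = math.floor(math.log10(s / b) / width)
    agg[key][0] += s
    agg[key][1] += b
    agg[key][2] += n

# Cumulative sums, starting from the most signal-like (highest s/b) bins:
total_s = total_b = total_n = 0.0
cumulative = []
for key in sorted(agg, reverse=True):
    s, b, n = agg[key]
    total_s += s
    total_b += b
    total_n += n
    cumulative.append((key * width, total_s, total_b, total_n))
```

Bins with similar s/b end up summed together, and the first entries of `cumulative` correspond to the most pure portions of the combined selection.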

We also show the distributions of the data after subtracting the expected background, and compare them with the expected signal yield for a standard model Higgs boson, after collecting all bins in all channels sorted by s/b. These background-subtracted distributions are shown in Figure 5. These graphs also show the remaining uncertainty on the background prediction after fitting the background model to the data within the systematic uncertainties on the rates and shapes in each contributing channel’s templates.

## V Combining Channels

To gain confidence that the final result does not depend on the details of the statistical formulation, we perform two types of combinations, using Bayesian and Modified Frequentist approaches, which yield limits on the Higgs boson production rate that agree within 10% at each value of mH, and within 1% on average. Both methods rely on the distributions of the final discriminants, and not just on their integrated values. Systematic uncertainties enter as uncertainties on the predicted numbers of signal and background events, as well as on the distributions of the discriminants in each analysis (“shape uncertainties”). Both methods use likelihood calculations based on Poisson probabilities.

Both methods treat the systematic uncertainties in a Bayesian fashion, assigning a prior distribution to each source of uncertainty, parameterized by a nuisance parameter, and propagating the impact of varying each nuisance parameter to the signal and background predictions, with all correlations included. A single nuisance parameter may affect the signal and background predictions in many bins of many different analyses. Independent nuisance parameters are allowed to vary separately within their prior distributions. Both methods use the data to constrain the values of the nuisance parameters, one by integration, the other by fitting. These constraints reduce the impact of prior uncertainty in the nuisance parameters, thus improving the sensitivity. Because of these constraints to the data, it is important to evaluate the uncertainties and correlations properly, and to allow independent parameters to vary separately; otherwise a fit may overconstrain a parameter and extrapolate its use improperly. The impacts of correlated uncertainties add linearly on a particular prediction, while those of uncorrelated uncertainties are convoluted together, which is similar to adding in quadrature. Adding uncorrelated uncertainties linearly implies a correlation which is not present, and may produce incorrect results.
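The difference between linear addition (correlated sources) and convolution (uncorrelated sources) can be checked with a quick toy; the two fractional uncertainties below are invented values chosen only to make the contrast visible.

```python
# Toy check: two uncertainty sources driven by one shared nuisance
# parameter add linearly, while independent sources combine roughly
# in quadrature. The fractional uncertainties are invented values.
import math
import random

random.seed(4)
u1, u2 = 0.10, 0.15
n = 100000

def spread(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

corr, uncorr = [], []
for _ in range(n):
    t = random.gauss(0.0, 1.0)        # one shared nuisance parameter
    corr.append(1.0 + (u1 + u2) * t)  # fully correlated: impacts add linearly
    t1, t2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    uncorr.append(1.0 + u1 * t1 + u2 * t2)  # independent: convolution

rms_corr = spread(corr)      # approaches u1 + u2 = 0.25
rms_uncorr = spread(uncorr)  # approaches sqrt(u1^2 + u2^2) ~ 0.18
```

Treating independent sources as if they were correlated would inflate the combined uncertainty from roughly 0.18 to 0.25 in this toy, which is the error the text warns against.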

Both methods set limits at the 95% confidence level (or “credibility level” in the case of the Bayesian method). This is a probabilistic statement. To be consistent and accurate, we give equal treatment to all sources of uncertainty, both theoretical and experimental. Ref. bagliocrit () proposes testing the minimum possible production cross section and quoting our exclusion limits based on only this prediction. We do not claim that our exclusion limits are independent of the model choices that are made, and hence we are not required to test only the case in which all values of all uncertain parameters fluctuate simultaneously to the values that produce the weakest results. To do so would produce inconsistent results when computing limits and when attempting to discover. A set of values for the nuisance parameters which weakens the limit may strengthen a discovery and vice versa, particularly for those parameters that affect the background rate and shape predictions. By always setting uncertain parameters to their most extreme values, we could find we have an excess of data over the background prediction when we set our limits, and a deficit of data with respect to the background in the same sample when attempting to make a discovery. The prescriptions described below provide a consistent method for both tasks.

### V.1 Bayesian Method

Because there is no experimental information on the production cross section for the Higgs boson, in the Bayesian technique prevhiggs (); pdgstats () we assign a flat prior for the total number of selected Higgs boson events. For a given Higgs boson mass, the combined likelihood is a product of likelihoods for the individual channels, each of which is a product over histogram bins:

$$\mathcal{L}(R,\vec{s},\vec{b}\,|\,\vec{n},\vec{\theta})\times\pi(\vec{\theta}) = \prod_{i=1}^{N_C}\prod_{j=1}^{N_b}\frac{\mu_{ij}^{n_{ij}}\,e^{-\mu_{ij}}}{n_{ij}!}\times\prod_{k=1}^{n_{\rm np}}e^{-\theta_k^2/2} \qquad (1)$$

where the first product is over the number of channels (N_C), and the second product is over the N_b histogram bins containing n_ij events, binned in ranges of the final discriminants used for the individual analyses, such as the dijet mass, neural-network outputs, or matrix-element likelihoods. The parameters that contribute to the expected bin contents are μ_ij = R × s_ij(θ) + b_ij(θ) for channel i and histogram bin j, where s_ij and b_ij represent the expected signal and background in the bin, and R is a scaling factor applied to the signal to test the sensitivity level of the experiment. Truncated Gaussian priors are used for each of the nuisance parameters θ_k, which parameterize the sensitivity of the predicted signal and background estimates to systematic uncertainties. These can take the form of uncertainties on overall rates, as well as on the shapes of the distributions used in the combination. These systematic uncertainties can be far larger than the expected SM Higgs boson signal, and are therefore important in the calculation of limits. The truncation is applied so that no prediction of any signal or background in any bin is negative. The posterior density function is then integrated over all parameters (including correlations) except for R, and a 95% credibility level upper limit on R is estimated by calculating the value of R that corresponds to 95% of the area of the resulting distribution.
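The marginalization over nuisance parameters and the extraction of the 95% upper limit on R can be sketched for a single counting bin. All yields and uncertainties below are invented toy values, the truncation is approximated by clipping, and the real combination integrates over thousands of bins with correlated nuisance parameters.

```python
# Minimal single-bin sketch of the Bayesian limit: flat prior on the
# signal scale factor R, nuisance parameters marginalized by Monte
# Carlo sampling of (clipped) Gaussian priors. Toy numbers throughout.
import math
import random

random.seed(1)
n_obs, s0, b0 = 5, 2.0, 4.0  # toy observed count, nominal signal, background
ds, db = 0.2, 0.5            # toy fractional rate uncertainties

def poisson(n, mu):
    return mu ** n * math.exp(-mu) / math.factorial(n)

def marginal_likelihood(R, n_samples=2000):
    # Average the Poisson likelihood over nuisance-parameter draws,
    # clipping so that no predicted rate goes negative.
    total = 0.0
    for _ in range(n_samples):
        s = max(0.0, s0 * (1.0 + ds * random.gauss(0.0, 1.0)))
        b = max(0.0, b0 * (1.0 + db * random.gauss(0.0, 1.0)))
        total += poisson(n_obs, R * s + b)
    return total / n_samples

# Flat prior on R: posterior density is proportional to the marginal
# likelihood. Integrate on a grid and find the 95% quantile.
grid = [0.05 * i for i in range(200)]
dens = [marginal_likelihood(R) for R in grid]
norm = sum(dens)
acc, r95 = 0.0, None
for R, d in zip(grid, dens):
    acc += d / norm
    if acc >= 0.95:
        r95 = R
        break
```

The value `r95` plays the role of the 95% credibility level upper limit on R for this toy bin.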

### V.2 Modified Frequentist Method

The Modified Frequentist technique relies on the CLs method, using a log-likelihood ratio (LLR) as the test statistic DZHiggs ():

$$\mathrm{LLR} = -2\ln\frac{p(\mathrm{data}|H_1)}{p(\mathrm{data}|H_0)}, \qquad (2)$$

where H1 denotes the test hypothesis, which admits the presence of SM backgrounds and a Higgs boson signal, while H0 is the null hypothesis, with only SM backgrounds. The probabilities are computed using the best-fit values of the nuisance parameters for each pseudo-experiment, separately for each of the two hypotheses, and include the Poisson probabilities of observing the data multiplied by Gaussian priors for the values of the nuisance parameters. This technique extends the LEP procedure pdgstats (), which does not involve a fit, in order to yield better sensitivity when expected signals are small and systematic uncertainties on backgrounds are large pflh ().
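For fixed nuisance parameters, the LLR of Eq. (2) reduces to a sum over Poisson bins. A minimal sketch with invented yields (the real procedure additionally fits the nuisance parameters separately under each hypothesis, which is omitted here for brevity):

```python
# Sketch of the log-likelihood ratio test statistic of Eq. (2) for a
# set of Poisson bins, with fixed (non-fitted) toy yields.
import math

def log_poisson(n, mu):
    # ln of the Poisson probability P(n|mu)
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def llr(data, signal, background):
    # LLR = -2 ln [ p(data|H1) / p(data|H0) ]
    ll_h1 = sum(log_poisson(n, s + b)
                for n, s, b in zip(data, signal, background))
    ll_h0 = sum(log_poisson(n, b) for n, b in zip(data, background))
    return -2.0 * (ll_h1 - ll_h0)

# Toy yields for three discriminant bins (hypothetical numbers):
data = [4, 7, 2]
signal = [0.5, 1.0, 1.5]
background = [4.0, 6.0, 1.0]
llr_obs = llr(data, signal, background)
```

Negative LLR values indicate a more signal-plus-background-like outcome, positive values a more background-like one.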

The CLs technique involves computing two p-values, CLs+b and CLb. The latter is defined by

$$1-CL_b = p(\mathrm{LLR}\le \mathrm{LLR}_{\rm obs}\,|\,H_0), \qquad (3)$$

where LLRobs is the value of the test statistic computed for the data. 1−CLb is the probability of observing a signal-plus-background-like outcome without the presence of signal, i.e. the probability that an upward fluctuation of the background provides a signal-plus-background-like response as observed in data. The other p-value is defined by

$$CL_{s+b} = p(\mathrm{LLR}\ge \mathrm{LLR}_{\rm obs}\,|\,H_1), \qquad (4)$$

and this corresponds to the probability of a downward fluctuation of the sum of signal and background in the data. A small value of CLs+b reflects inconsistency with H1. It is also possible to have a downward fluctuation in data even in the absence of any signal, so that a small value of CLs+b is possible even if the expected signal is so small that it cannot be tested with the experiment. To minimize the possibility of excluding a signal to which there is insufficient sensitivity (an outcome expected 5% of the time at the 95% C.L., for full coverage), we use the quantity CLs = CLs+b / CLb. If CLs < 0.05 for a particular choice of H1, that hypothesis is deemed to be excluded at the 95% C.L. In an analogous way, the expected CLs+b, CLb, and CLs values are computed from the median of the LLR distribution for the background-only hypothesis.
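The full CLs construction of Eqs. (3) and (4) can be sketched with pseudo-experiments for a single-bin toy model; the signal, background, and observed count are invented values chosen so that the toy is, as expected, not excluded.

```python
# Sketch of CLs from pseudo-experiments: generate background-only (H0)
# and signal-plus-background (H1) toys, compare their LLR distributions
# with the observed value per Eqs. (3) and (4). Single Poisson bin,
# invented yields, no nuisance parameters.
import math
import random

random.seed(2)
s, b, n_obs = 3.0, 10.0, 9  # toy signal, background, observed count

def llr(n):
    # -2 ln [ P(n|s+b) / P(n|b) ] for a single Poisson bin
    return -2.0 * (n * math.log((s + b) / b) - s)

def poisson_sample(mu):
    # Knuth's multiplication method for Poisson random numbers
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

llr_obs = llr(n_obs)
n_toys = 20000
toys_b = [llr(poisson_sample(b)) for _ in range(n_toys)]       # H0 toys
toys_sb = [llr(poisson_sample(s + b)) for _ in range(n_toys)]  # H1 toys

cl_sb = sum(t >= llr_obs for t in toys_sb) / n_toys          # Eq. (4)
one_minus_cl_b = sum(t <= llr_obs for t in toys_b) / n_toys  # Eq. (3)
cl_s = cl_sb / (1.0 - one_minus_cl_b)
excluded = cl_s < 0.05
```

Dividing by CLb protects against excluding this insensitive toy model, which would otherwise be at risk from a modest downward fluctuation of the data.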

Dividing CLs+b by CLb incurs a median penalty of a factor of two in the expected p-value, as the distribution of CLb is uniform under the null hypothesis, with a median of 0.5. One may use CLs+b directly to set limits, and solve the problem of excluding non-testable models by introducing a “power constraint”. The power-constrained limit (PCL) method atlasww (); atlaszz (); atlasgammagamma () involves excluding models for which CLs+b < 0.05, but, if the limit thus obtained makes an excursion below a previously chosen lower bound, quoting the lower-bound limit instead. The ATLAS collaboration sets this lower bound at one standard deviation below the median in the distribution of limits expected from background-only outcomes. The PCL method thus retains the desired coverage rate and does not exclude untestable models, and it provides by construction stronger expected and observed limits than CLs. The expected and observed PCL limits with Tevatron data may be estimated from Figure 9 and Table 8, which show and list the observed and expected values of CLs+b as functions of mH. Nowhere in the range is there more than a one-standard-deviation downward fluctuation relative to the background prediction, and so a power constraint similar to the one ATLAS applies would not have an effect on the observed limits. With the PCL method, the region of mH expected to be excluded at the 95% C.L. grows by 40% compared to that obtained with the CLs method. The PCL method has a larger false exclusion rate than the CLs and Bayesian methods that we use to quote our results, the distribution of possible limits under the null hypothesis is highly asymmetrical, and this distribution depends on the arbitrary choice of the location of the power constraint. We also continue quoting CLs and Bayesian limits because they provide a strong numerical cross-check of each other, and they are comparable to those of LEP lephiggs (), CMS cmsww (), and our own previous results prevhiggs ().

Systematic uncertainties are included by fluctuating the predictions for signal and background rates in each bin of each histogram in a correlated way when generating the pseudo-experiments used to compute CL_{s+b} and CL_b.
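The correlated fluctuation can be sketched as follows: a single nuisance-parameter draw shifts every bin of every affected channel coherently before the statistical (Poisson) fluctuation is applied. The two-channel setup and the 4% uncertainty below are illustrative stand-ins, not the actual inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nominal per-bin predictions for two channels sharing one nuisance
# parameter (e.g. the correlated part of the luminosity uncertainty).
nominal = {"ch1": np.array([10.0, 5.0]), "ch2": np.array([8.0, 3.0])}
frac_unc = 0.04  # 4% correlated rate uncertainty (illustrative)

def draw_pseudo_experiment(nominal, frac_unc, rng):
    """One nuisance draw shifts all bins coherently, then Poisson-fluctuate."""
    theta = rng.normal()  # a single draw -> fully correlated across channels
    pseudo = {}
    for chan, rates in nominal.items():
        shifted = rates * (1.0 + frac_unc * theta)   # same shift everywhere
        pseudo[chan] = rng.poisson(np.clip(shifted, 0.0, None))
    return pseudo

print(draw_pseudo_experiment(nominal, frac_unc, rng))
```

Uncorrelated sources would instead receive an independent draw per channel or per analysis, which is how the data-driven background calibrations discussed below are treated.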

### V.3 Systematic Uncertainties

Systematic uncertainties differ between experiments and analyses, and they affect the rates and shapes of the predicted signal and background in correlated ways. The combined results incorporate the sensitivity of predictions to values of nuisance parameters, and include correlations between rates and shapes, between signals and backgrounds, and between channels within experiments and between experiments. More discussion on this topic can be found in the individual analysis notes cdfHWW () through dzHgg (). Here we consider only the largest contributions and correlations between and within the two experiments.

#### V.3.1 Correlated Systematics between CDF and D0

The uncertainties on the measurements of the integrated luminosities are 6% (CDF) and 6.1% (D0). Of these values, 4% arises from the uncertainty on the inelastic proton-antiproton scattering cross section, which is correlated between CDF and D0. CDF and D0 also share the assumed values and uncertainties of the production cross sections for top-quark processes (top-quark pairs and single top) and for electroweak processes (WW, WZ, and ZZ). In order to provide a consistent combination, the values of these cross sections assumed in each analysis are brought into agreement. For top-quark pair production we follow the calculation of Moch and Uwer mochuwer (), assuming the Tevatron average top quark mass tevtop10 () and using the MSTW2008nnlo PDF set mstw2008 (). Other calculations of the top-quark pair cross section are similar otherttbar ().

For single top, we use the NLL t-channel calculation of Kidonakis kid1 (), which has been updated using the MSTW2008nnlo PDF set mstw2008 () kidprivcomm (). For the s-channel process we use the corresponding calculation kid2 (), again based on the MSTW2008nnlo PDF set. Both of the cross section values below are the sum of the single top quark and single antitop quark cross sections, and both assume the same top quark mass as the top-quark pair calculation.

 σt−chan = 2.10 ± 0.027 (scale) ± 0.18 (PDF) ± 0.045 (mass) pb. (5)
 σs−chan = 1.046 ± 0.006 (scale) ± 0.059 (PDF) ± 0.030 (mass) pb. (6)

Other calculations of the single top quark cross sections are similar for our purposes harris ().

The mcfm program mcfm () has been used to compute the NLO cross sections for WW, WZ, and ZZ production dibo (). Using the MSTW2008 PDF set mstw2008 (), the cross section for inclusive W+W− production is

 σW+W− = 11.34 +0.56/−0.49 (scale) +0.35/−0.28 (PDF) pb (7)

and the cross section for inclusive W±Z production is

 σW±Z = 3.22 +0.20/−0.17 (scale) +0.11/−0.08 (PDF) pb (8)

For the WZ cross section, leptonic decays are used in its definition, and we assume both Z and γ* exchange. The WZ cross section quoted above therefore involves a dilepton invariant mass requirement for the lepton pair from the neutral-current exchange. The same dilepton invariant mass requirement is applied to both lepton pairs in determining the ZZ cross section, which is

 σZZ = 1.20 +0.05/−0.04 (scale) +0.04/−0.03 (PDF) pb (9)

For the diboson cross section calculations, the same dilepton invariant mass requirement is used throughout. Loosening this requirement to include all leptons leads to a +0.4% change in the predictions. Lowering the factorization and renormalization scales by a factor of two increases the cross sections, and raising the scales by a factor of two decreases them. The PDF uncertainty has the same fractional impact on the predicted cross section independent of the scale choice. All PDF uncertainties are computed as the quadrature sum of the variations from the twenty 68% C.L. eigenvectors provided with MSTW2008 (MSTW2008nlo68cl). We furthermore constrain the WW background rate in the signal regions of the relevant channels in the process of setting our limits, either by integration over the uncertain parameters or by a direct fit, depending on the method. Our posterior constraint on this rate is tighter than the prior theoretical constraint alone, indicating that the data and the theoretical prediction are approximately equally constraining.
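One common prescription for the quadrature sum over PDF eigenvector pairs, keeping upward and downward deviations separate, can be sketched as below. The eigenvector cross sections in the example are invented for illustration, not MSTW2008 values:

```python
import numpy as np

def pdf_uncertainty(sigma_nominal, sigma_eigen):
    """
    Asymmetric quadrature sum over eigenvector pairs. sigma_eigen has
    shape (n_pairs, 2), holding the (+, -) direction cross sections for
    each eigenvector pair; deviations that go the "wrong" way for a
    given direction are clipped to zero.
    """
    sigma_eigen = np.asarray(sigma_eigen, dtype=float)
    zeros = np.zeros(len(sigma_eigen))
    up = np.maximum.reduce([sigma_eigen[:, 0] - sigma_nominal,
                            sigma_eigen[:, 1] - sigma_nominal,
                            zeros])
    dn = np.maximum.reduce([sigma_nominal - sigma_eigen[:, 0],
                            sigma_nominal - sigma_eigen[:, 1],
                            zeros])
    return np.sqrt(np.sum(up**2)), np.sqrt(np.sum(dn**2))

# Toy example with two eigenvector pairs (the real set has twenty).
up, dn = pdf_uncertainty(11.34, [[11.50, 11.20], [11.40, 11.30]])
print(f"+{up:.3f} / -{dn:.3f} pb")
```

The clipping step is what makes the resulting uncertainty band asymmetric even when the individual eigenvector shifts are not.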

In many analyses, the dominant background yields are calibrated with data control samples. Since the methods of measuring the multijet (“QCD”) backgrounds differ between CDF and D0, and even between analyses within each collaboration, no correlation is assumed between these rates. Similarly, the large uncertainties on the background rates for W+heavy flavor (HF) and Z+HF production are considered at this time to be uncorrelated, as both CDF and D0 estimate these rates using data control samples but employ different techniques. The calibrations of fake leptons, unvetoed conversions, b-tag efficiencies, and mistag rates are performed by each collaboration using independent data samples and methods, and are therefore also treated as uncorrelated.

#### V.3.2 Correlated Systematic Uncertainties for CDF

The dominant systematic uncertainties for the CDF analyses are shown in the Appendix, in Tables 9 through 13, grouped by channel. Each source induces a correlated uncertainty across the signal and background contributions of all CDF channels sensitive to that source. Shape dependencies of the templates on the jet energy scale and on final-state radiation (“FSR”) are taken into account in the analyses (see tables). The largest uncertainties on signal acceptance originate from Monte Carlo modeling. Uncertainties on background event rates vary significantly for the different processes; the backgrounds with the largest systematic uncertainties are in general quite small. Such uncertainties are constrained by fits to the nuisance parameters and do not affect the result significantly. Because the largest background contributions are measured using data, these uncertainties are treated as uncorrelated between channels. The difference in the resulting limits when treating the remaining uncertainties as either correlated or uncorrelated is small.

#### V.3.3 Correlated Systematic Uncertainties for D0

The dominant systematic uncertainties for the D0 analyses are shown in the Appendix, in Tables 14, 15, 16, 17, 18, and 19. Each source induces a correlated uncertainty across all D0 channels sensitive to that source. Wherever appropriate, the impact of systematic effects on both the rate and the shape of the predicted signal and background is included. For the analyses with leptons in the final state, a significant source of uncertainty is the measured efficiency for selecting leptons. Significant sources for all analyses are the uncertainties on the luminosity and on the cross sections of the simulated backgrounds. For analyses involving jets, the jet energy scale, the jet energy resolution, and the multijet background contribution are significant sources of uncertainty. All systematic uncertainties arising from the same source are taken to be correlated among the different backgrounds and between signal and background.

## VI Combined Results

Before extracting the combined limits we study the distributions of the log-likelihood ratio (LLR) for different hypotheses, to quantify the expected sensitivity across the mass range tested. Figure 6 displays the LLR distributions for the combined analyses as functions of m_H. Included are the median of the LLR distribution for the background-only hypothesis (LLR_b), the median for the signal-plus-background hypothesis (LLR_s+b), and the value observed in the data (LLR_obs). The shaded bands represent the one and two standard deviation (σ) departures of LLR_b centered on its median. Table 4 lists the observed and expected LLR values shown in Figure 6.

These distributions can be interpreted as follows: the separation between the medians of the LLR_b and LLR_s+b distributions provides a measure of the discriminating power of the search. The widths of the one- and two-σ LLR_b bands indicate the spread of the LLR distribution when no signal is truly present and only statistical fluctuations and systematic effects act. The value of LLR_obs relative to LLR_s+b and LLR_b indicates whether the data more closely resemble the expectation in the presence of a signal (i.e., closer to the LLR_s+b distribution, which is negative by construction) or the background-only expectation; the significance of any departure of LLR_obs from LLR_b can be evaluated using the widths of the LLR_b bands.
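For a counting experiment with Poisson bins and no systematic uncertainties, the LLR test statistic has a simple closed form, which makes the sign conventions above concrete. The bin contents below are illustrative only:

```python
import numpy as np

def llr(data, bkg, sig):
    """
    Per-bin Poisson log-likelihood ratio,
        LLR = -2 ln [ L(data | s+b) / L(data | b) ]
            = 2 * sum_i [ s_i - n_i * ln(1 + s_i / b_i) ].
    Negative (signal-like) when the data sit above the background
    expectation, positive (background-like) when they do not.
    """
    data, bkg, sig = map(np.asarray, (data, bkg, sig))
    return 2.0 * np.sum(sig - data * np.log1p(sig / bkg))

bkg = [100.0, 50.0]
sig = [10.0, 8.0]
print(llr([100, 50], bkg, sig))   # data at background: positive LLR
print(llr([110, 58], bkg, sig))   # data at background + signal: negative LLR
```

In the actual combination the nuisance parameters are fit (or integrated over) for each hypothesis before forming the ratio, but the interpretation of the sign is the same.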

Using the combination procedures outlined in Section III, we extract limits on SM Higgs boson production at √s = 1.96 TeV for Higgs boson masses between 130 and 200 GeV/c². To facilitate comparisons with the SM and to accommodate analyses with different degrees of sensitivity, we present our results in terms of the ratio of the obtained limits to the SM Higgs boson production cross section, as a function of Higgs boson mass, for test masses at which both experiments have performed dedicated searches in different channels. A combined limit ratio equal to or less than one indicates that that particular Higgs boson mass is excluded at the 95% C.L.; a value less than one indicates that a Higgs boson of that mass is excluded with a smaller cross section than the SM prediction, i.e., that the SM prediction is excluded with more than 95% confidence.

The combinations of results cdfHWW (); DZHiggs () of each single experiment, as used in this Tevatron combination, yield the following ratios of 95% C.L. observed (expected) limits to the SM cross section at the reference test mass: 0.92 (0.93) for CDF and 0.75 (0.92) for D0. Both collaborations independently exclude a range of Higgs boson masses at the 95% C.L.

The ratios of the 95% C.L. expected and observed limits to the SM cross section are shown in Figure 7 for the combined CDF and D0 analyses. The observed and median expected ratios are listed for the tested Higgs boson masses in Tables 5 and 6, as obtained by both the Bayesian and the CL_s methods. In the following summary we quote only the limits obtained with the Bayesian method, which was chosen a priori; the Bayesian limits turn out to be slightly less stringent. The corresponding observed and expected limits obtained using the CL_s method are shown alongside the Bayesian limits in the tables. At four representative test masses we obtain observed (expected) values of 1.31 (0.92), 0.54 (0.65), 1.13 (0.85), and 1.49 (1.30); the value 0.54 (0.65) corresponds to m_H = 165 GeV/c².

We show in Figure 8 and list in Table 7 the observed 1−CL_s and its expected distribution for the background-only hypothesis as a function of the Higgs boson mass. This is directly interpreted as the level of exclusion of our search using the CL_s method. The region excluded at the 95% C.L. agrees very well with that obtained via the Bayesian calculation.

In addition, we provide in Figure 9 (and list in Table 8) the values of the observed 1−CL_{s+b} and its expected distribution as a function of m_H. The CL_{s+b} value is the p-value for the signal-plus-background hypothesis. These values can be used to obtain alternative upper limits that are more constraining than those obtained via the CL_s method. In such a formulation, the power of the search is limited at a level chosen a priori to avoid setting limits when the background model grossly overpredicts the data or the data exhibit a large background-like fluctuation (e.g., limiting at the −1σ background fluctuation level). Within Figure 9, 95% C.L. power-constrained limits can be found at the points for which 1−CL_{s+b} exceeds 95%. The expected range of exclusion is 40% larger using PCL than with the Bayesian and CL_s limits quoted here. We nevertheless continue our convention of quoting Bayesian and CL_s limits.

In summary, we combine CDF and D0 results on SM Higgs boson searches, based on integrated luminosities of up to 8.2 fb−1. Compared to our previous combination, more data have been added to the existing channels, additional channels have been included, and the analyses have been further optimized to gain sensitivity. We follow the recommendation of the PDF4LHC working group for the central values and uncertainties of the parton distribution functions pdf4lhc (). We use the highest-order available calculations of the gg → H, WH, ZH, and vector boson fusion (VBF) theoretical cross sections when comparing our limits to the SM predictions at high mass. We include consensus estimates of the theoretical uncertainties on these production cross sections and on the decay branching fractions in the computations of our limits.

The 95% C.L. upper limit on Higgs boson production is a factor of 0.54 times the SM cross section at a Higgs boson mass of 165 GeV/c². Based on simulation, the corresponding median expected upper limit is 0.65 times the SM cross section. Standard model branching ratios, calculated as functions of the Higgs boson mass, are assumed.

We choose to use the intersections of piecewise linear interpolations of our observed and expected rate limits to quote the ranges of Higgs boson masses that are excluded and that are expected to be excluded. The sensitivities of our searches are smooth functions of the Higgs boson mass and depend most strongly on the predicted cross sections and the decay branching ratios (the H → W⁺W⁻ decay is dominant in the region of highest sensitivity). The mass resolution of these channels is poor due to the presence of two highly energetic neutrinos in signal events. We therefore use the linear interpolations to extend the results from the 5 GeV/c² mass grid investigated to the points in between. This procedure yields higher expected and observed interpolated rate limits than if the full mass dependence of the cross section and branching ratio were included, since the latter produces limit curves that are concave upwards. In this way we exclude a region of Higgs boson masses at the 95% C.L., and a somewhat wider region is expected to be excluded given the current sensitivity. The excluded region obtained by finding the intersections of the linear interpolations of the observed 1−CL_s curve shown in Figure 8 is slightly larger than that obtained with the Bayesian calculation. As previously stated, and following the procedure used in previous combinations prevhiggs (), we make the a priori choice to quote the exclusion region using the Bayesian calculation.
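The piecewise-linear interpolation procedure can be sketched as follows; the mass grid and limit ratios in the example are illustrative numbers, not the measured Tevatron limits:

```python
def excluded_ranges(masses, ratios):
    """
    Piecewise-linear interpolation of the limit/SM ratio between the
    tested mass points; masses where the interpolated ratio dips below
    1.0 are excluded at the 95% C.L.
    """
    crossings = []
    for i in range(len(masses) - 1):
        r0, r1 = ratios[i], ratios[i + 1]
        if (r0 - 1.0) * (r1 - 1.0) < 0:   # this segment crosses ratio = 1
            t = (1.0 - r0) / (r1 - r0)    # linear interpolation parameter
            crossings.append(masses[i] + t * (masses[i + 1] - masses[i]))
    # Pair successive crossings into (low, high) excluded intervals;
    # assumes the curve starts and ends above 1, as in Figure 7.
    return list(zip(crossings[0::2], crossings[1::2]))

# Illustrative 5 GeV/c^2 grid and limit ratios only.
print(excluded_ranges([150, 155, 160, 165, 170, 175, 180],
                      [1.4, 1.2, 0.9, 0.54, 0.8, 1.1, 1.3]))
```

Because the true limit curve is concave upwards between grid points, the straight-line segments sit above it, so the interpolated exclusion boundaries quoted this way are conservative.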

The results presented in this paper significantly extend the individual limits of each collaboration and those obtained in our previous combination. The sensitivity of our combined search is sufficient to exclude a Higgs boson at high mass and is expected to grow substantially in the future as more data are added and further improvements are made to the analysis techniques.

## References

• (1) CDF Collaboration, “Search for Production Using 5.9 fb−1”, CDF Conference Note 10432 (2011).
• (2) “Combined Upper Limits on Standard Model Higgs Boson Production in the , and decay modes from the D0 Experiment in up to 8.2 fb−1 of data”, D0 Conference Note 6183 (2011).
• (3) The CDF and D0 Collaborations and the TEVNPHWG Working Group, “Combined CDF and D0 Upper Limits on Standard Model Higgs-Boson Production with up to 6.7 fb−1 of Data”, FERMILAB-CONF-10-257-E, CDF Note 10241, D0 Note 6096, arXiv:1007.4587v1 [hep-ex] (2010);
CDF Collaboration, “Combined Upper Limit on Standard Model Higgs Boson Production for HCP 2009”, CDF Conference Note 9999 (2009);
CDF Collaboration, “Combined Upper Limit on Standard Model Higgs Boson Production for Summer 2009”, CDF Conference Note 9807 (2009);
D0 Collaboration, “Combined Upper Limits on Standard Model Higgs Boson Production from the D0 Experiment in 2.1-5.4 fb−1”, D0 Conference Note 6008 (2009);
D0 Collaboration, “Combined upper limits on Standard Model Higgs boson production from the D0 experiment in 0.9-5.0 fb−1”, D0 Conference Note 5984 (2009);
The CDF and D0 Collaborations and the TEVNPHWG Working Group, “Combined CDF and DZero Upper Limits on Standard Model Higgs-Boson Production with 2.1 to 4.2 fb−1 of Data”, FERMILAB-PUB-09-0557-E, CDF Note 9998, D0 Note 5983, arXiv:0911.3930v1 [hep-ex] (2009).
• (4) CDF Collaboration, “Inclusive Search for Standard Model Higgs Boson Production in the WW Decay Channel Using the CDF II Detector”, Phys. Rev. Lett. 104, 061803 (2010);
D0 Collaboration, “Search for Higgs Boson Production in Dilepton and Missing Energy Final States with 5.4 fb−1 of Collisions at  TeV”, Phys. Rev. Lett. 104, 061804 (2010);
The CDF and D0 Collaborations, “Combination of Tevatron Searches for the Standard Model Higgs Boson in the Decay Mode”, Phys. Rev. Lett. 104, 061802 (2010).
• (5) ATLAS Collaboration, “Higgs Boson Searches using the Decay Mode with the ATLAS Detector at 7 TeV”, ATLAS-CONF-2011-005 (2011).
• (6) CMS Collaboration, “Measurement of W+W- Production and Search for the Higgs Boson in pp Collisions at sqrt(s) = 7 TeV,” [arXiv:1102.5429 [hep-ex]] (2011).
• (7) ATLAS Collaboration, “Search for a Standard Model Higgs Boson in the Mass Range 200-600 GeV in the Channels and with the ATLAS Detector”, ATLAS-CONF-2011-026 (2011).
• (8) ATLAS Collaboration, “Search for the Higgs boson in the diphoton final state with 37.6 pb−1 of data recorded by the ATLAS detector in proton-proton collisions at  TeV”, ATLAS-CONF-2011-025 (2011).
• (9) D0 Collaboration, “Search for Higgs boson production in dilepton plus missing energy final states with 8.1 fb−1 of collisions at  TeV”, D0 Conference Note 6182.
• (10) D0 Collaboration, “Search for the Standard Model Higgs boson in jet final state with 7.3 fb−1 of data”, D0 Conference Note 6179.
• (11) D0 Collaboration, “A search for the standard model Higgs boson in the Decay Channel”, [e-Print: arXiv:1001.6079v2 [hep-ph]].
• (12) D0 Collaboration, “Search for associated Higgs boson production with like sign leptons in collisions at  TeV”, D0 Conference Note 6091.
• (13) D0 Collaboration, “Search for the standard model Higgs boson in the + 2 jets final state”, D0 Conference note 6171.
• (14) D0 Collaboration, “Search for the Standard Model Higgs boson in final states at D0 with 8.2 fb−1 of data”, D0 Conference Note 6177.
• (15) T. Sjöstrand, L. Lonnblad and S. Mrenna, “PYTHIA 6.2: Physics and manual,” arXiv:hep-ph/0108264.
• (16) M. L. Mangano, M. Moretti, F. Piccinini, R. Pittau and A. D. Polosa, “ALPGEN, a generator for hard multiparton processes in hadronic collisions,” JHEP 0307, 001 (2003).
• (17) S. Frixione and B.R. Webber, JHEP 0206, 029 (2002).
• (18) G. Corcella et al., JHEP 0101, 010 (2001).
• (19) A. Pukhov et al., “CompHEP: A package for evaluation of Feynman diagrams and integration over multi-particle phase space. User’s manual for version 33,” [arXiv:hep-ph/9908288].
• (20) J. Campbell and R. K. Ellis, http://mcfm.fnal.gov/.
J. M. Campbell, R. K. Ellis, Nucl. Phys. Proc. Suppl. 205-206, 10-15 (2010). [arXiv:1007.3492 [hep-ph]].
• (21) C. Anastasiou, R. Boughezal and F. Petriello, JHEP 0904, 003 (2009).
• (22) D. de Florian and M. Grazzini, Phys. Lett. B 674, 291 (2009).
• (23) M. Grazzini, private communication (2010).
• (24) The CDF and D0 Collaborations and the Tevatron Electroweak Working Group, arXiv:1007.3178 [hep-ex], arXiv:0903.2503 [hep-ex].
• (25) R. V. Harlander and W. B. Kilgore, Phys. Rev. Lett. 88, 201801 (2002).
• (26) C. Anastasiou and K. Melnikov, Nucl. Phys. B 646, 220 (2002).
• (27) V. Ravindran, J. Smith, and W. L. van Neerven, Nucl. Phys. B 665, 325 (2003).
• (28) S. Actis, G. Passarino, C. Sturm, and S. Uccirati, Phys. Lett. B 670, 12 (2008).
• (29) U. Aglietti, R. Bonciani, G. Degrassi, A. Vicini, “Two-loop electroweak corrections to Higgs production in proton-proton collisions”, arXiv:hep-ph/0610033v1 (2006).
• (30) S. Catani, D. de Florian, M. Grazzini and P. Nason, “Soft-gluon resummation for Higgs boson production at hadron colliders,” JHEP 0307, 028 (2003) [arXiv:hep-ph/0306211].
• (31) A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Eur. Phys. J. C 63, 189 (2009).
• (32) http://www.hep.ucl.ac.uk/pdf4lhc/;
S. Alekhin et al., (PDF4LHC Working Group), [arXiv:1101.0536v1 [hep-ph]];
M. Botje et al., (PDF4LHC Working Group), [arXiv:1101.0538v1 [hep-ph]].
• (33) P. M. Nadolsky et al., Phys. Rev. D 78, 013004 (2008) [arXiv:0802.0007 [hep-ph]].
• (34) R. D. Ball et al. [NNPDF Collaboration], Nucl. Phys. B 809, 1 (2009) [Erratum-ibid. B 816, 293 (2009)] [arXiv:0808.1231 [hep-ph]].
• (35) S. Alekhin, J. Blumlein, S. Klein and S. Moch, Phys. Rev. D 81, 014032 (2010) [arXiv:0908.2766 [hep-ph]].
• (36) F. D. Aaron et al. [ H1 and ZEUS Collaboration ], JHEP 1001, 109 (2010). [arXiv:0911.0884 [hep-ex]].
• (37) J. Baglio, A. Djouadi, JHEP 1010, 064 (2010). [arXiv:1003.4266 [hep-ph], arXiv:1009.1363 [hep-ph]].
• (38) J. Baglio, A. Djouadi, S. Ferrag, R. M. Godbole, [arXiv:1101.1832 [hep-ph]].
• (39) A. Abulencia et al. [ CDF - Run II Collaboration ], Phys. Rev. D75, 092006 (2007).
• (41) V.M. Abazov et al., D0 Collaboration, Phys. Rev. Lett. 101, 062001 (2008).
• (42) T. Kluge, K. Rabbertz and M. Wobisch, arXiv:hep-ph/0609285.
• (43) S. Alekhin, J. Blümlein, and S. Moch, arXiv:1101.5261 [hep-ph] (2011).
• (44) R. D. Ball et al. [NNPDF Collaboration], arXiv:1102.3182 [hep-ph] (2011).
• (45) V. Ahrens, T. Becher, M. Neubert et al., Eur. Phys. J. C62, 333-353 (2009). [arXiv:0809.4283 [hep-ph]];
V. Ahrens, T. Becher, M. Neubert et al., [arXiv:1008.3162 [hep-ph]].
• (46) C. Anastasiou, G. Dissertori, M. Grazzini, F. Stöckli and B. R. Webber, JHEP 0908, 099 (2009).
• (47) J. M. Campbell, R. K. Ellis, C. Williams, Phys. Rev. D81, 074023 (2010). [arXiv:1001.4495 [hep-ph]].
• (48) G. Bozzi, S. Catani, D. de Florian, and M. Grazzini, Phys. Lett. B 564, 65 (2003);
G. Bozzi, S. Catani, D. de Florian, and M. Grazzini, Nucl. Phys. B 737, 73 (2006).
• (49) C. Balazs, J.  Huston, I. Puljak, Phys. Rev. D 63 014021 (2001).
C. Balazs and C.-P. Yuan, Phys. Lett. B 478 192-198 (2000).
Qing-Hong Cao and Chuan-Ren Chen, Phys. Rev. D 76 073006 (2007).
• (50) C. Anastasiou, private communication (2010).
• (51) J. Baglio and A. Djouadi, JHEP 1010, 064 (2010) [arXiv:1003.4266v2 [hep-ph]].
• (52) The Fortran program can be found on Michael Spira’s web page http://people.web.psi.ch/mspira/proglist.html.
• (53) O. Brein, A. Djouadi, and R. Harlander, Phys. Lett. B 579, 149 (2004).
• (54) M. L. Ciccolini, S. Dittmaier, and M. Kramer, Phys. Rev. D 68, 073003 (2003).
• (55) P. Bolzoni, F. Maltoni, S.-O. Moch, and M. Zaro, Phys. Rev. Lett. 105, 011801 (2010) [arXiv:1003.4451v2 [hep-ph]].
• (56) M. Ciccolini, A. Denner, and S. Dittmaier, Phys. Rev. Lett. 99, 161803 (2007) [arXiv:0707.0381 [hep-ph]];
M. Ciccolini, A. Denner, and S. Dittmaier, Phys. Rev. D 77, 013002 (2008) [arXiv:0710.4749 [hep-ph]].
We would like to thank the authors of the hawk program for adapting it to the Tevatron.
• (57) H. L. Lai et al., Phys. Rev D 55, 1280 (1997).
• (58) A. Djouadi, J. Kalinowski and M. Spira, Comput. Phys. Commun. 108, 56 (1998).
• (59) J. Baglio and A. Djouadi, arXiv:1012.0530 [hep-ph] (2010).
• (60) T. Junk, Nucl. Instrum. Meth. A 434, 435 (1999);
A.L. Read, “Modified Frequentist analysis of search results (the method)”, in F. James, L. Lyons and Y. Perrin (eds.), Workshop on Confidence Limits, CERN, Yellow Report 2000-005, available through cdsweb.cern.ch.
• (61) W. Fisher, “Systematics and Limit Calculations,” FERMILAB-TM-2386-E.
• (62) R. Barate et al. [ LEP Working Group for Higgs boson searches and ALEPH and DELPHI and L3 and OPAL Collaborations ], Phys. Lett. B 565, 61-75 (2003).
• (63) U. Langenfeld, S. Moch, and P. Uwer, Phys. Rev. D 80, 054009 (2009).
• (64) M. Cacciari, S. Frixione, M. L. Mangano, P. Nason and G. Ridolfi, JHEP 0809, 127 (2008).
N. Kidonakis and R. Vogt, Phys. Rev. D 78, 074005 (2008).
• (65) N. Kidonakis, Phys. Rev. D 74, 114012 (2006).
• (66) N. Kidonakis, private communication.
• (67) N. Kidonakis, arXiv:1005.3330 [hep-ph].
• (68) B. W. Harris, E. Laenen, L. Phaf, Z. Sullivan and S. Weinzierl, Phys. Rev. D 66, 054024 (2002).
• (69) J. Campbell and R. K. Ellis, Phys. Rev. D 65, 113007 (2002).