# Diboson Production in Proton-Proton Collisions at 7 TeV

###### Abstract

This review article summarizes results on the production cross section measurements of electroweak boson pairs (WW, WZ, ZZ, Wγ and Zγ) at the Large Hadron Collider (LHC) in proton-proton collisions at a center-of-mass energy of 7 TeV. The two general-purpose detectors at the LHC, ATLAS and CMS, each recorded an integrated luminosity of about 5 fb⁻¹ in 2011, which offered the possibility to study the properties of diboson production to high precision. These measurements test predictions of the Standard Model (SM) in a new energy regime and are crucial for the understanding and the measurement of the SM Higgs boson and other new particles. In this review, special emphasis is placed on the combination of results from both experiments and a common interpretation with respect to state-of-the-art SM predictions.

## 1 Introduction

The Standard Model (SM) of particle physics is a quantum field theory based on the SU(3) × SU(2) × U(1) gauge symmetry group and describes the strong, weak and electromagnetic interactions among elementary particles Nakamura:2010zzi (). As a direct consequence of the non-Abelian gauge symmetry of the electroweak sector, i.e. the SU(2) × U(1) gauge group, the SM predicts self-coupling vertices of the gauge bosons. At the Large Hadron Collider (LHC), these vertices lead to the production of diboson final states. Moreover, the discovery of the Higgs boson in the year 2012 Aad:2012tfa (); Chatrchyan:2012ufa () was due to its diboson decay channels. The production of pairs of vector bosons is therefore not only an important background for studies of the newly-discovered Higgs boson, but also provides a unique opportunity to test the electroweak sector of the SM.

These precision tests have already a long history in particle physics. The LEP experiments performed precise measurements of the W⁺W⁻ and ZZ cross sections as a function of the center-of-mass energy Schael:2004tq (); Abbiendi:2003mk (); Achard:2004ji (); Abdallah:2007ae (). The clean experimental signature and the purely electroweak nature of the calculations allowed for a precise study of the gauge-group nature of the SM. Limits on possible extensions and deviations from the SM predictions were also drawn. Some of those limits are still the most stringent ones available. Since the W⁺W⁻ cross section depends crucially on the mass of the W boson (m_W), the cross section dependence on the center-of-mass energy allowed for an indirect determination of m_W.

The Tevatron experiments allowed studies of all possible diboson final states WW, WZ, ZZ, Wγ and Zγ Neubauer:2011zz (); Abulencia:2007tu (); Abazov:2007ai (). In contrast to the LEP collider, the bulk of the production process is governed by Quantum Chromodynamic (QCD) effects. Nevertheless, precise measurements of these production cross sections again allow tests of the predicted gauge-boson self-interactions and searches for physics beyond the SM.

With the start of the Large Hadron Collider at CERN, a completely new energy regime became accessible, and the corresponding production cross sections increased by more than a factor of five compared to the Tevatron. The increase in available statistics and the higher center-of-mass energy allow for an improved study of perturbative Quantum Chromodynamic predictions, which are by now available at next-to-next-to-leading order in the strong coupling constant for some processes Cascioli:2014yka (). In addition, for the first time, limits on new physics scenarios that modify the triple gauge coupling vertices could be improved compared to the LEP experiments.

This review article summarizes the results of diboson production cross section measurements at the LHC with a center-of-mass energy of 7 TeV, based on data collected by the ATLAS and CMS experiments in 2011. We present the experimental signatures and the differences between the measurement strategies in detail. Special focus is placed on combinations of various results from both experiments and their interpretations within the SM. The results published by the ATLAS and CMS collaborations form the basis for this review. The combinations of these results and further derived information have been conducted with great care, but are solely the private work of the authors of this article and do not represent any official joint ATLAS and CMS effort.

This article is structured as follows. In Sect. 2, we briefly summarize the most important features of the electroweak sector of the SM, the theoretical methodology for the predictions of diboson production in pp collisions, and the description of new physics scenarios regarding diboson production. The LHC collider, the ATLAS and CMS experiments, as well as further experimental aspects are discussed in Sect. 3. Measurements that are sensitive to the WWZ and WWγ vertices, i.e. measurements of the WW, the WZ and the Wγ final states, are presented in Sect. 4. The ZZ and Zγ final states are sensitive to the ZZZ, ZZγ and Zγγ vertices that are not allowed in the SM; they are discussed in Sect. 5. The results on the cross section measurements and the limits on anomalous triple gauge couplings (aTGCs) are interpreted in Sect. 6. In addition, the sensitivity to quartic gauge couplings (QGCs) is briefly discussed. Section 7 summarizes all the results and gives an outlook for future measurements at the LHC.

## 2 Dibosons in the Standard Model

### 2.1 The Electroweak Sector

The Lagrangian of the electroweak sector of the SM can be written as

L_EW = L_kin + L_NC + L_CC + L_3 + L_4 + L_H + L_HV + L_Yuk   (1)

where the different terms are schematically shown in Fig. 1 as tree-level Feynman diagrams Nakamura:2010zzi (). The free propagation of all fermions and bosons is described by the kinetic term L_kin. The neutral current interactions, i.e. the exchange of the photon and the Z boson, are summarized in L_NC. The W boson interaction with left-handed particles and right-handed anti-particles is represented by the charged current term L_CC. Since the weak interaction is based on an SU(2) group structure, three-point and four-point self-interactions of the electroweak gauge bosons appear, denoted by L_3 and L_4. The self-interactions of the Higgs boson and the interaction of the Higgs boson with the electroweak gauge bosons are represented by the terms L_H and L_HV, respectively. The Yukawa couplings between the Higgs field and the fermions are denoted by L_Yuk.

The production of single photons and bosons within the SM is given by

L_NC = e J_μ^em A^μ + (g / cosθ_W) J_μ^Z Z^μ   (2)

where J_μ^em and J_μ^Z denote the electromagnetic and weak neutral currents, A^μ is the photon field, and Z^μ is the Z boson field. Each current contains the sum over all fermionic fields weighted by their respective charges. In addition, the weak neutral current couples with different strengths to left-handed and right-handed fermions. The relative strength of the electromagnetic and weak interactions is described by the weak mixing angle, θ_W.

The charged current term,

L_CC = (g / √2) ( ū γ^μ P_L V_ud d + ν̄_e γ^μ P_L e ) W_μ⁺ + h.c.   (3)

describes the production of charged W bosons via fermions, i.e. qq̄′ → W^±. For simplicity, only the terms of the first generation are shown, and the quark and lepton spinor fields are labeled by u, d and ν_e, e, respectively. The mixing of quark generations is described by the CKM matrix V. The operator P_L = ½(1 − γ⁵) is introduced to describe the exclusive interaction of left-handed particles and right-handed anti-particles with the W boson.

It should be noted that the terms L_NC and L_CC can lead to diboson final states at hadron colliders through t- and u-channel exchange of a fermion. However, as a direct consequence of the non-Abelian SU(2) gauge group of the weak interaction, the production of vector boson pairs through the s-channel process is also allowed. The corresponding term in the Lagrangian reads as

L_3 = −ie [ (W⁺_μν W⁻^μ − W⁻_μν W⁺^μ) A^ν + W⁺_μ W⁻_ν F^μν ] − ie cotθ_W [ (W⁺_μν W⁻^μ − W⁻_μν W⁺^μ) Z^ν + W⁺_μ W⁻_ν Z^μν ]   (4)

This term leads to TGCs at tree level. It is thus interesting to experimentally demonstrate that gauge bosons couple not only to electric charge but also to weak isospin. It is apparent from Eqn. 4 that the SM allows only WWγ and WWZ couplings. Various models of physics beyond the SM predict aTGCs but also new vertices like ZZZ, ZZγ and Zγγ.

The last contribution to diboson final states comes from the decay of the Higgs boson, described by

L_HV = g m_W H W⁺_μ W⁻^μ + (g m_Z / 2cosθ_W) H Z_μ Z^μ   (5)

However, for a SM Higgs boson with a mass close to 125 GeV, the decay into a diboson pair must involve at least one off-shell vector boson. This production mode is therefore significantly suppressed if we only consider on-shell vector bosons in the final state.

The vector boson fusion or four-point self-interaction, as described by L_4, is not discussed in this review, as the expected production cross sections in 7 TeV pp collisions are negligible. However, it should be noted that this channel has never been experimentally observed, and measurements of these processes at 14 TeV would be a crucial test of the SM predictions.

### 2.2 Diboson Production at the LHC

Several introductory articles on the production of vector bosons in hadron collisions are available. Before discussing the experimental results, we summarize here the essential aspects of the theoretical predictions of diboson production at the LHC along the lines of the following publications: Neubauer:2011zz (); Campbell:2011bn (); Nunnemann:2007qs (); Schott:2014 ().

The diboson production mechanism at hadron colliders differs significantly from that at lepton colliders due to the complicated internal structure of protons. The proton structure can be described phenomenologically by parton density functions (PDFs), written as f_q(x, Q²) for parton type q in the proton with a relative momentum fraction x = p/P in the direction of the proton’s motion, at an energy scale Q of the scattering process. Here, p and P denote the momenta of the parton and proton, respectively. For the production of vector boson pairs (V₁V₂), the energy scale is often set to the invariant mass of the two vector bosons, i.e. Q = m(V₁V₂).

The factorization theorem states that the production cross section in pp collisions can be expressed by combining the PDFs with a fundamental partonic cross section σ̂, illustrated in Fig. 3:

σ(pp → V₁V₂) = Σ_{q,q̄′} ∫ dx₁ dx₂ f_q(x₁, Q²) f_q̄′(x₂, Q²) σ̂(qq̄′ → V₁V₂; ŝ = x₁x₂s)   (6)

where σ̂ is the cross section of the inclusive hard scattering process of two partons leading to two vector bosons in the final state. The sum runs over all quark flavors, and the integration is performed over the momentum fractions x₁ and x₂ of the two colliding partons. The factorization theorem holds not only for inclusive hard-scattering processes but also for perturbative QCD corrections.
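The convolution in Eqn. 6 can be illustrated numerically. The following sketch estimates such a convolution by Monte Carlo sampling, with a deliberately simplified, hypothetical parton density and partonic cross section (the functional forms, the 160 GeV pair threshold and all constants are illustrative assumptions, not fits to data):

```python
import math
import random

def toy_pdf(x):
    # Hypothetical sea-quark-like density ~ x^-1.2 (1-x)^3; illustrative only
    return x ** -1.2 * (1.0 - x) ** 3

def toy_partonic_xsec(shat):
    # Hypothetical partonic cross section: ~1/shat above a pair-production
    # threshold of 160 GeV (roughly twice the W mass)
    threshold = 160.0 ** 2  # GeV^2
    return 1.0 / shat if shat > threshold else 0.0

def mc_cross_section(s, n=50_000, seed=1):
    """Monte Carlo estimate of the factorization integral (Eqn. 6):
    integrate f(x1) f(x2) sigma_hat(x1 x2 s) over x1, x2, in arbitrary units."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Sample x1, x2 log-uniformly on [1e-4, 1] to cover the small-x rise
        x1 = 10.0 ** rng.uniform(-4.0, 0.0)
        x2 = 10.0 ** rng.uniform(-4.0, 0.0)
        # Jacobian of the log-uniform sampling: (x * ln10 * 4) per axis
        weight = (x1 * math.log(10.0) * 4.0) * (x2 * math.log(10.0) * 4.0)
        total += weight * toy_pdf(x1) * toy_pdf(x2) * toy_partonic_xsec(x1 * x2 * s)
    return total / n

sigma_7 = mc_cross_section(7000.0 ** 2)    # s at 7 TeV, in GeV^2
sigma_14 = mc_cross_section(14000.0 ** 2)  # s at 14 TeV, in GeV^2
```

Despite the crude inputs, the toy integral reproduces the qualitative feature discussed in the text: the accessible phase space above the diboson threshold, and hence the estimated cross section, grows when moving from 7 TeV to 14 TeV.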

The partonic cross section is governed by the L_NC, L_CC and L_3 terms of the SM Lagrangian given in Eqn. 1. The corresponding leading-order (LO) and some next-to-leading-order (NLO) Feynman diagrams are shown in Fig. 3 for different production channels. It should be noted that Wγ and Zγ final states can also be realized by initial- and/or final-state radiation processes of the participating fermions.

Of special importance is the s-channel, as it involves TGCs that are predicted within the SM. Any new physics model that involves new or alternative interactions between the SM electroweak gauge bosons may change these TGCs and hence the corresponding observables. The enhancement of the s-channel contribution to the full diboson production cross section is predicted to increase with rising parton collision energy √ŝ, i.e. with the invariant mass of the diboson system. This dependence is shown in Fig. 4, where an aTGC scenario is assumed for the production vertex. Regions of phase space that correspond to high center-of-mass energies of the interacting partons therefore provide a high sensitivity to new physics scenarios.

Figure 3 shows some examples of Feynman diagrams for the NLO QCD corrections. In particular, the quark-gluon initial state leads to a significant increase in the production cross section at LHC collision energies. The gluon-gluon fusion diagrams with fermions in the loop contribute at the percent level. The NLO QCD corrections were first calculated in Ohnemus:1990za (); Ohnemus:1991kk (); Ohnemus:1992jn (); Baur:1993ir (); Baur:1994aj (); Baur:1995uv (); Baur:1997kz (); Campbell:1999ah (); Dixon:1999di (). It should be noted that additional jets in diboson production are usually vetoed experimentally to reduce the background from the pair production of top quarks. Hence, a significant contribution of the NLO QCD corrections to the process is not directly studied experimentally.

The most up-to-date QCD calculations include off-shell effects in gluon-fusion processes, the subsequent decay, and effects of massive quarks in the loop Glover:1988fe (). Leptonic decay modes are accounted for in the narrow-width approximation and include all spin-correlation effects. For predictions beyond the narrow-width approximation, gauge invariance requires the inclusion of single-resonant diagrams in the calculations, which was first done in Campbell:1999ah (). The next-to-next-to-leading-order (NNLO) QCD calculation for ZZ production in pp collisions became available recently Cascioli:2014yka ().

Electroweak corrections have so far been calculated only for a subset of the diboson processes Hollik:2004tm (); Accomando:2004de (); Accomando:2001fn (); Accomando:2005xp (); Billoni:2013pca (). Since the electroweak coupling parameter (α) is small compared to the strong coupling constant (α_s), the effects of the EW corrections are expected to be minor. However, EW corrections grow with the square of the logarithm of the partonic energy and can reach several percent at LHC energies. While electroweak corrections modify the inclusive production cross section only at the percent level, they induce larger variations in the rapidity distribution of the diboson pair. For large diboson invariant masses, EW corrections could modify the production cross section even more strongly Billoni:2013pca (). EW corrections also open WW production via photon-photon fusion (as shown in Fig. 6), since the photon PDF inside the proton is non-vanishing. This channel results in a predicted enhancement of the cross section compared to the LO calculations.

Finally, WW and ZZ boson pairs can also be produced via the decay of the SM Higgs boson (as shown in Fig. 6), which is described by the L_HV term of the SM Lagrangian. The Higgs boson production cross section is known to NNLO in the strong coupling Harlander:2002wh () and to NLO in the electroweak coupling Actis:2008ug (). Since diboson production from the decay of the SM Higgs boson involves at least one off-shell vector boson, the contribution of this decay channel to the full inclusive diboson production is typically at the percent level, depending on the experimental analysis requirements. Further details are discussed in the experimental sections of this review.

In summary, the rich phenomenology of the electroweak sector can be tested via studies of diboson production at hadron colliders. Four terms of the Lagrangian contribute to the production, namely L_NC, L_CC, L_3 and L_HV, and therefore not only the gauge group structure but also higher-order calculations in QCD and, partially, the Higgs sector can be tested.

### 2.3 Event Generation and Available Computer Programs

The central part of the prediction of the diboson production cross sections at the LHC is the calculation of the matrix element of the hard scattering process as introduced in Sect. 2. The integration over the full phase space in Eqn. 6, including spin and color effects, is a complicated task and can only be achieved with Monte Carlo (MC) sampling methods that are implemented in MC event generators.

The connection of hard scattering processes at high energy scales to the partons within the proton at low scales is described by parton shower models. The basic idea of these models is to relate the partons from the hard interaction at the scale Q to partons near the hadronization scale via an all-order approximation in the soft and collinear regions. The commonly used approach is the leading-log approximation, where parton showers are modeled as a sequence of splittings of a mother parton into two daughter partons. The implementation of parton showers is achieved with MC techniques. A detailed discussion of these models can be found elsewhere Butterworth:2010ym (). Phenomenological models have to be applied at the hadronization scale to describe the process of hadronization, i.e. the formation of hadrons from final state partons; examples are the Lund string model Andersson:1983ia () and the cluster model Field:1982dg (); Webber:1983if ().
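The picture of a cascade of successive emissions between a hard scale and the hadronization scale can be caricatured in a few lines. The sketch below, with purely illustrative parameter values, generates a falling sequence of emission scales; a real shower would use scale-dependent splitting functions and Sudakov form factors rather than the constant emission rate assumed here:

```python
import math
import random

def toy_shower(q_hard=200.0, q_cut=1.0, alpha=0.6, seed=7):
    """Toy cascade: produce a decreasing sequence of emission scales (in GeV)
    between a hard scale and a hadronization cut-off. The emission rate per
    unit log(q) is a constant `alpha` here, which is an illustrative
    simplification of the leading-log picture, not a real shower algorithm."""
    rng = random.Random(seed)
    q = q_hard
    scales = []
    while True:
        # Sample the drop in ln(q) to the next emission from an exponential,
        # mimicking a fixed emission probability per unit log-scale interval
        q = q * math.exp(-rng.expovariate(alpha))
        if q < q_cut:
            break  # below this scale, hadronization models take over
        scales.append(q)
    return scales

emissions = toy_shower()
```

Each call yields an ordered chain of scales from hard to soft, which is the structure that matching schemes discussed below must interleave with fixed-order matrix elements.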

Multi-purpose event generators include the following aspects of pp collisions: the description of the proton via an interface to PDF sets; initial-state parton shower models; hard scattering processes and the subsequent decay of unstable particles; and the simulation of the final-state parton shower and hadronization.

The commonly used event generators in the analyses relevant for this review article are Pythia Sjostrand:2006za (), Pythia8 Sjostrand:2007gs (), Herwig Corcella:2000bw (), Herwig++ Bahr:2008pv () and Sherpa Gleisberg:2008ta (). All generators contain an extensive list of SM and BSM processes, calculated with LO matrix elements. Higher-order corrections are also available for some important processes. Several programs, such as MadGraph Alwall:2011uj (), MCFM Campbell:2003hd (), Alpgen Mangano:2002ea () and Blackhat-Sherpa Berger:2008sj (), calculate matrix elements for LO and NLO processes but do not provide a full event generation with parton showers and hadronization effects. The subsequent event generation starting from the final-state parton configuration is often performed by the Herwig and Sherpa libraries.

Since the matrix element calculations not only give a good description of the hard emission of jets in the final state, but also handle interferences of initial and final states correctly, it is desirable to combine matrix element calculations and parton showers. This combination is complicated by the fact that the phase space of NLO matrix element calculations partially overlaps with that of parton shower approaches. Several matching schemes have been suggested to provide a combination methodology for LO matrix element calculations of different parton multiplicities and parton shower models. These schemes avoid double counting of final state particles by reweighting techniques and veto algorithms. The Mangano (MLM) scheme Mangano:2001xp () and the Catani-Krauss-Kuhn-Webber (CKKW) matching scheme Catani:2001cc (); Krauss:2002up () are widely used for tree-level generators.

These matching schemes can only be applied to LO calculations; the matching between parton showers and NLO matrix element calculations is more sophisticated, and advanced methods have to be used. The MC@NLO approach Frixione:2002ik () was the first prescription to match NLO matrix elements to the parton shower framework of the Herwig generator. The basic idea is to remove all terms of the NLO matrix element expression that are also generated by the subsequent parton shower. The Powheg procedure Frixione:2007vw (), which is implemented in the PowhegBox framework Alioli:2010xd (), was the second approach developed for NLO matrix element matching. The Powheg procedure assumes that the highest-energy emitted parton is generated first and then fed into the shower generators for the subsequent softer radiation. In contrast to the MC@NLO approach, only positive event weights appear, and in addition the procedure can be interfaced to other event generators besides Herwig.

The matching in the Sherpa generator relies on the CKKW matching scheme for LO matrix elements and on the Powheg scheme for NLO calculations. Pythia8 also includes the possibility of matching NLO matrix elements via the Powheg approach.

A crucial ingredient for all MC event generators is the knowledge of the proton PDF. The determination of the PDFs has been performed by several groups. The CTEQ Nadolsky:2008zw (), MRST Martin:2009iq () and NNPDF Ball:2011us () collaborations include all available data in their fits but differ in the treatment of the parametrization assumptions of f_q(x, Q²) in Eqn. 6. The HeraPDF CooperSarkar:2011aa () group bases its PDF fits on a subset of the available data, i.e. mainly on the deep inelastic scattering measurements from the HERA collider. The results presented in this review rely mainly on the CTEQ and MRST PDFs.

### 2.4 SM Predictions of Diboson Production Cross Sections

Table 1 summarizes LO and NLO predictions of diboson production in pp collisions at 7 TeV based on the MCFM generator. The uncertainties due to scale and PDF variations are also shown. The difference between LO and NLO predictions is up to 25%, depending on the final state. The difference between PDF sets is on the order of 3%.

The most recent NNLO predictions for ZZ production suggest a further increase of 11% of the inclusive cross section Cascioli:2014yka (). However, electroweak effects are not yet taken into account; these are expected to lead to a reduction of the cross section.

### 2.5 Description of SM Extensions and aTGCs

Since this article focuses on the LHC results at 7 TeV, we restrict ourselves to the discussion of the triple gauge boson interaction vertices and their new physics modifications, commonly called aTGCs. There are only two triple gauge vertices allowed in the SM, WWV with V = γ, Z. These vertices are fully determined by the gauge group structure. A generic anomalous contribution to the WWV vertex can be parametrized in terms of a purely phenomenological effective Lagrangian. The most general Lagrangian that describes the trilinear interaction of electroweak gauge bosons with the smallest number of degrees of freedom Hagiwara:1986vm (); Zeppenfeld:1987ip (); Baur:1988qt () reads as

L_WWV / g_WWV = i g₁^V (W†_μν W^μ V^ν − W†_μ V_ν W^μν) + i κ_V W†_μ W_ν V^μν + (i λ_V / m_W²) W†_λμ W^μ_ν V^νλ − g₄^V W†_μ W_ν (∂^μ V^ν + ∂^ν V^μ) + g₅^V ε^μνρσ (W†_μ ∂↔_ρ W_ν) V_σ + i κ̃_V W†_μ W_ν Ṽ^μν + (i λ̃_V / m_W²) W†_λμ W^μ_ν Ṽ^νλ

where g_WWγ = −e and g_WWZ = −e cotθ_W are the two overall couplings, m_W is the mass of the W boson, and W_μ, V_μ and W_μν, V_μν are the gauge boson vector fields and their field tensors. The anomalous couplings are described by seven parameters for each of the two vertices: g₁^V, κ_V, λ_V, g₄^V, g₅^V, κ̃_V and λ̃_V. In the SM, g₁^V = κ_V = 1 and all other parameters vanish; aTGCs are defined as deviations from these values.

The first three terms in Eqn. 2.5 are C- and P-invariant, while the remaining four terms violate the C- and/or P-symmetry. Furthermore, electromagnetic gauge invariance requires that g₁^γ = 1 and forbids the couplings g₄^γ and g₅^γ, while the corresponding Z boson coupling parameters can differ from their SM values. We are left with five independent C- and P-conserving parameters Δg₁^Z, Δκ_γ, Δκ_Z, λ_γ and λ_Z, and six C- and/or P-violating parameters g₄^Z, g₅^Z, κ̃_γ, κ̃_Z, λ̃_γ and λ̃_Z. The studies presented in this paper assume electromagnetic gauge invariance and conservation of C and P separately, resulting in five independent aTGC parameters.

In order to further reduce the number of independent parameters and therefore allow for a simpler experimental derivation of limits, several additional assumptions can be made. The ‘equal coupling’ scenario assumes that the anomalous triple gauge couplings are the same for the WWγ and WWZ vertices, i.e. Δκ_γ = Δκ_Z and λ_γ = λ_Z with Δg₁^Z = 0, and hence leads to two free parameters. The ‘LEP’ scenario is motivated by SU(2) × U(1) gauge invariance Gounaris:1996rz () and assumes Δκ_Z = Δg₁^Z − Δκ_γ tan²θ_W and λ_Z = λ_γ, leading to three independent parameters. The ‘HISZ’ scenario Hagiwara:1993ck () assumes no cancellations between tree-level and loop contributions, leading to three constraints among the couplings that leave two free parameters. A review of various parametrization scenarios for aTGCs can be found in aTGCReview ().
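As a small numerical aid, the relations of the ‘LEP’ scenario can be encoded directly. The helper below is hypothetical (not taken from any experiment’s software), and the numerical value of the weak mixing angle is an illustrative assumption:

```python
# Assumed value of sin^2(theta_W), for illustration only
SIN2_THETA_W = 0.2312

def lep_scenario(dg1_z, dkappa_gamma, lambda_gamma):
    """Hypothetical helper applying the 'LEP' scenario relations:
    delta_kappa_Z = delta_g1_Z - delta_kappa_gamma * tan^2(theta_W)
    and lambda_Z = lambda_gamma, reducing the five C- and P-conserving
    aTGCs to three independent parameters."""
    tan2 = SIN2_THETA_W / (1.0 - SIN2_THETA_W)
    return {
        "dg1_z": dg1_z,
        "dkappa_gamma": dkappa_gamma,
        "dkappa_z": dg1_z - dkappa_gamma * tan2,
        "lambda_gamma": lambda_gamma,
        "lambda_z": lambda_gamma,
    }
```

For example, at the SM point all derived couplings vanish, and with Δκ_γ = 0 the relation reduces to Δκ_Z = Δg₁^Z.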

Non-zero aTGCs will lead to a change in the calculation of matrix elements. As an example, we discuss here the change of the matrix element M(λ₁λ₂), describing the production of a gauge boson pair for large center-of-mass energies of the interacting partons Zeppenfeld:1987ip (). The two subscripts λ₁ and λ₂ denote the helicities of the final state bosons (0, +1 and −1). The expected changes read as

where Θ denotes the production angle of the boson with respect to the incoming quark direction. In general, the matrix element and thus the production cross section increases with increasing center-of-mass energy of the interacting partons. The LHC allows for a larger sensitivity to those aTGC parameters whose contribution to the production cross section increases with the squared center-of-mass energy ŝ. In addition, different aTGCs lead to different angular distributions of the final state particles. A multi-dimensional differential cross section measurement can therefore constrain these parameters individually.

Even though the ZZV vertex (V = γ, Z) is forbidden in the SM, new physics scenarios might allow for such interaction vertices. The corresponding phenomenological effective Lagrangian reads as

L_ZZV = (e / m_Z²) [ f₄^V (∂_μ V^μβ) Z_α (∂^α Z_β) + f₅^V (∂^σ V_σμ) Z̃^μβ Z_β ]   (8)

where the anomalous on-shell ZZ production is parametrized by two CP-conserving (f₅^γ, f₅^Z) and two CP-violating (f₄^γ, f₄^Z) parameters. Similarly, the Zγ vertex Baur:1992cd () can be described by an effective Lagrangian, where the anomalous couplings are described by h_i^γ (i = 1, …, 4) for the Zγγ vertex and h_i^Z for the ZγZ vertex. It should be noted that some of these parameters are partly correlated. In contrast to the WWV vertex, the effective parameters for the ZZV and ZγV vertices vanish at tree level in the SM, and only higher-order corrections allow for small CP-conserving couplings of the order of 10⁻⁴.

An overview of the properties of all aTGC parameters is given in Table 2.

The unitarity of the SM electroweak Lagrangian is preserved due to gauge invariance. The introduction of aTGCs in the Lagrangian alters its gauge structure and can lead to unitarity violations at relatively low energies. This can be seen in Eqn. 2.5, where some matrix elements are proportional to the center-of-mass energy. To avoid unitarity violations at high energies, the Lagrangian approach in Eqn. 2.5 is replaced by a form factor as

α(ŝ) = α₀ / (1 + ŝ/Λ²)ⁿ   (9)

where α₀ is the aTGC parameter at low energies and Λ is the energy cut-off scale at which new physics effects become dominant Ellison:1998uy (). By convention, a dipole form factor with n = 2 is usually chosen. In some sense, the form factor can be interpreted by treating the couplings in the Lagrangian as energy dependent (strictly speaking, the Lagrangian couplings must remain constant, so these are actually two different approaches).

The choice of the form factor parametrization and the cut-off scale is arbitrary as long as it conserves unitarity for reasonably small aTGC coupling parameters. This choice is not important at lepton colliders, as the fixed center-of-mass energy of the interacting particles allows for a well-defined translation between different choices of parametrization. The situation is different at hadron colliders, where only the center-of-mass energy of the interacting protons is known, but not that of the interacting partons. The measured production cross sections are always integrated over a certain energy range, and their interpretation in terms of aTGCs depends on the form factor chosen.

The cut-off scale is usually chosen such that the extracted limits on aTGCs still preserve unitarity within a given analysis. In order to give experimental limits that are free from the arbitrary choice of the form factor, results are often also quoted for Λ → ∞, i.e. using a constant form factor and hence formally violating unitarity at high energies. The aTGC limits based on Λ → ∞ are the more stringent and less conservative ones.
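The suppression introduced by the dipole form factor is easy to quantify. The following sketch implements the standard dipole convention of Eqn. 9; the default cut-off of Λ = 2 TeV is one typical choice, assumed here for illustration:

```python
def form_factor_coupling(alpha0, shat, lam=2000.0, n=2):
    """Dipole-suppressed aTGC: alpha(shat) = alpha0 / (1 + shat/Lambda^2)^n.
    shat is the squared parton center-of-mass energy in GeV^2, lam is the
    cut-off scale in GeV; n = 2 is the conventional dipole exponent."""
    return alpha0 / (1.0 + shat / lam ** 2) ** n

# At shat = (1 TeV)^2 with Lambda = 2 TeV, the suppression factor is
# 1 / (1 + 0.25)^2 = 0.64 relative to the low-energy coupling alpha0
suppressed = form_factor_coupling(1.0, 1000.0 ** 2)
```

Sending lam to very large values recovers the constant-coupling (Λ → ∞) case mentioned above, in which no suppression is applied.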

The introduction of the form factor is conceptually overly restrictive, since the only physics constraint is that the theory respects the unitarity bound in the region where there is data. A newly-proposed approach for the study of aTGCs is based on effective field theories Degrande:2012wf (). An effective field theory of the SM can be written as

L_eff = L_SM + Σ_i (c_i / Λ²) O_i

where O_i are dimension-six operators and c_i are coupling parameters that describe the interaction strength of new physics with the SM fields. It can be shown Hagiwara:1993ck () that only three dimension-six operators affect the electroweak gauge boson self-interaction. The corresponding coefficients can be related to the aTGCs that are discussed above.

There are several advantages of using effective field theories for the description of aTGCs. By construction, an effective quantum field theory is only valid up to energies of the order of the scale Λ. As long as the effective theory describes the data, it automatically respects the unitarity bound. Hence, no further assumptions on the energy scale have to be applied. Furthermore, the effective field theory approach has fewer parameters and is renormalizable order by order. While the results on aTGCs presented in this review article are based on the modified Lagrangian approach, future measurements might use effective field theories for aTGC studies.

## 3 Experimental Aspects of Diboson Measurements at the LHC

### 3.1 The LHC machine

The LHC Evans:2008zzb () is currently the world’s most powerful particle accelerator. It consists of several stages that successively increase the energy of the protons (and heavy ions): protons are accelerated by the LINAC to 50 MeV, the Proton Synchrotron Booster to 1.4 GeV, the Proton Synchrotron to 26 GeV, and the Super Proton Synchrotron to 450 GeV before the final injection into the LHC ring. With a circumference of 27 km and a magnetic field of 8.3 T, the LHC can accelerate each proton beam up to 7 TeV. With a revolution frequency of 11.25 kHz and a maximum of 2808 bunches that can each be filled with up to 115 billion protons, the instantaneous luminosity can reach 10³⁴ cm⁻²s⁻¹ at a bunch collision rate of 40 MHz. The two proton beams are brought together and collide head-on at four points around the LHC ring, where four large detectors, ALICE, ATLAS, CMS and LHCb, are located.

First collisions at the LHC took place in November 2009 at a proton-proton center-of-mass energy of 0.9 TeV. The LHC started operations at a center-of-mass energy of 7 TeV in March 2010 and delivered an integrated luminosity of about 50 pb⁻¹ in 2010 and 5 fb⁻¹ in 2011 to each of the ATLAS and CMS experiments. The peak instantaneous luminosity delivered by the LHC at the start of a fill increased by more than an order of magnitude from 2010 to the end of 2011. The machine increased its center-of-mass energy to 8 TeV in 2012 and delivered about 25 fb⁻¹ to both experiments. The highest peak instantaneous luminosity was reached in November 2012, close to the design luminosity of 10³⁴ cm⁻²s⁻¹, albeit at twice the beam crossing time. The data analyses for the 2012 run are still ongoing, and this paper only covers results from the 7 TeV run in 2011. During this period, the maximum number of colliding bunch pairs was 1331, and the minimum bunch spacing was 50 ns with a typical bunch population of order 10¹¹ protons. The maximum number of inelastic interactions per bunch crossing (‘pile-up’) was 20, and the average was 9.1.
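The quoted pile-up values can be cross-checked with a one-line estimate, μ = L σ_inel / (n_b f_rev). The luminosity and inelastic cross section used below are illustrative assumptions for a 2011-like machine configuration, not measured inputs from this review:

```python
def mean_pileup(lumi, sigma_inel_mb, n_bunches, f_rev_hz=11245.0):
    """Average number of inelastic pp interactions per bunch crossing:
    mu = L * sigma_inel / (n_bunches * f_rev).
    lumi in cm^-2 s^-1; the cross section is converted from mb to cm^2."""
    sigma_cm2 = sigma_inel_mb * 1e-27  # 1 mb = 1e-27 cm^2
    return lumi * sigma_cm2 / (n_bunches * f_rev_hz)

# Assumed 2011-like values: peak luminosity ~3.6e33 cm^-2 s^-1,
# sigma_inel ~71.5 mb at 7 TeV, 1331 colliding bunch pairs
mu = mean_pileup(3.6e33, 71.5, 1331)
```

With these assumed inputs the estimate lands in the high-teens, consistent with the quoted maximum pile-up of about 20 at peak luminosity.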

### 3.2 The ATLAS and CMS Detectors

Since the topologies of new physics processes are unknown, detectors should be designed to be sensitive to all detectable particles and signatures (electrons, muons, taus, photons, missing transverse energy, jets, b-quarks) produced in the interactions. Both the ATLAS Aad:2008zzm () and CMS Chatrchyan:2008aa () detectors are general-purpose detectors composed of many sub-detectors, each of which has a specific task in the reconstruction of the events. Although the two detectors differ in design and conception, the basic detection structure is similar. Both detectors have fast, multi-level trigger systems to select complex signatures online; fast data acquisition systems to record all selected events; excellent inner tracking detectors allowing efficient high-p_T tracking and secondary vertex reconstruction; fine-grained, high-resolution electromagnetic calorimeters for electron and photon reconstruction; full-coverage hadronic calorimetry for jet and missing transverse energy measurements; and high-precision muon systems with standalone tracking capability. The layouts of the ATLAS and CMS detectors are shown in Fig. 7. ATLAS emphasizes jet, missing transverse energy, and standalone muon measurements, while CMS has prioritized electron, photon, and inner tracking measurements.

CMS has chosen to have a single huge solenoid immersing the inner tracker and the electromagnetic and hadronic calorimeters inside a 4 T axial magnetic field. To reduce the occupancy of the inner tracking detector at the LHC design luminosity, the inner tracker consists solely of silicon pixel and microstrip detectors, which provide high granularity at all radii. CMS relies on the fine lateral granularity of its lead tungstate scintillating crystal electromagnetic calorimeter for electron and photon measurements. The hadronic sampling calorimeter system consists of brass absorbers and scintillating tiles read out via wavelength-shifting optical fibers that guide the light to photomultiplier tubes. The strong constraints imposed by the solenoid have resulted in a barrel hadronic calorimeter with insufficient absorption before the coil, so a tail catcher has been added around the coil to provide better protection against punch-through to the muon system. Driven by the design of the magnet, CMS relies on the high solenoidal field to bend muon tracks in the transverse plane, requiring the extrapolation of the muon track into the inner tracker.

ATLAS has chosen to have three different magnet systems: a thin solenoid around the inner tracking system, and one eight-fold barrel and two eight-fold endcap air-core toroid magnets arranged radially around the hadron calorimeters. The inner tracker consists of silicon pixel and microstrip detectors at small radii and a transition radiation tracker (TRT) at large radii. For electron and photon measurements, ATLAS relies on fine segmentation along both the lateral and longitudinal directions of electromagnetic shower development, using a lead and liquid-argon sampling technique. The hadronic calorimeter system uses a sampling technique similar to that used by CMS, except that iron and copper are used as absorbers. The structure of the barrel and endcap toroid magnets allows standalone muon tracking inside the large area spanned by the toroids.

### 3.3 Reconstruction Objects for Physics Analysis

The $z$-axis of the coordinate system of both detectors is chosen to be along the beam direction, the $x$-axis points to the center of the LHC ring, and the $y$-axis points upwards. The origin of the coordinate system is placed at the nominal collision point, i.e. in the center of the detectors. Two angular coordinates are used to describe event kinematics: the azimuthal angle $\phi$ is defined in the $x$-$y$ plane and the polar angle $\theta$ is defined with respect to the $z$-axis. The polar angle is commonly used to define the pseudorapidity $\eta = -\ln\tan(\theta/2)$.
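The relation between the polar angle and the pseudorapidity can be sketched in a few lines of Python (a minimal illustration, not experiment software):

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity eta = -ln(tan(theta/2)), with theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def polar_angle(eta):
    """Inverse relation: theta = 2*atan(exp(-eta))."""
    return 2.0 * math.atan(math.exp(-eta))
```

A particle emitted perpendicular to the beam ($\theta = \pi/2$) has $\eta = 0$, while $|\eta|$ grows rapidly as the direction approaches the beam axis.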

Observations of heavy diboson pair production processes at the LHC resulted from analyses of the fully leptonic decay channels (in which a $W$ boson decays to $\ell\nu$ and a $Z$ boson decays to $\ell\ell$, where $\ell = e, \mu$) and the semi-leptonic decay channels (in which one boson decays leptonically while the other boson decays to hadrons or neutrinos). The fully leptonic decay channels produce clean experimental signatures of one or more high-$p_T$ charged leptons and, in the case of final-state neutrinos, large missing transverse energy $E_T^{\mathrm{miss}}$. The semi-leptonic (or neutrino) decay channels result in lepton+jets (or lepton+$E_T^{\mathrm{miss}}$) final states and are harder to detect experimentally due to the large jet-induced background, despite their higher production rates.

The main analyses discussed in this review are based on the reconstruction of electron and muon kinematics. Since the initial momentum of the interacting partons in the plane transverse to the beamline is zero, the projected momenta of reconstructed objects in this plane, called transverse momenta $p_T$, are of special importance in event reconstruction. A similar concept is introduced for measured energies, where the transverse energy is defined as $E_T = E\sin\theta$.

The data analyzed were often selected online by a single-lepton ($e$ or $\mu$) or dilepton trigger, with a threshold on the transverse energy in the electron case and on the transverse momentum in the muon case. Different thresholds (normally around 20 GeV for single-lepton triggers and around 12 GeV for dilepton triggers) were applied depending on the average instantaneous luminosity of the running periods.

The reconstruction of electrons combines electromagnetic calorimeter and inner tracker information Khachatryan:2010pw (); Aad:2010bx () and makes use of the standard electron reconstruction algorithms of ATLAS and CMS. Candidate electrons are required to pass certain $E_T$ threshold cuts and to be located inside the detector fiducial regions. Additional electron identification requirements are imposed which rely on electromagnetic shower shape observables, on associated track quality variables, and on track-cluster matching observables, so as to preserve the highest possible efficiency while reducing the multijet background Aad:2011mk (); CMS:ECAL (); Chatrchyan:2013dga (). The $\eta$-coverage of electron and photon candidate reconstruction is $|\eta| < 2.47$ and $|\eta| < 2.5$ for ATLAS and CMS, respectively. The calorimeter transition regions, $1.37 < |\eta| < 1.52$ in ATLAS and $1.44 < |\eta| < 1.57$ in CMS, are typically excluded for most analyses, as they contain a significant amount of service infrastructure which reduces the reconstruction quality.

The reconstruction of photon candidates is similar to the electron case. However, specific cuts are applied on the shower shape of the reconstructed electromagnetic clusters and on tracking information. If no track can be associated to the electromagnetic cluster, the photon is called ‘unconverted’. An association of two tracks to the electromagnetic cluster implies a previous photon conversion into an $e^+e^-$ pair, and the corresponding photon candidates are therefore labeled ‘converted’ photons.

Muons are reconstructed using hits collected in both the inner tracker and the outer muon spectrometer, corrected for energy loss measured by the calorimeter. Good reconstruction quality is ensured by requiring a minimum number of hits associated with the track in both the inner and outer tracking systems. Due to the limited pseudorapidity coverage of the inner tracker and the trigger detectors, muon candidates are reconstructed within $|\eta| < 2.4$ for CMS and $|\eta| < 2.7$ for ATLAS. However, the ATLAS inner detector only covers the region up to $|\eta| = 2.5$, and therefore most analyses using reconstructed muons also limit themselves to $|\eta| < 2.5$ in order to have combined muon candidates, i.e. tracks which are reconstructed in both the inner detector and the muon spectrometer ATLAS:Muon (); Chatrchyan:2012xi ().

To ensure that candidate electrons and muons originate from the primary interaction vertex, some analyses require these lepton candidates to have small longitudinal and transverse impact parameters. These requirements reduce contamination from heavy-flavor quark decays and cosmic rays. Leptons from heavy boson decays tend to be isolated from other particles in the event, while fake leptons or leptons from heavy-quark decays are usually close to a jet. To suppress the contribution from hadronic jets which are misidentified as leptons, electron and muon candidates are often required to be isolated in the inner tracker and (or) the electromagnetic calorimeter. Cuts are placed on the sum of transverse energies of all clusters around the lepton, or on the sum of the $p_T$ of all tracks that originate from the primary vertex and lie within a certain cone around the lepton candidate. A typical cone size is 0.3 in the $\eta$-$\phi$ plane. In the CMS case, a relative isolation variable combining the tracker and calorimeter isolation information is used.
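A track-based isolation of this kind can be sketched as follows; the dictionary-based event representation and the function names are illustrative assumptions, not the experiments' actual data model:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the eta-phi plane, with phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, tracks, cone=0.3):
    """Scalar pT sum of tracks inside the cone, divided by the lepton pT.
    `lepton` and each entry of `tracks` are dicts with 'pt', 'eta', 'phi'.
    The `0.0 <` guard excludes the lepton's own track if it is in the list."""
    sum_pt = sum(t["pt"] for t in tracks
                 if 0.0 < delta_r(lepton["eta"], lepton["phi"],
                                  t["eta"], t["phi"]) < cone)
    return sum_pt / lepton["pt"]
```

A cut such as `relative_isolation(lep, tracks) < 0.1` would then select well-isolated lepton candidates.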

Jets are reconstructed from topological clusters of energy in the calorimeter using the anti-$k_t$ algorithm with a certain radius parameter Cacciari:2008gp (). Jet energies are often calibrated using $\eta$- and $p_T$-dependent correction factors derived from studies based on the Geant4 simulation Agostinelli:2002hh () as well as dijet, $\gamma$+jets, and $Z$+jets collision data. Jets are classified as originating from $b$-quarks by using algorithms that combine information about the impact parameter significance of tracks in a jet with the topology of semileptonic $b$- or $c$-hadron decays. ATLAS and CMS reconstruct particle jets within the regions $|\eta| < 4.5$ and $|\eta| < 4.7$, respectively Aad:2011he (); CMS:2009nxa ().

A summary of the identification and reconstruction features for electrons, muons, and jets of the ATLAS and CMS experiments is given in Tab. 3. The given kinematic constraints are also the basis for the event selections discussed in the following chapters.

Weakly interacting particles such as neutrinos leave the detector unseen and can only be reconstructed indirectly. The concept of this indirect measurement is based on the fact that the momentum in the transverse plane before the collision is zero. Undetected energy and momentum carried out of the detector therefore lead to missing transverse energy, $E_T^{\mathrm{miss}}$, in an event. The two-dimensional $E_T^{\mathrm{miss}}$ vector is based on the calorimeter information and is calculated as the negative vector sum of the transverse energies deposited in the calorimeter towers. The latter is corrected for the under-measurement of the hadronic energy in the calorimeters and for muon tracks reconstructed by the inner tracker and the muon spectrometer, leading to the following schematic definition:

$$\vec{E}_T^{\,\mathrm{miss}} = -\Big(\sum_{\mathrm{calo\ towers}} \vec{E}_T^{\,\mathrm{corr}} \;+\; \sum_{\mu} \vec{p}_T^{\,\mu}\Big) \qquad (10)$$

For the traditional calorimeter-based algorithm, the correction for the under-measurement of the hadronic energy in the calorimeter is performed by replacing the energies deposited by reconstructed jets with those of the jet-energy-scale corrected jets. A different, track-based algorithm to correct for the under-measurement of the hadronic energy in the calorimeter was also developed by both experiments. In this algorithm, the transverse momentum of each reconstructed charged-particle track is added to the total missing transverse momentum, from which the corresponding transverse energy expected to be deposited in the calorimeters is subtracted.
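The schematic calorimeter-based definition can be illustrated with a short Python sketch (the tuple-based inputs are a deliberately simplified stand-in for calibrated calorimeter towers and muon tracks):

```python
import math

def missing_et(calo_towers, muons):
    """Schematic calorimeter-based MET: negative vector sum of tower E_T,
    with muon track pT added back, since muons deposit little energy
    in the calorimeters. Inputs are lists of (et_or_pt, phi) tuples."""
    mex = -sum(et * math.cos(phi) for et, phi in calo_towers)
    mey = -sum(et * math.sin(phi) for et, phi in calo_towers)
    mex -= sum(pt * math.cos(phi) for pt, phi in muons)
    mey -= sum(pt * math.sin(phi) for pt, phi in muons)
    return math.hypot(mex, mey)
```

A single unbalanced 50 GeV deposit yields 50 GeV of missing transverse energy pointing opposite to it, while two back-to-back deposits of equal size yield none.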

At both ATLAS and CMS, tau reconstruction and identification concentrate on the hadronic tau decay modes, which are characterized by either one or three charged pions accompanied by neutral pions ATLAS:2011oka (); Chatrchyan:2012zz (). Candidates are classified according to the number of reconstructed charged decay particles (prongs). Several sophisticated tau identification algorithms have been developed by both collaborations, using different sets of identification variables such as tracking and calorimeter information to find the optimal set of cuts in a multi-dimensional phase space.

Detailed simulations of the ATLAS and CMS detector response have been developed over the recent years. Both simulations are based on the Geant4 package, which describes the interactions of all final state particles with the detectors at a microscopic level. In a second step, the digitization of the simulated detector interactions is performed and the nominal data reconstruction algorithms are applied.

Despite the great detail of the simulation software, several differences between data and MC predictions remain. To improve the agreement between data and MC simulations, several quantities such as reconstruction efficiencies or energy scales are measured independently in data and simulation. Correction factors are then determined and applied to the simulations to account for the observed differences.

### 3.4 Methodology of Cross-Section Measurements at the LHC

For the measurement of diboson production at the LHC, it is generally assumed that both bosons are on-shell. Three different modes can be considered in the decay of heavy diboson pairs: the fully hadronic decay channel, where both bosons decay into quarks; the semi-leptonic decay channel, in which one boson decays into quarks and the other into leptons; and the fully leptonic decay channel, where the final state contains four leptons. The hadronic decay modes of the vector bosons are difficult to distinguish at hadron colliders due to the overwhelmingly large cross section of jet-induced background processes. The CMS collaboration has published a combined cross section measurement based on semi-leptonic decays in $WW$ and $WZ$ pairs, where one $W$ boson decays leptonically while the second boson decays hadronically Chatrchyan:2012bd (). However, the systematic and statistical uncertainties of this measurement are significantly larger than those of the results based on studies of the fully leptonic decay channel, which allows for a rather clean signal selection. Due to this clean signal selection and the fact that the branching ratios of the vector bosons are well known, the fully leptonic decay channel is the best channel in which precision measurements of the production cross section of diboson pairs at the LHC can be performed.

The theoretical prediction and calculation of diboson production cross sections has been discussed in Sect. 2.2. On the experimental side, the inclusive production cross section can be calculated via the following equation:

$$\sigma_{\mathrm{tot}} = \frac{N_{\mathrm{data}} - N_{\mathrm{bkg}}}{\epsilon \cdot \mathrm{BR} \cdot \mathcal{L}_{\mathrm{int}}} \qquad (11)$$

The number of signal events is determined as $N_{\mathrm{sig}} = N_{\mathrm{data}} - N_{\mathrm{bkg}}$, where $N_{\mathrm{data}}$ is the number of selected events in data, and $N_{\mathrm{bkg}}$ is the number of background events surviving the signal selection. The factor $\epsilon$ gives the fraction of signal events which pass the signal selection criteria. In order to correct the inclusive cross section for the choice of a specific decay channel, the total value has to be corrected by the appropriate branching ratio BR. These ratios are known to high accuracy from the LEP experiments Beringer:1900zz (). The last term in the denominator of Eqn. 11 is the integrated luminosity $\mathcal{L}_{\mathrm{int}}$, i.e. a measure of the size of the data sample used.
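Numerically, Eqn. 11 is a one-line computation; the following sketch uses illustrative input values only:

```python
def total_cross_section(n_data, n_bkg, efficiency, branching_ratio, int_lumi):
    """sigma_tot = (N_data - N_bkg) / (eps * BR * L_int).
    With int_lumi in pb^-1, the result is in pb."""
    return (n_data - n_bkg) / (efficiency * branching_ratio * int_lumi)
```

For example, 1000 selected events with 100 expected background events, an efficiency of 0.5, a branching ratio of 0.1, and 4000 pb$^{-1}$ of integrated luminosity correspond to a cross section of 4.5 pb.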

The efficiency correction factor $\epsilon$ is usually estimated from the fraction of signal MC events passing all cuts after the full detector simulation. It should be noted that certain requirements on the final states can be applied directly at the generator level; for example, final-state leptons may have to pass some minimal $p_T$ cut. The factor $\epsilon$ is thus defined as the ratio of the number of events which pass the signal selection at the reconstruction level to the number of all generated events. However, the simulation usually exhibits some differences compared to the real detector. These differences are corrected for in the estimation of $\epsilon$, following the methods described in Sect. 3.3.

The efficiency correction can be decomposed as the product of a fiducial acceptance ($A$) and a detector-induced correction factor ($C$): $\epsilon = A \cdot C$. The fiducial acceptance is the ratio of the number of events which pass the geometric and kinematic cuts of the analysis at the generator level ($N^{\mathrm{gen}}_{\mathrm{fid}}$) to the total number of generated events in a simulated sample of the signal process ($N^{\mathrm{gen}}_{\mathrm{tot}}$). These selection cuts at the generator level usually impose geometric and kinematic constraints close to the cuts applied on the reconstructed objects. The dominant uncertainties on the fiducial acceptance are the scale and PDF uncertainties.

The detector correction factor, $C$, is defined as the number of selected events in the simulated sample ($N^{\mathrm{MC}}_{\mathrm{sel}}$) over the number of events in the fiducial phase space at the generator level ($N^{\mathrm{gen}}_{\mathrm{fid}}$). Hence $\epsilon$ can be written as

$$\epsilon = A \cdot C = \frac{N^{\mathrm{gen}}_{\mathrm{fid}}}{N^{\mathrm{gen}}_{\mathrm{tot}}} \cdot \frac{N^{\mathrm{MC}}_{\mathrm{sel}}}{N^{\mathrm{gen}}_{\mathrm{fid}}} = \frac{N^{\mathrm{MC}}_{\mathrm{sel}}}{N^{\mathrm{gen}}_{\mathrm{tot}}} \qquad (12)$$

The separation of $\epsilon$ into $A$ and $C$ allows a separation of theoretical and experimental uncertainties, assuming that the definition of the fiducial volume at the generator level resembles to a good extent the signal selection cuts at the reconstruction level. The fiducial cross section is defined as

$$\sigma_{\mathrm{fid}} = \frac{N_{\mathrm{data}} - N_{\mathrm{bkg}}}{C \cdot \mathcal{L}_{\mathrm{int}}} \qquad (13)$$

which allows a measurement affected only to a small extent by theoretical uncertainties. It can therefore be used to compare measurements to theoretical predictions which might become available at a later date.
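The decomposition $\epsilon = A \cdot C$ and the fiducial cross section of Eqn. 13 can be sketched together in a few lines (the event counts below are illustrative):

```python
def acceptance(n_fid_gen, n_tot_gen):
    """A = N_fid^gen / N_tot^gen: fraction of generated events inside the fiducial cuts."""
    return n_fid_gen / n_tot_gen

def detector_correction(n_sel_mc, n_fid_gen):
    """C = N_sel^MC / N_fid^gen: detector-induced correction factor."""
    return n_sel_mc / n_fid_gen

def fiducial_cross_section(n_data, n_bkg, c_factor, int_lumi):
    """sigma_fid = (N_data - N_bkg) / (C * L_int), cf. Eqn. 13."""
    return (n_data - n_bkg) / (c_factor * int_lumi)
```

Note that the product $A \cdot C$ collapses to $N^{\mathrm{MC}}_{\mathrm{sel}}/N^{\mathrm{gen}}_{\mathrm{tot}}$, reproducing the overall efficiency of Eqn. 12.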

The uncertainties associated with the detector correction factor $C$ are dominated by experimental sources, such as the limited knowledge of reconstruction or cut efficiencies and the accuracy of the energy/momentum measurements. In principle, this factor can be larger than unity, since events outside of the fiducial region at the generator level may migrate into the fiducial region defined at the reconstruction level. In practice, however, this is usually not the case, as detector inefficiencies and quality criteria on the reconstructed objects have to be considered.

Equation 13 can be interpreted as the removal of all experimental effects due to detector acceptance, efficiencies, and resolutions from an experimental quantity, making it comparable to the theoretical prediction. Within this fiducial region, the distributions of variables such as the transverse momenta of the final-state leptons can be unfolded. These distributions provide a differential cross section measurement and allow a full shape comparison with theoretical predictions.

The so-called ‘bin-by-bin’ unfolding method can be used if the purity of the underlying distribution is sufficiently high. This method is equivalent to calculating the cross section of Eqn. 13 for each bin. The purity of a bin is defined as the ratio of the number of events reconstructed in the same bin in which they were generated to the number of events generated in that bin. For lower purities, more advanced unfolding methods have to be used. One commonly used approach in diboson studies at the LHC is Bayesian unfolding D'Agostini:1994zf (), which takes into account bin-migration effects and reduces the impact of the underlying theoretical distribution used as input information.
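A bin-by-bin correction amounts to scaling each measured bin by the MC ratio of generated to reconstructed events. The following sketch, with purely illustrative histograms, shows both the purity and the correction:

```python
def bin_purity(n_same_bin, n_gen_in_bin):
    """Fraction of events generated in a bin that are also reconstructed in it."""
    return n_same_bin / n_gen_in_bin

def bin_by_bin_unfold(data_reco, mc_reco, mc_gen):
    """Correct each measured bin by the generator-to-reconstruction MC ratio.
    All three arguments are per-bin event counts of equal length."""
    return [d * g / r for d, r, g in zip(data_reco, mc_reco, mc_gen)]
```

Because each bin is corrected independently, the method ignores migrations between bins, which is why it is only reliable when the purity is high.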

## 4 Studies of the $WW$, $WZ$ and $W\gamma$ final states

The production of $WW$, $WZ$ and $W\gamma$ final states can occur via $s$-channel processes in the SM and therefore provides a test of the $WWV$ vertex (with $V = Z, \gamma$). In the following sections we describe in detail the event selection, the background estimation methods, and the results of both experiments for the production cross section measurements of these final states. In particular, we highlight the differences between the published analyses and derive combinations of the measured cross sections.

### 4.1 WW Analysis

ATLAS and CMS have published analyses of $W$-boson pair production based on the full available dataset at $\sqrt{s} = 7$ TeV ATLAS:2012mec (); Chatrchyan:2013yaa (). Both analyses are based on final states in which both $W$ bosons decay leptonically via $W \to \ell\nu$ with $\ell = e, \mu$. We discuss and compare these two measurements in the following sections.

#### 4.1.1 Event Selection

The experimental signature of $WW$ production in the fully leptonic decay channel is two high-energy leptons with no distinct invariant-mass peak, together with significant missing transverse momentum due to the presence of two neutrinos in the final state. Since the final state can contain electrons and muons, three decay channels are studied, namely $ee$, $\mu\mu$ and $e\mu$. The $\tau$-decays of the $W$ bosons are not directly taken into account due to the relatively low $\tau$-reconstruction efficiency and significant fake rate. However, the cascade decay of the $W$ boson via $W \to \tau\nu \to \ell\nu\nu\nu$ is considered as signal.

The dominant SM background processes are the Drell-Yan process ($Z/\gamma^* \to \ell\ell$), top-pair production, the production of $W$ bosons in association with jets where one jet is incorrectly identified as a signal lepton, and other diboson processes. The relevant Feynman diagrams are shown in Fig. 8.

The Drell-Yan process yields two same-flavor opposite-charge leptons with an invariant mass peaking at the $Z$ boson mass. A mis-measurement of the transverse energies of the two leptons can lead to apparent missing transverse energy and thus a signature similar to the signal process. It should be noted that the $e\mu$ signal region is also affected by the Drell-Yan process via the $\tau$-decay channel, which can likewise lead to an $e\mu$ final state. Top-pair production with subsequent leptonic decays also leads to a signal-like signature; these events are always produced with at least two additional $b$-jets. $W$ production in association with jets, where the $W$ boson decays leptonically and one of the jets is falsely identified as a signal lepton, will also fake the signal process. The leptonic decays of other diboson processes such as $WZ$ and $ZZ$ are also considered as background processes, in the case that one or two leptons are not reconstructed and cause large missing transverse energy in the event.

The full signal selections of the ATLAS and CMS measurements are rather complex. Since the final results do not explicitly depend on the details of the signal selection, we discuss here only the basic concept of the signal selection and the reasoning behind it. A detailed discussion can be found in ATLAS:2012mec () and Chatrchyan:2013yaa ().

A schematic signal selection requires that exactly two high-energy, oppositely-charged and isolated leptons are reconstructed in each event, together with a minimal missing transverse momentum. Since the latter cut is intended to suppress the Drell-Yan background, a projected $E_T^{\mathrm{miss}}$ variable is used, as it reduces the impact of mis-measured leptons or jets compared to the standard definition. Background events due to top-pair production can be effectively rejected by vetoing events with additional jets: ATLAS vetoes events that contain a jet above a certain transverse-momentum threshold, while CMS vetoes events with jets above 30 GeV. CMS imposes additional vetoes from two top-quark tagging techniques. It should be noted that the jet-veto criterion also rejects a significant fraction of the signal events predicted by NLO QCD corrections. A summary of the detailed selection cuts is given in Tab. 4.

The resulting signal and background predictions, compared to the yield in data, where the SM prediction of the cross section is used, are shown in Tab. 5. ATLAS quotes the expected signal yields separately for the $ee$, $\mu\mu$ and $e\mu$ channels. CMS has not published numbers for each individual channel in Chatrchyan:2013yaa (). However, previous studies based on a smaller integrated luminosity CMS:2011dqa () suggest very similar numbers. The final results are dominated by the contribution from the $e\mu$ channel, as the corresponding selection cuts are relaxed due to the reduced background from the Drell-Yan process.

#### 4.1.2 Background Estimation

The expected background contributions are summarized in Tab. 5. The dominant contributions come from top-pair and $W$+jets events, which are discussed in more detail in this section.

The background contribution due to top-quark pairs is estimated in both analyses with a data-driven method. ATLAS defines a data sample, named the extended signal region (ESR), including all events which pass the signal selection cuts except the jet-veto requirement. This sample is dominated by events from $t\bar{t}$ and single-top processes with more than one jet per event. A control region is defined by applying the full signal selection criteria but requiring at least one $b$-tagged jet. The jet-multiplicity distribution expected in the ESR is estimated from the measured distribution in the control region, extrapolated to the ESR using MC predictions, and fitted to the measured ESR distribution in the higher jet-multiplicity region; the resulting value in the 0-jet bin is taken as the background estimate. The dominant uncertainty of this approach is due to the limited statistics in the control region.

CMS also defines a control region dominated by top-quark background events by requiring that a top-quark identification algorithm positively tags the given event. The normalization of the top-quark background in the signal region is then derived from the number of tagged events and the tagging efficiency $\epsilon_{\mathrm{tag}}$, i.e. the efficiency to tag a top-quark event. This efficiency is estimated in a data sample selected by the nominal signal selection criteria but requiring one reconstructed jet. The dominant uncertainty in this background estimate is due to the statistical and systematic uncertainties on $\epsilon_{\mathrm{tag}}$.

Since the probability for a jet to be misidentified as an isolated lepton might not be modeled correctly in the MC simulations, a similar data-driven method is used by both experiments for the $W$+jets background estimation. A $W$+jets-enriched sample is selected by loosening the isolation or identification requirements on one lepton. The number of $W$+jets events in the signal region is then estimated via a fake factor $f$, defined as the ratio of the probability for a jet to pass the nominal lepton selection to the probability to pass the loosened selection. The factor $f$ is determined in data separately for muons and electrons, using QCD multijet events.
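One common form of this extrapolation can be sketched as follows; the $f/(1-f)$ form assumes the fake factor is defined relative to all loose candidates, and the exact convention varies between analyses:

```python
def fake_factor(p_pass_tight, p_pass_loose):
    """f = P(jet passes nominal selection) / P(jet passes loosened selection)."""
    return p_pass_tight / p_pass_loose

def wjets_in_signal_region(n_loose_not_tight, f):
    """Scale the loose-but-not-tight control sample into the signal region:
    N_SR = f / (1 - f) * N(passes loose, fails tight)."""
    return f / (1.0 - f) * n_loose_not_tight
```

With a fake factor of 0.2, a control sample of 400 loose-but-not-tight events would translate into an estimate of 100 fake-lepton events in the signal region.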

The Drell-Yan background in the $ee$ and $\mu\mu$ channels is estimated by inverting the $Z$-boson veto cuts and then extrapolating from this control region into the signal region. The remaining background contributions are estimated with MC simulations, where all theoretical and experimental uncertainties are taken into account.

#### 4.1.3 Cross Section Measurement

The ATLAS cross section measurement is performed in each of the three decay channels and then combined, while CMS does not distinguish between the final states and directly derives a combined cross section. Furthermore, ATLAS defines a fiducial volume and separates the signal selection efficiency into acceptance and detector effects. The corresponding parameters $A$, $C$ and $\epsilon$ are shown in Tab. 6.

The efficiency factor of the CMS analysis averages over all lepton flavors and is defined with respect to a phase space that includes all possible leptonic decay modes. The correction factor BR in Eqn. 11 is therefore given by the full leptonic branching fraction of the $WW$ system. The efficiency correction factor in the ATLAS analysis is defined for each decay channel with respect to a phase space that includes only the respective final state. The contributions from the cascade decays via $W \to \tau\nu$ with subsequent leptonic $\tau$ decays are also included.

The experimental uncertainties in Tab. 6 are dominated by lepton reconstruction efficiencies and energy/momentum scale uncertainties. The theoretical uncertainties are significantly different in the two analyses, even though the signal selection requirements are similar. They contain contributions from the uncertainties on the strong coupling constant $\alpha_s$, the renormalization ($\mu_R$) and factorization ($\mu_F$) scales, and the PDFs. The two scales are varied around their central values, typically by a factor of two, to estimate the uncertainty, with ATLAS and CMS choosing different central scales. A second and more significant difference comes from the fact that CMS calculates the above theoretical uncertainties with the jet-veto scale factor applied, while ATLAS estimates the theoretical uncertainties before applying the jet-veto requirement.

Hence the estimation of the jet-veto scale factor needs to be discussed in more detail. Both analyses use a data-driven approach to estimate the probability ($P^{WW}$) for a signal event to pass the jet-veto requirement in data. This probability is calculated as

$$P^{WW}_{\mathrm{data}} = P^{WW}_{\mathrm{MC}} \cdot \frac{P^{Z}_{\mathrm{data}}}{P^{Z}_{\mathrm{MC}}} \qquad (14)$$

where $P^{Z}$ denotes the probability of $Z$-boson events to pass the jet-veto requirement. Events containing a $Z$ boson can be selected with high purity in data, and the kinematic distributions of the accompanying jets are expected to be similar to those in $WW$ events. Most uncertainties on the jet-veto requirement cancel in the ratio $P^{WW}_{\mathrm{MC}}/P^{Z}_{\mathrm{MC}}$, and therefore a reduced uncertainty on $P^{WW}_{\mathrm{data}}$ is achieved. This cancellation in the ratio is also the reason why ATLAS chose to estimate the PDF and scale uncertainties before applying the jet-veto requirement. The overall uncertainty on the jet-veto requirement is significantly lower in the ATLAS analysis than in the CMS analysis. Another possible contribution is that ATLAS estimates the effect of higher-order corrections using MC@NLO, while CMS uses MCFM. Since MC@NLO includes parton-shower effects, in contrast to MCFM, a better description of the jets is expected, which could lead to smaller effects from higher-order corrections.
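Eqn. 14 itself is a simple rescaling; the following sketch uses illustrative veto probabilities:

```python
def jet_veto_probability_data(p_ww_mc, p_z_data, p_z_mc):
    """P_data(WW) = P_MC(WW) * P_data(Z) / P_MC(Z), cf. Eqn. 14.
    Common mis-modeling of jet activity cancels in the P(WW)/P(Z) MC ratio."""
    return p_ww_mc * p_z_data / p_z_mc
```

If the simulation mis-models the jet activity in the same way for $Z$ and $WW$ events, the data/MC correction derived from the $Z$ sample transfers directly to the $WW$ prediction.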

It is worth noting that the scale and PDF uncertainties of the jet-veto requirement on the acceptance $A$ are significantly larger than their impact on the measured cross section. This is mainly due to the fact that the method of Eqn. 14 cannot be applied directly at the generator level. A naive estimate of the scale and PDF uncertainties on the jet-veto requirement leads to an underestimate of the corresponding uncertainty, and therefore more sophisticated methods have to be applied Stewart:2011cf ().