Study of Inclusive Jet Production and Jet Shapes in proton-proton collisions at √s = 7 TeV using the ATLAS Detector


Francesc Vives Vaqué

Institut de Física d’Altes Energies

Universitat Autònoma de Barcelona

Departament de Física

Edifici Cn, Campus UAB

E-08193 Bellaterra (Barcelona)


Barcelona, July 2011


supervised by

Prof. Mario Martínez Pérez

ICREA / Institut de Física d’Altes Energies /

Universitat Autònoma de Barcelona

The studies presented in this thesis are part of the following publications:

  • The ATLAS Collaboration, Study of jet shapes in inclusive jet production in pp collisions at √s = 7 TeV using the ATLAS detector, Physical Review D 83, 052003 (2011)

  • The ATLAS Collaboration, Measurement of inclusive jet and dijet cross sections in proton-proton collisions at 7 TeV centre-of-mass energy with the ATLAS detector, European Physical Journal C 71, 1512 (2011)

and of the following public ATLAS notes:

  • The ATLAS Collaboration, Jet Shapes in ATLAS and Monte Carlo modeling, ATL-PHYS-PUB-2011-010 (2011)

  • The ATLAS Collaboration, Measurement of inclusive jet and dijet cross sections in proton-proton collision data at 7 TeV centre-of-mass energy using the ATLAS detector, ATLAS-CONF-2011-047 (2011)

  • The ATLAS Collaboration, Properties and internal structure of jets produced in soft proton-proton collisions at √s = 900 GeV, ATLAS-CONF-2010-018 (2010)

Acknowledgments

First of all, I would like to thank my supervisor Mario Martínez. His broad knowledge of physics, involvement in the analyses I was performing and love for rigorous work have played a crucial role in this thesis. I am grateful to Enrique Fernández, Matteo Cavalli and Martine Bosman, who welcomed me at IFAE, and whose passion for physics I admire.

I also want to thank the ATLAS collaboration. In particular, I want to mention Fabiola Gianotti, the spokesperson, Kevin Einsweiler and Jon Butterworth, the Standard Model group conveners, and Tancredi Carli and Richard Teuscher, conveners of the Jet/EtMiss group.

I started the jet shapes analysis with Monica D’Onofrio. I would like to thank her for this collaboration, but mainly for her advice and friendship. Other postdocs have helped me during my Ph.D., and I am grateful to all of them: Bilge Demirköz, with whom I analyzed the first ATLAS data, Christophe Ochando, Ilya Korolkov, Luca Fiorini and Sebastian Grinstein.

Estel, Evelin and Valerio have been excellent office-mates, and actually more than that. To give just one example, they were the first friends to visit my daughter Gemma when she was born! I also want to thank my other colleagues at IFAE, in particular Machi, Jordi and Volker.

I am grateful to Juan and Sonia, for their encouragement during the Ph.D., and to all my friends working at CERN: Ila, Peppe, Alessio, Ciccio, Simone, Lidia, Delo, Paola…

Finally, I want to thank the most important people in my life: my family. I want to particularly mention my wife, Maria Laura, not only for moving to live with me close to CERN, but mainly for her daily support.


Introduction

The Standard Model (SM) is the theory that provides the best description of the properties and interactions of elementary particles. The strong interaction between quarks and gluons is described by the Quantum Chromodynamics (QCD) field theory. Jet production is the high-pT process with the largest cross section at hadron colliders. The jet cross section measurement is a fundamental test of QCD and is sensitive to the presence of new physics. It also provides information on the parton distribution functions and the strong coupling. One of the fundamental elements of jet measurements is the proper understanding of the energy flow around the jet core and the validation of the QCD description contained in the event generators, such as the parton shower cascades and the fragmentation and underlying event models. Jet shape observables are sensitive to these phenomena and thus well suited to this purpose. The first measurement of the inclusive jet cross section in proton-proton collisions at √s = 7 TeV delivered by the LHC was done using an integrated luminosity of 17 nb⁻¹ recorded by the ATLAS experiment. The measurement was performed for jets with pT > 60 GeV and |y| < 2.8, reconstructed with the anti-kt algorithm with radius parameters R = 0.4 and R = 0.6.

This Ph.D. Thesis presents the updated measurement of the inclusive jet cross section using the full 2010 data set, corresponding to 37 pb⁻¹ collected by ATLAS. Jets with pT > 20 GeV and |y| < 4.4 are considered in this analysis. The measurement of the jet shapes using the first 3 pb⁻¹ is also presented, for jets with 30 GeV < pT < 600 GeV and |y| < 2.8. Both measurements are unfolded back to the particle level. The inclusive jet cross section measurement is compared to NLO predictions corrected for non-perturbative effects, and to predictions from an event generator that includes NLO matrix elements. The jet shape measurements are compared to the predictions of several event generators based on LO matrix elements.

The contents of this Thesis are organized as follows: Chapter 1 contains a description of the strong interaction theory and jet phenomenology. The LHC collider and the ATLAS experiment are described in Chapter 2. The inclusive jet cross section measurement is described in detail in Chapter 3, and the jet shape measurements in Chapter 4. Additional comparisons of the jet shape measurements to Monte Carlo event generator predictions are shown in Chapter 5. There are two appendices at the end of the document. The first one contains additional jet shape studies, and the second one is devoted to energy flow studies at the calorimeter level.

Chapter 1 QCD at Hadron Colliders

1.1 The Standard Model

The Standard Model (SM) [1] is the most successful theory describing the properties and interactions (electromagnetic, weak and strong) of the elementary particles. The SM is a gauge quantum field theory based on the symmetry group SU(3)_C × SU(2)_L × U(1)_Y, where the electroweak sector is based on the SU(2)_L × U(1)_Y group, and the strong sector on the SU(3)_C group.

Interactions in the SM occur via the exchange of integer-spin bosons. The mediators of the electromagnetic and strong interactions, the photon and eight gluons respectively, are massless. The weak force acts via the exchange of three massive bosons, the W± and the Z.

The other elementary particles in the SM are half-integer-spin fermions: six quarks and six leptons. Both interact electroweakly, but only quarks feel the strong interaction. Electrons (e), muons (μ) and taus (τ) are massive leptons and have electric charge Q = −1. Their associated neutrinos (νe, νμ, ντ) do not have electric charge. Quarks can be classified into up-type (u, c, t) and down-type (d, s, b) depending on their electric charge (Q = +2/3 and Q = −1/3 respectively). For each particle in the SM, there is an anti-particle with opposite quantum numbers.

The SM formalism is written for massless particles and the Higgs mechanism of spontaneous symmetry breaking is proposed for generating non-zero boson and fermion masses. The symmetry breaking requires the introduction of a new field that leads to the existence of a new massive boson, the Higgs boson, that has still not been observed.

1.2 Quantum Chromodynamics Theory

Quantum Chromodynamics (QCD) [2] is the renormalizable gauge field theory that describes the strong interaction between colored particles in the SM. It is based on the SU(3) symmetry group, and its Lagrangian reads:

\mathcal{L}_{QCD} = \sum_q \bar{\psi}_{q,a} \left( i \gamma^\mu \partial_\mu \delta_{ab} - g_s \gamma^\mu t^C_{ab} A^C_\mu - m_q \delta_{ab} \right) \psi_{q,b} - \frac{1}{4} F^A_{\mu\nu} F^{A\,\mu\nu}    (1.1)

where the sum runs over the six different types of quarks, ψ_q, that have mass m_q; a, b are color indices and t^C are the SU(3) generators. The field strength tensor F^A_{μν} is derived from the gluon field A^A_μ:

F^A_{\mu\nu} = \partial_\mu A^A_\nu - \partial_\nu A^A_\mu - g_s f_{ABC} A^B_\mu A^C_\nu    (1.2)

f_{ABC} are the structure constants of SU(3), and the indices A, B, C run over the eight color degrees of freedom of the gluon field. The third term originates from the non-abelian character of the group, and is responsible for the gluon self-interaction, giving rise to triple and quadruple gluon vertices. This leads to a strong coupling, αs, that is large at low energies and small at high energies (see Figure 1.1). Two consequences follow from this:

  • Confinement: The color field potential increases linearly with the distance, and therefore quarks and gluons can never be observed as free particles. They are always inside hadrons, either mesons (quark-antiquark) or baryons (three quarks each with a different color). If two quarks separate far enough, the field energy increases and new quarks are created forming colorless hadrons.

  • Asymptotic freedom: At small distances the strong coupling is so weak that quarks and gluons behave as essentially free. This allows the use of the perturbative approach in this regime, where αs ≪ 1 (a numerical illustration is given after Figure 1.1).

Figure 1.1: Summary of measurements of αs as a function of the energy scale Q, from [3].
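As a simple numerical illustration of this running (a sketch added for clarity, not part of the original analysis), the one-loop approximation of αs can be evaluated directly; the reference value αs(MZ) = 0.118 and nf = 5 active flavours are assumptions chosen for the example.

    import math

    def alpha_s_one_loop(Q, alpha_mz=0.118, m_z=91.19, nf=5):
        """One-loop running coupling alpha_s(Q^2); Q in GeV."""
        b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
        return alpha_mz / (1.0 + b0 * alpha_mz * math.log(Q ** 2 / m_z ** 2))

    # The coupling decreases with the energy scale (asymptotic freedom):
    for q in (5.0, 20.0, 91.19, 1000.0):
        print(f"alpha_s({q:7.2f} GeV) = {alpha_s_one_loop(q):.4f}")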

1.3 Deep inelastic scattering

The scattering of electrons from protons, as illustrated in Figure 1.2, has played a crucial role in the understanding of the proton structure. If the energy E of the incoming electron is low enough, the proton can be considered as a point charge (without structure). The differential cross section with respect to the solid angle of the scattered electron is:

\frac{d\sigma}{d\Omega} = \frac{\alpha^2}{4 E^2 \sin^4(\theta/2)}\, \frac{E'}{E}\, \cos^2\frac{\theta}{2}    (1.3)

where α (≈ 1/137) is the fine structure constant, θ is the angle at which the electron is scattered, E′ is the outgoing electron energy and M the mass of the proton. E′ is kinematically determined by θ.

Figure 1.2: Electron scattering from a proton.

For higher energies of the incoming electrons, the interaction is sensitive to the proton structure, and the cross section becomes:

\frac{d\sigma}{d\Omega} = \frac{\alpha^2}{4 E^2 \sin^4(\theta/2)}\, \frac{E'}{E} \left[ K_2(q^2) \cos^2\frac{\theta}{2} + 2 K_1(q^2) \sin^2\frac{\theta}{2} \right]    (1.4)

K_1 and K_2 are functions that contain information on the proton structure and must be determined experimentally. Given that E′ is kinematically determined by θ, K_1 and K_2 are only dependent on one variable, the squared momentum transfer q².

Figure 1.3: Electron-proton deep inelastic scattering.

Finally, for even higher electron energies, the proton breaks in a multi-hadron final state as illustrated in Figure 1.3. The cross section is then:

\frac{d^2\sigma}{d\Omega\, dE'} = \frac{\alpha^2}{4 E^2 \sin^4(\theta/2)} \left[ W_2(\nu, q^2) \cos^2\frac{\theta}{2} + 2 W_1(\nu, q^2) \sin^2\frac{\theta}{2} \right]    (1.5)

Now the final-state momentum is the sum of the momenta of the hadrons originating from the proton, and it is not constrained by θ. Therefore, W_1 and W_2 are functions of two independent variables, ν = E − E′ and q². Theoretically it is more convenient to use the Lorentz-invariant variables Q² = −q² and x = Q²/(2P·q), where P is the momentum of the incoming proton.

The Parton Model describes the proton as built out of three point-like quarks (‘valence quarks’) with spin 1/2, and interprets x as the fraction of the proton momentum carried by the struck quark. From the idea that at high Q² the virtual photon interacts with an essentially free quark, Bjorken predicted that W_1 and W_2 depend only on x at large Q² (Q² ≫ 1 GeV²):

M W_1(\nu, q^2) \to F_1(x)    (1.6)
\nu W_2(\nu, q^2) \to F_2(x)    (1.7)

According to the Parton Model:

F_2(x) = \sum_i e_i^2\, x\, f_i(x)    (1.8)

where f_i(x), called the Parton Distribution Function (PDF), is the probability that the i-th quark carries a fraction x of the proton momentum, and e_i is the electric charge of the quark. Therefore, it is expected that

\sum_i \int_0^1 dx\, x\, f_i(x) = 1    (1.9)

but it was found experimentally that the result of this integral is 0.5. The rest of the proton momentum is carried by gluons. The introduction of gluons leads to a more complex description of the proton structure: quarks radiate gluons, and gluons produce quark-antiquark pairs (‘sea quarks’) or radiate other gluons. Figure 1.4 shows the PDFs of the valence quarks of the proton, the gluon, and the sea quarks. The valence quarks dominate at large x, whereas the gluon dominates at low x.

Figure 1.4: Example of the PDFs of the valence quarks of the proton, the gluon, and the sea quarks as a function of x.

The radiation of gluons results in a violation of the scaling behavior of F_1 and F_2, introducing a logarithmic dependence on Q², which is experimentally observed (see Figure 1.6). The functional form of the PDFs cannot be predicted from pQCD, but it is possible to predict their evolution with Q².

The parton interactions at first order in αs are gluon radiation (q → qg), gluon splitting (g → gg) and quark pair production (g → qq̄). The probability that a parton of type j radiates a quark or gluon and becomes a parton of type i, carrying a fraction z of the momentum of parton j (see Figure 1.5), is given by the splitting functions P_ij(z):

Figure 1.5: Diagrams at LO of the different parton interactions.
P_{qq}(z) = C_F\, \frac{1 + z^2}{1 - z}    (1.10)
P_{gq}(z) = C_F\, \frac{1 + (1 - z)^2}{z}    (1.11)
P_{qg}(z) = T_R \left[ z^2 + (1 - z)^2 \right]    (1.12)
P_{gg}(z) = 2 C_A \left[ \frac{1 - z}{z} + \frac{z}{1 - z} + z (1 - z) \right]    (1.13)

with C_F = 4/3, C_A = 3 and T_R = 1/2.

The evolution of the PDFs as a function of Q² follows the DGLAP (Dokshitzer, Gribov, Lipatov, Altarelli and Parisi) equations [4]:

\frac{\partial q_i(x, Q^2)}{\partial \log Q^2} = \frac{\alpha_s}{2\pi} \int_x^1 \frac{dz}{z} \left[ q_i(z, Q^2)\, P_{qq}\!\left(\frac{x}{z}\right) + g(z, Q^2)\, P_{qg}\!\left(\frac{x}{z}\right) \right]    (1.14)
\frac{\partial g(x, Q^2)}{\partial \log Q^2} = \frac{\alpha_s}{2\pi} \int_x^1 \frac{dz}{z} \left[ \sum_i q_i(z, Q^2)\, P_{gq}\!\left(\frac{x}{z}\right) + g(z, Q^2)\, P_{gg}\!\left(\frac{x}{z}\right) \right]    (1.15)

The first equation describes the evolution of the quark PDFs with Q² due to gluon radiation and quark pair production, whereas the second equation describes the change of the gluon PDF with Q² due to gluon radiation and gluon splitting. The equations assume massless partons and are therefore only valid for gluons and the light quarks (u, d and s).
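The kernels of Equations 1.10–1.13 are simple enough to code directly. The sketch below uses the unregularized real-emission kernels (the plus-prescription and δ(1 − z) terms required at z → 1 are omitted), so it is only a toy illustration of their shapes:

    # SU(3) color factors
    CF, CA, TR = 4.0 / 3.0, 3.0, 0.5

    def P_qq(z):  # quark -> quark, radiating a gluon
        return CF * (1.0 + z * z) / (1.0 - z)

    def P_gq(z):  # quark -> gluon (the gluon carries fraction z)
        return CF * (1.0 + (1.0 - z) ** 2) / z

    def P_qg(z):  # gluon -> quark-antiquark pair
        return TR * (z * z + (1.0 - z) ** 2)

    def P_gg(z):  # gluon -> gluon pair
        return 2.0 * CA * ((1.0 - z) / z + z / (1.0 - z) + z * (1.0 - z))

    # The 1/z growth of P_gq and P_gg is the soft-gluon enhancement
    # that feeds the rise of the gluon PDF at low x:
    print(P_gg(0.01), P_gg(0.5))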

Figure 1.6: Structure function F₂ of the proton as measured by the ZEUS, BCDMS, E665 and NMC experiments.

1.4 Perturbative QCD

1.4.1 The factorization theorem

The QCD factorization theorem is a crucial concept of QCD. It states that cross sections in hadron-hadron interactions can be separated into a hard partonic cross section (short-distance) component and a long-distance component, described by universal PDFs:

\sigma = \sum_{i,j} \int dx_1\, dx_2\; f_i(x_1, \mu_F^2)\, f_j(x_2, \mu_F^2)\; \hat{\sigma}_{ij}\!\left(x_1 P_1, x_2 P_2, \alpha_s(\mu_R^2), \mu_F^2\right)    (1.16)

where P_1, P_2 are the momenta of the interacting hadrons, the sum runs over all parton types, and σ̂_ij is the partonic cross section of the incoming partons with hadron momentum fractions x_1, x_2. μ_R is the scale at which the renormalization is performed, and μ_F is an arbitrary parameter that separates the hard from the soft component. Both scales are typically chosen to be of the order of the hard scale of the process.

Partonic cross sections in leading order (LO) calculations for jet production are O(αs²), since they are based on 2 → 2 parton interactions, as shown in Figure 1.7. The dominant process is gg scattering because of the larger color charge of the gluons.

Next-to-leading-order (NLO) diagrams include contributions from gluon initial- or final-state radiation and loops in the diagrams already shown. The partonic cross sections at NLO reduce the dependence on the renormalization and factorization scales, and are calculable using programs such as JETRAD [5] and NLOJET++ [6]. Predictions at higher orders are not yet available due to the large number of diagrams involved.

Figure 1.7: Leading order diagrams for parton interactions.

1.4.2 Parton Distribution Functions

As already explained, perturbative QCD (pQCD) can predict the evolution of the PDFs [7] with respect to Q² using the DGLAP equations, but not their functional form. Therefore, the PDFs must be extracted from experimental data at a given starting scale Q₀². In particular, seven functions must be determined: one for the gluon and one for each of the light quarks and anti-quarks. Experimental data from a large variety of processes are used to constrain several aspects of the PDFs: measurements of Drell-Yan production, inclusive jet cross sections and the W-asymmetry in pp̄ collisions, and deep-inelastic ep, μp or νN scattering.

Typically, specific functional forms are postulated for the PDFs with a set of free parameters. These parameters are obtained by optimizing the agreement between experimental data and predictions using the PDFs, for example by minimizing a χ². The functional form assumed for several sets of PDFs is:

x f(x, Q_0^2) = A\, x^{\delta}\, (1 - x)^{\eta}\, F(x)    (1.17)

where A, δ and η are the free fit parameters and F(x) is a function that tends to a constant in the limits x → 0 and x → 1. This functional form is motivated by counting rules [8] and Regge theory [9], which suggest that f(x) ∝ (1 − x)^η as x → 1 and f(x) ∝ x^δ as x → 0, respectively. Both limits are approximate, and even if these theories predict the values of δ and η, they are taken as free fit parameters when computing the PDFs. This approach is used by three of the PDF sets used in the analyses presented in this Thesis: the CTEQ [10], MSTW [11] and HERA [12] PDFs. For example, in the case of the HERAPDFs, the form is:

x f(x) = A\, x^{B}\, (1 - x)^{C} \left( 1 + \epsilon \sqrt{x} + D x + E x^2 \right)    (1.18)

NNPDFs [13] follow a different approach, using neural networks as a parton parametrization. Neural networks are functional forms that can fit a large variety of functions.

1.4.3 Uncertainties

There are three main sources of uncertainties in the calculation of pQCD observables:

  • The lack of knowledge of the higher-order terms neglected in the calculation. It is estimated by varying the renormalization scale, μ_R, usually by a factor of two with respect to the default choice. The factorization scale, μ_F, is independently varied to evaluate the sensitivity to the choice of the scale where the PDF evolution is separated from the partonic cross section. The envelope of the variations that these changes introduce in the observable is taken as a systematic uncertainty.

  • Uncertainties on parameters of the theory, like αs and the heavy-quark masses, that are propagated into the observable.

  • PDFs have another uncertainty coming from the way the experimental data are used to determine them. This uncertainty is typically estimated using the Hessian method. If S₀ is the vector of PDF parameters for which the χ² is minimized, all parameter vectors S such that χ²(S) − χ²(S₀) ≤ T² are considered acceptable fits, where T is a parameter called the tolerance. The PDF parameters are expressed in terms of an orthogonal eigenvector basis, and variations along the positive and negative directions of each eigenvector (S_k⁺, S_k⁻) such that Δχ² = T² are performed. The uncertainty in the observable X is:

    \Delta X^{+} = \sqrt{ \sum_k \left[ \max\left( X(S_k^{+}) - X(S_0),\; X(S_k^{-}) - X(S_0),\; 0 \right) \right]^2 }    (1.19)
    \Delta X^{-} = \sqrt{ \sum_k \left[ \max\left( X(S_0) - X(S_k^{+}),\; X(S_0) - X(S_k^{-}),\; 0 \right) \right]^2 }    (1.20)

    where X(S) is the observable computed using the PDFs with the parameters in vector S. NNPDF uses a Monte Carlo approach to evaluate the uncertainties, in which the probability distribution in parameter space derives from a sample of MC replicas of the experimental data. Figure 1.8 shows the gluon PDF with its uncertainties obtained following different approaches.

Figure 1.8: PDF of the gluon as a function of x according to different PDF groups, at a fixed scale Q.
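A minimal sketch of the Hessian prescription of Equations 1.19 and 1.20, with hypothetical inputs: X0 is the observable evaluated with the central PDF set and x_plus[k], x_minus[k] are its values for the k-th positive and negative eigenvector variations.

    import math

    def hessian_uncertainty(X0, x_plus, x_minus):
        """Asymmetric PDF uncertainty on an observable X (Eqs. 1.19-1.20)."""
        up = sum(max(xp - X0, xm - X0, 0.0) ** 2
                 for xp, xm in zip(x_plus, x_minus))
        down = sum(max(X0 - xp, X0 - xm, 0.0) ** 2
                   for xp, xm in zip(x_plus, x_minus))
        return math.sqrt(up), math.sqrt(down)

    # Hypothetical values for three eigenvector pairs:
    print(hessian_uncertainty(1.00, [1.02, 0.99, 1.01], [0.97, 1.03, 1.00]))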

1.5 Monte Carlo simulation

Complete pQCD calculations are always performed only up to a fixed order in αs, but the enhanced soft-gluon radiation and collinear configurations at higher orders cannot be neglected. They are taken into account in the parton shower (PS) approximation, which sums the leading contributions of these topologies to all orders. Monte Carlo (MC) generator programs include the PS approximation, as well as models to reproduce non-perturbative effects, such as the hadronization of the partons into colorless hadrons and the underlying event (UE).

1.5.1 Parton Shower

The PS approximation describes successive parton emissions from the partons in the hard interaction, as illustrated in Figure 1.9. The evolution of the showering is governed by the DGLAP equations 1.14 and 1.15, from which the Sudakov form factors [14] are derived for the numerical implementation of the parton shower. These factors represent the probability that a parton does not branch between an initial scale Q₁ and a lower scale Q₂. Once a branching occurs at a scale Q₂ < Q₁, subsequent branchings are derived starting from that scale (a toy sampling of this procedure is sketched at the end of this Section). Branchings can be angle-, Q²- or pT-ordered: in the first case subsequent branchings have smaller opening angles than the previous one, whereas in a pT-ordered shower, parton emissions are produced in decreasing order of intrinsic pT.

Figure 1.9: Illustration of the parton shower from the outgoing partons of the hard interaction.

Successive branching stops at a cutoff scale, Q₀, of the order of 1 GeV, after producing a high-multiplicity partonic state. Since quarks and gluons cannot exist in isolation, MC programs contain models for the hadronization of the partons into colorless hadrons.
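To make the role of the Sudakov form factor concrete, the toy sketch below samples successive branching scales by inverse transform. A constant emission density a·dq/q is an assumption chosen so that the no-branching probability Δ(q_prev, q) = (q/q_prev)^a can be inverted analytically; a real shower uses the full αs- and splitting-function-weighted integrand, typically via a veto algorithm.

    import random

    def branching_scales(q_start, q_cut=1.0, a=0.6, rng=random.Random(42)):
        """Sample decreasing branching scales until the cutoff is reached."""
        scales, q = [], q_start
        while True:
            # Solve Delta(q_prev, q) = r for q, with r uniform in (0, 1)
            q *= rng.random() ** (1.0 / a)
            if q < q_cut:          # below the cutoff the shower terminates
                return scales
            scales.append(q)

    print(branching_scales(100.0))  # ordered, decreasing scales in GeV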

1.5.2 Hadronization

The hypothesis of local parton-hadron duality states that the momentum and quantum numbers of the hadrons follow those of the partons. This hypothesis is the general guide of all hadronization models, but it does not give details on the formation of hadrons, which is described by models with parameters that are tuned to experimental data. There are two main models of hadron production.

The string model [16] describes the behavior of qq̄ pairs using string dynamics. The color field between each pair is represented by a string with uniform energy per unit length. As the q and the q̄ move apart from each other and the energy of the color field thus increases, the string connecting the two is tightened until it breaks into a new qq̄ pair. If the invariant mass of either of these string pieces is large enough, further breaks may occur. In the string model, the string break-up process is assumed to proceed until only on-mass-shell hadrons remain. In the simplest approach to baryon production, a diquark is treated just like an ordinary antiquark: a string can break either by quark-antiquark or antidiquark-diquark pair production, leading to three-quark states. There are more sophisticated models, but the formation of baryons is still poorly understood.

The cluster model [17] is based on the pre-confinement property of perturbative QCD, exploited to form color-neutral clusters. After the perturbative parton showering, all gluons are split into light quark-antiquark or diquark-antidiquark pairs. Color-singlet clusters are formed from the quarks and anti-quarks, and the clusters thus formed are fragmented into two hadrons. If a cluster is too light to decay into two hadrons, it is taken to represent the lightest single hadron of its flavor; its mass is shifted to the appropriate value by an exchange of momenta with a neighboring cluster. If the cluster is too heavy, it decays into two clusters, which are further fragmented into hadrons.

1.5.3 Underlying Event

The UE comes from the partons that do not participate in the hard interaction. They contribute to the final state via their color-connection to the hard interaction, and via extra parton-parton interactions. Its simulation is based on the eikonal model, which describes the underlying event activity as additional uncorrelated partonic scatters. The number of interactions per event depends on the impact parameter b: a small value corresponds to a large overlap between the two colliding hadrons, and therefore a higher probability for multiple interactions. For a given b, the parton-parton cross section is computed as a function of the transverse momentum pT in the center-of-mass frame of the scattering process. Since this cross section diverges as pT → 0, a cut-off parameter pT,min of the order of a few GeV is introduced. The mean number of interactions is extracted from the ratio between the parton-parton cross section above this cut-off and the total hadronic cross section, and the number of interactions per event is assumed to be Poisson-distributed.

The UE models are tuned using experimental data, such as the jet shapes described in Chapters 4 and 5.

1.5.4 Monte Carlo Generator Programs

PYTHIA Monte Carlo

The PYTHIA [18] MC event generator includes hard processes at LO, and uses the PS model for initial- and final-state radiation. The hadronization is performed using the string model. It includes an underlying event model to describe the interactions between the proton remnants.

The PYTHIA tunes DW [19] and Perugia2010 [20] use the CTEQ5L PDFs, and both have been produced using Tevatron data. In the former the PS is Q²-ordered, whereas in the latter it is pT-ordered.

In autumn 2009, the MRST LO* PDFs [21] were used in PYTHIA for the first time in ATLAS. This required tuning the PYTHIA model parameters, resulting in the MC09 [22] tune. It was done using Tevatron data, mainly from underlying event and minimum bias analyses. The PS is pT-ordered.

The PYTHIA-AMBT1 [23] tune followed the MC09 one, and also uses the MRST LO* PDFs and a pT-ordered PS. It was derived using ATLAS data, in particular charged-particle multiplicities in pp interactions at 0.9 and 7 TeV center-of-mass energies.

Herwig

HERWIG [24] is a general-purpose MC event generator for hard processes in particle colliders. It uses an angular-ordered parton shower for initial- and final-state QCD radiation, and a cluster model to reproduce the hadronization of the partons. The Fortran version of HERWIG is interfaced with JIMMY [25] to simulate multiple parton-parton interactions.

HERWIG++ [26] is the C++ version of HERWIG, which is expected to eventually replace the Fortran one. The underlying event is modeled inside the program, which therefore does not use JIMMY.

ME + Parton Shower: Alpgen and Powheg

Alpgen [27] is an event generator of multi-parton hard processes in hadronic collisions, which performs the calculation of the exact LO matrix elements for a large set of parton-level processes. It uses the ALPHA algorithm [28] to compute the matrix elements for large parton multiplicities in the final state. The advantage of this algorithm is that its complexity increases more slowly than that of the Feynman-diagram approach as the number of final-state particles increases. Powheg [29] is a MC event generator that includes NLO matrix elements. Alpgen and Powheg contain interfaces to both PYTHIA and HERWIG for the parton showering, the hadronization and the underlying event simulation.

1.6 Jet Algorithms

Quarks and gluons from the hard scattering result in a collimated flow of particles due to the parton shower and hadronization. This collimated flow of particles is called a jet. There are several jet definitions [30], with the main purpose of reconstructing jets with kinematics that reflect those of the initial parton. These definitions can be classified into two main types of jet algorithms: cone algorithms and sequential recombination algorithms.

1.6.1 Cone algorithms

Typically, cone jet algorithms start by forming cones of radius R in (η, φ) space around a list of seeds, which can be all particles in the final state or only those above a given energy threshold. The center of the cone is then recomputed from the particles inside it, following either the snowmass or the four-momentum recombination scheme. In the four-momentum recombination, the jet momentum is the sum of the momenta of its constituents:

p^{\mu}_{jet} = \sum_{i \in jet} p^{\mu}_{i}    (1.21)

whereas in the snowmass scheme the jet is considered massless, its transverse energy is the sum of the transverse energies of its constituents, and the jet (η, φ) coordinates are the averages of the (η, φ) of the constituents weighted by their transverse energies:

E_T^{jet} = \sum_i E_{T,i}    (1.22)
\eta^{jet} = \frac{1}{E_T^{jet}} \sum_i E_{T,i}\, \eta_i    (1.23)
\phi^{jet} = \frac{1}{E_T^{jet}} \sum_i E_{T,i}\, \phi_i    (1.24)

A cone is formed from the new center and the process repeated until the particles inside the cone are no longer changed by further iterations. Usually the algorithm is allowed to form overlapping cones and then decides whether to merge or split them depending on the fraction of energy they share.
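The iteration just described can be written compactly. The following sketch (a toy, not the ATLAS implementation) runs one seeded cone with the snowmass recombination of Equations 1.22–1.24; particles are (ET, η, φ) tuples and φ wrap-around is ignored for brevity.

    def snowmass_cone(particles, seed_eta, seed_phi, R=0.7, max_iter=100):
        """Iterate a cone of radius R until its snowmass centre is stable."""
        eta_c, phi_c = seed_eta, seed_phi
        for _ in range(max_iter):
            inside = [(et, eta, phi) for et, eta, phi in particles
                      if (eta - eta_c) ** 2 + (phi - phi_c) ** 2 < R * R]
            if not inside:
                return None
            et_jet = sum(et for et, _, _ in inside)
            eta_new = sum(et * eta for et, eta, _ in inside) / et_jet
            phi_new = sum(et * phi for et, _, phi in inside) / et_jet
            if (eta_new, phi_new) == (eta_c, phi_c):   # stable cone found
                return et_jet, eta_c, phi_c
            eta_c, phi_c = eta_new, phi_new
        return et_jet, eta_c, phi_c

    print(snowmass_cone([(50.0, 0.1, 0.0), (20.0, 0.4, 0.2)], 0.1, 0.0))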

This merging and splitting step makes the cone algorithms collinear or infrared unsafe, and affects the definition of the parton-level jet cross section at all orders in pQCD. A jet algorithm is infrared safe if the addition of an extra particle with infinitesimal energy does not change the jet configuration in the final state. If the replacement of a particle by two collinear particles (whose summed momentum is equal to that of the original particle) does not change the jet configuration in the final state, the jet algorithm is collinear safe.

In order to solve this, cone-based jet algorithms have been formulated such that they find all stable cones through some exact procedure, avoiding the use of seeds. These algorithms are very time-consuming from the computational point of view, which constitutes a disadvantage in high-multiplicity events such as those at the LHC.

1.6.2 Sequential recombination jet algorithms

Sequential recombination jet algorithms cluster particles according to their relative transverse momentum, instead of their spatial separation. This is motivated by the parton shower evolution described in Section 1.5.1. For all particles in the final state, the algorithm computes the following distances:

d_{ij} = \min\!\left( k_{t,i}^{2p},\, k_{t,j}^{2p} \right) \frac{\Delta R_{ij}^2}{R^2}, \qquad \Delta R_{ij}^2 = (y_i - y_j)^2 + (\phi_i - \phi_j)^2    (1.25)
d_{iB} = k_{t,i}^{2p}    (1.26)

where k_{t,i} is the transverse momentum of particle i, ΔR_ij the distance in (y, φ) between particles i and j, R a parameter of the algorithm that approximately controls the size of the jet, and p depends on the jet algorithm: p = 1 for the kt algorithm, p = 0 for the Cambridge/Aachen algorithm, and p = −1 for the anti-kt algorithm. The beam distance d_iB is introduced in order to separate particles coming from the hard interaction from those coming from the interaction between remnants. The smallest distance is found, and if it is a d_ij, particles i and j are combined into one single object. If instead it is a d_iB, particle i is considered a jet and removed from the list. The distances are recalculated with the remaining objects, and the process is repeated until no particle is left in the list. Jets are defined as those objects with pT above a given threshold.
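A compact sketch of this procedure follows (an illustration under simplifying assumptions, not the FastJet implementation): particles are (pt, y, φ) tuples, φ wrap-around is ignored, and a pt-weighted recombination replaces the full four-momentum sum. With p = −1 it behaves as anti-kt, with p = 0 as Cambridge/Aachen and with p = 1 as kt.

    def cluster(particles, R=0.6, p=-1):
        """Generic sequential recombination (Eqs. 1.25-1.26)."""
        parts, jets = list(particles), []
        while parts:
            # Beam distances d_iB = kt^(2p), as (distance, index) pairs
            d_beam = min((pt ** (2 * p), i)
                         for i, (pt, y, phi) in enumerate(parts))
            # Pairwise distances d_ij, as (distance, i, j) triples
            d_pair = min(((min(parts[i][0] ** (2 * p), parts[j][0] ** (2 * p))
                           * ((parts[i][1] - parts[j][1]) ** 2
                              + (parts[i][2] - parts[j][2]) ** 2) / R ** 2, i, j)
                          for i in range(len(parts))
                          for j in range(i + 1, len(parts))),
                         default=(float("inf"), -1, -1))
            if d_beam[0] <= d_pair[0]:      # closest to the beam: it is a jet
                jets.append(parts.pop(d_beam[1]))
            else:                           # merge the closest pair
                _, i, j = d_pair
                (pti, yi, fi), (ptj, yj, fj) = parts[i], parts[j]
                pt = pti + ptj
                merged = (pt, (pti * yi + ptj * yj) / pt,
                          (pti * fi + ptj * fj) / pt)
                for k in sorted((i, j), reverse=True):
                    parts.pop(k)
                parts.append(merged)
        return jets

    # Two nearby particles are merged into one jet; the far one is its own jet:
    print(cluster([(50.0, 0.0, 0.0), (10.0, 0.2, 0.1), (30.0, 2.0, 1.5)]))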

These algorithms are very convenient, mainly because they are infrared and collinear safe and computationally fast. In particular, the anti-kt algorithm [31] produces jets with a conical structure in (y, φ), as illustrated in Figure 1.10, which facilitates dealing with pile-up. It is the default jet finding algorithm in the LHC experiments.

Figure 1.10: A sample parton-level event clustered with the anti-kt algorithm.

Chapter 2 The ATLAS Detector at the Large Hadron Collider

The analyses described in this Thesis are performed using proton-proton collision data produced by the Large Hadron Collider (LHC) and collected by the ATLAS detector. In this Chapter, the LHC and the ATLAS detector are described, giving more emphasis to the elements that are relevant for the analyses.

2.1 The Large Hadron Collider

The LHC [32] is a superconducting accelerator built in a circular tunnel of 27 km in circumference located at CERN. The tunnel is situated between 45 and 170 m underground, and straddles the Swiss and French borders on the outskirts of Geneva. Two counter-rotating proton beams injected into the LHC from the SPS accelerator at 450 GeV are further accelerated to 3.5 TeV while moving around the LHC ring, guided by magnets inside a continuous vacuum. During 2010, the instantaneous luminosity was increased over time, with a maximum peak of about 2 × 10³² cm⁻² s⁻¹, and the total integrated luminosity delivered by the LHC was 48 pb⁻¹, of which ATLAS recorded 45 pb⁻¹ (see Figure 2.1).

Figure 2.1: Maximum instantaneous luminosity (left) and cumulative integrated luminosity (right) versus day delivered by the LHC and recorded by ATLAS for pp collisions at 7 TeV center-of-mass energy during stable beams in 2010.

There are four main detectors placed along the accelerator line: ATLAS and CMS, that are general-purpose detectors, ALICE, dedicated to heavy-ions physics, and LHCb, dedicated to B-physics.

2.2 The ATLAS experiment

The ATLAS detector [33] is an assembly of several sub-detectors arranged in consecutive layers around the beam axis, as shown in Figure 2.2. The main sub-detectors are the Inner Detector, the Calorimeters and the Muon System, which are described in the next Sections. ATLAS is 46 m long, 25 m wide and 25 m high, and weighs 7000 t.

The ATLAS coordinate system and its nomenclature will be used repeatedly throughout this Thesis, and are thus described here. The ATLAS reference system is a Cartesian right-handed coordinate system, with the nominal collision point at the origin. The anti-clockwise beam direction defines the positive z-axis, while the positive x-axis is defined as pointing from the collision point to the center of the LHC ring and the positive y-axis points upwards. The azimuthal angle φ is measured around the beam axis, and the polar angle θ is measured with respect to the z-axis. The pseudorapidity is defined as η = −ln tan(θ/2). The rapidity is defined as y = ½ ln[(E + p_z)/(E − p_z)], where E denotes the energy and p_z is the component of the momentum along the beam direction.

The ATLAS detector was designed to optimize the search for the Higgs boson and a large variety of physics phenomena at the TeV scale proposed by models beyond the Standard Model. The main requirements that follow from these goals are:

  • Given the high LHC luminosity, detectors require fast, radiation-hard electronics and sensor elements. In addition, high detector granularity is needed to handle the particle fluxes and to reduce the influence of overlapping events.

  • Large acceptance in pseudorapidity with almost full azimuthal angle coverage.

  • Good charged-particle momentum resolution and reconstruction efficiency in the inner tracker.

  • Very good electromagnetic calorimetry for electron and photon identification and measurements, complemented by full-coverage hadronic calorimetry for accurate jet and missing transverse energy measurements.

  • Good muon identification and momentum resolution over a wide range of momenta, and the ability to determine unambiguously the charge of high-pT muons.

  • Highly efficient triggering on low transverse-momentum objects with sufficient background rejection is a prerequisite to achieve an acceptable trigger rate for most physics processes of interest.

Figure 2.2: View of the full ATLAS detector.

2.3 Inner Detector

The Inner Detector (ID) was designed to perform high-precision measurements with fine detector granularity in the very high track density events produced by the LHC. The ID, which is about 7 m long and 1150 mm in radius, is built out of three components, in increasing order of distance with respect to the beam axis: the Pixel detector, the Semiconductor Tracker (SCT) and the Transition Radiation Tracker (TRT). The precision tracking detectors (pixels and SCT) cover the region |η| < 2.5 and are segmented in R–φ and z, whereas the TRT covers the region |η| < 2.0 and is only segmented in R–φ. The ID has around 87 million readout channels: 80.4 million in the pixel detector, 6.3 million in the SCT and 351 thousand in the TRT. All three are immersed in a 2 T magnetic field generated by the central solenoid, which extends over a length of 5.3 m with a diameter of 2.5 m.

The ID is used to reconstruct tracks and production and decay vertices, and provides a position resolution of 10, 17 and 130 μm (Pixel, SCT, TRT) in the R–φ plane, as well as 115 and 580 μm (Pixel, SCT) along z. The momentum resolution as a function of the pT of the track is parametrized as:

\frac{\sigma_{p_T}}{p_T} = P_1 \oplus P_2 \cdot p_T    (2.1)

and the values of the constant term P₁ and the slope P₂ (in GeV⁻¹) were determined using cosmic rays [34]. Extrapolation of the fit result yields a momentum resolution of about 1.6% at low momenta and of about 50% at 1 TeV.
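As a worked numerical check of Equation 2.1 (taking P₁ = 1.6% and assuming P₂ ≈ 0.53 TeV⁻¹, a value chosen to be consistent with the quoted extrapolation):

\frac{\sigma_{p_T}}{p_T}\bigg|_{p_T = 1\,\mathrm{TeV}} = \sqrt{P_1^2 + (P_2 \cdot p_T)^2} = \sqrt{(0.016)^2 + (0.53)^2} \approx 0.53

so the resolution at 1 TeV is completely dominated by the P₂ term, i.e. about 50%.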

2.4 Calorimeters

The calorimeter systems of ATLAS, illustrated in Figure 2.3, surround the Inner Detector system and cover the full φ-space and |η| < 4.9, extending radially to 4.25 m. The calorimeter systems can be classified into electromagnetic calorimeters, designed for precision measurements of electrons and photons, and hadronic calorimeters, which collect the energy from hadrons. Calorimeter cells are pseudo-projective towards the interaction region in η. The granularity of the electromagnetic calorimeter is typically Δη × Δφ = 0.025 × 0.025, whereas the hadronic calorimeters have a granularity of Δη × Δφ = 0.1 × 0.1 in most regions. The energy response of the calorimeter to single particles is discussed in the next Chapter.

Figure 2.3: View of the calorimeter system.

2.4.1 Liquid Argon Calorimeter

The electromagnetic calorimeter is a lead-LAr detector with accordion-shaped kapton electrodes and lead absorber plates over its full coverage. The accordion geometry provides complete φ symmetry without azimuthal cracks. The calorimeter is divided into a barrel part (|η| < 1.475) and two end-cap components (1.375 < |η| < 3.2), each housed in its own cryostat. Over the central region (|η| < 2.5), the EM calorimeter is segmented into three layers in depth, whereas in the end-cap it is segmented into two sections in depth. Figure 2.4 shows a sketch of a module of the LAr calorimeter.

Figure 2.4: Sketch of a module of the LAr calorimeter.

2.4.2 Hadronic calorimeters

The Tile Calorimeter is placed directly outside the electromagnetic calorimeter envelope. Its barrel covers the region |η| < 1.0, and its two extended barrels the range 0.8 < |η| < 1.7. It is a sampling calorimeter using steel as the absorber and scintillating tiles as the active material, and it is segmented in depth in three layers, approximately 1.5, 4.1 and 1.8 interaction lengths (λ) thick for the barrel and 1.5, 2.6, and 3.3 λ for the extended barrel, as illustrated in Figure 2.5. Two sides of the scintillating tiles are read out by wavelength-shifting fibers into two separate photomultiplier tubes.

Figure 2.5: Barrel and extended barrel sections of the Tile Calorimeter.

The Hadronic End-cap Calorimeter (HEC) consists of two independent wheels per end-cap, located directly behind the end-cap electromagnetic calorimeter and sharing the same LAr cryostats. They cover the region 1.5 < |η| < 3.2, and each wheel is divided into two segments in depth, for a total of four layers per end-cap. The HEC is a sampling calorimeter built out of copper plates interleaved with LAr gaps, which are the active medium.

The Forward Calorimeter (FCal) is integrated into the end-cap cryostats, and it extends in |η| from 3.1 to 4.9. It consists of three modules in each end-cap: the first, made of copper, is optimized for electromagnetic measurements, while the other two, made of tungsten, measure predominantly the energy of hadronic interactions. In all three modules LAr is the sensitive medium.

2.4.3 Calorimeter Topological Clusters

Calorimeter clusters [35] are used as input to the jet finding algorithm in the studies presented in this Thesis. They are built out of neighboring calorimeter cells with significant signal over noise. Therefore, the 3-D shape and the number of cells of clusters are not fixed. The noise is computed for each cell independently, and it is defined as the expected RMS of the electronic noise for the current gain and conditions plus the contribution from pileup added in quadrature.

In order to build clusters, all cells with a signal-to-noise ratio above 4 are taken as seed cells. These cells are considered in descending order of signal-to-noise ratio, and all their neighboring cells are added to them, forming the so-called proto-clusters. Neighboring cells with a signal-to-noise ratio between 2 and 4 are taken as seed cells in the next iteration. If a cell is adjacent to more than one proto-cluster and its signal-to-noise ratio is above 2, the proto-clusters are merged, whereas if it is smaller than 2 the cell is only added to the first proto-cluster. Once there are no more cells in the seed list, a splitting algorithm based on local maxima is run over the proto-clusters in order to separate clusters that are not isolated. A schematic sketch is given below.
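The 4/2/0 scheme just described can be sketched as a flood-fill over cell significances. The toy below (not the ATLAS implementation) assumes dictionaries of cell significances |E|/σ_noise and cell adjacency; the merging of proto-clusters and the local-maxima splitting step are deliberately omitted.

    def topocluster(cells, neighbours):
        """cells: {cell_id: |E|/sigma_noise}; neighbours: {cell_id: ids}."""
        taken, clusters = set(), []
        # Seed cells (significance > 4), in descending order of significance
        for seed in sorted((c for c, s in cells.items() if s > 4.0),
                           key=cells.get, reverse=True):
            if seed in taken:
                continue
            members, frontier = set(), [seed]
            while frontier:
                cell = frontier.pop()
                if cell in taken:
                    continue
                taken.add(cell)
                members.add(cell)
                for nb in neighbours.get(cell, ()):
                    if nb in taken:
                        continue
                    if cells.get(nb, 0.0) > 2.0:  # growth cells extend the search
                        frontier.append(nb)
                    else:                          # perimeter cells are absorbed
                        taken.add(nb)
                        members.add(nb)
            clusters.append(members)
        return clusters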

Final clusters are treated as massless and their energy, at the electromagnetic (EM) scale, is the sum of the energies of the cells belonging to the cluster. The EM scale is the appropriate scale for the reconstruction of the energy deposited by electrons or photons in the calorimeter. The EM energy scale was first determined using electron test-beam measurements in the barrel and end-cap calorimeters [36] [37]. Muons from test-beams and from cosmic rays were used to validate the EM scale in the hadronic calorimeter [36] [38]. Recently, a recalibration of the electromagnetic calorimeters has been derived from Z → ee events in pp collisions. The EM scale uncertainty is 1.5% in the EM LAr barrel calorimeter, and 3% in the Tile calorimeter.

2.5 Muon System

The calorimeter is surrounded by the Muon Spectrometer. The air-core toroid system, with a long barrel and two inserted end-cap magnets, generates a large magnetic field volume with strong bending power within a light and open structure. Multiple-scattering effects are thereby minimized, and excellent muon momentum resolution is achieved with three stations of high-precision tracking chambers. The muon instrumentation includes trigger chambers with very fast time response.

2.6 Luminosity measurement

The luminosity, L, of a collider that operates at a revolution frequency f_r, with n_b bunches crossing at the interaction point, can be expressed as

L = \frac{n_b\, f_r\, n_1\, n_2}{2 \pi\, \Sigma_x \Sigma_y}    (2.2)

where n₁ and n₂ are the numbers of particles in the two colliding bunches and Σ_x and Σ_y characterize the widths of the horizontal and vertical beam profiles, which are measured using van der Meer scans. The observed event rate is recorded while scanning the two beams across each other, first in the horizontal (x) and then in the vertical (y) direction. This measurement yields two bell-shaped curves, with the maximum rate at zero separation, from which the values of Σ_x and Σ_y are extracted. The luminosity at zero separation can then be computed using Equation 2.2.
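A sketch of Equation 2.2 with hypothetical beam parameters of roughly the right order of magnitude for late-2010 LHC running; in practice Σx and Σy are taken from fits to the two scan curves.

    import math

    def luminosity(n_b, f_r, n1, n2, sigma_x, sigma_y):
        """Instantaneous luminosity (cm^-2 s^-1) for head-on collisions;
        sigma_x, sigma_y are the convolved beam widths in cm."""
        return n_b * f_r * n1 * n2 / (2.0 * math.pi * sigma_x * sigma_y)

    # Hypothetical numbers: 348 colliding bunches, LHC revolution frequency
    # 11245.5 Hz, 9e10 protons per bunch, 60-micron convolved beam widths.
    print(f"L = {luminosity(348, 11245.5, 9e10, 9e10, 0.006, 0.006):.2e}")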

ATLAS measures the luminosity [39] in inelastic interactions using different detectors and algorithms, all based on event-counting techniques, that is, on determining the fraction of bunch crossings (BCs) during which a specified detector registers an event satisfying a given selection requirement:

L = \frac{\mu\, n_b\, f_r}{\sigma_{inel}} = \frac{\mu_{vis}\, n_b\, f_r}{\sigma_{vis}}    (2.3)

where μ is the average number of inelastic interactions per BC, σ_inel is the inelastic cross section, ε is the efficiency for one inelastic collision to satisfy the event-selection criteria, and μ_vis = ε μ is the average number of visible (i.e. passing that event-selection criteria) inelastic interactions per BC. The visible cross section σ_vis = ε σ_inel is the calibration constant that relates the measurable quantity μ_vis to the luminosity L. Both ε and σ_vis depend on the pseudorapidity distribution and particle composition of the collision products, and are therefore different for each luminosity detector and algorithm.

In the limit μ_vis ≪ 1, the average number of visible inelastic interactions per BC is given by the intuitive expression

\mu_{vis} = \frac{N}{N_{BC}}    (2.4)

where N is the number of events passing the selection criteria that are observed during a given time interval, and N_BC is the number of bunch crossings in that same interval. When μ increases, the probability that two or more interactions occur in the same BC is no longer negligible, and μ_vis is no longer linearly related to the raw event count N. Instead, μ_vis must be calculated taking into account Poisson statistics, and in some cases instrumental or pile-up related effects.

σ_vis can be extracted from Equation 2.3 using the measured values of μ_vis and the absolute luminosity determined from the beam parameters during the van der Meer scans.
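As an illustration of this Poisson correction, the following sketch inverts the zero-counting relation for an inclusive event-counting algorithm (an assumed “EventOR”-style selection): if the number of interactions per crossing is Poisson-distributed with mean μ_vis, the fraction of bunch crossings with at least one registered event is 1 − exp(−μ_vis).

    import math

    def mu_vis(n_events, n_bunch_crossings):
        """Invert P(at least one event) = 1 - exp(-mu_vis)."""
        return -math.log(1.0 - n_events / n_bunch_crossings)

    # In the low-mu limit this reduces to the raw ratio N/N_BC of Eq. (2.4):
    print(mu_vis(100, 100000))   # ~0.0010005, close to 100/100000 = 0.001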