\addbibresource{paper.bib}
\addbibresource{bibtex/bib/susy.bib}
\addbibresource{bibtex/bib/ATLAS.bib}
\addbibresource{bibtex/bib/CMS.bib}
\addbibresource{bibtex/bib/ConfNotes.bib}
\addbibresource{bibtex/bib/PubNotes.bib}
\addbibresource{acknowledgements/Acknowledgements.bib}
\AtlasTitle{Search for top-squark pair production in final states with one lepton, jets, and missing transverse momentum using 36~fb$^{-1}$ of \sqrtS\ collision data with the ATLAS detector}
\AtlasRefCode{SUSY-2016-16}
\AtlasNote{SUSY-2016-16}
\PreprintIdNumber{CERN-EP-2017-246}
\AtlasJournal{JHEP}
\AtlasAbstract{The results of a search for the direct pair production of top squarks, the supersymmetric partner of the top quark, in final states with one isolated electron or muon, several energetic jets, and missing transverse momentum are reported. The analysis also targets spin-0 mediator models, where the mediator decays into a pair of dark-matter particles and is produced in association with a pair of top quarks. The search uses data from proton–proton collisions delivered by the Large Hadron Collider in 2015 and 2016 at a centre-of-mass energy of \sqrtS\ and recorded by the ATLAS detector, corresponding to an integrated luminosity of 36~fb$^{-1}$. A wide range of signal scenarios with different mass-splittings between the top squark, the lightest neutralino and possible intermediate supersymmetric particles is considered, including cases where the $W$ bosons or the top quarks produced in the decay chain are off-shell. No significant excess over the Standard Model prediction is observed. The null results are used to set exclusion limits at 95\% confidence level in several supersymmetry benchmark models. For pair-produced top squarks decaying into top quarks, top-squark masses up to 940~\GeV\ are excluded. Stringent exclusion limits are also derived for all other considered top-squark decay scenarios. For the spin-0 mediator models, upper limits are set on the visible cross-section.}
The hierarchy problem [Weinberg:1975gm, Gildener:1976ai, Weinberg:1979bn, Susskind:1978ms] has gained additional attention with the observation of a particle consistent with the Standard Model (SM) Higgs boson [HIGG-2012-27, CMS-HIG-12-028] at the Large Hadron Collider (LHC) [LHC:2008]. Supersymmetry (SUSY) [Miyazawa:1966, Ramond:1971gb, Golfand:1971iw, Neveu:1971rx, Neveu:1971iv, Gervais:1971ji, Volkov:1973ix, Wess:1973kz, Wess:1974tw], which extends the SM by introducing supersymmetric partners for every SM particle, can provide an elegant solution to the hierarchy problem. The partner particles have identical quantum numbers except for a half-unit difference in spin. The superpartners of the left- and right-handed top quarks, \tleft and \tright, mix to form the two mass eigenstates \tone and \ttwo (top squark or stop), where \tone is the lighter of the two.\footnote{Similarly, the \bone and \btwo (bottom squark or sbottom) are formed by the superpartners of the bottom quarks, \bleft and \bright.} If the supersymmetric partners of the top quarks have masses $\lesssim 1$~\TeV, the loop diagrams involving top quarks, which give the dominant divergent contribution to the Higgs-boson mass, can be largely cancelled by the corresponding top-squark loops [Dimopoulos:1981zb, Witten:1981nf, Dine:1981za, Dimopoulos:1981au, Sakai:1981gr, Kaul:1981hi, Barbieri:1987fn, deCarlos:1993yy].
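Schematically, and assuming a common ultraviolet cutoff $\Lambda$ (an illustrative simplification, not a formula taken from the references above), the cancellation works as follows:
\[
\delta m_h^2 \;\simeq\;
\underbrace{-\frac{3 y_t^2}{8\pi^2}\,\Lambda^2}_{\text{top loop}}
\;+\;
\underbrace{\frac{3 y_t^2}{8\pi^2}\,\Lambda^2}_{\text{stop loops}}
\;+\;
\mathcal{O}\!\left(m_{\tilde{t}}^2 \ln\frac{\Lambda}{m_{\tilde{t}}}\right),
\]
so the quadratically divergent pieces cancel exactly, and the residual correction grows only logarithmically with $\Lambda$ but quadratically with $m_{\tilde{t}}$, which is why stop masses near or below 1~\TeV\ keep the Higgs-boson mass natural.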
Significant mass-splitting between the \tone and \ttwo is possible due to the large top-quark Yukawa coupling. Furthermore, effects of the renormalisation group equations are strong for the third-generation squarks, usually driving their masses to values significantly lower than those of the other generations. These considerations suggest a light stop\footnote{The soft mass term of the superpartner of the left-handed bottom quark can be as light as that of the superpartner of the left-handed top quark in certain scenarios, as they are both governed mostly by a single mass parameter in SUSY models at tree level. The mass of the superpartner of the right-handed bottom quark is governed by a separate mass parameter from the stop mass parameters, and it is assumed to be larger than 3~\TeV, having no impact on the signal models considered in this paper.} [Inoue:1982pi, Ellis:1983ed] which, together with the stringent LHC limits excluding other coloured supersymmetric particles with masses below the \TeV\ level, motivates dedicated stop searches.
The conservation of baryon number and lepton number can be violated in SUSY models, resulting in a proton lifetime shorter than current experimental limits [Regis:2012sn]. This is commonly resolved by introducing a multiplicative quantum number called $R$-parity, which is $+1$ and $-1$ for all SM and SUSY particles (sparticles), respectively. A generic $R$-parity-conserving minimal supersymmetric extension of the SM (MSSM) [Fayet:1976et, Fayet:1977yc, Farrar:1978xj, Fayet:1979sa, Dimopoulos:1981zb] predicts pair production of SUSY particles and the existence of a stable lightest supersymmetric particle (LSP).
The charginos \chinoOneTwopm and neutralinos \ninoOneTwoThreeFour are the mass eigenstates formed from the linear superposition of the charged and neutral SUSY partners of the Higgs and electroweak gauge bosons (higgsinos, winos and binos). They are referred to in the following as electroweakinos. In a large variety of SUSY models, the lightest neutralino (\ninoone) is the LSP, which is also the assumption throughout this paper. The LSP provides a particle dark-matter (DM) candidate, as it is stable and interacts only weakly [Goldberg:1983nd, Ellis:1983ew].
This paper presents a search for direct \tone pair production in final states with exactly one isolated charged lepton (electron or muon,\footnote{Electrons and muons from $\tau$ decays are included.} henceforth referred to simply as ‘lepton’) from the decay of either a real or a virtual $W$ boson. In addition, the search requires several jets and a significant amount of missing transverse momentum, the magnitude of which is referred to as \met, from the two weakly interacting LSPs that escape detection. Results are also interpreted in an alternative model in which a spin-0 mediator is produced in association with top quarks and subsequently decays into a pair of DM particles.
Searches for direct \tone pair production were previously reported by the ATLAS [Aaboud:2017wqg, Aaboud:2017nfd, SUSY-2016-20, SUSY-2015-02, SUSY-2014-07] and CMS [CMS-SUS-16-008, CMS-SUS-15-005, CMS-SUS-15-004, CMS-SUS-14-006, CMS-SUS-13-024, CMS-SUS-13-014, CMS-SUS-13-011, CMS-SUS-12-028, CMS-SUS-12-005] collaborations, as well as by the CDF and DØ collaborations (for example [PhysRevLett.104.251801, D0_stopSearch]) and the LEP collaborations [lepsusy_web_stop]. The exclusion limits obtained by previous ATLAS searches for stop models with massless neutralinos reach \GeV for direct two-body decays \topLSP, \GeV for the three-body process \threeBody, and \GeV for four-body decays \fourBody, all at the 95% confidence level. Searches for spin-0 mediators decaying into a pair of DM particles and produced in association with heavy-flavour quarks have also been reported with zero or two leptons in the final state by the ATLAS collaboration [DMhfRun2], and by the CMS collaboration [CMS-EXO-16-005].
\section{Search strategy}
\subsection{Signal models}
The experimental signatures of stop pair production can vary dramatically, depending on the spectrum of low-mass SUSY particles. Figure 1 illustrates two typical stop signatures: \topLSP and \bChargino. Other decay and production modes, as well as direct sbottom pair production, are also considered. The analysis attempts to probe a broad range of possible scenarios, taking the approach of defining dedicated search regions to target specific but representative SUSY models. The phenomenology of each model is largely driven by the composition of its lightest sparticles, which are considered to be some combination of the electroweakinos. In practice, this means that the most important parameters of the SUSY models considered are the masses of the electroweakinos and of the colour-charged third-generation sparticles.
In this search, the targeted signal scenarios are either simplified models [Alwall:2008ve, Alwall:2008ag, Alves:2011wf], in which the masses of all sparticles are set to high values except for the few sparticles involved in the decay chain of interest, or models based on the phenomenological MSSM (pMSSM) [Djouadi:1998di, Berger:2008cq], in which all of the 19 pMSSM parameters are set to fixed values, except for two which are scanned. The set of models used is chosen to give broad coverage of the possible stop decay patterns and phenomenology that can be realised in the MSSM, in order to best demonstrate the sensitivity of the search for direct stop production. The simplified models are designed with the goal of covering phenomenologically distinct regions of pMSSM parameter space.
The pMSSM parameters $m_{\tilde{t}_R}$ and $m_{\tilde{q}_{L3}}$ specify the \tright and \tleft soft mass terms, with the smaller of the two controlling the \tone mass. In models where the \tone is primarily composed of \tleft, the production of light sbottoms (\bone) with a similar mass is also considered. The mass spectrum of electroweakinos and the gluino is given by the running mass parameters $M_1$, $M_2$, $M_3$, and $\mu$, which set the masses of the bino, wino, gluino, and higgsino, respectively. If the mass parameters $M_1$, $M_2$, and $\mu$ are comparably small, the physical LSP is a mixed state, composed of multiple electroweakinos. Other relevant pMSSM parameters include $\tan\beta$, which gives the ratio of vacuum expectation values of the up- and down-type Higgs bosons, influencing the preferred decays of the stop; the SUSY breaking scale ($M_S$), defined as $\sqrt{m_{\tilde{t}_1} m_{\tilde{t}_2}}$; and the top-quark trilinear coupling ($A_t$). In addition, a maximal \tleft–\tright mixing condition, $|X_t|/M_S = \sqrt{6}$ (where $X_t = A_t - \mu/\tan\beta$), is assumed to obtain a low-mass stop (\tone) while the models remain consistent with the observed Higgs boson mass of 125~\GeV\ [HIGG-2012-27, CMS-HIG-12-028].
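The role of $X_t$ can be made explicit from the standard tree-level stop mass matrix (not written out in the text above); in the gauge-eigenstate basis $(\tilde{t}_{\mathrm{L}}, \tilde{t}_{\mathrm{R}})$ it takes the form
\[
\mathcal{M}_{\tilde{t}}^2 =
\begin{pmatrix}
m_{\tilde{q}_{L3}}^2 + m_t^2 + \Delta_{\tilde{t}_L} & m_t X_t \\
m_t X_t & m_{\tilde{t}_R}^2 + m_t^2 + \Delta_{\tilde{t}_R}
\end{pmatrix},
\qquad
X_t = A_t - \mu/\tan\beta ,
\]
where $\Delta_{\tilde{t}_{L,R}}$ denote the small electroweak $D$-term contributions. Diagonalisation yields the mass eigenstates \tone and \ttwo; a large off-diagonal term $m_t X_t$ both lowers the \tone mass and maximises the stop-loop contribution to the Higgs-boson mass, which is what the maximal mixing condition exploits.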
In this search, four scenarios\footnote{For the higgsino LSP scenarios, three sets of model assumptions are considered, each giving rise to different stop branching ratios for \bChargino, \topLSP, and \topNLSP.} are considered, where each signal scenario is defined by the nature of the LSP and the next-to-lightest supersymmetric particle (NLSP): (a) pure bino LSP, (b) bino LSP with a light wino NLSP, (c) higgsino LSP, and (d) mixed bino/higgsino LSP. These are detailed below, with the corresponding sparticle mass spectra illustrated in Figure 2. Complementary searches target scenarios where the LSP is a pure wino (yielding a disappearing-track signature [SUSY-2013-01] common in anomaly-mediated models [Giudice:1998xp, Randall:1998uk] of SUSY breaking) as well as other LSP hypotheses (such as gauge-mediated models [Dine:1981gu, AlvarezGaume:1981wy, Nappi:1982hm]), which are not discussed further.
Pure bino LSP model:
A simplified model is considered for the scenario where the only light sparticles are the stop (composed mainly of \tright) and the lightest neutralino. When the stop mass is greater than the sum of the top-quark and LSP masses, the dominant decay channel is \topLSP. If this decay is kinematically disallowed, the stop can undergo a three-body decay, \threeBody, when the stop mass is above the sum of the masses of the bottom quark, the $W$ boson, and the \ninoone. Otherwise the decay proceeds via a four-body process, \fourBody, where $f$ and $f'$ are two distinct fermions, or via a flavour-changing neutral-current (FCNC) process, such as the loop-suppressed \charmDecay. Given the very different final state, the FCNC decay is not considered further in this search, and therefore a 100\% branching ratio for the four-body decay is assumed in that region. The various \tone decay modes in this scenario are illustrated in Figure 3. The region of phase space along the line of $m(\tone) - m(\ninoone) \approx m_t$ is especially challenging to target because of the similarity of the stop signature to that of the \ttbar process, and is referred to in the following as the ‘diagonal region’.
Wino NLSP model:
A pMSSM model is designed such that a wino-like chargino (\chinoonepm) and neutralino (\ninotwo) are mass-degenerate, with the bino as the LSP. This scenario is motivated by models with gauge unification at the GUT scale, such as the cMSSM or mSugra [Chamseddine:1982jx, Barbieri:1982eh, Kane:1993td], where $M_2$ is assumed to be twice as large as $M_1$, leading to the \chinoonepm and \ninotwo having masses nearly twice as large as that of the bino-like LSP.
In this scenario, additional decay modes for the stop (composed mainly of \tleft) become relevant, such as the decay into a bottom quark and the lightest chargino (\bChargino) or the decay into a top quark and the second neutralino (\topNLSP). The \chinoonepm and \ninotwo subsequently decay into the \ninoone via emission of a (potentially off-shell) $W$ boson or $Z$/Higgs ($h$) boson, respectively. The \bChargino decay is considered for a chargino mass above about 100~\GeV, since the LEP limit on the lightest chargino mass is 103.5~\GeV\ [lepsusy_web_chargino].
An additional \bChargino decay signal model (a simplified model) is designed, motivated by a scenario with nearly equal masses of the \tone and \chinoonepm. The model assumes a small mass-splitting between the \tone and \chinoonepm, and that the top squark decays via \bChargino with a branching ratio of 100\%. In this scenario, the jets originating from the bottom quarks are too low in energy (soft) to be reconstructed, and hence the signature is characterised by large \met and no jets initiated by bottom quarks (referred to as $b$-jets).
Higgsino LSP model:
‘Natural’ models of SUSY [naturalSUSY, Barbieri:1987fn, deCarlos:1993yy] suggest low-mass stops and a higgsino-like LSP. In such scenarios, the typical mass-splitting $\Delta m(\chinoonepm, \ninoone)$ varies between a few hundred \MeV\ and several tens of \GeV, depending mainly on the mass relations amongst the electroweakinos. For this analysis, a simplified model is designed for various $\Delta m(\chinoonepm, \ninoone)$ of up to 30~\GeV, satisfying the mass relation
\[
\Delta m(\chinoonepm, \ninoone) = 0.5 \times \Delta m(\ninotwo, \ninoone).
\]
The stop decays into either \bChargino, \topLSP, or \topNLSP, followed by \chinoonepm and \ninotwo decays through the emission of a highly off-shell $W$ or $Z$ boson. Hence the signature is characterised by low-momentum leptons or jets from off-shell bosons, and the analysis benefits from reconstructing low-momentum leptons (referred to as soft leptons). The stop decay branching ratios strongly depend on the \tright and \tleft composition of the stop. Stops composed mainly of \tright have a large branching ratio into \bChargino, whereas stops composed mainly of \tleft preferentially decay into \topLSP or \topNLSP. In this search, three cases are considered separately: one with a dominant \bChargino mode, one with a suppressed \bChargino mode, and one in which the stop decays democratically into the three decay modes.
Bino/higgsino mix model:
The ‘well-tempered neutralino’ [ArkaniHamed:2006mb] scenario seeks to provide a viable dark-matter candidate while simultaneously addressing the problem of naturalness by targeting an LSP that is an admixture of bino and higgsino. The mass spectrum of the electroweakinos (higgsinos and bino) is expected to be slightly compressed, with a small typical mass-splitting between the bino and higgsino states. A pMSSM signal model is designed such that only a low level of fine-tuning [finetune, SUSY-2014-08] of the pMSSM parameters is needed and the annihilation rate of neutralinos is consistent with the observed dark-matter relic density\footnote{The quantities $\Omega$ and $h$ are the density parameter and the reduced Hubble constant, respectively.} ($\Omega h^2 \approx 0.12$) [relic_density].
The final state produced by many of the models described above is consistent with a \ttbar\ + \met final state. Exploiting this similarity, signal models with a spin-0 mediator decaying into dark-matter particles and produced in association with \ttbar are also studied, assuming either a scalar ($\phi$) or a pseudoscalar ($a$) mediator [Abercrombie:150700966, DMhfRun2]. An example diagram for this process is shown in Figure 4.
\subsection{Analysis strategy}
The search presented is based on 16 dedicated analyses that target the various scenarios mentioned above. Each of these analyses corresponds to a set of event selection criteria, referred to as a signal region (SR), and is optimised to target one or more signal scenarios. Two different analysis techniques are employed in the definition of the SRs, referred to as ‘cut-and-count’ and ‘shape-fit’. The former is based on counting events in a single region of phase space and is employed in all 16 analyses. The latter is used in some SRs in addition to the ‘cut-and-count’ technique and employs SRs split into multiple bins of a specific discriminating kinematic variable, which can cover a range larger than that of the ‘cut-and-count’ SR. By utilising the different signal-to-background ratios in the various bins, the search sensitivity is enhanced in challenging scenarios where it is particularly difficult to separate signal from background.
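The gain from binning can be illustrated with a toy calculation using hypothetical yields and the common Asimov approximation for the Poisson significance; the paper itself uses a full profile-likelihood fit, so this is only a sketch of the underlying idea:

```python
import math

def z_poisson(s, b):
    """Asimov approximation for the discovery significance of s signal events
    on an expected background b (valid for b > 0)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Toy yields in four bins of a discriminating variable (hypothetical numbers).
signal = [1.0, 2.0, 4.0, 3.0]
background = [80.0, 30.0, 8.0, 2.0]

# 'Cut-and-count': merge all bins into a single region.
z_cc = z_poisson(sum(signal), sum(background))

# 'Shape-fit': exploit the different s/b per bin; combining per-bin
# significances in quadrature approximates the binned likelihood fit.
z_shape = math.sqrt(sum(z_poisson(s, b) ** 2 for s, b in zip(signal, background)))

print(f"cut-and-count Z = {z_cc:.2f}, shape-fit Z = {z_shape:.2f}")
```

The high-s/b bins dominate the combined significance, which is why the shape-fit outperforms a single inclusive selection when the signal populates only part of the spectrum.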
The main background processes after the signal selections include \ttbar, single-top $Wt$, \ttbar$+Z$, and $W$+jets production. Each of these SM processes is estimated by building dedicated control regions (CRs) enriched in the process in question, making the analysis more robust against potential mis-modelling effects in simulated events and reducing the uncertainties in the background estimates. The backgrounds are then simultaneously normalised to data using a likelihood fit for each SR with its associated CRs. The background modelling as predicted by the fits is tested in a series of validation regions (VRs).
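The CR-based normalisation can be sketched for a single background with hypothetical yields; the paper performs a simultaneous likelihood fit over all CRs and backgrounds, which the one-line scale factor below only approximates:

```python
# Hypothetical yields: data and simulated backgrounds in a ttbar control region.
data_cr = 1050.0
mc_cr = {"ttbar": 900.0, "wjets": 120.0, "other": 50.0}

# Normalisation factor for ttbar: scale the target process so the total
# prediction matches data in the CR (other processes kept at their MC values).
mu_ttbar = (data_cr - mc_cr["wjets"] - mc_cr["other"]) / mc_cr["ttbar"]

# Apply the same factor to the ttbar prediction in the signal region.
mc_sr = {"ttbar": 12.0, "wjets": 3.0, "other": 1.0}
total_sr = mu_ttbar * mc_sr["ttbar"] + mc_sr["wjets"] + mc_sr["other"]
print(f"mu_ttbar = {mu_ttbar:.3f}, SR prediction = {total_sr:.2f}")
```

Because the factor is measured in a region kinematically close to the SR, many modelling and detector uncertainties partially cancel in the extrapolation.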
\section{ATLAS detector and data collection}
The ATLAS detector [PERF-2007-01] is a multipurpose particle physics detector with nearly $4\pi$ coverage in solid angle around the collision point.\footnote{ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upwards. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta = -\ln\tan(\theta/2)$. Angular distance is measured in units of $\Delta R \equiv \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$. The transverse momentum, \pt, is defined with respect to the beam axis ($x$–$y$ plane).} It consists of an inner tracking detector (ID), surrounded by a superconducting solenoid providing a 2~T axial magnetic field, a system of calorimeters, and a muon spectrometer (MS) incorporating three large superconducting toroid magnets.
The ID provides charged-particle tracking in the range $|\eta| < 2.5$. During the LHC shutdown between Run 1 (2010–2012) and Run 2 (2015–2018), a new innermost layer of silicon pixels was added [ATLAS-TDR-19], which improves the track impact parameter resolution, vertex position resolution and $b$-tagging performance [ATL-PHYS-PUB-2016-012].
High-granularity electromagnetic and hadronic calorimeters cover the region $|\eta| < 4.9$. The central hadronic calorimeter is a sampling calorimeter with scintillator tiles as the active medium and steel absorbers. All the electromagnetic calorimeters, as well as the endcap and forward hadronic calorimeters, are sampling calorimeters with liquid argon as the active medium and lead, copper, or tungsten absorbers. The MS consists of three layers of high-precision tracking chambers with coverage up to $|\eta| = 2.7$ and dedicated chambers for triggering in the region $|\eta| < 2.4$. Events are selected by a two-level trigger system [Aaboud:2016leb]: the first level is a hardware-based system and the second is a software-based system.
This analysis is based on a dataset collected in 2015 and 2016 at a collision energy of $\sqrt{s} = 13$~\TeV. The data contain an average number of simultaneous $pp$ interactions per bunch crossing, or ‘pile-up’, of approximately 23.7 across the two years. After the application of beam, detector and data-quality requirements, the total integrated luminosity is \ourLumi\ with an associated uncertainty of 3.2\%. The uncertainty is derived, following a methodology similar to that detailed in Ref. [DAPR-2011-01], from a preliminary calibration of the luminosity scale using a pair of $x$–$y$ beam separation scans performed in August 2015 and June 2016.
The events were primarily recorded with a trigger that accepts events with \met above a given threshold. The trigger is fully efficient for events passing the minimum offline-reconstructed \met requirement deployed in the signal and control regions relying on the \met triggers. To recover acceptance for signals with moderate \met, events containing a well-identified lepton with a minimum \pt at trigger level are also accepted for several selections. Events in which the offline-reconstructed \met is below this minimum are collected using single-lepton triggers, with thresholds set to obtain a constant efficiency as a function of the lepton \pt of 90\% (80\%) for electrons (muons).
\section{Simulated event samples}
Samples of Monte Carlo (MC) simulated events are used for the description of the SM background processes and to model the signals. Details of the simulation samples used, including the matrix element (ME) event generator and parton distribution function (PDF) set, the parton shower (PS) and hadronisation model, the set of tuned parameters (tune) for the underlying event (UE) and the order of the cross-section calculation, are summarised in Table 1.
The samples produced with \MGaMC [Alwall:2014hca] and Powheg-Box [Alioli:2010xd, Re:2010bp, Frixione:2007nw, Frederix:2012dh, Alioli:2009je] used EvtGen v1.2.0 [EvtGen] for the modelling of -hadron decays. The signal samples were all processed with a fast simulation [SOFT-2010-01], whereas all background samples were processed with the full simulation of the ATLAS detector [SOFT-2010-01] based on GEANT4 [Agostinelli:2002hh]. All samples were produced with varying numbers of minimum-bias interactions overlaid on the hard-scattering event to simulate the effect of multiple interactions in the same or nearby bunch crossings. The number of interactions per bunch crossing was reweighted to match the distribution in data.
\subsection{Background samples}
The nominal \ttbar and single-top samples were generated with Powheg-Box (NLO) interfaced to Pythia6 for parton showering and hadronisation; their cross-sections were calculated at next-to-next-to-leading order (NNLO) with the resummation of soft-gluon emission at next-to-next-to-leading-logarithm (NNLL) accuracy. Additional samples were generated with \MGaMC (NLO)+Pythia8, Sherpa, and Powheg-Box+Herwig++ [Bahr:2008pv, Bellm:2015jjp] for modelling comparisons and the evaluation of systematic uncertainties.
Additional samples were generated with \MGaMC at leading order (LO) interfaced to Pythia8, in order to assess the effect of interference between the singly and doubly resonant top-quark processes as part of the theoretical modelling systematic uncertainty.
Samples for \Wjets, \Zjets and diboson production were generated with Sherpa 2.2.0 [Gleisberg:2008ta] (Sherpa 2.1.1–2.2.1 for diboson production) using Comix [Gleisberg:2008fv] and OpenLoops [Cascioli:2011va], and merged with the Sherpa parton shower [Schumann:2007mg] using the ME+PS@NLO prescription [Hoeche:2012yf]. The NNPDF30 PDF set [Ball:2014uwa] was used in conjunction with a dedicated parton-shower tune developed by the Sherpa authors. The \Wjets and \Zjets events were further normalised using NNLO cross-sections.
The \ttbar$+V$ samples were generated with \MGaMC (NLO) interfaced to Pythia8 for parton showering and hadronisation. Sherpa (NLO) samples were used to evaluate the systematic uncertainties related to the modelling of \ttbar$+V$ production.
More details of the \ttbar, \Wjets, \Zjets, diboson and \ttbar$+V$ samples can be found in Refs. [ATL-PHYS-PUB-2016-004, ATL-PHYS-PUB-2016-003, ATL-PHYS-PUB-2016-002, ATL-PHYS-PUB-2016-005].
\subsection{Signal samples}
Signal SUSY samples were generated at leading order (LO) with \MGaMC, including up to two extra partons, and interfaced to Pythia8 for parton showering and hadronisation. For the pMSSM models, the sparticle mass spectra were calculated using Softsusy 3.7.3 [Allanach:2001kg, Allanach:2013kza]. The output mass spectrum was then interfaced to HDECAY 3.4 [hdecay] and SDECAY 1.5/1.5a [sdecay] to generate decay tables for each of the sparticles. The decays of the \ninotwo and \chinoonepm via highly off-shell bosons were computed taking into account the masses of leptons and charm quarks in the low mass-splitting regime. For all models considered, the decays of SUSY particles are prompt. The details of the various simulated samples in the four LSP scenarios targeted are given below. The input parameters for the pMSSM models are summarised in Table 2.
Pure bino LSP:
For the \topLSP samples, the stop was decayed in Pythia8 using only phase-space considerations and not the full matrix element. Since the decay products of the generated samples did not preserve spin information, a polarisation reweighting was applied.\footnote{A value of the stop mixing angle is assumed such that the \tone is composed mainly of \tright\ (70\%), following Refs. [stopPol1, stopPol2].} For the \threeBody and \fourBody samples, the stop was decayed with MadSpin [Artoisenet:2012st], interfaced to Pythia8. MadSpin emulates kinematic distributions, such as the invariant mass of the decay system, to a good approximation without calculating the full ME. For the MadSpin samples, the stop was likewise assumed to be composed mainly of \tright\ (70\%), consistent with the \topLSP samples.
In the wino NLSP model, the \tone was assumed to be composed mainly of \tleft. Over a large fraction of the phase space, the stop was decayed via \bChargino or \topNLSP, followed by \chinoonepm and \ninotwo decays into the LSP. Since the coupling of \tleft to the wino states is larger than its coupling to the bino state, the stop decay into the bino state (\topLSP) is suppressed. The BRs can be significantly different in the regions of phase space where one of the decays is kinematically inaccessible. If the mass-splitting between the \tone and \ninotwo is smaller than the top-quark mass ($\Delta m(\tone, \ninotwo) < m_t$), for instance, the \topNLSP decay is suppressed, while the \bChargino decay is enhanced. Similarly, the \bChargino decay is suppressed near its kinematic boundary, while the \topLSP decay is enhanced.
The signal model was constructed by performing a two-dimensional scan of the pMSSM mass parameters controlling the \tone and LSP masses. For the models considered, gluino and \tright mass parameters of 2.2~\TeV\ and 1.2~\TeV, respectively, were assumed in order for the produced models to evade the current gluino and stop mass limits [Aaboud:2017ayj, Aaboud:2017hdf, SUSY-2015-10, SUSY-2015-06].
The \ninotwo decay modes are very sensitive to the sign of $\mu$: depending on it, the \ninotwo decays predominantly either into the lightest Higgs boson and the LSP or into a $Z$ boson and the LSP. Hence, the two scenarios were considered separately.\footnote{When the \ninotwo decay into the LSP via a $Z$/Higgs boson is kinematically suppressed, the decay is instead determined by the LSP coupling to squarks. In the low-mass scenario considered, the decay via a virtual sbottom becomes dominant due to the large sbottom–bottom–LSP coupling, resulting in a $\ninotwo \to b\bar{b}\ninoone$ decay with a branching ratio of up to 95\%.}
Both the stop and sbottom pair-production modes were included. The stop and sbottom masses are roughly the same, since they are both closely related to the same left-handed soft mass term. The sbottom decays largely via \tChargino and \bottomNLSP, with BRs similar to those of \bChargino and \topNLSP, respectively.
For the higgsino LSP case, a simplified model was built. Input parameters similar to those of the wino NLSP pMSSM model were assumed when evaluating the stop decay branching ratios, except for the electroweakino mass parameters $M_1$, $M_2$, and $\mu$. These were changed to satisfy $|\mu| \ll M_1, M_2$, yielding a higgsino-like LSP.
The stop decay BRs in scenarios where the stop is composed mainly of \tright were found to be approximately 50\% for \bChargino and 25\% each for \topLSP and \topNLSP, independent of $\tan\beta$. In scenarios where the stop is composed mainly of \tleft, the \bChargino mode was suppressed while the \topLSP and \topNLSP modes were each enhanced. A third, mixed scenario was also studied, in which the stop BR was found to be approximately 33\% for each of the three decay modes. The \chinoonepm and \ninotwo subsequently decayed into the \ninoone via a highly off-shell boson. The exact decay BRs of the \chinoonepm and \ninotwo depend on the size of the mass-splitting amongst the triplet of higgsino states. For the baseline model, fixed values of $\Delta m(\ninotwo, \ninoone)$ and $\Delta m(\chinoonepm, \ninoone)$ were assumed. An additional signal model with $\Delta m(\chinoonepm, \ninoone)$ varying between 0 and 30~\GeV\ was also considered.
In the signal generation, the stop decay BR was set to 33\% for each of the three decay modes (\bChargino, \topNLSP, \topLSP). The polarisation and stop BRs were then reweighted to match the values described above for each scenario. The \topLSP samples generated for the pure bino scenario were reused in the region of mass-splittings below 2~\GeV, with the cross-section scaled by the corresponding branching ratio, under the assumption that the decay products from \chinoonepm and \ninotwo are too soft to be reconstructed.
For the well-tempered neutralino, the signal model was built in a similar manner to the wino NLSP model. Signals were generated by performing a two-dimensional scan of pMSSM mass parameters, with the remaining gaugino and gluino mass parameters fixed.\footnote{The light sbottom and/or stop can become tachyonic when their radiative corrections are large in the low-mass regime, as the corrections can change the sign of the physical squared mass. This was an important consideration when choosing the gluino mass parameter.} The stop soft mass term was varied in the range of 700–1300~\GeV\ in the large \tleft–\tright mixing regime, in order for the lightest Higgs boson to have a mass consistent with the observed value. Since the dark-matter relic density is very sensitive to the mass-splitting between the bino and higgsino states, $M_1$ was chosen to satisfy the relic-density constraint for each value of $\mu$ considered, resulting in a compressed electroweakino spectrum.
The dark-matter relic density was computed using MicrOMEGAs 4.3.1f [micromegas1, micromegas2]. Softsusy 3.3.3 was used to evaluate the level of fine-tuning ($\Delta$) [finetune] of the pMSSM parameters. The signal models were required to have a low level of fine-tuning, $\Delta < 100$ (at most 1\% fine-tuning).
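As an illustration of the fine-tuning measure, the following is a minimal sketch of the Barbieri–Giudice definition $\Delta_p = |\partial \ln m_Z^2 / \partial \ln p|$, using a simplified large-$\tan\beta$ tree-level relation rather than the full Softsusy computation used in the paper:

```python
M_Z = 91.19  # Z boson mass in GeV

def delta_mu(mu, m_hu_sq):
    """Fine-tuning with respect to p = mu^2 in the toy tree-level relation
    m_Z^2 / 2 = -m_Hu^2 - mu^2 (large tan(beta) limit)."""
    m_z_sq = 2.0 * (-m_hu_sq - mu**2)
    # Delta = |(mu^2 / m_Z^2) * d(m_Z^2)/d(mu^2)| with d(m_Z^2)/d(mu^2) = -2
    return abs(-2.0 * mu**2 / m_z_sq)

# Fix m_Hu^2 so the toy relation reproduces the observed m_Z for mu = 600 GeV.
mu = 600.0
m_hu_sq = -(M_Z**2) / 2.0 - mu**2
print(f"Delta_mu = {delta_mu(mu, m_hu_sq):.1f}")  # grows quadratically with mu
```

In this simplified picture $\Delta_\mu = 2\mu^2/m_Z^2$, so the fine-tuning grows quadratically with the higgsino mass parameter, and values of $\mu$ well below the \TeV\ scale keep $\Delta$ at the percent level.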
For scenarios where the stop is composed mainly of \tright, only stop pair production was considered, while both stop and sbottom pair production were considered in scenarios where the stop is composed mainly of \tleft. In the latter case, the sbottom mass was found to be close to the stop mass, as they are both determined mainly by the same soft mass term. The stop and sbottom decay largely into the higgsino states, with BRs similar to those of the higgsino LSP models. The stop and sbottom decay BRs to the bino state were found to be small.
Signal cross-sections for stop/sbottom pair production were calculated at next-to-leading order in the strong coupling constant, adding the resummation of soft-gluon emission at next-to-leading-logarithm accuracy (NLO+NLL) [Beenakker:1997ut, Beenakker:2010nq, Beenakker:2011fu]. The nominal cross-section and its uncertainty were taken from an envelope of cross-section predictions using different PDF sets and different factorisation and renormalisation scales, as described in Ref. [Borschensky:2014cia].
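The envelope prescription can be sketched as follows, with hypothetical cross-section values; the actual predictions come from the NLO+NLL calculations of Ref. [Borschensky:2014cia]:

```python
# Cross-section predictions from different PDF sets and scale choices
# (illustrative values in pb, not taken from any calculation).
predictions = [10.2, 9.6, 10.9, 9.9, 10.5]

# Envelope prescription: the nominal value is the midpoint of the envelope,
# and the uncertainty is half of its full width.
nominal = (max(predictions) + min(predictions)) / 2.0
uncertainty = (max(predictions) - min(predictions)) / 2.0
print(f"sigma = {nominal:.2f} +/- {uncertainty:.2f} pb")
```

This construction guarantees that every individual prediction lies within one quoted uncertainty of the nominal value.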
Signal events for the spin-0 mediator model were generated with \MGaMC (LO) with up to one additional parton, interfaced to Pythia8. The couplings of the mediator to the DM and SM particles ($g_\chi$ and $g_q$) were assumed to be equal, with a common value $g$. The kinematics of the decay were found not to depend strongly on the value of this coupling. The cross-section was computed at NLO [dMtt_xsec1, dMtt_xsec2] and decreases significantly when the mediator is produced off-shell.
\begin{table}
\centering
\caption{Input parameters for the pMSSM signal models.}
\begin{tabular}{l|c|c|c}
\hline
Scenario & Wino NLSP & Higgsino LSP & Bino/higgsino mix \\
\hline
$\tan\beta$ & 20 & 20 or 60 & 20 \\
Scanned mass parameters & (, ) & (, /) & (, /) \\
Electroweakino masses & & & \\
Sbottom pair production & considered & -- & considered \\
\tone\ decay modes and their BR [\%] & (a) / (b) / (c) & (a) / (b) & \\
\bone\ decay modes and their BR [\%] & & -- & \\
\hline
\end{tabular}
\end{table}
\section{Event reconstruction}
Events used in the analysis must satisfy a series of beam, detector and data-quality criteria. The primary vertex, defined as the reconstructed vertex with the highest $\sum p_{\mathrm{T}}^2$ of associated tracks, must have at least two associated tracks with $\pt > 400$~\MeV.
Depending on the quality and kinematic requirements imposed, reconstructed physics objects are labelled either as baseline or signal, where the latter describes a subset of the former. Baseline objects are used when classifying overlapping physics objects and to compute the missing transverse momentum. Baseline leptons (electrons and muons) are also used to impose a veto on events with more than one lepton, which suppresses background contributions from \ttbar and $Wt$ production where both $W$ bosons decay leptonically, referred to as dileptonic \ttbar or $Wt$ events. Signal objects are used to construct the kinematic and multiplicity discriminating variables needed for the event selection.
Electron candidates are reconstructed from electromagnetic calorimeter cell clusters that are matched to ID tracks. Baseline electrons are required to have \GeV, , and to satisfy ‘VeryLoose’ likelihood identification criteria that are defined following the methodology described in Ref. [ATL-PHYS-PUB-2015-041]. Signal electrons must pass all baseline requirements and in addition satisfy the ‘LooseAndBLayer’ or ‘Tight’ likelihood identification criteria depending on the signal region selection, and are classified as ‘loose’ or ‘tight’ signal electrons, respectively. They must also have a transverse impact parameter evaluated at the point of closest approach between the track and the beam axis in the transverse plane () that satisfies , where is the uncertainty in , and the distance from this point to the primary vertex along the beam direction () must satisfy mm. Furthermore, lepton isolation, defined as the sum of the transverse energy deposited in a cone with a certain size excluding the energy of the lepton itself, is required. The isolation criteria for ‘loose’ electrons use only track-based information, while the ‘tight’ electron isolation criteria rely on both track- and calorimeter-based information with a fixed requirement on the isolation energy divided by the electron’s .
Muon candidates are reconstructed from combined tracks that are formed from ID and MS tracks, ID tracks matched to MS track segments, stand-alone MS tracks, or ID tracks matched to an energy deposit in the calorimeter compatible with a minimum-ionising particle (referred to as calo-tagged muon) [PERF-2015-10]. Baseline muons up to are used and they are required to have \GeV and to satisfy the ‘Loose’ identification criteria. Signal muons must pass all baseline requirements and in addition have impact parameters mm and , and satisfy the ‘Medium’ identification criteria. Furthermore, signal muons must be isolated according to criteria similar to those used for signal electrons, but with a fixed requirement on track-based isolation energy divided by the muon’s . No separation into ‘loose’ and ‘tight’ classes is performed for signal muons.
Dedicated scale factors for the requirements of identification, impact parameters, and isolation are derived from and data samples for electrons and muons to correct for minor mis-modelling in the MC samples [ATLAS-CONF-2016-024, PERF-2015-10]. The \pt thresholds of signal leptons are raised to 25 \GeV for electrons and muons in all signal regions except those that target higgsino LSP scenarios.
Jet candidates are built from topological clusters [PERF-2014-07, PERF-2011-03] in the calorimeters using the anti-$k_t$ algorithm [Cacciari:2008gp] with a jet radius parameter, as implemented in the FastJet package [Cacciari:2011ma]. Jets are corrected for contamination from pile-up using the jet area method [Cacciari:2007fd, Cacciari:2008gn, PERF-2014-03] and are then calibrated to account for the detector response [PERF-2012-01, PERF-2016-04]. Jets in data are further calibrated according to in situ measurements of the jet energy scale [PERF-2016-04]. Baseline jets are required to have \GeV. Signal jets must have \GeV and . Furthermore, signal jets with \GeV and are required to satisfy track-based criteria designed to reject jets originating from pile-up [PERF-2014-03]. Events containing a jet that does not pass specific jet quality requirements (“jet cleaning”) are vetoed from the analysis in order to suppress detector noise and non-collision backgrounds [DAPR-2012-01, ATLAS-CONF-2015-029].
Jets containing $b$-hadrons are identified using the MV2c10 $b$-tagging algorithm (those identified are referred to as $b$-tagged jets), which incorporates quantities such as the impact parameters of associated tracks and reconstructed secondary vertices [ATL-PHYS-PUB-2016-012, Aad:2015ydr]. The algorithm is used at a working point that provides a 77% $b$-tagging efficiency in simulated \ttbar events, corresponding to a rejection factor of about 130 for jets originating from gluons and light-flavour quarks (light jets) and of about 6 for jets induced by charm quarks. Corrections derived from data control samples are applied to account for differences between data and simulation in the efficiency and mis-tag rate of the $b$-tagging algorithm [Aad:2015ydr].
Jets and associated tracks are also used to identify hadronically decaying $\tau$ leptons using the ‘Loose’ identification criteria described in Refs. [ATL-PHYS-PUB-2015-045, PERF-2016-04], which have a 60% (50%) efficiency for reconstructing $\tau$ leptons decaying into one (three) charged pions. These $\tau$ candidates are required to have one or three associated tracks, with total electric charge opposite to that of the selected electron or muon, \GeV, and . The $\tau$ candidate \pt requirement is applied after a dedicated energy calibration [ATL-PHYS-PUB-2015-045, PERF-2016-04].
To avoid labelling the same detector signature as more than one object, an overlap removal procedure is applied. Table 3 summarises the procedure. Given a set of baseline objects, the procedure checks for overlap based on either a shared track, ghost-matching [Cacciari:2008gn], or a minimum distance\footnote{Rapidity ($y$) is used instead of pseudorapidity ($\eta$) when computing $\Delta R$ in the overlap removal procedure.} $\Delta R$ between pairs of objects. For example, if a baseline electron and a baseline jet are separated by , then the electron is retained (as stated in the ‘Precedence’ row) and the jet is discarded, unless the jet is $b$-tagged (as stated in the ‘Condition’ row), in which case the electron is assumed to originate from a heavy-flavour decay and is hence discarded while the jet is retained. If the matching requirement in Table 3 is not met, then both objects under consideration are kept. The order of the steps in the procedure is given by the columns in Table 3, which are executed from left to right. The second () and third () steps of the procedure ensure that leptons and jets have a minimum separation of . Jets overlapping with muons are not considered in the third step if they satisfy one or more of the following conditions: the jet is $b$-tagged, the jet contains more than three tracks (), or the ratio of muon \pt to jet \pt satisfies . Therefore, the fourth step () is applied only to the jets that satisfy the above criteria or that are well separated from leptons with . For the remainder of the paper, all baseline and signal objects are those that have passed the overlap removal procedure.
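The sequential logic of such a procedure can be sketched as follows. This is a minimal illustration of only two steps, not the full Table 3 procedure; the $\Delta R$ thresholds of 0.2 and 0.4, the dictionary-based object representation, and the function names are assumptions of this sketch.

```python
import math

def delta_r(a, b):
    """Rapidity-phi distance between two objects given as dicts."""
    dphi = math.pi - abs(abs(a["phi"] - b["phi"]) - math.pi)
    return math.hypot(a["y"] - b["y"], dphi)

def overlap_removal(electrons, muons, jets, dr_e_jet=0.2, dr_jet_lep=0.4):
    """Two illustrative steps of a sequential overlap-removal procedure.

    Step 1: an electron and a jet closer than dr_e_jet overlap; the electron
    takes precedence and the jet is dropped, unless the jet is b-tagged, in
    which case the electron is assumed to come from a heavy-flavour decay
    and is dropped instead.
    Step 2: leptons closer than dr_jet_lep to a surviving jet are dropped,
    enforcing a minimum lepton-jet separation.
    """
    surviving_jets = []
    for j in jets:
        close_e = [e for e in electrons if delta_r(e, j) < dr_e_jet]
        if close_e and not j["btag"]:
            continue  # jet dropped in favour of the electron
        surviving_jets.append(j)
    surviving_e = [e for e in electrons
                   if not any(j["btag"] and delta_r(e, j) < dr_e_jet for j in jets)]
    surviving_e = [e for e in surviving_e
                   if all(delta_r(e, j) >= dr_jet_lep for j in surviving_jets)]
    surviving_mu = [m for m in muons
                    if all(delta_r(m, j) >= dr_jet_lep for j in surviving_jets)]
    return surviving_e, surviving_mu, surviving_jets
```

Because the steps are executed in a fixed order, an object removed in an early step can no longer cause removals in a later one, which is the point of the column ordering in Table 3.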
The missing transverse momentum is reconstructed from the negative vector sum of the transverse momenta of baseline electrons, muons, jets, and a soft term built from high-quality tracks that are associated with the primary vertex but not with the baseline physics objects [ATL-PHYS-PUB-2015-027, ATL-PHYS-PUB-2015-023]. Photons and hadronically decaying leptons are not explicitly included but enter either as jets, electrons, or via the soft term.
6 Discriminating variables
The background processes contributing to a final state with one isolated lepton, jets and \met are primarily semileptonic \ttbar events, in which one of the $W$ bosons from the two top quarks decays leptonically, and $W$+jets events with a leptonic decay of the $W$ boson. Both backgrounds can be effectively reduced by requiring the transverse mass of the event, \mt,\footnote{The transverse mass is defined as $\mt = \sqrt{2\, p_{\mathrm{T}}^{\ell}\, \met\, (1-\cos\Delta\phi)}$, where $\Delta\phi$ is the azimuthal angle between the lepton and missing transverse momentum directions, and $p_{\mathrm{T}}^{\ell}$ is the transverse momentum of the charged lepton.} to be larger than the $W$-boson mass. In most signal regions, the dominant background after this requirement arises from dileptonic \ttbar events, in which one lepton is not identified, is outside the detector acceptance, or is a hadronically decaying $\tau$ lepton. On the other hand, the \mt selection is not applied in the signal regions targeting the higgsino LSP scenarios, hence the background there is dominated by semileptonic \ttbar events. A series of additional variables, described below, are used to discriminate between the \ttbar background and the signal processes.
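The transverse-mass requirement can be illustrated with a short sketch (the function name and the example values are illustrative):

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """m_T = sqrt(2 pT(lep) ETmiss (1 - cos(dphi))) for massless particles."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# For an on-shell, perfectly measured leptonic W decay, m_T is bounded from
# above by the W mass, so requiring m_T > m_W suppresses semileptonic ttbar
# and W+jets events while retaining signal with genuine extra MET.
```

A back-to-back lepton and \met of 40 GeV each give $\mt = 80$ GeV, at the kinematic endpoint of an on-shell $W$ decay.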
6.1 Common discriminating variables
The asymmetric \mtTwo (\amtTwo) [Barr:2009jv, Konar:2009qr, Bai:2012gs, Lester:2014yga] and \mtTwoTau are both variants of the variable \mtTwo [Lester:1999tx], a generalisation of the transverse mass applied to signatures where two particles are not directly detected. The \amtTwo variable targets dileptonic \ttbar events where one lepton is not reconstructed, while the \mtTwoTau variable targets \ttbar events where one of the two $W$ bosons decays via a hadronically decaying $\tau$ lepton. In addition, the \HTmissSig variable is used in some signal regions to reject background processes without invisible particles in the final state. It is defined as:
\[
\HTmissSig = \frac{\left|\vec{H}_{\mathrm{T}}^{\mathrm{miss}}\right| - M}{\sigma_{\left|\vec{H}_{\mathrm{T}}^{\mathrm{miss}}\right|}},
\]
where $\vec{H}_{\mathrm{T}}^{\mathrm{miss}}$ is the negative vectorial sum of the momenta of the signal jets and the signal lepton. The denominator $\sigma_{|\vec{H}_{\mathrm{T}}^{\mathrm{miss}}|}$ is computed from the per-event jet energy uncertainties, while the lepton is assumed to be well measured. The offset parameter $M$, which is a characteristic scale of the background processes, is fixed at \GeV in this analysis. These variables are detailed in Ref. [SUSY-2013-15]. Figure 5 shows distributions of the and variables.
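The minimisation underlying the \mtTwo family of variables can be illustrated with a brute-force sketch: over all ways of splitting the missing transverse momentum between the two invisible particles, take the splitting that minimises the larger of the two transverse masses. The coarse grid scan, the massless approximation, and the function names are assumptions of this sketch; the analysis uses dedicated minimisers.

```python
import math

def mt_massless(pt, phi, qx, qy):
    """Transverse mass of a massless visible (pt, phi) and invisible (qx, qy)."""
    q = math.hypot(qx, qy)
    arg = 2.0 * (pt * q - pt * math.cos(phi) * qx - pt * math.sin(phi) * qy)
    return math.sqrt(max(0.0, arg))

def mt2(vis1, vis2, mex, mey, n=60, scale=200.0):
    """Brute-force mT2: minimise, over splittings q1 + q2 = MET of the
    invisible momentum, the larger of the two transverse masses.
    vis = (pt, phi); q1 is scanned on a grid over [-scale, scale]^2."""
    best = float("inf")
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            qx, qy = scale * i / n, scale * j / n
            m = max(mt_massless(vis1[0], vis1[1], qx, qy),
                    mt_massless(vis2[0], vis2[1], mex - qx, mey - qy))
            if m < best:
                best = m
    return best
```

For dileptonic \ttbar with both neutrinos treated as the invisible particles, the resulting distribution has an endpoint near the $W$ mass, which is what the \amtTwo and \mtTwoTau selections exploit.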
Reconstructing the hadronic top-quark decay (top-tagging) can provide additional discrimination against dileptonic \ttbar events, which do not contain a hadronically decaying top quark. In events where the top quark is produced with moderate \pt, a $\chi^2$ technique is used to reconstruct candidate hadronic top-quark decays. For every selected event with at least four jets, of which at least one is $b$-tagged, the \mTopChi variable is defined as the invariant mass of the three jets in the event most compatible with the hadronic decay products of a top quark, where the three jets are selected by a $\chi^2$ minimisation using the jet momenta and energy resolutions.
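A simplified version of such a $\chi^2$ selection might look as follows. The fixed mass resolutions and the trial of every trio jet as the $b$-jet are illustrative assumptions; the analysis builds its $\chi^2$ from per-jet momentum and energy resolutions.

```python
import itertools, math

M_TOP, M_W = 172.5, 80.4  # GeV

def invariant_mass(*jets):
    """Invariant mass of a set of (px, py, pz, E) four-vectors."""
    px, py, pz, e = (sum(c) for c in zip(*jets))
    return math.sqrt(max(0.0, e*e - px*px - py*py - pz*pz))

def best_hadronic_top(jets, sigma_top=25.0, sigma_w=15.0):
    """Return the three-jet invariant mass most compatible with a hadronic
    top-quark decay t -> (W -> qq) b, and the corresponding chi^2."""
    best_trio, best_chi2 = None, float("inf")
    for trio in itertools.combinations(jets, 3):
        for b_idx in range(3):  # try each jet of the trio as the b-jet
            w_pair = [trio[k] for k in range(3) if k != b_idx]
            chi2 = ((invariant_mass(*trio) - M_TOP) / sigma_top) ** 2 \
                 + ((invariant_mass(*w_pair) - M_W) / sigma_w) ** 2
            if chi2 < best_chi2:
                best_trio, best_chi2 = trio, chi2
    return invariant_mass(*best_trio), best_chi2
```

The returned three-jet mass plays the role of \mTopChi: for genuine hadronic top decays it peaks at the top-quark mass, while for dileptonic \ttbar it is broadly distributed.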
After reconstructing the hadronic top-quark decay through the $\chi^2$ minimisation, the remaining $b$-tagged jet\footnote{If the event has exactly one $b$-tagged jet, the highest-\pt jet is used instead of the second-highest-\pt $b$-tagged jet.} is paired with the lepton to reconstruct the semileptonically decaying top-quark candidate (leptonic top quark). Based on these objects, the azimuthal separation between the \pt vectors of the hadronic and leptonic top-quark candidates, and between the missing transverse momentum vector and the \pt vector of the hadronic top-quark candidate, , are defined.
An alternative top-tagging method is used to target events where the top quark is produced with a significant boost. The top-quark candidates are reconstructed by considering all small-radius jets in the event and clustering them into large-radius jets using the anti-$k_t$ algorithm with a radius parameter . The radius of each jet is then iteratively reduced to an optimal radius, , that matches its \pt. If a candidate loses a large fraction of its \pt in the shrinking process, it is discarded. In events where two or more top-quark candidates are found, the one with mass closest to the top-quark mass is taken. The same algorithm is also used to define boosted hadronic $W$-boson candidates, where only non-$b$-tagged jets are considered, and the mass of the $W$ boson is used to define the optimal radius. The masses of the reclustered top-quark and $W$-boson candidates are referred to as \mTopRecluster and \mWRecluster, respectively.
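A toy version of reclustering with an iteratively optimised radius can be sketched as follows. The minimum radius, the \pt-loss threshold, the $r_{\mathrm{opt}} \approx 2m/\pt$ heuristic, and all function names are assumptions of this sketch, not the analysis configuration.

```python
import math

def four_vec(pt, y, phi):
    """Massless four-vector (px, py, pz, E) from (pt, rapidity, phi)."""
    return (pt*math.cos(phi), pt*math.sin(phi), pt*math.sinh(y), pt*math.cosh(y))

def combined(jets):
    """Mass, pT, rapidity and phi of the vector sum of (pt, y, phi) jets."""
    px, py, pz, e = (sum(c) for c in zip(*(four_vec(*j) for j in jets)))
    m = math.sqrt(max(0.0, e*e - px*px - py*py - pz*pz))
    y = 0.5 * math.log((e + pz) / (e - pz)) if e > abs(pz) else 0.0
    return m, math.hypot(px, py), y, math.atan2(py, px)

def shrink_recluster(small_jets, r_min=0.4, pt_loss_max=0.5, n_iter=5):
    """Toy reclustering: seed a candidate from all small-R jets, then
    iteratively shrink the radius towards r_opt ~ 2 m / pT and drop
    constituents outside it; discard the candidate if it loses more than
    a fraction pt_loss_max of its original pT while shrinking."""
    cand = list(small_jets)
    m, pt0, y, phi = combined(cand)
    pt = pt0
    for _ in range(n_iter):
        r_opt = max(r_min, 2.0 * m / pt)
        kept = [j for j in cand
                if math.hypot(j[1] - y,
                              math.pi - abs(abs(j[2] - phi) - math.pi)) < r_opt]
        if not kept:
            return None
        cand = kept
        m, pt, y, phi = combined(cand)
    if pt < (1.0 - pt_loss_max) * pt0:
        return None  # lost too much pT while shrinking
    return m, pt
```

The shrinking step is what makes the candidate radius track the boost: a highly boosted top has collimated decay products, so the optimal radius is small and uncorrelated pile-up or ISR jets are shed from the candidate.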
The \Ptmiss in semileptonic \ttbar events is expected to be closely aligned with the direction of the leptonic top quark. After boosting the leptonic top-quark candidate and the \Ptmiss into the \ttbar rest frame, the magnitude of the perpendicular component of the \Ptmiss with respect to the leptonic top quark is computed. This \perpmet is expected to be small for the background, as the dominant contribution to the total \met is due to the neutrino emitted in the leptonic top-quark decay.
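The projection itself reduces to simple vector algebra; a sketch, assuming both vectors are already boosted to the \ttbar rest frame (the function name is illustrative):

```python
import math

def perp_met(met_x, met_y, top_x, top_y):
    """Magnitude of the component of the missing transverse momentum
    perpendicular to the leptonic top-quark direction; both vectors are
    assumed to be given already in the ttbar rest frame."""
    norm = math.hypot(top_x, top_y)
    # subtract the parallel projection, keep the magnitude of the remainder
    parallel = (met_x * top_x + met_y * top_y) / norm
    return math.sqrt(max(0.0, met_x**2 + met_y**2 - parallel**2))
```

For semileptonic \ttbar the \met is dominated by the neutrino from the leptonic top decay and is therefore nearly parallel to the leptonic top, giving a small perpendicular component; signal events with heavy invisible particles populate larger values.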
6.2 Discriminating variables for boosted decision trees
In the diagonal region, where $m(\tone) \approx m_t + m(\ninoone)$, the momentum transfer from the \tone to the \ninoone is small, and the stop signal is kinematically very similar to the \ttbar process. In order to achieve good separation between \ttbar and signal, a boosted decision tree (BDT) implemented in the TMVA framework [Hocker:2007ht] is used. Additional discriminating variables are developed for use as inputs to the BDT, or as part of the preselection in the BDT analyses.
Some of the selections targeting the diagonal region in the pure bino LSP scenarios rely on the presence of high-\pt initial-state radiation (ISR) jets, which serve to boost the di-stop system. A powerful technique to discriminate these signal models from the \ttbar background is to attempt to reconstruct the ratio of the transverse momenta of the di-neutralino and di-stop systems. This ratio can be directly related to the ratio of the masses of the \ninoone and the \tone [An:2015uwa, Macaluso:2015wja].
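In the collinear limit this relation can be sketched as follows (the notation $R_{\mathrm{ISR}}$ for the momentum ratio is assumed here):
\[
R_{\mathrm{ISR}} \equiv \frac{p_{\mathrm{T}}^{\,\tilde{\chi}\tilde{\chi}}}{p_{\mathrm{T}}^{\,\mathrm{ISR}}} \simeq \frac{m(\ninoone)}{m(\tone)},
\]
since the di-stop system recoils against the ISR system with transverse momentum $p_{\mathrm{T}}^{\,\mathrm{ISR}}$, and each \ninoone, being produced nearly at rest in the frame of its parent \tone, inherits a fraction $m(\ninoone)/m(\tone)$ of the parent momentum.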
The observed \met would also include a contribution from the neutrino produced in the leptonic $W$-boson decay, in addition to that due to the LSPs. A light \ninoone and a \tone mass close to the mass of the top quark would result in the neutralinos having low momenta, making the reconstruction of the neutrino momentum and its subtraction from the \Ptmiss vital. In the signal region targeting this scenario, a modified $\chi^2$ minimisation using jet momenta only is applied to define the hadronic top-quark candidate. One or two light jets and one $b$-tagged jet are selected in such a way that they are most compatible with originating from hadronic $W$-boson and top-quark decays. The leading-\pt light jet is excluded, as it is assumed to originate from ISR.
Out of the two jets with the highest probabilities of being a $b$-jet according to the $b$-tagging algorithm, the one not assigned to the hadronic top-quark candidate is assigned to the leptonic top-quark candidate, together with the lepton. For the determination of the neutrino momentum, two hypotheses are considered: that of a \ttbar event and that of a signal event. For the \ttbar hypothesis, the entire \Ptmiss is attributed to the neutrino. Under the signal hypothesis, collinearity of each \tone with both of its decay products is assumed. This results in the transverse-momentum vector of the neutrino from the leptonic $W$-boson decay being calculable by subtracting the momenta of the LSPs from \Ptmiss, assuming a specific mass ratio $x = m(\ninoone)/m(\tone)$:
\[
\vec{p}_{\mathrm{T}}^{\;\nu}(x) = (1-x)\,\Ptmiss - x \left( \vec{p}_{\mathrm{T}}^{\;t_{\mathrm{had}}} + \vec{p}_{\mathrm{T}}^{\;b_{\ell}} + \vec{p}_{\mathrm{T}}^{\;\ell} \right),
\]
where $\vec{p}_{\mathrm{T}}^{\;\nu}(x)$ represents the neutrino four-vector for a given value of $x$, $b_{\ell}$ is the $b$-jet candidate assigned to the semileptonic top-quark candidate and $\ell$ is the charged lepton. The resulting neutrino momentum is then used to calculate further variables under the signal hypothesis, such as the leptonically decaying $W$ boson's transverse mass or the mass of the top-quark candidate including the leptonic $W$-boson decay. The lepton pseudorapidity is used as a proxy for the neutrino pseudorapidity in the calculation. Further variables are the difference in \mt between the calculation under the hypothesis of a \ttbar event and under the signal hypothesis, where \mt is calculated using the lepton and the neutrino, and the \pt of the reconstructed \ttbar system under the SM hypothesis. The mass ratio $x = 0.135$ is used throughout the paper, calculated from \GeV and \GeV. This signal point was chosen since it is close to the exclusion limit from previous analyses.
Larger stop-mass values in compressed bino LSP scenarios boost the \ninoone such that neglecting the neutrino momentum in the determination of is a good approximation. A recursive jigsaw reconstruction (RJR) technique [Jackson:2016mfb] is used to divide each event into an ISR hemisphere and a sparticle (S) hemisphere, where the latter contains both the invisible (I) and visible (V) decay products of the stops. Objects are grouped together according to their proximity in the lab frame’s transverse plane by maximising the of the S and ISR hemispheres over all choices of object assignment. In events with high- ISR jets, the axis of maximum back-to-back , also known as the thrust axis, should approximate the direction of the ISR and the di-stop system’s back-to-back recoil.
The RJR variables used in the corresponding signal regions are the transverse mass of the S system, , the ratio of the momenta of the I and ISR systems, (an approximation of ), the azimuthal separation between the momenta of the ISR and I systems, , and the number of jets assigned to the V system, .
7 Signal selections
SR selections are optimised using simulated MC event samples. The metric of the optimisation is the discovery sensitivity for the various decay modes and for different regions of SUSY parameter space and masses in the spin-0 mediator models. A set of benchmark signal models, selected to cover the various stop and spin-0 mediator scenarios, is used for the optimisation. The optimisations of signal-region selections are performed using an iterative algorithm and considering all studied discriminating variables, accounting for statistical and systematic uncertainties.
All regions are required to have exactly one signal lepton (except for the control regions, where three signal leptons are required), no additional baseline leptons, and at least four (or in some regions two or three) signal jets. In most cases, at least one -tagged jet is also required. A set of preselection criteria (high-\met, low-\met, and soft-lepton) is defined for monitoring the MC modelling of the kinematic variables. The preselection criteria are also used as the starting point for the SR optimisation.
In the SRs relying only on the \met trigger, all events are required to have \GeV to ensure that the trigger is fully efficient. In SRs that use a combination of \met and lepton triggers, this requirement is relaxed to \GeV. In order to reject multijet events, requirements are imposed on the transverse mass (\mt) and on the azimuthal angles between the leading and sub-leading jets (in \pt) and \met () in most of the SRs. For events with hadronic $\tau$ candidates, the requirement \GeV is applied in most SRs.
The exact preselection criteria can be found in Table 4. The preselections do not include requirements on the and variables, but these are often used to define SRs. Figure 8 shows various relevant kinematic distributions at preselection level. The backgrounds are normalised with the theoretical cross-sections, except for the \met distribution where the \ttbar events are scaled with normalisation factors obtained from a simultaneous likelihood fit of the CRs, described in Section 10.
\begin{tabular}{lccc}
\hline
Trigger & \met triggers only & \met and lepton triggers & \met triggers only \\
Data quality & \multicolumn{3}{c}{jet cleaning, primary vertex} \\
Second-lepton veto & \multicolumn{3}{c}{no additional baseline leptons} \\
Number of leptons, tightness & ‘loose’ lepton & ‘tight’ lepton & ‘tight’ lepton \\
Number of (jets, $b$-tags) & (, ) & (, ) & (, ) \\
\hline
\end{tabular}
Table 5 summarises all SRs with a brief description of the targeted signal scenarios. For the pure bino LSP scenario, seven SRs are considered in total. Five SRs target the \topLSP decay. The corresponding SR labels begin with tN, an acronym for ‘top neutralino’. Additional text in the label describes the stop mass region: for example, tN_diag targets the diagonal region, where $m(\tone) \approx m_t + m(\ninoone)$. The third part of the labels, low, med, and high, denotes the targeted stop-mass range relative to other regions of the same type (for example, \tNdiaglow targets a stop mass of 190 GeV, while \tNdiaghigh is optimised for 450 GeV). Furthermore, two additional SRs, labelled bWN and bffN, are dedicated to the three-body (\threeBody) and four-body (\fourBody) decay searches, respectively.
Six SRs target various \bChargino scenarios, and the SR labels follow the same logic: the first two characters, bC, stand for ‘bottom chargino’. The subsequent labels, 2x, bv, or soft, denote the targeted electroweakino spectrum. For the wino NLSP scenario, three SRs are designed, with the label bC2x denoting the mass relation in the signal model. The label bCbv is used for the SR with no $b$-tagged jets ($b$-veto). For the higgsino LSP scenario, three SRs are labelled bCsoft because their selections explicitly target soft-lepton signatures.
Finally, three SRs labelled DM target the spin-0 mediator scenario, with the subsequent labels low and low_loose for low mediator masses and high for high mediator masses.
With the exception of the tN and bCsoft regions, the above SRs are not designed to be mutually exclusive. A dedicated combined fit is performed using \tNmed and (or ) in the higgsino LSP and well-tempered neutralino scenarios in order to improve exclusion sensitivity. The SRs with the requirement of lepton \GeV ( \GeV) are referred to as hard-lepton SRs (soft-lepton SRs) in the following sections.
\begin{tabular}{lllll}
\hline
SR & Signal scenario & Benchmark [\GeV] & Exclusion technique & Table \\
\hline
\tNmed & Pure bino LSP (\topLSP) & $m(\tone,\ninoone)=(600,300)$ & shape-fit (\met) & 6 \\
\tNhigh & Pure bino LSP (\topLSP) & $m(\tone,\ninoone)=(1000,1)$ & cut-and-count & 6 \\
\tNdiaglow & Pure bino LSP (\topLSP) & $m(\tone,\ninoone)=(190,17)$ & BDT cut-and-count & 7 \\
\tNdiagmed & Pure bino LSP (\topLSP) & $m(\tone,\ninoone)=(250,62)$ & BDT shape-fit & 7 \\
\tNdiaghigh & Pure bino LSP (\topLSP) & $m(\tone,\ninoone)=(450,277)$ & BDT shape-fit & 7 \\
\bWN & Pure bino LSP (\threeBody) & $m(\tone,\ninoone)=(350,230)$ & shape-fit (\amtTwo) & \ref{tab:SRs_other} \\
\bffN & Pure bino LSP (\fourBody) & $m(\tone,\ninoone)=(400,350)$ & shape-fit (\lepPtoverMET) & \ref{tab:SRs_other} \\
\bCmed & Wino NLSP (\bChargino, \topNLSP) & $m(\tone,\chinoonepm,\ninoone)=(750,300,150)$ & cut-and-count & 8 \\
\bCdiag & Wino NLSP (\bChargino, \topNLSP) & $m(\tone,\chinoonepm,\ninoone)=(650,500,250)$ & cut-and-count & 8 \\
\bCbv & Wino NLSP (\bChargino, \topNLSP) & $m(\tone,\chinoonepm,\ninoone)=(700,690,1)$ & cut-and-count & 8 \\
\bCsoftdiag & Higgsino LSP (\topLSP, \topNLSP, \bChargino) & $m(\tone,\chinoonepm,\ninoone)=(400,355,350)$ & shape-fit (\lepPtoverMET) & 9 \\
\bCsoftmed & Higgsino LSP (\topLSP, \topNLSP, \bChargino) & $m(\tone,\chinoonepm,\ninoone)=(600,205,200)$ & shape-fit (\lepPtoverMET) & 9 \\
\bCsofthigh & Higgsino LSP (\topLSP, \topNLSP, \bChargino) & $m(\tone,\chinoonepm,\ninoone)=(800,155,150)$ & shape-fit (\lepPtoverMET) & 9 \\
\hline
\end{tabular}
7.1 Pure bino LSP scenario
The signature of stop pair production with subsequent \tone decays is determined by the masses of the two sparticles, \tone and \ninoone. It often leads to a final state similar to that of \ttbar production, except for the additional \met due to the two additional neutralinos in the event. A set of event selections is defined targeting various signals.
Two signal regions, \tNmed and \tNhigh, are designed to target the majority of signal models with , and are optimised for medium and high \tone mass, respectively. For the compressed region with , three BDT selections (\tNdiaglow, \tNdiagmed, and \tNdiaghigh) target different masses. For the \threeBody region, a signal selection (\bWN) is defined by utilising the distinctive shape of the invariant mass of the system. For the \fourBody region, the signal region (\bffN) is defined by making use of the soft-lepton selection designed for the higgsino LSP scenarios. The event selection for each signal region is detailed in the following subsections.
Table 6 details the event selections for the \tNmed and \tNhigh SRs. In addition to the high-\met preselection described in Table 4, at least one reconstructed hadronic top-quark candidate based on the recursive reclustered-jet algorithm is required in both SRs. Stringent requirements are also imposed on , and . Furthermore, a requirement is placed on \amtTwo to reduce the dileptonic \ttbar background. The main background processes after all selection requirements are , dileptonic \ttbar and $W$+heavy-flavour processes.
For the \tNmed SR, a shape-fit technique is employed, with the SR subdivided into bins of \met, which allows the model-dependent exclusion fits to be more sensitive than a cut-and-count analysis.
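The gain from binning can be illustrated with approximate Asimov significances. Combining bins in quadrature is a simplification of the actual profile-likelihood fit, and the yields below are invented for illustration.

```python
import math

def asimov_z(s, b):
    """Approximate expected discovery significance for signal s over background b."""
    if s <= 0.0 or b <= 0.0:
        return 0.0
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def combined_z(sig_bins, bkg_bins):
    """Combine independent bins in quadrature, a stand-in for a binned fit."""
    return math.sqrt(sum(asimov_z(s, b) ** 2 for s, b in zip(sig_bins, bkg_bins)))
```

For example, three \met bins with signal yields [10, 8, 6] over backgrounds [100, 30, 5] give a larger combined significance than the single-bin counting experiment with the summed yields (24 over 135), because the signal-enriched tail is not diluted by the background-dominated bins.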
\begin{tabular}{lcc}
\hline
 & \tNmed & \tNhigh \\
\hline
Number of (jets, $b$-tags) & (, ) & (, ) \\
\mtTwoTau-based $\tau$-veto [\GeV] & & \\
Exclusion technique & shape-fit in \met & cut-and-count \\
\hline
\end{tabular}
7.1.2 Compressed decay
The three BDT selections (\tNdiaglow, \tNdiagmed, and \tNdiaghigh) are summarised in Table 7 and detailed in the following.
\begin{tabular}{lccc}
\hline
 & \tNdiaglow & \tNdiagmed & \tNdiaghigh \\
\hline
Number of (jets, $b$-tags) & (, ) & (, ) & (, ) \\
\mtTwoTau-based $\tau$-veto [\GeV] & – & – & \\
Exclusion technique & cut-and-count & shape-fit in BDT score & shape-fit in BDT score \\
\hline
\end{tabular}
For stop masses close to the top-quark mass, a BDT is trained for the \tNdiaglow signal region. The preselection is based on the low-\met selection in Table 4.
The variables input to the BDT are and , the difference in between the SM and signal hypothesis, the two top-quark candidate masses and under the signal hypothesis, and the azimuthal angles between the lepton and the system, as well as between the lepton and .
The BDT output, from here on referred to as BDT_low, is used to define a single-bin cut-and-count signal region, with the optimal requirement on BDT_low determined by maximising the expected discovery significance. To avoid a significant extrapolation between the control and signal regions, an additional selection of GeV and is applied for all selected regions in the \tNdiaglow analysis.
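The threshold choice can be sketched as a simple scan. The significance formula and the flat event weights are simplifications; in the analysis the optimisation also accounts for statistical and systematic uncertainties.

```python
import math

def asimov_z(s, b):
    """Approximate expected discovery significance."""
    if s <= 0.0 or b <= 0.0:
        return 0.0
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def best_bdt_cut(sig_scores, bkg_scores, sig_w=1.0, bkg_w=1.0):
    """Scan BDT-score thresholds in [-1, 1) and return the cut maximising the
    expected discovery significance; sig_w/bkg_w normalise the simulated
    samples to the expected yields."""
    best_cut, best_z = None, -1.0
    for i in range(-100, 100):
        cut = i / 100.0
        s = sig_w * sum(1 for x in sig_scores if x > cut)
        b = bkg_w * sum(1 for x in bkg_scores if x > cut)
        z = asimov_z(s, b)
        if z > best_z:
            best_cut, best_z = cut, z
    return best_cut, best_z
```

With well-separated score distributions the scan returns a cut between the background and signal populations; in practice the result is validated against the shape-fit used for exclusion.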
Stop masses from about 200 to 400 \GeV in the compressed scenario are targeted by a BDT using the low-\met preselection given in Table 4. The input variables of the BDT are and \HTmissSig, the angular variables , and , the mass variables and \mTopChi, as well as the number of jets and the third- and fourth-jet \pt.
The BDT output score, referred to in the following as BDT_med, is used to define a signal region called \tNdiagmed, optimised on the expected significance for a stop mass of 250 \GeV. The known signal shape is exploited for the exclusion of signal models, using five bins in the BDT score, including BDT bins below the SR selection.
For compressed bino LSP scenarios with high \tone mass, a BDT is trained using the following variables: , the angular variables , , and , the masses , and \mTopChi, as well as the number of jets in the di-stop decay system and the third- and fourth-jet \pt, derived using the RJR technique described in Section 6.2. In addition to the high-\met preselection, a tightened selection of \GeV is imposed to control the multijet background. An additional selection of is applied to further reduce the background while retaining high efficiency for the considered signal events.
The resulting BDT output score, hereafter called BDT_high, is used to define the \tNdiaghigh signal region. In addition, three BDT bins are employed in a shape-fit to improve the exclusion sensitivity.
7.1.3 \threeBody and \fourBody decays
When the mass difference between the \tone and the \ninoone is smaller than the top-quark mass but greater than the sum of the $W$-boson and bottom-quark masses, the
7.2 Wino NLSP scenario
If the wino mass parameter is small enough, the stop may decay directly into \chinoonepm and \ninotwo (in addition to the \ninoone, as the bino is still assumed to be the LSP). In this case, the decays \bChargino and \topNLSP become relevant, leading to a more complex phenomenology than that probed in the pure bino LSP scenario. The SRs targeting this scenario are referred to as bC2x.
Two SRs target the \bChargino decay: the \bCmed and \bCdiag SRs. The kinematics of the decay products are governed by the different mass-splittings, with high-\pt $b$-jets produced from a large $\Delta m(\tone, \chinoonepm)$ and high-\pt $W$ bosons from a large $\Delta m(\chinoonepm, \ninoone)$. In addition to the high-\met preselection, two $b$-tagged jets and a hadronic $W$-boson candidate with a mass satisfying \GeV are required. Tight requirements on \mt and \amtTwo are placed to reduce the \ttbar background. The main backgrounds after the full signal selection are the , dileptonic \ttbar, and single-top processes.
An additional SR, \bCbv, is designed for the simplified model scenario with $m(\tone) \approx m(\chinoonepm)$, leading to a signature where the $b$-jets are too soft to be reconstructed.
The event selections for \bCdiag, \bCmed and \bCbv are summarised in Table 8.
\begin{tabular}{lccc}
\hline
 & \bCdiag & \bCmed & \bCbv \\
\hline
Number of (jets, $b$-tags) & (, ) & (, ) & (, ) \\
$b$-tagged jet [\GeV] & & & – \\
\mtTwoTau-based $\tau$-veto [\GeV] & & & – \\
\hline
\end{tabular}
7.3 Higgsino LSP scenario
The SRs optimised for the pure bino LSP scenarios, such as , have sensitivity to the higgsino model in events where a lepton is produced by a top quark from the stop decay. However, three additional SRs, \bCsoftdiag, \bCsoftmed, and \bCsofthigh, are designed to target the case where the lepton is soft, originating instead from a decay via a highly off-shell $W$ boson ($W^{*}$). This is particularly important in scenarios where the is large. These soft-lepton SRs are defined to be orthogonal to the \tNmed SR so that the two can be statistically combined to profit from covering both decay chains.
The \bCsoftdiag SR targets a region where the mass difference between the stop and the higgsinos is less than the mass of the top quark, so the stop must decay via the \bChargino mode. Since none of the decay products receives a large momentum transfer, a high-\pt ISR jet is required, boosting the di-stop system in order to achieve better separation between signal and background. As a result, the signature is characterised by a high-\pt jet, large \met, and a soft lepton. The main backgrounds after all selection requirements are the semileptonic \ttbar and $W$+jets processes. The \bCsoftdiag SR with a relaxed \mt requirement is found to be sensitive to the \fourBody signature and is described further in Section 7.1.3.
The second SR, \bCsoftmed, targets generic higgsino models where each of the decays \bChargino, \topLSP, and \topNLSP is allowed. In particular, it is designed to select the large fraction of events with “mixed” decays, where one \tone decays via a chargino and the other via a neutralino. In such cases, the \bChargino decay produces a high-\pt $b$-jet, while the $b$-jet from the other branch, \topLSP or \topNLSP, can be much softer. The third SR, \bCsofthigh, targets higher stop masses, focusing on the \bChargino signature. The $b$-jets are boosted due to the large mass difference between the stop and the higgsino states, and therefore the signature is characterised by two high-\pt $b$-jets, large \met, and a soft lepton. The remaining background after all signal selection requirements is dominated by semileptonic \ttbar, single-top, and $W$+heavy-flavour jets events.
In all three SRs, \lepPtoverMET is a powerful discriminant, as the higgsino signature is characterised by low-\pt leptons and large \met, while the SM backgrounds are dominated by events where the \met arises from a leptonic $W$-boson decay, producing lepton \pt and \met of similar magnitude. A shape-fit in \lepPtoverMET is performed, similar to the shape-fits implemented for the \tNmed and \bWN SRs.
The event selections for \bCsoftdiag, \bCsoftmed, and \bCsofthigh are detailed in Table 9.
\begin{tabular}{lccc}
\hline
 & \bCsoftdiag & \bCsoftmed & \bCsofthigh \\
\hline
Number of (jets, $b$-tags) & (, ) & (, ) & (, ) \\
$b$-tagged jet [\GeV] & & & \\
\mTopRecluster [\GeV] & top veto & – & – \\
Exclusion technique & shape-fit in \lepPtoverMET & shape-fit in \lepPtoverMET & shape-fit in \lepPtoverMET \\
\hline
\end{tabular}
7.4 Bino/higgsino mix scenario
For the bino/higgsino mix scenario, the SRs designed for the other scenarios are found to have good sensitivity and are therefore used.
7.5 Spin-0 mediator scenario
Two SRs, \DMlow and \DMhigh, are designed to search for dark-matter particles that are pair-produced via a spin-0 mediator (either scalar or pseudoscalar) produced in association with \ttbar. The \DMlow SR is optimised for mediator masses around \GeV, while the \DMhigh SR targets mediator masses around \GeV.
In addition, a predecessor to the \DMlow signal region, originally designed for a search using a smaller data set (13.2
8 Background estimates
The dominant background processes in this analysis originate from \ttbar, single-top , , and $W$+jets production. Most of the \ttbar and events in the hard-lepton signal regions have either both $W$ bosons decaying leptonically, with one lepton ‘lost’ (meaning it is not reconstructed, not identified, or removed by the overlap removal procedure), or one $W$ boson decaying leptonically and the other via a hadronically decaying $\tau$ lepton. This is in contrast to the soft-lepton signal regions, where most of the \ttbar and contribution arises from semileptonic decays.
These \ttbar background components are treated separately and referred to as 1L and 2L, where the latter also includes the dileptonic \ttbar process in which a $W$ boson decays into a $\tau$ lepton that subsequently decays hadronically. The background combined with the subdominant contribution is referred to as . Other background contributions arise from dibosons, +jets, and multijet production. The multijet background is estimated from data using a fake-factor method [HIGG-2013-13] and is found to be negligible in all regions.
The main background processes are estimated via dedicated CRs, used to normalise the simulation to data with a simultaneous fit, discussed in Section 10. The CRs are defined with event selections that are kinematically close to those of the SRs but with a few key requirements inverted, to significantly reduce the potential signal contribution and to enhance the yield and purity of a particular background. Each SR has dedicated CRs for the background processes with the largest contributions. The following background processes are normalised in dedicated CRs: semileptonic \ttbar (T1LCR), dileptonic \ttbar (T2LCR), $W$+jets (WCR), single-top (STCR), and (TZCR) processes. All other backgrounds are normalised with the most accurate theoretical cross-sections available.
Several signal regions that are dominated almost exclusively by either semileptonic or dileptonic \ttbar events have only one associated \ttbar CR, denoted generically TCR. Signal regions can have fewer associated CRs when the fractional contribution of the corresponding background is small. For the shape-fit analyses, the CRs of each background are not binned, and only one normalisation factor is extracted for each background process, which is applied in all SR bins.\footnote{The binned CR approach has been tested by comparing the results to a one-bin CR. The normalisation factors were found to be consistent with each other within the statistical uncertainties.}
The background estimates are tested using VRs, which are disjoint from both the CRs and SRs. Background normalisations determined in the CRs are extrapolated to the VRs and compared with the observed data. Each SR has associated VRs for the \ttbar (T1LVR and T2LVR) and $W$+jets (WVR) processes, which are constructed by inverting or relaxing the selection requirements to be orthogonal to the corresponding SR and CRs. A single-top VR (STVR) is defined for the \bCsoftmed and \bCsofthigh SRs, where single-top production is one of the dominant background processes.
The VRs are not used to constrain parameters in the fit, but provide a statistically independent test of the background estimates made using the CRs. The potential signal contamination in the VRs is studied for all considered signal models and mass ranges, and is found to be less than a few percent in most of the VRs and less than 15% in the VRs associated with the \tNdiag SRs.
The background estimation techniques fall into several different approaches. Requiring the presence of a hadronic top-quark candidate (top-tagging) is used for the background estimate in the SRs targeting signals with high-\pt top quarks. Compared to previous analyses, this background-estimation technique has the advantage that the \ttbar background composition does not change in the extrapolation from CR to SR. Similarly, hadronic $W$-boson reconstruction ($W$-tagging) is employed for the background estimate in the SRs targeting signals with high-\pt $W$ bosons decaying hadronically. In the following subsections the two approaches are described in detail, together with the background estimates for the remaining SRs. Table 10 summarises the approach for each SR with a brief description of the targeted signal scenario; each approach is detailed in Sections 8.1–8.5.
|SR||Signal scenario||Background strategy||Sections|
|\tNmed||Pure bino LSP||top-tagging + CR||8.1|
|\tNhigh||Pure bino LSP||top-tagging + CR||8.1|
|\tNdiaglow||Pure bino LSP||BDT||8.2|
|\tNdiagmed||Pure bino LSP||BDT||8.2|
|\tNdiaghigh||Pure bino LSP||BDT||8.2|
|\bWN||Pure bino LSP||three-body||8.3|
|\bffN||Pure bino LSP||soft-lepton||8.5|
|\bCmed||Wino NLSP||$W$-tagging + CR||8.4|
|\bCdiag||Wino NLSP||$W$-tagging + CR||8.4|
|\DMlowloose||DM+\ttbar||\mt extrapolation + CR||8.1|
|\DMlow||DM+\ttbar||top-tagging + CR||8.1|
|\DMhigh||DM+\ttbar||top-tagging + CR||8.1|
8.1 Hadronic top-tagging approach
In SRs targeting signals with high-\pt top quarks (\tNmed, \tNhigh, \DMlow, and \DMhigh), a requirement is made that events contain a recursively reclustered jet with a mass consistent with the top-quark mass. While the requirement on \mTopRecluster is powerful for identifying signals, it is also useful in defining CRs that are enriched in background processes with hadronically decaying top quarks (“top-tagged”) or depleted in such backgrounds (“top-vetoed”).
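As an illustration only (not the ATLAS implementation), the mass-window test applied to reclustered jets can be sketched as follows; the window boundaries of 140–210 \GeV and the four-vector convention are assumptions made for this sketch:

```python
import math

def invariant_mass(constituents):
    """Invariant mass of the summed four-momenta (E, px, py, pz), in GeV."""
    E = sum(c[0] for c in constituents)
    px = sum(c[1] for c in constituents)
    py = sum(c[2] for c in constituents)
    pz = sum(c[3] for c in constituents)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def is_top_tagged(reclustered_jets, m_lo=140.0, m_hi=210.0):
    """Tag the event if any reclustered jet has a mass in the top window.

    `reclustered_jets` is a list of jets, each given as a list of
    constituent four-vectors. The window [m_lo, m_hi] is an assumed
    illustrative value, not the analysis requirement.
    """
    return any(m_lo <= invariant_mass(jet) <= m_hi for jet in reclustered_jets)
```

A top-vetoed selection corresponds to the logical negation of the same test.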
The CR for dileptonic \ttbar (T2LCR) requires \mt above the $W$-boson endpoint. The SR requirement on \amtTwo is inverted (to select events with values below the top-quark mass) and a hadronic top-quark veto is applied to reduce the potential signal contamination and improve the purity. The semileptonic \ttbar CR (T1LCR) requires a tagged hadronic top-quark candidate and that \mt be within a window around the $W$-boson mass. The background from semileptonic \ttbar events is negligible in the SR but can be sizeable in the other CRs.
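The $W$-boson endpoint exploited in these selections follows from the standard definition of the transverse mass built from the lepton and the missing transverse momentum:

```latex
m_{\mathrm{T}} =
\sqrt{2\, p_{\mathrm{T}}^{\ell}\, E_{\mathrm{T}}^{\mathrm{miss}}
\left[ 1 - \cos \Delta\phi\!\left(\vec{p}_{\mathrm{T}}^{\,\ell},
\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}\right) \right]} .
```

For events in which the \met originates from a single neutrino in a $W \to \ell\nu$ decay, this distribution has a kinematic endpoint at the $W$-boson mass, which motivates both the T2LCR requirement above the endpoint and the window around the $W$-boson mass used for the semileptonic CRs.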
The CRs for $W$+jets (WCR) and single-top (STCR) require \mt to be below the $W$-boson mass. Both CRs also require large \amtTwo and a hadronic top-quark veto. The STCR additionally requires two $b$-tagged jets, to reduce the $W$+jets contribution, and a minimum angular separation between the $b$-tagged jets, $\Delta R(b_1, b_2)$. This latter requirement is useful to suppress the semileptonic \ttbar contribution, which can evade the \amtTwo endpoint when a charm quark from the hadronic $W$-boson decay is misidentified as a $b$-tagged jet, often leading to a small separation between the two identified $b$-tagged jets. Events with exactly one $b$-tagged jet, or failing the $b$-jet separation requirement, are assigned to the WCR. In order to increase the $W$+jets purity, only events with a positively charged lepton are selected. This requirement exploits the asymmetry in the production of $W^{+}$ over $W^{-}$ events in LHC proton–proton collisions. The asymmetry is further enhanced by the requirement of large \met, as neutrinos from decays of the mostly left-handed $W$ boson are preferentially emitted in the momentum direction of the boson.
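The angular separation used in the STCR/WCR assignment is the usual $\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$. A minimal sketch, with $\Delta\phi$ wrapped into $[-\pi, \pi]$:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R between two jets.

    The azimuthal difference is wrapped into [-pi, pi] so that jets on
    either side of the phi = +/-pi boundary are treated correctly.
    """
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    deta = eta1 - eta2
    return math.hypot(deta, dphi)
```
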
In addition, the background contribution from \ttbar Z is large, and a dedicated control region is designed for it, as described in Section 8.6.
Figure 9 shows various kinematic distributions in the CRs associated with the \tNmed SR. The backgrounds are scaled with normalisation factors obtained from a simultaneous likelihood fit of the CRs, described in Section 10.
A set of VRs associated with the corresponding CRs is defined by modifying the requirements on the \mt, \amtTwo, and hadronic top-tagging variables. The semileptonic \ttbar validation region (T1LVR) and $W$+jets validation region (WVR) slide the \mt window to a higher mass range. The dileptonic \ttbar VR (T2LVR) inverts the hadronic top-quark veto (so that a hadronic top-quark tag is required) and relaxes the requirement on \amtTwo. Since the \ttbar events are mostly dileptonic after the large \mt requirement, the purity of dileptonic \ttbar events remains high despite the hadronic top-quark tag requirement, and the relaxed \amtTwo requirement significantly reduces the potential signal contamination. There is no single-top VR (STVR) for these CRs; instead, the \mt window for the STCR extends to 120 \GeV in order to increase the number of data events entering the CR.
In Figure 9, various kinematic distributions in the VRs associated with \tNmed are compared to the observed data. The backgrounds are scaled with normalisation factors obtained from a simultaneous likelihood fit of the CRs, described in Section 10.
The CRs and VRs associated with \DMlowloose are retained unchanged from the previous analysis and are described in Table 13. The \ttbar and $W$+jets backgrounds are estimated in a low-\mt region, with and without a $b$-tag requirement, respectively. The corresponding VRs are defined at higher \mt. The single-top and remaining backgrounds are estimated using the same strategy as in the other regions described in this section.
[Table: selection requirements defining the CRs and VRs described above, specified in terms of the numbers of jets and $b$-tagged jets, the $b$-tagged-jet \pt, the \mTopRecluster top-quark tag or veto, and the \mt-based $\tau$-veto.]