Search for charged Higgs bosons decaying via $H^{\pm} \to \tau^{\pm}\nu$ in fully hadronic final states using $pp$ collision data at $\sqrt{s} = 8$ TeV with the ATLAS detector
The results of a search for charged Higgs bosons decaying to a $\tau$ lepton and a neutrino, $H^{\pm} \to \tau^{\pm}\nu$, are presented. The analysis is based on 19.5 fb$^{-1}$ of proton–proton collision data at $\sqrt{s} = 8$ TeV collected by the ATLAS experiment at the Large Hadron Collider. Charged Higgs bosons are searched for in events consistent with top-quark pair production or in associated production with a top quark, depending on the considered mass. The final state is characterised by the presence of a hadronic $\tau$ decay, missing transverse momentum, $b$-tagged jets, a hadronically decaying $W$ boson, and the absence of any isolated electrons or muons with high transverse momenta. The data are consistent with the expected background from Standard Model processes. A statistical analysis leads to 95% confidence-level upper limits on the product of branching ratios $\mathcal{B}(t \to bH^{\pm}) \times \mathcal{B}(H^{\pm} \to \tau^{\pm}\nu)$, between 0.23% and 1.3% for charged Higgs boson masses in the range 80–160 GeV. It also leads to 95% confidence-level upper limits on the production cross section times branching ratio, $\sigma(pp \to \bar{t}H^{+} + X) \times \mathcal{B}(H^{+} \to \tau^{+}\nu)$, between 0.76 pb and 4.5 fb, for charged Higgs boson masses ranging from 180 GeV to 1000 GeV. In the context of different scenarios of the Minimal Supersymmetric Standard Model, these results exclude nearly all values of $\tan\beta$ above one for charged Higgs boson masses between 80 GeV and 160 GeV, and exclude a region of parameter space with high $\tan\beta$ for masses between 200 GeV and 250 GeV.
\PreprintIdNumberCERN-PH-EP-2014-274 \PreprintJournalNameJHEP
Charged Higgs bosons ($H^{+}$, $H^{-}$) are predicted by several
non-minimal Higgs scenarios, such as two-Higgs-doublet models
(2HDM) [] or models containing Higgs
triplets [, , , , ].
As the Standard Model (SM) does not contain any elementary
charged scalar particle, the observation of a charged Higgs
boson would constitute clear evidence for physics beyond the SM.
In the MSSM, the Higgs sector can be completely determined at tree level by one of the Higgs boson masses, here taken to be $m_{H^{+}}$, and $\tan\beta$, the ratio of the vacuum expectation values of the two Higgs doublets. For $m_{H^{+}} < m_{t}$, the decay via $H^{+} \to \tau^{+}\nu$ is dominant for $\tan\beta$ larger than a few and remains sizeable for lower values of $\tan\beta$. For higher $m_{H^{+}}$, the decay via $H^{+} \to \tau^{+}\nu$ is still significant, especially for large values of $\tan\beta$ []. The combined LEP lower limit for the charged Higgs boson mass is about 90 GeV []. The Tevatron experiments placed upper limits on $\mathcal{B}(t \to bH^{+})$ in the 15–20% range for charged Higgs boson masses below the top-quark mass [, ]. In a previous search based on data taken at $\sqrt{s} = 7$ TeV with the ATLAS and CMS detectors, the limits on $\mathcal{B}(t \to bH^{+})$ were lowered to the range 0.8–4% [, ]. For all of these results, $\mathcal{B}(H^{+} \to \tau^{+}\nu) = 1$ was assumed.
This paper describes a search for charged Higgs bosons with masses
in the ranges 80–160 GeV and 180–1000 GeV.
The region 160 GeV $< m_{H^{+}} <$ 180 GeV is not considered in this paper, since there is currently
no reliable theoretical treatment of the interference between the different production modes in this transition region [].
The final state studied is characterised by the presence
of a hadronic $\tau$ decay ($\tau_{\mathrm{had}}$), missing transverse momentum ($E_{\mathrm{T}}^{\mathrm{miss}}$), $b$-quark-initiated jets, a
hadronically decaying $W$ boson, and the absence of any isolated electrons or muons with high transverse momenta.
In addition to the large branching ratio for a $\tau$ lepton to decay hadronically,
this final state contains only neutrinos associated with the $H^{+}$ production and decay, resulting in good discriminating
power between SM and signal processes.
Charged Higgs bosons are
searched for in a model-independent way, hence results
are given in terms of $\mathcal{B}(t \to bH^{+}) \times \mathcal{B}(H^{+} \to \tau^{+}\nu)$ (low-mass search, $80~\mathrm{GeV} < m_{H^{+}} < 160~\mathrm{GeV}$)
and $\sigma(pp \to \bar{t}H^{+} + X) \times \mathcal{B}(H^{+} \to \tau^{+}\nu)$ (high-mass search, $180~\mathrm{GeV} < m_{H^{+}} < 1000~\mathrm{GeV}$).
These limits are then also interpreted in different MSSM scenarios.
The results are based on 19.5 fb$^{-1}$ of data from $pp$ collisions at $\sqrt{s} = 8$ TeV, collected
in 2012 with the ATLAS detector at the LHC.
The final state analysed for the low-mass search is
$t\bar{t} \to b\bar{b}\,(W^{\mp} \to q\bar{q}')\,(H^{\pm} \to \tau^{\pm}_{\mathrm{had}}\nu)$.
The final state is similar or identical for
the high-mass search, depending on whether the additional $b$-quark-initiated jet is seen in the detector: the production process is $gb \to tH^{+}$ in the five-flavour scheme (5FS)
and $gg \to t\bar{b}H^{+}$ in the four-flavour scheme (4FS).
This paper is organised as follows. In section 2, the data and simulated samples used in this analysis are described. In section 3, the reconstruction of physics objects in ATLAS is discussed. The event selection and background modelling are presented in section 4. Systematic uncertainties are discussed in section 5, and the limit-setting procedure is described in section 6. Exclusion limits in terms of $\mathcal{B}(t \to bH^{+}) \times \mathcal{B}(H^{+} \to \tau^{+}\nu)$ (low-mass search) and $\sigma(pp \to \bar{t}H^{+} + X) \times \mathcal{B}(H^{+} \to \tau^{+}\nu)$ (high-mass search), as well as model-dependent exclusion contours, are presented in section 7.
2 Data and simulated events
The ATLAS detector [] consists of an inner tracking
detector with coverage in pseudorapidity up to $|\eta| = 2.5$, surrounded by a thin superconducting solenoid magnet, electromagnetic and hadronic calorimeters extending up to $|\eta| = 4.9$, and a muon spectrometer with coverage up to $|\eta| = 2.7$.
Only data taken with all ATLAS subsystems operational are used. Stringent detector and data quality requirements are applied, resulting in an integrated luminosity of 19.5 fb$^{-1}$ for the 2012 data-taking period. The integrated luminosity has an uncertainty of 2.8%, measured following the methodology described in ref. []. Events are required to have a primary vertex with at least five associated tracks, each with a transverse momentum greater than 400 MeV. The primary vertex is defined as the reconstructed vertex with the largest sum of squared track transverse momenta.
The background processes to this search include SM pair production of top quarks (\ttbar),
as well as the production of single top-quark, $W$+jets, $Z$+jets,
diboson and multi-jet events. These backgrounds are categorised
based on the type of reconstructed object identified as the visible decay products
of the $\tau$ candidate: events with a true $\tau$ lepton, events with an electron or muon misidentified as $\tau_{\mathrm{had}}$, and events with a jet misidentified as $\tau_{\mathrm{had}}$.
The modelling of SM \ttbar and single top-quark events is performed with MC@NLO [, ], except for $t$-channel single top-quark production, for which AcerMC [] is used. The top-quark mass is set to 172.5 GeV and the set of parton distribution functions used is CT10 []. For events generated with MC@NLO, the parton shower, hadronisation and underlying event are added using HERWIG [] and JIMMY []. PYTHIA6 [] is used instead for events generated with AcerMC. Inclusive cross sections are taken from the approximate next-to-next-to-leading-order (NNLO) predictions for \ttbar production [], for single top-quark production in the $t$-channel and $s$-channel [, ], as well as for $Wt$ production []. Overlaps between SM \ttbar and $Wt$ final states are removed []. Single vector boson ($W$ and $Z$) production is simulated with up to five accompanying partons, using ALPGEN [] interfaced to HERWIG and JIMMY, and using the CTEQ6L1 [] parton distribution functions. The additional partons produced in the matrix-element part of the event generation can be light partons or heavy quarks. In the latter case, ALPGEN is also used to generate dedicated samples with matrix elements for the production of massive $b\bar{b}$ or $c\bar{c}$ pairs. Diboson events ($WW$, $WZ$ and $ZZ$) are generated using HERWIG. The cross sections are normalised to NNLO predictions for $W$-boson and $Z/\gamma^{*}$ production [, ] and to next-to-leading-order (NLO) predictions for diboson production []. The SM background samples are summarised in table 1.
Signal samples are produced with PYTHIA 6 for 80 GeV $\leq m_{H^{+}} \leq$ 160 GeV in intervals of 10 GeV, separately for $t\bar{t} \to b\bar{b}H^{+}W^{-}$ and $t\bar{t} \to b\bar{b}H^{-}W^{+}$, where the charged Higgs bosons decay via $H^{\pm} \to \tau^{\pm}\nu$. The process $t\bar{t} \to b\bar{b}H^{+}H^{-}$ gives a very small contribution to the signal region, which is negligible after the event selection described in section 4.1. The cross section for these processes depends only on the total \ttbar production cross section and the branching ratio $\mathcal{B}(t \to bH^{+})$. For 180 GeV $\leq m_{H^{+}} \leq$ 1000 GeV, the simulation of the signal for top-quark associated production is performed with POWHEG [] interfaced to PYTHIA 8 []. At the lower end of this range, samples are produced in steps of 10 GeV, then in intervals of 25 GeV and, at the highest masses, in intervals of 50 GeV, with additional dedicated mass points. The production cross section for the high-mass charged Higgs boson is computed using the 4FS and 5FS, including theoretical uncertainties, and combined according to ref. []. The samples are generated at NLO using the 5FS and the narrow-width approximation for the $H^{+}$. Possible effects from the interference between the production of a charged Higgs boson through \ttbar decays and top-quark associated production are not taken into account.
|Process||Generator||Cross section [pb]|
|Single top-quark $t$-channel ($\geq$ 1 lepton)||AcerMC||[]|
|Single top-quark $s$-channel ($\geq$ 1 lepton)||MC@NLO||[]|
|Single top-quark $Wt$-channel (inclusive)||MC@NLO||[]|
|$WW$ ($\geq$ 1 electron/muon)||HERWIG||[]|
|$WZ$ ($\geq$ 1 electron/muon)||HERWIG||[]|
|$ZZ$ ($\geq$ 1 electron/muon)||HERWIG||[]|
|$tH^{+}$ signal (180–1000 GeV)||POWHEG|
The event generators are tuned to describe the ATLAS data.
In samples where AcerMC is interfaced to PYTHIA 6,
the AUET2B [] tune is used.
The Perugia 2011 C tune [] is used when PYTHIA 6 is interfaced to
POWHEG. For the samples generated with HERWIG,
the AUET2 [] tune is used.
In all samples with $\tau$ leptons, except for those simulated with PYTHIA 8, TAUOLA [] is used to model the $\tau$ decays, and PHOTOS [] is used to simulate photon radiation from charged leptons.
To take into account the presence of multiple proton–proton interactions occurring in the same and neighbouring bunch crossings (referred to as pile-up), simulated minimum-bias events are added to the hard process in each generated event. Prior to the analysis, simulated events are reweighted in order to match the distribution of the average number of pile-up interactions in the data. All generated events are propagated through a detailed GEANT4 simulation [, ] of the ATLAS detector and are reconstructed with the same algorithms as the data.
3 Physics object selection
Jets are reconstructed from energy deposits in the calorimeters, using the anti-$k_t$ algorithm [, ] with a radius parameter of $R = 0.4$. Jets are required to have $p_{\mathrm{T}} > 25$ GeV and $|\eta| < 2.5$. To reduce the contribution of jets initiated by pile-up, jets with $p_{\mathrm{T}} < 50$ GeV and $|\eta| < 2.4$ must pass the requirement that at least half of the scalar sum of the $p_{\mathrm{T}}$ of the tracks associated with the jet is contributed by tracks matched to the primary vertex []. An algorithm identifies jets containing $b$-quarks by combining impact parameter information with the explicit determination of a secondary vertex [], and these are referred to as $b$-tagged jets. A working point corresponding to a 70% efficiency for identifying $b$-quark-initiated jets is used.
Candidates for identification as $\tau_{\mathrm{had}}$ arise from jets reconstructed from energy deposits in calorimeters, again using the anti-$k_t$ algorithm with a radius parameter of $R = 0.4$, which have one or three charged-particle tracks within a cone of size $\Delta R = 0.2$, where $\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$, around the $\tau_{\mathrm{had}}$ axis []. These candidates are further required to have a visible transverse momentum ($p_{\mathrm{T}}^{\tau}$) of at least 20 GeV and to be within $|\eta| < 2.3$. The output of boosted decision tree algorithms [, ] is used to distinguish $\tau_{\mathrm{had}}$ from jets not initiated by $\tau$ leptons, separately for decays with one or three charged-particle tracks. In this analysis, a working point with 40% (35%) efficiency for identification of 1-prong (3-prong) $\tau_{\mathrm{had}}$ is used, and this requirement is referred to as the $\tau_{\mathrm{had}}$ identification. Dedicated algorithms are used to reject electrons and muons that are incorrectly identified as $\tau_{\mathrm{had}}$ []. After these algorithms are applied, the backgrounds arising from muons and electrons misidentified as $\tau_{\mathrm{had}}$ are very small, although there is still a sizeable background from jets misidentified as $\tau_{\mathrm{had}}$.
The $E_{\mathrm{T}}^{\mathrm{miss}}$ is defined as the magnitude of the negative vectorial sum of the transverse momenta of muons and of energy deposits in the calorimeter. It is computed using fully calibrated and reconstructed physics objects [].
The final states considered in this search contain no charged leptons, hence events containing isolated electron or muon candidates with high transverse momenta are rejected. Electron candidates are reconstructed from energy deposits in the calorimeter that are matched to tracks in the inner detector, taking losses due to bremsstrahlung into account. They are required to have a transverse energy ($E_{\mathrm{T}}$) greater than 25 GeV and to be within $|\eta| < 2.47$ (the transition region between the barrel and end-cap calorimeters, $1.37 < |\eta| < 1.52$, is excluded) [, ]. Muon candidates must pass tracking requirements in both the inner detector and the muon spectrometer, have $p_{\mathrm{T}} > 25$ GeV and $|\eta| < 2.5$ []. Additionally, electron candidates are required to pass pile-up-corrected, 90%-efficient calorimeter- and track-based isolation requirements, with $\Delta R$ cone sizes of 0.2 and 0.3, respectively, while muon candidates are required to pass a relative track-based isolation requirement of 0.05 within a $\Delta R = 0.4$ cone [].
4 Event selection and background modelling
4.1 Event selection
The analysis uses events passing a $\tau_{\mathrm{had}}$+$E_{\mathrm{T}}^{\mathrm{miss}}$ trigger. The $\tau_{\mathrm{had}}$ trigger is defined by calorimeter energy in a narrow core region and an isolation region at level-1 (L1), a basic combination of tracking and calorimeter information at level-2 (L2), and more sophisticated algorithms imported from the offline reconstruction at the event filter (EF). The $E_{\mathrm{T}}^{\mathrm{miss}}$ trigger uses calorimeter information at all levels, with a more refined algorithm at the EF. The EF threshold on the transverse momentum of the $\tau_{\mathrm{had}}$ trigger object is 27 GeV or 29 GeV, and for the $E_{\mathrm{T}}^{\mathrm{miss}}$ trigger the EF threshold is 40 GeV or 50 GeV. The multiple trigger thresholds are the result of slight changes of the trigger definition during the 2012 data-taking period, for which 50% of the events had EF thresholds at 27 GeV and 50 GeV, 43% at 29 GeV and 50 GeV, and 7% at 29 GeV and 40 GeV, for the $\tau_{\mathrm{had}}$ and $E_{\mathrm{T}}^{\mathrm{miss}}$ triggers, respectively.
Further event filtering is performed by discarding events in which any jet with $p_{\mathrm{T}} > 20$ GeV fails the quality cuts discussed in ref. []. This ensures that no jet is consistent with having originated from instrumental effects or non-collision backgrounds. The following requirements are then applied:
at least four (three) selected jets for the low-mass (high-mass) signal selection;
at least one of these selected jets being $b$-tagged at the 70%-efficient working point;
exactly one selected $\tau_{\mathrm{had}}$ with $p_{\mathrm{T}}^{\tau} > 40$ GeV matched to a $\tau_{\mathrm{had}}$ trigger object (trigger-matched);
no selected electron or muon in the event;
$E_{\mathrm{T}}^{\mathrm{miss}} > 65$ (80) GeV for the low-mass (high-mass) signal selection;
$E_{\mathrm{T}}^{\mathrm{miss}}/\sqrt{\sum p_{\mathrm{T}}^{\mathrm{track}}} > 6.5$ (6.0) $\mathrm{GeV}^{1/2}$ for the low-mass (high-mass) signal selection, where $\sum p_{\mathrm{T}}^{\mathrm{track}}$ is the sum of the transverse momenta of all tracks originating from the primary vertex. This is to reject events in which a large reconstructed \met is due to the limited resolution of the energy measurement.
For the selected events, the transverse mass ($m_{\mathrm{T}}$) of the $\tau_{\mathrm{had}}$ and $E_{\mathrm{T}}^{\mathrm{miss}}$ is defined as:
$$ m_{\mathrm{T}} = \sqrt{2\, p_{\mathrm{T}}^{\tau}\, E_{\mathrm{T}}^{\mathrm{miss}} \left(1 - \cos\Delta\phi_{\tau,\mathrm{miss}}\right)}, \qquad (1) $$
where $\Delta\phi_{\tau,\mathrm{miss}}$ is the azimuthal angle between the $\tau_{\mathrm{had}}$ and the direction of the missing transverse momentum. This discriminating variable takes values lower than the $W$ boson mass for background events and less than the $H^{\pm}$ mass for signal events, in the absence of detector resolution effects.
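The bound on $m_{\mathrm{T}}$ quoted above follows from simple two-body kinematics; a short derivation (standard and not specific to this analysis, with the neutrino taken as the sole source of $E_{\mathrm{T}}^{\mathrm{miss}}$ and both decay products treated as massless) is:

```latex
% For a parent of mass M decaying to a tau and a neutrino,
% the invariant mass satisfies
\begin{align*}
M^2 &= 2\left(E_\tau E_\nu - \vec{p}_\tau\cdot\vec{p}_\nu\right) \\
    &\geq 2\left(p_{\mathrm{T}}^{\tau}\, p_{\mathrm{T}}^{\nu}
         - \vec{p}_{\mathrm{T}}^{\,\tau}\cdot\vec{p}_{\mathrm{T}}^{\,\nu}\right)
     = 2\, p_{\mathrm{T}}^{\tau}\, p_{\mathrm{T}}^{\nu}
       \left(1 - \cos\Delta\phi_{\tau,\nu}\right)
     = m_{\mathrm{T}}^2 ,
\end{align*}
% where the inequality uses E_tau E_nu - p_{z,tau} p_{z,nu} >= p_T^tau p_T^nu
% (Cauchy-Schwarz applied to the (p_T, p_z) components).
% Hence m_T <= m_W for W -> tau nu and m_T <= m_{H+} for H+ -> tau nu.
```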
A minimal requirement is placed on $m_{\mathrm{T}}$ at 20 (40) GeV in the low-mass (high-mass) search. This requirement is motivated in section 4.2. After the full event selection, the signal has an acceptance of 0.30–0.60% for the low-mass range, and 1.7–5.8% for the high-mass range, where in both cases the acceptance increases with increasing $m_{H^{+}}$. The acceptances are evaluated with respect to signal samples where both the $\tau$ lepton and the associated top quark decay inclusively.
4.2 Data-driven estimation of the backgrounds with a true $\tau$
An embedding method [] is used to estimate the backgrounds that contain a real $\tau$ lepton from a vector boson decay. The method is based on a control data sample of $\mu$+jets events satisfying criteria similar to those of the signal selection, except for the $\tau_{\mathrm{had}}$ and $E_{\mathrm{T}}^{\mathrm{miss}}$ requirements, and replacing the detector signature of the muon by a simulated hadronic $\tau$ decay. The method is applied to a control region of $\mu$+jets events, rather than $e$+jets, due to the clean signature and the relative ease with which the measured muon can be removed. These new hybrid events are then used for the background prediction. An advantage of this approach, compared to simulation, is that, with the exception of the $\tau$, the estimate is extracted from data; this includes the contributions from the underlying event and pile-up, jets, and all sources of $E_{\mathrm{T}}^{\mathrm{miss}}$ except for the neutrino from the $\tau$ decay. Furthermore, since the normalisation of the background estimate is evaluated from the data, assuming lepton universality of the $W$ boson decay, the method does not rely on theoretical cross sections and their uncertainties. This embedding method has been used in previous charged Higgs boson searches [] as well as in SM $Z \to \tau\tau$ [, ] analyses.
To select the $\mu$+jets sample from the data, the following requirements are made:
a single-muon trigger with a threshold of 24 GeV or 36 GeV (single-muon triggers with two different thresholds are used, since the lower-threshold trigger also requires the muon to be isolated);
exactly one isolated muon with $p_{\mathrm{T}} > 25$ GeV and no isolated electron with $E_{\mathrm{T}} > 25$ GeV;
at least four (three) jets with $p_{\mathrm{T}} > 25$ GeV for the low-mass (high-mass) charged Higgs boson search, at least one of which is $b$-tagged;
$E_{\mathrm{T}}^{\mathrm{miss}} > 20$ (40) GeV for the low-mass (high-mass) charged Higgs boson search.
This selection is looser than the selection defined in section 4.1, in order not to bias the sample. However, the muon $p_{\mathrm{T}}$ cut in the $\mu$+jets sample selection removes events with very low $m_{\mathrm{T}}$. Thus, a cut on $m_{\mathrm{T}} > 20$ (40) GeV is introduced in the search for low-mass (high-mass) charged Higgs bosons to remove this bias. With this selection, there is a possible small contamination from signal events with a leptonically decaying $\tau$ lepton. This small contamination, which is estimated using simulation, has a much softer $m_{\mathrm{T}}$ distribution than the signal with $\tau_{\mathrm{had}}$, and is observed to have a negligible impact on the evaluation of signal strength or exclusion limits. Contamination from leptonically decaying $\tau$ leptons from $W$ decays is accounted for in the overall normalisation ($f_{\tau\to\mu}$ in eq. (3)).
To replace a muon in the selected data, the track that is associated with the muon is removed. The energy deposited in the calorimeters is removed by simulating a muon with the same kinematics as in the selected data event and identifying the corresponding cells. Thus, the removal of energy deposits not associated with the selected muon is minimised. The momentum of the muon in selected events is extracted and rescaled to account for the higher $\tau$ lepton mass,
$$ \vec{p}_{\tau} = \sqrt{E_{\mu}^{2} - m_{\tau}^{2}}\; \frac{\vec{p}_{\mu}}{|\vec{p}_{\mu}|}, \qquad (2) $$
where $\vec{p}_{\tau}$ is the rescaled momentum, $E_{\mu}$ is the reconstructed energy of the muon, $m_{\tau}$ is the $\tau$ mass, and $\vec{p}_{\mu}$ is the reconstructed muon momentum. The $\tau$ lepton with rescaled momentum is further processed by TAUOLA to produce the hadronic $\tau$ decay and account for the $\tau$ polarisation as well as for final-state radiation. The $\tau$ lepton decay products are propagated through the full detector simulation and reconstruction. Events referred to as containing a true $\tau_{\mathrm{had}}$ are those with a genuine hadronic $\tau$ decay, as expected from the embedding method.
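The effect of this rescaling can be illustrated numerically; a minimal sketch, in which the 40 GeV muon is a hypothetical example and the lepton masses are the PDG values:

```python
import math

M_TAU = 1.77686  # GeV, tau lepton mass (PDG)
M_MU = 0.10566   # GeV, muon mass (PDG)

def rescale_momentum(p_mu):
    """Rescale a muon momentum magnitude (GeV) to a tau of the same energy."""
    e_mu = math.sqrt(p_mu ** 2 + M_MU ** 2)   # reconstructed muon energy
    return math.sqrt(e_mu ** 2 - M_TAU ** 2)  # tau momentum at equal energy

# For a 40 GeV muon, the rescaled tau momentum is slightly smaller,
# since more of the fixed energy goes into the heavier tau mass.
p_tau = rescale_momentum(40.0)
```

The direction is left unchanged; only the momentum magnitude shrinks, by about 40 MeV for a 40 GeV muon.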
The shape of the $m_{\mathrm{T}}$ distributions for backgrounds with a true $\tau$ is taken from the distribution obtained with the embedded events, after applying the corresponding signal selection. The normalisation is then derived from the number of embedded events:
$$ N_{\mathrm{true}\,\tau} = N_{\mathrm{emb}} \times \left(1 - f_{\tau\to\mu}\right) \times \frac{\epsilon_{\mathrm{trig}}}{\epsilon_{\mu}} \times \mathcal{B}(\tau \to \mathrm{hadrons}), \qquad (3) $$
where $N_{\mathrm{true}\,\tau}$ is the estimated number of events with a true $\tau$, $N_{\mathrm{emb}}$ is the number of embedded events in the signal region, $f_{\tau\to\mu}$ is the fraction of events in which the selected muon is a decay product of a $\tau$ lepton (taken from simulation), $\epsilon_{\mathrm{trig}}$ is the $\tau_{\mathrm{had}}$+$E_{\mathrm{T}}^{\mathrm{miss}}$ trigger efficiency (as a function of $p_{\mathrm{T}}^{\tau}$ and $E_{\mathrm{T}}^{\mathrm{miss}}$, derived from data, see section 4.5), $\epsilon_{\mu}$ is the muon trigger and identification efficiency (as a function of $p_{\mathrm{T}}$ and $\eta$, derived from data) and $\mathcal{B}(\tau \to \mathrm{hadrons})$ is the branching ratio of $\tau$ lepton decays to hadrons.
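The structure of this normalisation (correct for the muon efficiency, apply the trigger efficiency, remove muons that themselves came from $\tau$ decays, and multiply by the hadronic branching ratio) can be sketched with arithmetic; every number below is hypothetical and chosen only for illustration:

```python
# Illustrative normalisation of the embedded-event yield.
# All inputs are hypothetical placeholders, not analysis values.
N_emb = 12000        # embedded events in the signal region
f_tau_to_mu = 0.04   # fraction of selected muons that came from tau decays
eff_trig = 0.70      # tau+MET trigger efficiency (data-derived in the paper)
eff_mu = 0.85        # muon trigger x identification efficiency
br_tau_had = 0.648   # B(tau -> hadrons + nu), approximate PDG value

# Schematic form of eq. (3):
N_true_tau = N_emb * (1.0 - f_tau_to_mu) * eff_trig / eff_mu * br_tau_had
```

In the analysis the efficiencies are applied event by event as functions of $p_{\mathrm{T}}$ and $\eta$ rather than as single numbers.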
The $m_{\mathrm{T}}$ distributions for selected events with a true $\tau$, as obtained with the embedding method, are shown in figure b and compared to simulation. Embedded data and simulation agree well within uncertainties. The combined systematic and statistical uncertainties on the embedded prediction and on the simulation are compared directly in figure b, which shows the reduction provided by the use of the embedding method.
4.3 Data-driven estimation of the multi-jet backgrounds
For the data-driven estimation of the backgrounds with a jet misidentified as a $\tau_{\mathrm{had}}$ (multi-jet background), two data samples are defined, differing only in the $\tau_{\mathrm{had}}$ identification criteria. The tight sample contains a larger fraction of events with a real $\tau_{\mathrm{had}}$; its candidates are required to pass the tight identification selection described in the object selection, in addition to the trigger matching required in the event selection of section 4.1. The loose sample, which contains a larger fraction of events with a misidentified $\tau_{\mathrm{had}}$, is obtained by removing the identification requirement that was applied in the tight sample. By construction, the tight data sample is a subset of the loose data sample.
The loose sample consists of $N_{r}$ and $N_{m}$ events with, respectively, a real or misidentified $\tau_{\mathrm{had}}$. It is also composed of $N_{\mathrm{loose}}$ events with a $\tau_{\mathrm{had}}$ passing the loose but not the tight selection, and $N_{\mathrm{tight}}$ events in which the $\tau_{\mathrm{had}}$ fulfils the tight selection. Using the efficiencies $\epsilon_{r}$ and $\epsilon_{m}$, respectively, for a real or misidentified loose $\tau_{\mathrm{had}}$ satisfying the tight criteria, the following relation can be established:
$$ \begin{pmatrix} N_{\mathrm{loose}} \\ N_{\mathrm{tight}} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ \epsilon_{r} & \epsilon_{m} \end{pmatrix} \begin{pmatrix} N_{r} \\ N_{m} \end{pmatrix}. $$
In turn, inverting the matrix above, the number of events in which the misidentified $\tau_{\mathrm{had}}$ passes the tight selection can be written as:
$$ N_{m}^{\mathrm{tight}} = \epsilon_{m} N_{m} = \frac{\epsilon_{m}}{\epsilon_{r} - \epsilon_{m}} \left( \epsilon_{r} N_{\mathrm{loose}} - N_{\mathrm{tight}} \right). $$
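This two-component system and its inversion can be verified numerically. A minimal sketch with hypothetical efficiencies and sample composition, writing `eps_r` (`eps_m`) for the probability that a real (misidentified) loose candidate also passes the tight selection:

```python
import numpy as np

# Hypothetical efficiencies for a real (eps_r) and misidentified (eps_m)
# loose candidate to also pass the tight identification.
eps_r, eps_m = 0.75, 0.30
# Hypothetical true composition of the loose sample.
N_real, N_misid = 400.0, 1600.0

# Forward relation: (N_loose, N_tight) from the true composition.
M = np.array([[1.0, 1.0], [eps_r, eps_m]])
N_loose, N_tight = M @ np.array([N_real, N_misid])

# Invert to recover the misidentified component, then its tight yield.
recovered = np.linalg.solve(M, np.array([N_loose, N_tight]))
N_misid_tight = eps_m * recovered[1]

# Closed form from the inverted matrix:
closed = eps_m * (eps_r * N_loose - N_tight) / (eps_r - eps_m)
```

Both routes give the same answer, which is the point of the matrix method: only the observable yields and the two efficiencies are needed.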
The final values of $\epsilon_{r}$ and $\epsilon_{m}$ are parameterised in terms of the number of charged-particle tracks in the core cone ($\Delta R < 0.2$) and the number of charged-particle tracks in the hollow isolation cone ($0.2 < \Delta R < 0.4$) around the $\tau_{\mathrm{had}}$ axis [], as well as the $p_{\mathrm{T}}$ and $\eta$ of the $\tau_{\mathrm{had}}$. Correlations between the variables used for parameterisation are found to have a negligible effect on the results of the method.
The probability $\epsilon_{r}$ is determined using true $\tau_{\mathrm{had}}$ in simulated $t\bar{t}$ events in the signal region. The probability $\epsilon_{m}$ is measured in a $W$+jets control region in data. Events in this control region are triggered by a combined trigger requiring an electron or a muon in addition to a $\tau_{\mathrm{had}}$; in both cases, the $\tau_{\mathrm{had}}$ trigger object has the same threshold. The control region must have exactly one trigger-matched reconstructed electron or muon, in addition to a trigger-matched, reconstructed, loose $\tau_{\mathrm{had}}$. The control region is also required to have zero $b$-tagged jets and a minimum $m_{\mathrm{T}}$ (using eq. (1), with the $\tau_{\mathrm{had}}$ replaced by the electron or muon). The contamination from correctly reconstructed $\tau_{\mathrm{had}}$ and from electrons or muons mis-reconstructed as $\tau_{\mathrm{had}}$ is subtracted using simulation. Signal processes contribute negligibly to this region.
Having computed the identification and misidentification efficiencies $\epsilon_{r}$ and $\epsilon_{m}$, every event in the loose sample is given a weight as follows, in order to estimate the background with a misidentified $\tau_{\mathrm{had}}$ in the tight sample:
for an event with a loose but not tight $\tau_{\mathrm{had}}$, $w = \epsilon_{m}\,\epsilon_{r} / (\epsilon_{r} - \epsilon_{m})$;
for an event with a tight $\tau_{\mathrm{had}}$, $w = \epsilon_{m}\,(\epsilon_{r} - 1) / (\epsilon_{r} - \epsilon_{m})$.
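Summing these standard matrix-method weights over the loose sample reproduces the closed-form estimate of the misidentified-candidate yield in the tight sample; a minimal check with hypothetical yields (consistent with, but not taken from, the analysis):

```python
# Per-event matrix-method weights, checked against the closed form.
# eps_r (eps_m): probability for a real (misidentified) loose candidate
# to pass the tight selection; yields below are hypothetical.
eps_r, eps_m = 0.75, 0.30
N_loose, N_tight = 2000, 780  # loose-sample total and its tight subset

w_loose_not_tight = eps_m * eps_r / (eps_r - eps_m)
w_tight = eps_m * (eps_r - 1.0) / (eps_r - eps_m)

# Weighted sum over the loose sample ...
total = (N_loose - N_tight) * w_loose_not_tight + N_tight * w_tight
# ... equals the inverted-matrix closed form.
closed = eps_m * (eps_r * N_loose - N_tight) / (eps_r - eps_m)
```

The negative weight for tight events is what subtracts the real-$\tau$ contamination from the tight sample.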
Events with jets misidentified as $\tau_{\mathrm{had}}$ are a major background in the high-$m_{\mathrm{T}}$ region of both the low-mass and high-mass selections, but this region has less than one expected event per 20 GeV bin. This limitation is circumvented by fitting the $m_{\mathrm{T}}$ distribution with a power-log function in the mass range 200–800 GeV. The power-log function is defined by the following formula:
$$ f(m_{\mathrm{T}}) = m_{\mathrm{T}}^{\,a + b \log m_{\mathrm{T}}}, $$
where $a$ and $b$ are fitted constants. The resulting $m_{\mathrm{T}}$ distribution after the variation of each systematic uncertainty is fitted separately. An additional systematic uncertainty is added for the choice of fit function, by symmetrising the difference between the baseline fit and an alternative fit using an exponential function. The exponential is chosen to probe the effect on the expected yield in the low-statistics tail region, since it also describes the multi-jet background well in the region with many events. Figure b shows the fits obtained in the nominal case, for the systematic uncertainty due to the chosen fit function, and for all other systematic uncertainties related to this background estimation (see section 5.2).
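A convenient property of the power-log shape is that it becomes linear in its constants after taking logarithms, so the fit reduces to a least-squares solve. A sketch assuming the form $f(m_{\mathrm{T}}) = m_{\mathrm{T}}^{\,a+b\log m_{\mathrm{T}}}$ (the exact parameterisation used in the analysis may differ), with hypothetical parameter values and noiseless yields so that the fit recovers the inputs exactly:

```python
import numpy as np

# Power-log shape: f(m_T) = m_T ** (a + b * log(m_T)).
# Taking logs:  log f = a * L + b * L**2,  with  L = log(m_T),
# which is linear in (a, b).
a_true, b_true = 2.0, -0.6                   # hypothetical constants
m_t = np.linspace(200.0, 800.0, 25)          # bin centres in GeV
y = m_t ** (a_true + b_true * np.log(m_t))   # noiseless hypothetical yields

L = np.log(m_t)
design = np.column_stack([L, L ** 2])        # columns for a and b
a_fit, b_fit = np.linalg.lstsq(design, np.log(y), rcond=None)[0]
```

With real, statistically limited bins the same solve would be weighted by the bin uncertainties, but the linearity in $(a, b)$ is unchanged.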
4.4 Backgrounds with electrons or muons misidentified as $\tau_{\mathrm{had}}$
Backgrounds that arise from events where an electron or muon is misidentified as $\tau_{\mathrm{had}}$ are heavily suppressed by dedicated veto algorithms, so that these events contribute only at the level of 1–2% of the total background. These backgrounds are estimated from simulated events, and they include contributions from \ttbar, single top-quark, diboson, $W$+jets and $Z$+jets processes. Leptons from in-flight decays in multi-jet events are accounted for in the multi-jet background estimate.
4.5 Trigger efficiency measurement
The analysis presented in this paper relies on $\tau_{\mathrm{had}}$+$E_{\mathrm{T}}^{\mathrm{miss}}$ triggers. To correct for any difference between the trigger efficiencies observed in simulation and those observed in data, $p_{\mathrm{T}}^{\tau}$- and $E_{\mathrm{T}}^{\mathrm{miss}}$-dependent correction factors are derived, whose evaluation is limited by statistical uncertainties. To increase the sample size, the $\tau_{\mathrm{had}}$ and $E_{\mathrm{T}}^{\mathrm{miss}}$ trigger efficiencies are determined separately, and residual effects due to correlations are taken into account as systematic uncertainties. To measure the efficiencies, a tag-and-probe method is used in a control region enriched with $t\bar{t}$ events, with a selection using a muon trigger with a threshold of 24 GeV or 36 GeV. The trigger efficiencies are fitted separately for events with a $\tau_{\mathrm{had}}$ that has one or three charged-particle tracks. The $\tau_{\mathrm{had}}$ ($E_{\mathrm{T}}^{\mathrm{miss}}$) trigger efficiencies are fitted in the range of 20–100 (20–500) GeV. The ratios of the fitted functions for data and simulation are then applied to the simulated samples as continuous correction factors.
Since no trigger information is available in the embedded sample, the measured trigger efficiencies are applied to that sample as weights. The efficiencies for the $E_{\mathrm{T}}^{\mathrm{miss}}$ trigger derived as described above need to be corrected for misidentified $\tau_{\mathrm{had}}$. The fraction of events with a misidentified $\tau_{\mathrm{had}}$ is substantial in the $t\bar{t}$ sample used for the tag-and-probe method, leading to a lower efficiency than in a sample containing only events with a true $\tau_{\mathrm{had}}$. Since only events with a true $\tau_{\mathrm{had}}$ are present in the embedded sample, the efficiencies determined from data are corrected by the ratio of the simulated efficiency for true $\tau_{\mathrm{had}}$ to the simulated efficiency for the $t\bar{t}$ tag-and-probe sample.
4.6 Event yields after the event selection
The expected numbers of background events and the results from data, together with an expectation for signal contributions in the low-mass and high-mass selections, are shown in table 2. For the low-mass search, the signal contribution is shown for a cross section corresponding to a fixed value of $\mathcal{B}(t \to bH^{+}) \times \mathcal{B}(H^{+} \to \tau^{+}\nu)$, and for the high-mass search a possible signal contribution in a benchmark scenario of the MSSM is shown.
The number of events with a true $\tau$ is derived from the number of embedded events and does not depend on the theoretical cross section of the process. However, this analysis does rely on the theoretical inclusive \ttbar production cross section $\sigma_{t\bar{t}} = 253$ pb [] for the estimation of the small background with electrons or muons misidentified as $\tau_{\mathrm{had}}$.
|Sample||Low-mass selection||High-mass selection|
|True $\tau$ (embedding method)|
|All SM backgrounds|
5 Systematic uncertainties
5.1 Trigger efficiencies
Systematic uncertainties on the measurement of the trigger efficiencies arise from multiple sources: the selection of the muon in the tag-and-probe sample, the number of misidentified $\tau_{\mathrm{had}}$, the choice of fitting function, slightly varying trigger requirements during the data-taking period, a residual correlation between the $\tau_{\mathrm{had}}$ and $E_{\mathrm{T}}^{\mathrm{miss}}$ triggers, and the effect of the $\tau$ energy correction on the trigger efficiency. The dominant systematic uncertainty, which arises from misidentified $\tau_{\mathrm{had}}$ in the control region, is evaluated by measuring the trigger correction factors after varying the expected misidentified-$\tau_{\mathrm{had}}$ yield by its uncertainty. These uncertainties are relevant for background events with leptons misidentified as $\tau_{\mathrm{had}}$, as well as for true-$\tau$ and signal events. The effects on a low-mass and a high-mass signal sample are summarised in table 3, and the effect on background events with a true $\tau$ is shown in table 4. The trigger correction factors used to account for differences between the efficiencies in simulation and data are shown in figure d.
|Source of uncertainty||Low-mass selection||High-mass selection|
5.2 Data-driven background estimation
The systematic uncertainties arising from the data-driven
methods used to estimate the various backgrounds are summarised
in table 4.
|Source of uncertainty||Low-mass selection||High-mass selection|
|Parameters in normalisation|
|Statistical uncertainty on $\epsilon_{m}$|
|Statistical uncertainty on $\epsilon_{r}$|
The systematic uncertainties affecting the estimation of the backgrounds with true $\tau$, discussed in section 4.2, consist of the potential bias introduced by the embedding method itself (embedding parameters, evaluated by varying the amount of energy that is subtracted when removing the calorimeter deposits of the muon in the original event), uncertainties from the trigger efficiency measurement as discussed in section 5.1, uncertainties due to a possible contamination from multi-jet events (evaluated by varying the muon isolation requirements), uncertainties associated with the simulated $\tau_{\mathrm{had}}$ ($\tau$ energy scale and identification efficiency), and uncertainties on the normalisation. The latter are dominated by the statistical uncertainty of the selected $\mu$+jets control sample and the trigger efficiency uncertainties.
For the estimation of backgrounds with jets misidentified as $\tau_{\mathrm{had}}$, discussed in section 4.3, the dominant systematic uncertainties on the misidentification probability $\epsilon_{m}$ are the statistical uncertainty due to the control sample size and uncertainties due to the difference in the jet composition (gluon- or quark-initiated) between the control and signal regions. The uncertainty arising from differences in jet composition is evaluated from the difference in shape and normalisation that arises when $\epsilon_{m}$ is measured in a control region that is enriched in events with gluon-initiated jets; this control region differs from the signal region only by inverted $b$-tag and kinematic requirements. Other uncertainties are due to the statistical uncertainty on $\epsilon_{r}$, the effect of the uncertainty in the simulated true-$\tau_{\mathrm{had}}$ identification efficiency on the measurement of $\epsilon_{r}$ and $\epsilon_{m}$, and the effect of the simulated electron veto efficiency for true electrons on the measurement of $\epsilon_{m}$.
5.3 Detector simulation
Systematic uncertainties originating from the simulation of pile-up and object reconstruction are considered for signal events and for background events with leptons misidentified as $\tau_{\mathrm{had}}$. This background amounts to roughly 1% of the final background in both the low-mass and high-mass searches, so these systematic uncertainties have little effect on the final results.
Uncertainties related to the $\tau$ energy scale and identification efficiency are also taken into account. The uncertainty on the $\tau_{\mathrm{had}}$ identification efficiency is at the level of a few percent, both for $\tau_{\mathrm{had}}$ with one charged track and for $\tau_{\mathrm{had}}$ with three charged tracks; it has been measured in data using a tag-and-probe method []. The $\tau$ energy scale is measured with a precision at the percent level []. It is determined by fitting the reconstructed visible mass of $Z \to \tau\tau$ events in data. Uncertainties related to the jet and $b$-tagged jet energy scale, energy resolution, flavour identification and calibration, and the effects of pile-up interactions are also taken into account [, ], as well as uncertainties related to the reconstruction of $E_{\mathrm{T}}^{\mathrm{miss}}$ [].
The impact of most sources of systematic uncertainty is assessed by re-applying the selection cuts for each analysis after varying a particular parameter by its one-standard-deviation uncertainty. The dominant instrumental systematic uncertainties include the jet energy scale, the $\tau$ energy scale, and the $\tau_{\mathrm{had}}$ identification efficiency. All instrumental systematic uncertainties are taken into account in the reconstruction of $E_{\mathrm{T}}^{\mathrm{miss}}$.
These uncertainties affect all simulated samples, i.e. the signal and the background contribution with leptons misidentified as hadronic τ candidates. Since the τ leptons in the embedded samples are simulated, all τ-related uncertainties are relevant for the background with true τ leptons as well.
5.4 Generation of \ttbar and signal events
In order to estimate the systematic uncertainties arising from the \ttbar and low-mass signal generation, as well as from the parton-shower model, the acceptance is computed for events produced with MC@NLO interfaced to HERWIG/JIMMY and with POWHEG interfaced to PYTHIA 8. An uncertainty on the theoretical cross section, including both the factorisation/renormalisation-scale and parton distribution function uncertainties, is also taken into account for backgrounds with a lepton misidentified as a hadronic τ and for low-mass signal samples. The estimate of the small background with electrons or muons misidentified as hadronic τ candidates relies additionally on the theoretical inclusive \ttbar production cross section pb [].
The generator modelling uncertainties for the high-mass signal samples are estimated from a comparison between events produced with MC@NLO interfaced to HERWIG++ [] and POWHEG interfaced to PYTHIA 8.
The systematic uncertainties originating from initial- and final-state parton radiation, which modify the jet production rate, are computed for \ttbar backgrounds and applied to low-mass signal events using samples generated with AcerMC interfaced to PYTHIA 6, in which the initial- and final-state radiation parameters are set to a range of values not excluded by the experimental data []. The largest relative differences with respect to the reference sample, after the full event selection, are used as systematic uncertainties. For high-mass signal samples, this uncertainty is evaluated by varying the factorisation/renormalisation scale parameters in the production of the signal samples (QCD scale). The uncertainty due to the choice of parton distribution function has a negligible impact for both background and signal and is not included. An additional uncertainty, arising from the difference in acceptance between four-flavour-scheme (4FS) and five-flavour-scheme (5FS) production, is evaluated using dedicated signal samples generated at leading order with MadGraph [] interfaced to PYTHIA 8, whereas the nominal signal samples are generated at NLO. The systematic uncertainties arising from the modelling of the \ttbar and signal event generation and the parton shower, as well as from initial- and final-state radiation, are summarised in table 5.
All of these uncertainties, except for the production uncertainty, affect only signal and background events where leptons are misidentified as hadronic τ candidates.
| Source of uncertainty | Normalisation uncertainty |
| --- | --- |
| Generator model () | |
| Generator model () | |
| Jet production rate (SM and ) (QCD scale) | |
| Generator model () | |
| Generator model (SM) | |
| Jet production rate () (QCD scale) | |
| Jet production rate (SM) (QCD scale) | |
| H± production (4FS vs 5FS) | |
6 Statistical analysis
In order to test the compatibility of the data with the background-only and signal+background hypotheses, a profile likelihood ratio [] is used, with the transverse mass m_T as the discriminating variable. The statistical analysis is based on a binned likelihood function for these distributions. Systematic uncertainties in shape and normalisation, discussed in section 5, are incorporated via nuisance parameters that are fully correlated amongst the different backgrounds, and the one-sided profile likelihood ratio, q_μ, is used as the test statistic. The parameter of interest, the signal strength μ, corresponds to B(t→bH±) × B(H±→τν) in the low-mass search and to the production cross section times branching ratio, σ × B(H±→τν), in the high-mass search. Expected limits are derived using the asymptotic approximation [].
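The one-sided test statistic referred to here has the standard form from the asymptotic-formulae literature; writing θ for the nuisance parameters, θ̂̂(μ) for their conditional maximum-likelihood estimators at fixed μ, and (μ̂, θ̂) for the unconditional ones, it reads:

```latex
q_\mu \;=\;
\begin{cases}
  -2\ln \dfrac{L\big(\mu,\hat{\hat{\theta}}(\mu)\big)}{L\big(\hat{\mu},\hat{\theta}\big)} & \hat{\mu} \le \mu, \\[2ex]
  \;0 & \hat{\mu} > \mu,
\end{cases}
\qquad
L(\mu,\theta) \;=\; \prod_{i\,\in\,\text{bins}} \mathrm{Pois}\big(n_i \,\big|\, \mu\, s_i(\theta) + b_i(\theta)\big)\, \prod_{j} G(\theta_j),
```

where s_i and b_i are the expected signal and background yields in bin i and the G(θ_j) are the constraint terms for the nuisance parameters.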
The nuisance parameters are fitted simultaneously, by means of a negative log-likelihood minimisation, in both the low-mass and high-mass regions in order to ensure that they are well estimated. This is shown in figure b for the nuisance parameters that have the largest impact on the fitted signal strength μ̂. The black dots indicate how a given nuisance parameter deviates from its expectation, while the black error bars indicate how its post-fit uncertainty compares with its nominal uncertainty. In both the low-mass and high-mass searches, the black dots and error bars indicate, respectively, that none of the nuisance parameters deviate by more than one standard deviation and that their uncertainties are not underestimated. The blue hatched boxes show the deviation of the fitted signal strength when a specific nuisance parameter is shifted upwards or downwards by its post-fit uncertainty.
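The pull and impact quantities shown in such rankings can be sketched in a few lines. This is a toy illustration with hypothetical numbers, not the analysis code: the pull compares a fitted nuisance parameter with its nominal value in units of the nominal uncertainty, and the impact is the shift of the fitted signal strength when the parameter is moved by its post-fit uncertainty.

```python
# Toy sketch of nuisance-parameter pull and impact (names and values
# are illustrative, not taken from the ATLAS fit).
def pull(theta_fit, theta_nominal, sigma_nominal):
    """Deviation of the fitted nuisance parameter in units of its nominal uncertainty."""
    return (theta_fit - theta_nominal) / sigma_nominal

def impact(mu_up, mu_down, mu_nominal):
    """Largest shift of the fitted signal strength when the nuisance
    parameter is varied up/down by its post-fit uncertainty."""
    return max(abs(mu_up - mu_nominal), abs(mu_down - mu_nominal))

print(pull(0.3, 0.0, 1.0))      # parameter pulled by 0.3 sigma
print(impact(1.12, 0.91, 1.0))  # mu shifts by at most ~0.12
```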
The results in figure b indicate the relative impact of the systematic uncertainties in the statistical analysis of the low-mass and high-mass searches. For the low-mass search, the most important systematic uncertainties are those related to the measurement of the trigger efficiency and to the simulation of the detector response to hadronic τ decays. Since the low-mass search is dominated by backgrounds with a true τ, this is consistent with expectations. For the high-mass search, the most important systematic uncertainties are due to jets misidentified as hadronic τ candidates, including both the yield and the m_T distribution of such events; the next-largest effect is from the true-τ background.
In figure b, the m_T distribution after the final fit is shown. No significant deviation of the data from the SM prediction is observed. For the low-mass charged Higgs boson search, exclusion limits are set on the product of branching ratios B(t→bH±) × B(H±→τν). For the high-mass search, exclusion limits are set on σ × B(H±→τν), and are to be understood as applying to the total production cross section times branching ratio of H+ and H− combined. Using the binned log-likelihood described in section 6, all exclusion limits are set by rejecting the signal hypothesis at the 95% confidence level (CL) using the CLs procedure []. These limits are based on the asymptotic distribution of the test statistic []. The exclusion limits are shown in figure a for the low-mass search and in figure b for the high-mass search. The expected and observed limits agree within the uncertainties over the whole investigated mass range. The limits are in the range 0.23% to 1.3% for the low-mass search. For the high-mass search, they range from 0.76 pb to 4.5 fb in the mass range 180 GeV ≤ m_H± ≤ 1000 GeV.
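For reference, the CLs criterion combines the p-values of the signal+background and background-only hypotheses:

```latex
\mathrm{CL}_s \;=\; \frac{\mathrm{CL}_{s+b}}{\mathrm{CL}_b} \;=\; \frac{p_{s+b}}{1 - p_b},
```

and a signal hypothesis is excluded at the 95% CL when CLs ≤ 0.05. Dividing by 1 − p_b protects against excluding signals to which the analysis has little sensitivity.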
The limits on B(t→bH±) × B(H±→τν) for the low-mass search and on σ × B(H±→τν) for the high-mass search are also interpreted in the context of different scenarios of the MSSM []. In the m_h^max scenario, the mass of the light CP-even Higgs boson (h) is maximised. If the Higgs boson discovered at the LHC is interpreted as the h, only a small region of the parameter space of this scenario is compatible with the observation. The m_h^mod+ and m_h^mod− scenarios are modifications of the m_h^max scenario: the discovered Higgs boson is again interpreted as the h, but the requirement that m_h be maximal is dropped. This is achieved by reducing the amount of mixing in the top-squark sector compared to the m_h^max scenario, so that a larger region of the parameter space is compatible with the observation. The m_h^mod+ and m_h^mod− scenarios differ only in the sign of one parameter.
Interpretations of the 95% CL limits in the m_h^max, m_h^mod+ and m_h^mod− scenarios are shown in figure f. In the low-mass range, almost all values of tan β are excluded in the different scenarios, except for a small region around GeV. Values of tan β larger than are excluded in the mass range 200 GeV < m_H± < 250 GeV. Several additional scenarios, not shown here, were also considered. In the light top squark, light stau and tauphobic scenarios, no significant exclusion is achieved in the high-mass search; in the low-mass search, the excluded regions in these scenarios are similar to those shown in figure f. The limits for the low-mass search are also interpreted in the low-M_H scenario, where GeV. Instead of excluding areas in the m_H±–tan β plane, limits are interpreted in the μ–tan β plane, for 300 GeV < μ < 3500 GeV and , where μ is the higgsino mass parameter. This model is excluded everywhere it is tested and well-defined. For the interpretation of the low-mass search, the following relative theoretical uncertainties on B(t→bH±) are considered [, , ]: 5% for one-loop electroweak corrections missing from the calculations, 2% for missing two-loop QCD corrections, and about 1% (depending on tan β) for Δ_b-induced uncertainties, where Δ_b is a correction factor to the running b-quark mass []. These uncertainties are added linearly, as recommended by the LHC Higgs cross section working group []. For the interpretation of the high-mass search, separate uncertainties are included for the 4FS and 5FS calculations []. For the 5FS calculation, the following theoretical uncertainties are taken into account: scale uncertainties of approximately 10–20% that vary across the mass range, and a combined uncertainty of approximately 10–15% from the parton distribution functions, the b-quark mass and the strong coupling. For the 4FS calculation, only a scale uncertainty of approximately 30% is taken into account.
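The linear combination of the three low-mass theory uncertainties quoted above is simple arithmetic; a minimal sketch (variable names are illustrative):

```python
# Relative theoretical uncertainties on B(t -> bH+) quoted in the text,
# combined linearly (per the LHC Higgs cross section working group
# recommendation) rather than in quadrature.
electroweak = 0.05   # missing one-loop electroweak corrections
qcd_two_loop = 0.02  # missing two-loop QCD corrections
delta_b = 0.01       # Delta_b-induced uncertainty (tan-beta dependent)

linear_total = electroweak + qcd_two_loop + delta_b
print(f"{linear_total:.2%}")  # 8.00%
```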
Owing to the complications arising from the overlap and interference with off-shell production in the mass range – GeV, the MSSM interpretation is shown only for masses of at least 200 GeV.
Charged Higgs bosons decaying via H± → τ±ν are searched for in \ttbar events, in the decay mode t → bH± (low-mass search), and in H± production in association with a top quark (high-mass search). The analysis makes use of a total of 19.5 fb⁻¹ of pp collision data at √s = 8 TeV, recorded in 2012 with the ATLAS detector at the LHC. The final state considered in this search is characterised by the presence of a hadronic τ decay, missing transverse momentum, b-tagged jets, a hadronically decaying W boson, as well as the absence of any electrons or muons. Data-driven methods are employed to estimate the dominant background contributions. The data are found to be in agreement with the SM predictions. Upper limits at the 95% confidence level are set on the product of branching ratios B(t→bH±) × B(H±→τν), between 0.23% and 1.3% for the mass range 80–160 GeV, a major improvement over the previously published limits of 0.8–3.4% for the mass range 90–160 GeV [, ]. For the mass range 180–1000 GeV, the first ATLAS upper limits are set on the production cross section times branching ratio, σ × B(H±→τν), between 0.76 pb and 4.5 fb. Interpreted in the context of the m_h^max, m_h^mod+ and m_h^mod− scenarios of the MSSM, the entire parameter space with tan β > 1 is excluded for the low-mass range 90 GeV < m_H± < 160 GeV, and almost all of the parameter space with