
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)

CERN-PH-EP-2014-150 LHCb-PAPER-2014-026 July 8, 2014

Measurement of CP violation in B_s^0 → φφ decays

The LHCb collaboration (authors are listed at the end of the paper).

A measurement of the decay time dependent CP-violating asymmetry in B_s^0 → φφ decays is presented, along with measurements of the T-odd triple-product asymmetries. In this decay channel, the CP-violating weak phase arises from the interference between B_s^0–B̄_s^0 mixing and the loop-induced decay amplitude. Using a sample of proton-proton collision data corresponding to an integrated luminosity of 3.0 fb^-1 collected with the LHCb detector, a signal yield of approximately 4000 B_s^0 → φφ decays is obtained. The CP-violating phase and the triple-product asymmetries are measured, and the results are consistent with the hypothesis of CP conservation.

© CERN on behalf of the LHCb collaboration, license CC-BY-4.0.

 

1 Introduction

The B_s^0 → φφ decay is forbidden at tree level in the Standard Model (SM) and proceeds predominantly via a gluonic loop (penguin) process. Hence, this channel provides an excellent probe of new heavy particles entering the penguin quantum loops [1, 2, 3]. In the SM, CP violation is governed by a single phase in the Cabibbo-Kobayashi-Maskawa quark mixing matrix [4, 5]. Interference between the B_s^0–B̄_s^0 oscillation and decay amplitudes leads to a CP asymmetry in the decay time distributions of B_s^0 and B̄_s^0 mesons, which is characterised by a CP-violating weak phase. Since the decay amplitudes differ, the actual value of the weak phase depends on the decay channel. For B_s^0 → J/ψK+K− and B_s^0 → J/ψπ+π− decays, which proceed via b → cc̄s transitions, the SM prediction of the weak phase is given in Ref. [6]. The LHCb collaboration has measured the weak phase in the combination of B_s^0 → J/ψK+K− and B_s^0 → J/ψπ+π− decays [7]. A recent analysis of B_s^0 → J/ψπ+π− decays using the full LHCb Run I dataset of 3.0 fb^-1 has also measured the CP-violating phase [8]. These measurements are consistent with the SM and place stringent constraints on CP violation in B_s^0–B̄_s^0 oscillations [9]. The CP-violating phase, φ_s, in the B_s^0 → φφ decay is expected to be small in the SM; calculations using quantum chromodynamics factorisation (QCDf) provide an upper limit of 0.02 rad on its magnitude [1, 2, 3].

Triple-product asymmetries are formed from T-odd combinations of the momenta of the final-state particles. Such asymmetries provide a method of measuring CP violation in a decay time integrated way that complements the decay time dependent measurement [10]. These asymmetries are calculated from functions of the angular observables and are expected to be close to zero in the SM [11]. Particle-antiparticle oscillations dilute non-zero triple-product asymmetries that arise from CP-conserving strong phases, known as “fake” triple-product asymmetries, by a factor that depends on the decay width and the oscillation frequency of the neutral meson system in question. Since the oscillation frequency greatly exceeds the decay width in the B_s^0 system, “fake” triple-product asymmetries are strongly suppressed, allowing “true” CP-violating triple-product asymmetries to be determined without the need to measure the initial flavour of the B_s^0 meson [10].

Theoretical calculations can be tested further with measurements of the polarisation fractions, where the longitudinal and transverse polarisation fractions are denoted by f_L and f_T, respectively. In the heavy quark limit, longitudinal polarisation is expected to dominate due to the vector–axial-vector structure of charged weak currents [2]. This is found to be the case for tree-level B meson decays to two vector mesons measured at the B factories [12, 13, 14, 15, 16, 17]. However, the dynamics of penguin transitions are more complicated. In the context of QCDf, a prediction for f_L in the B_s^0 → φφ decay is available [3].

In this paper, a measurement of the CP-violating phase in B_s^0 → φφ decays is presented, along with a measurement of the T-odd triple-product asymmetries. The results are based on pp collision data corresponding to integrated luminosities of 1.0 fb^-1 and 2.0 fb^-1 collected by the LHCb experiment at centre-of-mass energies of 7 TeV in 2011 and 8 TeV in 2012, respectively. Previous measurements of the triple-product asymmetries from the LHCb [18] and CDF [19] collaborations, together with the first measurement of the CP-violating phase in B_s^0 → φφ decays [20], have shown no evidence of deviations from the SM. The decay time dependent measurement improves on the previous analysis [20] through the use of a more efficient candidate selection and improved knowledge of the B_s^0 flavour at production, in addition to a data-driven determination of the efficiency as a function of decay time.

The results presented in this paper supersede the previous measurements of the CP-violating phase [20] and T-odd triple-product asymmetries [18], made using 1.0 fb^-1 of data collected at a centre-of-mass energy of 7 TeV.

2 Detector description

The LHCb detector [21] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes [22] placed downstream. The combined tracking system provides a momentum measurement with relative uncertainty that varies from 0.4% at low momentum to 0.6% at 100 GeV/c, and an impact parameter resolution of 20 μm for tracks with large transverse momentum, p_T. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov (RICH) detectors [23]. Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. The trigger [24] consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. The hardware trigger selects candidates by requiring large transverse energy deposits in the calorimeters from at least one of the final-state particles. In the software trigger, candidates are selected either by identifying events containing a pair of oppositely charged kaons with an invariant mass close to that of the φ meson or by using a topological b-hadron trigger. The topological software trigger requires a two-, three- or four-track secondary vertex with a large sum of the p_T of the charged particles and a significant displacement from the primary interaction vertices (PVs). At least one charged particle should have large p_T and a χ²_IP with respect to any primary interaction greater than 16, where χ²_IP is defined as the difference in the vertex-fit χ² of a given PV reconstructed with and without the considered track. A multivariate algorithm [25] is used for the identification of secondary vertices consistent with the decay of a b hadron.

In the simulation, pp collisions are generated using Pythia [26, 27] with a specific LHCb configuration [28]. Decays of hadronic particles are described by EvtGen [29], in which final-state radiation is generated using Photos [30]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [31, 32] as described in Ref. [33].

3 Selection and mass model

Events passing the trigger are initially required to satisfy loose requirements on the quality of the four-kaon vertex fit, the fit quality of each track, the transverse momentum of each particle, and the product of the transverse momenta of the two φ candidates. In addition, the reconstructed mass of each φ meson candidate is required to be within 25 MeV/c² of the known φ mass [34].

In order to further separate the signal from the background, a boosted decision tree (BDT) is used [35, 36]. To train the BDT, simulated B_s^0 → φφ events passing the same loose requirements as the data are used as signal, whereas events in the four-kaon invariant mass sidebands of the data are used as background. The signal mass region is defined to be less than 120 MeV/c² from the known B_s^0 mass [34]. The invariant mass sidebands are defined as regions of the four-kaon invariant mass further from the known B_s^0 mass. Separate BDTs are trained for the data samples collected in 2011 and 2012, due to the different data-taking conditions in the two years. Variables used in the BDT consist of the minimum and maximum values of kinematic quantities of the kaons and of the two φ candidates, kinematic quantities of the B_s^0 candidate, the minimum probability of the kaon mass hypothesis using information from the RICH detectors, the quality of the four-kaon vertex fit, and an additional property of the B_s^0 candidate. The BDT also includes kaon isolation asymmetries. The isolation variable is calculated as the scalar sum of the p_T of charged particles inside a cone around the signal kaon, defined in terms of the differences in azimuthal angle and pseudorapidity, not including the signal kaon from the B_s^0 → φφ decay. The asymmetry is then calculated as the difference between the isolation variable and the p_T of the signal kaon, divided by their sum. After the BDT is trained, the optimum requirement on each BDT output is chosen to maximise S/√(S+B), where S (B) represents the expected number of signal (background) events in the signal region of the data sample.
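As an illustration of the optimisation described above, the sketch below scans a requirement on the BDT output and evaluates the figure of merit S/√(S+B); the score arrays and the expected yields are hypothetical placeholders rather than quantities from the analysis.

```python
# Sketch (not the analysis code): scan a BDT threshold to maximise S/sqrt(S+B).
# 'sig_scores' and 'bkg_scores' are hypothetical arrays of BDT outputs for
# simulated signal and sideband background; 'n_sig_exp' and 'n_bkg_exp' are
# assumed expected yields in the signal region before the BDT requirement.
import numpy as np

def best_bdt_cut(sig_scores, bkg_scores, n_sig_exp, n_bkg_exp):
    cuts = np.linspace(min(sig_scores.min(), bkg_scores.min()),
                       max(sig_scores.max(), bkg_scores.max()), 200)
    best_cut, best_fom = None, -np.inf
    for c in cuts:
        eff_s = np.mean(sig_scores > c)          # signal efficiency of the cut
        eff_b = np.mean(bkg_scores > c)          # background retention
        s, b = n_sig_exp * eff_s, n_bkg_exp * eff_b
        if s + b == 0:
            continue
        fom = s / np.sqrt(s + b)                 # figure of merit S/sqrt(S+B)
        if fom > best_fom:
            best_cut, best_fom = c, fom
    return best_cut, best_fom
```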

The presence of peaking backgrounds is extensively studied. The decay modes considered include several b-hadron decays with similar final states, of which only B^0 → φK*(892)^0 and Λ_b^0 → φpK^- are found to contribute, as a result of the misidentification of a pion or a proton as a kaon, respectively. The number of B^0 → φK*(892)^0 events present in the data sample is determined by scaling the number of events found in data with a dedicated selection, using the relative efficiency between the two selections obtained from simulated events; this yields estimates of the contamination in the 2011 and 2012 datasets. The Λ_b^0 → φpK^- contribution is estimated directly from data by changing the mass hypothesis of the final-state particle most likely to be a proton according to RICH detector information, giving estimates for the 2011 and 2012 datasets.

In order to determine the number of B_s^0 → φφ events in the final data sample, the four-kaon invariant mass distributions are fitted with the signal described by a double Gaussian model and the combinatorial background described by an exponential function. The peaking background contributions are fixed to the shapes found in simulated events, consisting of the sum of a Crystal Ball function [37] and a Gaussian function to describe one reflection, and a Crystal Ball function to describe the other; their yields are fixed to the values stated above. Once the BDT requirements are imposed, an unbinned extended maximum likelihood fit to the four-kaon invariant mass determines the signal and combinatorial background yields separately for the 2011 and 2012 datasets. The fits to the four-kaon invariant mass are shown in Fig. 1.
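The following is an illustrative sketch of an extended unbinned maximum-likelihood fit of this type, with only a double-Gaussian signal and an exponential combinatorial background (the fixed peaking-background components are omitted for brevity); the fit range and starting values are placeholders, not the values used in the paper.

```python
# Illustrative extended unbinned maximum-likelihood mass fit:
# double-Gaussian signal plus exponential combinatorial background.
import numpy as np
from scipy.optimize import minimize

M_LO, M_HI = 5150.0, 5600.0   # assumed fit range in MeV/c^2 (placeholder)

def gauss(m, mu, sigma):
    return np.exp(-0.5 * ((m - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def sig_pdf(m, mu, s1, s2, frac):
    return frac * gauss(m, mu, s1) + (1 - frac) * gauss(m, mu, s2)

def bkg_pdf(m, slope):
    norm = (np.exp(slope * M_HI) - np.exp(slope * M_LO)) / slope
    return np.exp(slope * m) / norm

def neg_ext_llh(pars, m):
    n_sig, n_bkg, mu, s1, s2, frac, slope = pars
    dens = n_sig * sig_pdf(m, mu, s1, s2, frac) + n_bkg * bkg_pdf(m, slope)
    return (n_sig + n_bkg) - np.sum(np.log(dens))   # extended likelihood

# 'masses' is a hypothetical array of four-kaon invariant masses in MeV/c^2.
def fit_mass(masses):
    start = [0.8 * len(masses), 0.2 * len(masses), 5366.0, 15.0, 30.0, 0.7, -1e-3]
    return minimize(neg_ext_llh, start, args=(masses,), method="Nelder-Mead")
```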

Figure 1: Four-kaon invariant mass distributions for the (left) 2011 and (right) 2012 datasets. The data points are represented by the black markers. Superimposed are the results of the total fit (red solid line), together with the signal (red long-dashed), the two peaking background reflections (blue dotted and green short-dashed), and the combinatorial background (purple dotted) fit components.

The use of the four-kaon invariant mass to assign signal weights allows for a decay time dependent fit to be performed with only the signal distribution explicitly described. The method for assigning the signal weights is described in greater detail in Sec. 8.1.

4 Phenomenology

The B_s^0 → φφ decay is composed of a mixture of CP eigenstates, which are disentangled by means of an angular analysis in the helicity basis, defined in Fig. 2.

Figure 2: Decay angles for the B_s^0 → φφ decay, where the kaon momentum in the φ rest frame and the parent φ momentum in the rest frame of the B_s^0 meson span the two φ meson decay planes, θ_1 (θ_2) is the angle between the K+ track momentum in the first (second) φ meson rest frame and the parent φ momentum in the B_s^0 rest frame, Φ is the angle between the two φ meson decay planes, and n̂_i is the unit vector normal to the decay plane of the φ_i meson.

4.1 Decay time dependent model

The B_s^0 → φφ decay is a P → VV decay, where P denotes a pseudoscalar and V a vector meson. However, due to the proximity of the f_0(980) resonance to that of the φ(1020), there are also contributions from S-wave and double S-wave processes, in which one or both K+K− pairs originate from a spin-0 meson or are a pair of non-resonant kaons. Thus the total amplitude is a coherent sum of P-wave, S-wave, and double S-wave processes, which is accounted for in the fit by making use of the different functions of the helicity angles associated with these terms. The choice of which φ meson is used to determine θ_1 and which is used to determine θ_2 is randomised. The total amplitude containing the P-wave, S-wave, and double S-wave components as a function of decay time, t, can be written as [38]

(1)

where A_0, A_∥, and A_⊥ are the CP-even longitudinal, CP-even parallel, and CP-odd perpendicular polarisation amplitudes of the P-wave decay. The S-wave and double S-wave processes are described by the A_S and A_SS amplitudes, respectively. The differential decay rate is obtained from the square of the total amplitude, leading to the 15 terms [38]

(2)

Each decay time dependent term can be written as

(3)

where the coefficients are given in Table 1, ΔΓ_s is the decay width difference between the light and heavy mass eigenstates, Γ_s is the average decay width, and Δm_s is the B_s^0–B̄_s^0 oscillation frequency. The differential decay rate for a B̄_s^0 meson produced at t = 0 is obtained by changing the sign of the coefficients multiplying the cos(Δm_s t) and sin(Δm_s t) terms.
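For orientation, a standard form for such a decay time dependent term, consistent with the description above, is sketched below; the coefficient labels a_i, b_i, c_i and d_i are illustrative and need not match the notation of Table 1.

```latex
% Generic decay time dependence of each term (illustrative coefficient labels):
K_i(t) \propto e^{-\Gamma_s t}\Big[ a_i \cosh\frac{\Delta\Gamma_s t}{2}
      + b_i \sinh\frac{\Delta\Gamma_s t}{2}
      + c_i \cos(\Delta m_s t) + d_i \sin(\Delta m_s t) \Big]
```

In this form, the rate for a B̄_s^0 meson produced at t = 0 is obtained by flipping the signs of the c_i and d_i coefficients, as stated above.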

Table 1: Coefficients of the decay time dependent terms and angular functions used in Eq. 2. Amplitudes are defined at t = 0.

The three CP-violating terms introduced in Table 1 are defined as

(4)
(5)
(6)

where φ_s measures CP violation in the interference between the direct decay amplitude and that via mixing, q and p are the complex parameters relating the flavour and mass eigenstates, and A (Ā) is the decay amplitude (CP-conjugate decay amplitude). Under the assumption that |q/p| = 1, |λ| measures direct CP violation. The CP violation parameters are assumed to be helicity independent. The association of φ_s and |λ| with the S-wave and double S-wave terms implies that these consist solely of contributions with the same flavour content as the φ meson, i.e. an ss̄ resonance.

In Table 1, δ_S and δ_SS are the strong phases of the S-wave and double S-wave processes, respectively. The P-wave strong phases, δ_1 and δ_2, are defined as differences of the phases of the polarisation amplitudes.

4.2 Triple-product asymmetries

Scalar triple products of three momentum or spin vectors are odd under time reversal, T. Non-zero asymmetries in these observables can be due either to a CP-violating weak phase or to CP-conserving strong phases combined with final-state interactions. Four-body final states give rise to three independent momentum vectors in the rest frame of the decaying B_s^0 meson. For a detailed review of the phenomenology, the reader is referred to Ref. [10].

The two independent terms in the decay time dependent decay rate that contribute to a T-odd asymmetry are those arising from the interference of the CP-odd amplitude with the two CP-even amplitudes, defined via Eq. 3 and Table 1. The triple products that allow access to these terms are

(7)
(8)

where n̂_i (i = 1, 2) is a unit vector perpendicular to the decay plane of the φ_i meson and p̂_1 is a unit vector in the direction of the φ_1 momentum in the B_s^0 rest frame, as defined in Fig. 2. This provides a method of probing CP violation without the need to measure the decay time or the initial flavour of the B_s^0 meson. It should be noted that while the observation of non-zero triple-product asymmetries implies CP violation or final-state interactions (in the case of B_s^0 meson decays), measurements of triple-product asymmetries consistent with zero do not rule out the presence of CP-violating effects, as strong phase differences can cause suppression [10].

In the B_s^0 → φφ decay, the two triple products are defined as U ≡ sin Φ cos Φ and V ≡ ±sin Φ, where the positive sign is taken if cos θ_1 cos θ_2 ≥ 0 and the negative sign otherwise.

The T-odd asymmetry corresponding to the U observable, A_U, is defined as the normalised difference between the number of decays with positive and negative values of U,

A_U = [N(U > 0) − N(U < 0)] / [N(U > 0) + N(U < 0)].   (9)

Similarly, A_V is defined as

A_V = [N(V > 0) − N(V < 0)] / [N(V > 0) + N(V < 0)].   (10)

Extraction of the triple-product asymmetries is then reduced to a simple counting exercise.
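A minimal sketch of this counting measurement is given below, using the triple-product definitions quoted above; the input arrays of helicity angles are hypothetical and the uncertainty is the simple binomial estimate.

```python
# Sketch of the counting measurement: build the triple-product observables from
# the helicity angles and form the asymmetries with binomial uncertainties.
import numpy as np

def triple_products(cos_theta1, cos_theta2, phi):
    u = np.sin(phi) * np.cos(phi)
    v = np.where(cos_theta1 * cos_theta2 >= 0, np.sin(phi), -np.sin(phi))
    return u, v

def counting_asymmetry(obs):
    n_pos = np.sum(obs > 0)
    n_neg = np.sum(obs < 0)
    n_tot = n_pos + n_neg
    asym = (n_pos - n_neg) / n_tot
    err = np.sqrt((1.0 - asym ** 2) / n_tot)   # binomial uncertainty
    return asym, err
```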

5 Decay time resolution

The sensitivity to φ_s is affected by the precision of the measured decay time. In order to resolve the fast B_s^0–B̄_s^0 oscillation period of approximately 350 fs, a decay time resolution much smaller than this is required. To account for the decay time resolution, all decay time dependent terms are convolved with a Gaussian function whose width is estimated for each event from the uncertainty obtained in the vertex and kinematic fit. In order to apply an event-dependent resolution model in the fit, the estimated per-event decay time uncertainty must be calibrated. This is done using simulated events divided into bins of the estimated uncertainty. For each bin, a Gaussian function is fitted to the difference between the reconstructed and true decay times to determine the resolution. A first-order polynomial is then fitted to the distribution of the fitted resolution versus the estimated uncertainty, and the calibrated per-event decay time uncertainty used in the decay time dependent fit is calculated from this linear relation. Gaussian constraints are used to account for the uncertainties on the calibration parameters in the decay time dependent fit. Cross-checks, consisting of varying an effective single Gaussian resolution far beyond the differences observed between data and simulated events, yield negligible changes to the results, hence no systematic uncertainty is assigned. The results are verified to be largely insensitive to the details of the resolution model, as supported by tests on data and observed in similar measurements [7].

The effective single Gaussian resolution is determined from simulated datasets separately for the 2011 and 2012 data. Differences between the resolutions of the 2011 and 2012 datasets are expected due to the independent selection requirements.
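A minimal sketch of the calibration procedure described above, applied to hypothetical arrays of per-event uncertainty estimates and reconstructed and true decay times from simulation:

```python
# Sketch of the per-event resolution calibration: in bins of the estimated
# uncertainty delta_t, determine the width of (t_rec - t_true) and fit a
# first-order polynomial sigma(delta_t).  Array names are illustrative.
import numpy as np
from scipy.stats import norm

def calibrate_resolution(delta_t, t_rec, t_true, n_bins=10):
    edges = np.quantile(delta_t, np.linspace(0, 1, n_bins + 1))
    centres, widths = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (delta_t >= lo) & (delta_t < hi)
        if np.count_nonzero(in_bin) < 10:
            continue
        # Gaussian fit to the decay time residuals in this bin
        mu, sigma = norm.fit(t_rec[in_bin] - t_true[in_bin])
        centres.append(0.5 * (lo + hi))
        widths.append(sigma)
    # Linear calibration: sigma_t = c0 + c1 * delta_t
    c1, c0 = np.polyfit(centres, widths, 1)
    return c0, c1
```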

6 Acceptances

The four observables used to analyse B_s^0 → φφ candidates are the decay time and the three helicity angles, which requires a good understanding of the efficiencies in these variables. It is assumed that the decay time and angular acceptances factorise.

6.1 Angular acceptance

The geometry of the LHCb detector and the momentum requirements imposed on the final-state particles introduce efficiencies that vary as functions of the helicity angles. Simulated events with the same selection criteria as those applied to data events are used to determine this efficiency correction. Efficiencies as a function of the three helicity angles are shown in Fig. 3.

Figure 3: Angular acceptance found from simulated events, shown as a function of each helicity angle after integrating over the other two: (top-left) cos θ_1, (top-right) cos θ_2, and (bottom) Φ.

Acceptance functions are included in the decay time dependent fit through the 15 normalisation integrals ∫ ε(Ω) f_k(Ω) dΩ, where f_k(Ω) are the angular functions given in Table 1 and ε(Ω) is the efficiency as a function of the set of helicity angles, Ω. The inclusion of these integrals in the normalisation of the probability density function (PDF) is sufficient to describe the angular acceptance, since the per-event acceptance factors appear as constants in the log-likelihood, whose construction is described in detail in Sec. 8.1, and therefore do not affect the fitted parameters. The method for the calculation of the integrals is described in detail in Ref. [39]. The integrals are calculated correcting for the differences between data and simulated events. This includes differences in the BDT training variables that can affect the acceptance corrections through correlations with the helicity angles.
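A minimal sketch of how such normalisation integrals can be estimated is shown below, assuming simulated events generated uniformly in the helicity angles (the data-driven re-weighting mentioned above is omitted); the angular functions passed in are illustrative.

```python
# Sketch: with simulated events generated uniformly in the helicity angles, the
# accepted sample is distributed proportionally to the efficiency, so each
# integral of efficiency times angular function can be estimated, up to a
# common factor that cancels in the PDF normalisation, as the average of the
# angular function over the accepted events.
import numpy as np

def acceptance_integrals(cos_t1, cos_t2, phi, angular_funcs):
    # 'angular_funcs' is a hypothetical list of the 15 functions f_k(cos_t1, cos_t2, phi)
    return np.array([np.mean(f(cos_t1, cos_t2, phi)) for f in angular_funcs])

# Example with one simple, purely illustrative angular function:
# xi = acceptance_integrals(c1, c2, ph, [lambda a, b, p: a**2 * b**2])
```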

The fit to determine the triple-product asymmetries assumes that the U and V observables are symmetric under the acceptance corrections. Simulated events are used to assign a systematic uncertainty related to this assumption.

6.2 Decay time acceptance

The impact parameter requirements on the final-state particles efficiently suppress the background from numerous pions and kaons originating from the PV, but introduce a decay time dependence in the selection efficiency.

The efficiency as a function of the decay time is determined from a control channel in data, with an upper limit of 1 applied to the decay time to ensure topological similarity to the B_s^0 → φφ decay. After the same decay time-biasing selections are applied to the control decay as are used for the B_s^0 → φφ decay, events are re-weighted according to the minimum track transverse momentum to ensure the closest agreement between the decay time acceptances of the control-channel and B_s^0 → φφ simulated events. The denominator used to calculate the decay time acceptance in data is taken from a simulated dataset generated with the lifetime measured by the LHCb experiment [40].

For the decay time dependent fit, the efficiency as a function of the decay time is modelled as a histogram, with systematic uncertainties arising from the differences between the control-channel and B_s^0 → φφ simulated events. Figure 4 shows the efficiency as a function of decay time calculated using 2011 and 2012 data, together with the comparison between control-channel and B_s^0 → φφ simulated events.

Figure 4: Decay time acceptance (left) calculated using control-channel data events, and (right) comparing control-channel and B_s^0 → φφ simulation, where events are re-weighted to match the distribution of the minimum transverse momentum of the final-state particles in B_s^0 → φφ decays.

In the fit to determine the triple-product asymmetries, the decay time acceptance is treated only as a systematic uncertainty, which is based on the acceptance found from data events.
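A minimal sketch of a histogram-based decay time acceptance of the kind described in this section, built from hypothetical arrays of control-channel decay times in data and in a simulated sample generated with the known lifetime:

```python
# Sketch of a histogram-based decay time acceptance: the ratio of the decay
# time distribution in control-channel data to that of a simulated sample
# generated with the known lifetime.  Bin edges and inputs are illustrative.
import numpy as np

def decay_time_acceptance(t_data, t_sim, edges):
    n_data, _ = np.histogram(t_data, bins=edges)
    n_sim, _ = np.histogram(t_sim, bins=edges)
    ratio = np.where(n_sim > 0, n_data / np.maximum(n_sim, 1), 0.0)
    # Overall normalisation is arbitrary; only the shape matters in the fit.
    return ratio / ratio.max()
```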

7 Flavour tagging

To maximise the sensitivity to φ_s, the initial flavour of the B_s^0 meson must be determined, since the terms in the differential decay rate with the largest sensitivity to φ_s require the identification (tagging) of the B_s^0 flavour at production. At LHCb, tagging is achieved through the use of different algorithms described in Refs. [7, 41]. This analysis uses both the opposite side (OS) and same side kaon (SSK) flavour taggers.

The OS flavour tagging algorithm [42] makes use of the other b quark produced in association with the signal b quark. In this analysis, the predicted probability of an incorrect flavour assignment is determined for each event by a neural network that is calibrated using flavour-specific B decays reconstructed in data as control modes. Details of the calibration procedure can be found in Ref. [7].

When a signal B_s^0 meson is formed, there is an associated s̄ quark produced in the first branches of the fragmentation, which in about 50% of cases forms a charged kaon that is likely to originate close to the B_s^0 meson production point. The kaon charge therefore allows the identification of the flavour of the signal B_s^0 meson. This principle is exploited by the SSK flavour tagging algorithm [41]. The SSK tagger is calibrated with the B_s^0 → D_s^-π+ decay mode. A neural network is used to select the fragmentation particles, improving the flavour tagging power with respect to that quoted in the previous decay time dependent measurement [20, 43].

The flavour tagging power is defined as εD², where ε is the flavour tagging efficiency and D = 1 − 2ω is the dilution, with ω the mistag probability. Table 2 shows the tagging power for events tagged by only one of the algorithms and for those tagged by both, estimated separately from the 2011 and 2012 data. Uncertainties due to the calibration of the flavour tagging algorithms are applied as Gaussian constraints in the decay time dependent fit. The dependence of the flavour tagging performance on the initial flavour of the B_s^0 meson is accounted for during fitting.

Dataset
2011 OS
2012 OS
2011 SSK
2012 SSK
2011 Both
2012 Both
Table 2: Tagging efficiency (ε), effective dilution (D), and tagging power (εD²), as estimated from the data for events containing information from the OS algorithms only, from the SSK algorithm only, and from both. Quoted uncertainties include both statistical and systematic contributions.
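A minimal sketch of the tagging figures of merit defined above, computed from hypothetical per-event calibrated mistag probabilities:

```python
# Sketch of the flavour tagging figures of merit: efficiency, effective
# dilution and tagging power eps*D^2, from calibrated mistag probabilities.
import numpy as np

def tagging_power(eta, tagged):
    """eta: array of calibrated mistag probabilities; tagged: boolean mask."""
    eff = np.mean(tagged)                         # tagging efficiency
    dilution = 1.0 - 2.0 * eta[tagged]            # per-event dilution D = 1 - 2*eta
    eff_dilution = np.sqrt(np.mean(dilution ** 2))
    power = eff * np.mean(dilution ** 2)          # tagging power eps*<D^2>
    return eff, eff_dilution, power
```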

8 Decay time dependent measurement

8.1 Likelihood

The parameters of interest are the CP violation parameters (φ_s and |λ|), the polarisation amplitudes, and the strong phases, as defined in Sec. 4.1. The P-wave amplitudes are defined such that their squared magnitudes sum to unity, hence only two of them are free parameters.

Parameter estimation is achieved by minimising the negative log-likelihood. The likelihood is weighted using the sPlot method [44, 45], with the signal weight of an event, W_e, calculated as

W_e = Σ_j V_{sj} P_j(m_e) / Σ_k N_k P_k(m_e),   (11)

where the index j runs over the components of the fit to the four-kaon invariant mass, with PDFs P_j evaluated at the four-kaon mass m_e of event e, associated yields N_j, and V_{sj} the covariance between the signal yield and the yield associated with the j-th fit component. The log-likelihood then takes the form

ln L = α Σ_e W_e ln S(t_e, Ω_e),   (12)

where α = Σ_e W_e / Σ_e W_e² is used to account for the effect of the weights in the determination of the statistical uncertainties, and S(t_e, Ω_e) is the signal model of Eq. 2, accounting also for the effects of the decay time and angular acceptance, in addition to the probability of an incorrect flavour tag. Explicitly, this can be written as

(13)

where the normalisation integrals describing the angular acceptance are those of Sec. 6.1, and

(14)

where the calibrated per-event probability of an incorrect flavour assignment and the Gaussian decay time resolution function enter explicitly. In Eq. 14, the flavour tag takes the value +1 (−1) for a B_s^0 (B̄_s^0) meson at production, or 0 if no flavour tagging information exists. The 2011 and 2012 data samples are assigned independent signal weights, decay time and angular acceptances, in addition to separate Gaussian constraints on the decay time resolution parameters defined in Sec. 5. The value of the B_s^0–B̄_s^0 oscillation frequency, Δm_s, is constrained to the LHCb measured value [46]. The values of the decay width, Γ_s, and decay width difference, ΔΓ_s, are constrained to the LHCb measured values [7]. The Gaussian constraints applied to the Γ_s and ΔΓ_s parameters use the combination of the measured values from B_s^0 → J/ψK+K− and B_s^0 → J/ψπ+π− decays; they therefore take into account the correlation between the statistical uncertainties [7]. The systematic uncertainties are taken to be uncorrelated between the two decay modes.
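A minimal sketch of the weighting machinery of Eqs. 11 and 12 is given below: sWeights are formed from the mass-fit PDF values, yields and yield covariance, and the weighted log-likelihood is scaled by the α factor; the array names and the choice of the signal component index are illustrative.

```python
# Sketch of sWeights and the alpha-corrected weighted log-likelihood.
# 'pdfs' is an (n_events, n_components) array of mass-PDF values, 'yields'
# the fitted component yields, 'cov' the yield covariance matrix from the
# mass fit; the signal component is assumed to be index 0.
import numpy as np

def sweights(pdfs, yields, cov, signal_index=0):
    denom = pdfs @ np.asarray(yields)           # sum_k N_k P_k(m_e)
    numer = pdfs @ np.asarray(cov)[signal_index]  # sum_j V_sj P_j(m_e)
    return numer / denom

def weighted_nll(weights, signal_densities):
    # alpha corrects the statistical uncertainties for the use of weights
    alpha = np.sum(weights) / np.sum(weights ** 2)
    return -alpha * np.sum(weights * np.log(signal_densities))
```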

The events selected in this analysis lie within the selected two-kaon invariant mass range and are divided into three regions. These correspond to both K+K− candidates having invariant masses smaller than the known φ mass, one candidate having an invariant mass smaller than the known φ mass and one larger, and a third region in which both candidates have invariant masses larger than the known φ mass. Binning the data in this way makes the analysis insensitive to correction factors that must be applied to each of the S-wave and double S-wave interference terms in the differential decay rate. These factors modulate the contributions of the interference terms in the angular PDF due to the different mass line-shapes of kaon pairs originating from spin-1 and spin-0 configurations; the spin-1 line-shape is described by a Breit-Wigner function and the spin-0 line-shape is assumed to be approximately uniform. The correction factors, denoted by C_SP, are defined through the relation [7]

(15)

where the integration limits are the upper and lower edges of the given two-kaon invariant mass region. Alternative assumptions on the P-wave and S-wave line-shapes are found to have a negligible effect on the parameter estimation.

A simultaneous fit is then performed in the three invariant mass regions, with all parameters shared except for the fractions and strong phases associated with the S-wave and double S-wave components, which are allowed to vary independently in each region. The correction factors are calculated as described in Ref. [7]; the value used for each region is found to be 0.69.
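A simplified numerical sketch of how an interference correction factor of this type can be evaluated is shown below, assuming a Breit-Wigner spin-1 line-shape and a uniform spin-0 line-shape, each normalised to unit integral of its squared magnitude over the bin; the exact definition used in the analysis follows Ref. [7], and the bin edges in the example are illustrative.

```python
# Simplified sketch of a C_SP-type correction factor: overlap of a spin-1
# (Breit-Wigner) and a uniform spin-0 two-kaon line-shape over one mass bin.
# The amplitude form, mass and width below are illustrative assumptions.
import numpy as np

PHI_MASS, PHI_WIDTH = 1019.46, 4.27   # MeV/c^2, approximate phi(1020) values

def c_sp(m_lo, m_hi, n_points=10000):
    m = np.linspace(m_lo, m_hi, n_points)
    dm = m[1] - m[0]
    bw = 1.0 / (PHI_MASS**2 - m**2 - 1j * PHI_MASS * PHI_WIDTH)   # spin-1 amplitude
    bw /= np.sqrt(np.sum(np.abs(bw)**2) * dm)                     # normalise |g|^2 to 1
    flat = np.ones_like(m) / np.sqrt(m_hi - m_lo)                 # uniform spin-0 shape
    overlap = np.sum(np.conj(bw) * flat) * dm
    return np.abs(overlap)                                         # magnitude of the overlap

# e.g. c_sp(1008.0, 1020.0) for a bin below the known phi mass (illustrative edges)
```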

8.2 Results

The results of the fit to the parameters of interest are given in Table 3.

Parameter Best fit value
Table 3: Results of the decay time dependent fit.

The S-wave and double S-wave parameter estimates for the three regions defined in Sec. 8.1 are given in Table 4. The fraction of S-wave is found to be consistent with zero in all three mass regions.

Table 4: S-wave and double S-wave results of the decay time dependent fit for the three regions identified in Sec. 8.1: the region with both two-kaon invariant masses smaller than the known φ mass, the region with one smaller and one larger, and the region with both two-kaon invariant masses larger than the known φ mass.

The correlation matrix is shown in Table 5. The largest correlations are found to be among the polarisation amplitudes themselves and among the CP-conserving strong phases themselves. The observed correlations have been verified with simulated datasets. Cross-checks are performed on simulated datasets generated with the same number of events as observed in data and with the same physics parameters, to ensure that the generated values are recovered with negligible biases.

1.00 –0.48 0.01 0.07 0.00 0.01 –0.04 0.01 –0.13 –0.01
1.00 –0.02 –0.14 –0.03 0.01 0.05 0.02 0.07 0.03
1.00 0.18 0.59 0.01 0.04 0.07 –0.03 –0.18
1.00 0.21 0.01 0.01 0.06 –0.03 –0.25