# Branching Ratio of the Electromagnetic Decay of the

###### Abstract

The CLAS detector was used to obtain the first ever measurement of the electromagnetic decay of the from the reaction . A real photon beam with a maximum energy of 3.8 GeV was incident on a liquid-hydrogen target, resulting in the photoproduction of the kaon and hyperon. Kinematic fitting was used to separate the reaction channel from the background processes. The fitting algorithm exploited a new method to kinematically fit neutrons in the CLAS detector, leading to the partial width measurement of keV. A U-spin symmetry test using the SU(3) flavor-multiplet representation yields predictions for the and partial widths that agree with the experimental measurements.

###### pacs:

13.40.Em, 14.20.Jn, 13.30.Ce, 13.40.Hq

Current address:] Los Alamos National Laboratory, Los Alamos, NM 87544 USA Current address:] Skobeltsyn Nuclear Physics Institute, 119899 Moscow, Russia Current address:] INFN, Laboratori Nazionali di Frascati, 00044 Frascati, Italy Current address:] INFN, Sezione di Genova, 16146 Genova, Italy Current address:] INFN, Sezione di Genova, 16146 Genova, Italy Current address:] Argonne National Laboratory, Argonne, Illinois 60439

The CLAS Collaboration

## I Introduction

The electromagnetic (EM) decay of baryons can provide considerable information on their underlying structure. This transition offers a clean probe of the wavefunctions of the initial- and final-state baryons, providing theoretical constraints and tests of the quark model. The non-relativistic quark model (NRQM) of Isgur and Karl Isgur1 ; Isgur2 predicts the electromagnetic properties of the ground-state baryons reasonably well. It has been less successful in giving accurate descriptions of the low-lying excited-state hyperons. Several other theoretical techniques have been used to calculate these transitions more accurately, including the NRQM DHK ; Koniuk , a relativized constituent quark model (RCQM) warns , a chiral constituent quark model (CQM) wagner , the MIT bag model kaxiras , the bound-state soliton model Schat , a three-flavor generalization of the Skyrme model that uses the collective approach Abada ; Haberichter , and an algebraic model of hadron structure Bijker .

Photoproduction from nucleon targets is a useful technique to cleanly generate a significant statistical sample of hyperons and to measure EM transitions to other decuplet baryons. If the EM transition form factors for decuplet baryons with strangeness are also sensitive to meson cloud effects, models attempting to make predictions of the decuplet radiative decay widths will need revisions to incorporate this effect. Comparison of data for the EM decay of decuplet hyperons, , to the present predictions of quark models provides a measure of the importance of meson cloud diagrams in the transition. Experimental results for the EM decay ratios for all charge states are desirable to obtain a complete comparison to EM decay predictions for the . Precision measurements of the and decay widths can be particularly useful in determining the degree of SU(3) symmetry breaking.

The decay width from the measurement of kellpaper ; taylor is much larger than most current theoretical predictions. This could be due to meson cloud effects, which were not included in these calculations. There is a theoretical basis for calculating these effects Lee that suggests pion cloud effects may be sizable. For example, they are predicted to contribute on the order of 40% to the magnetic dipole transition form factor, , for low . The CQM CQM indicates that the value of is directly proportional to the proton magnetic moment Lee , and measurements of for low are rationalized in the framework of the model if the experimental magnetic moment is lowered by about 25%.

With theoretical predictions for the degree to which meson cloud effects play a role, it is then possible to test flavor symmetry breaking and the degree to which it is broken. This can be achieved by measuring both the and decay widths and comparing these to predictions from flavor SU(3) relations.

Just as isospin invariance can be used to compare the and decays, U-spin invariance may be used to compare the and decays. U-spin is analogous to isospin in that it is a symmetry under the exchange of the and quarks rather than the and quarks. A value of U-spin can be assigned to each baryon based on its quark composition. The and the of the baryon decuplet have , whereas the octet baryons and have .

U-spin symmetry forbids radiative decays of specific decuplet baryons. Since the photon is a charge singlet with , this implies that

have zero amplitude in the equal-mass limit due to U-spin symmetry. This can also be understood in the context of the SU(6) wavefunctions for these baryons. The M1 transition operator between the initial and final states is written as:

$$\mathcal{M}_{M1} \;\propto\; \Big\langle B_f \Big| \sum_j \frac{e_j}{2m_j}\,\vec{\sigma}_j\cdot(\hat{k}\times\hat{\epsilon}\,) \Big| B_i \Big\rangle \tag{1}$$

Here the sum is over all constituent quarks $j$; $m_j$, $\vec{\sigma}_j$, and $e_j$ are the mass, spin vector, and charge of the quark; $\hat{k}$ is the propagation direction; and $\hat{\epsilon}$ is the polarization vector. One can also show that the same transition operator for the gives a non-zero amplitude. U-spin invariance therefore implies a large difference in the radiative decay widths of the and .

U-spin symmetry is strongly broken because the constituent mass of the strange quark, , is approximately 1.5 times greater than that of the non-strange quarks, . The magnetic moment is inversely proportional to the quark mass, and so there is no cancellation in the wavefunction as in the equal-mass case of Eq. 1. From Ref. lipkin2 , an estimate of the ratio of the EM decay rates from the ratio of the squares of the transition operators can be expressed as

resulting in a value of about 1%. This suggests that U-spin symmetry breaking for radiative decays is at the level of only a few percent. At this level, U-spin remains an effective tool, even considering the quark mass difference.

Detailed calculations of the EM decay rates from the CQM and -type expansions have been carried out by several groups quarks ; nrqm , all of which obtain decay ratios of a similar scale. In lattice QCD, the quarks have very different interactions with the photon than in the CQM, but these calculations too yield ratios (for the above relation) within a few percent lattice . This consistency makes a stronger case for the usefulness of U-spin symmetry.

There has been much theoretical interest in radiative baryon decays; however, there are only a few measurements. Recently, a measurement of the radiative decay of the was attempted by the SELEX collaboration selex , resulting in only an upper limit. A 90% confidence-level upper bound of keV was reported; however, most models predict a value of less than 4 keV, so this result has limited power to constrain theoretical estimates. More experimental measurements are necessary to provide better constraints.

A program to investigate the various electromagnetic decays is underway using data from the CEBAF Large Acceptance Spectrometer (CLAS). First, two independent analyses of the EM decay of the have been completed kellpaper ; taylor . The consistency of these results supports the notion that meson cloud effects are indeed contributing significantly. The next step, described and presented here, was to measure the electromagnetic decay, which has not been done before. The final program analysis, for the decay, will be addressed in a future CLAS publication.

In the following, a description of the experimental details and the analysis procedure for extracting the EM decay branching ratio, normalized to the strong decay, is provided. Some specifics are given about neutron detection and the development of the neutron covariance matrix required by the analysis. After the signal extraction, a U-spin symmetry test using the SU(3) flavor-multiplet representation is used to predict the and partial widths, which are then compared to the experimental results.

## II The Experiment

The present measurements were carried out with the CLAS in Hall B at the Thomas Jefferson National Accelerator Facility mecking . An electron beam of energy 4.023 GeV was used to produce a photon beam with an energy range of 1.6-3.8 GeV, as deduced by a magnetic spectrometer tagnim that “tagged” the electron with an energy resolution of . A 40-cm-long liquid hydrogen target was placed such that the center of the target was 10 cm upstream from the center of CLAS.

The CLAS detector is constructed around six superconducting coils that generate a toroidal magnetic field to momentum-analyze charged particles. The detection system consists of multiple layers of drift chambers to determine charged-particle trajectories, Cerenkov detectors for electron/pion separation, scintillation counters for flight-time measurements, and calorimeters to identify electrons and high-energy neutral particles, see Fig. 1. The Cerenkov detectors are not required for this experiment.

Each event trigger required a coincidence between the OR of the detector elements in the focal plane of the photon spectrometer and the CLAS Level 1 trigger. The Level 1 trigger required two charged particles in two different sectors of CLAS within a 150-ns coincidence time window. The approximate integrated luminosity for the CLAS run period used in this analysis was 70 pb⁻¹. Details of the experimental setup can be found in Refs. mecking ; mccracken .

## III Event Selection

Events were selected for the channel . The present Particle Data Group (PDG) branching ratios list the decay to be , and assuming isospin symmetry, this leads to a branching ratio of for the decay pdg . This channel will be used to normalize the radiative signal that comes from the channel . For both channels the topology of the decay is , where is not directly measured, such that the and are differentiated using conservation of energy and momentum. This topology leads to the final set of decay products . The charged particles can easily be detected with the use of the CLAS drift chambers and time-of-flight system. The neutron must be detected with the CLAS electromagnetic calorimeters. The analysis was done using a previously prepared data reduction (skim) that required two positively charged tracks and one negatively charged track for each event.

Cuts were applied to account for regions of CLAS where holes in the acceptance arose from problematic detector elements, as well as for regions that were not well simulated. These include tracks at extremely forward or backward angles, areas near the torus coils, and regions where the drift-chamber and scintillator-counter efficiencies were not well understood. Tracks that point near these shadow regions are less likely to be reconstructed accurately. In addition, a minimum momentum of 0.125 GeV, after energy-loss corrections, was enforced for both positively and negatively charged particles to ensure accurate drift-chamber track reconstruction.

During the initial data skim, the hit times in the start counter surrounding the target were used to find an interaction vertex time for each charged particle, which was then matched with photons identified in the tagger; there can be up to 10 candidate photons for a given event. The photon closest in time to any track was selected as the photon that caused the event. Specifically, the time of interaction was determined using the time of the electron beam bucket (the accelerator RF time) that produced the event. To correlate the interaction time with the photon production time, a timing coincidence between the tagger and the start counter was used. The RF time for the photon was then used to obtain the vertex time (photon interaction time) for the event. Using the time-of-flight (TOF) from the event vertex to the scintillator counter, the velocity was calculated for each particle. From the velocity and the particle’s measured momentum, a mass was calculated. Each track did not need a registered hit in the start counter for its mass to be calculated; only one track in the event needed a start-counter hit.
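The photon-track matching described above amounts to a nearest-in-time search over the tagged-photon candidates. Below is a minimal sketch with hypothetical inputs (lists of RF-corrected photon times and start-counter vertex times in ns); the function name and interface are illustrative and do not come from the CLAS software:

```python
def select_photon(photon_times, track_vertex_times):
    """Pick the tagged photon whose time is closest to any charged-track
    vertex time; returns (index, smallest time difference in ns)."""
    best_idx, best_diff = None, float("inf")
    for i, t_gamma in enumerate(photon_times):
        # Compare this photon candidate against every charged track.
        diff = min(abs(t_gamma - t_trk) for t_trk in track_vertex_times)
        if diff < best_diff:
            best_idx, best_diff = i, diff
    return best_idx, best_diff
```

With up to 10 candidate photons per event, the candidate minimizing the time difference to a track is taken as the event photon.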

The mass squared calculated from time-of-flight is

$$m^2 = p^2\left(\frac{1}{\beta^2} - 1\right) \tag{2}$$

where $\beta = L/t_{\rm meas}$, such that $L$ is the path length from the target to the scintillator, $t_{\rm meas}$ is the measured time-of-flight, and the speed of light is set to 1. From this initial identification, it was possible to use additional timing information to improve event selection. The measured time-of-flight and calculated time-of-flight were used for an additional constraint. The measured time-of-flight is $t_{\rm meas} = t_{\rm SC} - t_{\rm vertex}$, where $t_{\rm SC}$ is the time at which the particle strikes the time-of-flight scintillator counter. The difference $\Delta t_{\rm tof}$ is then

$$\Delta t_{\rm tof} = t_{\rm meas} - t_{\rm calc} \tag{3}$$

where $t_{\rm calc}$ is the time-of-flight calculated for an assumed mass such that

$$t_{\rm calc} = \frac{L\sqrt{p^2 + m_a^2}}{p} \tag{4}$$

where $m_a$ is the assumed mass for the particle of interest, and $p$ is the momentum magnitude. Cutting on $\Delta m^2$ or $\Delta t_{\rm tof}$ should be effectively equivalent.
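The time-of-flight relations above can be sketched numerically as follows. This is a simplified illustration (not CLAS reconstruction code), using GeV, cm, and ns, with the speed of light expressed in cm/ns:

```python
import math

C = 29.9792458  # speed of light in cm/ns

def beta_from_tof(path_cm, tof_ns):
    """Velocity in units of c from the path length and measured time-of-flight."""
    return path_cm / (C * tof_ns)

def mass_squared(p, beta):
    """Mass squared from momentum and velocity, with c = 1 (Eq. 2)."""
    return p * p * (1.0 / (beta * beta) - 1.0)

def tof_calc(path_cm, p, assumed_mass):
    """Expected time-of-flight in ns for an assumed mass hypothesis (Eq. 4)."""
    beta_a = p / math.sqrt(p * p + assumed_mass * assumed_mass)
    return path_cm / (C * beta_a)

def delta_tof(path_cm, tof_meas_ns, p, assumed_mass):
    """Measured minus calculated time-of-flight (Eq. 3)."""
    return tof_meas_ns - tof_calc(path_cm, p, assumed_mass)
```

For example, a particle with β = 0.5 and p = 1 GeV gives m² = 3 GeV², while a massless particle traversing one light-ns of path has an expected time-of-flight of exactly 1 ns.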

Using $\Delta t_{\rm tof}$ for each particle, it was possible to reject events that were not associated with the correct RF beam bucket; the buckets were separated by 2 ns. This was done by requiring $|\Delta t_{\rm tof}| <$ 1 ns for all charged particles in the initial analysis. This cut was chosen to minimize signal loss while also minimizing overlap from other beam buckets.

A $\Delta m^2$ cut was used to clean up the identification scheme, where $\Delta m^2$ is the difference between the measured and the calculated $m^2$. The good events were taken within a cut of for all pions, as shown in Fig. 2.

### III.1 Kaon identification

In the reaction of interest, , it is necessary to determine which comes from the . It is possible to check both final state ’s with the detected to study the kaon candidates in each case by using the invariant mass.

The invariant mass was selected for each pair, as shown in Fig. 3. Whichever led to the invariant mass closest to the mass of the was associated with the identification. Afterwards, a cut at GeV about the mass was used to clean up the selection. For cases where both combinations with the fell within the mass limit, the wrong could be selected. Monte Carlo simulations were used to check the frequency of this ambiguity, which was found to be for the and channels. With additional kinematic constraints, these ambiguous events were ultimately rejected.

### III.2 Neutron identification

Neutral particles are detected in CLAS as clusters in the electromagnetic calorimeter (EC) ec not associated with any reconstructed charged track from the drift chambers. The momentum reconstruction depends on the path length and TOF of the neutron. The directional components of the neutral track were found by using the vertex and the cluster position on the EC for that hit. In this experiment the information about the neutral vertex was limited to the information that could be extracted from the other charged particle vertices in the decay chain.

The EC has six triangular sectors made of alternating layers of lead and scintillator. Each scintillator layer is composed of about 10-cm-wide scintillator strips, with the strips in consecutive layers running parallel to one of the three sides of the triangle. The EC has 13 layers of scintillator strips for each of the three directions, making 39 layers in total. In each direction the EC is subdivided into an inner stack of 5 layers and an outer stack of 8 layers.

The EC reconstruction software forms a cluster by first identifying a collection of strips in each of the three views. The software requires a set of threshold conditions to be met and the strips to be contiguous. The groups of strips that pass these conditions define a peak and are organized with respect to the sum of the strip energies. The peak centroid and RMS in each of the three views are obtained, and clusters are identified as intersections of the centroids of peaks within their RMS. If a given peak contributes to multiple hits, then the energy in each hit due to that peak is calculated as being proportional to the relative sizes of the multiple hits as measured in the other views. For example, if there are multiple hits that share the same U peak, the energies in V and W are added for each of the hits, and the ratio of these summed energies determines the weight given to the U peak’s energy for each of the multiple hits. If the software thresholds for the scintillator strip, peak, and weighted hit energy are met, then the cluster position and time are recorded. The event’s EC time (or EC time-of-flight) is defined as the time between the event start time and the time of the EC cluster.
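The proportional energy-sharing rule for a peak contributing to multiple hits can be illustrated with a small sketch. The function name and inputs are hypothetical stand-ins; the real reconstruction operates on the full U, V, W strip data:

```python
def share_peak_energy(peak_energy, hit_other_view_sums):
    """Split a shared (e.g. U) peak energy among hits in proportion to each
    hit's summed energy in the other two views (e.g. V + W)."""
    total = sum(hit_other_view_sums)
    if total == 0:
        # Degenerate case: split evenly when no other-view energy is recorded.
        return [peak_energy / len(hit_other_view_sums)] * len(hit_other_view_sums)
    return [peak_energy * s / total for s in hit_other_view_sums]
```

A hit with three times the summed V+W energy of another hit thus receives three quarters of the shared U-peak energy.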

During the analysis, the strip information was used to determine whether the centroid was reconstructed using only the outer stack of the EC or both the inner and outer stacks. The centroid could be located in any one of the layers of each stack; however, the cluster reconstruction position did not contain that information, so the hit was assumed to be on the upstream face (closer to the target) of whichever stack contained the hit. With the assumed reaction vertex and the EC cluster position, the directional components in and were found, as well as the path length of the neutron. Using the EC time-of-flight, the momentum was calculated. The neutrons were differentiated from photons using a cut on the reconstructed velocity $\beta$.
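Given the path length to the assumed stack face and the EC time-of-flight, the neutron momentum follows from p = m β γ. A minimal sketch under those assumptions (units and constants chosen here for illustration):

```python
import math

M_N = 0.93957   # neutron mass in GeV (rounded PDG value)
C = 29.9792458  # speed of light in cm/ns

def neutron_momentum(path_cm, ec_tof_ns):
    """Neutron momentum in GeV from the path length to the assumed EC stack
    face and the EC time-of-flight; returns None when beta >= 1 (unphysical,
    i.e. the hit is photon-like or mismeasured)."""
    beta = path_cm / (ec_tof_ns * C)
    if beta >= 1.0:
        return None
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return M_N * beta * gamma
```

The same β that enters the momentum reconstruction is what separates neutrons from photons, since photon clusters reconstruct to β near 1.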

Neutron detection is essential for the reaction of interest. The neutron momentum was used in combination with the not associated with the , to study the kinematics of the . Having clean constraints on the and is important when considering the event topology .

A thorough study of the accuracy of the EC for neutron reconstruction in all kinematic ranges had not been achieved previously at CLAS. Obtaining the resolution in all measured variables for neutron reconstruction was an essential part of the present analysis. Correlations between the measured variables in the EC had also not been previously studied. The EC covariance matrix of the neutron provides considerable information about the quality of the kinematic variables in each region of the EC. These values can then be used to weight the neutron measurements appropriately in kinematic constraints that depend on maximum likelihood methods keller3 .

There are resolution differences in all measured variables that are related to the acceptance of the EC. Hits near the center of each triangular sector give better measurements than those near the edges due to shower leakage. The inner and outer stacks can act as separate detectors in the sense that if a hit is seen in the outer but not the inner stack, then the inner stack plays no role in the reconstruction of that hit. It is far less common for an event to pass through the inner stack with no effect and to register a hit in the outer stack, but for these events the outer EC stack was used independently, with its own unique resolution parameters for each measured variable. All possible combinations of the measured neutron dependence on , , and were studied to develop a complete understanding of the neutron variance and covariance in the EC keller2 .

#### III.2.1 Neutron detection test

The test reaction was isolated in the data set by selecting a and two , kinematically fitting to a missing-neutron hypothesis, and then applying a confidence-level cut. Only the detected neutrons found in a direction less than from the kinematically fit three-pion missing momentum were used to ensure that the correct neutron was selected. This channel was selected because the final decay products are identical to those of the reaction of interest . In addition, the momentum range of the detected particles is the same. Kinematic constraints were imposed to remove possible combinations with invariant mass equal to the , so that only the events survived. The simplification gained by working with the test channel is that in this reaction there is only one interaction vertex. This implies that the neutron comes from the primary interaction vertex, which can be well determined using the charged pions.

To study the measured neutron variable residuals, each event was required to have one detected neutron, and the measured variable was then compared with the kinematically fit missing variable in each case. Assuming a high-quality missing-neutron four-vector, this procedure was used to find the change in resolution with respect to all measured variables over the EC face keller2 . Only the events that registered an actual hit in the EC were used to study the resolution. No EC fiducial cuts were applied during the covariance investigation so that the entire EC face could be studied and compared to Monte Carlo. During the analysis, only a minimal fiducial cut of on the neutron polar angle was applied, to maximize the statistics.

For the test channel the neutron vertex was found from a multi-track-vertex fitting procedure to give an accurate vertex (at less than 4% uncertainty in position for the topology of interest) for multiple final state particles all coming from the same vertex mcnabb . Because the neutron came from the primary interaction vertex in this study, its vertex was accurately known. However, for events in which the neutron comes from secondary vertices, its vertex is not as easily obtained. Because the neutron vertex information can affect its reconstructed four momentum, these differences can be important when studying resolutions.

Once the EC neutron covariance matrix for was well understood, the Monte Carlo resolution was matched to the data using the same test channel keller2 . The Monte Carlo was then used to study the channel and to find the neutron covariance matrix specific for this topology. In this way the interaction point with the beam line could be used as the starting point of the neutron path in the neutron reconstruction process for any of the decays, so no bias was introduced by assuming a . This step removed the explicit dependence on the neutron vertex. The Monte Carlo covariance matrix for was then used to tune the data neutron covariance matrix specific to the topology. The change in the momentum resolution from this tuning process was smaller than .

In order to obtain a consistent covariance matrix for the neutron, discrimination was made for each neutron between the inner and outer EC stacks in order to calculate the correct path length. In addition, timing and momentum corrections were applied as described below.

#### III.2.2 Neutron path

As previously stated, the distance that the neutron travels in CLAS was used with the EC time-of-flight to determine the momentum of the neutron. The distance depends on the EC stack and the position of the cluster reconstruction. The inner-stack cluster reconstruction was always used unless there was only a hit in the outer stack. A determination of whether there was a hit only in the outer stack, only in the inner stack, or in both was made by checking which EC scintillators were associated with an event. The probability of finding a hit in the outer stack alone was less than 15%; for all other neutral hits, either the inner stack or both stacks were associated with the hit. If there was a hit only in the outer EC stack, the first layer (the layer closest to the target) of the outer stack gave the plane of the EC cluster coordinates. If there was a hit in the inner stack or in both, the first layer of the inner stack was used as the plane of the EC cluster coordinates.

#### III.2.3 Neutron time

The time-of-flight for the neutron came from the difference between the event start time and the EC cluster time. The path length used to reconstruct the neutron momentum, which assumes a hit on the EC face of either the inner or outer stack, was inaccurate by the distance the neutron traveled past the EC face into the detector. A correction was used to compensate for the average additional distance the neutron travels into the EC. In addition, the outer stack is farther from the target and, for the same event, would have a slightly different time response than the inner stack.

A correction was implemented directly in the neutron time-of-flight to correct the neutron momentum. This was done by using the calculated neutron time-of-flight and comparing it to the expected time . Here, is the path length of the neutron, and is the calculated using the missing momentum and energy of the neutron from the events that passed a 10% confidence level cut from a kinematic fit under a missing neutron hypothesis. By using for each stack, a separate correction was found for each case, such that . By finding separate timing corrections for the inner and outer stacks, the farther distance of the outer stack was compensated for. The separate study of timing corrections for the inner and outer stacks was carried out using the Monte Carlo simulations.
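The stack-dependent timing correction can be sketched as averaging the expected-minus-measured EC time separately for inner-stack and outer-stack hits. The event dictionary keys here are hypothetical; in the analysis the expected time comes from the missing-neutron kinematic fit:

```python
def stack_time_offsets(events):
    """Average (expected - measured) EC time per stack, from events passing
    the missing-neutron kinematic fit. Each event is a dict with hypothetical
    keys 'stack' ('inner' or 'outer'), 't_expected', and 't_measured' (ns)."""
    sums, counts = {}, {}
    for ev in events:
        s = ev["stack"]
        sums[s] = sums.get(s, 0.0) + (ev["t_expected"] - ev["t_measured"])
        counts[s] = counts.get(s, 0) + 1
    # One correction constant per stack compensates for the extra distance
    # the neutron travels into that stack before showering.
    return {s: sums[s] / counts[s] for s in sums}
```

Fitting the two stacks separately is what absorbs the outer stack's larger distance from the target into its own timing offset.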

The timing correction used is the same in all directions. However, in order to obtain accurate covariance information, an additional momentum correction was required that is sensitive to the geometry of the EC and the neutron’s trajectory. It was only after all corrections that the residual means of all measured variables were centered around zero to accurately reflect the neutron resolutions.

#### III.2.4 Neutron momentum correction

A neutron momentum correction was implemented by studying the trends found in the momentum and position resolutions over various kinematic ranges. This was done by studying the residuals , , and over each variable , , and . The residual of momentum, , is defined as the difference between the missing neutron momentum and the reconstructed neutron momentum, and likewise for the directional components and . The missing-neutron four-vector was found by kinematically fitting the charged decay products to the missing mass of the neutron and taking a 10% confidence-level cut. In this fit there were three unknowns from the components of the missing momentum vector and four constraints from the conservation of energy and momentum, making a 1-C fit keller3 . Each residual should be distributed around zero; if it is not, the distribution will display a trend that can be used to correct the measured variable. Once the neutron momentum magnitude and directional resolutions are evenly distributed around zero, the missing and detected four-vectors are comparable. This implies that, for the majority of events, the detected neutron momentum vector was the same, within the experimental resolution, as the high-quality kinematically fit missing-neutron momentum vector.

Similar corrections were implemented to with respect to such that , with respect to such that , and with respect to such that . The Monte Carlo required separate corrections in the same variables that were determined using the same procedures as for the data.
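Each residual-versus-variable trend described above can be removed with a simple fit. The sketch below uses a least-squares straight line as a stand-in for whatever functional form actually describes the trend in a given variable:

```python
import statistics

def linear_trend(xs, ys):
    """Least-squares slope and intercept of residuals ys versus variable xs."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    return slope, ybar - slope * xbar

def apply_correction(measured, slope, intercept):
    """Add the fitted trend back to the measured value so that the
    (missing - measured) residual averages to zero after correction."""
    return measured + slope * measured + intercept
```

After the correction, re-plotting the residuals against the same variable should show them scattered flat about zero.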

#### III.2.5 Neutron covariance matrix

The neutron covariance matrix was determined after the corrected neutron path was used, with the timing correction implemented for the corresponding EC stack and the momentum correction applied. This covariance matrix was required in order to kinematically fit the neutron with the other detected particles. The variables used to represent the neutron vector components were , , and , leading to a covariance matrix of the form

$$C_n = \begin{pmatrix} \sigma_{11}^2 & \sigma_{12} & \sigma_{13} \\ \sigma_{12} & \sigma_{22}^2 & \sigma_{23} \\ \sigma_{13} & \sigma_{23} & \sigma_{33}^2 \end{pmatrix},$$

where the diagonal elements are the variances of the three variables and the off-diagonal elements are their covariances.

The variances and correlations of each variable were obtained by studying the differences between the kinematic variables of the detected neutron and those of the kinematically fit missing neutron. The residuals in each case were sliced and binned, and fit with Gaussians to find the functional dependence on each variable. Once the functional dependence on each variable was found for each and bin, an empirical smearing technique was used to make the Monte Carlo closely match the same functional dependence seen in the data. Similar steps were taken for the directional components keller2 .
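The slice-and-fit procedure can be sketched by binning the residuals in a control variable and taking the width in each slice. A simple standard deviation stands in here for the bin-by-bin Gaussian fit:

```python
import statistics

def sliced_widths(var_values, residuals, edges):
    """Width of the residual distribution in each slice of a control
    variable; a stand-in for fitting a Gaussian in every bin. Returns None
    for slices with fewer than two entries."""
    widths = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sample = [r for v, r in zip(var_values, residuals) if lo <= v < hi]
        widths.append(statistics.pstdev(sample) if len(sample) > 1 else None)
    return widths
```

The widths as a function of the control variable give the diagonal covariance-matrix elements; repeating the exercise for pairs of residuals gives the correlations.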

## IV Analysis Procedure

In the analysis, progressive steps were taken to remove as much identifiable background as possible while preserving the counts from the channels and . The radiative signal was buried under the decay and required advanced fitting techniques to resolve. The fitting procedure developed here required that all other backgrounds be removed or extensively minimized to ensure high-quality separation between the radiative and strong decays of the .

The goal was then to achieve clean hadron identification before using the fitting procedure for the competing and radiative signals. For the sake of notation, let indicate the used in the invariant mass selection, such that is the that forms the closest known mass when combined with the . Naturally, is the other detected . Fig. 5 shows the invariant mass of the - (upper left), the missing mass off the - (upper right), the n- invariant mass (lower left), and the missing mass squared of all the detected particles (lower right). The distributions in Fig. 5 are before any kinematic constraints and after the assignments are made. The was cut to within GeV of the known mass to reduce the - background. Fig. 6 shows the invariant mass of the n- (dashed lines show the cut that was implemented) (upper left), the missing mass off the (upper right), the missing energy off all detected particles (dashed lines show the cut that was implemented) (lower left), and the missing mass squared of all the detected particles (lower right), after the cut. The peak is clearly visible in the (upper left) plot. The clearly visible peak in the missing mass squared at the mass is also an indication that the neutron measurement is effective.

A Monte Carlo study on the phase space of the reaction indicated that most events from the missing energy boosted in the frame should be in the range of 0-0.25 GeV. A cut at 0.24 GeV was chosen to clean up the candidates. This cut preserved of the radiative and signals, while substantially reducing the background under the . Fig. 7 shows an example of the Monte Carlo missing energy distribution for the reaction with the dotted line indicating the 0.24 GeV cut. Fig. 8 shows the results on the missing mass off the . A cut on the invariant mass of the n- combination along with the missing energy cut cleans up the excited-state hyperon spectrum, making the quite prominent. Finally a GeV cut was applied to the missing mass off the around the known mass of the .

Fig. 9 shows the missing mass squared of all detected particles after all of the mentioned cuts. A clear peak is present, with some smaller but unknown amount of radiative signal at zero missing mass. Fig. 10 shows the missing mass off the n- combination. The missing mass off the will be used in the background analysis.

### IV.1 Simulations

A Monte Carlo simulation of the CLAS detector was performed using GEANT geant , set up for the run conditions. The experimental photon energy distribution for an incident electron beam of 4.0186 GeV was used to determine the energies of the incident photons in the simulation. Events were generated for the radiative channel (), the normalization reaction (), and several background reactions.

A phase space event generator was used with a variable -dependence such that a channel with a was generated uniformly in the center-of-mass frame in with a -dependent distribution in according to with =2.0 GeV. Gaussian distributions in and with cm were used to approximate the beam width at the target. Events were generated uniformly along the length of the target. These generated events were fed into a simulation of the CLAS detector.
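One common way to realize an exponential t-dependence in a generator is inverse-transform sampling. The sketch below assumes a distribution proportional to exp(−b·|t|) truncated at some t_max; both the exact functional form and the truncation are assumptions standing in for the generator's actual t-dependence:

```python
import math
import random

def sample_t(b, t_max, rng=random.random):
    """Draw |t| from a distribution proportional to exp(-b*|t|) on
    [0, t_max] via inverse-transform sampling. b is the slope parameter,
    rng a uniform [0, 1) generator (injectable for reproducibility)."""
    u = rng()
    # Invert the truncated-exponential CDF:
    # u = (1 - exp(-b*t)) / (1 - exp(-b*t_max))
    return -math.log(1.0 - u * (1.0 - math.exp(-b * t_max))) / b
```

Together with a uniform angular draw in the center-of-mass frame and a Gaussian beam-spot smearing, this yields generated events with the desired forward-peaked t-distribution.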

For each contributing channel the differential cross section was found using data to weight the strength and angular distribution in the Monte Carlo generator. A very careful empirical smearing procedure was used to match the Monte Carlo and data resolutions. This procedure is discussed in Refs. keller3 and keller2 . Ultimately the missing mass squared from Monte Carlo, , gave very good agreement with the shape of the experimental data as shown in Fig. 9.

This analysis relies on an understanding of the leakage of background channels into the and radiative signal peaks. For example, leakage from a background channel such as will lead to over-counting of events. Monte Carlo simulations of the various possible contributions were used to study possible background leakage at various stages of the analysis. The acceptance of each possible background was used to study its possible effect on the final reported ratio.

At this stage of the analysis the most likely background reaction is , followed by decay. The decays primarily to followed by . The full reaction then has the same final state as and must be carefully considered. This is also true for reactions like . The has a relatively large decay width at 250-450 MeV pdg , which implies possible leakage into any cut around the . This is the reason for the extra steps to isolate the as shown in Fig. 8. Due to the constraints on the and , along with the series of cuts shown in Fig. 8, the leakage from the decay was negligible. However, because the is very close in mass to the and the from the decay has a similar phase space to the , there was some leakage that needed to be accounted for.

The contribution of the background was also studied. The constraints on the reconstructed , combined with the missing mass constraint off the to be the mass of the , should minimize any contribution. However, because the reaction has the same possible final states that are being analyzed, it was carefully considered. The Monte Carlo investigation indicated that there were contributions that needed to be accounted for.

Also investigated with Monte Carlo was the reaction , where is any meson that can decay to , and the provides the detected pion. Similarly, , where the decay to was a possible contaminant. In addition to the kinematic constraints previously mentioned, these backgrounds cannot contribute for low ( GeV). To test these types of reactions, the channel was considered. The has a width of MeV and decays almost 100% to , so it could leak under the invariant mass cut. Ultimately, all contributions of the channel type were found to be negligible (zero acceptance).

Based on the possible final state decay products, the reactions , , and were also considered. These backgrounds also have negligible acceptance as determined from high-statistics Monte Carlo studies using the same event selection as for the data, and hence were dismissed.

#### iv.1.1 Minimization of the and backgrounds

As indicated in the previous section, the and channels are the most likely background contributors. The branching ratio of is pdg , implying a high probability of overlap with the normalization channel . The channel was a concern for the same reason. To get an indication of how much these channels were present in the data, the missing mass off the - combination was used. For the () channel the missing mass off - should show a () peak. The missing mass spectrum off the - combination from the Monte Carlo of the channel was compared to the same distribution from data. To isolate the channel in the data, a kinematic fit to the missing with a confidence level cut was applied to leave only the final state in the data. A direct comparison between the data and Monte Carlo of the missing mass spectrum off the - combination, with the Monte Carlo normalized to the data, then deviates where background is present. It is clear from the comparison shown in Fig. 11 that there is a non-negligible amount of events. The number of events with a present is too small to be visible.

To remove the majority of the events, a kinematic fit was performed with a missing hypothesis, while constraining the and to have the mass, resulting in a 2-C fit. High confidence level candidates were then rejected as part of the identifiable background. Various confidence level cuts were tested until the data matched the Monte Carlo in the mass range of the (within the statistical error bars of the data). Ultimately a confidence level cut of was used, resulting in the comparison seen in Fig. 12. The same cut was used to reduce the possible background by imposing the constraint on the and to be . In this case it was not possible to check the Monte Carlo against the data in the same way, and so a cut was used. The same cut was also used under a hypothesis to reduce the acceptance of the channel, and similarly for the , under the hypothesis that the and form the .

Even with the above cuts in place, a small amount of the and background still slipped through. An estimate of this leakage was found and then subtracted out of the final result, as discussed in the following sections.

#### iv.1.2 Cross sections

To tune the Monte Carlo, the differential cross sections for the reactions , , and were obtained. The shapes of the differential cross sections were then used to adjust the event generators. In each case the experimental photon energy distribution was used in the generator.

A normalization procedure shown to accurately reproduce a number of well-measured channels was used for each cross section mccracken . The following is a discussion of the procedure used to extract the cross section. A similar procedure was followed for the two background channels.

With all of the aforementioned constraints, the reaction was easily isolated with a kinematic fit to the missing . A confidence level cut was applied to ensure channel purity. The yield was determined from the ratio of the raw events to the number of incident photons in each bin, so as to normalize with the bremsstrahlung spectrum. Corrections were made for each bin with the newly obtained acceptances. The angular dependence in the generator was initially flat with a zero -slope dependence. After the differential cross section was obtained, the distributions were used to adjust the generator. Each corresponding angle and energy bin was filled according to the distributions seen in the data. Each angle bin was divided into bins and represented accordingly in the new event-weighting scheme of the generator. The adjusted generator was then used to produce new Monte Carlo and obtain more accurate acceptances. This process was iterated until no change was seen in the differential cross sections within the statistical uncertainties. After these modifications were made, the resulting Monte Carlo was compared with the data using the momentum distributions for the , , and neutron tracks, as well as the lab frame angle distribution, and was found to closely match the data within the statistical uncertainties (see Fig. 13).
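The iterate-until-stable reweighting described above can be sketched as a toy calculation (Python; the efficiency curve and true shape are invented purely for illustration, not CLAS quantities): binned yields are acceptance-corrected, the generator weight is updated to the extracted shape, and the acceptance is recomputed until the result stops changing.

```python
import numpy as np

# Toy setup (all functions illustrative):
x = np.linspace(0.0, 1.0, 10001)           # generated kinematic variable
eff = 0.5 + 0.4 * x                        # detector acceptance vs x
true_shape = 1.0 + 2.0 * x                 # true differential yield vs x
edges = np.linspace(0.0, 1.0, 5)           # 4 coarse analysis bins
bin_idx = np.digitize(x, edges[1:-1])
centers = 0.5 * (edges[:-1] + edges[1:])

# "Data": true yield folded with the acceptance, summed per bin
data = np.array([np.sum((true_shape * eff)[bin_idx == b]) for b in range(4)])

weights = np.ones_like(x)                  # start with a flat generator
for _ in range(5):
    # acceptance per bin, estimated from the (weighted) Monte Carlo
    acc = np.array([np.average(eff[bin_idx == b], weights=weights[bin_idx == b])
                    for b in range(4)])
    corrected = data / acc                 # acceptance-corrected yield per bin
    # reweight the generator to the extracted shape (linear interpolation)
    weights = np.interp(x, centers, corrected)

truth = np.array([np.sum(true_shape[bin_idx == b]) for b in range(4)])
```

The iteration matters because the bin-averaged acceptance depends on the generated shape within each bin; once the generator shape matches the extracted shape, the corrected yields stop moving.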

The same corrections were applied to the cross sections. The nature of the corrections to the Monte Carlo was specific to the cross section, and so the corrections could be applied without discrimination between the electromagnetic and strong decays of the baryon.

To calculate the acceptance of the signal and background reactions, an extraction method used to resolve the radiative and channels was required and will be discussed next.

### iv.2 Fitting technique

The two-step kinematic fitting procedure developed in Ref. kellpaper was employed to resolve the radiative and strong decay signals. Because the two topologies are so similar and the radiative signal is small relative to the strong decay, a single kinematic fit could not be expected to cleanly separate the events from the overwhelming events. The procedure therefore first performs a kinematic fit to a missing hypothesis, and then checks the quality of the fit of the low confidence level candidates in a second kinematic fit to the actual radiative hypothesis.

In order to check the quality of the kinematic fit to a particular hypothesis, we studied the distribution from the fitting results. In this procedure all detected particles were kinematically fit to the appropriate missing mass hypothesis. An additional constraint was introduced into the kinematic fit, enabling analysis of the better-behaved 2-C distribution, as opposed to the 1-C distribution, to test the quality of the candidates under the chosen hypothesis. The constraint required that the neutron and track have the invariant mass of the in the hypothesis. The detected particle tracks were kinematically fit as the final stage of analysis and filtered with the confidence level cut. In this fit there were three unknowns () and five constraint equations, four from energy and momentum conservation and one from the additional invariant mass condition. This makes a 2-C kinematic fit. To separate the various contributions of the radiative decay and the decay to , the events were fit using different hypotheses for the topology:

2-C

2-C.

The constraint equations were

(5)

and represent the missing momentum and energy of the undetected or .
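As a minimal sketch of how such a constrained fit works (a generic Lagrange-multiplier least-squares fit with numerical Jacobians, written in Python; it is not the actual CLAS implementation), the routine below minimizes the chi-square of the measured quantities subject to constraints involving unmeasured parameters, so the effective number of constraints is the number of constraint equations minus the number of unknowns, e.g. 5 − 3 = 2 for the fits described above:

```python
import numpy as np

def num_jac(f, v, u, wrt, eps=1e-6):
    """Numerical Jacobian of the constraint vector f(v, u) with respect
    to the measured (wrt='v') or unmeasured (wrt='u') parameters."""
    base = np.asarray(f(v, u), dtype=float)
    p = v if wrt == "v" else u
    J = np.zeros((base.size, p.size))
    for j in range(p.size):
        q = p.copy(); q[j] += eps
        shifted = f(q, u) if wrt == "v" else f(v, q)
        J[:, j] = (np.asarray(shifted, dtype=float) - base) / eps
    return J

def kinematic_fit(y0, V, u0, constraints, n_iter=10):
    """Minimize (y - y0)^T V^-1 (y - y0) subject to constraints(y, u) = 0,
    where y0 are measured values (covariance V) and u is unmeasured."""
    y, u = y0.copy(), u0.copy()
    for _ in range(n_iter):
        c = np.asarray(constraints(y, u), dtype=float)
        B = num_jac(constraints, y, u, "v")   # d c / d y
        A = num_jac(constraints, y, u, "u")   # d c / d u
        r = c + B @ (y0 - y)                  # residual linearized about (y, u)
        S = B @ V @ B.T
        Sinv = np.linalg.inv(S)
        du = -np.linalg.solve(A.T @ Sinv @ A, A.T @ Sinv @ r)
        u = u + du
        lam = Sinv @ (r + A @ du)             # Lagrange multipliers
        y = y0 - V @ B.T @ lam
    chi2 = float(lam @ S @ lam)
    return y, u, chi2

# Toy example: three measured values, one unmeasured u, two constraints -> 1-C fit
cons = lambda y, u: np.array([y[0] + y[1] - u[0], y[2] - u[0]])
yfit, ufit, chi2 = kinematic_fit(np.array([1.0, 2.0, 4.0]), np.eye(3),
                                 np.array([3.0]), cons)
```

For linear constraints the iteration converges after one step; nonlinear constraints (such as an invariant-mass condition) need the repeated relinearization shown in the loop.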

Then a fit function was made for a 2-C distribution following from Ref. keller3 as

(6)

This fit function includes a flat background term. The fitted parameter was used to measure how close the histogrammed distribution was to the ideal theoretical distribution for two degrees of freedom.
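The idea can be illustrated with a toy fit (Python/SciPy; the functional form below is an assumed stand-in for Eq. 6, not the exact fit function of Ref. keller3): a chi-square shape with a floating effective number of degrees of freedom k plus a flat background, where a recovered k near 2 indicates agreement with the ideal 2-C distribution.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def chi2_like(x, A, k, B):
    # Chi-square probability density with effective dof k, scaled by A,
    # plus a flat background B; k close to 2 signals an ideal 2-C fit.
    return A * x ** (k / 2 - 1) * np.exp(-x / 2) / (2 ** (k / 2) * gamma(k / 2)) + B

# Toy "histogram": samples drawn from a true 2-dof chi-square distribution
rng = np.random.default_rng(1)
samples = rng.chisquare(2, 20000)
counts, edges = np.histogram(samples, bins=60, range=(0.0, 12.0))
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(chi2_like, centers, counts,
                    p0=[counts.sum() * 0.2, 2.0, 1.0])
```

A distorted distribution (e.g. events fit under the wrong hypothesis) would pull the fitted k away from 2, which is the diagnostic used in the text.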

Because there were two kinematic fits for both the and radiative channels, some new notation is introduced. The first confidence level cut used to filter out the larger signal from the radiative signal by using a kinematic fit to and taking only the low confidence level candidates is denoted as . The final kinematic fit used to isolate the radiative signal, using a hypothesis has a confidence level cut denoted as , taking only the high confidence level candidates. Optimization studies have been previously done to constrain the choice of and keller3 .

### iv.3 Ratio calculation

The leakage into the channel was the dominant correction to the radiative branching ratio. To properly calculate the ratio, the leakage into the region from the channel was also used. Taking just these two channels into consideration, the number of counts is represented as for the channel and for the channel. The acceptance under the hypothesis is written as , with the subscript indicating the hypothesis type and the label in parentheses indicating the Monte Carlo input channel used to obtain the acceptance value. For the calculated acceptance of the channel under the hypothesis, the acceptance is , and for the hypothesis it is . It is now possible to express the measured yields for each channel and as

(7)

(8)

The desired branching ratio of the radiative channel to the channel using the counts is then . Solving for to get the branching ratio expressed in terms of measured values and acceptances gives

(9)

Equation 9 is based on the assumption that there are no further background contributions. The formula for the branching ratio that takes into account background from the channel, as an example, can be expressed as

where

(11)

and

(12)

The () terms come directly from the yield of the kinematic fits and represent the measured number of photon (pion) candidates. In the notation used, lowercase represents the measured counts, while uppercase represents the acceptance corrected or derived quantities. The terms are corrections needed for the leakage from the channel (an arbitrary number of background types can be accounted for in this manner). The notation utilized is such that the pion (photon) contributions are denoted (), so that denotes the relative leakage of the channel under the hypothesis and denotes the relative leakage of the channel under the hypothesis.
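In the simplest two-channel case, Eqs. 7-9 amount to inverting a 2×2 acceptance matrix. The sketch below (Python; the yields and acceptance values are hypothetical placeholders, not the measured ones) solves the two yield equations for the true event counts and forms the ratio:

```python
import numpy as np

# Hypothetical measured yields under the two fit hypotheses (placeholders)
n_gamma, n_pi = 120.0, 9500.0

# Acceptance matrix: rows = fit hypothesis, columns = true channel,
# i.e. eps[hypothesis, channel] (values are illustrative only)
eps = np.array([[0.064, 0.002],    # radiative hypothesis: signal, leakage
                [0.023, 0.650]])   # strong hypothesis: leakage, signal

# Invert the yield equations n = eps @ N for the true counts N
N_gamma, N_pi = np.linalg.solve(eps, [n_gamma, n_pi])
R = N_gamma / N_pi                 # the branching ratio of Eq. 9
```

Additional background channels extend the same idea: each one adds a leakage term to the yield equations, which is why the text introduces the correction terms rather than a bare two-channel inversion.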

The final acceptance for each channel was determined after the final set of confidence level cuts was taken. After the background acceptances were minimized, an estimate of the background contributions was found for each relevant case.

## V Signal Extraction

Each Monte Carlo channel was run through the analysis with the same cuts as used for the data. These cuts for the extraction of the radiative and signals are listed in Table 1, in the order implemented. The first cut, (1), was on the mass and restricted the sample. The second cut, (2), on the mass was implemented to clean up the sample prior to the more restrictive cuts, (5)-(8). Cut (3) is the missing energy restriction used to reduce background. Cut (4) restricted the missing mass off the to be in the range of the . Cuts (5) and (6) are the final confidence level cuts from the kinematic fit used to isolate the missing . Cuts (7) and (8) were used to isolate the radiative decay. The second column lists whether each cut was applied to just one channel or both.

| Cut Used | (Applied) |
|---|---|
| (1) GeV | () |
| (2) GeV | () |
| (3) GeV | () |
| (4) GeV | () |
| (5) | () |
| (6) | () |
| (7) | () |
| (8) | () |

The two-step kinematic fitting procedure was used to isolate the radiative signal from the channel. In this procedure two separate kinematic fits were performed, one with zero missing mass for the and the other with the missing mass of the . The fit function in Eq. 6 was used to fit the distributions to determine the resulting quality of candidates present in the fit. The parameter was used to measure how close the distribution in the histogram was to the theoretical distribution for two degrees of freedom. The pure radiative decay Monte Carlo was used to determine the expected value of the parameter. Fig. 14 (A) shows the distribution for a kinematic fit of the Monte Carlo channel under the radiative hypothesis, displaying a highly distorted distribution. Fig. 14 (B) shows the distribution from the same kinematic fit under a radiative hypothesis of the Monte Carlo channel , fit using Eq. 6 after applying all kinematic cuts up to (4) in Table 1. The resulting parameter was taken as the value to expect. After obtaining the expected , a kinematic fit to the data was performed using both hypotheses.

The first confidence level cut was used to filter out the larger signal from the radiative signal by using a kinematic fit to a hypothesis and taking only the low confidence level candidates. This confidence level cut was checked and made more restrictive until from the data matched the expected value from Monte Carlo (within the projected error bars). Fig. 14 (C) shows the distribution and fit before any cut was applied and Fig. 14 (D) shows the distribution after a cut was applied. The final confidence level cut from the kinematic fit to a hypothesis was used on the remaining candidates. Only the high confidence level candidates were preserved.

Note that the yields for the decay will be reduced for a lower value of , which is desirable for extracting the radiative decay signal. On the other hand, this cut cannot be made arbitrarily small, since it reduces the statistics (i.e., increases the statistical uncertainty). Similarly, the signal will be purified by a higher cut on , but again the higher the cut, the lower the statistics. The Monte Carlo was used to examine the acceptance of these cuts for various branching ratios (), which is discussed in the next section. Ultimately, the branching ratio extracted from the data should not depend on the cut points chosen (assuming the Monte Carlo gives accurate cut acceptances). The Monte Carlo was then used to optimize the trade-off between statistical uncertainty and systematic uncertainty (due to the choice of confidence level cuts) based on a more quantitative analysis of . The cut value of showed consistent optimization with for this topology and range of statistics, as listed in Table 1. Details of the optimization method of the confidence level cuts using the Monte Carlo are described in Ref. keller3 . The confidence level distribution under the radiative hypothesis before and after the cut is shown in Fig. 15(A). Likewise, the confidence level distribution for a fit to the data under the missing hypothesis before and after the cut is shown in Fig. 15(B).
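The two-step selection can be sketched as follows (Python/SciPy; the cut values 0.01 and 0.1 are hypothetical, chosen only to illustrate the low-pass/high-pass logic): events are kept when the fit to the strong-decay hypothesis returns a low confidence level and the fit to the radiative hypothesis returns a high one.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Toy chi-square values from the two 2-C fits (stand-ins for real fit output)
chi2_strong = rng.chisquare(2, 5000)   # fit to the strong-decay hypothesis
chi2_rad = rng.chisquare(2, 5000)      # fit to the radiative hypothesis

# Confidence level = upper-tail probability of chi-square with 2 dof
cl_strong = chi2.sf(chi2_strong, df=2)
cl_rad = chi2.sf(chi2_rad, df=2)

# Hypothetical cuts: reject events that fit the strong hypothesis well,
# then keep only events that fit the radiative hypothesis well
selected = (cl_strong < 0.01) & (cl_rad > 0.1)
```

Tightening the first cut suppresses the strong-decay yield at the cost of statistics, which is exactly the trade-off the optimization in the text balances.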

The same cuts determined to effectively isolate the radiative signal, with , were used to isolate the channel used for normalization of the ratio, such that is used with . This can be denoted as and . The final missing mass squared distribution, with all other cuts applied, is shown in Fig. 16 both before and after the two sets of confidence level cuts.

The acceptances were found for each contributing channel and are listed in Table 2. Each of the channels in Table 2 was generated with enough statistics so that the statistical uncertainty would not contribute in the final ratio calculation. The value of the acceptance for the () hypothesis is listed under the () column. The uncertainty is statistical only.

| Reaction | | |
|---|---|---|
| | 0.0644 ± 0.0040 | 0.6244 ± 0.0125 |
| | 0.6502 ± 0.0128 | 0.0591 ± 0.0038 |
| | 0.0018 ± 0.0005 | 0.0186 ± 0.0021 |
| | 0.0231 ± 0.0023 | 0.0050 ± 0.0011 |
| | 0.0003 ± 0.0001 | 0.0000 ± 0.0000 |
| | 0.0000 ± 0.0000 | 0.0002 ± 0.0000 |

The acceptance values indicate that contributions from the and channels will be subtracted out directly. All other background channels not listed had zero acceptance. As mentioned, all background contributions to the ratio are relatively small, but care was taken to accurately consider each contribution. The levels of these contributions depend on the placement of the confidence level cuts previously mentioned.

To obtain an estimate of the amount of leakage into the and signals, some cuts were removed to obtain a fit on the channels of interest. Only the GeV and the GeV cuts from Table 1 were used with an additional cut on the missing mass squared of all detected particles around the mass of GeV. The missing mass off the combination was then checked. The resulting and peaks were fit with a relativistic Breit-Wigner while the background was fit with a polynomial function, as shown in Fig. 17.
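A toy version of such a fit (Python/SciPy; the mass, width, yields, and background are invented for illustration and are not the measured values) combines a relativistic Breit-Wigner peak with a polynomial background:

```python
import numpy as np
from scipy.optimize import curve_fit

def rel_bw(m, m0, width, norm):
    # Relativistic Breit-Wigner line shape with a constant (mass-independent)
    # width, a common simplification
    return norm * m0 * width * m / ((m**2 - m0**2) ** 2 + (m0 * width) ** 2)

def model(m, m0, width, norm, a0, a1):
    # Peak plus linear polynomial background
    return rel_bw(m, m0, width, norm) + a0 + a1 * m

# Toy spectrum: peak at 1.385 GeV with 36 MeV width over a flat background
rng = np.random.default_rng(3)
m = np.linspace(1.25, 1.55, 60)
truth = model(m, 1.385, 0.036, 5.0, 50.0, 0.0)
counts = rng.poisson(truth).astype(float)

popt, pcov = curve_fit(model, m, counts, p0=[1.38, 0.04, 4.0, 40.0, 0.0],
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
```

Fitting peak and background simultaneously, as done here, lets the peak yield be read off from the Breit-Wigner normalization while the polynomial absorbs the smooth underlying events.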

The total number of events present in the data set using the less restricted set of cuts just described can be expressed as