
## Abstract

The following effects in the nearly forward ("soft") region are proposed for investigation at the LHC:

At small |t|, the fine structure of the Pomeron cone should be scrutinized: a) a break of the cone near the two-pion threshold, −t ≈ 4m_π², required by t-channel unitarity, and b) possible small-period oscillations between the break and the dip region.

In measuring the elastic scattering and total cross section at the LHC, the experimentalists are urged to treat the total cross section σ_tot, the ratio ρ of the real to the imaginary part of the forward scattering amplitude, the forward slope B and the luminosity L as free parameters, and to publish model-independent results on all of them.

Of extreme interest are the details of the expected diffraction minimum in the differential cross section. Neither its position in t nor the level of dσ/dt at the minimum can be predicted unambiguously, and its depth, i.e. the ratio of dσ/dt at the minimum to that at the subsequent maximum, is of great importance.

The expected slow-down, with increasing energy, of the shrinkage of the second cone (beyond the dip-bump structure), together with the transition from an exponential to a power-law decrease in |t|, will be indicative of the transition from "soft" to "hard" physics. Explicit models are proposed to help in quantifying this transition.

In a number of papers a limiting behavior, or saturation of the black disc limit (BDL), was predicted. This controversial phenomenon suggests that the BDL may not be the ultimate limit; instead, a transition from shadow to antishadow scattering may be typical of the LHC energy scale.

Forward Physics at the LHC; Elastic Scattering

R. Fiore, L. Jenkovszky, R. Orava, E. Predazzi, A. Prokudin and O. Selyugin

Dipartimento di Fisica, Università della Calabria

Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza

I-87036 Arcavacata di Rende, Cosenza, Italy

Bogolyubov Institute for Theoretical Physics

Kiev-143, 03680 Ukraine and

RMKI, KFKI, POB 49, Budapest 114, H-1525 Hungary

Helsinki Institute of Physics and University of Helsinki,

PL64, FI-00014 University of Helsinki, Finland

Dipartimento di Fisica Teorica, Università di Torino,

Istituto Nazionale di Fisica Nucleare, Sezione di Torino

Via P. Giuria 1, I-10125 Torino, Italy

BLTP, Joint Institute for Nuclear Research,

141980 Dubna, Moscow region, Russia


## 1 Introduction

In this paper, some crucial issues that may be useful in preparing the experiments at the LHC are discussed and clarified.

Measurement of the total proton-proton cross section will be one of the first priorities at the LHC. The importance of these measurements is two-fold. First, they are mandatory to fix the normalization of all subsequent measurements. Furthermore, the measured value of the total cross section will drastically narrow the range of the existing models, whose predictions for the total cross section span many tens of millibarns [1, 2], or even more. Knowledge of the total cross section will help in selecting a class of models of diffraction, based on the dominance of multi-gluon, or Pomeron (IP), exchange.

Diffractive events, for example, diffractive Higgs production, are widely believed to produce the cleanest signal of possible new phenomena [3, 4, 5, 6, 7].

Quantum Chromodynamics (QCD), complemented with Regge pole theory and with the unitarity condition superimposed, forms the theoretical basis of the strong interaction. Both QCD and the Regge pole theory need experimental verification, to clarify the role of higher QCD corrections on the one hand, and to restrict the existing flexibility of the Regge pole models on the other. The scattering amplitude at the LHC can safely be parameterized by the dominant Pomeron exchange, appended by a possible tiny Odderon contribution, the contribution from the secondary Reggeons at the LHC being presumably negligible. A comprehensive introduction to high-energy diffraction can be found in Ref. [8].

The LHC will be the first accelerator where the relative contribution from secondary (sub-leading) trajectories (R) will be negligible, i.e. smaller than the experimental errors. The ratio of the Reggeon to the Pomeron contribution, apart from kinematics, depends essentially on the difference of the relevant intercepts; the above statement, however, holds even for the most conservative (i.e. largest) estimates of this ratio (see e.g. Ref. [9]). The ratio also decreases with |t|, since the Pomeron slope is smaller than that of the sub-leading Reggeons.

The parametrization of the Pomeron is far from unique. According to Ref. [10], there is only one Pomeron in nature (although its form is not necessarily simple, see [11, 12, 13, 14, 15, 16]). The data on deep inelastic scattering from HERA provoked discussions on the existence of an alternative, "hard" or "QCD Pomeron" [17, 18, 19, 20], needed for the confirmation of both perturbative QCD calculations and of "hard" and "semi-hard" diffractive physics. It should be remembered that the properties (parameters, etc.) of the Pomeron in hadronic collisions (ISR, SPS, Tevatron and LHC) can be determined with a precision and reliability much higher than in deep inelastic (ep) collisions.

The interface and/or transition between soft and hard dynamics is a key issue of strong interaction theory. In elastic scattering at the LHC it is expected to occur in a smooth way, within the reach of the forthcoming LHC experiments. Roughly speaking, this region will be characterized by a transition of the fall-off of the differential cross section from an exponential in t, through an exponential in √(−t) (the "Orear regime"), to a power-law behavior, manifesting hard scattering between point-like constituents of the nucleons. The transition region is not so simple because of the different unitarization and rescattering procedures used in the models. Moreover, non-linear trajectories (that mimic hard scattering), non-perturbative contributions and higher-order perturbative QCD corrections complicate the issue. In particular, quark model and QCD calculations, apart from a power behavior in t, indicate [21] the onset of an s-independent regime in the differential cross section, typical of the transition from soft to hard collisions. The complexity of this transition is connected with the deconfinement of quarks and gluons in the nucleons. We argue in this paper that this transition is expected at |t| of a few GeV², well within reach of the LHC measurements.

We propose to investigate LHC effects that can be predicted qualitatively and which can be measured to give definite quantitative answers concerning the nature of the strong interaction at large distances. The LHC will rule out many of the existing model predictions and thus narrow the class of viable theoretical approaches.

In Secs. 2 and 3 the essential features of the experimental program concerning forward physics at the LHC are described.

In Sec. 4 we discuss two types of interesting irregularities observed in dσ/dt within the exponential cone. One is the so-called "break" of the cone (a change of its slope) near the two-pion threshold, −t ≈ 4m_π². The second one concerns the possibility of tiny oscillations superimposed on the cone.

In Sec. 5 the "dip-bump" structure is analyzed. This is a sensitive region that will help to understand the nature of high-energy diffraction and, eventually, to reveal the Odderon, whose role (see Ref. [22] and references therein) is often exaggerated, dramatized and confused. It is maintained (see e.g. Refs. [10, 14]) that the Odderon should exist simply because nothing forbids its existence. It is not known how large the Odderon is or how its contribution varies with s and t. Specific theorems and predictions concerning the Odderon can be found in Refs. [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. Until now it has been observed directly in only a single experiment [34, 35], and therefore needs to be confirmed experimentally.

The region beyond the dip-bump structure, considered in Sec. 6, along the expected second cone, may be indicative of the transition from soft to hard physics. It manifests a transition from an exponential to a power-law decrease in t, with a possible slow-down of the energy dependence [21]. Details of this effect will measure the nonlinearity of the Pomeron trajectory at large enough |t|. One should note that analytic S-matrix theory, perturbative QCD and the data require that Regge trajectories be nonlinear, complex functions [36, 37] (for more details see Refs. [10, 38]).

At LHC energies a new phenomenon, namely the onset of the Black Disk Limit (BDL), may come into play, changing the t-dependence of the slope B(s,t). This phenomenon is discussed in detail in Sec. 7.

The Pomeron trajectory has threshold singularities, the lowest one being due to two-pion exchange, required by t-channel unitarity [36, 37, 39, 40, 41, 42], as shown in Fig. 1. There is a constraint [36, 43, 44, 45, 46], following from t-channel unitarity, by which

 Im α(t) ∼ (t − t₀)^{Re α(t₀) + 1/2},   t → t₀, (1)

where t₀ is the lightest threshold in the given channel. For the Pomeron trajectory the lightest threshold is t₀ = 4m_π², as shown in Fig. 1, and the trajectory near the threshold can be approximated by a square root:

 α(t) ∼ √(4m_π² − t). (2)

The observed nearly linear behaviour of the trajectory is promoted by higher, additive thresholds (see Refs. [47, 48] and references therein).

This threshold singularity appears in different forms in various models, see Sec. 4. It is also noted that, irrespective of the specific form of the trajectory or scattering amplitude, the presence of the above-mentioned threshold singularity in α(t) results in an exponential asymptotic decrease of the impact-parameter amplitude, with important physical consequences (the so-called nucleon atmosphere, or its clouding).

The important role of nonlinear trajectories and their observable consequences was first studied in Refs. [37, 49, 50, 51, 52]. Independently, they were also developed by the Kiev group (see Refs. [53, 54, 55] and references therein), and more recently in a series of papers (Ref. [56] and references therein).

Asymptotically, the trajectories are logarithmic. This asymptotic behaviour follows from the compatibility of Regge behavior with the quark counting rules [57], as well as from the solution of the BFKL equation [17, 18, 19, 20]. A simple parametrization combining the linear behaviour at small |t| with the logarithmic asymptotics is [10, 38, 58]

 α(t) = α₀ − γ ln(1 − β₁t). (3)

Such a trajectory, being nearly linear at small |t|, reproduces the forward cone of the differential cross section, while its logarithmic asymptotics provide for the wide-angle scaling behavior [57, 58, 59]. Eqs. (2) and (3) can be combined in the form [10, 14]

 α(t) = α₀ − γ ln(1 + β₂√(t₀ − t)), (4)

where α₀, γ, β₁ and β₂ in Eqs. (3) and (4) are parameters whose numerical values can be associated with their physical meaning (see [60]).

In a limited range, especially at small and intermediate values of their argument, linear trajectories may be a reasonable approximation to their otherwise complicated form.
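The interplay between the near-linear and logarithmic regimes of Eqs. (3) and (4) can be sketched numerically. The parameter values below are purely illustrative assumptions, not the fitted values of the cited references:

```python
import math

# Numerical sketch of the trajectory parametrizations, Eqs. (3) and (4).
# ALPHA0, GAMMA, BETA1, BETA2 are illustrative assumptions, not fits.
ALPHA0, GAMMA = 1.08, 0.10
BETA1, BETA2 = 1.0, 1.0
T0 = 4.0 * 0.1396 ** 2        # two-pion threshold 4*m_pi^2, in GeV^2

def alpha_log(t):
    """Eq. (3): nearly linear at small |t|, logarithmic asymptotics."""
    return ALPHA0 - GAMMA * math.log(1.0 - BETA1 * t)

def alpha_sqrt(t):
    """Eq. (4): same asymptotics plus the threshold behaviour of Eq. (2)."""
    return ALPHA0 - GAMMA * math.log(1.0 + BETA2 * math.sqrt(T0 - t))

# Near t = 0, Eq. (3) reduces to alpha0 + gamma*beta1*t, i.e. an effective
# slope gamma*beta1; at large |t| the trajectory flattens logarithmically.
slope_small = (alpha_log(-0.01) - alpha_log(0.0)) / (-0.01)
slope_large = (alpha_log(-10.0) - alpha_log(-10.01)) / 0.01
print(f"effective slope near t = 0 : {slope_small:.3f}")
print(f"effective slope at t = -10 : {slope_large:.4f}")
```

The first print is close to γβ₁, the second is more than an order of magnitude smaller, which is the "slow-down of the shrinkage" referred to in the text.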

## 2 Measurement Strategy at the LHC

The elastic proton-proton interactions are measured, and triggered on, by the leading proton detectors, Roman Pots, placed symmetrically on both sides of the CMS experiment at 147 and 220 metres from the Interaction Point (IP5). To measure protons at small scattering angles, the detectors must be moved close to the primary LHC beam in the vertical direction. The "nominal" TOTEM beam optics set-up has high β* (β* = 1540 m) and no crossing angle at the IP, optimizing the acceptance and accuracy down to very small values of the four-momentum transfer squared.

The t distribution of the scattered protons, dN/dt, is extrapolated to t = 0, where it is related to the total proton-proton cross section by the Optical theorem. With the special optics of β* = 90 m, compatible with the LHC injection optics, a first quick measurement of the elastic cross section could be made; the extrapolation to the Optical point will be made with an accuracy of a few per cent.

Runs with different LHC optics set-ups, such as β* = 90 m, the "injection" optics, the stage-1 "pilot" run optics and the standard low-β* LHC optics, will allow measurements up to large |t|. Runs with a reduced centre-of-mass energy will allow an analysis of the energy dependence, comparisons with the Tevatron results and a precise measurement of the ρ parameter.

### 2.1 Elastic proton signature at the LHC

The transverse position (x(s), y(s)) of an elastically scattered proton with momentum loss Δp/p, at a distance s from the interaction point (IP), is given by the initial coordinates at the IP, (x*, y*), the scattering angles (θ*_x, θ*_y), the effective lengths L^eff_{x,y}(s), the magnifications ν_{x,y}(s) and the dispersion D(s), as (see Ref. [61] and references therein)

 y(s) = ν_y(s) y* + L^eff_y(s) θ*_y,
 x(s) = ν_x(s) x* + L^eff_x(s) θ*_x + (Δp/p) D(s). (5)

By using the transfer matrix, the coordinates of an elastically scattered proton at the IP can be mapped into a detector location at a point s along the machine. Conversely, by measuring the transverse position and angle at point s, the initial coordinates can be determined. Since the angles are very small, they have to be measured by combining the symmetrically located detector stations on both sides of the IP: a combined measurement, using the left and right arms of the leading proton detectors, yields an accurate measurement of the collinear pairs of elastically scattered protons (Fig. 2).

From Eq. (5)

 θ*_x = (x_R − x_L)/(2 L^eff_x);   θ*_y = (y_R − y_L)/(2 L^eff_y). (6)
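A minimal sketch of the reconstruction in Eq. (6); the effective lengths and the hit positions below are placeholder numbers, not actual LHC optics values:

```python
# Sketch of the angle reconstruction of Eq. (6).  LEFF_X, LEFF_Y and the
# hit positions are placeholder values, not real LHC optics functions.
LEFF_X, LEFF_Y = 250.0, 270.0      # assumed effective lengths, metres

def star_angles(x_left, x_right, y_left, y_right):
    """Return (theta*_x, theta*_y) in rad from the transverse positions
    (in metres) measured in the left and right Roman Pot stations."""
    theta_x = (x_right - x_left) / (2.0 * LEFF_X)
    theta_y = (y_right - y_left) / (2.0 * LEFF_Y)
    return theta_x, theta_y

# For elastic events the two protons are back-to-back, so a proton seen at
# +x on one side has its partner at -x on the other: the left-right
# difference doubles the lever arm and the vertex term of Eq. (5) cancels.
tx, ty = star_angles(-0.005, 0.005, -0.0054, 0.0054)
print(tx, ty)     # tens of microradians
```

This cancellation of the vertex coordinates is exactly why the text stresses combining the two arms rather than using a single station.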

The elastically scattered protons have to be measured close to the primary LHC beam, down to angles of a few microradians with respect to the beam direction. For this, special beam optics conditions with reduced beam divergence at the interaction point (IP), and a sufficiently large displacement of the scattered protons at the detector locations (Fig. 2), are required. Thorough studies of optimal LHC beam conditions have led to the nominal "TOTEM" beam optics with β* = 1540 m for the measurement of elastic scattering and soft diffraction, Ref. [61].

In Fig. 3, the acceptance for elastic protons is shown as a function of t for three different run scenarios. An acceptance extending to very small |t| is achieved with the nominal LHC beam emittance; with an improved beam emittance the acceptance reaches still smaller |t| [62]. Contrary to the nominal TOTEM optics, the β* = 90 m optics uses the standard LHC injection optics and could be realised during the run-in phase of the machine; its acceptance extends to larger |t|.

Due to the geometric constraints imposed by the LHC vacuum pipe and beam screen, elastic protons with |t| values beyond the acceptance limit cannot be measured under the nominal high-β* optics conditions. Also, since the cross section falls steeply at high |t|, higher luminosities are desired there. The "standard" LHC optics scenario includes the "injection" optics and a "pilot" stage optics that allow elastic protons with higher |t| values to be accessed (Figure 3). With these optics, reasonable event statistics and t-resolution are achieved up to large |t|.

With the high-β* TOTEM optics, so-called parallel-to-point focussing conditions are achieved in both the horizontal and vertical planes, and the elastic proton measurement becomes independent of the location of the primary interaction vertex. This reduces the systematic uncertainty in measuring the four-momentum transfer squared of the scattered proton.

The angular beam divergence dominates the uncertainty in measuring −t and, together with the detector resolution, accounts for essentially all of the uncertainty in t. For small values of |t|, the error in t is significantly reduced when the forward-backward pair of leading proton measurement stations is used, compared to using only one of the two leading proton spectrometer "arms".

At lower centre-of-mass energies the acceptance in t improves and, at energies of a few TeV, |t| values one order of magnitude lower than at the nominal LHC energy are reached. This would allow comparisons with the Tevatron results and would allow the Coulomb-hadron interference region to be probed.

### 2.2 Extrapolation to the optical point

For the total pp cross section measurement, an extrapolation of the elastic scattering distribution, dN_el/dt, to t = 0 is required. The relative statistical uncertainty of this extrapolation is estimated to be small, based on short periods of data taking with the nominal TOTEM optics and luminosity [61].

A thorough study of the systematic effects in the extrapolation process was carried out [61, 62]; it concluded that a precision at the per-cent level should be achieved in extrapolating the elastic cross section to the optical point.

In case Coulomb scattering is not accounted for in the extrapolation process, a shift of the extrapolated optical-point value could occur. Due to the Coulomb scattering contribution, the slope in t does not stay constant at low |t|, as assumed in the usual extrapolation. This effect might represent an uncertainty of several per mille in the extrapolation.

### 2.3 Measuring ρ

For measuring the ρ-parameter,

 ρ = Re A(s, t = 0)/Im A(s, t = 0), (7)

one fits the elastic differential cross section:

 (1/L) dN_el/dt = dσ/dt = 4πα²F₁⁴(t)/t² − α(ρ + Δφ) σ_tot (F₁²(t)/|t|) e^{−B|t|/2} + σ_tot²(1 + ρ²) e^{−B|t|}/(16π), (8)

where the three terms are due to Coulomb scattering, Coulomb-hadron interference, and hadronic interactions; L is the integrated luminosity, α the fine-structure constant, and Δφ the relative Coulomb-hadron phase, given as

 Δφ = −ln(B(s)|t|/2) − 0.577, (9)

and F₁(t) is the nucleon electromagnetic form factor, usually parameterized as

 F₁(t) = 1/(1 + |t|/0.71)². (10)

In the least-squares fit procedure, the following two equations are also used:

 σ_tot² = (16π)/(L(1 + ρ²)) (dN_el/dt)|_{t=0}; (11)
 σ_tot = (1/L)(N_el + N_inel). (12)

Eq. (11) is a direct consequence of the Optical theorem. N_el is the total number of elastic events, obtained by integrating the dN_el/dt distribution within the region where hadronic interactions dominate and extrapolating to t = 0 and to large |t| by using the form dN_el/dt ∝ e^{−B|t|}. N_inel is the total number of inelastic events. Note that Eqs. (11) and (12) allow the luminosity to be expressed in terms of N_el, N_inel and (dN_el/dt)|_{t=0}. Then dN_el/dt in Eq. (8) can be expressed in terms of just three unknowns: σ_tot, ρ and B. In the fit procedure, the same data on dN_el/dt, together with the total number of inelastic events recorded during the same experimental data-taking runs, are used as inputs. A least-squares analysis for σ_tot, ρ and B in Eq. (8) is done by using all the collected input data.
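The fit can be sketched as follows. The snippet implements Eqs. (8)-(10) and recovers ρ from pseudo-data by a one-dimensional χ² scan, a simplified stand-in for the full three-parameter least-squares fit; all numerical inputs are illustrative, and the extra factor of α multiplying the phase in the interference term follows the West-Yennie convention, which is an assumption made here for definiteness:

```python
import math

# Sketch of the least-squares determination of rho from Eq. (8).
# Units are natural (GeV^-2); 1 mb = 2.568 GeV^-2.  "Truth" values,
# t range and scan grid are illustrative assumptions.
ALPHA_EM = 1.0 / 137.036
MB = 2.568                                  # 1 mb in GeV^-2

def f1(t):
    """Dipole form factor, Eq. (10)."""
    return 1.0 / (1.0 + abs(t) / 0.71) ** 2

def dphi(b, t):
    """Relative Coulomb-hadron phase, Eq. (9)."""
    return -math.log(b * abs(t) / 2.0) - 0.577

def dsigma_dt(t, sigtot, rho, b):
    """Differential cross section of Eq. (8) for t < 0, in GeV^-4.
    The alpha factor on the phase is the assumed West-Yennie convention."""
    coul = 4.0 * math.pi * ALPHA_EM ** 2 * f1(t) ** 4 / t ** 2
    intf = (-ALPHA_EM * (rho + ALPHA_EM * dphi(b, t)) * sigtot
            * f1(t) ** 2 / abs(t) * math.exp(-b * abs(t) / 2.0))
    had = sigtot ** 2 * (1.0 + rho ** 2) / (16.0 * math.pi) * math.exp(-b * abs(t))
    return coul + intf + had

# pseudo-data generated from assumed "truth" values
SIG, RHO, B = 110.0 * MB, 0.14, 20.0        # sigma_tot = 110 mb (assumed)
ts = [-(0.002 + 0.001 * i) for i in range(200)]
data = [dsigma_dt(t, SIG, RHO, B) for t in ts]

def chi2(rho_try):
    """Relative-error chi^2 with sigma_tot and B fixed at truth: a 1-D
    stand-in for the full (sigma_tot, rho, B) fit described in the text."""
    return sum((d - dsigma_dt(t, SIG, rho_try, B)) ** 2 / d ** 2
               for t, d in zip(ts, data))

best = min((chi2(r / 1000.0), r / 1000.0) for r in range(300))[1]
print(f"recovered rho = {best:.3f}")        # -> recovered rho = 0.140
```

In a real analysis all three hadronic parameters (and, via Eqs. (11)-(12), the luminosity) float simultaneously; the scan above only illustrates the sensitivity of the interference region to ρ.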

The evaluation of systematic errors due to the uncertainty in beam emittance, vertex positions and spread, beam transport and incoming beam angles is based on Monte Carlo and machine simulations. These simulations use the geometry of the experimental set-up and efficiency of the detectors as input.

### 2.4 Elastic scattering run scenarios

For elastic scattering, three run scenarios are considered (Table 1):

1. Nominal TOTEM optics for low-|t| elastic scattering, β* = 1540 m,

2. An early medium-β* optics, with β* = 90 m, and

3. Optics for large-|t| elastic scattering at low β*.

The event rate per bunch crossing is calculated as (for the symbols, see Table 1)

 N_ev = (σ_el L)/(f N_b) · (N_B/N_b), (13)

where σ_el is the elastic cross section, L the luminosity, f the revolution frequency and N_b the number of bunches; the factor (N_B/N_b) accounts for the empty buckets.
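A back-of-envelope evaluation of Eq. (13); the run parameters below are illustrative stand-ins for the entries of Table 1, not actual run settings:

```python
# Back-of-envelope evaluation of Eq. (13).  All run parameters below are
# illustrative assumptions standing in for the entries of Table 1.
SIGMA_EL = 30e-27          # assumed elastic cross section, cm^2 (30 mb)
LUMI = 1.6e28              # assumed luminosity, cm^-2 s^-1
F_REV = 11245.0            # LHC revolution frequency, Hz
N_BUNCH = 156              # assumed number of filled bunches N_b
N_BUCKETS = 3564           # total RF buckets per ring, N_B

# elastic events per filled-bunch crossing
per_crossing = SIGMA_EL * LUMI / (F_REV * N_BUNCH)
# Eq. (13) carries the additional factor (N_B/N_b) for the empty buckets
n_ev = per_crossing * (N_BUCKETS / N_BUNCH)
print(f"{per_crossing:.2e} events per filled crossing, N_ev = {n_ev:.2e}")
```

With these assumed numbers the elastic rate per crossing is far below one, so pile-up of elastic events is not a concern in such a run.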

### 2.5 Total pp cross section measurement strategy

The total proton-proton cross section is measured, in a luminosity-independent way, by using the Optical theorem. By extrapolating the elastic rate down to the Optical point, t = 0, and by recording the elastic and inelastic event rates, the total cross section is measured with an over-all accuracy at the one-per-cent level.

For the total cross section measurement, the "nominal" TOTEM beam optics (β* = 1540 m) with several short runs will be used. During the initial LHC running, the run scenario with β* = 90 m is planned to be used for a total cross section measurement with an accuracy of about 5%.

For measuring the total cross section, inelastic scattering needs to be studied over a large rapidity range. The aim of the forward physics initiatives at the LHC is to complement the base-line ATLAS (ALFA), CMS (TOTEM) and ALICE designs with forward detector systems (see Refs. [64, 65, 66, 67, 68]). Besides restricted detector acceptance, inadequate theoretical understanding of forward physics phenomena poses a serious systematic uncertainty for the base-line experiments in need of a precise luminosity measurement. As an example, the single diffractive cross section in the low diffractive-mass region could amount to a sizeable fraction of the over-all inelastic cross section and cause a major systematic uncertainty in the total cross section measurement.

The base-line LHC experiments define a "minimum bias" event category that must be suppressed due to the limited rate (level-1 trigger bandwidth) at which interactions can be recorded, at the cost of triggering on "interesting" large-p_T events. The main task of equipping the forward region of a main-stream LHC experiment is to extend the physics reach by including the events that are not selected by the "minimum bias" event trigger. The TOTEM experiment, together with the CMS forward detectors (CASTOR, ZDC, and the proposed FSC and FP420 systems), represents the necessary complement for selecting a truly unbiased sample of "minimum bias" events required for the analysis goals stated in the CMS Physics TDR.

In general, the soft particles in the non-diffractive event category (nd) will end up at central rapidities, while the relatively few energetic ones are expected to end up at small angles, to be recorded by the forward calorimetry and spectrometers.

The longitudinal, z, coordinate of the vertex is defined by first determining, for each track i, the z_i at its distance of closest approach to the nominal beam axis (at least two tracks are required), and by then calculating the mean of the z_i. By using this simple method, the z coordinate of the primary interaction vertex can be determined with a cm-level resolution. The beam-related events are identified by requiring that their reconstructed z value lies within a few cm of the nominal IP. This selection is found to be highly efficient in choosing the beam-related events. The remaining events have diffractively excited systems with relatively small masses, for which all the charged tracks escape detection in the TOTEM T1 and T2 spectrometers (Figure 4).
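The vertex-z estimate described above amounts to a plain mean over the per-track z values; a minimal sketch with toy numbers:

```python
# Minimal sketch of the vertex-z estimate described in the text: each track
# contributes a z_i at its distance of closest approach to the beam axis,
# and the vertex estimate is the plain mean (at least two tracks required).
def vertex_z(track_z):
    """Mean of the per-track z values (cm); raises if fewer than 2 tracks."""
    if len(track_z) < 2:
        raise ValueError("at least two tracks are required")
    return sum(track_z) / len(track_z)

# toy event: five tracks from a vertex near z = 3 cm with cm-level scatter
zs = [2.1, 3.8, 2.9, 3.4, 2.8]
print(f"z_vtx = {vertex_z(zs):.2f} cm")    # -> z_vtx = 3.00 cm
```

The averaging suppresses the per-track scatter by roughly 1/√N, which is why even a handful of tracks yields a cm-level vertex resolution.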

For obtaining the over-all inelastic rate, the missing coverage at the largest rapidities was estimated by extrapolation. Fig. 4 (right panel) shows the simulated distributions for the single diffractive (SD) events before and after the acceptance correction. A linear fit is used to correct for the unseen part of the spectrum. In the case of SD events, a sizeable correction was quoted above the detected fraction of this category of events. A similar analysis gives the corresponding corrections for the double diffractive and central diffractive events [61].

Unfortunately, diffractive cross sections are poorly known, and a theoretical understanding of both small- and large-mass single diffraction is lacking. Low-mass single diffraction could represent a substantial part of the total diffractive cross section (for double diffractive (DD) events the uncertainties are even more severe). Moreover, in the light of recent CDF measurements, soft central diffractive cross sections are likely to be seriously overestimated in current Monte Carlo models.

With the vertex constraint, a substantial part of the beam-gas events is rejected. The simulation studies [61] show that, by using the vertex constraint, the beam-gas interactions represent only a small fraction of the selected sample of events. Since the beam-gas event rate is estimated to be low at the nominal beam current, the trigger rate due to this source of background can be safely neglected. In addition, the Roman Pot based leading proton trigger can be used to further reduce the background rate.

### 2.6 σtot measurement

The LHC (TOTEM) measurement of the total cross section is luminosity independent and based on the Optical theorem:

 σ_tot = [16π/(1 + ρ²)] (dN/dt)_{t=0}/(N_el + N_inel), (14)

where N_el and N_inel are the elastic and inelastic event rates, and (dN/dt)_{t=0} is the elastic rate extrapolated to the Optical point.

The relative error in σ_tot, neglecting the uncertainty in ρ, is then:

 (Δσ_tot/σ_tot)² = [Δ(dN_el/dt)_{t=0}/(dN_el/dt)_{t=0}]² + [Δ(N_el + N_inel)/(N_el + N_inel)]². (15)

The uncertainty in the inelastic cross section is estimated to be less than 1 mb (see Ref. [61]). However, little experimental data on small-mass diffraction exist, and the uncertainty in inelastic diffraction alone could amount to several millibarns. Together with the uncertainty of the extrapolated value of (dN_el/dt)_{t=0}, this results in an over-all error of

 Δσ_tot/σ_tot ≳ 1%. (16)
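Eqs. (14) and (15) can be illustrated with a short numerical sketch; all inputs (event counts, relative errors, the assumed 110 mb cross section) are illustrative values chosen only to show the propagation:

```python
import math

# Numerical sketch of Eqs. (14) and (15).  All inputs are illustrative.
MB = 2.568                         # 1 mb in GeV^-2

def sigma_tot(dndt0, n_el, n_inel, rho):
    """Eq. (14): (dN/dt)_{t=0} in GeV^-2, event counts dimensionless."""
    return 16.0 * math.pi / (1.0 + rho ** 2) * dndt0 / (n_el + n_inel)

# toy inputs chosen to be consistent with sigma_tot = 110 mb, rho = 0.14
rho = 0.14
n_el, n_inel = 30.0e6, 80.0e6
dndt0 = (110.0 * MB) * (1.0 + rho ** 2) * (n_el + n_inel) / (16.0 * math.pi)
sig_mb = sigma_tot(dndt0, n_el, n_inel, rho) / MB
print(f"sigma_tot = {sig_mb:.1f} mb")      # -> sigma_tot = 110.0 mb

# Eq. (15): relative errors add in quadrature (rho uncertainty neglected)
rel_dndt, rel_rate = 0.01, 0.008           # assumed 1% and 0.8%
rel_sig = math.sqrt(rel_dndt ** 2 + rel_rate ** 2)
print(f"relative error = {100.0 * rel_sig:.2f}%")
```

The quadrature sum makes explicit why a few-millibarn uncertainty in the inelastic rate alone already pushes the over-all error toward the per-cent level of Eq. (16).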

The uncertainty in the value of the ρ-parameter could also contribute appreciably to the σ_tot measurement (see Sec. 3 and Ref. [69]); it could be reduced by a direct measurement.

### 2.7 Luminosity measurement and monitoring

Luminosity measurement

The luminosity L relates the cross section σ_i of a given process i to the corresponding event rate N_i by

 L = N_i/σ_i. (17)

It is not trivial to find a process with a well defined, and precisely calculable, cross section combined with a prominent event signature. The cross section should be large enough for monitoring the luminosity as a function of time, e.g. during a fill or when investigating bunch-to-bunch variations. By using several complementary approaches, systematic uncertainties can be brought under control [63].

A simultaneous measurement of the elastic and inelastic event rates can be used to define the luminosity as

 L = [(1 + ρ²)/(16π)] (N_el + N_inel)²/(dN_el/dt)|_{t=0}. (18)

On-line luminosity monitoring

The LHC is the first hadron collider where, due to the large c.m.s. energy and high luminosity, a significant number of inelastic interactions (on average 35 interactions at the design luminosity) is expected to take place per bunch crossing. The traditional technique of monitoring the luminosity by requiring a coincidence of two counters at small angles on both sides of the IP is not sufficient at the LHC. At the design luminosity, the observed coincidence rate will not be proportional to the number of events, but rather to the number of bunch crossings. In the case of no segmentation of the luminosity monitors, the probability to get a coincidence will be close to unity.

Highly segmented forward detectors in T2 and/or the FSC could be used as luminosity monitors. The rate used to monitor the luminosity could be defined by using the double-arm coincidence rate between a pair of left-right detector segments. The segmentation reduces the counting rate significantly below the bunch crossing frequency, and the rate therefore becomes proportional to the luminosity. A coincidence signal between a pair of left-right detector segments is unlikely in the case of separate overlapping collisions; the technique also helps to suppress beam-related backgrounds. In principle, beam-related background may be recognized by a time stamp given by a forward detector: during the beam cross-over at the IP, no other bunches should pass through the luminosity monitor location, while secondary particles from beam-gas and beam-wall interactions travel in time with the bunch. In practice, the time stamping is challenging due to the high bunch crossing frequency (40 MHz). By requiring simultaneous left-right signals, contributions from the beam-related backgrounds are practically eliminated.
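The saturation argument can be made quantitative with Poisson statistics; the equal sharing of the mean pile-up among segment pairs below is a crude illustrative assumption:

```python
import math

# Poisson sketch of the saturation argument above: with a mean of mu = 35
# inelastic interactions per bunch crossing, an unsegmented left-right
# coincidence fires in essentially every crossing, so its rate tracks the
# bunch-crossing rate rather than the luminosity.
mu = 35.0
p_any = 1.0 - math.exp(-mu)            # P(>= 1 interaction per crossing)
print(f"P(coincidence, unsegmented) = {p_any:.6f}")   # ~ 1

# With n left-right segment pairs, each pair sees on average mu/n
# interactions (a crude equal-sharing assumption).  For large n the
# per-pair coincidence probability ~ mu/n, i.e. it becomes linear in the
# number of interactions and hence in the luminosity.
n = 1000
p_pair = 1.0 - math.exp(-mu / n)
print(f"P(per segment pair, n = {n}) = {p_pair:.6f}")
```

The segmented rate thus stays in the linear regime of the Poisson curve, which is exactly the condition needed for a rate-based luminosity monitor.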

The calibration of the luminosity monitors can be performed during the dedicated high-β* runs at lower luminosities, where the luminosity is precisely determined together with σ_tot. Alternatively, once the total inelastic cross section is precisely measured during the high-β* runs, together with the elastic and total cross sections, the inelastic events could be used to re-calibrate the monitor during low-β* running. This becomes important when any significant changes to the detector lay-out are made, e.g. when the outer detectors are dismantled after the first year of running.

## 3 Forward physics at the LHC

Forward, or soft, physics is, roughly speaking, synonymous with diffraction, elastic and inelastic, the role of the latter increasing with increasing scattering angle (or momentum transfer).

Forward physics will play an important role in early runs of the LHC for at least two reasons. One is that measurements of basic quantities, such as the total cross section σ_tot, the ratio ρ of the real to the imaginary part of the forward scattering amplitude, the local slope B of the differential cross section, etc., are of fundamental importance for the calibration/normalization of the beam and detectors, and this task goes beyond the problems of understanding the nature of diffraction. Secondly, apart from classical studies of diffraction (the Pomeron), the diffractive medium (gluons) may also favour central production of the Higgs boson. With increasing luminosity, the experiments at the LHC will gradually shift towards measurements of rare events in the non-forward direction.

At the LHC, three collaborations, namely TOTEM/CMS, ALICE and ATLAS, are preparing to measure the elastic, inelastic and total cross sections at the design energy of √s = 14 TeV. The complementarity of the expected performances will ensure optimal reliability of the results. The total cross section is expected to be measured with about one-per-cent precision. The precision of the extrapolation to the optical point has been analyzed in Ref. [70].

One should note that the model extrapolations for σ_tot at √s = 14 TeV cover a wide range; Landshoff, for example, gives an explicit prediction in Refs. [1, 2].

The TOTEM, CMS and FP420 collaborations are combining their efforts to cover a phase space (see Fig. 2, where the geometric acceptance of the detectors is shown) exceeding that of any preceding collider experiment [67, 71, 72].

The TOTEM experiment, in particular, aims at measuring 1) the total proton-proton cross section with a relative precision of about one per cent; 2) elastic proton-proton scattering over a wide range of the four-momentum transfer squared, t ≈ −(pθ)², p and θ being respectively the proton momentum and scattering angle in the c.m.s. Furthermore, by combining TOTEM with CMS and its forward additions, the CASTOR and ZDC calorimeters, a full forward physics program is planned.

The TOTEM experiment will measure elastic scattering out to large |t| and will provide crucial new understanding of the phenomena in the high-|t| elastic scattering regime. To discriminate between different models, it is important to measure elastic scattering in the widest possible kinematic region. In most of the previous measurements of the total cross section (at the ISR, SPS, Tevatron and RHIC), the value of the ρ parameter was imported either from another experiment or from model calculations. As argued in Refs. [72, 73], known variations of the value of ρ have a negligible effect on the resulting value of σ_tot. This is in disagreement with Ref. [74], where it was argued that a small variation of ρ may affect the resulting σ_tot significantly. If this is true, a simultaneous fit to all inputs (σ_tot, ρ, B and L) from a single experiment is needed. Publication of direct and unbiased data on dσ/dt is highly welcome.

There are two possible kinds of luminosity measurements: one yields an absolute value serving as a point of reference, the other gives a relative value as a function of time. The latter measurement will be performed by ATLAS using a special detector called LUCID (Luminosity measurement Using Cherenkov Integrating Detectors). The idea of the LUCID detector is explained e.g. in Ref. [75]: it counts tracks in the very forward region, and the number of tracks counted in the detector is linearly related to the instantaneous luminosity (see Table 2).

Measurements in the Coulomb-nuclear interference (CNI) region, where the Coulomb and hadronic amplitudes are of comparable magnitude, will be used to extract the value of the ρ parameter from Eq. (8).

The extracted value of may be affected by at least two phenomena. One is connected with the well known corrections [77] to the Coulomb-hadron phase [78] and the other one with the non-exponential behavior of the diffraction cone, known as the “fine structure of the Pomeron trajectory” [23, 24, 25, 39, 40, 41, 42]. Below, in Sec. 4, we shall come back to this point. The program of studying this “fine structure of the Pomeron trajectory” is among the priorities of the ATLAS collaboration [73, 75].

The ALICE experiment is designed as a general purpose experiment with a central barrel covering the mid-rapidity region and a muon spectrometer covering forward rapidities, operating at the luminosities foreseen for pp and heavy-ion collisions, respectively, as well as with asymmetric (proton-nucleus) systems (see also Refs. [79, 80, 81]).

Moreover, the experimental program of ALICE will to a large extent be oriented towards inelastic reactions, e.g. by studying their dependence on the width of the rapidity gap.

It should be noted that all these values are approximations: some of them, such as ρ and B, are defined at t = 0, while others, such as σ_tot, are integrated over the whole angular region; i.e. all of them were obtained under certain theoretical assumptions.

Tables 3 and 4 [82] show the range of the measured and the expected values of σ_tot above the ISR energies, with the divergence of the existing data on σ_tot at the same energy quoted in Table 2. One can see that the extraction of σ_tot from the differential cross section is a complicated problem and that the use of different models can change the predicted value of the total cross section at the LHC energies considerably (see Table 3). Therefore, one cannot use the theoretical predictions of, say, ρ or B, to extract other observables from the experimental data on dσ/dt. With the exception of the UA4 and UA4/2 Collaborations, the numerical data on dσ/dt of other collaborations were not published, excluding any cross-check or improvement of these results. We hope that the future LHC data on dσ/dt will be published.

In Ref. [75] significant correlations between the values of σ_tot and ρ are also visible. The results of fits to simulated LHC data in the framework of a non-exponential model of the hadron scattering amplitude, presented at “EDS-07” [82] for the LHC energies, show large errors in the determination of σ_tot and ρ (see Tables 3 and 4).

The (weak) dependence of dσ/dt on ρ, when ρ is very small, comes from the coefficient (1 + ρ²) multiplying σ_tot² (see Eq. (14)), while the strong dependence of the normalization of dσ/dt on the values of σ_tot and B comes from the extraction of the Coulomb and Coulomb-hadron interference terms from the data. The Coulomb-hadron interference term is proportional to ρ plus the Coulomb-hadron phase. At very small |t| the Coulomb-hadron phase is also important. Corrections to the Bethe formula [94] were calculated in Refs. [95, 96, 97, 98]. A detailed analysis of the role of this term in the behaviour of the differential cross sections was carried out in Ref. [92]. To make the analysis complete, the Odderon contribution [23, 24, 25] as well as the nearby threshold singularity at t = 4m_π² [39] should also be taken into account.

To minimize the errors, in some experiments the value of σ_tot or ρ was fixed from other measurements. This was done, for example, by the UA4/2 Collaboration, which extracted ρ by using the value of σ_tot obtained by the UA4 Collaboration. However, the value of σ_tot obtained by the UA4/2 Collaboration itself turned out to be different; with such a value of σ_tot the resulting ρ becomes larger. This situation was analysed in Refs. [99, 69], with the results shown in Table 5.

Diffraction dissociation, in particular at low mass, is among the priorities of the first experiments at the LHC. It should be remembered that diffraction dissociation - both single and double - and elastic scattering have much in common. Similarities are expected in the shape of the diffraction cone, with its fine structure (the “break” and oscillations) open for observation, see Ref. [100], and in the dip-bump structure (until now not seen in diffraction dissociation). Hence studies of elastic scattering and of diffraction dissociation are complementary. A recent overview of the ALICE detector and trigger strategy for diffractive and electromagnetic processes at the LHC can be found in Refs. [79, 80, 81].

## 4 The forward cone

### 4.1 The “break” at very small |t|

An essential part of the future TOTEM+CMS and ATLAS experiments is connected with the measurement of the elastic differential cross section at small momentum transfer, with the purpose of extracting from these data the value of the total cross section. An important point of this procedure is the simultaneous measurement of four quantities: the luminosity (or the normalization coefficient), the total cross section σ_tot, the slope B, defined as

 B(s,t) = \frac{d}{dt}\,\log\frac{d\sigma(s,t)}{dt}, \qquad (19)

and the ratio ρ of the real to the imaginary part of the forward amplitude. The last two quantities have an a priori unknown t-dependence. Consequences of this complexity are the contradictions between the values of ρ obtained in different experiments in the same energy range [99], in particular the large difference between the values of ρ measured in the UA4 and UA4/2 experiments at the same energy, as well as the discrepancy between the values of σ_tot measured at the Tevatron. This discrepancy results from correlations between the several unknown quantities measured simultaneously in a single experiment, instead of fixing some of them from phenomenological analysis or from other experiments.
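As a simple numerical illustration of the definition in Eq. (19), the local slope can be estimated directly from tabulated dσ/dt points by finite differences of the logarithm. The values of A and B below are purely illustrative, not fitted parameters:

```python
import math

def local_slope(t_vals, dsdt_vals):
    """Estimate B(t) = d/dt log(dsigma/dt) by central finite differences."""
    B = []
    for i in range(1, len(t_vals) - 1):
        dlog = math.log(dsdt_vals[i + 1]) - math.log(dsdt_vals[i - 1])
        B.append(dlog / (t_vals[i + 1] - t_vals[i - 1]))
    return B

# A purely exponential cone dsigma/dt = A*exp(B*t) must return a constant slope.
A, B_true = 100.0, 20.0                  # illustrative values, GeV-based units
t = [-0.05 * i for i in range(1, 11)]    # t < 0, in GeV^2
dsdt = [A * math.exp(B_true * ti) for ti in t]
print(local_slope(t, dsdt))              # ~20.0 at every interior point
```

Any genuine t-dependence of B (the "break", oscillations) would show up as a deviation of this local estimate from a constant.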

This situation was demonstrated, for example, in Ref. [99]. Since we do not know the true t-dependence of B and of ρ, one cannot use the same constant values for different small intervals of t. Therefore, simultaneous measurements of all unknown observables in a single experiment are needed, rather than their substitution by values taken either from phenomenological models or from other experiments, as illustrated e.g. in Refs. [99, 69], where compatible values of ρ, different from the experimental ones, were obtained.

Given the uncertainty in the s- and t-dependence of the observables, the standard procedure is to assume their smooth and monotonic behavior from low to super-high energies. Along these lines, it is usually proposed to measure σ_tot taking the value of ρ from the results of theoretical analyses (for example, that of the COMPETE Collaboration [86]), although, as argued in Refs. [3, 4, 5, 6], any consistent analysis should include a simultaneous fit based on the expression of Eq. (8).

In the ISR energy region, the diffraction cone changes its slope near |t| ≈ 0.1 GeV² by about 2 units of GeV⁻². In Ref. [39] this phenomenon was interpreted as a manifestation of t-channel unitarity (a two-pion loop, see Fig. 1) and, in terms of Pomeron exchange, it was modeled by the inclusion of the relevant threshold singularity in the trajectory:

 \alpha_P(t) = \alpha_0 + \alpha_1 t - \alpha_2\sqrt{4m_\pi^2 - t}, \qquad (20)

where m_π is the pion mass. It will be interesting to see whether this effect persists at the LHC and thus confirms the idea of Ref. [39].
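The effect of the square-root threshold term in Eq. (20) can be illustrated numerically. In this Python sketch the parameters α₀, α₁, α₂ are illustrative placeholders, not the fitted values of Ref. [39]; the point is only that the threshold term steepens the local slope of α_P(t) just below t = 4m_π²:

```python
import math

M_PI = 0.13957  # charged pion mass, GeV

def alpha_P(t, a0=1.08, a1=0.25, a2=0.01):
    """Pomeron trajectory of Eq. (20) with illustrative parameters."""
    return a0 + a1 * t - a2 * math.sqrt(4.0 * M_PI**2 - t)

def slope(t, h=1e-5):
    """Local slope d(alpha_P)/dt by a central finite difference."""
    return (alpha_P(t + h) - alpha_P(t - h)) / (2.0 * h)

# The derivative of the square-root term grows as t approaches 4*m_pi^2,
# so the trajectory is steeper near the threshold than at large |t|:
print(slope(-0.01), slope(-1.0))
```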

As shown in Ref. [100], the “break”, or the fine structure of the Pomeron, in principle, can be seen directly, unbiased by electromagnetic interactions, in proton-neutron scattering.

### 4.2 Oscillations

An important phenomenon that should be scrutinized at the LHC is the possible appearance of small-period oscillations on top of the smooth exponential cone. Possible deviations from a simple exponential behaviour of the hadron-hadron scattering amplitude were discussed in the literature long ago. For example, it was shown in Ref. [101] that peripheral contributions from inelastic diffraction result in large- and small-period oscillations in the momentum transfer. The first attempt to verify these oscillations was made in an experiment at Serpukhov, where oscillations were detected [102] and then discussed in Ref. [103]. An alternative view, interpreting these oscillations as an artifact connected with the t-dependence of the slope, was put forward in Ref. [104]. In Ref. [105] the statistical nature of the possible oscillations in the ISR data was analyzed in the framework of the Dubna Dynamical (DD) model (for the DD model see Subsection 5.4). The results are shown in Fig. 5.

The effect was first observed at the ISR and subsequently discussed in Refs. [106, 107]. Although it has not yet been confirmed unambiguously, it continues to attract attention (see Refs. [106, 107] and references therein). The analysis of the slope [106, 107] shows possible oscillations in the UA4/2 data. The super-fine structure (oscillations) of the cone may be related to a residual, long-range interaction between nucleons [108] or to the action of a potential of rigid hadronic strings [109].

A new method for the analysis of the experimental data of the UA4/2 experiment was proposed in Ref. [110]. The method is based on the comparison of two statistically independent sets [111]. If we have two statistically independent sets {x′} and {x′′} of values of a quantity distributed around a certain mean, with the standard error equal to δ, we can try to find the difference between them. For that we compare the arithmetic means of the two choices:

 \Delta X = (x'_1 + x'_2 + \dots + x'_{n_1})/n_1 - (x''_1 + x''_2 + \dots + x''_{n_2})/n_2 = \bar{x}'_{n_1} - \bar{x}''_{n_2}.

The standard deviation of ΔX in this case will be

 \delta_{\bar{x}} = \delta\,[1/n_1 + 1/n_2]^{1/2}.

For purely statistical noise the value of ΔX tends to zero. However, if there is some additional signal, this value will differ from zero. When it is significantly larger than δ_x̄, one can say that the difference between the two choices confirms, with a corresponding probability, the presence of the oscillations (for more details see Ref. [110]).
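The logic of this test can be sketched in a few lines of Python on synthetic data (the sample sizes and the injected shift are arbitrary): for pure noise ΔX stays within δ_x̄, while a coherent shift in one sample pushes it far beyond:

```python
import math
import random

def delta_X(xs1, xs2):
    """Difference of the arithmetic means of two independent samples."""
    return sum(xs1) / len(xs1) - sum(xs2) / len(xs2)

def sigma_delta(delta, n1, n2):
    """Standard deviation of delta_X for samples with common error delta."""
    return delta * math.sqrt(1.0 / n1 + 1.0 / n2)

random.seed(1)
err = 1.0
# Pure noise around the same mean: delta_X is compatible with zero.
a = [random.gauss(0.0, err) for _ in range(500)]
b = [random.gauss(0.0, err) for _ in range(500)]
# An extra coherent signal in one set shifts delta_X beyond sigma_delta.
c = [random.gauss(0.5, err) for _ in range(500)]
print(abs(delta_X(a, b)) / sigma_delta(err, 500, 500))  # typically O(1)
print(abs(delta_X(a, c)) / sigma_delta(err, 500, 500))  # well above 1
```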

The deviation of each experimental point of the differential cross section from the corresponding theoretical value is measured in units of the experimental error δ_i^exp:

 \Delta R_i = \left[(d\sigma/dt_i)^{\rm exp} - (d\sigma/dt_i)^{\rm th}\right]/\delta_i^{\rm exp}. \qquad (21)

When summed over all 99 experimental points of the UA4/2 experiment, the result should tend to zero, as statistical deviations to either side of the theoretical curve are equally probable. However, if the theoretical curve does not precisely describe the experimental data, for example if the scattering amplitude deviates from exponential behavior in the momentum transfer, the sum over i can differ slightly from zero, beyond the statistical error. To take this effect into account, we divide the whole interval of momentum transfer into equal pieces of size Δq and sum separately over the odd and even pieces. Thus we obtain two sums, L_up and L_down, for the odd and even intervals, respectively; for example,

 L_{\rm up} = \sum_{j=1}^{n_1}\Big(\sum_i^{99} \Delta R_i\Big)\Big|_{\Delta q\,(2j-1)}.

In Ref. [110], where this method was applied to the data of the UA4/2 Collaboration, a fixed bin size Δq was used.
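A toy implementation of the odd/even summation, with a synthetic oscillation of period 2Δq injected on top of an exponential cone (all numbers are illustrative, and the 1-based bin numbering is a convention of this sketch, following the odd pieces Δq(2j−1) entering L_up):

```python
import math

def delta_R(dsdt_exp, dsdt_th, err):
    """Deviation of each data point from theory in units of its error, Eq. (21)."""
    return [(e - t) / d for e, t, d in zip(dsdt_exp, dsdt_th, err)]

def odd_even_sums(t_vals, dR, dq):
    """Sum Delta R_i separately over odd- and even-numbered |t| bins of width dq."""
    L_up = L_down = 0.0
    for t, r in zip(t_vals, dR):
        j = int(abs(t) / dq) + 1      # 1-based number of the bin holding this point
        if j % 2 == 1:                # odd bins -> L_up, even bins -> L_down
            L_up += r
        else:
            L_down += r
    return L_up, L_down

# Toy data: a small oscillation of period 2*dq on top of an exponential cone.
dq = 0.05
t = [-0.005 * i for i in range(1, 100)]
th = [math.exp(8.0 * ti) for ti in t]
exp_ = [v * (1.0 + 0.02 * math.sin(math.pi * abs(ti) / dq)) for v, ti in zip(th, t)]
err = [0.01 * v for v in th]
up, down = odd_even_sums(t, delta_R(exp_, th, err), dq)
print(up > 0.0, down < 0.0)   # the two sums acquire opposite signs
```

For pure noise both sums would fluctuate around zero, while a periodic signal in phase with the binning drives them apart.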

Let us calculate the quantities L_up and L_down; the results are shown in Fig. 6 by the full and dash-dotted lines. It can be seen that at small |t| these quantities change drastically, while at larger |t| they vary only slightly. This means that the amplitude of a possible periodic structure decreases with growing |t|.

Note that this new method can also be used to check the correctness of the determination of the parameters of elastic scattering at small |t|: the two curves obtained have to be symmetric with respect to the line calculated using the basic parameters.

A way to identify possible oscillations at the LHC is to use the method of overlapping bins, suggested by J. Kontros and Lengyel in Ref. [106]. The procedure consists in scanning the cone by overlapping bins in t, each containing a certain number of data points and shifted bin-to-bin by a small number of points. Within each bin an exponential fit is applied [106, 107] (see Fig. 7).

The bin should not be too long, so as not to oversimplify the parameterization, and not too short, so that it contains a reasonable number of points for each process. A suitable bin-to-bin shift was found in Ref. [106].
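The overlapping-bins scan is easy to sketch: fit log(dσ/dt) linearly inside each bin and step the bin forward by a few points. The bin size and shift below are illustrative, not those of Refs. [106, 107]:

```python
import math

def fit_exponential(t_vals, y_vals):
    """Least-squares fit of log(y) = log(A) + B*t; returns (A, B)."""
    n = len(t_vals)
    ly = [math.log(y) for y in y_vals]
    st, sl = sum(t_vals), sum(ly)
    stt = sum(ti * ti for ti in t_vals)
    stl = sum(ti * li for ti, li in zip(t_vals, ly))
    B = (n * stl - st * sl) / (n * stt - st * st)
    A = math.exp((sl - B * st) / n)
    return A, B

def scan_overlapping_bins(t_vals, y_vals, bin_size=10, shift=2):
    """Local slope B in overlapping bins of bin_size points, stepped by `shift`."""
    slopes = []
    for start in range(0, len(t_vals) - bin_size + 1, shift):
        _, B = fit_exponential(t_vals[start:start + bin_size],
                               y_vals[start:start + bin_size])
        slopes.append(B)
    return slopes

# For a pure exponential cone every bin must return the same slope;
# oscillations would show up as a periodic modulation of the local B.
t = [-0.01 * i for i in range(1, 60)]
y = [120.0 * math.exp(14.0 * ti) for ti in t]
print(scan_overlapping_bins(t, y))   # ~14.0 in every bin
```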

Looking for oscillations in the cone at LHC by using the method of overlapping bins of Ref. [106, 107] is a promising program for future experiments. As already mentioned, the oscillations can be related to residual long-range forces between nucleons [108].

## 5 The Dip-Bump Region, t∼−1 GeV2

Before going into details, we would like to note a model-independent regularity found in Ref. [114]. As shown in that paper, there exists a correlation between the value of ρ and the depth of the minimum of the diffractive differential cross section in proton-proton and proton-antiproton scattering. The ratio ρ for p̄p scattering changes sign in a certain energy region; at this energy the differential cross section of p̄p scattering has its sharpest dip-bump minimum. In pp scattering ρ changes sign at a different energy, at which, in turn, the differential cross section of pp scattering has its sharpest minimum.

At the highest ISR energies all models and experiments agree on the sign of ρ. The experiments also show that at these energies the dip in p̄p scattering lies higher than in pp scattering.

The observed dip-bump structure in the high-energy differential cross section was not predicted by any model or theory. It can be related e.g. to the multiple scattering of quarks and gluons (Glauber theory), leaving however much room for speculations on the basic inputs in this approach. The optical model (see Ref. [115] and earlier references therein) predicted a sequence of minima and maxima; however, the ISR and SPS data show only one structure, both in pp and in p̄p scattering. Some models [116] predict that more structure will appear at higher energies, for example at the LHC. The (non)appearance of additional minima and maxima at the LHC will confirm or rule out part of the existing models, although most of them will survive by refitting their parameters a posteriori.

Two models [22, 117, 118], fitted to the data in a wide range of s and t, can illustrate the state of the art in this field (see, for instance, Fig. 8). While the authors of Ref. [22] claim that fits require the presence of an Odderon contribution, those of Refs. [117, 118] do not need it. By this we only want to stress that the flexibility of the existing models allows for good “postdictions”, but their predictions are not unique. In this situation an empirical parameterization [119], unbiased by any theoretical prejudice, can be useful.

By definition, any diffractive pattern in dσ/dt is a property of the (predominantly imaginary) Pomeron contribution, rather than of the “non-diffractive” Odderon, whose amplitude is predominantly real and thus can only “contaminate” the diffractive pattern.

### 5.1 A simple model of the diffraction pattern

Data on pp scattering below the LHC energy region can be described by a Pomeron and sub-leading contributions. From the phenomenological point of view, the need for the Odderon became important only after the SPS data on p̄p scattering appeared, in which the dip seen in pp scattering appeared as a “shoulder”, suggesting that the diffractive minimum was filled by the Odderon contribution. This is not a proof, just a plausibility argument. Below we illustrate the dynamics of the dip and shoulder in pp and p̄p scattering by using a simple (“minimal”) dipole Pomeron (DP) model to “guide the eye”. It reproduces the observed dynamics of the dip and will help to anticipate the LHC phenomena.

By neglecting spin effects, the scattering amplitude for and scattering is written as a sum

 A^{\bar p p}_{pp}(s,t) = P(s,t) \pm O(s,t), \qquad (23)

where P(s,t) is the Pomeron contribution and O(s,t) is that of the Odderon. For the DP amplitude we use the following “minimal” model [23, 24, 120, 26, 27]

 (24)

where α(t) is the Pomeron trajectory and the remaining quantities are free parameters. The model produces rising cross sections without violation of the Froissart bound, as well as the dip-bump structure seen in pp scattering with its non-trivial dynamics observed at the CERN ISR [23, 24, 120, 26, 27]. The absence of a similar structure (replaced by a “shoulder”) in p̄p scattering at the CERN SPS was interpreted [23, 24, 26, 27] as a manifestation of the Odderon, filling the dip produced by the Pomeron term (for reviews see Refs. [10, 38]). A high-quality description of elastic high-energy data with predictions for future accelerators can be found in Ref. [116]. In an alternative successful approach [121] there are several Pomerons, whose interference produces the dip-bump structure and perfect agreement with the data.

Since little is known about the properties of the Odderon, apart from its assumed asymptotic nature, one usually parameterizes it in a form close to that of the Pomeron given in Eq. (24). However: 1) the Odderon is C-odd, which implies a different signature factor in front of the amplitude; 2) its relative contribution is orders of magnitude smaller than that of the Pomeron (until now the Odderon was not seen in the forward direction); 3) the slope of the Odderon trajectory is much smaller than that of the Pomeron. There are two reasons for the latter: one is based on theoretical arguments [23, 24] and the other is phenomenological: the “flat” shoulder in p̄p scattering, seen at the SPS, could be a manifestation of the Odderon.

Let us recall that the dip in the ISR energy region monotonically deepens, reaching its maximal depth at the top ISR energy, whereafter the monotonic trend changes, albeit observed at a single energy only. Similar to the unique case of the measured [34, 35] difference between the pp and p̄p amplitudes, mentioned in the Introduction, this phenomenon needs confirmation. Regrettably, measurements of the difference between pp and p̄p amplitudes are not foreseen in the near future.

Fig. 9 shows the differential cross section of pp scattering at two energies calculated from Eqs. (23) and (24) (full line), compared with a fit from Ref. [83] (broken line). Details of the calculations and the values of the fitted parameters can be found in Refs. [23, 24, 26, 27, 120].

It should be remembered that the hypothetical Odderon enters Eq. (6) with different signs in the pp and p̄p amplitudes, thus distorting the pure Pomeron contribution in both cases. Due to its small slope (flatness of its trajectory), the role of the Odderon increases with increasing |t|, the phenomenon reaching its maximum in the dip region.

The ratio of the cross section at the minimum to that at the subsequent maximum (the depth of the dip) is more informative than its absolute value. A value of this ratio even greater than at the ISR would indicate a continued monotonic deepening of the dip and would disfavour the Odderon contribution, while a shallowing of the dip would favour the presence of the Odderon.

At this point it may be appropriate to introduce several known models describing the diffractive pattern (the dynamics of the dip-bump structure). Before doing so, two points are worth mentioning: 1) one still lacks a theoretical understanding of the phenomenon and must rely on models; 2) among the large number of models, only a few are able to fit properly the large amount of high-statistics data that exist in a wide range of s and t for pp and p̄p scattering. A collection/review of the existing models for high-energy scattering with a critical evaluation of their predictions is highly desirable; however, this task is beyond the scope of the present paper. We apologize to those authors whose papers, for reasons of space, are not cited here.

### 5.2 The “Protvino” model

One of the examples of eikonal models of diffraction is the “Protvino” model [92, 121].

The unitarity condition

 \mathrm{Im}\,T(s,\vec b) = |T(s,\vec b)|^2 + \eta(s,\vec b),

where T(s, b⃗) is the scattering amplitude in the impact parameter representation, b⃗ is the impact parameter and η(s, b⃗) is the contribution of the inelastic channels, implies the following eikonal form for the scattering amplitude:

 T(s,\vec b) = \frac{e^{2i\delta(s,\vec b)} - 1}{2i}, \qquad (25)

where δ(s, b⃗) is the eikonal function. The unitarity condition in terms of the eikonal reads:

 \mathrm{Im}\,\delta(s,\vec b) \geq 0, \qquad s > s_{\rm inel}. \qquad (26)

The eikonal function is assumed to have simple poles in the complex j-plane, and the corresponding Regge trajectories are normally used in the linear approximation

 \alpha(t) = \alpha(0) + \alpha'(0)\,t. \qquad (27)

The following representation for the eikonal function is used:

 \delta^{\bar p p}_{pp}(s,b) = \delta^{+}_{P_1}(s,b) + \delta^{+}_{P_2}(s,b) + \delta^{+}_{P_3}(s,b) \mp \delta^{-}_{O}(s,b) + \delta^{+}_{f}(s,b) \mp \delta^{-}_{\omega}(s,b), \qquad (28)

where δ⁺_{P_i} are the Pomeron contributions; “+” denotes C-even trajectories (the Pomeron is C-even) and “−” denotes C-odd trajectories; δ⁻_O is the Odderon contribution (the Odderon is the C-odd partner of the Pomeron); δ⁺_f and δ⁻_ω are the contributions of the secondary Reggeons f (C = +1) and ω (C = −1).

The parameters of secondary Reggeon trajectories are fixed according to the parameters obtained from a fit of the meson spectrum:

 \alpha_f(t) = 0.69 + 0.84\,t, \qquad \alpha_\omega(t) = 0.47 + 0.93\,t. \qquad (29)

The model fits high-energy elastic pp and p̄p scattering data. The data are well described over the whole measured ranges of momentum transfer and energy. The model predicts the appearance of two dips in the differential cross section, which will be measurable at the LHC. The parameters of the Pomeron and Odderon trajectories are determined from the fit.

The model predicts (see Fig. 10) the following values of total and elastic cross sections at the LHC:

 \sqrt{s} = 14\ \mathrm{TeV}: \qquad \sigma^{pp}_{\rm tot} = 106.73\,^{+7.56}_{-8.50}\ \mathrm{mb}, \qquad \sigma^{pp}_{\rm elastic} = 29.19\,^{+3.58}_{-2.83}\ \mathrm{mb}, \qquad \rho^{pp} = 0.1378\,^{+0.0042}_{-0.0061}. \qquad (30)

On the right panel of Fig. 10 one can see a typical “diffractive” pattern of the differential cross-section also present in other models based on unitarisation [122, 123, 124, 117, 118, 125] (see also Fig. 9 and Fig. 11).

### 5.3 The “Connecticut” model

A model based on a physical picture of the nucleon having an external cloud, an inner shell of baryonic charge, and a central quark-bag containing the valence quarks was proposed in Ref. [93]. The underlying field theory model is the gauged Gell-Mann-Levy linear σ-model with spontaneous breakdown of chiral symmetry, with a Wess-Zumino-Witten (WZW) anomalous action. The model attributes the external nucleon cloud to a quark-antiquark condensed ground state analogous to the BCS ground state in superconductivity, an idea first proposed by Nambu and Jona-Lasinio. The WZW action implies that the baryonic charge is geometrical or topological in nature, which is the basis of the Skyrmion model. The action further shows that the ω vector meson couples to this topological charge like a gauge boson, i.e. like an elementary vector meson. As a consequence, one nucleon probes the baryonic charge of the other via ω-exchange. In pp elastic scattering in the small momentum transfer region, the outer cloud of one nucleon interacts with that of the other, giving rise to diffraction scattering. As the momentum transfer increases, one nucleon probes the other at intermediate distances and ω-exchange becomes dominant. At even larger momentum transfers, one nucleon scatters off the other via valence quark-quark scattering.

Diffraction is described by using the impact parameter representation and a phenomenological profile function:

 T_D(s,t) = i p W \int_0^\infty b\, db\, J_0(b q)\, \Gamma_D(s,b). \qquad (31)

Here q is the momentum transfer (t = −q²) and Γ_D(s, b) is the diffraction profile function, which is related to the eikonal function χ(s, b) by Γ_D(s,b) = 1 − e^{iχ(s,b)}. Γ_D(s,b) is taken to be an even Fermi profile function:

 \Gamma_D(s,b) = g(s)\left[\frac{1}{1 + e^{(b-R)/a}} + \frac{1}{1 + e^{-(b+R)/a}} - 1\right]. \qquad (32)

The parameters R and a are energy dependent:

 R = R_0 + R_1\left(\ln\frac{s}{s_0} - i\frac{\pi}{2}\right), \qquad a = a_0 + a_1\left(\ln\frac{s}{s_0} - i\frac{\pi}{2}\right);

g(s) is a complex, crossing-even, energy-dependent coupling strength and s_0 is a fixed scale.
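The shape of the Fermi profile of Eq. (32) can be checked numerically; with the illustrative real parameters below (not the fitted, complex values of Ref. [93]) the profile is close to g inside b < R, i.e. a nearly black central region, and falls off rapidly over the surface thickness a:

```python
import cmath

def gamma_D(b, g, R, a):
    """Even Fermi profile of Eq. (32); R and a may in general be complex."""
    return g * (1.0 / (1.0 + cmath.exp((b - R) / a))
                + 1.0 / (1.0 + cmath.exp(-(b + R) / a)) - 1.0)

# Illustrative real parameters (hypothetical, for shape only), b in GeV^-1:
g, R, a = 1.0, 2.5, 0.5

print(abs(gamma_D(0.0, g, R, a)))   # close to g = 1 at the center
print(abs(gamma_D(10.0, g, R, a)))  # close to 0 far outside the cloud
```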

For the diffraction amplitude the following asymptotic properties are obtained:

1. σ_tot(s) grows as ln²(s/s₀) (Froissart-Martin bound)