TASI Lectures on Future Colliders
Abstract
These lectures review the main motivations for future high-energy colliders, focusing on the understanding of electroweak symmetry breaking and on the search for physics beyond the Standard Model. The open questions and the challenges are common to all future projects; for concreteness, I will use studies of the potential of $e^+e^-$ and pp circular colliders to provide examples of the anticipated physics reach.
CERN-TH-2019-073
Michelangelo L. Mangano
TH Department, CERN, 1211 Geneva, Switzerland
Email: michelangelo.mangano@cern.ch
Contents
 1 Introduction
 2 Where we stand
 3 The way forward with future colliders
 4 Higgs boson properties
 5 Precision EW measurements
 6 Precision versus sensitivity
 7 The Higgs potential
 8 Direct searches for new physics
 9 Final remarks
 A Simple features of the energy scaling of hard cross sections in hadronic collisions
 B FCC parameters
1 Introduction
Several projects for future high-energy particle colliders are under consideration in various regions worldwide, to complement and extend the physics reach of CERN’s Large Hadron Collider (LHC). These include:

the Future Circular Collider (FCC [9, 10, 11, 12]) and the CEPC/SppC [13, 14], two similar projects promoted by CERN and China respectively, envisioning a staged facility, enabled by a 100 km circular ring, designed to deliver $e^+e^-$ collisions at energies in the range 90–365 GeV, pp collisions at up to 100 TeV, ep collisions at 3.5 TeV, as well as heavy-ion collisions.
This variety of layouts (circular or linear), beam types (electrons or protons) and energies reflects slightly different priorities for the physics targets and observables, as well as different judgements on the overall balance between physics returns, technological challenges and feasibility, time scales for completion and exploitation, and financial/political realities.
If approved today, the projects in this list could in principle begin delivering physics results at some point during the decade 2030–40, and operate for the following 15–25 years, depending on the technology and upgrade path. Beyond this, but with an unspecified time scale, ideas are on the table for a possible subsequent generation of even more ambitious lepton-collider projects, which I only mention here: linear electron accelerators based on multi-GeV/m gradient technologies like plasma wakefields or lasers [15], and muon circular colliders [16, 17].
The FCC and SppC proton colliders would face a preparatory phase longer than the $e^+e^-$ colliders, mostly because of the R&D period required to produce reliable and affordable superconducting (SC) bending magnets with the 16 T magnetic field needed to keep 50 TeV protons in orbit in the 100 km ring (a 12 T option, based on high-temperature superconductors, is considered for SppC, but is itself far from being established).
No matter what the energy or the technology, all these projects share common goals, driven by the need to clarify several outstanding open issues in particle physics. This need singles out the next generation of colliders beyond the LHC as unique and indispensable exploratory tools to continue driving the progress in our understanding of nature.
In these lectures, I will review the main motivations for future high-energy colliders and discuss their physics potential. I will mostly cover topics such as Higgs physics and physics beyond the Standard Model (BSM), and, while I will recall some basic theoretical background, I will take for granted that students know these topics from their studies, or from other lectures in the TASI program. In particular, in 2018 students were exposed, among others, to excellent lectures on the Higgs boson [18], supersymmetry and dark matter [19], QCD at colliders [20], effective field theory [21] and flavour [22]. I refer to these, and to the great lectures on LHC physics by our host Tilman Plehn [23], for the necessary background material.
I will illustrate the value of the physics reach through concrete examples of the FCC physics potential. No attempt is made here to compare the FCC against the other projects, as the point of these lectures is not to promote one project over another: I choose here the FCC since it is the project that I know better, and the one that, in terms of breadth, variety and physics performance, best illustrates how ambitious the targets of a future collider can be.
I will also include a few exercises here and there. They are meant to stimulate your thinking: they are simple, do not necessarily require calculations, and are mostly intended for qualitative discussion. But if you take them seriously, some of them could be the seed for interesting work!
2 Where we stand
In almost ten years of studies at the LHC, the picture of the particle physics landscape has greatly evolved. The legacy of this first phase of the LHC physics programme can be briefly summarised in three points: a) the discovery of the Higgs boson, and the start of a new phase of detailed studies of its properties, aimed at revealing the deep origin of electroweak (EW) symmetry breaking; b) the indication that signals of new physics around the TeV scale are, at best, elusive; c) the rapid advance of theoretical calculations, whose constant progress and reliability underline the key role of ever-improving precision measurements, from the Higgs to the flavour sectors. Last but not least, the LHC’s success has been made possible by the extraordinary achievements of the accelerator and of the detectors, whose performance is exceeding all expectations, supporting confidence in the ability of the next generation of colliders to achieve what they promise.
2.1 The puzzling origin of the Higgs field
The years that preceded the discovery of the Higgs boson have been characterized by a general strong belief in its existence, justified by the success of the Standard Model (SM), and by the confidence that EW symmetry breaking (EWSB) is indeed driven by the basic dynamics of the Higgs mechanism, as described in the SM. Starting from this assumption, the theoretical speculations focused on identifying possible solutions to the hierarchy problem, namely the extreme fine tuning required to achieve the decoupling of the Higgs and EW mass scale from the phenomena expected to emerge at much higher energy scales, up to the Planck scale. These speculations led to the consideration of several possible scenarios of new physics, from supersymmetry to large extra dimensions, which would provide natural solutions to the hierarchy problem by introducing new degrees of freedom, new symmetries, or new dynamics at the TeV scale. The opportunity to combine the solution of the hierarchy problem with the understanding of experimental facts such as the existence of dark matter, or of the features of flavour phenomena, gave further impetus to these theoretical efforts, and to the many experimental studies dedicated to the search for BSM manifestations.
The conceptual simplicity and appeal of several of these scenarios justified optimism that their concrete manifestations would appear “around the corner”. After all, the SM itself was born as the simplest possible model in which to embed an elegant explanation (the Higgs mechanism) of the problem of justifying the masses of the weak force carriers and parity non-conservation, and this simple framework DID work! Why shouldn’t the next step beyond the SM be accomplished by similarly elegant and “simple” proposals?
Lack of evidence for new physics at the TeV scale, made even more compelling by the hundreds of inconclusive searches scrupulously carried out by the LHC experiments, has not removed, however, the need to continue addressing the original motivations for a BSM extension of the SM. If anything, this has made the open issues even more intriguing, and challenging. But, while the efforts to review the underlying perspective on the hierarchy problem and naturalness continue, we should focus on a perhaps even more basic question: who ordered the Higgs? Where does the famous “Mexican hat” Higgs potential come from? This may appear a pointless, trivial question. A sort of Mexican-hat potential must be there; it is a necessary ingredient in the realization of EWSB, and without it we would not be here discussing it. But what is its true origin?
To understand the value of this question, it is useful to compare the dynamics of the Higgs field with that of electromagnetism (EM), or of any of the other known fundamental forces in nature. All properties of EM arise from a simple principle, the gauge principle. Coulomb’s law has no free parameter, except the overall scale of the electric charge, absorbed in the definition of the charge unit. The quantization of the charge may have a deep origin in quantum mechanical properties such as anomaly cancellation, or in the algebraic structure of the representations of larger gauge groups in which EM is embedded. The sign of the electric force, positive or negative depending on the relative sign of the interacting charges, follows from the spin-1 nature of the photon. The $1/r^2$ behaviour follows from Gauss’s theorem, or charge conservation, or gauge invariance, depending on how we want to phrase it. We do not know why nature has chosen gauge symmetry as a guiding principle, although this appears as an unavoidable consequence of the existence of interactions mediated by massless particles, which are the basis of the long-range forces needed to sustain our existence. But gauge symmetry appears everywhere, e.g. in the zero modes of a string theory, or as a result of compactification in Kaluza-Klein gravitational theories. Gauge symmetry is therefore intimately related to possible deeper properties of nature.
On the contrary, nothing is fundamental in the Higgs potential: there doesn’t seem to be any fundamental symmetry or underlying principle that controls its structure. To fix the notation for further use, we shall write this potential as
(2.1)   $V(H) = -\mu^2\, |H|^2 + \lambda\, |H|^4$
where $H$ is the SU(2) doublet scalar field and $h$ is the real part of the neutral component ($v$ being its vacuum expectation value). The condition of minimum of the potential ($v^2 = \mu^2/\lambda$) and the Higgs mass definition ($m_H^2 = 2\lambda v^2 = 2\mu^2$) lead to the relations $\lambda = m_H^2/2v^2 \simeq 0.13$ and $\mu = m_H/\sqrt{2} \simeq 88$ GeV, expressed in terms of the measured Higgs mass and of Fermi’s coupling, via $v = (\sqrt{2}\, G_F)^{-1/2} \simeq 246$ GeV. The sign and value of the parameters $\mu^2$ and $\lambda$ are a priori arbitrary. A negative sign in front of the quadratic term is required to achieve symmetry breaking, but is not required by any symmetry. The positive sign of $\lambda$ is necessary for the stability of the potential at large $|H|$ but, again, is not dictated by anything: it could be negative, and the potential could be stabilized at larger field values by higher-order terms. Even the functional form is not fundamental: the underlying gauge symmetry only requires the potential to depend on $|H|^2$, and the quartic form could simply represent the leading terms in the power expansion of a more complex functional dependence of the potential.
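As a quick numerical cross-check (my own illustration, not part of the original text), the tree-level relations above can be evaluated directly from the measured Higgs mass and Fermi's constant:

```python
import math

# Tree-level relations for the SM Higgs potential V = -mu^2 |H|^2 + lambda |H|^4,
# evaluated from the measured Higgs mass and Fermi's constant.
G_F = 1.1664e-5   # Fermi coupling, GeV^-2
m_H = 125.1       # measured Higgs mass, GeV

v = (math.sqrt(2.0) * G_F) ** -0.5   # vacuum expectation value, from G_F
lam = m_H**2 / (2.0 * v**2)          # quartic coupling, from m_H^2 = 2*lambda*v^2
mu = m_H / math.sqrt(2.0)            # mass parameter, from m_H^2 = 2*mu^2

print(f"v = {v:.1f} GeV, lambda = {lam:.3f}, mu = {mu:.1f} GeV")
# → v = 246.2 GeV, lambda = 0.129, mu = 88.5 GeV
```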
The SM Higgs potential has therefore the features of an effective potential, as in other natural phenomena. Spontaneous symmetry breaking, in fact, is not a process unique to the EW theory. There are many other examples in nature where the potential energy of a system is described by a Mexican-hat functional form, leading to some order parameter acquiring a non-zero expectation value. A well-known case is that of superconductivity. The Landau-Ginzburg (LG) theory [24] is a phenomenological model that describes the macroscopic behaviour of type-I superconductors. This model contains a scalar field $\phi$, with free energy given by
(2.2)   $F[\phi] = \alpha(T)\, |\phi|^2 + \frac{\beta(T)}{2}\, |\phi|^4 + \dots$
where the ellipsis denotes additional terms not relevant to this discussion. This equation describes a scalar field of charge $2e$ with a mass and a quartic interaction. These parameters are temperature dependent. At high temperature the mass-squared is positive, and the scalar field has a vanishing expectation value throughout the superconductor. However, below the critical temperature the mass-squared is negative, leading to a non-vanishing expectation value of $\phi$ throughout the superconductor. This expectation value essentially generates a mass for the photon within the superconductor, leading to the basic phenomenology of superconductivity.
Similarly to the Higgs mechanism, the LG theory is a phenomenological model, which offers no explanation as to the fundamental origin of the parameters of the model. It also does not explain the fundamental origin of the scalar field itself. Ultimately, these questions were answered by Bardeen, Cooper, and Schrieffer in the celebrated BCS theory of superconductivity [25]. The scalar field is a composite of electrons, the Cooper pairs, and its mass relates to the fundamental microscopic parameters describing the material. The $\alpha$ and $\beta$ parameters can therefore be calculated from first principles, starting from the underlying dynamics, namely the electromagnetic interactions inside the metal, subject to the rules of quantum mechanics and to the phonon interactions within the solid.
The analogy of the Higgs with the LG model is striking, with the exception that the Higgs model is relativistically invariant and the gauge forces are non-Abelian. Unlike with superconductivity, currently neither the fundamental origin of the SM scalar field nor the origin of the mass and self-interaction parameters in the Higgs scalar potential are known. The SM itself does not provide a dynamical framework that allows us to predict the shape of the Higgs potential. This must follow from a theory beyond the SM. What are possible scenarios? An obvious option is a mechanism analogous to BCS: the Higgs could be the bound state of a pair of fermions, strongly coupled by a new fundamental (possibly gauge) interaction, whose dynamics determines the properties of the Higgs field. Another well-known framework is supersymmetry: elementary scalar fields appear as a result of the symmetry itself, and the Higgs potential is likewise determined by the symmetry. For example, in the minimal supersymmetric SM (MSSM) the Higgs self-coupling is not a free parameter, but is related to the weak gauge coupling. The parameters that characterize supersymmetry breaking modify the supersymmetric predictions for the Higgs interactions, and ultimately the dynamics of EW symmetry breaking would be calculable from the fundamental properties of supersymmetry breaking.
Now that the Higgs boson has been discovered, and the basic phenomenology of EWSB established, the next stage of exploration for any future high-energy physics programme is to determine their microscopic origins. And the obvious place to look for hints is the Higgs boson itself, exploring in detail all of its properties. As of today, we are not aware of any experimental context other than particle colliders where the question of the origin of the Higgs potential can be studied.
Another aspect of the Higgs dynamics makes it appear very different from EM or the other gauge forces. This is the (lack of) decoupling between short- and long-distance interactions, or between low- and high-energy modes. Gauss’s theorem, or gauge invariance, teaches us that the charge of an electron can be obtained by measuring the integral of the flux of its field through any closed surface surrounding the electron. Using the surface of a sphere of small or large radius will give the same value. Possible additional unknown interactions of the electron at very short distances do not modify the charge we measure at large distances. In the case of the Higgs potential, its parameters $\mu$ and $\lambda$ receive instead dominant contributions from any short-distance Higgs interaction. Self-energy loop diagrams for the Higgs boson shift the Higgs mass squared by amounts proportional to the mass scale of the particles in the loop. Given that we anticipate the existence in nature of other fundamental mass scales much larger than the weak scale, notably the Planck scale, the mass of the Higgs boson is intrinsically unpredictable, and its small value rather unnatural: this is the so-called hierarchy problem. This puzzle could be resolved if there were an additional new microscopic scale near the weak scale, involving new particles and interactions governed by symmetries that decouple the Higgs mass from short-distance contributions.
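To make this quantitative, a standard one-loop estimate (an illustration I add here, not derived in the text above) of the top-loop contribution to the Higgs mass parameter, with an ultraviolet cutoff $\Lambda$, reads

```latex
\delta m_H^2 \;\simeq\; -\,\frac{3\, y_t^2}{8\pi^2}\,\Lambda^2
```

so that already for $\Lambda = 10$ TeV the shift is of order $(1.8~{\rm TeV})^2$, vastly larger than the measured $m_H^2 \simeq (125~{\rm GeV})^2$ unless finely cancelled.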
The Higgs quartic coupling is modified only logarithmically by loop corrections, but the effect of running to high energy can be dramatic. At leading order, and neglecting small contributions from the gauge couplings, the renormalization-group running of $\lambda$ is given by the following expression:
(2.3)   $\frac{d\lambda}{d\ln Q} = \frac{1}{16\pi^2}\left( 24\lambda^2 + 12\lambda y_t^2 - 6 y_t^4 \right)$
where $y_t$ is the top Yukawa coupling, and $\lambda(Q)$ is the running Higgs self-coupling. It is straightforward to verify that, for the actual values of the top and Higgs masses, $6 y_t^4 > 24\lambda^2 + 12\lambda y_t^2$, and $\lambda$ is driven to smaller values at large $Q$. A complete analysis, including higher-order terms (see e.g. [26]), indicates that $\lambda$ turns negative at energies in the range of $10^{10}$–$10^{11}$ GeV, as shown in Fig. 1. A negative $\lambda$ would make the potential unstable, and short-distance quantum fluctuations could therefore potentially destabilize the SM Higgs vacuum. The timescale for “our” vacuum to run away, calculated with the given values of top and Higgs mass, is much longer than the age of the universe, making the vacuum metastable and consistent with observation [27]. But it is disturbing, once more, that the dynamics of the Higgs field be influenced so much by physics taking place at scales much higher than the weak scale!
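To get a feeling for this behaviour, the running can be integrated numerically. The sketch below is my own toy example: it evolves $\lambda$, $y_t$ and the gauge couplings with the standard one-loop SM beta functions, starting from approximate matching values at $Q = m_t$ (the initial couplings are assumed round numbers). At this order the zero crossing of $\lambda$ comes out lower than the complete higher-order result; the qualitative lesson — $\lambda$ is dragged negative by the $-6 y_t^4$ term — is the same.

```python
import math

def lambda_zero_crossing(q_max_gev=1e16, dt=0.01):
    """One-loop SM RG evolution in t = ln(Q/GeV); returns the scale (GeV)
    at which the Higgs quartic coupling crosses zero, or None."""
    # Approximate matching values at Q = m_t ~ 173 GeV (assumed inputs)
    lam, yt, gs, g, gp = 0.126, 0.937, 1.167, 0.648, 0.359
    t, t_max = math.log(173.0), math.log(q_max_gev)
    k = 1.0 / (16.0 * math.pi**2)
    while t < t_max:
        # One-loop beta functions (lambda includes the gauge contributions
        # neglected in the leading-order formula quoted in the text)
        b_lam = k * (24*lam**2 - 6*yt**4 + 12*lam*yt**2
                     - 3*lam*(3*g**2 + gp**2)
                     + 0.375 * (2*g**4 + (g**2 + gp**2)**2))
        b_yt = k * yt * (4.5*yt**2 - 8*gs**2 - 2.25*g**2 - (17.0/12.0)*gp**2)
        b_gs, b_g, b_gp = -7*k*gs**3, -(19.0/6.0)*k*g**3, (41.0/6.0)*k*gp**3
        lam, yt = lam + b_lam*dt, yt + b_yt*dt
        gs, g, gp = gs + b_gs*dt, g + b_g*dt, gp + b_gp*dt
        t += dt
        if lam < 0.0:
            return math.exp(t)   # instability scale, in GeV
    return None

q_zero = lambda_zero_crossing()
```

A simple Euler step suffices here, since the couplings evolve slowly in $\ln Q$.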
Both the puzzle of the hierarchy problem and the issue of the metastability of the Higgs vacuum point to the existence of a more fundamental layer behind EW symmetry breaking, and galvanise the need to understand the deeper origin of the Higgs potential.
2.2 More exploration targets for future colliders
Even setting aside the key issue of the origin of the Higgs, there are other very concrete reasons why the Higgs deserves further study, and may provide a window to undiscovered phenomena. As it carries no spin and is electrically neutral, the Higgs may have so-called ‘relevant’ (i.e. dimension-4) interactions (e.g. $|H|^2 S^2$) with a scalar particle, $S$, living in sectors of particle physics that are otherwise totally decoupled from the SM interactions. These interactions, even if they only take place at very high energies, remain relevant at low energies – contrary to interactions between new neutral scalars and the other SM particles. The possibility of new hidden sectors already has strong experimental support: there is overwhelming evidence from astrophysical observations that a large fraction of the observed matter density in the universe is invisible. This so-called dark matter (DM) makes up 26% of the total energy density in the universe and more than 80% of the total matter [28]. Despite numerous observations of the astrophysical properties of DM, not much is known about its particle nature. This makes the discovery and identification of DM one of the most pressing questions in science, a question whose answer may hinge on the role of the Higgs boson.
The current main constraints on a particle DM candidate are that it: a) should gravitate like ordinary matter, b) should not carry colour or electromagnetic charge, c) is massive and non-relativistic at the time the CMB forms, d) is long lived enough to be present in the universe today, and e) does not have too strong self-interactions. While no SM particles satisfy these criteria, they do not pose very strong constraints on the properties of new particles to play the role of DM. In particular, the allowed range of masses spans almost 80 orders of magnitude. Particles with mass below $\sim 10^{-22}$ eV would have a wavelength so large that they wipe out structures on the kpc (kiloparsec) scale and larger [29], disagreeing with observations, while on the other end of the scale microlensing and MACHO (Massive Astrophysical Compact Halo Objects) searches put an upper bound of about $10^{-9}$ solar masses, or $\sim 10^{48}$ GeV, on the mass of the dominant DM component [30, 31, 32]. We shall discuss later on how future colliders can attack this pressing question, providing comprehensive exploration of the class of ‘thermal freeze-out’ DM, which picks out a particular broad mass range as a well-motivated experimental target, as well as unique probes of weakly coupled dark sectors.
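To see where a span of “almost 80 orders of magnitude” can come from, one can convert indicative endpoints of the allowed window into common units. The endpoint values below (an ultra-light bound of $10^{-22}$ eV and a heavy bound of $10^{-9}$ solar masses) are illustrative assumptions of mine, not precise experimental limits:

```python
import math

# Rough span of the allowed DM mass window, with indicative endpoints.
KG_PER_SOLAR_MASS = 1.989e30   # solar mass in kg
GEV_PER_KG = 5.61e26           # 1 kg expressed in GeV (natural units)

m_low_gev = 1e-22 * 1e-9                      # ~1e-22 eV ultra-light bound, in GeV
m_sun_gev = KG_PER_SOLAR_MASS * GEV_PER_KG    # ~1.1e57 GeV per solar mass
m_high_gev = 1e-9 * m_sun_gev                 # ~1e-9 solar masses, lensing-type bound

orders = math.log10(m_high_gev / m_low_gev)
print(f"window spans ~{orders:.0f} orders of magnitude")
# → window spans ~79 orders of magnitude
```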
Returning to the matter which is observable in the Universe, the SM alone cannot explain baryogenesis, namely the origin of the dominance of matter over antimatter that we observe today. Since the matter-antimatter asymmetry was created in the early universe, when temperatures and energies were high, higher energies must be explored to uncover the new particles responsible for it, and the LHC can only start this search. In particular, a well-motivated class of scenarios, known as EW baryogenesis theories, can explain the matter-antimatter asymmetry by modifying how the transition from the high-temperature EW-symmetric phase to the low-temperature symmetry-broken phase occurred. Independently of the problem of the matter-antimatter asymmetry, there is the question of the nature of the EW phase transition (EWPT): was it a smooth crossover, as predicted by the SM, or a first-order one, as possible in BSM scenarios (and as necessary to enable EW baryogenesis)? Since this phase transition occurred at temperatures near the weak scale, the new states required to modify the transition would likely have mass not too far above the weak scale, singling out future 100 TeV colliders as the leading experimental facility to explore the nature of this foundational epoch of the early Universe.
Another outstanding question lies in the origin of the neutrino masses, which the SM alone cannot account for. As with DM, there are numerous models for neutrino masses that are within the discovery reach of future lepton and hadron colliders, as discussed in Ref. [9].
These and other outstanding questions might also imply the existence of further spatial dimensions, or larger symmetries that unify leptons and quarks or the known forces. The LHC’s findings notwithstanding, higher energy and larger statistics will be needed to explore these fundamental mysteries more deeply and possibly reveal new paradigm shifts.
3 The way forward with future colliders
Since the mid-1970s, the path to establishing the SM experimentally was clear: discover the gauge bosons and complete the fermion sector (e.g. determine the number of SM-like neutrino species and eventually discover the top quark), test strong and EW interactions at the level of quantum corrections (comparing precise measurements and accurate theoretical predictions), test the CKM framework of flavour phenomena, and discover the Higgs boson. Having accomplished all this, the situation today is less well defined. Although the formulation of the open problems, as reviewed in the previous Section, is rather clear, no experimental approach known today can guarantee conclusive answers. This is underscored by the fact that our prejudices on where to look have not given results. One of the main questions we face in planning our future is therefore: “why don’t we see as yet any sign of the new physics that we confidently expected to be present around the TeV scale?”. The question admits two possible answers: (i) the mass scale of the new physics lies beyond the LHC reach, or (ii) while being within the LHC’s reach, its manifestations are elusive and have so far escaped the direct search. These two scenarios are a priori equally likely, but they clearly impact in different ways the future of our field, and thus the assessment of the physics potential of possible future facilities. Our safest hedge is therefore the readiness to cover both scenarios, via an experimental programme relying on higher precision and sensitivity (to address possible elusive signatures), and on an extended energy and mass reach relative to the LHC.
A possible way to assess the value of a future collider facility is to consider the following three criteria:

The guaranteed deliverables. This criterion is what I refer to as the “value of measurements”: the new information that we can collect to probe the SM at a deeper level, pushing further the exploration of particles and processes that are still poorly known. The main targets of this component of the programme include of course the Higgs boson, the gauge bosons and EW interactions at energies above the EWSB scale, and flavour phenomena, in particular those related to the least-known fermions, such as the top quark or the tau lepton.

The discovery potential. While the emergence of phenomena beyond the SM cannot be guaranteed, a future facility must promise a significant extension of today’s sensitivity to new physics, addressing the most relevant and compelling BSM scenarios under consideration, and with sufficient flexibility to accommodate new ideas. The increase in the reach for direct discovery at the highest masses should be accompanied by the increased sensitivity throughout the whole mass range, thanks to higher precision and statistics. The mass reach for direct discovery should ideally match the sensitivity reach obtained indirectly via precision measurements.

Conclusive answers. Unless an actual discovery is made, no experiment can provide conclusive answers to general questions such as “what is DM?”, “do supersymmetry or new Z’ bosons exist?”. Lack of evidence can be evaded by pushing the relevant mass spectrum beyond reach. But there exist important, less generic, questions, for which it is reasonable to expect that a conclusive answer can be found below welldefined mass scales. Some examples were given before: did EWSB induce a first order phase transition? Is DM made of particles coupled to the SM via the weak interaction? Do neutrino masses arise from the weak scale? Even negative answers, if firm, would be of great value, since they would force us to focus the searches elsewhere. While current experiments (LHC and others) could find partial answers, conclusive statements are expected to require higher energy and sensitivity, setting performance targets for the evaluation of future experiments.
These lectures will present an overview of the physics potential of the various elements of the FCC programme, in the light of those three criteria.
3.1 The Role of FCC-ee
The capabilities of circular $e^+e^-$ colliders are well illustrated by LEP, which occupied the LHC tunnel from 1989 to 2000. Its point-like collisions between electrons and positrons and precisely known beam energy allowed the four LEP experiments to test the SM to new levels of precision, particularly regarding the properties of the W and Z bosons. Putting such a machine in a 100 km tunnel, and taking advantage of advances in accelerator technology such as superconducting radiofrequency cavities, would offer even greater levels of precision on a greater number of processes. For example, it would be possible to adapt the collision energy during about 15 years of operation, to examine physics at the Z pole, at the WW production threshold, at the peak of ZH production, and above the $t\bar{t}$ production threshold. Controlling the beam energy at the 100 keV level would allow exquisite measurements of the Z and W boson masses, whilst collecting samples of up to $5\times 10^{12}$ Z and $10^{8}$ W bosons, not to mention several million Higgs bosons and top quark pairs. The experimental precision would surpass any previous experiment and challenge cutting-edge theory calculations.
FCC-ee would quite literally provide a quantum leap in our understanding of the Higgs. Like the W and Z gauge bosons, the Higgs receives quantum EW corrections, typically measuring a few per cent in magnitude, due to fluctuations of massive particles such as the top quark. This aspect of the gauge bosons was successfully explored at LEP, but now it is the turn of the Higgs – the keystone in the EW sector of the SM. The millions of Higgs bosons produced by FCC-ee, with its clinically precise environment, would push the accuracy of the measurements to the per-mille level, accessing the quantum underpinnings of the Higgs and probing deep into this hitherto unexplored frontier. In the process $e^+e^- \to ZH$, the mass recoiling against the Z has a sharp peak that allows a unique and absolute determination of the Higgs decay width and production cross section. This will provide an absolute normalisation for all Higgs measurements performed at the FCC, enabling exotic Higgs decays to be measured in a model-independent manner.
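The recoil-mass technique behind this absolute determination is pure two-body kinematics. The minimal sketch below is my own illustration (the collision energy $\sqrt{s} = 240$ GeV and the input masses are assumed round values): the mass recoiling against the Z reconstructs the Higgs mass without looking at the Higgs decay products at all.

```python
import math

def recoil_mass(sqrt_s, e_z, m_z=91.19):
    """Mass recoiling against a Z of energy e_z: m_rec^2 = s + m_Z^2 - 2*sqrt(s)*E_Z.
    Depends only on the Z kinematics, not on the Higgs decay."""
    m2 = sqrt_s**2 + m_z**2 - 2.0 * sqrt_s * e_z
    return math.sqrt(max(m2, 0.0))

def z_energy(sqrt_s, m_h=125.1, m_z=91.19):
    """For a true e+e- -> ZH event, the Z energy is fixed by two-body kinematics."""
    return (sqrt_s**2 + m_z**2 - m_h**2) / (2.0 * sqrt_s)

sqrt_s = 240.0                    # assumed run energy, GeV
ez = z_energy(sqrt_s)             # ~104.7 GeV
mrec = recoil_mass(sqrt_s, ez)    # reconstructs m_H = 125.1 GeV
```

In practice the sharpness of the recoil peak is set by the beam-energy spread and the Z reconstruction resolution, which is why the precisely known beam energy of a circular machine matters.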
The high statistics promised by the FCC-ee programme goes far beyond precision Higgs measurements. Other signals of new physics could arise from the observation of flavour-changing neutral currents or lepton-flavour-violating decays, by the precise measurements of the Z and H invisible decay widths, or by direct observation of particles with extremely weak couplings, such as right-handed neutrinos and other exotic particles. The precision of the FCC-ee programme on EW measurements would allow new physics effects to be probed at scales as high as 100 TeV, anticipating what the FCC-hh must focus on.
3.2 The Role of FCC-hh and FCC-eh
The FCC-hh would operate at seven times the LHC energy, and collect about 10 times more luminosity. The discovery reach for high-mass particles – such as Z′ or W′ gauge bosons corresponding to new fundamental forces, or gluinos and squarks in supersymmetric theories – will increase by a factor of five or more, depending on the final statistics. The production rate of particles already within the LHC reach, such as top quarks or Higgs bosons, will increase by even larger factors. During the planned 25 years of data taking, a total of more than $10^{10}$ Higgs bosons will be created, several thousand times more than collected by the LHC through Run 2 and 200 times more than will be available by the end of its operation. These additional statistics will enable the FCC-hh experiments to improve the separation of Higgs signals from the huge backgrounds that afflict most LHC studies, overcoming some of the dominant systematics that limit the precision attainable at the LHC. While the ultimate precision on most Higgs properties can only be achieved with FCC-ee, several measurements demand complementary information from FCC-hh. For example, the direct measurement of the coupling between the Higgs and the top quark requires that they be produced together, requiring an energy beyond the reach of the FCC-ee. At 100 TeV, about one out of every thousand top quarks produced will radiate a Higgs boson, allowing the top-Higgs interaction to be measured at the 1% level – several times better than at the HL-LHC and probing deep into the quantum structure of this interaction. Similar precision can be reached for Higgs decays that are too rare to be studied in detail at FCC-ee, such as those to muon pairs or to a Z and a photon. All of these measurements will be complementary to those obtained with FCC-ee, and will use them as reference inputs to precisely correlate the strength of the signals obtained through various production and decay modes.
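The order of magnitude of such Higgs yields follows from simple counting. The cross section and integrated luminosity below are round, assumed numbers of mine, not official projections:

```python
# Order-of-magnitude Higgs yield at a 100 TeV pp collider.
sigma_h_pb = 800.0   # assumed total Higgs production cross section, in pb (~1 nb)
lumi_ab = 25.0       # assumed integrated luminosity over the programme, in ab^-1
PB_PER_AB = 1e6      # unit conversion: 1 ab^-1 = 1e6 pb^-1

n_higgs = sigma_h_pb * lumi_ab * PB_PER_AB   # events = cross section x luminosity
print(f"N_H ~ {n_higgs:.1e}")
# → N_H ~ 2.0e+10
```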
One respect in which a 100 TeV proton-proton collider would really come to the fore is in revealing how the Higgs behaves in private. The rate of Higgs pair-production events, which in some part occur through Higgs self-interactions, would grow by a factor of 40 at FCC-hh with respect to 14 TeV, and enable this unique property of the Higgs to be measured with an accuracy reaching 5%. Among many other uses, such a measurement would comprehensively explore classes of models that rely on modifying the Higgs potential to drive a strong first-order phase transition at the time of EW symmetry breaking, a necessary condition to induce baryogenesis.
FCC-hh would also allow an exhaustive exploration of new TeV-scale phenomena. Indirect evidence for new physics can emerge from the scattering of W bosons at high energy – where the Higgs boson plays a key role in controlling the rate growth – from the production of Higgs bosons at very large transverse momentum, or by testing the far ‘off-shell’ nature of the Z boson via the measurement of lepton pairs with invariant masses in the multi-TeV region. The plethora of new particles predicted by most models of symmetry-breaking alternatives to the SM can be searched for directly, thanks to the immense mass reach of 100 TeV collisions. The search for DM, for example, will cover the possible space of parameters of many theories relying on weakly interacting massive particles, guaranteeing a discovery or ruling them out. Several theories that address the hierarchy problem will also be conclusively tested. For supersymmetry, the mass reach of FCC-hh pushes beyond the regions motivated by the hierarchy problem alone. For composite Higgs theories, the precision Higgs coupling measurements and searches for new heavy resonances will fully cover the motivated territory. A 100 TeV proton collider will even confront exotic scenarios such as the twin Higgs, which are extremely difficult to test. These theories predict very rare or exotic Higgs decays, possibly visible at FCC-hh thanks to its enormous Higgs production rates.
The FCCeh collider could run synchronously and symbiotically alongside the pp collider. The facility would serve as the most powerful, highest-resolution microscope onto the substructure of matter ever built. High-energy ep collisions would provide precise information on the quark and gluon structure of the proton, and on how quarks and gluons interact. FCCeh would complement and enhance the study of the Higgs, and broaden the new physics searches also performed at FCChh and FCCee, with a specific focus on phenomena such as quark substructure, leptoquarks, heavy sterile neutrinos and long-lived particles.
While not discussed at all in these lectures, FCChh would also enable the continuation of the LHC's successful programme of heavy ion collisions, extending studies of the thermodynamic behaviour of QCD, of crucial relevance to multiple topics ranging from the fundamental properties of quantum field theory to cosmology and astrophysics.
4 Higgs boson properties
Indirect information about the Higgs boson is accessible through precision EW measurements, as proven by the global fits to the LEP and SLC data, which set very tight constraints on the Higgs mass well before its discovery. But, following the Higgs discovery, the most direct way to test the Higgs properties is to produce it and observe its decay features. With the knowledge of the Higgs mass, the SM uniquely predicts its couplings to each SM particle, and therefore all production and decay rates are fixed. Since our target is to explore the origin of EWSB, and possibly identify the underlying BSM phenomena that trigger it, we must however be open to all sorts of deviations from the SM. For example, while the couplings of the Higgs to the gauge bosons are determined by the Higgs quantum numbers (an SU(2) doublet), the existence of an additional Higgs scalar, acquiring its own expectation value, could lead to a mixing in the Higgs sector, and the mass eigenstate at 125 GeV could couple to the W and the Z with a slightly reduced strength. The existence of additional Higgses opens the door to the possibility that different fermions couple to different Higgses, modifying the direct relation between fermion mass and Yukawa coupling to the 125 GeV state (as is the case in supersymmetry). The study of Higgs couplings, therefore, requires an approach that is as model-independent as possible.
Establishing the gauge couplings of the known fermions was relatively straightforward, since they are quantized and the fermion assignment to a gauge group representation ranges over a discrete set of possibilities. Deviations are possible of course, but only in the presence of additional BSM interactions, which appear at low energy as operators of dimension higher than 4. The basic, leading-order and renormalizable interactions of SM fermions are therefore easily established experimentally. That the top quark is a color triplet, for example, can follow from the analysis of its production rate and decay patterns.^1 On the contrary, the leading-order Higgs couplings are a priori generic real (or complex) numbers, and the confidence on whether they agree or not with the SM will always be conditional on the precision of the available data.

^1 This is so straightforward in principle that I am not even sure there has ever been an explicit experimental analysis to confirm that the top quark is a triplet. I leave it to you as an exercise to list the data and signatures that could be used to confirm it.
The dominant Higgs production channels in hadronic collisions such as the LHC are shown in Fig. 2. In these examples, the production rates are proportional to the couplings to the gauge bosons, or to the top quark. In an ideal world in which the strong coupling α_s, the parton densities (PDFs) and the QCD matrix elements were perfectly known, counting events in a given decay mode would provide a measurement of σ_X × BR_Y ∝ g_X² g_Y² / Γ_H, where g_X and g_Y are the Higgs couplings to the initial and final states, and Γ_Y (Γ_H) is the partial (total) decay width. If we could observe every possible Higgs decay, summing over all Y states for a given production channel X would allow the measurement of g_X², since Σ_Y BR_Y = 1. At the LHC, and in general in hadronic collisions, this is hardly possible: several SM decay modes with a substantial branching ratio (BR), like H → gg, are very difficult to measure, and possible exotic Higgs decays are also likely to escape detection. A completely model-independent extraction of Higgs couplings in hadronic collisions can therefore only reach a limited precision, independently of the theoretical challenge of properly calculating the QCD part of the reactions.
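The coupling-width degeneracy implicit in this counting argument can be illustrated with a short numerical sketch (all numbers and names here are illustrative, not taken from any real analysis): rescaling every coupling by a common factor ξ, while adding a compensating undetected width, leaves all visible σ×BR unchanged.

```python
# Illustration of the coupling/width degeneracy in hadron-collider rate
# measurements: rates scale as g_X^2 * g_Y^2 / Gamma_H, so a common
# coupling rescaling xi can be hidden by an extra (undetected) width.
# Numbers are illustrative, not real measurements.

GAMMA_SM = 4.1e-3  # SM Higgs width in GeV (~4.1 MeV)

def rate(g_prod, g_dec, gamma_tot):
    """sigma x BR, up to universal factors: g_prod^2 * g_dec^2 / Gamma_tot."""
    return g_prod**2 * g_dec**2 / gamma_tot

# SM-like benchmark: all couplings normalized to 1
sm = rate(1.0, 1.0, GAMMA_SM)

# Scaled scenario: every coupling multiplied by xi, plus a new undetected
# width tuned to xi^2 (xi^2 - 1) * Gamma_SM
xi = 1.1
gamma_new = xi**2 * (xi**2 - 1.0) * GAMMA_SM
scaled = rate(xi, xi, xi**2 * GAMMA_SM + gamma_new)

print(sm, scaled)  # identical up to rounding: rates alone cannot resolve xi

# An absolute sigma(ZH) measurement at an e+e- collider breaks the
# degeneracy, since it is proportional to g_HZZ^2 alone:
print(1.0, round(xi**2, 2))  # -> 1.0 1.21, a ~20% effect, directly visible
```

This is why the e⁺e⁻ input discussed next is so valuable: it supplies the absolute normalization that hadron-collider rates cannot provide on their own.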
As we show in the next sections, the measurements at an electron collider can provide the needed input of Γ_H, and open the way for a powerful synergetic programme of precision measurements with the next generation of hadron colliders.
A complete compilation and critical review of the Higgs coupling measurement prospects covering all proposed future colliders can be found in Ref. [33].
4.1 Higgs coupling measurements at FCCee
The determination of Γ_H is however possible at future e⁺e⁻ colliders operating above the ZH threshold. Here, the production of a Higgs boson can be reconstructed, in a model-independent way, with the so-called recoil-mass technique. One considers e⁺e⁻ → ℓ⁺ℓ⁻ + X final states, and for each event defines the recoil mass via M²_recoil = (√s − E_ℓℓ)² − p²_ℓℓ. Most such final states arise from ZZ production (with one Z → ℓ⁺ℓ⁻), in which case the recoiling system X is the second Z, and the recoil mass equals (up to finite-width and experimental resolution effects) the Z mass. A further contribution comes from WW production (with both W → ℓν), in which case the recoiling system is the escaping neutrino pair, and the recoil mass is a broad continuum. In the case of ZH production (see the left image of Fig. 3), the recoil mass coincides with the Higgs mass, regardless of the H decay mode. These three contributions are shown, for the simulation of an FCCee experiment, in Fig. 3. A global fit of the recoil mass spectrum returns the total number of Higgs bosons produced in e⁺e⁻ → ZH, and a direct measurement of the HZZ coupling, g_HZZ. If we now focus on events with the H → ZZ decay, and consider that their rate is proportional to g⁴_HZZ / Γ_H, the knowledge of g_HZZ allows Γ_H to be extracted in a model-independent way.
Exercise: discuss, in a qualitative way, to which extent EW
radiative corrections or BSM effects influence this line of reasoning, and whether
they affect the “modelindependent” argument.
Exercise: discuss, in a qualitative way, the backgrounds under
the H peak in the recoil mass spectrum, and how they can be estimated,
and subtracted, precisely.
Exercise: discuss how the recoil mass observable can be used to
determine the presence of exotic (in particular, invisible) H decays.
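As a minimal illustration of the recoil-mass technique discussed above, the following sketch (toy two-body kinematics, no beam spread or detector resolution; the function and variable names are ours) shows that the lepton-pair recoil mass in e⁺e⁻ → Z(→ℓ⁺ℓ⁻)H equals m_H, independently of how the H decays:

```python
import math

SQRT_S = 240.0           # GeV, FCC-ee run above the ZH threshold
M_Z, M_H = 91.19, 125.0  # GeV

def recoil_mass(e_ll, p_ll, sqrt_s=SQRT_S):
    """M_rec^2 = (sqrt(s) - E_ll)^2 - |p_ll|^2, from the lepton pair alone."""
    m2 = (sqrt_s - e_ll)**2 - p_ll**2
    return math.sqrt(max(m2, 0.0))

# Two-body kinematics e+e- -> ZH: the Z energy is fixed by sqrt(s), m_Z, m_H
e_z = (SQRT_S**2 + M_Z**2 - M_H**2) / (2 * SQRT_S)
p_z = math.sqrt(e_z**2 - M_Z**2)

# The recoil mass of the lepton pair from Z -> l+l- returns the Higgs mass,
# with no reference whatsoever to the H decay products
print(round(recoil_mass(e_z, p_z), 6))  # -> 125.0
```

In a real analysis the peak is smeared by beam-energy spread, radiative effects and lepton momentum resolution, which is why the number of ZH events is obtained from a fit to the full recoil spectrum rather than from simple counting.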
Having established the value of Γ_H, further dedicated measurements allow the absolute value of the Higgs couplings to all particles accessible via decay modes or production channels to be determined. Assuming SM couplings, the statistical precision that can be achieved for several BRs measurable at FCCee is summarized in Table 1 (for the details of the reconstruction of individual final states, see e.g. [10]). These include the results obtained from the run just above the ZH threshold, at 240 GeV, and the runs at higher energies, where the VBF process e⁺e⁻ → νν̄H, shown in Fig. 4, becomes relevant.
√s (GeV) | 240 | 365
Luminosity (ab⁻¹) | 5 |
δ(σ·BR)/(σ·BR) (%) | HZ | νν̄H | HZ | νν̄H
H → ττ | | | |
H → γγ | | | |
H → μμ | | | |
Collider | HLLHC | FCCee 240 GeV | FCCee 240+365 GeV | FCCee + HLLHC
Lumi (ab⁻¹) | 3 | 5 | +1.5 | –
Years | 25 | 3 | +4 | –
δΓ_H/Γ_H (%) | SM | 2.7 | 1.3 | 1.1
δg_HZZ/g_HZZ (%) | 1.5 | 0.2 | 0.17 | 0.16
δg_HWW/g_HWW (%) | 1.7 | 1.3 | 0.43 | 0.40
δg_Hbb/g_Hbb (%) | 3.7 | 1.3 | 0.61 | 0.56
δg_Hcc/g_Hcc (%) | SM | 1.7 | 1.21 | 1.18
δg_Hgg/g_Hgg (%) | 2.5 | 1.6 | 1.01 | 0.90
δg_Hττ/g_Hττ (%) | 1.9 | 1.4 | 0.74 | 0.67
δg_Hμμ/g_Hμμ (%) | 4.3 | 10.1 | 9.0 | 3.8
δg_Hγγ/g_Hγγ (%) | 1.8 | 4.8 | 3.9 | 1.3
δg_Htt/g_Htt (%) | 3.4 | – | – | 3.1
BR_EXO (%) | SM | | |
In practice, the width Γ_H and the couplings are determined with a global fit, which closely follows the logic of Ref. [35]. The results of this fit are summarised in Table 2 and are compared to the same fit applied to HLLHC projections [34]. Table 2 also shows that the extractions of Γ_H and of the couplings from the global fit are significantly improved by the addition of the WW-fusion process at √s = 365 GeV, as a result of the correlation between the HZ and νν̄H processes. In particular, the Higgs EW couplings reach a per-mille-level precision, and the couplings to the tau, the bottom and charm quarks and the effective coupling to the gluon reach the percent level or better.
Several SM couplings are left out of these projections: to the lightest quarks (u, d, s), to the electron, to the top quark, to the Zγ pair, and the Higgs self-coupling. To access the light quarks, several ideas have been proposed: exclusive decays to hadronic resonances, such as H → MV (with M a hadronic resonance and V = W/Z/γ) [36, 37, 38, 39], light-jet tagging techniques [40], or kinematical distributions of the Higgs boson in hadronic collisions [41, 42]. Experimental searches for exclusive radiative hadronic decays have already started at the LHC [43], to at least establish upper limits, even if still well above the SM expectations. Given the small BRs, an electron collider will barely have sufficient statistics to gain the required SM sensitivity. At the FCCee, the most promising channel is H → ργ (ρ → ππ), with about 40 events expected [44]. A future hadron collider will have many more events to play with, but backgrounds and experimental conditions will be extremely challenging, and only detailed simulations will be able to establish their true potential.
To probe the Hee coupling, the best hope appears to be direct resonant production in e⁺e⁻ → H. The low rate demands high luminosity, and a tuning of the beam energy to exactly match m_H. Preliminary studies [44] indicate that a 3σ observation requires an integrated luminosity of 90 ab⁻¹, namely several years of dedicated running at √s = 125 GeV.
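The luminosity requirement quoted above can be turned into a rough scaling rule: for a background-dominated counting experiment the significance grows like √L, so extrapolating from the quoted benchmark (3σ with 90 ab⁻¹) gives the luminosity for any other target significance. The numbers below just restate that scaling; they are not a substitute for the dedicated study [44].

```python
# Significance of a small signal over a large background scales as
# S/sqrt(B), i.e. proportional to sqrt(L). Anchor point: 3 sigma at
# 90 ab^-1, as quoted for the s-channel e+e- -> H run at sqrt(s) = m_H.
L_ANCHOR, SIG_ANCHOR = 90.0, 3.0  # ab^-1, sigma

def lumi_for(target_sigma, l_anchor=L_ANCHOR, s_anchor=SIG_ANCHOR):
    """Luminosity needed for a target significance, assuming S/sqrt(B) ~ sqrt(L)."""
    return l_anchor * (target_sigma / s_anchor)**2

print(round(lumi_for(5.0), 1))  # -> 250.0 ab^-1 for a 5 sigma observation
```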
While direct access to the Htt coupling in an e⁺e⁻ collider requires a center-of-mass energy of 500 GeV or more, FCCee will expose an indirect sensitivity to it, through its effect at the quantum level on the tt̄ cross section just above the production threshold, √s ≈ 350 GeV. The precise measurement of α_s from the runs at the Z pole will allow the QCD effects to be disentangled from those of the top Yukawa coupling at the tt̄ production vertex, to achieve the precision reported in Ref. [10].
To access the top Yukawa coupling in a direct way, and to improve to the percent level the measurement of small-BR decays such as H → γγ, Zγ and μμ, we can then appeal to the huge statistics available at a hadron collider.
4.2 Higgs couplings measurements at FCChh
Two elements characterise Higgs production at the FCChh: the large statistics (see Table 3), and the large kinematic range, which, for several production channels, probes Higgs production at transverse momenta in the multi-TeV region (see Fig. 5).
| ggH | VBF | WH | ZH | tt̄H | HH
| 180 | 170 | 100 | 110 | 530 | 390
| 16 | 15 | 11 | 12 | 24 | 19
These factors lead to an extended and diverse sensitivity to possible deviations of the Higgs properties from their SM predictions: the large rates enable precise measurements of branching ratios for rare decay channels such as γγ or μμ, and push the sensitivity to otherwise forbidden channels such as τμ. The large kinematic range can be used to define cuts improving the signal-to-background ratios and the modelling or experimental systematics, but it can also amplify the presence of modified Higgs couplings, described by higher-dimension operators, whose impact grows with the momentum transfer. Overall, the Higgs physics programme of FCChh is a fundamental complement to what can be measured at FCCee, and the two Higgs programmes greatly enrich each other. This section contains some examples of these facts, and documents the current status of the precision projections for Higgs measurements. A more extensive discussion of Higgs production properties at 100 TeV and of possible measurements is given in Ref. [45].
Figure 5 shows the Higgs rates above a given p_T threshold, for various production channels. It should be noted that these rates remain above the level of one million events up to p_T ∼ 1 TeV, and there is statistics for final states like H → bb̄ or H → ττ extending up to several TeV. Furthermore, for p_T ≳ 1 TeV, the leading production channel becomes tt̄H, followed by vector boson fusion when p_T ≳ 2 TeV. The analysis strategies to separate the various production and decay modes in these regimes will therefore be different from what is used at the LHC. Higgs measurements at 100 TeV will offer many new options and precision opportunities with respect to the LHC, as happened with the top quark in moving from the statistics-hungry Tevatron to the rich LHC.
Exercise: discuss possible strategies to separate the different
Higgs production processes in the various ranges of p_T shown in
Fig. 5.
For example, as shown in Ref. [45], S/B improves for several final states at large p_T. In the case of the important γγ final state, Fig. 6 shows that S/B increases from the small values found at low p_T (similar to what is observed at the LHC) to much larger values at p_T of a few hundred GeV. In this range, some experimental systematics will also improve, from the determination of the energy scales (relevant e.g. for the mass resolution of H → γγ or bb̄) to the mitigation of pileup effects.
Exercise: why do you think S/B improves at
large p_T for a process like pp → H(→ γγ) + jet?
The analyses carried out so far for FCChh are still rather crude when compared to the LHC standards, but help to define useful targets for the ultimate attainable precision and the overall detector performance. The details of the present detector simulations for Higgs physics at FCChh are contained in Ref. [46].
The target uncertainties considered include statistics (taking into account analysis cuts, expected efficiencies, and the possible irreducible backgrounds) and systematics (limited here to the identification efficiencies for the relevant final states, and an overall 1% to account for luminosity and modelling uncertainties). While these estimates do not reflect the full complexity of the experimental analyses in the huge pileup environment of FCChh, the systematics assumptions that were used are rather conservative. Significant improvements in the precision of reconstruction efficiencies would arise, for example, by applying tagandprobe methods to largestatistics control samples. Modelling uncertainties will likewise improve through better calculations, and broad campaigns of validation against data. By choosing here to work with Higgs bosons produced at large , the challenges met by triggers and reconstruction in the high pileup environment are eased. The projections given here are therefore considered to be reasonable targets for the ultimate precision, and useful benchmarks to define the goals of the detector performance.
The consideration of the reconstruction efficiency of leptons and photons is relevant in this context since, to obtain the highest precision by removing global uncertainties such as luminosity and production modelling, ratios of different decay channels can be exploited. The reconstruction efficiencies are shown in Fig. 7 as a function of p_T. The uncertainties on the electron and photon efficiencies are assumed to be fully correlated, but totally uncorrelated from the muon one. The curves in Fig. 7 reflect what is achievable today at the LHC, and it is reasonable to expect that smaller uncertainties will be attained at the FCChh, due to the higher statistics that will allow statistically more powerful data-driven fine tuning. For example, imposing the equality of the Z boson rate in the ee and μμ decay channels will strongly correlate the e and μ efficiencies.
The absolute uncertainty expected in the measurement of the production and decay rates for several final states is shown in Fig. 8, as a function of the minimum Higgs p_T. The curves labeled “stat+syst” include the optimal reconstruction efficiency uncertainties shown in Fig. 7. The curves labeled “stat+syst+lumi” include a further 1%, to account for the overall uncertainty related to luminosity and production systematics. The luminosity itself could be known even better than that, by using a standard-candle process such as Z production, where the partonic cross section and the PDF luminosity will be pinned down by future theoretical calculations and by the FCCeh, respectively. Notice that the gg luminosity, in the mass range between m_H and several TeV, will be measured by FCCeh at the few per mille level.
Several comments on these figures are in order. First of all, it should be noted that the inclusion of the systematic uncertainty leads to a minimum in the overall uncertainty for p_T values in the range of a few hundred GeV. The very large FCChh statistics make it possible to fully benefit from this region, where experimental systematics become smaller. The second remark is that the measurements of the Higgs p_T spectrum can be performed with a precision better than 10%, using very clean final states such as γγ and 4ℓ, up to p_T values well in excess of 1 TeV, allowing the possible existence of higher-dimension operators affecting Higgs dynamics to be probed up to scales of several TeV.
Independently of future progress, the systematics related to production modelling and to luminosity cancel entirely by taking the ratio of different decay modes, provided selection cuts corresponding to identical fiducial kinematic domains for the Higgs boson are used. This can be done for the final states considered in Fig. 8. Ratios of production rates for these channels provide absolute determinations of ratios of branching ratios, with uncertainties dominated by the statistics, and by the uncorrelated systematics such as reconstruction efficiencies for the different final state particles. These ratios are shown in Fig. 9. The curves with the systematics labeled as “cons” use the conservative reconstruction uncertainties plotted in Fig. 7.
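The cancellation of correlated systematics in such ratios can be made explicit with standard error propagation (the uncertainty values below are illustrative placeholders, not the FCChh projections): fully correlated components, such as luminosity and production modelling, drop out of the ratio, leaving only the statistical and uncorrelated pieces.

```python
import math

# Toy error budget for two decay channels of the same production process
# (percent uncertainties; values are illustrative, not FCC-hh projections)
stat_a, stat_b = 0.3, 0.5  # statistical, uncorrelated between channels
eff_a, eff_b = 0.5, 0.5    # reconstruction efficiencies, taken uncorrelated
lumi_model = 1.0           # luminosity + modelling, fully correlated

def individual(stat, eff, corr):
    """Total uncertainty on a single rate: quadrature sum of all pieces."""
    return math.sqrt(stat**2 + eff**2 + corr**2)

def ratio(stat1, eff1, stat2, eff2):
    """Uncertainty on the ratio: the fully correlated piece cancels."""
    return math.sqrt(stat1**2 + eff1**2 + stat2**2 + eff2**2)

print(round(individual(stat_a, eff_a, lumi_model), 2))  # -> 1.16 (each rate)
print(round(ratio(stat_a, eff_a, stat_b, eff_b), 2))    # -> 0.92 (lumi-free)
```

The ratio is more precise than either individual rate whenever the correlated systematics dominate, which is exactly the regime anticipated for the FCChh luminosity and modelling uncertainties.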
Observable | Precision (stat) | Precision (stat+syst+lumi)
σ(H) × B(H → γγ) | 0.1% | 1.5%
σ(H) × B(H → μμ) | 0.28% | 1.2%
σ(H) × B(H → 4μ) | 0.18% | 1.9%
σ(H) × B(H → γμμ) | 0.55% | 1.6%
σ(HH) × B(H → γγ) × B(H → bb̄) | 5% | 7.0%
B(H → μμ)/B(H → 4μ) | 0.33% | 1.3%
B(H → γγ)/B(H → 2e2μ) | 0.17% | 0.8%
B(H → γγ)/B(H → 2μ) | 0.29% | 1.4%
B(H → μμγ)/B(H → μμ) | 0.58% | 1.8%
σ(tt̄H) × B(H → bb̄) / σ(tt̄Z) × B(Z → bb̄) | 1.05% | 1.9%
B(H → invisible) | | 95% CL upper limit
These results are summarised in Table 4, which shows separately the statistical and systematic uncertainties obtained in our studies. As remarked above, there is in principle room for further progress, by fully exploiting data-driven techniques to reduce the experimental systematics. At the very least, one can expect these potential improvements to compensate for the current neglect of other experimental complexity, such as pileup. The most robust measurements will involve the ratios of branching ratios. Taking as given the value of the HZZ coupling (and therefore the H → ZZ* decay rate), which will be measured to the few per-mille level by FCCee, from the FCChh ratios it could be possible to extract the absolute couplings of the Higgs to γγ (0.4%), μμ (0.7%), and Zγ (0.9%).
Exercise: discuss the possible role of precise measurements of
ratios of BRs in exploring the microscopic origin of potential
deviations from the SM expectations. Which type of models can give
rise to deviations in the ratios considered here? Which models would
leave no signatures in these ratios?
The ratio to the tt̄Z process is considered for tt̄H production, as proposed in Ref. [47]. This allows the removal of the luminosity uncertainty, and reduces the theoretical systematics on the production modelling to below 1%. An updated study of this process, including the FCChh detector simulation, is presented in Ref. [46]. Assuming FCCee will deliver the expected precise knowledge of B(H → bb̄), and the confirmation of the SM predictions for the Ztt̄ vertex, the tt̄H/tt̄Z ratio should therefore allow a determination of the top Yukawa coupling to 1%.
The limit quoted in Table 4 on the decay rate of the Higgs boson to new invisible particles is obtained from a study of large missing-E_T signatures. The analysis, discussed in detail in Ref. [46], relies on the data-driven determination of the leading SM backgrounds from W/Z+jets. The integrated luminosity evolution of the sensitivity to invisible H decays is shown in Fig. 10. The SM decay H → 4ν, with a branching ratio of about 10⁻³, will be seen after 1 ab⁻¹, and the full FCChh statistics will push the sensitivity well below this level.
Exercise: if the Higgs admits an important decay rate to non-SM
particles, Γ_H will increase. A larger width will reduce all BRs by a
common factor. Assuming that these decay signatures are elusive in a
hadron collider, discuss how ratios of BRs could still be used to learn
more about their origin.
Last but not least, Table 4 reports a 7% expected precision in the extraction of the Higgs self-coupling λ. This result is discussed in more detail in a later section, together with other probes of the Higgs self-interaction.
Cut (GeV) | | | |
δ sensitivity range | [0.98, 1.05] | [0.99, 1.04] | [0.99, 1.03] | [0.98, 1.02]
4.3 Longitudinal Vector Boson Scattering
The scattering of the longitudinal components of vector bosons is particularly sensitive to the relation between gauge couplings and the VVH coupling. A thorough analysis of same-sign W±W± scattering, in the context of the FCChh detector performance studies, is documented in Ref. [46]. The extraction of the signal requires the removal of large QCD backgrounds (such as WZ+jets) and the separation of the large EW background of transverse-boson scattering. The former is suppressed by requiring a large dilepton invariant mass and the presence of two jets at large forward and backward rapidities. The longitudinal component is then extracted from the scattering of transverse states by exploiting the different azimuthal correlations between the two leptons. The precision obtained for the measurement of the cross section, as a function of integrated luminosity, is shown in Fig. 11 (left). The three curves correspond to different assumptions about the rapidity acceptance of the detector, and drive the choice of the detector design, setting the required lepton (jet) acceptance in the forward region. The small change in precision when increasing the jet p_T threshold indicates a strong resilience of the results against the presence of large pileup. The quoted precision, reaching the value of 3% at 30 ab⁻¹, accounts for the systematic uncertainties of luminosity (1%), lepton efficiency (0.5%), PDF (1%) and the shape of the distributions used in the fit (10%). The right plot in Fig. 11 shows the impact of rescaling the WWH coupling by a factor δ. The effect is largest at the highest dilepton invariant masses, as expected. The measurement precision, represented by the small vertical bars, indicates a sensitivity to δ at the percent level, as shown also in Table 5.
5 Precision EW measurements
The Higgs boson forms an integral part of the EW sector, and its properties are deeply intertwined with those of EW phenomena. A thorough program of EW measurements goes hand in hand with the study of Higgs properties, and is an essential complement to it. At the FCC, EW interactions can be studied from multiple perspectives, extending by large factors all previous targets of precision and energy reach. We summarize here the main results from the existing studies, documented in more detail in Ref. [9, 10].
The FCCee run at the Z peak will deliver several orders of magnitude more statistics than LEP, with huge samples of both leptonic and hadronic Z decays. The larger statistics w.r.t. LEP will be accompanied by significant efforts to minimize the systematic uncertainties. For example, the beam energies will be measured more precisely, and better detectors will improve the efficiency of b-tagging and the precision of the absolute luminosity determination. Significant theoretical improvements in the calculation of higher-loop EW and QCD corrections are also foreseen, and are necessary to fully exploit the potential improvement by over two orders of magnitude in the statistical precision.
The EM coupling α_QED(m_Z) is a crucial input parameter to interpret SM precision observables. The EM coupling at the scale of the electron mass is the best known fundamental constant of nature, but its renormalization-group evolution to the scale of weak interactions is subject to important uncertainties, due to non-perturbative hadronic physics, which enters the photon self-energy corrections as the momentum scale evolves through the region of the hadronic resonances. The systematic uncertainties of the experimental data on e⁺e⁻ → hadrons end up dominating the precision of the extrapolation to m_Z, which is limited to the level of about 10⁻⁴ in relative terms. Dedicated runs at √s = 87.7 and 93.9 GeV will extract α_QED(m_Z) directly (namely without an extrapolation in energy) from the energy dependence of the forward-backward asymmetry, improving the current uncertainty by a factor of 4.
Forward-backward and polarization asymmetries will also allow the uncertainty in sin²θ_W^eff to be reduced by a factor of 30-50. I recall that today's determination of the weak mixing angle [48] is dominated by the combination of two precise measurements (the b-quark forward-backward asymmetry from LEP and the left-right polarization asymmetry from SLD), which differ among themselves by 3.2 standard deviations. While future HLLHC data [49] will provide an independent determination of the weak mixing angle with a precision approaching the current LEP/SLD one, using the lepton charge asymmetry in Z boson decays, it is only with the future FCCee data that this puzzling result will be clarified.
The Z-decay asymmetries will also help improve the measurement of the vector and axial couplings of the leptons and of the charm and bottom quarks. In the absence of a reliable technique to distinguish Z decays to the different lighter quarks (u, d and s), the most precise determination of their couplings to the neutral current will come from FCCeh. There, a simultaneous fit to the light quarks' EW couplings and to the PDFs, using both charged- and neutral-current data, will disentangle the individual quarks and allow the measurement of their respective vector and axial couplings. The projection in Fig. 12 for the precision of all fermionic couplings, from a global fit [50, 51] to both FCCee and FCCeh data treating each lepton and quark flavour as independent, shows the improvement expected with respect to today's knowledge.
The measurement of the total Z width Γ_Z, and of its visible fraction, will allow the invisible component of Γ_Z to be extracted. Today, the number of neutrino species obtained from the LEP data is N_ν = 2.984 ± 0.008, which is low by two standard deviations. A deficit in the neutrino counting from Z decays could be attributed to a violation of unitarity in the neutrino mixing matrix, or to the presence of right-handed neutrinos [52]. FCCee will improve the precision on N_ν by almost a factor of 10, down to 0.001.
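The neutrino-counting arithmetic behind this number can be reproduced with approximate LEP/PDG inputs (the values below are quoted for illustration; see the PDG for the precise numbers): the invisible width is what remains of Γ_Z after subtracting the hadronic and charged-leptonic parts, and dividing by the SM width per neutrino species gives N_ν.

```python
# Neutrino counting from the Z lineshape, using approximate LEP/PDG values
# (all in MeV; illustrative inputs, see the PDG for precise numbers).
GAMMA_Z = 2495.2      # total Z width
GAMMA_HAD = 1744.4    # hadronic width
GAMMA_LEP = 83.98     # width per charged-lepton species
GAMMA_NU_SM = 167.2   # SM width per neutrino species

gamma_inv = GAMMA_Z - GAMMA_HAD - 3 * GAMMA_LEP  # invisible width
n_nu = gamma_inv / GAMMA_NU_SM

print(round(gamma_inv, 1), round(n_nu, 3))  # -> 498.9 2.984
```

With today's ±0.008 uncertainty, the ~0.016 deficit with respect to N_ν = 3 is the two-standard-deviation effect mentioned above; the FCCee precision of 0.001 would make such a deficit unambiguous.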
The pairs of W bosons produced at the two energies of 157.5 and 162.5 GeV will reduce the uncertainty on the W mass, m_W, to 0.5 MeV, and on its width to 1.2 MeV. The limited statistics of W bosons from LEP2 left us with a puzzling discrepancy between the decay branching ratio of the W to the tau lepton and those to the e and μ. FCCee can reduce these uncertainties by almost two orders of magnitude, greatly increasing the sensitivity to possible violations of lepton flavour universality, a topic that is receiving great attention nowadays. For comparison, the Z decays to individual leptons will allow neutral-current lepton universality to be tested at the level of a few 10⁻⁵. τ semileptonic decays, and the τ lifetime, could achieve a sensitivity to deviations from lepton flavour universality in weak charged currents at a similar level.
A collection of the various EW precision measurements possible at the FCCee, including those relative to the top quark properties, is shown in Table 6. Their overall impact in testing the SM relation between the top and W masses is shown in Fig. 13.
Observable | Present value ± error | FCCee stat. | FCCee syst. | Comment and dominant exp. error
m_Z (keV) | 91186700 ± 2200 | 5 | 100 | From Z line shape scan; beam energy calibration
Γ_Z (keV) | 2495200 ± 2300 | 8 | 100 | From Z line shape scan; beam energy calibration
R_ℓ^Z (×10³) | 20767 ± 25 | 0.06 | 0.2-1.0 | Ratio of hadrons to leptons; acceptance for leptons
α_s(m_Z) (×10⁴) | 1196 ± 30 | 0.1 | 0.4-1.6 | From R_ℓ^Z above [54]
R_b (×10⁶) | 216290 ± 660 | 0.3 | <60 | Ratio of bb̄ to hadrons; stat. extrapol. from SLD [55]
σ⁰_had (pb) | 41541 ± 37 | 0.1 | 4 | Peak hadronic cross-section; luminosity measurement
N_ν (×10³) | 2991 ± 7 | 0.005 | 1 | Z peak cross sections; luminosity measurement
sin²θ_W^eff (×10⁶) | 231480 ± 160 | 3 | 2-5 | From A_FB^μμ at Z peak; beam energy calibration
1/α_QED(m_Z) (×10³) | 128952 ± 14 | 4 | small | From A_FB^μμ off peak [56]
A_FB^b,0 (×10⁴) | 992 ± 16 | 0.02 | 1-3 | b-quark asymmetry at Z pole; from jet charge
A_e (×10⁴) | 1498 ± 49 | 0.15 | <2 | τ polarisation and charge asymmetry; τ decay physics
m_W (MeV) | 80350 ± 15 | 0.5 | 0.3 | From WW threshold scan; beam energy calibration
Γ_W (MeV) | 2085 ± 42 | 1.2 | 0.3 | From WW threshold scan; beam energy calibration
α_s(m_W) (×10⁴) | 1170 ± 420 | 3 | small | From R_ℓ^W [57]
N_ν (×10³) | 2920 ± 50 | 0.8 | small | Ratio of invisible to leptonic in radiative Z returns
m_top (MeV) | 172740 ± 500 | 17 | small | From tt̄ threshold scan; QCD errors dominate
Γ_top (MeV) | 1410 ± 190 | 45 | small | From tt̄ threshold scan; QCD errors dominate
λ_top/λ_top^SM | 1.2 ± 0.3 | 0.1 | small | From tt̄ threshold scan; QCD errors dominate
ttZ couplings | ± 30% | 0.5-1.5% | small | From √s = 365 GeV run
5.1 Complementarity of EW and Higgs measurements
The best framework to expose the complementarity between Higgs and EW probes of new physics at large scales is that of the SM effective field theory (SMEFT [58, 59, 60]). Here one assumes that, as in the SM, the Higgs boson transforms as a doublet of SU(2), and considers all gauge-invariant operators, classified according to their dimension:
ℒ_SMEFT = ℒ_SM + Σ_{d>4} Σ_i (c_i^(d) / Λ^(d−4)) O_i^(d)     (5.0)
At dimension 4, one finds the SM itself. At dimension 5 appear the operators that generate Majorana neutrino masses. At dimension 6 one first finds operators that, parameterizing the lowenergy behaviour of new interactions beyond the EW scale, lead to modifications of EW and Higgs observables. Focusing on lepton and baryonnumber conserving operators, and assuming flavour universality, leaves a basis of 59 dim6 operators [59].
Figure 14 shows the constraints that future FCCee data can impose on the coefficients of the subset of operators that play a role in the EW and Higgs observables discussed so far. On the EW side, one has the following 10 operators:
O_φD = (φ† D^μ φ)* (φ† D_μ φ),   O_φWB = (φ† σ^a φ) W^a_μν B^μν,   O_ll = (l̄ γ^μ l)(l̄ γ_μ l),
O^(1)_φf = (φ† i D↔_μ φ)(f̄ γ^μ f),   O^(3)_φF = (φ† i D↔^a_μ φ)(F̄ γ^μ σ^a F)     (5.0)
where φ is the Higgs scalar doublet, f runs over all 5 types of SM fermion multiplets, while F only refers to the 2 types of SM left-handed fermion doublets. The φ field can induce effects in processes where an explicit Higgs particle is present, or can influence EW observables indirectly, when φ is set to its vacuum expectation value. The other 8 operators shown in Fig. 14 mostly affect Higgs observables:
O_φG = (φ†φ) G^a_μν G^{a μν},   O_φW = (φ†φ) W^a_μν W^{a μν},   O_φB = (φ†φ) B_μν B^μν,
O_φ□ = (φ†φ) □ (φ†φ),   O_fφ = (φ†φ) F̄_L φ f_R + h.c.     (5.0)
The sensitivities to the ratios c_i/Λ² are reported in Fig. 14 as 95% probability bounds on the interaction scale, Λ/√c_i, associated to each operator. Notice that, in the same way that the energy scale associated to the Fermi constant differs from the W boson mass by a factor related to the weak coupling, the interaction scale defined by Λ/√c_i does not correspond exactly to the mass of new particles.
A few remarks are in order. First of all, comparison [10] with the current constraints shows that the FCCee data will increase by a factor of 4-5 the sensitivity to new energy scales. This matches well the factor of 7 increase in energy of the FCChh w.r.t. the LHC, which will lead to a comparable increase in the direct sensitivity to new physics. In other words, a 100 TeV pp collider is properly scaled to search for the microscopic origin of possible deviations observed in the FCCee precision measurements. In absolute terms, the scales that can be probed in the case of weakly-interacting new physics range between a few and a few tens of TeV, while they can go up to O(100) TeV in the case of strongly interacting forces. The second remark is that Fig. 14 underscores the complementarity between EW and Higgs observables: even though most of the operators considered here include both Higgs and gauge bosons or fermions, their constraints come primarily from either EW or Higgs measurements. Both types of measurements are therefore necessary for a systematic exploration of BSM contributions induced via SMEFT operators. The scale sensitivity observed for the two types of operators is rather consistent, with most limits on interaction scales in the range of 20-30 TeV, showing that the precision targets set by the Higgs and EW programmes are coherent.
6 Precision versus sensitivity
In the previous sections, we focused on the very precise measurements that the FCC enables, both in the Higgs and in the EW sector of the SM. These measurements have a value per se, independently of whether we eventually find discrepancies w.r.t. the SM expectations. We need these precise measurements to test up to which point our understanding of fundamental EW phenomena is controlled by the SM. But we also need precise measurements because, if new physics is one day found somewhere, we shall need as much data as possible to evaluate and constrain the many models that will be proposed to explain it. In this context, data that agree with the SM can be as useful as data that do not. Precise data in agreement with the SM, for example, can help us rule out claims of observations of new physics that would openly clash with established measurements.
So, the success of a precision physics program is not necessarily tied to the discovery of discrepancies with the SM, but builds on reliable, unbiased and ever more accurate measurements of how nature behaves. Treating the achievable precision as a gauge of the sensitivity to new physics phenomena, however, is critical to examining and characterizing the potential relevance of a given measurement in a specific BSM context. It is also a useful way to evaluate the reach of different facilities, which might not measure exactly the same observables: the interpretation of their measurements in terms of constraints on some common BSM phenomenon can serve as a standard candle for their comparison, to assess synergies and complementarities.
In the context of the FCC, there is a great richness of synergies and complementarities. The same new physics model can manifest itself via departures in indirect precision measurements at FCC-ee, and directly via the production of new particles at FCC-hh. But low- and high-energy observables can also both participate in building indirect evidence for new phenomena living at scales that are beyond the direct discovery reach. This interplay can be shown with a general example, using the EFT language. Let us consider a dimension-6 operator $O_6$, parameterizing at energies $E \ll \Lambda$ the effects of some new physics present at a scale $\Lambda$. This induces a contribution to the Lagrangian given by $\delta\mathcal{L} = (c/\Lambda^2)\,O_6$, and leads to corrections $\delta$ to the transition between the initial and final states $i$ and $f$. By dimensional analysis, $\delta$ will scale, relative to a dimension-4 SM contribution, like $c\,(Q/\Lambda)^2$, where $Q$ is a mass scale characteristic of the transition. In the case of a Higgs decay, or of Higgs production at threshold as in $e^+e^- \to HZ$, the only possible scales are $m_H$ or $v$. Taking e.g. $Q = m_H$ and $\Lambda = 1$ TeV gives $\delta \sim (m_H/\Lambda)^2 \approx 1.5\%$. If the transition involves another large kinematical scale $Q \gg m_H$, as in the case of Higgs production at large $p_T$ (see Fig. 15), the deviation could scale like $\delta \sim c\,(Q/\Lambda)^2$, with $Q \sim p_T$. The impact of the EFT operator would therefore be greatly enhanced at large $Q$. Assuming again $\Lambda = 1$ TeV, a measurement at a scale $Q \sim 400$ GeV would lead to an effect of approximately 15%, ten times larger than for the decay or for production at threshold. This means that, at large $Q$, one can detect the presence of new physics effects with lesser precision than is required at smaller $Q$ (needless to say, the EFT formalism loses validity once the hard scale becomes large enough to induce the direct manifestation of new physics). A hadron collider cannot provide precision comparable to an electron collider, but its higher kinematical reach can compensate for this!
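The dimensional-analysis argument can be checked numerically (the values $\Lambda = 1$ TeV and $Q \sim 400$ GeV are the illustrative ones used in the text):

```python
# Relative deviation induced by a dim-6 operator scales like (Q/Lambda)^2,
# with Q the hardest scale of the process (illustrative numbers, c = 1).
Lambda = 1000.0   # assumed new-physics scale [GeV]
m_H = 125.0       # Higgs mass [GeV]

delta_threshold = (m_H / Lambda) ** 2    # Higgs decay / production at threshold
delta_highpt = (400.0 / Lambda) ** 2     # Higgs production at p_T ~ 400 GeV

print(f"threshold: {delta_threshold:.3f}, high-pT: {delta_highpt:.3f}, "
      f"enhancement: x{delta_highpt / delta_threshold:.0f}")
```

The ten-fold enhancement is what allows a hadron collider to trade precision for kinematic reach.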
Several studies of this interplay between precision and sensitivity from highenergy indirect measurements have appeared in the literature (see e.g. [61, 62, 63, 64, 65, 66, 67, 68]) covering both Higgs and EW observables. Several of these papers test these ideas in the context of the LHC, and the sensitivity comparison is therefore drawn w.r.t. the LEP precision measurements. We shall give here a couple of examples specific to the 100 TeV collider, where the kinematic reach and the large Higgs production rates make this approach even more powerful.
6.1 Example: the VVHH coupling
Figure 16 shows the diagrams for Higgs-pair production via vector-boson fusion, studied in Ref. [69]. At large invariant mass of the Higgs pair, $m_{HH}$, the triple-Higgs-coupling contribution is suppressed by the off-shell Higgs propagator, and the amplitude is controlled by the behaviour of its longitudinal-longitudinal component, characterised by the destructive interference between the first two diagrams:
$A(V_L V_L \to HH) \;\to\; \dfrac{\hat{s}}{v^2}\left(c_{2V} - c_V^2\right) \quad \text{for } \hat{s} \gg m_H^2$   (6.0)
Here, $c_{2V}$ and $c_V$ represent, respectively, the coefficients of the VVHH and VVH couplings, normalised to their SM values. The combination $c_{2V} - c_V^2$ vanishes in the SM and in models where the electroweak symmetry is linearly realized, and the growth of the amplitude with energy is suppressed. In composite Higgs models based e.g. on an $SO(5)/SO(4)$ symmetry [70], on the other hand, $c_V = \sqrt{1-\xi}$ and $c_{2V} = 1-2\xi$ (with $\xi = v^2/f^2$ and $f$ the compositeness scale), so the scattering amplitude grows with energy as $\hat{s}\,\xi/v^2$.
The study of Ref. [69] considered the $HH \to 4b$ final state, applying boosted-jet tagging techniques, given the high $p_T$ of the Higgs bosons at high $m_{HH}$. The impact of $c_{2V}$ is visible in Fig. 17, which shows the $m_{HH}$ distribution in the SM and in a scenario where $c_{2V}$ deviates from its SM value, together with the expected backgrounds. $c_V$ will be measured with few-per-mille precision at FCC-ee (independently of whether it agrees or not with the SM), and the constraints on $c_{2V} - c_V^2$ at FCC-hh will then translate directly into a constraint on $c_{2V}$. The detailed study of Ref. [69] projects a sensitivity to deviations of $c_{2V}$ from its SM value at the percent level, in spite of the much coarser precision in the measurement of the HH rates! See Ref. [69] for the discussion of the validity range of the EFT approximation.
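To see why percent-level sensitivity to $c_{2V}$ is plausible despite coarse HH-rate measurements, one can sketch the energy growth of the longitudinal amplitude (the $\hat{s}/v^2\,(c_{2V}-c_V^2)$ scaling is from Ref. [69]; the normalization is schematic and the numbers below are illustrative):

```python
# Energy-growing piece of the V_L V_L -> HH amplitude when c_2V deviates
# from its SM relation; schematic normalization in units of v.
v = 246.0  # Higgs vev [GeV]

def amp_bsm_over_sm(m_hh, c_2V, c_V=1.0):
    """Schematic energy-growing BSM term: (m_HH/v)^2 * (c_2V - c_V^2)."""
    return (m_hh / v) ** 2 * (c_2V - c_V ** 2)

# A 1% deviation of c_2V becomes an O(1) effect at multi-TeV m_HH:
for m_hh in (500.0, 1000.0, 2000.0):
    print(f"m_HH = {m_hh:.0f} GeV: {amp_bsm_over_sm(m_hh, c_2V=1.01):.3f}")
```

The quadratic growth with $m_{HH}$ is what converts a small coupling deviation into a visible distortion of the high-mass tail of Fig. 17.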
6.2 Example: DrellYan at large mass
Drell-Yan (DY) production is another example where energy can complement precision, achieving sensitivity to new high-scale phenomena [71]. The total production rates of W and Z bosons at 100 TeV are about 1.3 and 0.4 μb, respectively, i.e. samples of $O(10^{11})$ leptonic decays per ab$^{-1}$! A large fraction of these events probe very high energies. Figure 18, left panel, shows the integrated spectra of the W boson transverse mass ($m_T$) and of the $\gamma^*/Z$ dilepton invariant mass.
Exercise: The right panel of Fig. 18 shows the invariant mass spectrum of gauge boson pairs. Compare these rates to those for lepton-pair production (left panel), and discuss the reasons for the large differences.
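The quoted cross sections translate into event yields as follows (the cross sections are from the text; the leptonic branching ratios, summed over $e$ and $\mu$, are standard values and are the only numbers added here):

```python
# Order-of-magnitude DY event yields at a 100 TeV pp collider.
sigma_W = 1.3e-6   # W production cross section [barn] (1.3 microbarn)
sigma_Z = 0.4e-6   # Z production cross section [barn]
lumi = 1e18        # integrated luminosity: 1 ab^-1 in barn^-1

BR_W_lnu = 0.213   # BR(W -> e nu) + BR(W -> mu nu)
BR_Z_ll = 0.0673   # BR(Z -> ee) + BR(Z -> mumu)

n_W_lep = sigma_W * lumi * BR_W_lnu
n_Z_lep = sigma_Z * lumi * BR_Z_ll

print(f"{n_W_lep:.1e} leptonic W decays, {n_Z_lep:.1e} dilepton Z decays per ab^-1")
```

The result, a few times $10^{11}$ leptonic W decays per ab$^{-1}$, sets the statistical floor for the high-mass tails exploited below.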
In the presence of new physics, large corrections to the SM prediction can arise from the $W$ and $Y$ oblique parameters, defined by the following dim-6 EFT operators [72]:
$\mathcal{L} \supset -\dfrac{W}{4\,m_W^2}\left(D_\rho W_{\mu\nu}^a\right)^2 - \dfrac{Y}{4\,m_W^2}\left(\partial_\rho B_{\mu\nu}\right)^2$   (6.0)
These parameters capture the universal modifications of the EW gauge boson propagators, and are already constrained, at the per-mille level, by LEP2 precision measurements and by the W mass measurements at the Tevatron and the LHC (as well as by other precision measurements at the Z pole at LEP/SLD). The FCC-hh DY statistics at very high mass will contribute, together with the precision measurements at lower energy by FCC-ee, to improving the current constraints by two orders of magnitude, as shown in Table 7. In terms of a new physics scale $\Lambda$ defined by $W, Y = m_W^2/\Lambda^2$, the FCC-hh reach corresponds to $\Lambda$ of several tens of TeV, a sensitivity that could be matched by a multi-TeV lepton collider such as CLIC. We should remark that the low- and high-energy approaches should not be seen as alternatives, but as synergetic: should deviations be observed in either of them, the other measurement would serve as an independent probe to confirm the SM departure, and to help pin down its origin.
            LEP    ATLAS 8      CMS 8        LHC 13      FCC-hh    FCC-ee
luminosity         19.7 fb^-1   20.3 fb^-1   0.3 ab^-1   3 ab^-1   10 ab^-1
NC    W
      Y
CC    W     —                                                      —
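The translation of a bound on $W$ (or $Y$) into a mass scale via $W = m_W^2/\Lambda^2$ can be sketched as follows (the two bound values below are illustrative placeholders, not the actual Table 7 entries):

```python
import math

# Scale probed by a bound on the oblique W (or Y) parameter,
# via W = m_W^2 / Lambda^2  =>  Lambda = m_W / sqrt(W).
m_W = 80.4  # W boson mass [GeV]

def scale_from_W(W_bound):
    """New-physics scale corresponding to an upper bound on W."""
    return m_W / math.sqrt(W_bound)

# Illustrative per-mille-level bound, and a bound 100x stronger:
print(f"W < 1e-3 -> Lambda > {scale_from_W(1e-3) / 1e3:.1f} TeV")
print(f"W < 1e-5 -> Lambda > {scale_from_W(1e-5) / 1e3:.0f} TeV")
```

Since $\Lambda \propto 1/\sqrt{W}$, improving a bound by two orders of magnitude gains one order of magnitude in scale, moving the reach from a few TeV to several tens of TeV.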
Further applications of the huge DY lever arm in mass are (i) the determination of the running of the EW couplings, by measuring the transverse (invariant) mass spectrum of (di)leptons produced by far off-shell W (Z) bosons [73], and (ii) the indirect search for new heavy EW particles (like gauginos in supersymmetry) using the distortion of the DY shape near their production threshold [74].
7 The Higgs potential
As we discussed at the beginning of these lectures, understanding the origin of the Higgs potential is among the most outstanding targets of future colliders, if not the most outstanding one. As an essential part of this understanding, we must start measuring it. Let us briefly rediscuss the relations between the parameters of the Higgs potential and the physical observables, in a context slightly more general than the SM. To simplify the notation, let us consider a single real field $\phi$, and the following simple generalization of the SM potential (you could consider repeating the exercise with a more general functional form):
$V(\phi) = -\dfrac{\mu^2}{2}\,\phi^2 + \dfrac{\lambda_n}{n}\,\phi^n$   (7.0)
$n$ must be even, and of course $n = 4$ in the SM. The two key relations, obtained by setting $V'(v) = 0$ at the minimum of the potential and defining $m_H^2 = V''(v)$, give:
$\mu^2 = \lambda_n\, v^{n-2}\,, \qquad m_H^2 = (n-2)\,\mu^2$   (7.0)
We stress that, while dimensional analysis makes the Higgs mass proportional to $\mu$, its precise value is not defined by the dynamics near the origin $\phi = 0$, where the quadratic term dominates; it is defined by the dynamics around the minimum $\phi = v$, which could be far away from the origin. This is reflected by the coefficient in Eq. 7.0, whose specific value depends on the shape of the potential at large $\phi$. In other words, the quadratic term of the potential only provides the overall scale of the Higgs mass, but the specific value is controlled by the Higgs dynamics at the minimum, where the higher-order terms of the potential are important: in the same way that the masses of SM particles are related to the strength of their interaction with the Higgs, it is reasonable that the mass of the Higgs be related to its own self-interaction. Therefore, studying the structure of the Higgs potential is also a way to address the question of "what gives mass to the Higgs".
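This $n$-dependence can be verified numerically. The sketch below assumes one possible normalization of the generalized potential, $V(\phi) = -\mu^2\phi^2/2 + (\lambda_n/n)\,\phi^n$, for which the curvature at the minimum is $V''(v) = (n-2)\,\mu^2$, independently of $\lambda_n$:

```python
# Numerical check that the Higgs mass squared, V''(v), equals (n-2)*mu^2
# for V(phi) = -mu^2 phi^2 / 2 + (lambda_n / n) phi^n (assumed normalization).
mu2, lam = 1.0, 0.5   # arbitrary test values for mu^2 and lambda_n

def V(phi, n):
    return -0.5 * mu2 * phi**2 + (lam / n) * phi**n

def second_derivative(f, x, h=1e-4):
    # central finite difference
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

for n in (4, 6, 8):
    v = (mu2 / lam) ** (1.0 / (n - 2))   # minimum: mu^2 = lambda_n v^(n-2)
    mH2 = second_derivative(lambda p: V(p, n), v)
    print(n, round(mH2, 3), (n - 2) * mu2)
```

The numerical curvature matches $(n-2)\,\mu^2$ for every $n$, making explicit that the quadratic term alone does not fix the Higgs mass.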
Now, the potential above has three parameters: $\mu$, $\lambda_n$, and the power $n$. However, we only have two measurements: the Higgs mass $m_H$ and its expectation value $v$. The two are absolute numbers, independent of the shape of the potential: the mass is what we measure in the experiments, while $v$ is given by the relation $m_W = g v/2$, where $g$ is the weak gauge coupling (recall also that $v = (\sqrt{2}\,G_F)^{-1/2} \approx 246$ GeV). For each $n$, we can extract a value for $\mu$ and $\lambda_n$, but we won't be able to determine the shape of the potential. In other words, the only thing we know experimentally about the Higgs potential is that its second derivative at the origin is negative, to drive $v$ away from 0. We have no experimental evidence, as of today, that $n = 4$ rather than 6. To make progress in learning about the structure of $V(\phi)$, we need a further measurement, sensitive to the power $n$ in Eq. 7.0. Expanding the potential around its minimum, the cubic term controls the cubic self-interaction, and its strength is given by:
$\lambda_3 = \dfrac{1}{3!}\,V'''(v) = \dfrac{(n-1)\,m_H^2}{6\,v}$   (7.0)
Assuming a Higgs potential given by Eq. 7.0, and given that $m_H$ and $v$ are known, the measurement of the Higgs cubic self-interaction would directly be a measurement of $n$. More in general, it is clear that the Higgs self-coupling is a mandatory measurement to start learning about the Higgs potential. Notice one point: the change from $n = 4$ (as in the SM) to something like $n = 6$ has a big impact on the self-coupling, namely it increases it by a factor of 5/3. With this modification of the Higgs potential, there is no continuous knob that allows $\lambda_3$ to be changed smoothly from its SM value. Large as it is, this change would still have escaped experimental detection so far, so for all we know this is still an open option. We should therefore be open to the possibility that $\lambda_3$ differs from its SM value by $O(1)$ effects, and a 20% uncertainty in its determination might already probe possible deviations at the 5σ level.
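With the same normalization of the generalized potential (an assumption of this sketch), the cubic coupling scales like $(n-1)$, reproducing the factor 5/3 for $n = 4 \to 6$, and an $O(1)$ deviation measured with 20% precision is a $5\sigma$ effect:

```python
# Cubic self-coupling from expanding V around its minimum: with the
# generalized potential V = -mu^2 phi^2/2 + (lambda_n/n) phi^n (assumed
# normalization), lambda_3 = (n-1) m_H^2 / (6 v), i.e. it scales as (n-1).
def lambda3_over_SM(n):
    """lambda_3(n) normalized to the SM value (n = 4)."""
    return (n - 1) / 3.0

shift_n6 = lambda3_over_SM(6) - 1.0   # fractional shift for n = 6: ~0.67
sigma_O1 = 1.0 / 0.20                 # an O(1) deviation vs 20% precision

print(f"n=6 shift in lambda_3: {shift_n6:.2f}; O(1)/20% = {sigma_O1:.0f} sigma")
```

The discreteness of the $n = 4 \to 6$ jump is the point: there is no small parameter interpolating between the two cases.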
Needless to say, the most likely scenario assumes the presence of the SM quartic coupling, supplemented by other higher-order terms, like $\phi^6/\Lambda^2$. (The presence of a quartic coupling is unavoidable: even if, for some odd reason, the underlying fundamental theory did not have such a coupling, the quartic would be generated via radiative corrections at the one-loop order, starting from the $\phi^6$ term.) In this case, $\lambda_3$ becomes a continuously tunable parameter, which can differ from the SM Higgs self-coupling by arbitrarily small amounts:
$\lambda_3 = \lambda_3^{\rm SM}\left(1 + O(1)\,\dfrac{v^2}{\Lambda^2}\right)$   (7.0)
One more small remark (notice that $\lambda_3$ obviously has mass dimension 1, independently of the form of the potential, since it is the parameter of a three-boson coupling): we can rewrite Eq. 7.0 as