# The Degree of Fine-Tuning in our Universe – and Others

###### Abstract

Both the fundamental constants that describe the laws of physics and the cosmological parameters that determine the properties of our universe must fall within a range of values in order for the cosmos to develop astrophysical structures and ultimately support life. This paper reviews the current constraints on these quantities. The discussion starts with an assessment of the parameters that are allowed to vary. The standard model of particle physics contains both coupling constants and particle masses, and the allowed ranges of these parameters are discussed first. We then consider cosmological parameters, including the total energy density of the universe, the contribution from vacuum energy, the baryon-to-photon ratio, the dark matter contribution, and the amplitude of primordial density fluctuations. These quantities are constrained by the requirements that the universe lives for a sufficiently long time, emerges from the epoch of Big Bang Nucleosynthesis with an acceptable chemical composition, and can successfully produce large scale structures such as galaxies. On smaller scales, stars and planets must be able to form and function. The stars must be sufficiently long-lived, have high enough surface temperatures, and have smaller masses than their host galaxies. The planets must be massive enough to hold onto an atmosphere, yet small enough to remain non-degenerate, and contain enough particles to support a biosphere of sufficient complexity. These requirements place constraints on the gravitational structure constant, the fine structure constant, and composite parameters that specify nuclear reaction rates. We then consider specific instances of possible fine-tuning in stellar nucleosynthesis, including the triple alpha reaction that produces carbon, the case of unstable deuterium, and the possibility of stable diprotons. For all of the issues outlined above, viable universes exist over a range of parameter space, which is delineated herein.
Finally, for universes with significantly different parameters, new types of astrophysical processes can generate energy and thereby support habitability.

###### keywords:

Fine-tuning, Multiverse, Fundamental Constants, Cosmology, Stellar Evolution, Nucleosynthesis, Habitability^{†}

^{†}journal: Physics Reports

Table of Contents

1. Introduction

1.1. Types of Fine-Tuning Arguments

1.2. Scales of the Universe

1.3. Formulation of the Fine-Tuning Problem

1.4. Scope of this Review

2. Particle Physics Parameters

2.1. The Standard Model of Particle Physics

2.2. Constraints on Light Quark Masses

2.2.1. Stability of Quarks within Hadrons

2.2.2. Stability of Protons and Neutrons within Nuclei

2.2.3. Stability of Free Protons and Hydrogen

2.2.4. Unbound Deuterium and Bound Diprotons

2.2.5. Plane of Allowed Quark Masses

2.2.6. Summary of Quark Constraints

2.2.7. Mass Difference between the Neutron and Proton

2.2.8. Constraints on the Higgs Parameters

2.3. Constraints on the - Plane

2.4. Constraints on the Strong Coupling Constant

2.5. Additional Considerations

2.5.1. Charge Quantization

2.5.2. Constraint from Proton Decay

3. Cosmological Parameters and the Cosmic Inventory

3.1. Review of Parameters

3.2. Constraints on the Cosmic Inventory

3.3. The Flatness Problem

3.4. Quantum Fluctuations and Inflationary Dynamics

3.5. Eternal Inflation

4. The Cosmological Constant and/or Dark Energy

4.1. The Cosmological Constant Problem

4.2. Bounds on the Vacuum Energy Density from Structure Formation

5. Big Bang Nucleosynthesis

5.1. BBN Parameters and Processes

5.2. BBN Abundances with Parameter Variations

5.2.1. Variations in the Baryon to Photon Ratio

5.2.2. Variations in the Gravitational Constant

5.2.3. Variations in the Neutron Lifetime

5.2.4. Variations in the Fine Structure Constant

5.2.5. Variations in both and

5.3. BBN without the Weak Interaction

6. Galaxy Formation and Large Scale Structure

6.1. Mass and Density Scales of Galaxy Formation

6.2. Structure of Dark Matter Halos

6.3. Bounds on the Amplitude of Primordial Fluctuations from Planet Scattering

6.4. Constraints from the Galactic Background Radiation

6.5. Variations in the Abundances of Dark Matter and Baryons

6.6. Gravitational Potential of Galaxies

6.7. Cooling Considerations

7. Stars and Stellar Evolution

7.1. Analytic Model for Stellar Structure

7.2. Minimum Stellar Temperatures

7.3. Stellar Lifetime Constraints

7.4. The Triple-Alpha Reaction for Carbon Production

7.5. Effects of Unstable Deuterium and Bound Diprotons on Stars

7.5.1. Universes with Stable Diprotons

7.5.2. Universes with Unstable Deuterium

7.6. Stellar Constraints on Nuclear Forces

7.6.1. Stellar Evolution without the Weak Interaction

7.6.2. Stellar Constraint on the Weak Interaction

7.6.3. Supernova Constraint on the Weak Interaction

7.6.4. Supernova Constraint on the Nucleon Potential

8. Planets

8.1. Mass Scale for Non-Degenerate Planets

8.2. Mass Scale for Atmospheric Retention

8.3. Allowed Range of Parameter Space for Planets

8.4. Planet Formation

8.5. Planets and Stellar Convection

9. Exotic Astrophysical Scenarios

9.1. Dark Matter Halos as Astrophysical Objects

9.1.1. Power from Dark Matter Annihilation

9.1.2. Time Evolution of Dark Matter Halos

9.2. Dark Matter Capture and Annihilation in White Dwarfs

9.3. Black Holes as Stellar Power Sources

9.4. Degenerate Dark Matter Stars

9.5. Nuclear-Free Universe

10. Conclusion

10.1. Summary of Fine-Tuning Constraints

10.2. General Trends

10.3. Anthropic Arguments

10.4. Is our Universe Maximally Habitable?

10.5. Open Issues

10.6. Insights and Perspective

Appendix A. Mass Scales in terms of Fundamental Constants

Appendix B. Number of Space-Time Dimensions

B.1. Stability of Classical Orbits

B.2. Stability of Atoms: Bound Quantum States

Appendix C. Chemistry and Biological Molecules

Appendix D. Global Bounds on the Structure Constants

Appendix E. Probability Considerations

Appendix F. Nuclei and the Semi-Empirical Mass Formula

References

## 1 Introduction

The laws of physics in our universe support the development and operations of biology — and hence observers — which in turn require the existence of a range of astrophysical structures. The cosmos synthesizes light nuclei during its early history and later produces a wide variety of stars, which forge the remaining entries of the periodic table. On larger scales, galaxies condense out of the expanding universe and provide deep gravitational potential wells that collect and organize the necessary ingredients. On smaller scales, planets form alongside their host stars and provide suitable environments for the genesis and maintenance of life. Within our universe, the laws of physics have the proper form to support all of these building blocks that are needed for observers to arise. However, a large and growing body of research has argued that relatively small changes in the laws of physics could render the universe incapable of supporting life. In other words, the universe could be fine-tuned for the development of complexity. The overarching goal of this contribution is to review the current arguments concerning the possible fine-tuning of the universe and make a quantitative assessment of its severity.

Current cosmological theories argue that our universe may be only one component of a vast collection of universes that make up a much larger region of space-time, often called the “multiverse” or the “megaverse” carrellis (); davies2004 (); deutsch (); donoghuethree (); ellis2004 (); garriga2008 (); hallnomura (); lindemultiverse (); reesbefore (). This ensemble is depicted schematically in Figure 1. Parallel developments in string theory and its generalizations indicate that the vacuum structure of the universe could be sampled from an enormous number of possible states boussopolchinski (); halverson (); hogan2006 (); kachru (); schellekens (); susskind (). The potential energy function for this configuration space is depicted schematically in Figure 2, where each minimum corresponds to a different low-energy universe. If each individual universe within the multiverse (represented by a particular bubble in Figure 1) samples the underlying distribution of possible vacuum states (by choosing a particular local minimum represented in Figure 2), the laws of physics could vary from region to region within the ensemble. In this scenario, our universe represents one small subdomain of the entire space-time with one particular implementation of the possible versions of physical law. Other domains could have elementary particles with different properties and/or different cosmological parameters. A fundamental question thus arises: What versions of the laws of physics are necessary for the development of astrophysical structures, which are in turn necessary for the development of life?

### 1.1 Types of Fine-Tuning Arguments

Fine-tuning arguments have a long history dicke (); dirac1937 (); dirac1938 (); gamow (). Although many previous treatments have concluded that the universe is fine-tuned for the development of life barnes2012 (); barrow2002 (); bartip (); boussoetal (); carr (); carter1974 (); carter1983 (); davies2006 (); dirac1974 (); donoghue (); hogan (); lewbarn (); reessix (); schellekens (); uzan (); uzantwo (), it should be emphasized that different authors make this claim with widely varying degrees of conviction (see also bradford2011 (); carroll2006 (); davies2004 (); gleiser (); hogan2006 (); linde (); liviorees2018 (); rees2003 ()). We also note that this topic has been addressed through the lens of philosophy (see craigcarroll (); ellisphil (); friederich (); leslie (); smeenk () and references therein), although this present discussion will focus on results from physics and astronomy. In any case, the concept of fine-tuning is not precisely defined. Here we start the discussion by making the distinction between two types of tuning issues:

The usual meaning of “fine-tuning” is that small changes in the value of a parameter can lead to significant changes in the system as a whole. For example, if the strong nuclear force were somewhat weaker, then the deuterium nucleus would no longer be bound. If the strong force were somewhat stronger, then diprotons would be bound. In both of these examples, relatively small changes (here, several percent variations in the strong force) lead to different nuclear inventories. A second type of tuning arises when a parameter of interest has a vastly different value from that expected (usually on theoretical grounds). The cosmological constant provides an example of this issue: The observed value of the cosmological constant is smaller than some expectations of its value by roughly 120 orders of magnitude. This second type of tuning is thus hierarchical. In the first example, the strong nuclear force can apparently vary by only a few percent without rendering deuterium unstable or diprotons stable. Nuclear structure thus represents a possible instance of Sensitive Fine-Tuning. In the second example, the value of the cosmological constant could be a million times smaller or larger (if the fluctuation amplitude is also allowed to vary) and nothing catastrophic would happen, but the values would still be much smaller than the Planck scale (by many orders of magnitude). The cosmological constant is thus an example of Hierarchical Fine-Tuning.

In addition, when an unexpected hierarchy arises due to some quantity being much smaller than its natural scale, one way to get such an ordering is for two large numbers to almost-but-not-quite cancel. This near cancellation of large quantities can be extremely sensitive to their exact values and could thus require some type of tuning. This state of affairs arises, for example, in the cosmological constant problem caldwellkam (); weinberg89 () (see Section 4). This general concept is known as Naturalness. Although many definitions exist in the literature, the basic idea is that a quantity in particle physics is considered unnatural if the quantum corrections are larger than the observed value (for recent discussions of this issue, see dine2015 (); wellstune () and references therein; for a more critical point of view, see hossenfelder ()). In such a situation, the quantum corrections must (mostly) cancel in order to allow for the observed small value to emerge. This cancellation is not automatic, so that it requires some measure of fine-tuning. One way to codify this concept, due to ’t Hooft, is to state a Principle of Naturalness: A physical quantity should be small if and only if the underlying theory becomes more symmetric in the limit where that quantity approaches zero thooft ().
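The sensitivity induced by a near cancellation can be made concrete with a toy numerical sketch. Everything below is purely illustrative (the numbers correspond to no physical theory): a small observable produced by the near-cancellation of two large contributions is hyper-sensitive to tiny changes in either one.

```python
# Toy illustration of Naturalness (all numbers hypothetical): a small
# observable arising from the near-cancellation of two large contributions.
bare       = 1.000000e6     # "bare" contribution
correction = -9.999999e5    # "quantum correction" that almost cancels it
observable = bare + correction            # ~0.1, seven digits of cancellation

# A one-part-in-a-million change in the bare term...
bare_shifted = bare * (1.0 + 1.0e-6)
observable_shifted = bare_shifted + correction

# ...changes the observable by a factor of order ten, i.e. the relative
# sensitivity is amplified by roughly seven orders of magnitude.
amplification = (observable_shifted - observable) / observable
```

In this sketch a fractional input change of $10^{-6}$ produces a fractional output change of order ten, which is the hallmark of an unnatural (finely canceled) quantity.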

### 1.2 Scales of the Universe

The physical constituents of our universe display a hierarchy of scales that allows it to function carr (); liviorees2018 (); rees1980 (). Before considering the details of fine-tuning, it is useful to assess the scope of our particular universe. Figure 3 depicts the range of length scales and mass scales that allow our universe to operate. The masses and lengths are given in units of the proton mass and the proton size, respectively. The triangular symbol at the origin thus marks the location of the proton. At the other end of the diagram, the mass and size of the observable universe are marked by the triangle at the opposite corner. Objects that are smaller than their event horizons ($R < 2GM/c^2$) fall below the red line, and lie in the black hole regime. Objects that are smaller than their Compton wavelengths ($R < \hbar/Mc$) fall below the blue line and lie in the quantum regime. These two regions meet at the location of a Planck mass black hole, marked by the lower triangle. Contours of constant density are shown by the dashed lines in the figure. A number of macroscopic bodies lie near the line of atomic density (middle dashed curve), which extends from the Hydrogen atom on the left to the black hole boundary on the right. In between, the green line segment shows the regime of known life forms, ranging from bacteria to whales. Planets are depicted by the square symbols and stars are depicted by the circles. The range of known black holes is shown by the heavy black line segment. Note that this segment is much shorter than the total possible range of black holes, which could span the entire red line. Finally, the region sampled by galactic structures is shown as the shaded region in the upper right portion of the diagram.

Figure 3 illustrates both the challenges and limitations posed by the scales of the universe. The full mass range spans approximately 80 decades. The range in radial scale, while large, is more constrained. The lower dashed curve shows the contour of nuclear density. At large mass scales, where gravity can crush material to higher density, objects become black holes. For lower masses, the nuclear forces dominate, so that our universe does not generally produce entities with sizes below the line of nuclear density. The upper dashed curve corresponds to the density of the universe as a whole. Objects above this curve would have densities lower than that of background space, and would be subject to tidal destruction. As a result, our universe does not generally produce entities that fall above this line. The range of possible sizes for a given mass thus spans ‘only’ about 15 decades (with a smaller range at high masses because of the black hole limit). The universe, with its myriad structure, thus supports a parameter space of roughly 80 × 15 ≈ 1200 square decades. This large range of length and mass scales is enabled by the large hierarchy between the strength of gravity and the electromagnetic force. As emphasized previously carr (); liviorees2018 (), if gravity were stronger, this range of scales would be correspondingly smaller: The red line would move upward in Figure 3 and the real estate available for astrophysical structures would shrink accordingly.
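The boundary lines of Figure 3 follow from textbook formulas and can be sketched numerically. The snippet below is a minimal sketch in cgs units with rounded constants; the mass of the observable universe is taken as an approximate fiducial value of $10^{56}$ g. It locates the crossing of the black hole and quantum boundaries near the Planck mass and recovers the roughly 80 decades of mass between the proton and the universe.

```python
import math

# Physical constants in cgs units (rounded)
G    = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
c    = 2.998e10     # speed of light [cm s^-1]
hbar = 1.055e-27    # reduced Planck constant [erg s]
m_p  = 1.673e-24    # proton mass [g]

def schwarzschild_radius(M):
    """Black hole boundary (red line): R = 2 G M / c^2."""
    return 2.0 * G * M / c**2

def compton_wavelength(M):
    """Quantum boundary (blue line): R = hbar / (M c)."""
    return hbar / (M * c)

# The two boundaries cross at (roughly) the Planck mass, ~2.2e-5 g.
M_planck = math.sqrt(hbar * c / G)

# Decades of mass between the proton and the observable universe (~1e56 g,
# an assumed fiducial value for this sketch).
M_universe = 1.0e56
decades_of_mass = math.log10(M_universe / m_p)   # ~80
```

At the crossing point both boundary radii are of order the Planck length, which is why a Planck mass black hole marks the meeting of the two regimes.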

Another feature of the universe illustrated by Figure 3 is that the regions occupied by particular types of terrestrial and astrophysical objects are relatively small. The diagram shows the locations in parameter space populated by life forms, planets, stars, black holes, and galaxies. Moreover, the regions populated by atoms are tightly clustered near the point shown for the Hydrogen atom. Similarly, nuclei are clustered near the location of the proton. All of these regions are small compared to the total available parameter space and are widely separated from each other.

### 1.3 Formulation of the Fine-Tuning Problem

The overarching question under review is whether the parameters of physics in our universe are fine-tuned for the development of life. This question, which can be stated simply, is fraught with complications. This section outlines the basic components of the fine-tuning problem.

The first step is to specify what parameters of physics and astrophysics are allowed to vary from universe to universe. It is well known that the Standard Model of Particle Physics has at least 26 parameters, but the theory must be extended to account for additional physics, including gravity, neutrino oscillations, dark matter, and dark energy (Section 2). Specification of such extensions requires additional parameters. On the other hand, not all of the parameters are necessarily vital to the functioning of the low-energy universe (which does not depend on the exact masses of the heavy quarks). One hope — not yet realized — is that a more fundamental theory would have fewer parameters, and that the large number of Standard Model parameters could be derived or calculated from the smaller set. As a result, the number of parameters could be larger or smaller than the well-known 26. In addition to the parameters of particle physics, the Standard Cosmological Model has its own set of quantities that are necessary to specify the properties of the universe (Section 3). These parameters include the baryon-to-photon ratio, the analogous ratio for dark matter, the amplitude of primordial density fluctuations, the energy density of background space, and so on. In principle, some or all of these quantities could be calculated from a fundamental theory, but this program cannot be carried out at present. Even if the cosmological parameters are calculable, their values could depend on the expansion history of the particular universe in question, so that these values depend on the initial conditions (presumably set at the Planck epoch).

Once the adjustable parameters of physics and cosmology have been identified, a full description of the problem must consider their probability distributions. In the case of a single parameter, we need to know the underlying probability distribution for a universe to realize a given value of that parameter. For example, if the underlying probability distribution is a delta function centered on the value measured in our universe, then all universes must be the same in this regard. In the more general case of interest for fine-tuning arguments, the probability distributions are assumed to be sufficiently wide that large departures from our universe are possible. In particular, the range of possible parameter values (the minima and maxima of the distributions) must be specified. A full assessment of fine-tuning requires knowledge of these fundamental probability distributions, one for each parameter of interest (although they are not necessarily independent). Unfortunately, these probability distributions are not available at the present time.
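Although the true priors are unknown, the role they would play can be sketched with a toy Monte Carlo. Everything in the snippet below is hypothetical: a log-uniform prior spanning six decades around the measured fine structure constant, and a mock "viable" window of a factor of five on either side. The point is only the mechanics of folding an assumed prior against an assumed viability range.

```python
import math
import random

random.seed(1)   # deterministic toy run

ALPHA_OURS = 1.0 / 137.0            # fine structure constant in our universe
PRIOR_MIN  = ALPHA_OURS / 1.0e3     # hypothetical prior range (six decades)
PRIOR_MAX  = ALPHA_OURS * 1.0e3
VIABLE_MIN = ALPHA_OURS / 5.0       # hypothetical viability window
VIABLE_MAX = ALPHA_OURS * 5.0

def sample_log_uniform(lo, hi):
    """Draw a value whose logarithm is uniform on [log(lo), log(hi)]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

N = 100_000
viable = sum(1 for _ in range(N)
             if VIABLE_MIN <= sample_log_uniform(PRIOR_MIN, PRIOR_MAX) <= VIABLE_MAX)
fraction = viable / N
# Analytic expectation for this toy prior: log(25) / log(10^6) ~ 0.23
```

Changing either the assumed prior width or the assumed viable window changes the resulting "probability of habitability," which is precisely why the unknown priors dominate the uncertainty in any quantitative fine-tuning claim.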

The probability distributions described above are priors, i.e., theoretically predicted distributions that apply to a random point in space-time at the end of the inflationary epoch (or more generally whatever epoch of the ultra-early universe sets up its initial conditions). As emphasized by Ref. tegmark (), one must also consider selection effects in order to test the theoretical predictions through experiment. For example, if a parameter affects the formation of planets, then the probability distribution for that parameter will be different when evaluated at a random point in space-time or at a random planet.

The crucial next step is to determine what range of the parameters allow for observers to develop. The question of what constitutes an observer represents yet another complication. For the sake of definiteness, this review considers a universe to be successful (equivalently, viable or habitable) if it can support the full range of astrophysical structures necessary for life or some type of complexity to arise. We then implicitly assume that observers will arise if the requisite structures are in place, and we won’t worry whether the resulting observers are mice or dolphins or androids. The list of required structures includes complex nuclei, planets, stars, galaxies, and the universe itself. In addition to their existence, these structures must have the right properties to support observers. Stable nuclei must populate an adequate fraction of the periodic table. Stars must be sufficiently hot and live for a long time. The galaxies must have gravitational potential wells that are deep enough to retain heavy elements produced by stars and not overly dense so that planets can remain in orbit. The universe itself must allow galaxies to form and live long enough for complexity to arise. And so on. The bulk of this review describes the constraints on the parameters of physics and astrophysics enforced by these requirements.

To summarize this discussion: In order to make a full assessment of the degree of fine-tuning of the universe, one must address the following components of the problem:

[] Specification of the relevant parameters of physics and astrophysics that can vary from universe to universe.

[] Determination of the allowed ranges of parameters that allow for the development of complexity and hence observers.

[] Identification of the underlying probability distributions from which the fundamental parameters are drawn, including the full possible range that the parameters can take.

[] Consideration of selection effects that allow the interpretation of observed properties in the context of the a priori probability distributions.

[] Synthesis of the preceding ingredients to determine the overall likelihood for universes to become habitable.

This treatment focuses primarily on the first two of these considerations. For both the Standard Model of Particle Physics and the current Consensus Model of Cosmology, we review the full set of parameters and identify those that have the most influence in determining the potential habitability of the universe. Most of the manuscript then reviews the constraints enforced on the allowed ranges of the relevant parameters by requiring that the universe can produce and maintain complex structures. Unfortunately, the underlying probability distributions are not known for either the fundamental parameters of physics or the cosmological parameters, so these distributions are considered only briefly. Selection effects, which depend on those same distributions, likewise cannot be properly addressed at this time.

### 1.4 Scope of this Review

The consideration of possible alternate universes, here with different incarnations of the laws of physics, is by definition a counterfactual enterprise. This review considers the ranges of physical parameters that allow such a universe to be viable. Since alternate universes are not observable, this endeavor necessarily lies near the boundary of science carrellis (); ellis2004 (). Nonetheless, this discussion is useful on several fronts: First, one can take the existence of the multiverse seriously, so that other universes are considered to actually exist, and the question of their possible habitability is relevant reesbefore (). Moreover, if multiverse theory becomes sufficiently developed, then one could in principle predict the probability for a universe to have a particular realization of the laws of physics, and hence estimate the probability of a universe becoming habitable. Second, anthropic arguments carr (); bartip () are currently being used as an explanation for why the universe has its observed version of the laws of physics. In order to understand both of these issues, the first step is to determine the ranges of parameters that allow a universe to develop structure and complexity. Finally, and perhaps most importantly, studying the degree of tuning necessary for the universe to operate provides us with a greater understanding of how it works.

In this review, the term multiverse refers to the ensemble of other possible universes represented schematically in Figure 1 — other regions of space-time that are far away and largely disconnected from our own universe. For completeness, we note that the Many Worlds Interpretation of quantum mechanics dewitt (); everett () describes physical reality as bifurcating into multiple copies of itself and this collection of possibilities is sometimes also called a multiverse deutsch (). Here we consider the multiverse only in the first, cosmological sense. The philosophy of quantum mechanics, and hence the second type of multiverse, is beyond the scope of this present treatment.

This review is organized as follows: We first consider the Standard Model of Particle Physics in Section 2. After discussing the full range of parameters, we focus on the subset of quantities that have the greatest influence in determining the properties of complex structures and then discuss constraints on those parameters resulting primarily from considerations of particle physics. Additional constraints resulting from astrophysical requirements are discussed in subsequent sections. The standard model of cosmology is presented in Section 3. The full range of cosmological parameters is reviewed, along with an assessment of the most important quantities for producing structure and some basic constraints on the cosmic inventory. The case of the cosmological constant (dark energy) is of particular interest and is considered separately in Section 4. The epoch of Big Bang Nucleosynthesis (BBN) is also considered separately in Section 5, which assesses how the abundances of the light elements change with varying values for the input cosmological parameters. Galaxy formation and galactic structure are considered in Section 6, which provides constraints on both fundamental and cosmological parameters due to required galactic properties. Section 7 considers the constraints due to the necessity of working stars, which are required to have stable nuclear burning configurations, sufficiently long lifetimes, and hot photospheres. This section also revisits the classic issues of the triple alpha resonance for carbon production, the effects of unstable deuterium, and the effects of stable diprotons. The required properties of planets are considered in Section 8, where the parameter constraints are found to be similar to — but less limiting than — those from stellar considerations. More exotic scenarios are introduced in Section 9, including alternate sources of energy generation such as dark matter annihilation and black hole radiation.
The paper concludes in Section 10 with a summary of the fine-tuning constraints and a discussion of their implications. A series of Appendices provides more in-depth discussion, and presents some ancillary issues, including a summary of astrophysical mass scales in terms of the fundamental constants (A), the number of space-time dimensions (B), molecular bio-chemistry (C), global bounds on the structure constants (D), a brief discussion of the underlying probability distributions for the tunable parameters (E), and the range of possible nuclei (F).

A note on notation: The particle physics literature generally uses natural units where $\hbar = c = 1$. Most of our discussion of particle physics topics follows this convention. On the other hand, most of the astrophysical literature uses cgs units, so the discussion of stars and planets includes the relevant factors of $\hbar$ and $c$.
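As a quick sketch of moving between the two conventions, the standard conversion factor $\hbar c \approx 197.3$ MeV fm turns an inverse energy in natural units into a length in cgs units:

```python
# Converting a natural-units length (1/E with hbar = c = 1) into centimeters.
HBAR_C_MEV_FM = 197.327      # hbar * c in MeV fm (standard conversion factor)
FM_TO_CM      = 1.0e-13      # 1 femtometer in centimeters

def natural_length_to_cm(energy_gev):
    """Length scale 1/E (natural units) expressed in cm."""
    energy_mev = energy_gev * 1.0e3
    return (HBAR_C_MEV_FM / energy_mev) * FM_TO_CM

# Reduced Compton wavelength of the proton (E = m_p c^2 ~ 0.938 GeV): ~2.1e-14 cm
proton_compton_cm = natural_length_to_cm(0.938)
```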

## 2 Particle Physics Parameters

A full assessment of the parameters of particle physics — along with an analysis of their degree of possible fine-tuning — is complicated by the current state of development of the field. On one hand, the Standard Model of Particle Physics provides a remarkably successful description of most experimental results to date. In addition to its myriad successes, the theory is elegant and well motivated. On the other hand, this theory is incomplete. We already know that extensions to the minimal version of the Standard Model are required to include neutrino oscillations, non-baryonic dark matter, dark energy, and quantum gravity. Additional extensions are likely to be necessary to account for cosmic inflation, or whatever alternate construction explains the relevant cosmological problems, along with baryon number violating processes that lead to the observed cosmic baryon asymmetry. Against this background, this section reviews the parameters of particle physics that are known to be relevant, along with the sensitivity of the universe to their possible variations. Allowed ranges of parameter space are discussed for the fine structure constant, light quark masses, the electron to proton mass ratio, and the strong coupling constant. We also briefly consider constraints arising from physics beyond the Standard Model, including charge quantization and nucleon decay.

### 2.1 The Standard Model of Particle Physics

Specification of the Standard Model itself requires a large number of parameters gaillard (); kanebook (). In the absence of neutrino masses, the Lagrangian of the minimal Standard Model contains 19 parameters hogan (), and the inclusion of neutrinos raises the number to 26 tegmark (). Fortunately, however, only a subset of these parameters appear to require critical values for the successful functioning of the universe. Here we first review the full set of parameters (see particlegroup () for current values) and then discuss the minimal subset necessary to consider variations in alternate universes.

In this treatment, we assume that the entire set of parameters can vary independently from universe to universe. Keep in mind, however, that if the current Standard Model of particle physics is the low-energy manifestation of a more fundamental theory, then the number of parameters could be smaller — or larger — than that considered here. Moreover, their variations could be correlated or even fully coupled. In any case, following previous treatments hogan (); tegmark (), the Standard Model parameters can be organized and enumerated as follows:

The masses of the six quarks and three leptons are specified by Yukawa coupling coefficients. Although the coefficients appear in the Standard Model Lagrangian, the masses appear in most phenomenological discussions of fine-tuning. In either case, this subset of parameters can be denoted as $\{y_j\}$. Here we denote the coupling coefficients as $y_j$, where the subscript refers to the type of particle. The corresponding particle masses are given by $m_j = y_j v/\sqrt{2}$, where $v$ is the Higgs vacuum expectation value.
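As a minimal numerical sketch, assuming the standard convention in which a fermion mass is $m = y\,v/\sqrt{2}$ with $v \approx 246$ GeV, the Yukawa couplings can be read off from approximate measured masses:

```python
import math

V_HIGGS = 246.22    # Higgs vacuum expectation value [GeV]

def yukawa_from_mass(mass_gev):
    """Invert m = y v / sqrt(2) for the dimensionless Yukawa coupling y."""
    return math.sqrt(2.0) * mass_gev / V_HIGGS

# Approximate measured masses [GeV]
y_electron = yukawa_from_mass(0.000511)   # ~3e-6
y_top      = yukawa_from_mass(172.8)      # ~1, i.e. of order unity

# The ~5.5 decades between these couplings is one face of the flavor hierarchy.
spread_decades = math.log10(y_top / y_electron)
```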

The Higgs mechanism allows for non-zero particle masses. The Higgs parameters can be taken to be the Higgs mass and vacuum expectation value, $(m_H, v)$, or, equivalently, the quadratic and quartic coefficients $(\mu^2, \lambda)$ of the Higgs potential $V = -\mu^2|\Phi|^2 + \lambda|\Phi|^4$. The two choices of parameters are related by $v^2 = \mu^2/\lambda$ and $m_H^2 = 2\lambda v^2$. One should keep in mind that more complicated versions of the Higgs potential are possible branco_higgs ().
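A short sketch of these tree-level relations, assuming the common convention $V = -\mu^2|\Phi|^2 + \lambda|\Phi|^4$ (so that $v^2 = \mu^2/\lambda$ and $m_H^2 = 2\lambda v^2$; sign and normalization conventions vary in the literature) together with approximate measured values:

```python
import math

m_higgs = 125.25    # Higgs boson mass [GeV], approximate measured value
v_higgs = 246.22    # vacuum expectation value [GeV]

# Tree-level inversion for the potential coefficients
lam = m_higgs**2 / (2.0 * v_higgs**2)    # quartic coupling, ~0.13
mu  = m_higgs / math.sqrt(2.0)           # quadratic mass parameter, ~88.6 GeV

# Consistency check: the vacuum expectation value is recovered from (mu, lam)
v_check = mu / math.sqrt(lam)
```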

The quark mixing matrix (generally called the CKM matrix cabibbo (); kobayashi ()) specifies the strength of flavor-changing interactions among the quarks. The matrix is determined by three angles and one phase, and thus requires the specification of four parameters, which can be written in the form $\{\theta_{12}, \theta_{13}, \theta_{23}, \delta\}$.

The remaining parameters include a phase angle $\theta_{\rm QCD}$ for the QCD vacuum and coupling constants $(g_1, g_2, g_3)$ for the gauge group $U(1) \times SU(2) \times SU(3)$. The latter three parameters are often specified by the strong and weak coupling constants (evaluated at a particular energy scale) and the Weinberg angle $\theta_W$, so that the remaining subset of parameters can be written in the form $\{\theta_{\rm QCD}, g_s, g_w, \theta_W\}$. The latter three parameters, in conjunction with the Higgs expectation value, define more familiar entities. The mass of the $W$ particles is given by $m_W = g_w v/2$ and the mass of the $Z$ particle is $m_Z = m_W/\cos\theta_W$. The electromagnetic coupling constant, evaluated at $m_Z$, is given by $e = g_w \sin\theta_W$. The corresponding electromagnetic interaction strength then becomes $\alpha = e^2/(\hbar c) \approx 1/128$; when scaled to zero energy one obtains $\alpha \approx 1/137$. The weak interaction strength can be written in the form $\alpha_w = g_w^2/(\hbar c)$. The strong interaction strength is given by $\alpha_s = g_s^2/(\hbar c)$. Finally, we have the Fermi constant $G_F = \sqrt{2}\,g_w^2/(8 m_W^2) = 1/(\sqrt{2}\,v^2)$ (in natural units).

The neutrino sector includes Yukawa coupling constants to specify the mass of each neutrino, three mixing angles for the neutrino matrix, and an additional phase. Neutrino physics can thus be characterized by seven parameters, which can be written in the form $\{g_{\nu 1}, g_{\nu 2}, g_{\nu 3}, \theta_{12}, \theta_{23}, \theta_{13}, \delta_\nu\}$.

Although gravity is not part of the Standard Model of particle physics, a full accounting of the forces of nature requires a specification of its strength. Most approaches either use the gravitational constant or, equivalently, the gravitational fine-structure constant

$$\alpha_G \equiv \frac{G m_p^2}{\hbar c} \approx 5.91 \times 10^{-39}\,. \qquad (1)$$

The incredibly small value of this dimensionless parameter is the source of many instances of Hierarchical Fine-Tuning.
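As a quick numerical illustration (a sketch using standard CODATA values, not code from the original text), the dimensionless combination in equation (1) can be evaluated directly:

```python
# Evaluate the gravitational fine-structure constant alpha_G = G m_p^2 / (hbar c)
# using approximate CODATA values (SI units).
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
m_p = 1.6726e-27     # proton mass [kg]
hbar = 1.0546e-34    # reduced Planck constant [J s]
c = 2.9979e8         # speed of light [m s^-1]

alpha_G = G * m_p**2 / (hbar * c)
print(f"alpha_G = {alpha_G:.2e}")   # ~ 5.9e-39
```

The smallness of this number relative to the other coupling constants is what the text refers to as hierarchical fine-tuning.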

In addition to the parameters of particle physics, a number of cosmological parameters are required to specify the properties of the universe (see Section 3). These properties include the inventory of baryons and dark matter in the universe feng (); jungman (), as well as the amplitude of the primordial spectrum of density fluctuations. With a more comprehensive theory, these abundances — or perhaps their distribution of allowed values — could in principle be calculated from the parameters of particle physics. In the absence of such an overarching theory, however, current approaches consider the particle physics parameters and the cosmological parameters as separate and allow them to vary independently (e.g., see the discussions in bartip (); hogan (); tegmark (), as well as references therein).

The successful operation of a universe does not depend on the specific values for all of the particle physics parameters found in the Standard Model. For example, the mass of the top quark plays little role in everyday life or in any astrophysical processes operating at the present epoch. As a result, when considering the possible fine-tuning of the universe, we can substantially reduce the set of 26 Standard Model parameters. Although not all existing treatments of this issue are identical (compare cahn (), hogan (), reessix (), tegmark (), and others), the following reduction of parameters is representative:

The Higgs parameters and the Yukawa coupling constants determine the masses of quarks and leptons. Since only the first generation survives to form astrophysical structures (including nuclei), the reduced set of parameters must include masses for the up quark, the down quark, and the electron. All of the neutrino sector can be ignored, provided that the neutrino masses are small enough to not be cosmologically interesting lesgourgues (); pogosian (); tegmarkneutrino (). In practice, this constraint requires

$$\sum_i m_{\nu_i} \lesssim 1~{\rm eV}\,. \qquad (2)$$

The four parameters of the CKM matrix determine how rapidly the heavier quarks decay into the lighter ones. As long as the decay mechanisms operate, so that we only need to consider the first generation of particles, the particular values of the mixing matrix need not be fine-tuned. The decay width for a heavy quark of mass $m_q$ can be written in the general form

$$\Gamma = C\,|V_{qq'}|^2\,G_F^2\,m_q^5\,, \qquad (3)$$

where $C$ is a dimensionless factor and $V_{qq'}$ is the matrix element corresponding to the decay $q \rightarrow q'$. The CKM matrix represents the inverse of a fine-tuning problem: unless the matrix elements were exactly zero, the heavier quarks would decay into lighter ones. We are also implicitly assuming that the masses of the heavy quarks are large compared to $m_u$ and $m_d$. With these reductions, the minimal set of parameters can be written in the form

$$\{m_u,\ m_d,\ m_e,\ \alpha,\ \alpha_s,\ \alpha_w,\ \alpha_G\}\,. \qquad (4)$$

The value of the gravitational coupling constant $\alpha_G$ is given by equation (1). The remaining coupling constants depend on energy. One common reference scale is the mass of the $Z$ particle, where current experimental measurements provide the values

$$\alpha(m_Z) \approx \frac{1}{128}\,, \qquad \alpha_s(m_Z) \approx 0.118\,, \qquad \sin^2\theta_W \approx 0.231\,. \qquad (5)$$

On the other hand, the coupling constants are sometimes given by their effective values at zero energy. In this limit, the fine structure constant approaches its usual value, $\alpha \approx 1/137$. For the strong and weak forces, particle interactions in the low energy limit can be described by potential energy functions of the forms

$$V_s(r) = -\frac{g_s^2}{r}\,\exp\left(-\frac{m_\pi c\,r}{\hbar}\right)\,, \qquad V_w(r) = \frac{g_w^2}{r}\,\exp\left(-\frac{m_W c\,r}{\hbar}\right)\,. \qquad (6)$$

In this treatment, the pion mass $m_\pi$ and the $W$ or $Z$ masses (represented here as a single value $m_W$) determine the effective range of the forces. The coefficient for the weak force is related to the Fermi constant according to $g_w^2 \sim G_F m_W^2$, so that the weak coupling constant in this limit is given by $\alpha_w = g_w^2/(\hbar c)$. Similarly, the constant $g_s$ is the effective charge of the nucleon-nucleon interaction, and the corresponding coupling constant is $\alpha_s = g_s^2/(\hbar c)$. As a result, the values for the coupling constants are sometimes quoted in the form $(\alpha_s, \alpha_w)$ evaluated in this zero-energy limit.

Another derived parameter that plays a role in many fine-tuning discussions is the ratio of the electron mass to the proton mass. This quantity,

$$\beta \equiv \frac{m_e}{m_p} \approx \frac{1}{1836}\,, \qquad (7)$$

is a function of the more fundamental parameters given in equation (4). Note that some authors define the mass ratio as the inverse of that given in equation (7), and the ratio is sometimes denoted by the symbol $\mu$.
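For concreteness, this composite parameter can be evaluated from the measured masses (a trivial numerical sketch, with masses in MeV assumed from standard tables):

```python
# Compute beta = m_e / m_p from measured rest masses (MeV).
m_e = 0.511      # electron mass [MeV]
m_p = 938.272    # proton mass [MeV]

beta = m_e / m_p
print(f"beta = {beta:.3e}")   # ~ 5.4e-4, i.e. ~ 1/1836
```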

### 2.2 Constraints on Light Quark Masses

A large body of previous work has placed constraints on the allowed range of particle masses, including quark masses barrkhan (); bedaque (); berengut (); damour (); donoghue (); hogan (); jaffe (), the Higgs mass donoghuetwo (), the proton mass page (), and the Standard Model in general hallnomura (); hallnomura2010 (). This section reviews and reconsiders the conventional arguments for the allowed range of light quark masses. Constraints are imposed by the requirements that protons and neutrons do not decay within nuclei, and that both free protons and hydrogen atoms are stable. Previous work often invokes the additional requirements that deuterium nuclei are bound, and that diprotons must remain unstable bartip (); hogan (); reessix (). However, recent studies of stellar evolution in other universes indicate that stars continue to operate with both stable diprotons barnes2015 () and unstable deuterium agdeuterium (), so that the corresponding constraints on quark masses should be removed (see Section 7). With this generalized treatment, the allowed region in parameter space for the light quark masses is larger and thus exhibits less evidence for fine-tuning.

As discussed above, the Standard Model does not specify the values of the quark masses (or, equivalently, the values of the coupling constants that determine the quark masses). Moreover, the distribution of possible quark masses is also unknown. As a starting point, only the masses of the lightest two quarks (up and down) are allowed to vary in the discussion below.

Although we do not need to specify the possible distribution of quark masses to determine the range of possible values, it is useful to plot the allowed parameter space in logarithmic units. If the allowed quark masses were distributed in a log-random manner, then the allowed areas in such diagrams would reflect the probability of successful realizations of the parameters. The only direct input we have on this issue is the experimentally determined masses for the six known quarks. The distribution of these masses is shown in Figure 4, which indicates that the logarithmic quantities are relatively evenly spaced. This apparent trend holds up under more rigorous statistical tests donoghuemass (). As a result, as stated in jaffe (), “there is reason to assume that the logarithms of the quark masses are smoothly distributed over a range of masses small compared to the Planck scale” (see their Figure 2). The measured lepton masses are also distributed in a manner that is consistent with log-uniform donoghuemass (), but definitive conclusions are difficult with only three values. Notice also that the masses of all the quarks and leptons are small compared to the Planck scale (by 17 to 22 orders of magnitude), so that some degree of hierarchical fine-tuning is present.
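The rough logarithmic spacing and the mass hierarchy quoted above can be checked directly (an illustrative sketch; the PDG central values used here, in MeV, are assumptions of this example rather than values taken from the text):

```python
# Check (i) that the logarithms of the six quark masses are spread out in
# comparable steps, and (ii) that all fermion masses lie 17-22 orders of
# magnitude below the Planck scale.
import math

quark_masses = {"u": 2.2, "d": 4.7, "s": 95.0, "c": 1275.0, "b": 4180.0, "t": 173000.0}
logs = [math.log10(m) for m in quark_masses.values()]
gaps = [b - a for a, b in zip(logs, logs[1:])]
print(gaps)   # successive logarithmic spacings, all of order unity

M_planck_MeV = 1.22e22                           # Planck mass in MeV
orders_below = math.log10(M_planck_MeV / 0.511)  # electron vs Planck scale
print(orders_below)                              # ~ 22 orders of magnitude
```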

#### 2.2.1 Stability of Quarks within Hadrons

If the mass difference between up quarks and down quarks is too large, then the heavier quark can decay into the lighter one within a hadron (such as a proton or neutron). In order to prevent such decays, and allow for long-lived particles of interest, there exists an upper limit to the mass difference between the quarks, as outlined below hogan (); barrkhan ().

If the down quark is sufficiently heavy, down quarks can beta decay into up quarks inside of hadrons, so that protons and neutrons could not exist. In this limit, the only stable baryons would consist entirely of up quarks, so the universe would be composed of $\Delta^{++}$ particles ($uuu$). This condition places an upper limit on the down quark mass, which can be written in the form

$$m_d - m_u < m_e + \Delta E_A\,, \qquad (8)$$

where $\Delta E_A$ is the energy required to produce an anti-symmetric state of three quarks. In our universe, we have $\Delta E_A \approx 300$ MeV.

In the opposite limit where the mass of the up quark is much larger than that of the down quark, the opposite decay $u \rightarrow d + e^+ + \nu_e$ can happen, leaving a universe composed of $\Delta^-$ particles ($ddd$). This condition thus places an analogous upper limit on the up quark mass,

$$m_u - m_d < m_e + \Delta E_A\,. \qquad (9)$$

#### 2.2.2 Stability of Protons and Neutrons within Nuclei

Another constraint arises by requiring that both protons and neutrons are stable within atomic nuclei. If the mass differences between the up and down quark are too large, then beta decay can take place within nuclei.

First consider the usual beta decay of a neutron inside a nucleus:

$$n \rightarrow p + e^- + \bar{\nu}_e\,. \qquad (10)$$

The requirement that this decay is not allowed on energetic grounds can be written in the form

$$m_d - m_u < m_e + \delta_{EM} + \Delta E_B\,, \qquad (11)$$

where $\delta_{EM}$ is the contribution to the mass difference between the proton and neutron due to the electromagnetic force, and where $\Delta E_B$ is the binding energy of the proton within the nucleus. In our universe $\delta_{EM} \approx 1$ MeV. The binding energy varies with the nuclear species in question, but has a typical value of $\Delta E_B \approx 8$ MeV.

Similarly, protons should also be stable within atomic nuclei, so that the process

$$p \rightarrow n + e^+ + \nu_e \qquad (12)$$

should be suppressed. This requirement, in turn, places an upper limit on the mass of the up quark,

$$m_u - m_d < m_e + \Delta E_B - \delta_{EM}\,. \qquad (13)$$

#### 2.2.3 Stability of Free Protons and Hydrogen

In order for hydrogen to exist, protons cannot spontaneously decay into neutrons via $p \rightarrow n + e^+ + \nu_e$. Preventing this reaction from occurring implies the limit

$$m_d - m_u > \delta_{EM} - m_e\,. \qquad (14)$$

We get a similar but slightly stronger constraint by requiring that hydrogen atoms cannot convert themselves into neutrons through the reaction $p + e^- \rightarrow n + \nu_e$. This requirement implies the constraint

$$m_d - m_u > \delta_{EM} + m_e\,. \qquad (15)$$
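The margin by which our universe satisfies this hydrogen-stability requirement can be estimated numerically (a rough sketch; the input values are assumptions of this example: $m_d - m_u \approx 2.5$ MeV from lattice QCD and $\delta_{EM} \approx 1.0$ MeV):

```python
# Margin of the hydrogen stability condition m_d - m_u > delta_EM + m_e,
# using assumed representative values in MeV.
dm_quark = 2.5      # m_d - m_u [MeV], assumed lattice-QCD value
delta_EM = 1.0      # electromagnetic contribution [MeV], assumed
m_e = 0.511         # electron mass [MeV]

margin = dm_quark - (delta_EM + m_e)
print(f"margin = {margin:.2f} MeV")   # ~ 1 MeV
```

The ~1 MeV margin illustrates how close our universe sits to the limit where hydrogen atoms would convert into neutrons.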

#### 2.2.4 Unbound Deuterium and Bound Diprotons

Many previous treatments consider a universe to be uninhabitable if deuterium becomes unbound or if the diproton becomes bound. Although these constraints are not necessary for a universe to be habitable, it is nonetheless instructive to consider the conditions required for unbound deuterium or bound diprotons.

The customary argument for the first case is that deuterium is a necessary stepping stone for nuclear reactions. The universe starts with only protons and neutrons, although the latter decay through the weak interaction. The reaction $p + p \rightarrow d + e^+ + \nu_e$ is the first step of the reaction chain in the Sun, whereas $p + n \rightarrow d + \gamma$ is the first step in BBN. If deuterium is unstable, then – the argument goes – no complex nuclei can be made. However, recent work shows that stars can continue to make complex nuclei even if deuterium is unstable (see Section 7 and Refs. agdeuterium (); barnes2017 ()). Nonetheless, it is instructive to review the constraints that would be met if deuterium is required to be stable (see beane () for a more detailed treatment). The condition for the stability of deuterium to beta decay is essentially the same as equation (13) for the case where the binding energy is that of deuterium in our universe, so that $\Delta E_B = B_d \approx 2.2$ MeV and the constraint becomes

$$m_u - m_d < m_e + B_d - \delta_{EM}\,. \qquad (16)$$

A weaker but more convincing constraint arises from the requirement that deuterium nuclei are stable to decay through the strong interaction, where $d \rightarrow p + n$. This constraint requires that the binding energy of deuterium is positive. One model barrkhan () writes the modified binding energy in the form

$$B_d = B_0 - q\left[(m_u + m_d) - (m_u + m_d)_0\right]\,, \qquad (17)$$

where $B_0 \approx 2.2$ MeV is the binding energy for deuterium in our universe and the subscript 0 denotes values in our universe. The dimensionless parameter $q$ is not well-determined. The constraint $B_d > 0$ thus has the form

$$(m_u + m_d) < (m_u + m_d)_0 + \frac{B_0}{q}\,, \qquad (18)$$

and requires the sum of the quark masses to be less than about twice their measured values.

Going in the other direction, if the quark masses are lighter, then the pion mass is smaller, and the strong force has a greater range. For sufficiently small quark masses, diprotons are stable, so one obtains the constraint

$$(m_u + m_d) > \xi\,(m_u + m_d)_0\,, \qquad \xi < 1\,. \qquad (19)$$

The value of the coefficient appearing on the right hand side of this inequality varies with the author (compare barrkhan () and hogan ()). As discussed below (Section 7), stable diprotons are not problematic for habitability, so the constraint of equation (19) is not required to be enforced.

#### 2.2.5 Constraints on Quark Masses

The treatment thus far allows for a three dimensional parameter space $(m_u, m_d, m_e)$. However, symmetry considerations barrkhan () suggest that the electron mass could be a fixed fraction of the mass of the down quark, so that the ratio $m_e/m_d$ is constant under variations of the quark masses. Under this assumption we can evaluate the above constraints. Using the value $m_e/m_d \approx 0.1$ appropriate for our universe, the resulting parameter space is shown in Figure 5. The black dot marks the location of our universe in the diagram. (Keep in mind that other choices for the ratio are possible, and would lead to corresponding changes in the diagram.)

In the figure, the blue curves delimit the region for which quarks cannot decay within hadrons, where protons and neutrons are of primary interest (Section 2.2.1). The allowed region falls between the two curves. These constraints are not as confining as the others under consideration here due to the large value of the energy required to produce a bound state of three quarks. The green curves in the figure show the region for which nuclei are stable (Section 2.2.2). The allowed region again falls between the two curves. In the region above the upper curve, neutrons are unstable within nuclei, whereas in the region below the lower curve, protons are unstable. The most stringent constraints result from the requirement that protons cannot decay (Section 2.2.3). In the region below the lower red curve, free protons can decay into neutrons and positrons. In the region below the upper red curve, protons in hydrogen atoms can combine with the bound electron to form neutrons. This latter constraint is the most confining. Significantly, our universe lies close to this limit. If the down quark (and hence the neutron) were lighter by $\sim 1$ MeV, hydrogen atoms would decay via this channel.

Note that the two most important constraints are the upper limit on the down quark mass necessary to keep neutrons from decaying within nuclei (equation [11]) and the lower limit necessary to keep atomic hydrogen from combining into a neutron (equation [15]). We can thus write a combined constraint on the down quark mass

$$\delta_{EM} + m_e < m_d - m_u < \delta_{EM} + m_e + \Delta E_B\,. \qquad (20)$$

In the limit of small mass for the up quark (left side of Figure 5), the allowed range for the down quark mass can be written in the form

$$\delta_{EM} + m_e < m_d < \delta_{EM} + m_e + \Delta E_B\,. \qquad (21)$$

In the opposite limit of large up quark mass, we obtain

$$m_d \approx m_u + \delta_{EM} + m_e\,. \qquad (22)$$

These asymptotic forms show that in the limit of small $m_u$, the mass of the down quark can vary by a factor $\sim (\delta_{EM} + m_e + \Delta E_B)/(\delta_{EM} + m_e) \approx 6$. In the limit of large $m_u$, the allowed range of values narrows to (essentially) a line in the plane of parameters. Significantly, the up quark mass can vary (to lower values) by several orders of magnitude while the down quark mass has a range of order $\Delta E_B \approx 8$ MeV. The allowed parameter space is not overly restrictive.

Notice also that the two most restrictive bounds (from equations [20–22]) provide a bound on the composite parameter $m_d - m_u$. In this treatment, we specified the electron mass to be a fixed ratio of the down quark mass, where $m_e/m_d \approx 0.1$. For other choices of the ratio $m_e/m_d$, the ranges of allowed quark masses are similar, with the allowed region in the plane of Figure 5 moving up or down accordingly.

#### 2.2.6 Summary of Quark Constraints

The allowed ranges for the light quark masses, shown in Figure 5, are significantly larger than reported in some earlier assessments. The region of allowed quark masses spans a factor of $\sim 6$ for the down quark over a range of several orders of magnitude for the up quark. One reason for this expanded range, compared with previous treatments, is that this work removes the unnecessary restrictions that deuterium must be stable and that diprotons must be unstable. Although these two constraints would reduce the allowed range of parameter space hogan (); barrkhan (), recent work shows that stars – and hence universes – can operate with either stable diprotons barnes2015 () or unstable deuterium agdeuterium () (see also barnes2017 ()).

Although stars can operate with unstable deuterium, which requires the sum of the light quark masses to increase by a factor of $\sim 2$ (e.g., see Figure 11 of epelbaum2003 ()), the sum of the quark masses cannot be made arbitrarily large. The quark masses determine the pion mass, which in turn sets the range of the strong force. If the quarks become too heavy, then the range of the strong force could become short enough to render all nuclei unstable. Although the required increase in quark masses has not been unambiguously determined, Figure 5 shows the contours where the sum of the light quark masses increases by factors of 2 and 4 (given by the lower and upper black curves in the diagram). This additional constraint cuts off only the tail of parameter space at large quark masses, and leaves most of the range viable.

The discussion thus far has considered only the two lightest quarks. For the case of three light quarks, the range of viable universes is even greater jaffe (). The band of congeniality found in that work is about 29 MeV wide in terms of the mass difference between the lightest two quarks (see jaffe () for further discussion; see also ali2013 () for a less optimistic viewpoint). Note that an even wider range of possible universes may be viable if one considers more light quark masses, but such models have not been worked out.

Finally, notice that most of the constraints summarized in Figure 5 result from some type of beta decay, where neutrons and protons are transformed into each other. Universes can remain viable in the absence of the weak force weakless (), and such universes would not be subject to beta decay. As a result, for scenarios that are somewhat removed from our expectations, these constraints on the light quark masses could be significantly weaker (see also grohsweakless ()).

#### 2.2.7 Mass Difference between the Neutron and Proton

Recent work has provided an ab initio calculation of the mass difference between the neutron and proton using lattice QCD and QED calculations borsanyi (). Historically, the calculation of $\Delta m = m_n - m_p$ has been notoriously difficult. Even this state-of-the-art treatment provides a mass splitting estimate of 1.5 MeV, which is somewhat larger than the measured value of $\Delta m$ = 1.29 MeV. In addition to the successful calculation of this quantity, these results provide estimates for the separate contributions to the mass difference from QCD effects (setting $\alpha = 0$) and electromagnetic effects (setting $m_u = m_d$). The result is that $\Delta m_{QCD} \approx 2.5$ MeV and $\Delta m_{QED} \approx -1.0$ MeV.

One can use the results outlined above to determine how the mass difference between the neutron and proton depends on the mass difference between their constituent quarks (i.e., $\Delta m_q$ = $m_d - m_u$) and the electromagnetic coupling ($\alpha$). The result is shown in Figure 6 (analogous to Figure 3 of Ref. borsanyi ()). The figure shows the contours of constant neutron-proton mass difference in the plane of parameters. For the value of $\alpha$ in our universe, the quark mass difference can only vary downward by a factor of $\sim 2$. Greater changes lead to inverse beta decay. Larger values of the quark mass differences are unconstrained in this diagram, although other considerations come into play (see Figure 5). If the quark masses are held constant, then the fine structure constant can only become larger by a factor of $\sim 2$, but has no lower limit in this diagram.

#### 2.2.8 Constraints on the Higgs Parameters

Instead of variations in the masses of the light quarks (and/or the electron), one can also consider possible changes in the parameters of the Higgs potential, e.g., the expectation value $v$. As the value of $v$ increases, the mass difference between neutrons and protons increases, so that neutrons are more likely to decay within nuclei. Larger values of $v$ also increase the pion mass, which decreases the effective range of the strong force. Both of these effects lead to nuclei that are more unstable. The maximum allowed increase in the expectation value is estimated to be $v \lesssim 5\,v_0$ agrawalprl (); agrawal (), where the subscript corresponds to the value in our universe. For larger $v$, the mass difference $m_n - m_p$ is larger than the typical binding energy of a nucleon within an atomic nucleus. As a result, neutrons can decay into protons within bound nuclei, thereby leaving hydrogen as the only truly stable nucleus. Somewhat tighter bounds are derived in Ref. damour () based on considerations of nuclear stability.

Although the range of the vacuum expectation value is not overly restrictive, the observed value $v \approx 246$ GeV and the maximum allowed value $\sim 10^3$ GeV remain small compared to the Planck scale (at $\sim 10^{19}$ GeV) and/or the GUT scale (at $\sim 10^{16}$ GeV). A problematic issue arises: In simple grand unified models, the Standard Model parameter $\mu^2$, which determines $v$, has a naturalness problem. The quantum corrections are expected to be of order the GUT or Planck scale, so that the relevant terms must cancel to high accuracy in order to produce the observed value (see the discussion of agrawal () and references therein). Such models are fine-tuned in the sense that small changes in the other model parameters would presumably alter this precise cancellation and lead to typical values of $\mu^2$ and $v$ that are much larger than those observed in our universe.
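The severity of the cancellation can be quantified with a back-of-envelope ratio (a sketch using standard scale values; the choice of $10^{16}$ GeV as the representative GUT scale is an assumption of this example):

```python
# Degree of cancellation required if quantum corrections to mu^2 are of order
# the Planck or GUT scale: the ratio (v / M)^2 measures the tuning.
v = 246.0          # Higgs vacuum expectation value [GeV]
M_pl = 1.22e19     # Planck scale [GeV]
M_gut = 1.0e16     # representative GUT scale [GeV], assumed

print(f"(v/M_Pl)^2  = {(v / M_pl)**2:.1e}")    # ~ 4e-34
print(f"(v/M_GUT)^2 = {(v / M_gut)**2:.1e}")   # ~ 6e-28
```

Cancellations at the level of one part in $10^{28}$ to $10^{34}$ are what make this naturalness problem so striking.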

Additional constraints on the Higgs parameters arise from stability considerations. Sufficiently large changes to the Higgs potential could result in vacuum instability chigusa (); coleman (); sher1989 (), which would have important consequences for the habitability of the universe. For example, the Higgs potential generally has more than one minimum. If the Higgs field resides in a higher energy minimum (a false vacuum state), then the field can tunnel into a lower (true) vacuum state sometime in the future. In order for the universe to remain viable, however, the vacuum must be either stable or sufficiently long-lived (if the false vacuum is metastable). The quantum tunneling rate depends on the shape of the Higgs potential, which in turn depends on the input parameters. As one example, for the case where the Higgs potential has a quartic form, the highest order term must be positive to ensure vacuum stability. The coefficient $\lambda$ in the classical potential depends on the Higgs mass, but quantum corrections can modify its value and even render $\lambda < 0$ alekhin (); branchina (); buttazzo (). These corrections depend on the Yukawa couplings, where that of the top quark makes the largest contribution. As a result, the shape of the Higgs potential and the fate of the cosmic vacuum state depend on the Higgs mass $m_H \approx 125$ GeV and the top quark mass $m_t \approx 173$ GeV. The resulting constraints are determined by the form of the Higgs potential, which is not fully specified (and could have alternate forms in other universes). Recent work alekhin (); branchina (); buttazzo () indicates that vacuum stability requires the ratio $m_H/m_t$ to be sufficiently large, where the measured values in our universe are close to the limit.

### 2.3 Constraints on the $\alpha$ - $\beta$ Plane

The Standard Model of Particle Physics describes interactions at the fundamental level of quarks and leptons. At lower energies, however, the basic properties of atoms and molecules, and hence chemistry, are determined by the values of the fine structure constant $\alpha$ and the mass ratio $\beta = m_e/m_p$. Since the neutron and proton have similar masses, the neutron does not introduce a third parameter in this context. In this section, we review basic constraints on the constants $\alpha$ and $\beta$, and find the allowed region in the $(\alpha, \beta)$ plane of parameter space.

Many authors (e.g., bartip (); tegmarktoe ()) have argued that both $\alpha \ll 1$ and $\beta \ll 1$ in order for chemistry to operate (in a manner roughly similar to chemistry in our universe). Several arguments imply that $\alpha$ must be small. Since the kinetic energy of electrons in atoms scales as $\alpha^2 m_e c^2$, the constant $\alpha$ must be smaller than unity in order for electrons to remain non-relativistic. In addition, as discussed in Section 7 (see also adams ()), the fine structure constant must be smaller than unity in order for stars to function as nuclear burning objects. If the stars are required to have sufficiently high surface temperatures, the constraint on $\alpha$ is somewhat tighter adamsnew (). Of course, if $\alpha$ becomes too large relative to the strong nuclear force, then large nuclei would cease to exist (F). Finally, for completeness, we note that the fine structure constant must be less than unity in order for bulk matter to remain stable liebyau (); liebyaualt (). All of these considerations restrict $\alpha$. For purposes of this discussion, we thus adopt the particular bound $\alpha < 1$.

Small values of the mass ratio $\beta$ are required for the existence of stable ordered structures, such as a solid or a living cell tegmarktoe (). For the structure to be well ordered, the fluctuation amplitude of a nucleus must be much smaller than the distance between the atoms. This constraint requires that $\beta \ll 1$. Following tegmarktoe (), we enforce the constraint so that $\beta < 1$. For completeness, note that the localization argument only requires a large disparity between the two masses, so that one could in principle have the electron heavier than the proton. As a result, a second window of allowed parameter space opens up for large mass ratios $\beta \gg 1$.

The constants $(\alpha, \beta)$ also appear in the equations of stellar structure chandra (); clayton (); hansen (); kippenhahn (); phil () and are thus constrained by stellar considerations. Although stellar masses in our universe can vary by a factor of $\sim 1000$, if $\alpha$ is too large, or $\beta$ is too small, then the minimum mass of a star would exceed the maximum stellar mass, thereby preventing the existence of working stellar entities. The minimum and maximum stellar masses are given in A. Combining equations (231) and (232), this constraint can be written in the form

$$\beta \gtrsim C_1\,\alpha^2\,, \qquad (23)$$

where $C_1$ is a dimensionless constant determined by the equations of stellar structure.
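The stellar mass scale underlying these bounds can be sanity-checked numerically. The sketch below uses the standard order-of-magnitude result $M_* \sim \alpha_G^{-3/2}\,m_p$ (the omission of order-unity coefficients is an assumption of this example, not a result from the text):

```python
# Characteristic stellar mass scale M_* ~ alpha_G^(-3/2) m_p, which brackets
# the observed range of stellar masses (~0.08 to ~100 solar masses).
alpha_G = 5.91e-39    # gravitational fine-structure constant
m_p = 1.6726e-27      # proton mass [kg]
M_sun = 1.989e30      # solar mass [kg]

M_star = alpha_G**-1.5 * m_p
print(M_star / M_sun)   # ~ 2 solar masses
```

That this combination of fundamental constants lands squarely within the observed stellar mass range is the basic reason the constraint above involves only $\alpha$, $\beta$, and $\alpha_G$.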

Stable nuclear burning stars can fail to exist for another reason: If the fine structure constant is too small, then the electrical barrier for quantum mechanical tunneling becomes too small and stars would burn all of their nuclear fuel at once adams (). The constraint required to avoid this circumstance can be written in the form

$$\alpha^3\,\beta \gtrsim C_2\,, \qquad (24)$$

where the numerical constant can be evaluated from the equations of stellar structure (see the Appendix of Ref. adamsnew ()).

The constants $(\alpha, \beta)$ determine, in part, how the gas in a forming galaxy can dissipate energy, cool, and collapse. This requirement places a limit/estimate for the mass scale of galaxies reesost (); tegmarkrees (), as outlined in A. Since the mass of the galaxy must be larger than the minimum mass of a star, we can combine equations (231) and (239) to derive a constraint of the form

$$\alpha^{7/2}\,\beta^{1/4} \gtrsim C_3\,\alpha_G^{1/2}\,. \qquad (25)$$

Note that this bound depends on the gravitational fine structure constant $\alpha_G$ in addition to $(\alpha, \beta)$. For the sake of definiteness in the following analysis, we fix $\alpha_G$ to be its value in our universe.

Figure 7 shows the allowed parameter space in the $\alpha$ - $\beta$ plane subject to the constraints outlined above. The requirement that both $\alpha < 1$ and $\beta < 1$ limits the parameter space to the lower left quadrant of the figure, as delimited by the cyan and blue lines. The requirement that the minimum stellar mass is less than the maximum stellar mass requires the parameters to lie above the green curve with positive slope. In order for stars to exist as long-lived, stable, nuclear-burning entities, the mass ratio must lie above the green curve with negative slope. For completeness, note that the minimum point of the two green curves would be slightly rounded off if one uses results from a full stellar structure calculation. The requirement that galaxies are larger than the minimum stellar mass requires the parameters to lie above the red curve. This latter curve is so steep that it enforces an effective lower bound on $\alpha$, although the nuclear burning constraint is more restrictive for small values of $\beta$. For completeness, the figure also shows the limit where the galactic mass scale is equal to the typical stellar mass scale (marked by the purple curve).

In Figure 7, the location of our universe is marked by the star symbol. The allowed region of parameter space surrounding that point has a nearly triangular shape, where the base (range of $\alpha$) and altitude (range of $\beta$) span about 4 orders of magnitude. Notice that Figure 7 includes a second allowed region of parameter space in the upper central part of the diagram. This regime corresponds to the case where the electron is much heavier than the proton ($\beta \gg 1$). Universes with parameters in this region are likely to be quite different from our own, but the constraints enforced here do not rule them out as viable.

The constraints depicted in Figure 7 are based on the existence of known structures, including galaxies, stars, and atoms. However, another type of constraint can be placed on the fine structure constant based on purely theoretical considerations. The three gauge coupling constants of the Standard Model are energy dependent. If one enforces the requirement of Grand Unification — that the three constants have the same value at some large energy scale — then the value of $\alpha$ measured at low energy is highly constrained (see the recent review of donoghuethree ()). These limits also assume that proton decay occurs at the GUT scale, mediated by a new heavy gauge boson, but that protons are stable on stellar timescales. The constraints on the fine structure constant obtained through this argument are more restrictive than those presented in Figure 7 and are centered around the observed value. Previous estimates for the allowed range include those of ellisnano () and bartip (). At the present time, however, no experimental evidence exists for Grand Unified theories donoghuethree () and the Standard Model in its current form does not allow for unification (one needs to invoke new physics such as supersymmetry). As a result, the status of these tighter bounds on $\alpha$ remains undetermined.

### 2.4 Constraints on the Strong Coupling Constant

This section considers limits on the magnitude of the strong coupling constant $\alpha_s$. One well-known constraint arises from the requirement that nuclei are stable against fission. This constraint is generally derived by using the Semi-Empirical Mass Formula as a model for atomic nuclei semf () and then requiring that the binding energy of a nucleus is larger than the binding energy of two separated nuclei with half the particles bartip (); tegmarktoe (). This consideration results in a limit on the strong force coupling constant as a function of the fine structure (electromagnetic) constant such that

$$\frac{\alpha_s}{\alpha_{s0}} \gtrsim C_F\,\frac{\alpha}{\alpha_0}\,, \qquad (26)$$

where the subscripts denote the values in our universe. The numerical coefficient depends on the largest nucleus that is required to have a bound state, where equation (26) uses the value corresponding to carbon-12.

Additional constraints on the strong coupling constant arise from the required ordering of atomic size and energy scales. In order for bulk matter to have its observed form in our universe, the size scale of atoms, given by the extent of electronic orbits, must be larger than atomic nuclei. Electron orbits have radii $r \sim a_0 = \hbar/(\alpha m_e c)$, whereas nuclei have radii given approximately by $r_N \sim \hbar/(m_\pi c)$. The ordering of size scales thus implies the constraint

\frac{\alpha}{\alpha_s} < \frac{m_p}{m_e} \approx 1836 \, , \qquad (27)

where we have used m_p/m_e ≈ 1836. Similarly, the energy scales for chemical reactions are much lower than those of nuclear reactions. If the opposite were true, then chemical reactions, which provide the basis for life, would instigate nuclear reactions and thereby change the elements that make up life forms during the course of biological processes. The required ordering of energy scales leads to the analogous constraint

\left( \frac{\alpha}{\alpha_s} \right)^2 < \frac{m_p}{m_e} \, . \qquad (28)

If equations (27) and (28) did not hold, it is possible that a universe could remain habitable, but it would be much different from our own. However, the previous section shows that α < 1 in viable universes, so that these constraints are less restrictive than that of equation (26).

Going in the other direction, the strong force cannot be too much greater than that realized in our universe without changing the manner in which nuclear processes occur. The most tightly bound nucleus is iron-56, which has a binding energy per nucleon of about 8.8 MeV. If α_s is too large, however, then the binding energy per nucleon would become larger than the nucleon mass, and the energy levels of the nucleus would become relativistic. Although nuclear reactions can still take place under relativistic conditions, the way in which they occur in stars (and BBN) would be vastly different than in our universe. This consideration thus places an upper limit on the strength of the strong force. Here we invoke this constraint in the conservative form

\frac{\alpha_s}{\alpha_{s0}} \lesssim 10 \, . \qquad (29)

The constraints on the strong coupling constant outlined above are depicted in Figure 8, which shows the allowed region in the plane of parameters (α, α_s). The range of the fine structure constant is limited by the same constraints used in the previous section. First we require that α does not exceed its upper bound (see Section 2.3), so that the allowed region falls to the left of the blue line in the diagram. On the other hand, α must be large enough that stellar structure solutions exist adams (); adamsnew (). Working stars thus limit the allowed region to the right of the red line. Next we require that the binding energy per particle is small enough that the constituent particles in nuclei remain non-relativistic, so that the allowed region falls below the cyan line. Finally, the strong force must effectively compete with the electromagnetic force to prevent nuclear fission (see equation [26]), as marked by the green curve. The resulting region of parameter space spans several orders of magnitude in both α and α_s.

For completeness, we note that many authors invoke tighter limits on the strong coupling constant through considerations of nuclei with mass number A = 2 (see dyson1971 (); bartip (); reessix (); tegmarktoe () and many others; see also Section 2.2.4). If the strong force were somewhat stronger, diprotons would be bound, and the cross sections for nuclear reactions in stars would be enhanced by an enormous factor. In spite of many claims of disaster, this enhancement would lead to only a modest decrease in the operating temperatures of stellar cores and a modest decrease in stellar lifetimes (see Section 7 and barnes2015 ()). If the strong force were somewhat weaker, then deuterium would no longer have a bound state, and the usual pathways for nucleosynthesis would be altered. Nonetheless, this scenario also allows stars to provide both nuclear processing and long-lived supplies of energy (see Section 7 and agdeuterium (); barnes2017 ()).

Estimates for the changes to the strong coupling constant required to make diprotons bound or deuterons unstable depend on the model of the nucleus. In the square well approximation for the nuclear potential, 6% increases in α_s lead to stable diprotons whereas 4% decreases lead to unbound deuterium davies1972 (). For nuclear potentials of Yukawa form hulthen (); pochet (), the required increase (decrease) in α_s becomes 17% (6%). Other authors find similar requirements agdeuterium (); reessix (). Bound states of the nuclei can also be altered with corresponding changes in quark masses, which result in different ranges for the strong force. If the sum of the light quark masses is decreased by 25%, then diprotons become bound, whereas mass increases of 40% lead to unbound deuterium barrkhan () (see also Section 2.2).
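
The square-well picture can be illustrated with a toy calculation. The sketch below uses an assumed well depth of 35 MeV and range of 2.1 fm (fiducial values, not the parameters used by davies1972 ()) to solve the s-wave quantization condition for the deuteron and to compare the well depth against the critical depth at which the bound state disappears. It shows how weakly bound the deuteron is relative to its well depth; the precise percentage sensitivity to coupling changes depends on how the coupling maps onto the potential parameters, as the model dependence discussed above indicates.

```python
import math

# Toy square-well model of the deuteron (assumed parameters:
# depth ~35 MeV, range ~2.1 fm), illustrating proximity to threshold.
HBARC = 197.327          # MeV fm
MU = 938.9 / 2.0         # reduced mass of the n-p system, MeV/c^2
V0, R = 35.0, 2.1        # assumed well depth (MeV) and range (fm)

def f(B):
    """s-wave quantization condition k*cot(kR) + kappa = 0 at binding B."""
    k = math.sqrt(2 * MU * (V0 - B)) / HBARC
    kappa = math.sqrt(2 * MU * B) / HBARC
    return k / math.tan(k * R) + kappa

# bisection for the binding energy B (MeV)
lo, hi = 0.01, V0 - 0.01
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
B = 0.5 * (lo + hi)

# critical depth for a zero-energy state: sqrt(2*MU*Vc)*R/hbarc = pi/2
V_CRIT = (math.pi * HBARC / (2 * R))**2 / (2 * MU)
print(f"binding energy ~ {B:.2f} MeV (observed: 2.22 MeV)")
print(f"critical well depth ~ {V_CRIT:.1f} MeV (vs assumed {V0} MeV)")
```

With these fiducial parameters the binding energy comes out near the observed 2.2 MeV, an order of magnitude below the well depth.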

### 2.5 Additional Considerations

For completeness, this section considers additional constraints on the parameters of particle physics. Specifically, the issue of charge quantization is discussed in Section 2.5.1. We then present a constraint on the energy scale of Grand Unified Theories from the requirement that nucleons have sufficiently long lifetimes (Section 2.5.2).

#### 2.5.1 Charge Quantization

Our understanding of the laws of physics remains incomplete. One important unresolved issue that could affect the habitability of the universe is the specification of electromagnetic charges on the fundamental particles. In our universe, charge is observed to be quantized. All free particles (notably protons, neutrons, and electrons) have charges that are integer multiples of the electron charge (q = n e for some integer n). More generally, the charges for all Standard Model particles are integer multiples of the magnitude of the down quark charge, e/3.

Charge quantization is important for the operation of the universe, as it allows for the existence of atoms that are electrically neutral. In turn, neutral atoms allow for the construction of working stars and other bulk matter. However, in conventional quantum electrodynamics — including the Standard Model — electric charges are not specified by fundamental considerations, but rather are input parameters. On the other hand, as outlined below, both Grand Unified Theories and the removal of anomalies imply constraints on the charges of fundamental particles and can thus provide mechanisms for charge quantization.

Many types of Grand Unified Theories have been put forward particlegroup (). One general feature of unification models is that the quarks and leptons are incorporated into a larger symmetry group, so that their properties are related due to constraints on the theory. As one example, in the case of SU(5) unification, the electric charge operator is the sum of diagonal SU(2) and U(1) generators (e.g., see kanebook () for a textbook treatment). Since the generators must be traceless, the sum of the eigenvalues of electric charge must vanish, which leads to a charge quantization condition of the form

3 Q_{\bar d} + Q_{e^-} + Q_\nu = 0 \, . \qquad (30)

The neutrino has no charge in our universe, so that Q_{\bar d} = −Q_{e^-}/3 = e/3. This model not only implies charge quantization, but also provides the fractional charges measured for quarks. More generally, charge quantization must arise in any unified theory georgiguts (). Although charge quantization is a highly desirable feature, Grand Unified Theories have not been experimentally verified, so it is not known if they provide the explanation for charge quantization in our universe. However, any universe described by a unified theory of this class will have its charges quantized.
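
The trace condition can be checked with exact rational arithmetic. The snippet below applies the tracelessness requirement to the five states of the multiplet described above (three colors of anti-down quark, the electron, and the neutrino), with charges measured in units of e:

```python
from fractions import Fraction

# Charges (in units of e) for the electron and neutrino; the
# tracelessness condition of eq. (30) then fixes the quark charge.
Q_nu = Fraction(0)
Q_e = Fraction(-1)
Q_dbar = -(Q_e + Q_nu) / 3          # anti-down quark charge

print("anti-down quark charge:", Q_dbar)   # -> 1/3
assert 3 * Q_dbar + Q_e + Q_nu == 0        # eq. (30) is satisfied
```

The condition forces the quark charge to be a third-integer multiple of e, reproducing the fractional charges observed in our universe.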

Another way to achieve charge quantization is through the requirement that anomalies vanish in the theory. A full discussion of this topic requires a rather lengthy formalism and is beyond the scope of this review (for further detail, see Chapter 22 of weinbergbook ()). Briefly, the conditions for anomaly cancellation lead to constraints on the sum of the particle charges, roughly analogous to that of equation (30), and such conditions imply charge quantization (see also foot (); npanomaly ()).

Particle physics theories thus support two different classes of constraints — those arising from Grand Unification and those due to anomaly cancellation. Both considerations enforce charge quantization and thereby lead to viable universes. With the current state of the field, all anomalies must cancel to avoid the prediction of infinite quantities, whereas Grand Unified Theories are not yet experimentally necessary.

In addition to quantization of charge, our universe displays the related properties of charge conservation and charge neutrality. Conservation of charge follows from the symmetries of the Lagrangian of the underlying theory noether (), so that the class of theories with this property is large and well-defined. The relative numbers of positive and negative charges in the universe are determined early in cosmic history through a number of processes, including baryogenesis and leptogenesis. At the present time, these mechanisms remain under study dolgov (); steigman (), but observations indicate that the universe as a whole is close to neutral oritoyoshi (). The excess charge density per baryon is bounded from above. One such estimate caprini () limits the excess charge per baryon, normalized by the number density of baryons, to a tiny fraction of the electron charge, where this limit holds for uniformly distributed excess charge. In principle, a universe could have a net electric charge and still obey charge conservation and charge quantization. Moreover, such a universe could remain viable provided that the excess charge is not too large (see also lyttleton ()). Although the range of excess charge that allows for habitability requires further specification, it includes the value zero, which could be considered special — and perhaps even likely — in the space of all possible universes barnes2012 ().

#### 2.5.2 Constraint from Proton Decay

In any Grand Unified Theory, conservation of baryon number is necessarily violated gellmann (), which allows for the possibility of nucleon decay (see also langacker81 (); nath2007 (); particlegroup () and references therein). The requirement that nucleons are sufficiently long-lived thus places constraints on the theory. Since the number of possible theories — and operators that violate baryon number — is large, we consider only a representative example. For the simplest class of interactions that drive proton decay, the time scale can be written in the form

\tau_p = C \, \frac{M_{\rm GUT}^4}{\alpha_{\rm GUT}^2 \, m_p^5} \, , \qquad (31)

where C is a dimensionless constant of order unity, M_GUT is the GUT scale (∼10^16 GeV), and α_GUT is the coupling constant evaluated at that scale. Current measurements superkamio () indicate that the proton lifetime in our universe has a lower limit of order 10^34 yr for the decay channel p → e⁺π⁰ and a comparable limit for the channel p → μ⁺π⁰.

If we require that protons (nucleons) live long enough for life to evolve, then we must enforce the limit τ_p ≳ N τ_0, where τ_0 is the atomic time scale and N is the number of atomic time scales required for successful biological evolution. Here we take N ∼ 10^34, corresponding to a time scale of several Gyr (see lunine (); knoll (); scharf () and Section 7.3 for further discussion). This constraint can be written in the form

\left( \frac{M_{\rm GUT}}{m_p} \right)^4 \gtrsim \frac{N}{C} \left( \frac{\alpha_{\rm GUT}}{\alpha} \right)^2 \frac{m_p}{m_e} \, , \qquad (32)

where α is the fine structure constant in the low energy limit. In our universe, the combination of coupling constants and mass ratios on the right hand side (apart from N) enters only through its fourth root, so the constraint of interest is not overly sensitive to its value. Moreover, the appropriate value of C is not precisely known. As a result, we obtain the approximate bound M_GUT/m_p ≳ 10^9. Any viable universe must have a significant hierarchy between the GUT scale and the mass of the proton in order to keep nucleons stable long enough for life to evolve. However, this minimum hierarchy (a factor of ∼10^9) is much smaller than that realized in our universe (where M_GUT/m_p ∼ 10^16). A similar situation arises for proton decay in supersymmetric theories: the anthropically preferred time scale is much shorter than the observed proton lifetime (see banksdinegorb (); susskindproton () for further discussion). In addition, a number of scenarios have been put forth to allow for proton stability nath2007 () (e.g., in theories with large extra dimensions), so that the bound implied by equation (32) is not ironclad.
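
The scalings above can be combined into a back-of-the-envelope estimate. The sketch below assumes an order-unity coefficient C = 1, a unified coupling α_GUT ≈ 1/40, and an evolutionary time scale of ∼7.5 Gyr; it illustrates the scaling rather than providing a precision bound:

```python
# Minimum GUT-scale hierarchy for protons to outlive biological evolution.
# Assumptions: C = 1, alpha_GUT = 1/40, evolution time ~7.5 Gyr.
ALPHA = 1.0 / 137.0            # fine structure constant (low energy)
ALPHA_GUT = 1.0 / 40.0         # assumed unified coupling
M_P_OVER_M_E = 1836.0          # proton-to-electron mass ratio

HBAR_EV_S = 6.582e-16          # hbar in eV s
ME_C2_EV = 5.11e5              # electron rest energy in eV
TAU_ATOMIC = HBAR_EV_S / (ALPHA**2 * ME_C2_EV)    # atomic time, ~2.4e-17 s

GYR_IN_S = 3.156e16
N_EVOLVE = 7.5 * GYR_IN_S / TAU_ATOMIC            # atomic times in ~7.5 Gyr

# eq. (32) with C = 1: (M_GUT/m_p)^4 >= N (alpha_GUT/alpha)^2 (m_p/m_e)
hierarchy = (N_EVOLVE * (ALPHA_GUT / ALPHA)**2 * M_P_OVER_M_E) ** 0.25
print(f"tau_atomic ~ {TAU_ATOMIC:.1e} s, minimum M_GUT/m_p ~ {hierarchy:.1e}")
```

The resulting hierarchy of a few billion is indeed far below the value M_GUT/m_p ∼ 10^16 realized in our universe.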

## 3 Cosmological Parameters and the Cosmic Inventory

This section outlines the cosmological parameters that are required to describe a universe as a member of the multiverse. We start with a review of the cosmological parameters that are necessary to specify the current state of our own universe. However, some of these parameters have relatively little effect on structure formation and are not necessary for an arbitrary universe to be habitable. We thus define the subset of parameters that are relevant for considerations of fine-tuning across the multiverse. The section then outlines the flatness problem and related cosmological issues, and briefly describes how an early inflationary epoch can drive a universe to become spatially flat. We also discuss how inflationary models can provide cosmological perturbations and elucidate the relationship between the parameters of the inflaton potential and the amplitude of primordial fluctuations.

### 3.1 Review of Parameters

The current state of the universe can be characterized by a relatively small collection of parameters. The expansion of the universe is governed by the Friedmann equation

\left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3} \, \rho - \frac{k}{a^2} \, , \qquad (33)

where a is the scale factor, k is the curvature constant, and the energy density ρ includes contributions from all sources. If we are concerned only with the expansion and evolution of the universe as a whole — and not the formation of structure within it — then the current state of the universe can be specified by measuring all of the contributions to the energy density and the Hubble constant

H_0 = \left( \frac{\dot{a}}{a} \right)_{t = t_0} \, , \qquad (34)

where all of these quantities are evaluated at the present epoch. Note that once ρ and H_0 are specified, the equation of motion (33) determines the curvature constant k. Following cosmological convention, we take a = 1 at the present epoch. The total energy density contains a number of components, including matter (ρ_m), radiation (ρ_r), and vacuum energy (ρ_Λ). The matter density includes at least two contributions, from baryons (ρ_b) and from dark matter (ρ_dm). Notice also that the dark matter could have contributions from different types of particles, including neutrinos and some type of cold dark matter. Many candidates have been put forward, including the lightest supersymmetric partner and axions (e.g., see baer (); feng (); jungman (); steffen () and references therein).

The various components of the energy density evolve differently in the presence of cosmic expansion. The matter components, both dark matter and baryons, vary with the scale factor according to ρ_m ∝ a^{-3}, whereas the radiation component varies according to ρ_r ∝ a^{-4}. Unfortunately, the behavior of the dark energy remains unknown frieman (). Current observations indicate that the vacuum energy density evolves slowly over cosmic time, so that it acts like a cosmological constant. For simplicity, we assume here that ρ_Λ = constant. One should keep in mind, however, that more complicated behavior is possible and could be realized in other universes even if ρ_Λ is essentially constant in our own.
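
As a quick numerical illustration of these scalings, the sketch below (using assumed fiducial density parameters Ω_m ≈ 0.3, Ω_r ≈ 9 × 10^{-5}, Ω_Λ ≈ 0.7) checks that radiation dominates at small scale factor while the vacuum dominates today:

```python
# Density components versus scale factor a (normalized so a = 1 today):
# matter ~ a^-3, radiation ~ a^-4, vacuum ~ constant.
def densities(a, om=0.3, orad=9e-5, ovac=0.7):
    """Return (matter, radiation, vacuum) densities in critical units."""
    return om * a**-3, orad * a**-4, ovac

m, r, v = densities(1e-4)        # deep in the radiation era
assert r > m > v                  # radiation dominates early
m, r, v = densities(1.0)          # present epoch
assert v > m > r                  # vacuum dominates today
print("early: radiation-dominated; today: vacuum-dominated")
```

The crossovers between these regimes define the epochs of matter-radiation equality and the onset of vacuum domination discussed below.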

Instead of working in terms of energy densities themselves, one can also define a critical density,

\rho_c = \frac{3 H^2}{8\pi G} \, , \qquad (35)

and write the energy densities as ratios

\Omega_X = \frac{\rho_X}{\rho_c} \, , \qquad (36)

where the subscript X identifies the component of the universe (dark matter, radiation, etc.). The set of parameters necessary to determine the expansion properties of the universe thus becomes

\{ H_0 \, , \ \Omega_b \, , \ \Omega_{\rm dm} \, , \ \Omega_r \, , \ \Omega_\Lambda \} \, . \qquad (37)

With these parameters specified, note that the curvature constant k is given by

k = H_0^2 \left( \Omega - 1 \right) \quad {\rm where} \quad \Omega = \sum_X \Omega_X \, . \qquad (38)

Notice also that the total energy density is determined according to

\rho_0 = \rho_c \sum_X \Omega_X = \frac{3 H_0^2}{8\pi G} \, \Omega \, . \qquad (39)
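
As a sanity check on the critical density of equation (35), the snippet below evaluates ρ_c for an assumed fiducial Hubble constant of 70 km/s/Mpc:

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G), in cgs units.
G = 6.674e-8                       # cm^3 g^-1 s^-2
MPC_CM = 3.086e24                  # cm per Mpc
H0 = 70.0e5 / MPC_CM               # 70 km/s/Mpc in s^-1

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"rho_c ~ {rho_c:.2e} g/cm^3")
```

The result is roughly 9 × 10^{-30} g/cm^3, the familiar few-hydrogen-atoms-per-cubic-meter value.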

The discussion thus far only accounts for the expansion of a universe, and implicitly assumes that space-time is homogeneous and isotropic. In order for structure to form, the universe in question must contain deviations from homogeneity. In our universe, the starting amplitude of these fluctuations is extremely small. Moreover, as discussed below, considerations of structure formation indicate that such fluctuations should be small in any successful universe. As a result, the expansion of the universe proceeds largely independently of the formation of structure on smaller scales.

The primordial fluctuations can be described in a number of ways. In our universe, these seeds of structure formation are found to be Gaussian-distributed adiabatic fluctuations to a high degree of approximation planck2014 (); planck2016 (); wmap () (see also copi (); muir ()). Moreover, the fluctuations have a nearly scale-invariant spectrum. The theory of inflation (see below) tends to produce such a spectrum, but the scale-invariant hypothesis was proposed as a working model of the fluctuations much earlier harrison1970 (); zeldovich (). In any case, the spectrum of perturbations can be written in the form

\Delta^2(k) = A_s \left( \frac{k}{k_0} \right)^{n - 1} \, , \qquad (40)

where k is the wavenumber (equivalently, the inverse spatial scale) of the fluctuation. In our universe the spectrum is nearly independent of spatial scale so that n ≈ 1.

In general, the universe must also contain a contribution to the fluctuations due to gravitational waves, often known as tensor modes. The dimensionless spectrum of tensor modes can be written in a form similar to that considered previously, i.e.,

\Delta_t^2(k) = A_t \left( \frac{k}{k_0} \right)^{n_t} \, . \qquad (41)

In our universe, tensor modes have not (yet) been observed, but we expect that the index n_t ≈ 0 and the amplitude A_t < A_s.

The set of parameters required to specify the departures of the universe from homogeneity thus involves at least four parameters and can be written

\{ A_s \, , \ n \, , \ A_t \, , \ n_t \} \, . \qquad (42)

The number of parameters required for a full specification could be larger if the fluctuations are non-Gaussian.

The experiments that determine the cosmological parameters in our universe rely heavily on observations of the cosmic microwave background planck2014 (); cobe (); wmap (). These measurements depend on another cosmological parameter τ, which is the scattering optical depth of the universe due to reionization. The optical depth τ ≈ 0.09 in our universe and must be determined in order to make precise estimates for the other cosmological parameters of interest. In the present context, however, the scattering optical depth does not play an important role in structure formation. Reionization occurs only because structure — first galaxies and then massive stars — is able to form. In any case, we will not include the scattering optical depth as a relevant variable for purposes of studying fine-tuning.

The most important cosmological parameters are summarized in Table 1. This list contains quantities that define the cosmological inventory, the current expansion rate, and the characteristics of the primordial density fluctuations (see also liddle2004 (); lahavliddle ()). Note that the inventory is not complete, as one can consider the various types of stellar objects and gaseous phases that make up the baryonic component, as well as the radiation fields produced by a wide range of astrophysical processes fukugita (). On the other hand, not all of the parameters listed in Table 1 are important for discussions of fine-tuning. As discussed below, we can reduce the number of cosmological parameters to a minimal set.

We first note that the Hubble constant , while vital to understanding the current state of our universe, essentially defines the current cosmological epoch. In considerations of other universes, however, we only need to consider whether or not structure formation occurs at any epoch. As a result, the Hubble constant, which varies with time, does not need to take on a specific value.

Next, our universe is observed to be nearly spatially flat. We can also argue that successful universes must be close to flat: some solution to the flatness problem, either by inflation guth1981 () or some other mechanism, is assumed to be operational in any viable universe (see Section 3.3 for further discussion). We can also assume that the horizon problem and monopole problem (unwanted relics) are not impediments to a successful universe. As a result, we can take k = 0 and hence enforce the constraint

\Omega = \Omega_m + \Omega_r + \Omega_\Lambda = 1 \, . \qquad (43)

In other words, only three of the density contributions are independent. The inventory of a universe is thus specified by three quantities. The values of Ω_X, however, vary with time or equivalently scale factor. As outlined below, the early universe must emerge from the epoch of Big Bang Nucleosynthesis with an acceptable chemical composition, which in turn depends on the baryon to photon ratio η = n_b/n_γ, which is (nearly) constant. We can thus use η to specify the baryonic component. We can then use the ratio Ω_dm/Ω_b to specify the dark matter content. Note that even though the Ω_X vary with time, the ratio of any two matter components does not. Alternately, we can define a dark matter parameter η_dm so that we have symmetric definitions for the baryonic component η_b and the dark matter component η_dm. Finally, the dark energy density is assumed to be constant and can be specified through its value ρ_Λ. The inventory of the universe is thus specified by the reduced set of parameters

\{ \eta_b \, , \ \eta_{\rm dm} \, , \ \rho_\Lambda \} \, . \qquad (44)
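
The baryon-to-photon ratio in this reduced parameter set can be recovered numerically from the measured baryon density. The sketch below assumes the fiducial values Ω_b h² ≈ 0.022 and T_CMB = 2.725 K:

```python
import math

# Baryon-to-photon ratio eta = n_b / n_gamma from Omega_b h^2 and T_cmb.
OMEGA_B_H2 = 0.022                  # assumed baryon density parameter
T_CMB = 2.725                       # K

# photon number density: n_gamma = (2 zeta(3)/pi^2) (kT / hbar c)^3
KB = 1.381e-16                      # erg/K
HBAR = 1.055e-27                    # erg s
C = 2.998e10                        # cm/s
ZETA3 = 1.202
n_gamma = (2 * ZETA3 / math.pi**2) * (KB * T_CMB / (HBAR * C))**3   # cm^-3

# baryon number density: n_b = Omega_b * rho_c / m_p, with h = H0/100
G = 6.674e-8
MPC_CM = 3.086e24
H100 = 1.0e7 / MPC_CM               # 100 km/s/Mpc in s^-1
rho_c_h2 = 3 * H100**2 / (8 * math.pi * G)   # critical density for h = 1
M_P = 1.673e-24                     # g
n_b = OMEGA_B_H2 * rho_c_h2 / M_P

eta = n_b / n_gamma
print(f"n_gamma ~ {n_gamma:.0f} cm^-3, eta ~ {eta:.1e}")
```

The result, η ≈ 6 × 10^{-10}, is the value inferred from Big Bang Nucleosynthesis and the microwave background in our universe.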

The number of parameters necessary to specify the spectrum of density fluctuations can also be significantly reduced. Given that the tensor modes are subdominant in our universe, and that the spectrum of perturbations for both contributions is relatively flat, we can characterize the primordial fluctuations with a single amplitude parameter Q, where Q ∼ 10^{-5} in our universe. The set of parameters for structure formation thus collapses to the form {Q}. This simplification assumes that the index n remains close to unity. For much larger (smaller) values of n, the spectrum of perturbations will have significantly more power on smaller (larger) spatial scales, and will lead to corresponding changes in structure formation. Unfortunately, a comprehensive assessment of the allowed range of the index n has not been carried out. Nonetheless, before the value of n was well-determined observationally, explorations of structure formation with a range of indices (e.g., bardeenbig (); blumenthal ()) did not find that the universe becomes uninhabitable.

With our present level of understanding of physics and cosmology, the parameters (η_b, η_dm, ρ_Λ, Q) represent the most important dials that can be adjusted to specify the properties of a given universe. In a complete theory, the values of these parameters — or more likely the distributions of the parameter values — could be calculable from physics beyond the Standard Model. In the meantime, for this discussion, the values of these parameters are left free.

The baryon to photon ratio η is nonzero because the universe experienced an epoch of baryogenesis that broke the symmetry between matter and antimatter (unless η results from highly unusual initial conditions). Baryogenesis, in turn, requires three essential ingredients sakharov (): The first requirement is that baryon number is not conserved. The second is that both C and CP conservation must be violated, where ‘P’ is the discrete symmetry of parity and ‘C’ is that of charge conjugation. Finally, the universe must depart from thermal equilibrium during the epoch(s) when non-conservation of the aforementioned quantities takes place. Although these three features are known to be required for successful baryogenesis, an accepted theory of this process is not yet available (see kolbturner () for additional detail and dine2003 (); steigman () for more recent reviews). Grand Unified Theories langacker81 (), theories of quantum gravity hawkingpagepope (), and other approaches allow for the violation of baryon number conservation, so that new physics should eventually predict the expected distribution of the baryon to photon ratio η.

The abundance of dark matter is also determined by processes taking place in the early universe. The simplest scenario occurs when the universe has a single dark matter species that is produced in thermal equilibrium. In that scenario, the abundance of dark matter is determined when the weak interactions become too slow to maintain statistical equilibrium, typically at cosmic times of order 1 sec and temperatures of order 1 MeV kolbturner (). The dark matter abundance depends on the particle properties (mass, interaction cross section, etc.), which are not known at the present time jungman (). Again, a description of these properties requires extensions of the Standard Model, so that new physics may eventually predict the possible abundances of dark matter.
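
The inverse relationship between the relic abundance and the annihilation cross section can be made concrete with the standard textbook rule of thumb (e.g., kolbturner (); jungman ()); the cross section below is an assumed weak-scale fiducial value:

```python
# Thermal-relic rule of thumb: Omega_dm h^2 ~ (3e-27 cm^3/s) / <sigma v>.
# SIGMA_V is an assumed weak-scale annihilation cross section.
SIGMA_V = 3.0e-26                     # cm^3/s
omega_dm_h2 = 3.0e-27 / SIGMA_V
print(f"Omega_dm h^2 ~ {omega_dm_h2:.1f}")   # -> 0.1
```

A weak-scale cross section thus lands near the observed dark matter abundance, the coincidence often called the "WIMP miracle"; larger cross sections deplete the relic density, smaller ones overproduce it.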

The value of the amplitude of the primordial density fluctuations also cannot be predicted using known physics. In a large class of inflationary theories lindeinflate (), quantum fluctuations in the inflaton field produce density perturbations, so that the amplitude could be calculated in principle. In this case, a large number of inflationary scenarios are possible, so that the possible distribution of is similarly enormous. In addition, even for a given set of inflaton fields, the spectrum of density fluctuations can depend on the initial conditions, i.e., the manner in which the universe enters into its inflationary epoch. Although the details are both complicated and unknown, the value of is unlikely to have the same value in all universes, so that the density fluctuations must be described by a distribution of values across the multiverse. Moreover, this discussion assumes a scale-invariant spectrum of fluctuations within a given universe. In addition to the overall amplitude of the spectrum having a distribution of values, the form of the fluctuation spectrum itself (see equations [40,41]) could also vary from universe to universe.

Finally, the value of the vacuum energy density is not understood in the context of known physics. This issue is essentially the cosmological constant problem caldwellkam (); weinberg89 (), which has a long history and no accepted resolution (see Section 4).

### 3.2 Constraints on the Cosmic Inventory

Figure 9 shows the evolution of the various density components of the universe. This figure is scaled to the observed values in our universe, where a = 1 corresponds to the current cosmological epoch. The universe is radiation dominated at early times, transitions into a matter dominated era at intermediate times, and has just recently become dominated by its vacuum energy. When viewed across the relatively large span of cosmic time shown here, two things are evident: First, the fact that the universe is dominated by the vacuum energy at the present epoch is hard to discern – this near equality of matter and vacuum energy at the present epoch is a manifestation of the well-known cosmological constant problem weinberg89 (); padmanabhan (). Second, the duration of the matter dominated era is relatively short. For some values of the cosmological parameters, the universe could move directly from its radiation era into a vacuum dominated era. Such a universe would have no period of matter domination and hence no structure formation. This scenario — without a matter dominated era — is an extreme version of cases where the vacuum energy density is too large relative to the primordial fluctuation amplitude to allow for structure formation (see Section 4 and Refs. adamsrhovac (); efstathiou (); garriga1999 (); garriga2000 (); garriga2006 (); graesser (); martel (); mersini (); piran (); weinberg87 ()).

We can derive a constraint that depends only on the cosmic inventory — independent of the fluctuation amplitude — by requiring that the universe have a matter dominated era. Equivalently, the energy density of the vacuum must be smaller than the energy density of the universe at the epoch of equality. If we write the vacuum energy density in terms of an energy scale λ, i.e.,

\rho_\Lambda = \lambda^4 \, , \qquad (45)

then the constraint takes the form

\lambda^4 \lesssim a_R T_{\rm eq}^4 \quad {\rm with} \quad k T_{\rm eq} \sim \eta \, m_p c^2 \, , \qquad (46)

where η is the baryon to photon ratio and a_R is the radiation constant. In our universe, the energy scale of the vacuum λ ≈ 2 × 10^{-3} eV. For comparison, the right hand side of equation (46) corresponds to an energy scale of order 1 eV. The universe is thus safe by a factor of ∼100 for the energy scale λ (and hence a factor of ∼10^8 for the energy density ρ_Λ).
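
The quoted energy scale of the vacuum can be checked directly. The sketch below (assuming the fiducial values Ω_Λ ≈ 0.7 and H0 ≈ 70 km/s/Mpc) converts the observed vacuum energy density into the scale λ = ρ_Λ^{1/4} of equation (45):

```python
import math

# Energy scale of the vacuum: lambda = rho_vac^{1/4} in natural units.
G = 6.674e-8                           # cm^3 g^-1 s^-2
MPC_CM = 3.086e24                      # cm per Mpc
H0 = 70.0e5 / MPC_CM                   # s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)  # critical density, g/cm^3
rho_vac = 0.7 * rho_c                  # assumed Omega_Lambda = 0.7

# convert to natural units: energy density in eV^4 via (hbar c)^3
C = 2.998e10                           # cm/s
ERG_EV = 6.242e11                      # eV per erg
HBARC_EV_CM = 1.973e-5                 # eV cm
u = rho_vac * C**2 * ERG_EV            # energy density in eV/cm^3
lam = (u * HBARC_EV_CM**3) ** 0.25     # eV

print(f"lambda ~ {lam:.1e} eV")
```

The result, λ ≈ 2 × 10^{-3} eV, confirms the value quoted above.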

The constraint of equation (46) is necessary but not sufficient. Even if the universe has a matter dominated era, structure formation can be suppressed if the vacuum energy density is too large relative to the amplitude of the primordial density fluctuations. This issue is taken up in Section 4 and provides stronger constraints on the energy density of the vacuum. On the other hand, the baryon to photon ratio can be larger in other universes (Section 5), which would allow for even larger values of the vacuum energy scale λ. If the baryon to photon ratio becomes too large, however, the epoch of matter domination will take place before Big Bang Nucleosynthesis. If we approximate the energy scale of BBN as ∼1 MeV bartip (); carr (), this constraint can be written in the form

\eta \lesssim \frac{E_{\rm BBN}}{m_p c^2} \sim 10^{-3} \, , \qquad (47)

where the numerical value corresponds to the parameters of our universe. Note that a universe in which matter domination occurs before the epoch of BBN would not necessarily be sterile, but it would represent a significant departure from the usual ordering of cosmological time scales.

### 3.3 The Flatness Problem

One of the classic fine-tuning problems in cosmology is sometimes known as the flatness problem. This issue can be illustrated by writing the equation of motion (33) for the scale factor in the form

\frac{k}{a^2 H^2} = \Omega - 1 \equiv \Delta \, , \qquad (48)

where the second equality defines the parameter Δ. Note that, in general, the density parameter Ω is not constant in time. More specifically, for a radiation dominated universe, ρ ∝ a^{-4}, so that Δ ∝ a². Similarly, for a matter dominated universe ρ ∝ a^{-3} and Δ ∝ a. The parameter Δ thus increases as the universe expands for the case of both matter and radiation. On the other hand, if the universe is dominated by vacuum energy, then Δ ∝ a^{-2} and hence decreases with time.

In order for the universe to remain flat, equivalently for Ω to remain close to unity, the quantity Δ must remain small. But if Δ is small at a given epoch, then it must have been even smaller at an earlier epoch (for universes dominated by either radiation or matter). To leading order, for Δ ≪ 1 in the radiation dominated era,

\Delta(t) \approx \Delta_1 \left( \frac{t}{t_1} \right) \, , \qquad (49)

In order for Δ to be small at the present cosmological epoch, as observed, this quantity must have been extremely small at earlier times. When the cosmic age was t ∼ 1 sec, near the beginning of Big Bang Nucleosynthesis, the parameter Δ ≲ 10^{-16}. If we go all the way back to the Planck era, with age t ∼ 10^{-43} sec, the parameter Δ ≲ 10^{-60}.
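
These back-extrapolated bounds can be reproduced with the scaling laws above. The sketch below assumes fiducial values for the present age, the time of matter-radiation equality, and the current bound on Δ; it ignores the recent epoch of vacuum domination, so the results are order-of-magnitude only:

```python
# Back-extrapolation of Delta = |Omega - 1|:
# matter era: Delta ~ t^(2/3); radiation era: Delta ~ t.
YR = 3.156e7                         # seconds per year
T_NOW = 13.8e9 * YR                  # assumed present age
T_EQ = 5.0e4 * YR                    # assumed matter-radiation equality
DELTA_NOW = 0.01                     # assumed current bound on |Omega - 1|

def delta_at(t):
    """Bound on Delta at an earlier time t (in seconds)."""
    if t >= T_EQ:
        # matter era
        return DELTA_NOW * (t / T_NOW) ** (2.0 / 3.0)
    # radiation era, matched onto the matter era at equality
    d_eq = DELTA_NOW * (T_EQ / T_NOW) ** (2.0 / 3.0)
    return d_eq * (t / T_EQ)

print(f"Delta(t = 1 s)     <~ {delta_at(1.0):.1e}")
print(f"Delta(t = 1e-43 s) <~ {delta_at(1e-43):.1e}")
```

The extrapolation recovers bounds of order 10^{-16} or smaller at BBN and of order 10^{-60} at the Planck epoch, in line with the values quoted above.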

Figure 10 shows the evolution of the density parameter Ω as the universe expands. The scale factor is taken to be unity at the start of the evolution and five initial values of the parameter Δ are used. The evolution in Ω is shown for universes with both positive (upper curves) and negative (lower curves) curvature. The density parameter remains close to unity until the parameter Δ becomes significant, and then evolves rapidly. For positive curvature, the value of Ω becomes increasingly large and the universe eventually recollapses. For negative curvature, Ω steadily decreases and the expansion approaches that of an empty universe.

The paradigm of inflation albrechtstein (); guth2000 (); linde1983 () was developed to alleviate this issue of the sensitive fine-tuning of the density parameter (although other motivations, such as the monopole problem, were also important). Since this topic has been discussed extensively elsewhere guth2000 (); guth2007 (); lindeinflate (), the present treatment will be brief. As outlined above, our universe requires the parameter Δ ≲ 10^{-60} at the Planck epoch in order to evolve into its present state. In contrast, the value of Δ is expected to be of order unity at this time. If the universe experiences an epoch of rapid expansion due to vacuum domination, so that Δ ∝ a^{-2} during that epoch, then successful inflation requires the scale factor to grow by a factor of ∼10^{30}. This growth factor is generally expressed in terms of the number of e-foldings N, so that the requirement becomes

N \equiv \ln \left( \frac{a_{\rm end}}{a_{\rm start}} \right) \gtrsim 70 \, . \qquad (50)

The exact number of required e-foldings of the scale factor depends on the energy scale of the inflaton field, where lower energy scales lead to somewhat less stringent requirements. However, the number of e-foldings appears in the exponential, so that the required minimum number generally falls in the range N ≈ 50 – 70. Moreover, most inflation models tend to produce far more e-foldings, so that N ≫ 70.
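
The required number of e-foldings follows directly from the suppression factor. A minimal sketch, assuming that Δ must be reduced by a factor of ∼10^{60} and that Δ ∝ a^{-2} during inflation:

```python
import math

# Minimum inflationary e-foldings needed to dilute Delta = |Omega - 1|
# from order unity at the Planck epoch down to ~1e-60. During vacuum
# domination Delta ~ a^{-2}, so a suppression factor F requires the
# scale factor to grow by F^{1/2}.
F = 1e60                      # assumed required suppression of Delta
growth = math.sqrt(F)         # required growth of the scale factor
N = math.log(growth)          # number of e-foldings

print(f"scale factor growth ~ {growth:.0e}, N ~ {N:.0f}")
```

The result, N ≈ 69, matches the requirement of equation (50).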

Another problem facing our observed universe is the so-called horizon problem: observations of the Cosmic Microwave Background show that the universe is isotropic to high precision. Without the aforementioned inflationary epoch, regions that are now observed on opposite sides of the microwave sky would never have been in causal contact. Nonetheless, they have the same temperature to about one part in 10^5, and we can now measure the spectrum of these small deviations to high precision planck2014 (); planck2016 (); cobe (); wmap (). Since one effect of the inflationary epoch is to accelerate the expansion and effectively move regions out of the horizon, such regions could have been in causal contact at earlier epochs provided that the duration of the inflationary epoch is long enough. The number of e-foldings of the scale factor required to alleviate the flatness problem (from equation [50]) is nearly the same as that required to alleviate the horizon problem kolbturner (); baumann (). As a result, universes that emerge from an early inflationary epoch with sufficiently small Δ, and which can thereby survive to old ages, will also generally be close to isotropic. Our universe could in principle be habitable without the extreme level of isotropy that is observed, but smoothness and flatness tend to arise together in the simplest versions of inflation.

The paradigm of inflation alleviates the flatness problem — and hence the apparent fine-tuning of the density parameter — as long as the universe can accelerate for the required number of e-foldings (equation [50]) and then subsequently evolve according to the standard, radiation dominated hot Big Bang model. In order for inflation to be successful kolbturner (); steinturner (), a number of constraints must be met:

The universe must first be able to enter into an inflationary state, where the energy density is dominated by the vacuum and the scale factor accelerates (grows faster than linear with time). This superluminal phase must last long enough to solve the flatness, horizon, and monopole problems in our universe. Although other universes are not required to be as isotropic as our own, so that the horizon problem is not as severe, all of these issues are addressed by the same large growth factors.

The likelihood that the universe can enter into an inflationary state remains under intense debate (for further discussion, see brandenberger (); carroll2014 (); carroll2010 (); corichi (); gibbons (); hawkingpage (); hollandswald (); schiffrin (); steinhardt2011 (); turok2002 ()). The required fine-tuning for achieving successful inflation can be described in terms of the space of possible trajectories for the expansion history of the universe. Some authors argue that if the universe starts at the Planck epoch with reasonable assumptions, then successful inflation becomes “exponentially unlikely” ijjas (). In other words, given that the universe is required to have desirable late-term properties, only a small fraction of the possible starting conditions lead to acceptable cosmological histories. Other authors conclude that the paradigm of cosmic inflation remains on a strong footing guth2013 (). The key question is whether or not the conditions required for successful inflation are more constraining than the cosmological problems that the paradigm seeks to alleviate. These conditions include both the necessary parameter values of the theory (e.g., the properties of the inflaton potential) and the requisite initial conditions. This issue remains unresolved. On a related note, the fraction of cosmological trajectories that lead to smooth universes at late times is dominated by those that are not smooth at early epochs carroll2014 (), which changes the constraints on cosmological initial conditions.

During the inflationary expansion phase, quantum fluctuations in the inflaton field produce density perturbations in the background universe bardeen (); guthpi (); mukhanov (). These perturbations must be sufficiently small in amplitude in order for the inflationary epoch to begin, and this condition is expected to hold in only a small fraction of realistic cosmologies vachaspati (). Provided that inflation is successful, in the late universe these density fluctuations grow into galaxies, clusters, and other large scale structures. Moreover, these fluctuations must have an amplitude that falls within the approximate range $10^{-6} \lesssim Q \lesssim 10^{-4}$ in order for the universe to produce galaxies with acceptable densities and hence be habitable coppess (); tegmarkrees (); tegmark () (see Section 6). The relation between the amplitude $Q$ and the parameters that appear in the inflaton potential is described in Section 3.4.

After the universe has expanded by the required factor, it must leave its accelerating state and begin to expand in the usual subluminal manner. In order for the universe to become potentially habitable, first by producing heavy nuclei and later by forming galaxies and stars, it must become radiation dominated after inflation ends. The rapid, accelerated expansion of the inflationary phase leaves the universe with an exponentially low temperature, so that essentially all of the energy is locked up in the vacuum. This vacuum energy must be converted into radiation and particles through a process known as reheating. The conversion must be efficient enough to reheat the universe to a sufficiently high temperature such that baryogenesis can take place. This minimum temperature is often taken to be the scale of the electroweak phase transition, $T \sim 100$ GeV. Successful reheating of the universe to this high temperature is by no means automatic, so this requirement represents another hurdle that a successful universe must negotiate.

### 3.4 Quantum Fluctuations and Inflationary Dynamics

As outlined in the previous subsection, an early epoch of inflation can potentially alleviate a number of cosmological problems, although achieving successful inflation is not without its own issues. Although alternate explanations exist, it is useful to illustrate inflationary dynamics in greater detail. Toward that end, this section describes the semi-classical dynamics for the evolution of the scalar field that drives inflation, generally called the inflaton field. Next we elucidate the relationship between the parameters that appear in the inflaton potential and the spectrum of fluctuations produced during the inflationary epoch.

The equation of motion for the evolution of the inflaton field is generally written in the form

$\ddot{\varphi} + 3 H \dot{\varphi} + \frac{\partial V}{\partial \varphi} = 0 \,,$   (51)

where $H = \dot{a}/a$ is the Hubble parameter and $V(\varphi)$ is the potential. During the inflationary epoch, the energy density of the universe is dominated by the potential of the inflaton, so that the Hubble parameter is given by

$H^2 = \frac{8\pi}{3 M_{\rm pl}^2} \left[ V(\varphi) + \frac{1}{2} \dot{\varphi}^2 \right] \,.$   (52)

Under a wide range of conditions, the first term in equation (51) and the last term in equation (52) can be neglected, and evolution takes place during what are called slow-roll conditions. The number of e-foldings of the inflationary epoch is then given by

$N \equiv \int H \, dt = \frac{8\pi}{M_{\rm pl}^2} \int_{\varphi_f}^{\varphi_i} \frac{V}{V'} \, d\varphi \,,$   (53)

where the integrals are taken over the time interval (equivalently, the range of $\varphi$) corresponding to the inflationary epoch.

To fix ideas, we can illustrate the scalar field dynamics of inflation with a simple power-law form for the potential linde1983 (),

$V(\varphi) = \lambda \varphi^4 \,,$   (54)

where $\lambda$ is a dimensionless coefficient. With this choice, the number of e-foldings takes the form

$N = \frac{\pi}{M_{\rm pl}^2} \left( \varphi_i^2 - \varphi_f^2 \right) \,,$   (55)

where $\varphi_i$ ($\varphi_f$) is the initial (final) value of the inflaton field. The requirement of sufficient inflation thus implies that the starting value of the inflaton field satisfies the constraint $\varphi_i \gtrsim (N/\pi)^{1/2} M_{\rm pl} \approx 4 M_{\rm pl}$, so that the scalar field must be comparable to, and somewhat larger than, the Planck scale. As summarized below, however, constraints on the perturbation spectrum require the energy density to be well below the Planck scale.
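A minimal numerical check of this constraint, assuming the quartic potential and the slow-roll expression for $N$ used here, in units where $M_{\rm pl} = 1$:

```python
import math

M_PL = 1.0  # work in Planck units

def n_efolds_quartic(phi_i, phi_f):
    """Slow-roll e-folds for V = lambda * phi^4:
    N = (8 pi / M_pl^2) * Int(V/V') dphi = pi * (phi_i^2 - phi_f^2) / M_pl^2."""
    return math.pi * (phi_i**2 - phi_f**2) / M_PL**2

def phi_start(n_target, phi_f=0.0):
    """Invert for the initial field value needed to obtain n_target e-folds."""
    return math.sqrt(n_target / math.pi + phi_f**2) * M_PL

# N = 60 requires phi_i ~ 4.4 M_pl: a super-Planckian field value,
# although (for tiny lambda) the energy density stays sub-Planckian.
```

The inversion confirms that sixty e-foldings require a starting field value of a few Planck masses, the "comparable to and somewhat larger than" statement above.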

In addition to allowing the universe to become connected and flat, the inflationary epoch also imprints density fluctuations on the otherwise smooth background of the universe guthpi (). To leading order, the spectra of cosmological perturbations produced by inflation can be written in the forms

$\Delta_s \sim \frac{H^2}{2\pi \dot{\varphi}} \qquad {\rm and} \qquad \Delta_t \sim \frac{H}{M_{\rm pl}} \,,$   (56)

for scalar and tensor contributions, respectively baumann (). These quantities are evaluated at the epoch when a perturbation with a particular length scale crosses outside the horizon kolbturner (). Using the equation of motion (51) for the scalar field and the definition (52) of the Hubble parameter (in the slow-roll approximation), the quantities $\Delta_s$ and $\Delta_t$ can be written in terms of the inflationary potential. For the particular potential given by equation (54), the expressions for the perturbation spectra take the forms

$\Delta_s \sim \lambda^{1/2} \left( \frac{\varphi}{M_{\rm pl}} \right)^3 \qquad {\rm and} \qquad \Delta_t \sim \lambda^{1/2} \left( \frac{\varphi}{M_{\rm pl}} \right)^2 \,.$   (57)

The ratio of scalar to tensor perturbations is thus given by

$\frac{\Delta_s}{\Delta_t} \sim \frac{\varphi}{M_{\rm pl}} \sim N^{1/2} \,,$   (58)

where $N$ is the number of e-foldings from equation (55). Since the perturbations that are constrained by measurements of the cosmic background radiation are those that left the horizon about 60 e-foldings before the end of inflation kolbturner (), the value of $N$ used to evaluate equation (57) is of order $N \approx 60$. Using this result to evaluate the scalar perturbation allows us to specify the amplitude $Q$ in terms of inflationary parameters,

$Q \sim \Delta_s \sim \lambda^{1/2} N^{3/2} \,.$   (59)

As outlined above (see Table 1), the primordial fluctuations in our universe have amplitude $Q \approx 10^{-5}$, so that the required value of the dimensionless constant must be extremely small, roughly $\lambda \sim 10^{-14}$.
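The order of magnitude can be recovered with a two-line estimate (a sketch assuming the scaling $Q \sim \lambda^{1/2} N^{3/2}$, with all order-unity prefactors dropped):

```python
def lambda_from_amplitude(q_amp, n_efolds):
    """Invert the order-of-magnitude scaling Q ~ lambda^(1/2) * N^(3/2)
    for the dimensionless coupling lambda."""
    return (q_amp / n_efolds ** 1.5) ** 2

lam = lambda_from_amplitude(1.0e-5, 60)
# lam ~ 5e-16 with all prefactors set to unity; the dropped order-unity
# coefficients and differing normalization conventions shift this upward
# toward the commonly quoted lambda ~ 1e-14.  Either way, lambda << 1.
```

Whatever the exact convention, the coupling must sit more than a dozen orders of magnitude below its natural value of order unity, which is the fine-tuning problem discussed next.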

In the absence of special circumstances, however, the dimensionless parameter $\lambda$ is expected to be of order unity. The requirement that its value must be incredibly small thus leads to a fine-tuning problem. More specifically, the problem is one of naturalness (see thooft () and Section 1.1). The value of the parameter is expected to become of order unity due to quantum corrections unless the small required value is protected by a special symmetry. Moreover, one can show that a constraint of this form, and hence the requirement $\lambda \ll 1$, holds more generally, so that any successful inflation model in this class must have a small parameter afguth ().

Given the tension between the required small value of the dimensionless constant ($\lambda \sim 10^{-14}$) and its much larger expected value ($\lambda \sim 1$), one would expect other universes to have larger values of $\lambda$. As a result, it is natural (in both the technical and colloquial sense) for the amplitude $Q$ of cosmological fluctuations to be larger in other universes. Larger values of $Q$ lead to earlier structure formation and denser galaxies. Of course, the amplitude could sometimes be smaller, so that the universe produces more rarefied galaxies. The consequences of these changes, and accompanying constraints, are discussed in Section 6.

Note that for viable universes the value of $Q$ is bounded from above: If the amplitude becomes of order unity, fluctuations are close to non-linearity, and hence ready for collapse, as soon as they enter the horizon after inflation. In this case, a large fraction of the energy within the horizon could become locked up within black holes in the early universe. The analysis of tegmarkrees () indicates that the resulting density of black holes could dominate the density of dark matter and baryons if the amplitude $Q$ becomes too large. This bound assumes that the spectrum of density perturbations remains relatively flat, with index $n \approx 1$, down to the mass scales characteristic of the horizon just after inflation (see carrkuhnsand (); greenliddle () for further detail).

Finally, we note that although the tensor perturbations are often subdominant, they provide an important constraint on the energy scale of inflation (see also liddle1994 ()). The requirement that $\Delta_t \lesssim Q \sim 10^{-5}$ implies that the inflaton potential must obey the bound

$V \lesssim Q^2 M_{\rm pl}^4 \sim 10^{-10} \, M_{\rm pl}^4 \,.$   (60)

If we characterize the energy density of the potential by defining an energy scale $E_V \equiv V^{1/4}$, then the constraint becomes $E_V \lesssim 3 \times 10^{-3} \, M_{\rm pl} \approx 10^{16}$ GeV for our universe. In other words, for applications to our universe, the energy scale of inflation is bounded from above by the GUT scale and must be substantially below the Planck scale. In other universes with larger fluctuation amplitudes $Q$, the required hierarchy between the Planck scale and the inflation scale could be less pronounced.

### 3.5 Eternal Inflation

An important generalization of the inflationary universe paradigm is that (in many cases) most of the volume of the entire space-time is in a state of superluminal expansion, so that the inflationary epoch can be eternal guth2007 (); linde1986 (). Sub-regions of space-time detach from the background and form separate ‘pocket universes’, which can evolve to become similar to our own. Moreover, this feature can be generic for some classes of inflationary theories vilenkin83 (). The scenario of eternal inflation provides one specific mechanism for generating multiple universes and is thus of interest for the problem of fine-tuning and the multiverse.

To illustrate the manner in which inflation can be eternal, we consider the simple potential of equation (54). The scalar field obeys the equation of motion (51), so that its classical trajectory would be to slowly evolve to smaller values of $\varphi$ and hence lower potential energy $V(\varphi)$. The scalar field is said to ‘slowly roll downhill’. As the scalar field evolves down the potential, however, quantum fluctuations are superimposed on the classical motion. If these fluctuations are large enough, then parts of the space-time can remain in an inflationary state for an indefinitely long span of time. The conditions required for this scenario to operate are illustrated below.

Following the discussion of guth2000 (); guth2007 (), consider the time interval corresponding to one local Hubble time, i.e., $\Delta t = H^{-1}$. At the start of that time interval, the scalar field will have a value $\varphi$ corresponding to its average over a Hubble volume $\sim H^{-3}$. During the Hubble time $\Delta t$, the scale factor grows by one factor of $e$ and the Hubble volume grows by a factor of $e^3 \approx 20$. At the same time, the scalar field will evolve. The change in the scalar field due to its classical trajectory can be denoted as $\Delta\varphi_{\rm cl}$, whereas the change due to quantum fluctuations is $\Delta\varphi_{\rm q}$. The total change in the scalar field during one Hubble time is thus given by

$\Delta\varphi = \Delta\varphi_{\rm cl} + \Delta\varphi_{\rm q} \,.$   (61)

To leading order, the quantum fluctuations will have a gaussian probability distribution, with width given approximately by $\Delta\varphi_{\rm q} \approx H/2\pi$ starobinsky (). For small fluctuations, the scalar field will evolve to lower values. For sufficiently large fluctuations, however, the scalar field can move up the potential to higher values over the course of one Hubble time. In order for eternal inflation to operate, the probability of moving to larger values of $\varphi$ must be high enough that at least one of the $e^3 \approx 20$ new Hubble volumes will have this property. This condition implies the constraint

$\frac{H^2}{2\pi |\dot{\varphi}|} \gtrsim 0.61 \,.$   (62)

Using the equation of motion (51) and the definition (52) of the Hubble parameter, this constraint takes the form

$\varphi \gtrsim \lambda^{-1/6} \, M_{\rm pl} \,.$   (63)

Although this required value of the field is larger than the Planck scale, the energy density is given by

$\rho \approx V(\varphi) \sim \lambda^{1/3} \, M_{\rm pl}^4 \sim 10^{-5} \, M_{\rm pl}^4 \,.$   (64)

As a result, the energy density of the scalar field required for eternal inflation is much smaller than that given by the Planck scale. If the universe starts out at high temperatures close to the Planck scale, then it will be born with enough energy density to enter and maintain eternal inflation.
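The two order-of-magnitude thresholds in this argument can be checked numerically. The sketch below assumes a gaussian fluctuation of rms width $H/2\pi$ per Hubble time, the quartic potential with $\lambda \sim 10^{-14}$, and drops all other prefactors of order unity:

```python
import math

def tail_prob(z):
    """P(X > z) for a standard gaussian, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def uphill_threshold(p_required):
    """Bisect for z such that a gaussian fluctuation exceeds z sigma
    with probability p_required (tail_prob is monotone decreasing)."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail_prob(mid) > p_required:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Eternal inflation needs an uphill jump in at least one of the
# e^3 ~ 20 daughter Hubble volumes, i.e. a tail probability > e^-3.
z = uphill_threshold(math.exp(-3.0))
coeff = 1.0 / z  # classical drift must satisfy H^2 / (2 pi |phidot|) > coeff

def eternal_inflation_scales(lam):
    """Order-of-magnitude field value (units of M_pl) and energy density
    (units of M_pl^4) at the eternal-inflation threshold for V = lam*phi^4."""
    return lam ** (-1.0 / 6.0), lam ** (1.0 / 3.0)

phi_threshold, rho_threshold = eternal_inflation_scales(1.0e-14)
# phi_threshold ~ 2e2 M_pl (super-Planckian field value), while
# rho_threshold ~ 2e-5 M_pl^4 (well below the Planck density).
```

The gaussian tail calculation recovers a coefficient of roughly 0.6 for the drift condition, and for $\lambda \sim 10^{-14}$ the threshold field value is a few hundred Planck masses while the energy density remains some five orders of magnitude below the Planck density, consistent with the statements above.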

In this paradigm, at any given time, the energy density in most of the volume of the multiverse is dominated by the potential energy of the scalar field and is inflating. Some regions will evolve far enough down the potential so that quantum fluctuations do not push them back up to higher vacuum energy densities. These regions can experience inflation as described in the previous section, with decreasing following its classical trajectory to lower values of . Some subset of these regions will successfully convert vacuum energy into particles and radiation, and eventually evolve according to classical cosmological theory like our own. In this manner, the background space-time of the universe continually gives rise to new universes.

This evolutionary picture is illustrated in Figure 11, which shows a sequence of snapshots of the multiverse over time. The shaded regions depict the background energy density of the vacuum, which causes the universe to inflate. The volume of the multiverse in this rapidly expanding state grows exponentially with time, so that most of the space-time resides in this state dominated by vacuum energy. Some regions can evolve to lower-energy vacuum states and eventually become radiation dominated. These regions thus become universes that evolve according to some realization of cosmology, and are depicted as the square symbols in Figure 11. Although the background space expands exponentially, and the universes expand as well, this growth is suppressed in the diagram. The different universes can in principle have different realizations of the laws of physics and/or different values of the cosmological parameters. Only some fraction of these regions have suitable choices of the parameters for the universe to be habitable, as illustrated by the green regions in the figure. (As an aside: Many versions of eternal inflation result in the production of an infinity of universes, and could result in an infinite number of both red and green squares in Figure 11. Any assessment of the fraction of habitable universes depends on how the counting is carried out.)

## 4 The Cosmological Constant and/or Dark Energy

Although our universe can be specified by relatively few cosmological parameters (see Section 3), one of the necessary ingredients is a substantial energy density of the vacuum — often called dark energy. This quantity acts like a cosmological constant and is currently the dominant component of the cosmic inventory (see Table 1). The dark energy is driving the currently observed acceleration of the universe and will have enormous consequences in the future busha2005 (); busha2007 (); nagamine (). Moreover, the existence and nature of this counter-intuitive component poses an interesting and important problem for fundamental physics. On the other hand, astrophysical processes in our universe — for example, the formation of galaxies and other large scale structure — have been influenced more by dark matter than by dark energy thus far in cosmic history (see the discussion of liviorees2018 ()).

Even though the theory of general relativity allows empty space to have a nonzero energy density, its existence poses (at least) two coupled problems:

[A] If the vacuum energy density plays a significant role at the present cosmological epoch, its value must be exceedingly small relative to theoretically expected values. This extreme ordering of energy scales is one manifestation of the cosmological constant problem, and is an example of a Hierarchical Fine-Tuning problem (Section 4.1).

[B] If the vacuum energy density is large enough to affe