Minimal Length Scale Scenarios for Quantum Gravity

Sabine Hossenfelder
Nordita
Roslagstullsbacken 23
106 91 Stockholm, Sweden
hossi@nordita.org
Abstract

We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.

Keywords

minimal length, quantum gravity, generalized uncertainty principle

1 Introduction

In the 5th century B.C., Democritus postulated the existence of smallest objects that all matter is built from and called them ‘atoms’. In Greek, the prefix ‘a’ means ‘not’ and the word ‘tomos’ means ‘cut’. Thus, atomos or atom means uncuttable or indivisible. According to Democritus’ theory of atomism, “Nothing exists except atoms and empty space, everything else is opinion.” Though variable in shape, Democritus’ atoms were the hypothetical fundamental constituents of matter, the elementary building blocks of all that exists, the smallest possible entities. They were conjectured to be of finite size, but homogeneous and without substructure. They were the first envisioned end of reductionism.

2500 years later, we know that Democritus was right in that solids and liquids are composed of smaller entities with universal properties that are called atoms in his honor. But these atoms turned out to be divisible. And stripped of its electrons, the atomic nucleus too was found to be a composite of smaller particles, neutrons and protons. Looking closer still, we have found that even neutrons and protons have a substructure of quarks and gluons. At present, the three generations of quarks and leptons of the standard model of particle physics, together with the vector fields associated to the gauge groups, are the most fundamental constituents of matter that we know.

Like a Russian doll, reality has so far revealed one layer after another on smaller and smaller scales. This raises the question: Will we continue to look closer into the structure of matter, and possibly find more layers? Or is there a fundamental limit to this search, a limit beyond which we cannot go? And if so, is this a limit in principle or one in practice?

Any answer to this question has to include not only the structure of matter, but the structure of space and time itself, and therefore it has to include gravity. For one, this is because Democritus' search for the most fundamental constituents carries over to space and time too. Are space and time fundamental, or are they just good approximations that emerge from a more fundamental concept in the limits that we have tested so far? Is spacetime made of something else? Are there 'atoms' of space? And second, testing short distances requires focusing large energies in small volumes, and when energy densities increase, one eventually can no longer neglect the curvature of the background.

In this review we will study this old question of whether there is a fundamental limit to the resolution of structures beyond which we cannot discover anything more. In Section 3, we will summarize different approaches to this question, and how they connect with our search for a theory of quantum gravity. We will see that almost all such approaches lead us to find that the possible resolution of structures is finite or, more graphically, that nature features a minimal length scale – though we will also see that the expression ‘minimal length scale’ can have different interpretations. While we will not go into many of the details of the presently pursued candidate theories for quantum gravity, we will learn what some of them have to say about the question. After the motivations, we will in Section 4 briefly review some approaches that investigate the consequences of a minimal length scale in quantum mechanics and quantum field theory, models that have flourished into one of the best motivated and best developed areas of the phenomenology of quantum gravity.

In the following, we use the unit convention $\hbar = c = 1$, so that the Planck length $l_{\rm Pl}$ is the inverse of the Planck mass $m_{\rm Pl}$, and Newton's constant $G = l_{\rm Pl}^2 = 1/m_{\rm Pl}^2$. The signature of the metric is $(1,-1,-1,-1)$. Small Greek indices run from 0 to 3, large Latin indices from 0 to 4, and small Latin indices from 1 to 3, except for Section 3.2, where small Greek indices run from 0 to $D-1$, and small Latin indices run from 2 to $D-1$. An arrow denotes the spatial component of a vector, for example $\vec{a} = (a_1, a_2, a_3)$. Bold-faced quantities are tensors in an index-free notation that will be used in the text for better readability, for example $\mathbf{x}$. Acronyms and abbreviations can be found in the index.

We begin with a brief historical background.

2 A Minimal History

Special relativity and quantum mechanics are characterized by two universal constants, the speed of light, $c$, and Planck's constant, $\hbar$. Yet, from these constants alone one cannot construct either a constant of dimension length or mass. If one had either, though, they could be converted into each other by use of $c$ and $\hbar$. But in 1899, Max Planck pointed out that adding Newton's constant $G$ to the universal constants $c$ and $\hbar$ allows one to construct units of mass, length and time [265]:

$$t_{\rm Pl} = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4\times10^{-44}\,{\rm s}\,,\qquad l_{\rm Pl} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times10^{-35}\,{\rm m}\,,\qquad m_{\rm Pl} = \sqrt{\frac{\hbar c}{G}} \approx 2.2\times10^{-8}\,{\rm kg} \qquad (1)$$

Today these are known as the Planck time, Planck length and Planck mass, respectively. As we will see later, they mark the scale at which quantum effects of the gravitational interaction are expected to become important. But back in Planck’s days their relevance was their universality, because they are constructed entirely from fundamental constants.
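As a quick numerical cross-check of Eq. (1), the three expressions can be evaluated directly; here is a minimal sketch in Python, using rounded CODATA values for the constants:

```python
# Numerical check of the Planck units in Eq. (1):
# t_Pl = sqrt(hbar*G/c^5), l_Pl = sqrt(hbar*G/c^3), m_Pl = sqrt(hbar*c/G).
from math import sqrt

hbar = 1.055e-34  # J s
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m s^-1

t_pl = sqrt(hbar * G / c**5)  # ~ 5.4e-44 s
l_pl = sqrt(hbar * G / c**3)  # ~ 1.6e-35 m
m_pl = sqrt(hbar * c / G)     # ~ 2.2e-8 kg

print(f"t_Pl = {t_pl:.2e} s, l_Pl = {l_pl:.2e} m, m_Pl = {m_pl:.2e} kg")
```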

The idea of a minimal length was predated by that of the "chronon," a smallest unit of time, proposed by Robert Lévi [200] in 1927 in his "Hypothèse de l'atome de temps" (hypothesis of time atoms), that was further developed by Pokrowski in the years following Lévi's proposal [266]. But that there might be limits to the divisibility of space and time remained a far-fetched speculation on the fringes of a community rapidly pushing forward the development of general relativity and quantum mechanics. It was not until special relativity and quantum mechanics were joined in the framework of quantum field theory that the possible existence of a minimal length scale rose to the awareness of the community.

With the advent of quantum field theory in the 1930s, it was widely believed that a fundamental length was necessary to cure troublesome divergences. The most commonly used regularization was a cut-off or some other dimensionful quantity to render integrals finite. It seemed natural to think of this pragmatic cut-off as having fundamental significance, an interpretation that however inevitably caused problems with Lorentz invariance, since the cut-off would not be independent of the frame of reference. Heisenberg was among the first to consider a fundamentally-discrete spacetime that would yield a cut-off, laid out in his letters to Bohr and Pauli. The idea of a fundamentally finite length or a maximum frequency was in these years studied by many, including Flint [110], March [219], Möglich [234] and Goudsmit [267], just to mention a few. They all had in common that they considered the fundamental length to be in the realm of subatomic physics on the order of the femtometer ($10^{-15}\,{\rm m}$).

The one exception was a young Russian, Matvei Bronstein. Today recognized as the first to comprehend the problem of quantizing gravity [138], Bronstein was decades ahead of his time. Already in 1936, he argued that gravity is in one important way fundamentally different from electrodynamics: Gravity does not allow an arbitrarily high concentration of charge in a small region of spacetime, since the gravitational 'charge' is energy and, if concentrated too much, will collapse to a black hole. Using the weak field approximation of gravity, he concluded that this leads to an inevitable limit to the precision with which one can measure the strength of the gravitational field (in terms of the Christoffel symbols).

In his 1936 article “Quantentheorie schwacher Gravitationsfelder” (Quantum theory of weak gravitational fields), Bronstein wrote [138, 70]:

"[T]he gravitational radius of the test-body ($G\rho V/c^2$) used for the measurements should by no means be larger than its linear dimensions ($V^{1/3}$); from this one obtains an upper bound for its density ($\rho \lesssim c^2/G V^{2/3}$). Thus, the possibilities for measurements in this region are even more restricted than one concludes from the quantum-mechanical commutation relations. Without a profound change of the classical notions it therefore seems hardly possible to extend the quantum theory of gravitation to this region."\epubtkFootnote[orgQuoteDe]"[D]er Gravitationsradius des zur Messung dienenden Probekörpers ($G\rho V/c^2$) soll keineswegs größer als seine linearen Abmessungen ($V^{1/3}$) sein; daraus entsteht eine obere Grenze für seine Dichte ($\rho \lesssim c^2/GV^{2/3}$). Die Messungsmöglichkeiten sind also in dem Gebiet noch mehr beschränkt als es sich aus den quantenmechanischen Vertauschungsrelationen schliessen läßt. Ohne eine tiefgreifende Änderung der klassischen Begriffe, scheint es daher kaum möglich, die Quantentheorie der Gravitation auch auf dieses Gebiet auszudehnen." ([70], p. 150)\epubtkFootnoteTranslations from German to English: SH.

Few people took note of Bronstein’s argument and, unfortunately, the history of this promising young physicist ended in a Leningrad prison in February 1938, where Matvei Bronstein was executed at the age of 31.

Heisenberg meanwhile continued in his attempt to make sense of the notion of a fundamental minimal length of nuclear dimensions. In 1938, Heisenberg wrote "Über die in der Theorie der Elementarteilchen auftretende universelle Länge" (On the universal length appearing in the theory of elementary particles) [148], in which he argued that this fundamental length, which he denoted $r_0$, should appear somewhere not too far beyond the classical electron radius (of the order 100 fm).

This idea seems curious today, and has to be put into perspective. Heisenberg was very worried about the non-renormalizability of Fermi's theory of $\beta$-decay. He had previously shown [147] that applying Fermi's theory to the high center-of-mass energies of some hundred GeV led to an 'explosion,' by which he referred to events of very high multiplicity. Heisenberg argued this would explain the observed cosmic ray showers, whose large number of secondary particles we know today are created by cascades (a possibility that was discussed already at the time of Heisenberg's writing, but not agreed upon). We also know today that what Heisenberg actually discovered is that Fermi's theory breaks down at such high energies, and the four-fermion coupling has to be replaced by the exchange of a gauge boson in the electroweak interaction. But in the 1930s neither the strong nor the electroweak force was known. Heisenberg then connected the problem of regularization with the breakdown of the perturbation expansion of Fermi's theory, and argued that the presence of the alleged explosions would prohibit the resolution of finer structures:

"If the explosions actually exist and represent the processes characteristic for the constant $r_0$, then they maybe convey a first, still unclear, understanding of the obscure properties connected with the constant $r_0$. These should certainly express themselves in difficulties of measurements with a precision better than $r_0$…The explosions would have the effect…that measurements of positions are not possible to a precision better than $r_0$."\epubtkFootnote[orgQuoteDe]"Wenn die Explosionen tatsächlich existieren und die für die Konstante $r_0$ eigentlich charakteristischen Prozesse darstellen, so vermitteln sie vielleicht ein erstes, noch unklares Verständnis der unanschaulichen Züge, die mit der Konstanten $r_0$ verbunden sind. Diese sollten sich ja wohl zunächst darin äußern, daß die Messung einer den Wert $r_0$ unterschreitenden Genauigkeit zu Schwierigkeiten führt…[D]ie Explosionen [würden] dafür sorgen…, daß Ortsmessungen mit einer $r_0$ unterschreitenden Genauigkeit unmöglich sind." ([148], p. 31)

In hindsight we know that Heisenberg was, correctly, arguing that the theory of elementary particles known in the 1930s was incomplete. The strong interaction was missing and Fermi's theory was indeed non-renormalizable, but not fundamental. Today we also know that the standard model of particle physics is renormalizable, and we know techniques to deal with divergent integrals that do not necessitate cut-offs, such as dimensional regularization. But lacking that knowledge, it is understandable that Heisenberg argued that taking into account gravity was irrelevant for the existence of a fundamental length:

"The fact that [the Planck length] is significantly smaller than $r_0$ makes it valid to leave aside the obscure properties of the description of nature due to gravity, since they – at least in atomic physics – are totally negligible relative to the much coarser obscure properties that go back to the universal constant $r_0$. For this reason, it seems hardly possible to integrate electric and gravitational phenomena into the rest of physics until the problems connected to the length $r_0$ are solved."\epubtkFootnote[orgQuoteDe]"Der Umstand, daß [die Plancklänge] wesentlich kleiner ist als $r_0$, gibt uns das Recht, von den durch die Gravitation bedingten unanschaulichen Zügen der Naturbeschreibung zunächst abzusehen, da sie – wenigstens in der Atomphysik – völlig untergehen in den viel gröberen unanschaulichen Zügen, die von der universellen Konstanten $r_0$ herrühren. Es dürfte aus diesen Gründen wohl kaum möglich sein, die elektrischen und die Gravitationserscheinungen in die übrige Physik einzuordnen, bevor die mit der Länge $r_0$ zusammenhängenden Probleme gelöst sind." ([148], p. 26)

Heisenberg apparently put great hope in the notion of a fundamental length to move forward the understanding of elementary matter. In 1939 he expressed his belief that a quantum theory with a minimal length scale would be able to account for the discrete mass spectrum of the (then known) elementary particles [149]. However, the theory of quantum electrodynamics was developed to maturity, the ‘explosions’ were satisfactorily explained and, without being hindered by the appearance of any fundamentally finite resolution, experiments probed shorter and shorter scales. The divergences in quantum field theory became better understood and discrete approaches to space and time remained unappealing due to their problems with Lorentz invariance.

In a 1947 letter to Heisenberg, Pauli commented on the idea of a smallest length that Heisenberg still held dearly and explained his reservations, concluding “Extremely put, I would not be surprised if your ‘universal’ length turned out to be a mere figment of imagination.” [254]. (For more about Heisenberg’s historical involvement with the universal length, the interested reader is referred to Kragh’s very recommendable article [199].)

In 1930, in a letter to his student Rudolf Peierls [150], Heisenberg mentioned that he was trying to make sense of a minimal length by letting the position operators be non-commuting, $[\hat{x}_\mu, \hat{x}_\nu] \neq 0$. He expressed his hope that Peierls ask Pauli how to proceed with this idea:

“So far, I have not been able to make mathematical sense of such commutation relations…Do you or Pauli have anything to say about the mathematical meaning of such commutation relations?”\epubtkFootnote[orgQuoteDe]“Mir ist es bisher nicht gelungen, solchen Vertauschungs-Relationen einen vernünftigen mathematischen Sinn zuzuordnen…Fällt Ihnen oder Pauli nicht vielleicht etwas über den mathematischen Sinn solcher Vertauschungs-Relationen ein?” ([150], p. 16)

But it took 17 years until Snyder, in 1947, made mathematical sense of Heisenberg's idea.\epubtkFootnoteThe story has been told [313] that Peierls asked Pauli, Pauli passed the question on to his colleague Oppenheimer, who asked his student Hartland Snyder. However, in a 1946 letter to Pauli [289], Snyder encloses his paper without any mention of it being an answer to a question posed to him by others. Snyder, who felt that the use of a cut-off in momentum space was a "distasteful arbitrary procedure" [288], worked out a modification of the canonical commutation relations of position and momentum operators. In that way, spacetime became Lorentz-covariantly non-commutative, but the modification of commutation relations increased the Heisenberg uncertainty, such that a smallest possible resolution of structures was introduced (a consequence Snyder did not explicitly mention in his paper). Though Snyder's approach was criticized for the difficulties of inclusion of translations [316], it has received a lot of attention as the first to show that a minimal length scale need not be in conflict with Lorentz invariance.

In 1960, Peres and Rosen [262] studied uncertainties in the measurement of the average values of Christoffel symbols due to the impossibility of concentrating a mass to a region smaller than its Schwarzschild radius, and came to the same conclusion as Bronstein already had, in 1936,

“The existence of these quantum uncertainties in the gravitational field is a strong argument for the necessity of quantizing it. It is very likely that a quantum theory of gravitation would then generalize these uncertainty relations to all other Christoffel symbols.” ([262], p. 336)

While they considered the limitations for measuring the gravitational field itself, they did not study the limitations these uncertainties induce on the ability to measure distances in general.

It was not until 1964 that Mead pointed out the peculiar role that gravity plays in our attempts to test physics at short distances [222, 223]. He showed, in a series of thought experiments that we will discuss in Section 3.1, that this influence does have the effect of amplifying Heisenberg's measurement uncertainty, making it impossible to measure distances to a precision better than Planck's length. And, since gravity couples universally, this is, though usually negligible, an inescapable influence on all our experiments.

Mead’s work did not originally attain a lot of attention. Decades later, he submitted his recollection [224] that “Planck’s proposal that the Planck mass, length, and time should form a fundamental system of units…was still considered heretical well into the 1960s,” and that his argument for the fundamental relevance of the Planck length met strong resistance:

“At the time, I read many referee reports on my papers and discussed the matter with every theoretical physicist who was willing to listen; nobody that I contacted recognized the connection with the Planck proposal, and few took seriously the idea of [the Planck length] as a possible fundamental length. The view was nearly unanimous, not just that I had failed to prove my result, but that the Planck length could never play a fundamental role in physics. A minority held that there could be no fundamental length at all, but most were then convinced that a [different] fundamental length…, of the order of the proton Compton wavelength, was the wave of the future. Moreover, the people I contacted seemed to treat this much longer fundamental length as established fact, not speculation, despite the lack of actual evidence for it.” ([224], p. 15)

But then, in the mid-1970s, Hawking's calculation of a black hole's thermodynamical properties [145] introduced the 'transplanckian problem.' Due to the, in principle infinite, blue shift of photons approaching a black-hole horizon, modes with energies exceeding the Planck scale had to be taken into account to calculate the emission rate. A great many physicists have significantly advanced our understanding of black-hole physics and the Planck scale, too many to be named here. Special mention deserves, however, John Wheeler, whose contributions, though not directly on the topic of a minimal length, connected black-hole physics with spacetime foam and the Planckian limit, and thereby inspired much of what followed.

Unruh suggested in 1995 [308] that one use a modified dispersion relation to deal with the difficulty of transplanckian modes, so that a smallest possible wavelength takes care of the contributions beyond the Planck scale. A similar problem exists in inflationary cosmology [220]: tracing a mode of small frequency back in time increases the frequency until it eventually surpasses the Planck scale, at which point we no longer know how to make sense of general relativity. Thus, the issue of transplanckian modes in cosmology brought up another reason to reconsider the possibility of a minimal length or a maximal frequency, but this time the maximal frequency was at the Planck scale rather than at the nuclear scale. It was therefore proposed [180, 144] that this problem too might be cured by implementing a minimum length uncertainty principle into inflationary cosmology.

Almost at the same time, Majid and Ruegg [213] proposed a modification for the commutators of spacetime coordinates, similar to that of Snyder, following from a generalization of the Poincaré algebra to a Hopf algebra, which became known as $\kappa$-Poincaré. Kempf et al. [175, 174, 184, 178] developed the mathematical basis of quantum mechanics that took into account a minimal length scale and ventured towards quantum field theory. There are by now many variants of models employing modifications of the canonical commutation relations in order to accommodate a minimal length scale, not all of which make use of the complete $\kappa$-Poincaré framework, as will be discussed later in Sections 4.2 and 4.5. Some of these approaches were shown to give rise to a modification of the dispersion relation, though the physical interpretation and relevance, as well as the phenomenological consequences of this relation are still under debate.

In parallel to this, developments in string theory revealed the impossibility of resolving arbitrarily small structures with an object of finite extension. It had already been shown in the late 1980s [140, 10, 9, 11, 310] that string scattering in the super-Planckian regime would result in a generalized uncertainty principle, preventing a localization to better than the string scale (more on this in Section 3.2). In 1996, John Schwarz gave a talk at SLAC about the generalized uncertainty principles resulting from string theory and thereby inspired the 1999 work by Adler and Santiago [3] who almost exactly reproduced Mead’s earlier argument, apparently without being aware of Mead’s work. This picture was later refined when it became understood that string theory not only contains strings but also higher dimensional objects, known as branes, which will be discussed in Section 3.2.

In the following years, a generalized uncertainty principle and quantum mechanics with the Planck length as a minimal length received an increasing amount of attention as potential cures for the transplanckian problem, a natural UV-regulator, and as possible manifestations of a fundamental property of quantum spacetime. In the late 1990s, it was also noted that it is compatible with string theory to have large or warped extra dimensions that can effectively lower the Planck scale into the TeV range. With this, the fundamental length scale also moved into the reach of collider physics, resulting in a flurry of activity.\epubtkFootnoteThough the hope of a lowered Planck scale pushing quantum gravitational effects into the reach of the Large Hadron Collider seems, at the time of writing, to not have been fulfilled.

Today, how to resolve the apparent disagreements between the quantum field theories of the standard model and general relativity is one of the big open questions in theoretical physics. It is not that we cannot quantize gravity, but that the attempt to do so leads to a perturbatively non-renormalizable and thus fundamentally nonsensical theory. The basic reason is that the coupling constant of gravity, Newton’s constant, is dimensionful. This leads to the necessity to introduce an infinite number of counter-terms, eventually rendering the theory incapable of prediction.

But the same is true for Fermi's theory that Heisenberg was so worried about, which led him to argue for a finite resolution where the theory breaks down, and mistakenly so, since he was merely pushing an effective theory beyond its limits. We therefore have to ask whether we might be making the same mistake as Heisenberg: falsely interpreting the failure of general relativity to extend beyond the Planck scale as the occurrence of a fundamentally finite resolution of structures, rather than as the limit beyond which we have to look for a new theory that will allow us to resolve smaller distances still.

If it were only the extension of classical gravity, laid out in many thought experiments that will be discussed in Section 3.1, that had us believing the Planck length is of fundamental importance, then the above historical lesson should caution us we might be on the wrong track. Yet, the situation today is different from the one that Heisenberg faced. Rather than pushing a quantum theory beyond its limits, we are pushing a classical theory and conclude that its short-distance behavior is troublesome, which we hope to resolve by quantizing the theory. And, as we will see, several attempts at a UV-completion of gravity, discussed in Sections 3.2 – 3.7, suggest that the role of the Planck length as a minimal length carries over into the quantum regime as a dimensionful regulator, though in very different ways, feeding our hopes that we are working on unveiling the last and final Russian doll.

For a more exhaustive coverage of the history of the minimal length, the interested reader is referred to [141].

3 Motivations

3.1 Thought experiments

Thought experiments have played an important role in the history of physics as the poor theoretician's way to test the limits of a theory. This poverty might be an actual one of lacking experimental equipment, or it might be one of practical impossibility. Luckily, technological advances sometimes turn thought experiments into real experiments, as was the case with Einstein, Podolsky and Rosen's 1935 paradox. But even if a thought experiment is not realizable in the near future, it serves two important purposes. First, by allowing the thinker to test ranges of parameter space that are inaccessible to experiment, it may reveal inconsistencies or paradoxes and thereby open doors to an improvement in the fundamentals of the theory. The complete evaporation of a black hole and the question of information loss in that process is a good example for this. Second, thought experiments tie the theory to reality by the necessity to investigate in detail what constitutes a measurable entity. The thought experiments discussed in the following are examples of this.

3.1.1 The Heisenberg microscope with Newtonian gravity

Let us first recall Heisenberg's microscope, which led to the uncertainty principle [146]. Consider a photon with frequency $\omega$ moving in direction $x$, which scatters on a particle whose position on the $x$-axis we want to measure. The scattered photons that reach the lens of the microscope have to lie within an angle $\epsilon$ to produce an image from which we want to infer the position of the particle (see Figure 1). According to classical optics, the wavelength $\lambda$ of the photon sets a limit to the possible resolution $\Delta x$

$$\Delta x \gtrsim \frac{\lambda}{\sin\epsilon} \qquad (2)$$

But the photon used to measure the position of the particle has a recoil when it scatters and transfers a momentum to the particle. Since one does not know the direction of the photon to better than $\epsilon$, this results in an uncertainty for the momentum of the particle in direction $x$

$$\Delta p_x \gtrsim \omega\,\sin\epsilon \qquad (3)$$

Taken together one obtains Heisenberg’s uncertainty (up to a factor of order one)

$$\Delta x\;\Delta p_x \gtrsim 1 \qquad (4)$$

Figure 1: Heisenberg's microscope. A photon moving along the $x$-axis scatters off a probe within an interaction region of radius $R$ and is detected by a microscope (indicated by a lens and screen) with opening angle $\epsilon$.

We know today that Heisenberg's uncertainty is not just a peculiarity of a measurement method but much more than that – it is a fundamental property of the quantum nature of matter. It does not, strictly speaking, even make sense to consider the position and momentum of the particle at the same time. Consequently, instead of speaking about the photon scattering off the particle as if that would happen in one particular point, we should speak of the photon having a strong interaction with the particle in some region of size $R$.

Now we will include gravity in the picture, following the treatment of Mead [222]. For any interaction to take place and subsequent measurement to be possible, the time elapsed between the interaction and measurement has to be at least on the order of the time, $\tau$, the photon needs to travel the distance $R$, so that $\tau \gtrsim R$. The photon carries an energy $\omega$ that, though in general tiny, exerts a gravitational pull on the particle whose position we wish to measure. The gravitational acceleration acting on the particle is at least on the order of

$$a \approx \frac{G\,\omega}{R^2} \qquad (5)$$

and, assuming that the particle is non-relativistic and much slower than the photon, the acceleration lasts about the duration the photon is in the region of strong interaction. From this, the particle acquires a velocity of $v \approx a\,R$, or

$$v \approx \frac{G\,\omega}{R} \qquad (6)$$

Thus, in the time $\tau$, the acquired velocity allows the particle to travel a distance of

$$L \approx G\,\omega \qquad (7)$$

However, since the direction of the photon was unknown to within the angle $\epsilon$, the direction of the acceleration and the motion of the particle is also unknown. Projection on the $x$-axis then yields the additional uncertainty of

$$\Delta x \gtrsim G\,\omega\,\sin\epsilon \qquad (8)$$

Combining (8) with (2), one obtains

$$\Delta x \gtrsim \sqrt{G} = l_{\rm Pl} \qquad (9)$$
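To make the last step explicit, here is the worked combination (a sketch, using the reduced wavelength $\lambda \approx 1/\omega$ and dropping factors of order one): since both (2) and (8) have to hold, the resolution is bounded by whichever is larger, and the bound is weakest where the two cross,

$$\Delta x \;\gtrsim\; \max\left(\frac{1}{\omega\sin\epsilon},\; G\,\omega\sin\epsilon\right) \;\geq\; \sqrt{\frac{1}{\omega\sin\epsilon}\cdot G\,\omega\sin\epsilon} \;=\; \sqrt{G} = l_{\rm Pl}\,,$$

independently of $\omega$ and $\epsilon$, because $\max(a,b) \geq \sqrt{ab}$.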

One can refine this argument by taking into account that, strictly speaking, during the measurement the momentum of the photon, $\omega$, increases by $G M \omega/R$, where $M$ is the mass of the particle. This increases the uncertainty in the particle's momentum (3) to

$$\Delta p_x \gtrsim \omega\left(1 + \frac{G M}{R}\right)\sin\epsilon \qquad (10)$$

and, for the time $\tau \approx R$ the photon is in the interaction region, translates into a position uncertainty $\Delta x \approx \tau\,\Delta p_x/M$

$$\Delta x \gtrsim \left(\frac{R}{M} + G\right)\omega\,\sin\epsilon \qquad (11)$$

which is larger than the previously found uncertainty (8) and thus (9) still follows.

Adler and Santiago [3] offer pretty much the same argument, but add that the particle's momentum uncertainty $\Delta p$ should be on the order of the photon's momentum $\omega$. Then one finds

$$\Delta x \gtrsim G\,\Delta p \qquad (12)$$

Assuming that the normal uncertainty and the gravitational uncertainties add linearly, one arrives at

$$\Delta x \gtrsim \frac{1}{\Delta p} + G\,\Delta p \qquad (13)$$

Any uncertainty principle with a modification of this or similar form has become known in the literature as ‘generalized uncertainty principle’ (GUP). Adler and Santiago’s work was inspired by the appearance of such an uncertainty principle in string theory, which we will investigate in Section 3.2. Adler and Santiago make the interesting observation that the GUP (13) is invariant under the replacement

$$l_{\rm Pl}\,\Delta p \;\longleftrightarrow\; \frac{1}{l_{\rm Pl}\,\Delta p} \qquad (14)$$

which relates long to short distances and high to low energies.
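The minimum of the bound (13) can also be checked numerically; the following is a minimal sketch in Planck units (the variable names are illustrative, not from the literature):

```python
# Minimal sketch: minimize the GUP bound (13),
# Delta x >= 1/Delta p + G * Delta p, in units hbar = c = 1 with G = l_Pl^2.
import numpy as np

lp = 1.0                        # Planck length (Planck units)
dp = np.logspace(-3, 3, 10001)  # Delta p in units of the Planck mass
dx = 1.0 / dp + lp**2 * dp      # generalized uncertainty bound

i = int(np.argmin(dx))
print(f"minimum at Delta p ~ {dp[i]:.2f} m_Pl with Delta x ~ {dx[i]:.2f} l_Pl")
# The minimum sits at Delta p ~ m_Pl with Delta x ~ 2 l_Pl, and the curve is
# symmetric under lp*dp -> 1/(lp*dp), which is the replacement (14).
```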

These limitations, refinements of which we will discuss in the following Sections 3.1.2 – 3.1.7, apply to the possible spatial resolution in a microscope-like measurement. At the high energies necessary to reach the Planckian limit, the scattering is unlikely to be elastic, but the same considerations apply to inelastic scattering events. Heisenberg’s microscope revealed a fundamental limit that is a consequence of the non-commutativity of position and momentum operators in quantum mechanics. The question that the GUP then raises is what modification of quantum mechanics would give rise to the generalized uncertainty, a question we will return to in Section 4.2.

Another related argument has been put forward by Scardigli [275], who employs the idea that once one arrives at energies of about the Planck mass and concentrates them to within a volume of the radius of the Planck length, one creates tiny black holes, which subsequently evaporate. This effect scales in the same way as the one discussed here, and one arrives again at (13).

3.1.2 The general relativistic Heisenberg microscope

The above result makes use of Newtonian gravity, and has to be refined when one takes into account general relativity. Before we look into the details, let us start with a heuristic but instructive argument. One of the most general features of general relativity is the formation of black holes under certain circumstances, roughly speaking when the energy density in some region of spacetime becomes too high. Once matter becomes very dense, its gravitational pull leads to a total collapse that ends in the formation of a horizon.\epubtkFootnoteIn the classical theory, inside the horizon lies a singularity. This singularity is expected to be avoided in quantum gravity, but how that works or doesn't work is not relevant in the following. It is usually assumed that the Hoop conjecture holds [306]: If an amount of energy $\omega$ is compacted at any time into a region whose circumference in every direction is $R \leq 4\pi\,G\,\omega$, then the region will eventually develop into a black hole. The Hoop conjecture is unproven, but we know from both analytical and numerical studies that it holds to very good precision [107, 168].

Consider now that we have a particle of energy $\omega$. Its extension $R$ has to be larger than the Compton wavelength associated to the energy, so $R \gtrsim 1/\omega$. Thus, the larger the energy, the better the particle can be focused. On the other hand, if the extension drops below $G\omega$, then a black hole is formed with radius $2G\omega$. The important point to notice here is that the extension of the black hole grows linearly with the energy, and therefore one can achieve a minimal possible extension, which is on the order of $\sqrt{G}$.
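Expressed in formulas, the heuristic argument is the crossing of two bounds (a worked step added here; factors of order one are dropped):

$$R \gtrsim \frac{1}{\omega}\;\;\mbox{(Compton)}\,,\qquad R \gtrsim G\,\omega\;\;\mbox{(horizon)}\,,\qquad \frac{1}{\omega} = G\,\omega \;\Rightarrow\; \omega \approx m_{\rm Pl}\,,\quad R_{\min} \approx \sqrt{G} = l_{\rm Pl}\,.$$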

For the more detailed argument, we follow Mead [222] with the general relativistic version of the Heisenberg microscope that was discussed in Section 3.1.1. Again, we have a particle whose position we want to measure by help of a test particle. The test particle has a momentum vector $(\omega, \vec{k})$, and for completeness we consider a particle with rest mass $\mu$, though we will see later that the tightest constraints come from the limit $\mu \to 0$.

The velocity of the test particle is

$$v = \frac{k}{\omega} \qquad (15)$$

where $\omega = \sqrt{k^2 + \mu^2}$, and $k = |\vec{k}|$. As before, the test particle moves in the $x$ direction. The task is now to compute the gravitational field of the test particle and the motion it causes on the measured particle.

To obtain the metric that the test particle creates, we first change into the rest frame of the particle by boosting into the $x$-direction. Denoting the new coordinates with primes, the measured particle moves towards the test particle in direction $-x'$, and the metric is a Schwarzschild metric. We will only need it on the $x$-axis where we have $y' = z' = 0$, and thus

$$g'_{00} = 1 + 2\,\phi'\,,\qquad g'_{11} = -\frac{1}{1 + 2\,\phi'} \qquad (16)$$

where

$$\phi'(x') = -\frac{G\,\mu}{|x'|} \qquad (17)$$

and the remaining components of the metric vanish. Using the transformation law for tensors

$$g_{\mu\nu} = \frac{\partial x'^{\kappa}}{\partial x^{\mu}}\,\frac{\partial x'^{\lambda}}{\partial x^{\nu}}\;g'_{\kappa\lambda} \qquad (18)$$

with the notation $x^0 = t$, $x^1 = x$, and the same for the primed coordinates, the Lorentz boost from the primed to unprimed coordinates yields in the rest frame of the measured particle

$$g_{00} = \gamma^2\left(g'_{00} + v^2\,g'_{11}\right) = 1 + 2\,\phi\,,\qquad g_{01} = -\gamma^2\,v\left(g'_{00} + g'_{11}\right) = -\frac{4\,v\,\phi}{1+v^2} \qquad (19)$$
$$g_{11} = \gamma^2\left(v^2\,g'_{00} + g'_{11}\right) = -1 + 2\,\phi \qquad (20)$$
to first order in $\phi'$,

where

$$\phi = \gamma^2\left(1 + v^2\right)\phi' = -\,\gamma^2\left(1 + v^2\right)\frac{G\,\mu}{R} \qquad (21)$$

Here, $R$ is the mean distance between the test particle and the measured particle. To avoid a horizon in the rest frame, we must have $2\,|\phi'| < 1$, and thus from Eq. (21)

$$|\phi| < \frac{\gamma^2\left(1 + v^2\right)}{2} \qquad (22)$$

Because of Eq. (2), $\Delta x \gtrsim 1/\omega$, but also $\Delta x \gtrsim R$, which is the area in which the particle may scatter, thus

$$\Delta x^2 \gtrsim \frac{R}{\omega} \gtrsim \frac{2\,G\,\mu}{\omega} = \frac{2\,G}{\gamma} \qquad (23)$$

We see from this that, as long as $\gamma$ is of order one, the previously found lower bound on the spatial resolution can already be read off here, and we turn our attention towards the case where $\gamma \gg 1$. From (21) we see that this means we work in the limit where $|\phi| \gg |\phi'|$.

To proceed, we need to estimate now how much the measured particle moves due to the test particle's vicinity. For this, we note that the world line of the measured particle must be timelike. We denote the velocity in the $x$-direction with $u$, then we need

$$ds^2 = \left(g_{00} + 2\,g_{01}\,u + g_{11}\,u^2\right)dt^2 \geq 0 \qquad (24)$$

Now we insert Eq. (20) and follow Mead [222] by introducing the abbreviation

$$\bar\phi \equiv -2\,\phi = 2\,\gamma^2\left(1 + v^2\right)\frac{G\,\mu}{R} \qquad (25)$$

Because of Eq. (22), $\bar\phi < \gamma^2\left(1 + v^2\right)$. We simplify the requirement of Eq. (24) by leaving the term quadratic in $u$ alone on the left side of the inequality, subtracting the remaining terms and dividing by $g_{11}$. Taking into account that $g_{11} < 0$ and $0 \leq u \leq 1$, one finds after some algebra

$$u \geq \frac{1}{1 + \bar\phi}\left(\frac{2\,v\,\bar\phi}{1+v^2} - \sqrt{\frac{4\,v^2\,\bar\phi^2}{\left(1+v^2\right)^2} + 1 - \bar\phi^2}\,\right) \qquad (26)$$

and

$$u \geq \frac{\bar\phi - 1}{\bar\phi + 1} \qquad (27)$$

One arrives at this estimate with reduced effort if one makes it clear to oneself what we want to estimate. We want to know, as previously, how much the particle, whose position we are trying to measure, will move due to the gravitational attraction of the particle we are using for the measurement. The faster the particles pass by each other, the shorter the interaction time and, all other things being equal, the less the particle we want to measure will move. Thus, if we consider a photon with $v = 1$, we are dealing with the case with the least influence, and if we find a minimal length in this case, it should be there for all cases. Setting $v = 1$, one obtains the inequality Eq. (27) with greatly reduced work.

Now we can continue as before in the non-relativistic case. The time required for the test particle to move a distance $R$ away from the measured particle is at least $\tau \approx R/(v - u)$, and during this time the measured particle moves a distance

$$L = u\,\tau \approx \frac{u}{v - u}\,R \qquad (28)$$

Since we work in the limit $\bar\phi \gg 1$, this means

$$L \gtrsim \frac{\bar\phi}{2}\,R = \gamma^2\left(1 + v^2\right)G\,\mu \gtrsim G\,\omega \qquad (29)$$

and projection on the $x$-axis yields as before (compare to Eq. (8)) for the uncertainty added to the measured particle because the photon's direction was known only to precision $\epsilon$

$$\Delta x \gtrsim G\,\omega\,\sin\epsilon \qquad (30)$$

This combines with (2) to again give

$$\Delta x \gtrsim \sqrt{G} = l_{\rm Pl} \qquad (31)$$

Adler and Santiago [3] found the same result by using the linear approximation of Einstein's field equation for a cylindrical source with length $L$ and radius $\rho$ of comparable size, filled by a radiation field with total energy $\omega$, and moving in the $x$ direction. With cylindrical coordinates $x, r, \phi$, the line element takes the form [3]

$$ds^2 = dt^2 - dx^2 - dr^2 - r^2\,d\phi^2 + f(u, r)\left(dt - dx\right)^2\,,\qquad u = t - x\,, \qquad (32)$$

where the function $f$ is given by

$$f(u, r) = \frac{4\,G\,\omega}{L}\,\theta(u)\,\theta(L - u)\;g(r) \qquad (33)$$
$$g(r) = \frac{r^2}{\rho^2}\;\;\mbox{for}\;\;r < \rho\,,\qquad g(r) = 1 + 2\ln\frac{r}{\rho}\;\;\mbox{for}\;\;r \geq \rho \qquad (34)$$

In this background, one can then compute the motion of the measured particle by using the Newtonian limit of the geodesic equation, provided the particle remains non-relativistic. In the longitudinal direction, along the motion of the test particle one finds

$$\frac{d^2 x}{dt^2} \approx \frac{1}{2}\,\frac{\partial f}{\partial u} \qquad (35)$$

The derivative of $f$ gives two delta-functions at the front and back of the cylinder with equal momentum transfer but of opposite direction. The change in velocity of the measured particle is

$$\Delta v \approx \frac{2\,G\,\omega}{L}\,g(r) \qquad (36)$$

Near the cylinder $g(r)$ is of order one, and in the time of passage $\tau \approx L$, the particle thus moves approximately

$$d \approx \tau\,\Delta v \approx 2\,G\,\omega \qquad (37)$$

which is, up to a factor of 2, the same result as Mead’s (29). We note that Adler and Santiago’s argument does not make use of the requirement that no black hole should be formed, but that the appropriateness of the non-relativistic and weak-field limit is questionable.

3.1.3 Limit to distance measurements

Wigner and Salecker [274] proposed the following thought experiment to show that the precision of length measurements is limited. Consider that we try to measure a length by help of a clock that detects photons, which are reflected by a mirror at distance $D$ and return to the clock. Knowing the speed of light is universal, from the travel-time of the photon we can then extract the distance it has traveled. How precisely can we measure the distance in this way?

Consider that at emission of the photon, we know the position of the (non-relativistic) clock to precision $\Delta x$. This means, according to the Heisenberg uncertainty principle, we cannot know its velocity to better than

$$\Delta v \geq \frac{1}{2\,M\,\Delta x} \qquad (38)$$

where $M$ is the mass of the clock. During the time $T = 2D$ that the photon needed to travel towards the mirror and back, the clock moves by $T\,\Delta v$, and so acquires an uncertainty in position of

$$\Delta x(T) = \Delta x + \frac{T}{2\,M\,\Delta x} \qquad (39)$$

which bounds the accuracy by which we can determine the distance $D$. The minimal value that this uncertainty can take is found by varying with respect to $\Delta x$ and reads

$$\Delta x_{\min} = \sqrt{\frac{2\,T}{M}} \qquad (40)$$

Taking into account that our measurement will not be causally connected to the rest of the world if it creates a black hole, we require $\Delta x \gtrsim 2\,G\,M$, and thus, with $T \gtrsim \Delta x$,

$$\Delta x_{\min} \gtrsim \sqrt{G} = l_{\rm Pl} \qquad (41)$$
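For illustration, the purely quantum-mechanical part of this bound, Eq. (40), can be evaluated for a hypothetical laboratory clock; the numbers below (clock mass of a gram, travel time of a second) are arbitrary choices for the sketch, not from the original argument:

```python
# Minimal sketch: Salecker-Wigner bound (40), dx_min = sqrt(2*hbar*T/M),
# evaluated in SI units for an illustrative (hypothetical) clock.
from math import sqrt

hbar = 1.055e-34  # J s
M = 1e-3          # kg, assumed clock mass
T = 1.0           # s, assumed photon travel time

dx_min = sqrt(2 * hbar * T / M)
print(f"dx_min = {dx_min:.1e} m")  # ~ 4.6e-16 m, far above l_Pl ~ 1.6e-35 m
# Increasing M improves the bound, but the black-hole requirement
# M < dx/(2G) stops the improvement at roughly the Planck length.
```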

3.1.4 Limit to clock synchronization

From Mead’s [222] investigation of the limit for the precision of distance measurements due to the gravitational force also follows a limit on the precision by which clocks can be synchronized.

We will consider the clock synchronization to be performed by the passing of light signals from some standard clock to the clock under question. Since the emission of a photon with energy spread $\Delta\omega$ by the usual Heisenberg uncertainty is uncertain by $\Delta t \gtrsim 1/\Delta\omega$, we have to take into account the same uncertainty for the synchronization.

The new ingredient comes again from the gravitational field of the photon, which interacts with the clock in a region $R$ over a time $\tau \gtrsim R$. If the clock (or the part of the clock that interacts with the photon) remains stationary, the (proper) time $s$ it records stands in relation to $t$ by $s = \sqrt{g_{00}}\,t$ with $g_{00}$ in the rest frame of the clock, given by Eq. (20), thus

$$s = \sqrt{1 + 2\,\phi}\;t \qquad (42)$$

Since the metric depends on the energy $\omega$ of the photon and this energy is not known precisely, the error $\Delta\omega$ on $\omega$ propagates into $s$ by

$$\Delta s = \left|\frac{\partial s}{\partial\omega}\right|\Delta\omega = \frac{t}{\sqrt{1 + 2\,\phi}}\left|\frac{\partial\phi}{\partial\omega}\right|\Delta\omega \qquad (43)$$

thus

$$\Delta s \approx \frac{G\,t}{R}\,\Delta\omega \qquad (44)$$

Since in the interaction region $t \gtrsim \tau \gtrsim R$, we can estimate

$$\Delta s \gtrsim G\,\Delta\omega \qquad (45)$$

Multiplication of (45) with the normal uncertainty $\Delta s \gtrsim 1/\Delta\omega$ yields

$$\Delta s \gtrsim \sqrt{G} = t_{\rm Pl} \qquad (46)$$

So we see that the precision by which clocks can be synchronized is also bound by the Planck scale.

However, strictly speaking the clock does not remain stationary during the interaction, since it moves towards the photon due to the particles' mutual gravitational attraction. If the clock has a velocity $u$, then the proper time it records is more generally given by

$$s = \int dt\,\sqrt{g_{00} + 2\,g_{01}\,u + g_{11}\,u^2} \qquad (47)$$

Using (20) and proceeding as before, one estimates the propagation of the error in the frequency by using $|\partial\phi/\partial\omega| \approx G/R$ and

$$\Delta s \approx \frac{t}{2\,\sqrt{g_{00} + 2\,g_{01}\,u + g_{11}\,u^2}}\,\left|\frac{\partial}{\partial\omega}\left(g_{00} + 2\,g_{01}\,u + g_{11}\,u^2\right)\right|\Delta\omega \qquad (48)$$

and so with $t \gtrsim R$

$$\Delta s \gtrsim G\,\Delta\omega \qquad (49)$$

Therefore, taking into account that the clock does not remain stationary, one still arrives at (46).

3.1.5 Limit to the measurement of the black-hole–horizon area

The above microscope experiment investigates how precisely one can measure the location of a particle, and finds the precision bounded by the inevitable formation of a black hole. However, this position uncertainty is for the location of the measured particle, not for the size of the black hole or its radius. There is a simple argument for why one would expect there to also be a limit to the precision by which the size of a black hole can be measured, first put forward in [91]. When the mass of a black hole approaches the Planck mass, the horizon radius $R_H \approx 2GM$ associated to the mass $M$ becomes comparable to its Compton wavelength $\lambda = 1/M$. Then, quantum fluctuations in the position of the black hole should affect the definition of the horizon.

A somewhat more elaborate argument has been studied by Maggiore [208] by a thought experiment that makes use once again of Heisenberg’s microscope. However, this time one wants to measure not the position of a particle, but the area of a (non-rotating) charged black hole’s horizon. In Boyer–Lindquist coordinates, the horizon is located at the radius

$$R_H = G\,M\left(1 + \sqrt{1 - \frac{Q^2}{G\,M^2}}\,\right) \qquad (50)$$

where $Q$ is the charge and $M$ is the mass of the black hole.

To deduce the area of the black hole, we detect the black hole's Hawking radiation and aim at tracing it back to the emission point with the best possible accuracy. For the case of an extremal black hole ($Q^2 = G M^2$) the temperature is zero and we perturb the black hole by sending in photons from asymptotic infinity and wait for re-emission.

If the microscope detects a photon of some frequency $\omega$, it is subject to the usual uncertainty (2) arising from the photon's finite wavelength that limits our knowledge about the photon's origin. However, in addition, during the process of emission the mass of the black hole changes from $M + \omega$ to $M$, and the horizon radius, which we want to measure, has to change accordingly. If the energy of the photon is known only up to an uncertainty $\Delta p$, then the error propagates into the precision by which we can deduce the radius of the black hole

$$\Delta R_H \approx \left|\frac{\partial R_H}{\partial M}\right|\,\Delta p \qquad (51)$$

With use of (50) and assuming that no naked singularities exist in nature one always finds that

$$\Delta R_H \gtrsim 2\,G\,\Delta p \qquad (52)$$

In an argument similar to that of Adler and Santiago discussed in Section 3.1.2, Maggiore then suggests that the two uncertainties, the usual one inversely proportional to the photon’s energy and the additional one (52), should be linearly added to

$$\Delta R_H \gtrsim \frac{1}{\Delta p} + \alpha\,G\,\Delta p \qquad (53)$$

where the constant $\alpha$ would have to be fixed by using a specific theory. Minimizing the possible position uncertainty, one thus finds again a minimum error of order $\sqrt{\alpha}\,l_{\rm Pl}$.

It is clear that the uncertainty Maggiore considered is of a different kind than the one considered by Mead, though both have the same origin. Maggiore’s uncertainty is due to the impossibility of directly measuring a black hole without it emitting a particle that carries energy and thereby changing the black-hole–horizon area. The smaller the wavelength of the emitted particle, the larger the so-caused distortion. Mead’s uncertainty is due to the formation of black holes if one uses probes of too high an energy, which limits the possible precision. But both uncertainties go back to the relation between a black hole’s area and its mass.

3.1.6 A device-independent limit for non-relativistic particles

Even though the Heisenberg microscope is a very general instrument and the above considerations carry over to many other experiments, one may wonder if there is not some possibility to overcome the limitation of the Planck length by use of massive test particles that have smaller Compton wavelengths, or interferometers that allow one to improve on the limitations on measurement precisions set by the test particles’ wavelengths. To fill in this gap, Calmet, Graesser and Hsu [72, 73] put forward an elegant device-independent argument. They first consider a discrete spacetime with a sub-Planckian spacing and then show that no experiment is able to rule out this possibility. The point of the argument is not the particular spacetime discreteness they consider, but that it cannot be ruled out in principle.

The setting is a position operator $\hat{x}$ with discrete eigenvalues that have a separation of order $l_{\rm Pl}$ or smaller. To exclude the model, one would have to measure position eigenvalues $x$ and $x'$, for example, of some test particle of mass $M$, with $|x - x'| \lesssim l_{\rm Pl}$. Assuming the non-relativistic Schrödinger equation without potential, the time-evolution of the position operator is given by $d\hat{x}(t)/dt = {\rm i}\,[\hat{H}, \hat{x}(t)] = \hat{p}/M$, and thus

$$\hat{x}(t) = \hat{x}(0) + \hat{p}(0)\,\frac{t}{M} \qquad (54)$$

We want to measure the expectation value of position at two subsequent times in order to attempt to measure a spacing smaller than the Planck length. The spectra of any two Hermitian operators $\hat{A}$ and $\hat{B}$ have to fulfill the inequality

$$\Delta A\;\Delta B \geq \frac{1}{2}\left|\left\langle\left[\hat{A}, \hat{B}\right]\right\rangle\right| \qquad (55)$$

where $\Delta$ denotes, as usual, the square root of the variance and $\langle\cdot\rangle$ the expectation value of the operator. From (54) one has

$$\left[\hat{x}(t), \hat{x}(0)\right] = \frac{t}{M}\left[\hat{p}(0), \hat{x}(0)\right] = -\,{\rm i}\,\frac{t}{M} \qquad (56)$$

and thus

$$\Delta x(t)\;\Delta x(0) \geq \frac{t}{2\,M} \qquad (57)$$

Since one needs to measure two positions to determine a distance, the minimal uncertainty for the distance measurement is

$$\Delta x \geq \sqrt{\frac{t}{2\,M}} \qquad (58)$$

This is the same bound as previously discussed in Section 3.1.3 for the measurement of distances by help of a clock, yet we arrived here at this bound without making assumptions about exactly what is measured and how. If we take into account gravity, the argument can be completed similarly to Wigner's, and still without making assumptions about the type of measurement, as follows.

We use an apparatus of size $R$. To get the spacing as precise as possible, we would use a test particle of high mass. But then we will run into the, by now familiar, problem of black-hole formation when the mass becomes too large, so we have to require

$$M \lesssim \frac{R}{2\,G} \qquad (59)$$

Thus, we cannot make the detector arbitrarily small. However, we also cannot make it arbitrarily large, since the components of the detector have to at least be in causal contact with the position we want to measure, and so $t \gtrsim R$. Taken together, one finds

$$\Delta x \geq \sqrt{\frac{t}{2\,M}} \gtrsim \sqrt{\frac{G\,t}{R}} \gtrsim \sqrt{G} = l_{\rm Pl} \qquad (60)$$

and thus once again the possible precision of a position measurement is limited by the Planck length.

A similar argument was made by Ng and van Dam [238], who also pointed out that with this thought experiment one can obtain a scaling for the uncertainty with the third root of the size of the detector. If one adds the position uncertainty (58) from the non-vanishing commutator to the gravitational one, one finds

$$\Delta x \gtrsim \sqrt{\frac{t}{2\,M}} + G\,M \qquad (61)$$

Optimizing this expression with respect to the mass that yields a minimal uncertainty, one finds $M \approx \left(t/G^2\right)^{1/3}$ (up to factors of order one) and, inserting this value of $M$ in (61), thus

$$\Delta x \gtrsim \left(t\;l_{\rm Pl}^2\right)^{1/3} \qquad (62)$$

Since $t$ too should be larger than the Planck scale this is, of course, consistent with the previously-found minimal uncertainty.

Ng and van Dam further argue that this uncertainty induces a minimum error in measurements of energy and momenta. By noting that the uncertainty of a length is indistinguishable from an uncertainty of the metric components used to measure the length, $\delta g \approx \Delta x / t$, the inequality (62) leads to

$$\delta g \gtrsim \left(\frac{l_{\rm Pl}}{t}\right)^{2/3} \qquad (63)$$

But then again the metric couples to the stress-energy tensor $T_{\mu\nu}$, so this uncertainty for the metric further induces an uncertainty for the entries of $T_{\mu\nu}$

$$\delta T_{\mu\nu} \gtrsim \left(\frac{l_{\rm Pl}}{t}\right)^{2/3}\,T_{\mu\nu} \qquad (64)$$

Consider now using a test particle of momentum $p$ to probe the physics at scale $l$, thus $p \approx 1/l$. Then its uncertainty would be on the order of

$$\delta p \gtrsim p\left(\frac{l_{\rm Pl}}{l}\right)^{2/3} \qquad (65)$$

However, note that the scaling found by Ng and van Dam only follows if one works with the masses that minimize the uncertainty (61). Then, even if one uses a detector of the approximate extension of a cm, the corresponding mass of the ‘particle’ we have to work with would be about a ton. With such a mass one has to worry about very different uncertainties. For particles with masses below the Planck mass on the other hand, the size of the detector would have to be below the Planck length, which makes no sense since its extension too has to be subject to the minimal position uncertainty.
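This estimate is easy to reproduce numerically; the following sketch evaluates the optimal mass and the resulting uncertainty for a detector of one centimeter, in Planck units and up to factors of order one:

```python
# Minimal sketch: Ng-van Dam optimum of Eq. (61) in Planck units
# (hbar = c = G = 1), where M_opt ~ t^(1/3) and dx ~ t^(1/3) for t >> 1.
l_pl = 1.616e-35  # m
m_pl = 2.176e-8   # kg

t = 1e-2 / l_pl          # detector size of 1 cm, in Planck units
M_opt = t ** (1 / 3)     # optimal mass, up to factors of order one
dx = t ** (1 / 3)        # uncertainty ~ (t * l_Pl^2)^(1/3), in units of l_Pl

print(f"M_opt ~ {M_opt * m_pl:.1e} kg")  # ~ 1.9e3 kg: about a ton
print(f"dx    ~ {dx * l_pl:.1e} m")      # ~ 1.4e-24 m
```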

3.1.7 Limits on the measurement of spacetime volumes

The observant reader will have noticed that almost all of the above estimates have explicitly or implicitly made use of spherical symmetry. The one exception is the argument by Adler and Santiago in Section 3.1.2 that employed cylindrical symmetry. However, it was also assumed there that the length and the radius of the cylinder are of comparable size.

In the general case, when the dimensions of the test particle in different directions are very unequal, the Hoop conjecture does not require a matter distribution to collapse if only one direction is smaller than the Schwarzschild radius, as long as at least one other direction is larger than the Schwarzschild radius. The question then arises what limits that rely on black-hole formation can still be derived in the general case.

A heuristic motivation of the following argument can be found in [101], but here we will follow the more detailed argument by Tomassini and Viaggiu [307]. In the absence of spherical symmetry, one may still use Penrose's isoperimetric-type conjecture, according to which the area of the apparent horizon is always smaller than or equal to the area of the event horizon, which in turn is smaller than or equal to $16\pi\,G^2\omega^2$, where $\omega$ is as before the energy of the test particle.

Then, without spherical symmetry the requirement that no black hole ruins our ability to resolve short distances is weakened from the energy distribution having a radius larger than the Schwarzschild radius, to the requirement that the area $A$, which encloses $\omega$, is large enough to prevent Penrose's condition for horizon formation

$$A \geq 16\,\pi\,G^2\,\omega^2 \qquad (66)$$

The test particle interacts during a time $\Delta t$ that, by the normal uncertainty principle, is larger than $1/\omega$. Taking into account this uncertainty on the energy, one has

$$A \geq 16\,\pi\,G^2\left(\frac{1}{\Delta t}\right)^2 \qquad (67)$$

Now we have to make some assumption for the geometry of the object, which will inevitably be a crude estimate. While an exact bound will depend on the shape of the matter distribution, we will here just be interested in obtaining a bound that depends on the three different spatial extensions, and is qualitatively correct. To that end, we assume the mass distribution fits into some smallest box with side-lengths $\Delta x_1, \Delta x_2, \Delta x_3$, which is similar to the limiting area

$$A \approx \alpha\left(\Delta x_1\,\Delta x_2 + \Delta x_1\,\Delta x_3 + \Delta x_2\,\Delta x_3\right) \qquad (68)$$

where we added some constant $\alpha$ to take into account different possible geometries. A comparison with the spherical case, $A = 4\pi R^2$ with $\Delta x_i = 2R$, fixes $\alpha = \pi/3$. With Eq. (67) one obtains

$$\Delta x_1\,\Delta x_2 + \Delta x_1\,\Delta x_3 + \Delta x_2\,\Delta x_3 \geq \frac{16\,\pi}{\alpha}\left(\frac{G}{\Delta t}\right)^2 \qquad (69)$$

Since

$$G = l_{\rm Pl}^2 \qquad (70)$$

one also has

$$\left(\Delta x_1\,\Delta x_2 + \Delta x_1\,\Delta x_3 + \Delta x_2\,\Delta x_3\right)\Delta t^2 \geq \frac{16\,\pi}{\alpha}\;l_{\rm Pl}^4 \qquad (71)$$

which confirms the limit obtained earlier by heuristic reasoning in [101].

Thus, as anticipated, taking into account that a black hole must not necessarily form if the spatial extension of a matter distribution is smaller than the Schwarzschild radius in only one direction, the uncertainty we arrive at here depends on the extension in all three directions, rather than applying separately to each of them. Here we have replaced $\omega$ by the inverse of $\Delta t$, rather than combining with Eq. (2), but this is just a matter of presentation.

Since the bound on the volumes (71) follows from the bounds on spatial and temporal intervals we found above, the relevant question here is not whether Eq. (71) is fulfilled, but whether the bound can be violated [165].

To address that question, note that the quantities in the above argument by Tomassini and Viaggiu differ from the ones we derived bounds for in Sections 3.1.1 – 3.1.6. Previously, $\Delta x$ was the precision by which one can measure the position of a particle with help of the test particle. Here, the $\Delta x_i$ are the smallest possible extensions of the test particle (in the rest frame), which with spherical symmetry would just be the Schwarzschild radius. The step in which one studies the motion of the measured particle that is induced by the gravitational field of the test particle is missing in this argument. Thus, while the above estimate correctly points out the relevance of non-spherical symmetries, the argument does not support the conclusion that it is possible to test spatial distances to arbitrary precision.

The main obstacle to completion of this argument is that in the context of quantum field theory we are eventually dealing with particles probing particles. To avoid spherical symmetry, we would need different objects as probes, which would require more information about the fundamental nature of matter. We will come back to this point in Section 3.2.3.

3.2 String Theory

String theory is one of the leading candidates for a theory of quantum gravity. Many textbooks have been dedicated to the topic, and the interested reader can also find excellent resources online [187, 278, 235, 299]. For the following we will not need many details. Most importantly, we need to know that a string is described by a 2-dimensional surface swept out in a higher-dimensional spacetime. The total number of spatial dimensions that supersymmetric string theory requires for consistency is nine, i.e., there are six spatial dimensions in addition to the three we are used to. In the following we will denote the total number of dimensions, both time and space-like, with $D$. In this Subsection, Greek indices run from 0 to $D-1$.

The two-dimensional surface swept out by the string in the $D$-dimensional spacetime is referred to as the 'worldsheet,' will be denoted by $\Sigma$, and will be parameterized by (dimensionless) parameters $\tau$ and $\sigma$, where $\tau$ is its time-like direction, and $\sigma$ runs conventionally from 0 to $2\pi$. A string has discrete excitations, and its state can be expanded in a series of these excitations plus the motion of the center of mass. Due to conformal invariance, the worldsheet carries a complex structure and thus becomes a Riemann surface, whose complex coordinates we will denote with $z$ and $\bar{z}$. Scattering amplitudes in string theory are a sum over such surfaces.

In the following $l_s$ is the string scale, and $\alpha' = l_s^2$. The string scale is related to the Planck scale by $l_{\rm Pl} = g_s^{1/4}\,l_s$, where $g_s$ is the string coupling constant. Contrary to what the name suggests, the string coupling constant is not constant, but depends on the value of a scalar field known as the dilaton.

To avoid conflict with observation, the additional spatial dimensions of string theory have to be compactified. The compactification scale is usually thought to be about the Planck length, and far below experimental accessibility. The possibility that the extensions of the extra dimensions (or at least some of them) might be much larger than the Planck length and thus possibly experimentally accessible, has been studied in models with a large compactification volume and lowered Planck scale, see, e.g., [1]. We will not discuss these models here, but mention in passing that they demonstrate the possibility that the 'true' higher-dimensional Planck mass is in fact much smaller than $m_{\rm Pl}$, and correspondingly the 'true' higher-dimensional Planck length, and with it the minimal length, much larger than $l_{\rm Pl}$. That such possibilities exist means, whether or not models with extra dimensions are realized in nature, that we should, in principle, consider the minimal length a free parameter that has to be constrained by experiment.

String theory is also one of the motivations to look into non-commutative geometries. Non-commutative geometry will be discussed separately in Section 3.6. A section on matrix models will be included in a future update.

3.2.1 Generalized uncertainty

The following argument, put forward by Susskind [297, 298], illustrates how a string differs from a point particle and what consequences this difference has for our ability to resolve structures at shortest distances. We consider a free string in light-cone coordinates, $X^\pm = (X^0 \pm X^1)/\sqrt{2}$, with the parameterization $X^+ = 2 l_s^2\, p^+ \tau$, where $p^+$ is the momentum in the direction of $X^+$ and constant along the string. In the light-cone gauge, the string has no oscillations in the $X^+$ direction by construction.

The transverse dimensions are the remaining $X^i$ with $i > 1$. The normal mode decomposition of the transverse coordinates has the form

$X^i = x^i + 2 l_s^2\, p^i \tau + \mathrm{i}\, l_s \sum_{n \neq 0} \frac{1}{n} \left( \alpha^i_n\, e^{-\mathrm{i} n (\tau - \sigma)} + \tilde{\alpha}^i_n\, e^{-\mathrm{i} n (\tau + \sigma)} \right)\,, \qquad (72)$

where $x^i$ is the (transverse location of the) center of mass of the string. The coefficients $\alpha^i_n$ and $\tilde{\alpha}^i_n$ are normalized to $[\alpha^i_m, \alpha^j_n] = m\, \delta_{m+n}\, \delta^{ij}$, and $[\tilde{\alpha}^i_m, \tilde{\alpha}^j_n] = m\, \delta_{m+n}\, \delta^{ij}$. Since the components $X^i$ are real, the coefficients have to fulfill the relations $\alpha^i_{-n} = (\alpha^i_n)^\dagger$ and $\tilde{\alpha}^i_{-n} = (\tilde{\alpha}^i_n)^\dagger$.

We can then estimate the transverse size of the string by

$\Delta X_\perp^2 = \langle 0 |\, (X^i - x^i)^2\, | 0 \rangle\,, \qquad (73)$

which, in the ground state, yields an infinite sum

$\Delta X_\perp^2 \approx l_s^2 \sum_{n=1}^{\infty} \frac{1}{n}\,. \qquad (74)$

This sum is logarithmically divergent because modes with arbitrarily high frequency are being summed over. To get rid of this unphysical divergence, we note that testing the string with some energy $E$, which corresponds to some resolution time $\Delta t = 1/E$, allows us to cut off modes with frequency larger than $1/\Delta t$, or mode number $n \gtrsim l_s^2 p^+/\Delta t$. Then, for large $n$, the sum becomes approximately

$\Delta X_\perp^2 \approx l_s^2\, \log\!\left(\frac{l_s^2\, p^+}{\Delta t}\right)\,. \qquad (75)$

Thus, the transverse extension of the string grows with the energy that the string is tested by, though only very slowly.
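Explicitly, with the cutoff at mode number $N \approx l_s^2 p^+/\Delta t$, the harmonic sum in (74) is

$\Delta X_\perp^2 \approx l_s^2 \sum_{n=1}^{N} \frac{1}{n} \approx l_s^2\, \ln N\,,$

which is the logarithmic growth quoted in (75).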

To determine the spread in the longitudinal direction $X^-$, one needs to know that in light-cone coordinates the constraint equations on the string have the consequence that $X^-$ is related to the transverse directions, so that it is given in terms of the light-cone Virasoro generators

$X^- = x^- + 2 l_s^2\, p^- \tau + \frac{\mathrm{i}}{p^+} \sum_{n \neq 0} \frac{1}{n} \left( L_n\, e^{-\mathrm{i} n (\tau - \sigma)} + \tilde{L}_n\, e^{-\mathrm{i} n (\tau + \sigma)} \right)\,, \qquad (76)$

where now the $L_n$ and $\tilde{L}_n$ fulfill the Virasoro algebra. Therefore, the longitudinal spread in the ground state gains a factor $n^2$ over the transverse case, and diverges as

$\Delta X_-^2 = \langle 0 |\, (X^- - x^-)^2\, | 0 \rangle \approx \frac{1}{(p^+)^2} \sum_{n=1}^{\infty} n\,. \qquad (77)$

Again, this result has an unphysical divergence, which we deal with in the same way as before, by taking into account a finite resolution $\Delta t$, corresponding to the inverse of the energy by which the string is probed. Then one finds for large $n$ approximately

$\Delta X_-^2 \approx \frac{l_s^4}{(\Delta t)^2} = l_s^4\, E^2\,. \qquad (78)$

Thus, this heuristic argument suggests that the longitudinal spread of the string grows linearly with the energy at which it is probed.
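The step from (77) to (78) is the same cutoff arithmetic as in the transverse case, now applied to a quadratically divergent sum; schematically, with $N \approx l_s^2 p^+/\Delta t$,

$\Delta X_-^2 \approx \frac{1}{(p^+)^2} \sum_{n=1}^{N} n \approx \frac{N^2}{2\,(p^+)^2} \approx \frac{l_s^4}{2\,(\Delta t)^2}\,, \qquad \text{i.e.} \qquad \Delta X_- \sim l_s^2\, E\,.$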

The above heuristic argument is supported by many rigorous calculations. That string scattering leads to a modification of the Heisenberg uncertainty relation has been shown in several studies of string scattering at high energies performed in the late 1980s [140, 310, 228]. Gross and Mende [140] put forward a now well-known analysis of the classical solution for the trajectories of a string worldsheet describing a scattering event with external momenta $p^\mu_i$. In the lowest tree approximation they found for the extension of the string

$x^\mu(z) \approx l_s^2 \sum_i p^\mu_i\, \log |z - z_i|\,, \qquad (79)$

plus terms that are suppressed in energy relative to the first. Here, the $z_i$ are the positions of the vertex operators on the Riemann surface corresponding to the asymptotic states with momenta $p^\mu_i$. Thus, as previously, the extension grows linearly with the energy. One also finds that the surface of the string grows with $E/g$, where $g$ is the genus of the expansion, and that the fixed-angle scattering amplitude at high energies falls exponentially with the square of the center-of-mass energy (times $l_s^2$).

One can interpret this spread of the string in terms of a GUP by taking into account that at high energies the spread grows linearly with the energy. Together with the normal uncertainty, one obtains

$\Delta x \gtrsim \frac{\hbar}{\Delta p} + l_s^2\, \frac{\Delta p}{\hbar}\,, \qquad (80)$

again the GUP that gives rise to a minimally-possible spatial resolution.
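The minimal resolution can be read off by minimizing the right-hand side of (80) over $\Delta p$:

$\frac{\partial}{\partial \Delta p} \left( \frac{\hbar}{\Delta p} + l_s^2\, \frac{\Delta p}{\hbar} \right) = 0 \quad \Rightarrow \quad \Delta p = \frac{\hbar}{l_s}\,, \qquad \Delta x_{\min} = 2\, l_s\,,$

so the best achievable resolution is of the order of the string scale.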

However, the exponential fall-off of the tree amplitude depends on the genus of the expansion, and is dominated by the large-genus contributions because these decrease more slowly. The Borel resummation of the series has been calculated in [228], and it was found that the tree-level approximation is valid only for an intermediate range of energies; at asymptotically high energies, the amplitude decreases much more slowly than the tree-level result would lead one to expect. Yoneya [318] has furthermore argued that this behavior does not properly take into account non-perturbative effects, and thus the generalized uncertainty should not be regarded as generally valid in string theory. We will discuss this in Section 3.2.3.

It has been proposed that the resistance of the string to attempts to localize it plays a role in resolving the black-hole information-loss paradox [204]. In fact, one can wonder if the high energy behavior of the string acts against and eventually prevents the formation of black holes in elementary particle collisions. It has been suggested in [10, 9, 11] that string effects might become important at impact parameters far greater than those required to form black holes, opening up the possibility that black holes might not form.

The completely opposite point of view, that high energy scattering is ultimately entirely dominated by black-hole production, has also been put forward [48, 131]. Giddings and Thomas found an indication of how gravity prevents probes of distances shorter than the Planck scale [131] and discussed ‘the end of short-distance physics’; Banks aptly named it ‘asymptotic darkness’ [47]. A recent study of string scattering at high energies [127] found no evidence that the extendedness of the string interferes with black-hole formation. The subject of string scattering in the trans-Planckian regime is subject of ongoing research, see, e.g., [12, 90, 130] and references therein.

Let us also briefly mention that the spread of the string just discussed should not be confused with the length of the string. (For a schematic illustration see Figure 2.) The length of a string in the transverse direction is

$L = \int_0^{2\pi} d\sigma\, \sqrt{\sum_i \left( \partial_\sigma X^i \right)^2}\,, \qquad (81)$

where the sum is taken over the transverse directions, and has been studied numerically in [173]. In this study it was shown that, as one increases the cutoff on the modes, the string becomes space-filling: it comes arbitrarily close to any point in space.

Figure 2: The length of a string is not the same as its average extension. The lengths of strings in the ground state were studied in [173].

3.2.2 Spacetime uncertainty

Yoneya [318] argued that the GUP in string theory is not generally valid. To begin with, it is not clear whether the Borel resummation of the perturbative expansion leads to correct non-perturbative results. And, after the original works on the generalized uncertainty in string theory, it has become understood that string theory gives rise to higher-dimensional membranes that are dynamical objects in their own right. These higher-dimensional membranes significantly change the picture painted by high energy string scattering, as we will see in Section 3.2.3. However, even if the GUP is not generally valid, there might be a different uncertainty principle that string theory conforms to, namely a spacetime uncertainty of the form

$\Delta X\, \Delta T \gtrsim l_s^2\,. \qquad (82)$

This spacetime uncertainty has been motivated by Yoneya to arise from conformal symmetry [317, 318] as follows.

Suppose we are dealing with a Riemann surface with metric $ds^2 = \rho^2\, |dz|^2$ that parameterizes the string. In string theory, these surfaces appear in all path integrals and thus in all amplitudes, so they are of central importance for all possible processes. Let us denote with $\Omega$ a finite region in that surface, and with $\Gamma$ the set of all curves in $\Omega$. The length of some curve $\gamma \in \Gamma$ is then $L(\gamma, \rho) = \int_\gamma \rho\, |dz|$. However, this length that we are used to from differential geometry is not conformally invariant. To find a length that captures only the physically-relevant information, one can use a distance measure known as the ‘extremal length’

$\lambda_\Omega(\Gamma) = \sup_\rho \frac{L(\Gamma, \rho)^2}{A(\Omega, \rho)}\,, \qquad (83)$

with

$L(\Gamma, \rho) = \inf_{\gamma \in \Gamma} L(\gamma, \rho)\,, \qquad A(\Omega, \rho) = \int_\Omega \rho^2\; dz\, d\bar{z}\,. \qquad (84)$

The so-constructed length is dimensionless and conformally invariant. For simplicity, we assume that $\Omega$ is a generic polygon with four sides and four corners, with pairs of opposite sides named $\alpha, \alpha'$ and $\beta, \beta'$. Any more complicated shape can be assembled from such polygons. Let $\Gamma$ be the set of all curves connecting $\alpha$ with $\alpha'$, and $\Gamma^*$ the set of all curves connecting $\beta$ with $\beta'$. The extremal lengths $\lambda(\Gamma)$ and $\lambda(\Gamma^*)$ then fulfill the property [317, 318]

$\lambda(\Gamma)\; \lambda(\Gamma^*) = 1\,. \qquad (85)$

Conformal invariance allows us to deform the polygon, so instead of a general four-sided polygon, we can consider a rectangle in particular, where the Euclidean length of the sides $\alpha, \alpha'$ will be named $a$ and that of the sides $\beta, \beta'$ will be named $b$. With a Minkowski metric, one of these directions would be timelike and one spacelike. Then the extremal lengths are [317, 318]

$\lambda(\Gamma) = \frac{a}{b}\,, \qquad \lambda(\Gamma^*) = \frac{b}{a}\,. \qquad (86)$

Armed with this length measure, let us consider the Euclidean path integral in the conformal gauge ($g_{ab} = \delta_{ab}$) with the action

$S = \frac{1}{4\pi l_s^2} \int_\Omega dz\, d\bar{z}\; \left( \partial_z X^\mu \right) \left( \partial_{\bar{z}} X_\mu \right)\,. \qquad (87)$

(Equal indices are summed over.) As before, $X^\mu$ are the target space coordinates of the string worldsheet. We now decompose the coordinate $z$ into its real and imaginary part, $z = \xi^1 + \mathrm{i}\, \xi^2$, and consider a rectangular piece of the surface, $0 \leq \xi^1 \leq a$ and $0 \leq \xi^2 \leq b$, with the boundary conditions

$X^1 \big|_{\partial\Omega} = A\, \frac{\xi^1}{a}\,, \qquad X^2 \big|_{\partial\Omega} = B\, \frac{\xi^2}{b}\,, \qquad X^\mu \big|_{\partial\Omega} = 0 \;\; \text{otherwise}\,. \qquad (88)$

If one integrates over the rectangular region, the action contains a factor $A^2/\lambda(\Gamma) + B^2/\lambda(\Gamma^*)$, and the path integral thus contains a factor of the form

$\exp\!\left[ - \frac{1}{4\pi l_s^2} \left( \frac{A^2}{\lambda(\Gamma)} + \frac{B^2}{\lambda(\Gamma^*)} \right) \right]\,. \qquad (89)$

Thus, the width of these contributions is given by the extremal length times the string scale, which quantifies the variance of $A$ and $B$ by

$\Delta A \approx l_s\, \sqrt{\lambda(\Gamma)}\,, \qquad \Delta B \approx l_s\, \sqrt{\lambda(\Gamma^*)}\,. \qquad (90)$

In particular the product of both satisfies the condition

$\Delta A\; \Delta B \approx l_s^2\,. \qquad (91)$
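The last step is just the reciprocity property (85) at work: multiplying the two variances in (90) gives

$\Delta A\; \Delta B \approx l_s^2\, \sqrt{\lambda(\Gamma)\, \lambda(\Gamma^*)} = l_s^2\,,$

independently of the aspect ratio $a/b$ of the rectangle.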

Thus, probing short distances along the spatial and temporal directions simultaneously is not possible to arbitrary precision, lending support to the existence of a spacetime uncertainty of the form (82). Yoneya notes [318] that this argument cannot in this simple fashion be carried over to more complicated shapes. Thus, at present the spacetime uncertainty has the status of a conjecture. However, the power of this argument rests in it only relying on conformal invariance, which makes it plausible that, in contrast to the GUP, it is universally and non-perturbatively valid.

3.2.3 Taking into account Dp-Branes

The endpoints of open strings obey boundary conditions, either of the Neumann type or of the Dirichlet type or a mixture of both. For Dirichlet boundary conditions, the submanifold on which open strings end is called a Dirichlet brane, or Dp-brane for short, where p is an integer denoting the dimension of the submanifold. A D0-brane is a point, sometimes called a D-particle; a D1-brane is a one-dimensional object, also called a D-string; and so on, all the way up to D9-branes.

These higher-dimensional objects that arise in string theory have dynamics in their own right, and have given rise to a great many insights, especially with respect to dualities between different sectors of the theory, and the study of higher-dimensional black holes [170, 45].

Dp-branes have a tension of $T_p = 1/\left( g_s\, l_s^{p+1} \right)$; that is, in the weak coupling limit $g_s \to 0$, they become very rigid. Thus, one might suspect D-particles to show evidence for structure on distances at least down to $g_s\, l_s$.

Taking into account the scattering of Dp-branes indeed changes the conclusions we could draw from the earlier-discussed thought experiments. We have seen that this was already the case for strings, but we can expect that Dp-branes change the picture even more dramatically. At high energies, strings can convert energy into potential energy, thereby increasing their extension and counteracting the attempt to probe small distances. Therefore, strings do not make good candidates to probe small structures, and to probe the structures of Dp-branes, one would best scatter them off each other. As Bachas put it [45], the “small dynamical scale of D-particles cannot be seen by using fundamental-string probes – one cannot probe a needle with a jelly pudding, only with a second needle!”

That with Dp-branes new scaling behaviors enter the physics of shortest distances has been pointed out by Shenker [283], and in particular the D-particle scattering has been studied in great detail by Douglas et al. [103]. It was shown there that indeed slow moving D-particles can probe distances below the (ten-dimensional) Planck scale and even below the string scale. For these D-particles, it has been found that structures exist down to $g_s^{1/3}\, l_s$.

To get a feeling for the scales involved here, let us first reconsider the scaling arguments on black-hole formation, now in a higher-dimensional spacetime. The Newtonian potential $\phi$ of a higher-dimensional point charge with energy $E$, or the perturbation of $g_{00} = 1 + 2\phi$, in $D$ dimensions, is qualitatively of the form

$\phi(r) \approx \frac{G_D\, E}{r^{D-3}}\,, \qquad (92)$

where $r$ is the spatial extension, and $G_D$ is the $D$-dimensional Newton's constant, related to the Planck length as $G_D = l_{\mathrm{Pl}}^{D-2}$. Thus, the horizon, or the zero of $g_{00}$, is located at

$r_H \approx \left( G_D\, E \right)^{\frac{1}{D-3}}\,. \qquad (93)$

With $E \approx 1/\Delta t$, for some time $\Delta t$ by which we test the geometry, to prevent black-hole formation for $D = 10$, one thus has to require

$(\Delta x)^7\, \Delta t \gtrsim l_{\mathrm{Pl}}^8 = g_s^2\, l_s^8\,, \qquad (94)$

re-expressed in terms of the string coupling and the string scale. We see that in the weak coupling limit this lower bound can be small; in particular, it can be much below the string scale.

This relation between spatial and temporal resolution can now be contrasted with the spacetime uncertainty (82), which sets the limits below which the classical notion of spacetime ceases to make sense. Both of these limits are shown in Figure 3 for comparison. The curves meet at

$\Delta x \approx g_s^{1/3}\, l_s\,, \qquad \Delta t \approx g_s^{-1/3}\, l_s\,. \qquad (95)$

If we were to push our limits along the bound set by the spacetime uncertainty (red, solid line), then the best possible spatial resolution we could reach lies at $\Delta x \approx g_s^{1/3}\, l_s$, beyond which black-hole production takes over. Below the spacetime uncertainty limit, it would actually become meaningless to talk about black holes that resemble any classical object.
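The crossing point (95) follows by solving the two bounds simultaneously: inserting $\Delta t = l_s^2/\Delta x$ from the spacetime uncertainty into the black-hole bound $(\Delta x)^7\, \Delta t \gtrsim g_s^2\, l_s^8$ gives

$(\Delta x)^6 = g_s^2\, l_s^6 \quad \Rightarrow \quad \Delta x = g_s^{1/3}\, l_s\,, \qquad \Delta t = \frac{l_s^2}{\Delta x} = g_s^{-1/3}\, l_s\,.$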

Figure 3: Spacetime uncertainty (red, solid) vs uncertainty from spherical black holes (blue, dotted) in $D = 10$ dimensions, for two different values of the string coupling $g_s$ (left and right). After [318], Figure 1. Below the bound from spacetime uncertainty yet above the black-hole bound that hides short-distance physics (shaded region), the concept of classical geometry becomes meaningless.

At first sight, this argument seems to suffer from the same problem as the previously examined argument for volumes in Section 3.1.7. Rather than combining $\Delta t$ with $\Delta x$ to arrive at a weaker bound than each alone would have to obey, one would have to show that $\Delta x$ can in fact become arbitrarily small. And, since the argument from black-hole collapse in 10 dimensions is essentially the same as Mead's in 4 dimensions, just with a different $r$-dependence of $\phi$, if one were to consider point particles in 10 dimensions, one would find along the same line of reasoning as in Section 3.1.2 that actually $\Delta x \gtrsim l_{\mathrm{Pl}}$ and $\Delta t \gtrsim l_{\mathrm{Pl}}$.

However, here the situation is very different because fundamentally the objects we are dealing with are not particles but strings, and the interaction between Dp-branes is mediated by strings stretched between them. This is an inherently different behavior from the classical gravitational attraction between point particles. At weak string coupling, gravity couples weakly, and in this limit the backreaction of the branes on the background becomes negligible. For these reasons, the D-particles distort each other less than point particles in a quantum field theory would, and this is what allows one to use them to probe very short distances.

The following estimate from [318] sheds light on the scales that we can test with D-particles in particular. Suppose we use D-particles with velocity $v$ and mass $m_0 = 1/(g_s l_s)$ to probe a distance of size $\Delta x$ in time $\Delta t$. Since $\Delta x \approx v\, \Delta t$, the uncertainty (94) gives

$\Delta x \gtrsim v^{1/8}\, g_s^{1/4}\, l_s\,; \qquad (96)$

thus, to probe very short distances one has to use slow D-particles.
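Spelled out, (96) follows by substituting $\Delta t \approx \Delta x / v$ into (94):

$\frac{(\Delta x)^8}{v} \gtrsim g_s^2\, l_s^8 \quad \Rightarrow \quad \Delta x \gtrsim v^{1/8}\, g_s^{1/4}\, l_s\,.$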

But if the D-particle is slow, then its wavefunction behaves like that of a massive non-relativistic particle, so we have to take into account that the width spreads with time. For this, we can use the earlier-discussed bound Eq. (58)

$\left( \Delta x_{\mathrm{spread}} \right)^2 \gtrsim \frac{\Delta t}{m_0}\,, \qquad (97)$

or

$\Delta x_{\mathrm{spread}} \gtrsim \left( \frac{\Delta t}{m_0} \right)^{1/2} = \left( \frac{g_s\, l_s\, \Delta x}{v} \right)^{1/2}\,. \qquad (98)$

If we add the uncertainties (96) and (98) and minimize the sum with respect to $v$, we find that the spatial uncertainty is minimal for

$v \approx g_s^{2/3}\,. \qquad (99)$

Thus, the total spatial uncertainty is bounded by

$\Delta x \gtrsim g_s^{1/3}\, l_s\,, \qquad (100)$

and with this one also has

$\Delta t \approx \frac{\Delta x}{v} \gtrsim g_s^{-1/3}\, l_s\,, \qquad (101)$

which are the scales that we already identified in (95) to be those of the best possible resolution compatible with the spacetime uncertainty. Thus, we see that the D-particles saturate the spacetime uncertainty bound and they can be used to test these short distances.
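One can check that (99) indeed balances the two contributions: inserting $v \approx g_s^{2/3}$ and $\Delta x \approx g_s^{1/3}\, l_s$ into (96) and (98), both terms come out at the same size,

$v^{1/8}\, g_s^{1/4}\, l_s = g_s^{1/12 + 1/4}\, l_s = g_s^{1/3}\, l_s\,, \qquad \left( \frac{g_s\, l_s\, \Delta x}{v} \right)^{1/2} = \left( g_s^{1 + 1/3 - 2/3} \right)^{1/2} l_s = g_s^{1/3}\, l_s\,.$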

D-particle scattering has been studied in [103] by use of a quantum mechanical toy model in which the two particles interact through (unexcited) open strings stretched between them. The open strings create a linear potential between the branes. At moderate velocities, repeated collisions can take place, since the probability for all the open strings to annihilate between one collision and the next is small. At $v \approx g_s^{2/3}$, the time between collisions is on the order of $\Delta t \approx g_s^{-1/3}\, l_s$, corresponding to a resonance of width $\Gamma \approx g_s^{1/3}/l_s$. By considering the conversion of kinetic energy into the potential of the strings, one sees that the particles reach a maximal separation of $\Delta x \approx g_s^{1/3}\, l_s$, realizing a test of the scales found above.

Douglas et al. [103] offered a useful analogy of the involved scales to atomic physics; see Table 1. The electron in a hydrogen atom moves with a velocity determined by the fine-structure constant $\alpha$, from which follows the characteristic size of the atom, the Bohr radius $\approx 1/(\alpha m_e)$. For the D-particles, this corresponds to the maximal separation in the repeated collisions. The analogy may be carried further in that higher-order corrections should lead to energy shifts.

Electron | D-particle
mass $m_e$ | mass $m_0 = 1/(g_s l_s)$
Compton wavelength $1/m_e$ | Compton wavelength $g_s l_s$
velocity $\alpha$ | velocity $v \approx g_s^{2/3}$
Bohr radius $\approx 1/(\alpha m_e)$ | size of resonance $\approx g_s^{1/3} l_s$
energy levels $\approx \alpha^2 m_e$ | resonance energy $\approx g_s^{1/3}/l_s$
fine structure $\approx \alpha^4 m_e$ | energy shifts $\approx g_s^{5/3}/l_s$
Table 1: Analogy between scales involved in D-particle scattering and the hydrogen atom. After [103].
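The D-particle column of the table follows from $m_0 = 1/(g_s l_s)$ and $v \approx g_s^{2/3}$ in the same way the electron column follows from $m_e$ and $\alpha$; for instance,

$\frac{1}{m_0\, v} = \frac{g_s\, l_s}{g_s^{2/3}} = g_s^{1/3}\, l_s\,, \qquad m_0\, v^2 = \frac{g_s^{4/3}}{g_s\, l_s} = \frac{g_s^{1/3}}{l_s}\,,$

reproducing the size of the resonance and the resonance energy.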

The possibility of resolving such short distances with D-branes has been studied in many more calculations; for a summary, see, for example, [45] and references therein. For our purposes, this estimate of scales will be sufficient. We take away that D-branes, should they exist, would allow us to probe distances down to $\approx g_s^{1/3}\, l_s$.

3.2.4 T-duality

In the presence of compactified spacelike dimensions, a string can acquire an entirely new property: it can wrap around the compactified dimension. The number of times it wraps around, labeled by the integer $w$, is called the ‘winding number.’ For simplicity, let us consider only one additional dimension, compactified on a radius $R$. Then, in the direction of this coordinate, the string has to obey the boundary condition

$X(\tau, \sigma + 2\pi) = X(\tau, \sigma) + 2\pi\, w R\,. \qquad (102)$

The momentum in the direction of the additional coordinate is quantized in multiples of $1/R$, so the expansion (compare to Eq. (72)) reads

$X = x_0 + 2 l_s^2\, \frac{n}{R}\, \tau + w R\, \sigma + \text{oscillators}\,, \qquad (103)$

where $x_0$ is some initial value. The momentum $p$ is then

$p = \frac{\partial_\tau X}{2 l_s^2} = \frac{n}{R}\,. \qquad (104)$

The total energy of the quantized string with excitations and winding number $w$ is formally divergent, due to the contribution of all the oscillators' zero point energies, and has to be renormalized. After renormalization, the energy is

$E^2 = p_i\, p^i + m^2\,, \qquad (105)$
$m^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{l_s^4} + \frac{2}{l_s^2} \left( N + \tilde{N} - 2 \right)\,, \qquad (106)$

where $i$ runs over the non-compactified coordinates, and $N$ and $\tilde{N}$ are the levels of excitation of the left- and right-moving modes. Level matching requires $N - \tilde{N} = n w$. In addition to the normal contribution from the linear momentum, the string energy thus has a geometrically-quantized contribution from the momentum into the extra dimension(s), labeled with $n$, an energy from the winding (more winding stretches the string and thus needs energy), labeled with $w$, and a renormalized contribution from the Casimir energy. The important thing to note here is that this expression is invariant under the exchange

$R \leftrightarrow \frac{l_s^2}{R}\,, \qquad n \leftrightarrow w\,, \qquad (107)$

i.e., an exchange of winding modes with excitations, combined with an inversion of the compactification radius, leaves the mass spectrum invariant.
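The invariance is straightforward to verify on the mass formula (106): under $n \leftrightarrow w$ and $R \leftrightarrow l_s^2/R$ the first two terms trade places,

$\frac{n^2}{R^2} \;\to\; \frac{w^2}{(l_s^2/R)^2} = \frac{w^2\, R^2}{l_s^4}\,, \qquad \frac{w^2\, R^2}{l_s^4} \;\to\; \frac{n^2}{R^2}\,,$

while the oscillator and Casimir contributions are untouched.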

This symmetry is known as target-space duality, or T-duality for short. It carries over to multiple extra dimensions, and can be shown to hold not only for the free string but also during interactions. This means that for the string a distance below the string scale is meaningless, because it corresponds to a distance larger than that: a compactification radius $R < l_s$ yields the same physics as a radius $l_s^2/R > l_s$. Pictorially, a string that is highly excited also has enough energy to stretch and wrap around the extra dimension. We have seen in Section 3.2.3 that Dp-branes overcome limitations of string scattering, but T-duality is a simple yet powerful way to understand why the ability of strings to resolve short distances is limited.

This characteristic property of string theory has motivated a model, suggested in [285, 111, 291], that incorporates T-duality and compact extra dimensions into an effective path-integral approach for a particle-like object described by the center of mass of the string, yet with a modified Green's function.

In this approach it is assumed that the elementary constituents of matter are fundamentally strings that propagate in a higher-dimensional spacetime with compactified additional dimensions, so that the strings can have excitations and winding numbers. By taking into account the excitations and winding numbers, Fontanini et al. [285, 111, 291] derive a modified Green's function for a scalar field. In the resulting double sum over $n$ and $w$, the contribution from the $n = 0$, $w = 0$ zero modes is dropped. Note that this discards all massless modes, as one sees from Eq. (106). As a result, the Green's function obtained in this way no longer has the usual contribution

$G(p) \propto \frac{1}{p^2}\,. \qquad (108)$

Instead, one finds in momentum space