
2 A Minimal History

Special relativity and quantum mechanics are characterized by two universal constants, the speed of light, c, and Planck’s constant, ℏ. Yet from these constants alone one cannot construct a constant of dimension length or mass, though, given either one, the other could be obtained from it by use of ℏ and c. But in 1899, Max Planck pointed out that adding Newton’s constant G to the universal constants c and ℏ allows one to construct units of mass, length and time [265]:
$$t_{\rm Pl} \approx 10^{-43}\,\mathrm{s}, \qquad l_{\rm Pl} \approx 10^{-33}\,\mathrm{cm}, \qquad m_{\rm Pl} \approx 1.2 \times 10^{19}\,\mathrm{GeV}. \qquad (1)$$
Today these are known as the Planck time, Planck length and Planck mass, respectively. As we will see later, they mark the scale at which quantum effects of the gravitational interaction are expected to become important. But back in Planck’s day their relevance lay in their universality, because they are constructed entirely from fundamental constants.
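For reference, the combinations behind Eq. (1) follow from dimensional analysis alone; spelled out (a standard construction, included here for completeness):

$$t_{\rm Pl} = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\,\mathrm{s}, \qquad l_{\rm Pl} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-33}\,\mathrm{cm}, \qquad m_{\rm Pl} = \sqrt{\frac{\hbar c}{G}} \approx 1.2 \times 10^{19}\,\mathrm{GeV},$$

where the mass is quoted, as usual in particle physics, via its energy equivalent $m_{\rm Pl} c^2$.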

The idea of a minimal length was predated by that of the “chronon,” a smallest unit of time, proposed by Robert Lévi [200] in 1927 in his “Hypothèse de l’atome de temps” (hypothesis of time atoms), which was further developed by Pokrowski in the years following Lévi’s proposal [266]. But that there might be limits to the divisibility of space and time remained a far-fetched speculation on the fringes of a community rapidly pushing forward the development of general relativity and quantum mechanics. It was not until special relativity and quantum mechanics were joined in the framework of quantum field theory that the possible existence of a minimal length scale rose to the awareness of the community.

With the advent of quantum field theory in the 1930s, it was widely believed that a fundamental length was necessary to cure troublesome divergences. The most commonly used regularization was a cut-off or some other dimensionful quantity to render integrals finite. It seemed natural to think of this pragmatic cut-off as having fundamental significance, an interpretation that, however, inevitably caused problems with Lorentz invariance, since the cut-off would not be independent of the frame of reference. Heisenberg was among the first to consider a fundamentally discrete spacetime that would yield a cut-off, laid out in his letters to Bohr and Pauli. The idea of a fundamentally finite length or a maximum frequency was in these years studied by many, including Flint [110], March [219], Möglich [234] and Goudsmit [267], just to mention a few. They all had in common that they considered the fundamental length to be in the realm of subatomic physics, on the order of the femtometer ($10^{-15}\,\mathrm{m}$).

The one exception was a young Russian, Matvei Bronstein. Today recognized as the first to comprehend the problem of quantizing gravity [138], Bronstein was decades ahead of his time. Already in 1936, he argued that gravity is in one important way fundamentally different from electrodynamics: gravity does not allow an arbitrarily high concentration of charge in a small region of spacetime, since the gravitational ‘charge’ is energy and, if concentrated too much, will collapse to a black hole. Using the weak-field approximation of gravity, he concluded that this leads to an inevitable limit to the precision with which one can measure the strength of the gravitational field (in terms of the Christoffel symbols).

In his 1936 article “Quantentheorie schwacher Gravitationsfelder” (Quantum theory of weak gravitational fields), Bronstein wrote [138, 70]:

“[T]he gravitational radius of the test-body ($G\rho V/c^2$) used for the measurements should by no means be larger than its linear dimensions ($V^{1/3}$); from this one obtains an upper bound for its density ($\rho \lesssim c^2/G V^{2/3}$). Thus, the possibilities for measurements in this region are even more restricted than one concludes from the quantum-mechanical commutation relations. Without a profound change of the classical notions it therefore seems hardly possible to extend the quantum theory of gravitation to this region.” ([70], p. 150)
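Spelled out, Bronstein’s bound is a one-line estimate: demanding that the test body’s gravitational radius not exceed its linear dimension gives

$$\frac{G\rho V}{c^2} \lesssim V^{1/3} \quad\Longrightarrow\quad \rho \lesssim \frac{c^2}{G\,V^{2/3}},$$

so the smaller the region over which the field is to be measured, the more severely the density of the test body is limited, and with it the attainable measurement precision.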

Few people took note of Bronstein’s argument and, unfortunately, the history of this promising young physicist ended in a Leningrad prison in February 1938, where Matvei Bronstein was executed at the age of 31.

Heisenberg meanwhile continued in his attempt to make sense of the notion of a fundamental minimal length of nuclear dimensions. In 1938, Heisenberg wrote “Über die in der Theorie der Elementarteilchen auftretende universelle Länge” (On the universal length appearing in the theory of elementary particles) [148], in which he argued that this fundamental length, which he denoted $r_0$, should appear somewhere not too far beyond the classical electron radius (of the order 100 fm).

This idea seems curious today and has to be put into perspective. Heisenberg was very worried about the non-renormalizability of Fermi’s theory of β-decay. He had previously shown [147] that applying Fermi’s theory at center-of-mass energies of some hundred GeV led to an ‘explosion,’ by which he referred to events of very high multiplicity. Heisenberg argued this would explain the observed cosmic ray showers, whose large number of secondary particles, as we know today, are created by cascades (a possibility that was discussed already at the time of Heisenberg’s writing, but not agreed upon). We also know today that what Heisenberg actually discovered is that Fermi’s theory breaks down at such high energies, and that the four-fermion coupling has to be replaced by the exchange of a gauge boson in the electroweak interaction. But in the 1930s neither the strong nor the electroweak force was known. Heisenberg then connected the problem of regularization with the breakdown of the perturbation expansion of Fermi’s theory, and argued that the presence of the alleged explosions would prohibit the resolution of finer structures:

“If the explosions actually exist and represent the processes characteristic for the constant $r_0$, then they maybe convey a first, still unclear, understanding of the obscure properties connected with the constant $r_0$. These should certainly express themselves in difficulties of measurements with a precision better than $r_0$…The explosions would have the effect…that measurements of positions are not possible to a precision better than $r_0$.” ([148], p. 31)

In hindsight we know that Heisenberg was correctly arguing that the theory of elementary particles known in the 1930s was incomplete. The strong interaction was missing, and Fermi’s theory was indeed non-renormalizable, but not fundamental. Today we also know that the standard model of particle physics is renormalizable, and we know techniques to deal with divergent integrals that do not necessitate cut-offs, such as dimensional regularization. But lacking that knowledge, it is understandable that Heisenberg argued that taking into account gravity was irrelevant for the existence of a fundamental length:

“The fact that [the Planck length] is significantly smaller than $r_0$ makes it valid to leave aside the obscure properties of the description of nature due to gravity, since they – at least in atomic physics – are totally negligible relative to the much coarser obscure properties that go back to the universal constant $r_0$. For this reason, it seems hardly possible to integrate electric and gravitational phenomena into the rest of physics until the problems connected to the length $r_0$ are solved.” ([148], p. 26)

Heisenberg apparently put great hope in the notion of a fundamental length to move forward the understanding of elementary matter. In 1939 he expressed his belief that a quantum theory with a minimal length scale would be able to account for the discrete mass spectrum of the (then known) elementary particles [149]. However, the theory of quantum electrodynamics was developed to maturity, the ‘explosions’ were satisfactorily explained and, without being hindered by the appearance of any fundamentally finite resolution, experiments probed shorter and shorter scales. The divergences in quantum field theory became better understood and discrete approaches to space and time remained unappealing due to their problems with Lorentz invariance.

In a 1947 letter to Heisenberg, Pauli commented on the idea of a smallest length that Heisenberg still held dearly and explained his reservations, concluding “Extremely put, I would not be surprised if your ‘universal’ length turned out to be a mere figment of imagination.” [254]. (For more about Heisenberg’s historical involvement with the universal length, the interested reader is referred to Kragh’s very recommendable article [199].)

In 1930, in a letter to his student Rudolf Peierls [150], Heisenberg mentioned that he was trying to make sense of a minimal length by letting the position operators be non-commuting, $[\hat{x}_\nu, \hat{x}_\mu] \neq 0$. He expressed his hope that Peierls would ask Pauli how to proceed with this idea:

“So far, I have not been able to make mathematical sense of such commutation relations…Do you or Pauli have anything to say about the mathematical meaning of such commutation relations?” ([150], p. 16)

But it took 17 years until Snyder, in 1947, made mathematical sense of Heisenberg’s idea. Snyder, who felt that the use of a cut-off in momentum space was a “distasteful arbitrary procedure” [288], worked out a modification of the canonical commutation relations of position and momentum operators. In that way, spacetime became Lorentz-covariantly non-commutative, but the modification of the commutation relations increased the Heisenberg uncertainty, such that a smallest possible resolution of structures was introduced (a consequence Snyder did not explicitly mention in his paper). Though Snyder’s approach was criticized for its difficulties with the inclusion of translations [316], it has received a lot of attention as the first to show that a minimal length scale need not be in conflict with Lorentz invariance.
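To illustrate the mechanism in its simplest form (the one-dimensional commutator below is the form later popularized by Kempf et al., used here for transparency; Snyder’s original relations are Lorentz-covariant and more involved), consider

$$[\hat{x}, \hat{p}] = i\hbar\,(1 + \beta\,\hat{p}^2), \qquad \beta > 0.$$

For states with $\langle \hat{p} \rangle = 0$ this raises the uncertainty bound to

$$\Delta x\,\Delta p \geq \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right),$$

whose right-hand side grows with $\Delta p$, so that minimizing over $\Delta p$ yields a smallest possible position uncertainty $\Delta x_{\min} = \hbar\sqrt{\beta}$, no matter how large $\Delta p$ is made.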

In 1960, Peres and Rosen [262] studied uncertainties in the measurement of the average values of Christoffel symbols due to the impossibility of concentrating a mass to a region smaller than its Schwarzschild radius, and came to the same conclusion that Bronstein had already reached in 1936:

“The existence of these quantum uncertainties in the gravitational field is a strong argument for the necessity of quantizing it. It is very likely that a quantum theory of gravitation would then generalize these uncertainty relations to all other Christoffel symbols.” ([262], p. 336)

While they considered the limitations for measuring the gravitational field itself, they did not study the limitations these uncertainties induce on the ability to measure distances in general.

It was not until 1964 that Mead pointed out the peculiar role that gravity plays in our attempts to test physics at short distances [222, 223]. He showed, in a series of thought experiments that we will discuss in Section 3.1, that this influence has the effect of amplifying Heisenberg’s measurement uncertainty, making it impossible to measure distances to a precision better than the Planck length. And, since gravity couples universally, this is, though usually negligible, an inescapable influence on all our experiments.
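Schematically, and anticipating Section 3.1 (this back-of-the-envelope version is a simplification of Mead’s actual argument): resolving a distance $\Delta x$ requires a probe of momentum $\Delta p \gtrsim \hbar/\Delta x$, but the probe’s energy gravitationally disturbs the region being measured by roughly its own gravitational radius, adding a term that grows with $\Delta p$:

$$\Delta x \gtrsim \frac{\hbar}{\Delta p} + \frac{G}{c^3}\,\Delta p.$$

The right-hand side is minimal at $\Delta p \sim \sqrt{\hbar c^3/G}$, where $\Delta x \sim \sqrt{\hbar G/c^3} = l_{\rm Pl}$.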

Mead’s work did not originally attract much attention. Decades later, he offered his recollection [224] that “Planck’s proposal that the Planck mass, length, and time should form a fundamental system of units…was still considered heretical well into the 1960s,” and that his argument for the fundamental relevance of the Planck length met strong resistance:

“At the time, I read many referee reports on my papers and discussed the matter with every theoretical physicist who was willing to listen; nobody that I contacted recognized the connection with the Planck proposal, and few took seriously the idea of [the Planck length] as a possible fundamental length. The view was nearly unanimous, not just that I had failed to prove my result, but that the Planck length could never play a fundamental role in physics. A minority held that there could be no fundamental length at all, but most were then convinced that a [different] fundamental length…, of the order of the proton Compton wavelength, was the wave of the future. Moreover, the people I contacted seemed to treat this much longer fundamental length as established fact, not speculation, despite the lack of actual evidence for it.” ([224], p. 15)

But then, in the mid-1970s, Hawking’s calculation of a black hole’s thermodynamical properties [145] introduced the ‘transplanckian problem.’ Due to the, in principle infinite, blueshift of photons approaching a black-hole horizon, modes with energies exceeding the Planck scale had to be taken into account to calculate the emission rate. A great many physicists have significantly advanced our understanding of black-hole physics and the Planck scale, too many to be named here. Special mention is due, however, to John Wheeler, whose contributions, though not directly on the topic of a minimal length, connected black-hole physics with spacetime foam and the Planckian limit, and thereby inspired much of what followed.
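The origin of the problem is again a one-line estimate: for a static observer at radius $r$ outside a Schwarzschild horizon at $r_s = 2GM/c^2$, a mode with asymptotic frequency $\omega_\infty$ has local frequency

$$\omega_{\rm loc}(r) = \frac{\omega_\infty}{\sqrt{1 - r_s/r}} \;\longrightarrow\; \infty \quad \mathrm{for}\ r \to r_s,$$

so tracing the outgoing modes of Hawking radiation back toward the horizon forces one to deal with arbitrarily high, and in particular transplanckian, frequencies.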

Unruh suggested in 1995 [308] that one use a modified dispersion relation to deal with the difficulty of transplanckian modes, so that a smallest possible wavelength takes care of the contributions beyond the Planck scale. A similar problem exists in inflationary cosmology [220], since tracing a mode of small frequency back in time blueshifts it until it may eventually surpass the Planck scale, at which point we no longer know how to make sense of general relativity. The issue of transplanckian modes in cosmology thus brought up another reason to reconsider the possibility of a minimal length or a maximal frequency, but this time the maximal frequency was at the Planck scale rather than at the nuclear scale. It was therefore proposed [180, 144] that this problem too might be cured by implementing a minimum length uncertainty principle into inflationary cosmology.
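A saturating dispersion relation of the general kind invoked here can be illustrated by the schematic form (chosen for simplicity; it is not necessarily the specific function used in [308])

$$\omega(k) = c\,k_{\rm Pl}\tanh\!\left(\frac{k}{k_{\rm Pl}}\right),$$

which reduces to the usual $\omega \approx c\,k$ for $k \ll k_{\rm Pl}$ but approaches the maximal frequency $\omega_{\rm max} = c\,k_{\rm Pl}$ as $k \to \infty$, so that no mode ever exceeds the Planck frequency.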

Almost at the same time, Majid and Ruegg [213] proposed a modification for the commutators of spacetime coordinates, similar to that of Snyder, following from a generalization of the Poincaré algebra to a Hopf algebra, which became known as κ-Poincaré. Kempf et al. [175, 174, 184, 178] developed the mathematical basis of quantum mechanics that took into account a minimal length scale and ventured towards quantum field theory. There are by now many variants of models employing modifications of the canonical commutation relations in order to accommodate a minimal length scale, not all of which make use of the complete κ-Poincaré framework, as will be discussed later in Sections 4.2 and 4.5. Some of these approaches were shown to give rise to a modification of the dispersion relation, though the physical interpretation and relevance, as well as the phenomenological consequences of this relation, are still under debate.
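In the κ-Minkowski spacetime associated with the κ-Poincaré algebra, the non-commutativity is commonly written (in natural units, with κ a mass scale usually identified with the Planck mass) as

$$[\hat{x}^0, \hat{x}^i] = \frac{i}{\kappa}\,\hat{x}^i, \qquad [\hat{x}^i, \hat{x}^j] = 0,$$

in contrast to Snyder’s relations, where the commutator of two coordinates is proportional to a generator of Lorentz transformations rather than to a coordinate.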

In parallel to this, developments in string theory revealed the impossibility of resolving arbitrarily small structures with an object of finite extension. It had already been shown in the late 1980s [140, 10, 9, 11, 310] that string scattering in the super-Planckian regime would result in a generalized uncertainty principle, preventing a localization to better than the string scale (more on this in Section 3.2). In 1996, John Schwarz gave a talk at SLAC about the generalized uncertainty principles resulting from string theory and thereby inspired the 1999 work by Adler and Santiago [3], who almost exactly reproduced Mead’s earlier argument, apparently without being aware of Mead’s work. This picture was later refined when it became understood that string theory not only contains strings but also higher-dimensional objects, known as branes, which will be discussed in Section 3.2.

In the following years, a generalized uncertainty principle and quantum mechanics with the Planck length as a minimal length received an increasing amount of attention as potential cures for the transplanckian problem, as a natural UV-regulator, and as possible manifestations of a fundamental property of quantum spacetime. In the late 1990s, it was also noted that it is compatible with string theory to have large or warped extra dimensions that can effectively lower the Planck scale into the TeV range. With this, the fundamental length scale also moved into the reach of collider physics, resulting in a flurry of activity.

Today, how to resolve the apparent disagreements between the quantum field theories of the standard model and general relativity is one of the big open questions in theoretical physics. It is not that we cannot quantize gravity, but that the attempt to do so leads to a perturbatively non-renormalizable and thus fundamentally nonsensical theory. The basic reason is that the coupling constant of gravity, Newton’s constant, is dimensionful. This leads to the necessity to introduce an infinite number of counter-terms, eventually rendering the theory incapable of prediction.
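The power counting behind this statement can be sketched in one step (a standard argument, given here schematically): in natural units $[G] = \mathrm{mass}^{-2}$, so the effective dimensionless coupling at energy $E$ is $G E^2 \sim (E/m_{\rm Pl})^2$, and absorbing the divergences order by order requires curvature terms of ever higher mass dimension,

$$\mathcal{L}_{\rm eff} \sim \frac{1}{G}\,R + c_1\,R^2 + c_2\,G\,R^3 + \dots,$$

each with a new free coefficient $c_n$ that must be fixed by experiment, leaving the theory without predictive power at the Planck scale.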

But the same was true for Fermi’s theory, which worried Heisenberg so much that he argued for a finite resolution where the theory breaks down, and mistakenly so, since he was merely pushing an effective theory beyond its limits. We therefore have to ask whether we might be making the same mistake as Heisenberg: falsely interpreting the failure of general relativity beyond the Planck scale as evidence of a fundamentally finite resolution of structures, rather than merely as the limit beyond which we have to look for a new theory that will allow us to resolve smaller distances still.

If it were only the extension of classical gravity, laid out in the many thought experiments that will be discussed in Section 3.1, that had us believing the Planck length is of fundamental importance, then the above historical lesson should caution us that we might be on the wrong track. Yet the situation today is different from the one that Heisenberg faced. Rather than pushing a quantum theory beyond its limits, we are pushing a classical theory and concluding that its short-distance behavior is troublesome, which we hope to resolve by quantizing the theory. And, as we will see, several attempts at a UV-completion of gravity, discussed in Sections 3.2 – 3.7, suggest that the role of the Planck length as a minimal length carries over into the quantum regime as a dimensionful regulator, though in very different ways, feeding our hopes that we are working on unveiling the last and final Russian doll.

For a more exhaustive coverage of the history of the minimal length, the interested reader is referred to [141].

