Celebrating fifty years of electroweak symmetry breaking theory


This is the third fragment of a lover's dictionary on spectral physics.

The Standard Model wins all the battles 

This cornerstone of the Standard Model turns fifty this year (2017), and it remains experimentally undefeated by LHC Run 2. Yet it is far from having been thoroughly tested in its full Standard Model version, as you will read below:

Spontaneous symmetry breaking occurs when the ground state or vacuum, or equilibrium state of a system does not share the underlying symmetries of the theory. It is ubiquitous in condensed matter physics, associated with phase transitions. Often, there is a high-temperature symmetric phase and a critical temperature below which the symmetry breaks spontaneously. A simple example is crystallization. If we place a round bowl of water on a table, it looks the same from every direction, but when it freezes the ice crystals form in specific orientations, breaking the full rotational symmetry. The breaking is spontaneous in the sense that, unless we have extra information, we cannot predict in which directions the crystals will line up... In 1960, Nambu [12] pointed out that gauge symmetry is broken in a superconductor when it goes through the transition from normal to superconducting, and that this gives a mass to the plasmon, although this view was still quite controversial in the superconductivity community (see also Anderson [13]). Nambu suggested that a similar mechanism might give masses to elementary particles... The next year, with Jona-Lasinio [14], he proposed a specific model, though not a gauge theory... 
The model had a significant feature, a massless pseudoscalar particle, which Nambu and Jona-Lasinio tentatively identified with the pion. To account for the non-zero (though small) pion mass, they suggested that the chiral symmetry was not quite exact even before the spontaneous symmetry breaking. Attempts to apply this idea to symmetry breaking of fundamental gauge theories however ran into a severe obstacle, the Goldstone theorem... the spontaneous breaking of a continuous symmetry often leads to the appearance of massless spin-0 particles. The simplest model that illustrates this is the Goldstone model [15]... 
The appearance of the... massless spin-zero Nambu–Goldstone bosons was believed to be an inevitable consequence of spontaneous symmetry breaking in a relativistic theory; this is the content of the Goldstone theorem. That is a problem because such massless particles, if they had any reasonable interaction strength, should have been easy to see, but none had been seen...  
This problem was obviously of great concern to all those who were trying to build a viable gauge theory of weak interactions. When Steven Weinberg came to spend a sabbatical at Imperial College in 1961, he and Salam spent a great deal of time discussing the obstacles. They developed a proof of the Goldstone theorem, published jointly with Goldstone [16]...
Spontaneous symmetry breaking implied massless spin-zero bosons, which should have been easy to see but had not been seen. On the other hand adding explicit symmetry-breaking terms led to non-renormalizable theories predicting infinite results. Weinberg commented ‘Nothing will come of nothing; speak again’, a quotation from King Lear. Fortunately, however, our community was able to speak again...
The {Goldstone theorem} argument fails in the case of a gauge theory, for quite subtle reasons ... {its} proof is valid, but there is a hidden assumption which, though seemingly natural, is violated by gauge theories. This was discovered independently by three groups, first Englert and Brout from Brussels [19], then Higgs from Edinburgh [20, 21] and finally Guralnik, Hagen and myself from Imperial College [22]. All three groups published papers in Physical Review Letters during the summer and autumn of 1964... 
The 1964 papers from the three groups attracted very little attention at the time. Talks on the subject were often greeted with scepticism. By the end of that year, the mechanism was known, and Glashow’s (and Salam and Ward’s) SU(2) × U(1) model was known. But, surprisingly perhaps, it still took three more years for anyone to put the two together. This may have been in part at least because many of us were still thinking primarily of a gauge theory of strong interactions, not weak...
In early 1967, I did some further work on the detailed application of the mechanism to models with larger symmetries than U(1), in particular on how the symmetry breaking pattern determines the numbers of massive and massless particles [23]. I had some lengthy discussions with Salam on this subject, which I believe helped to renew his interest in the subject. A unified gauge theory of weak and electromagnetic interactions of leptons was first proposed by Weinberg later that year [24]. Essentially the same model was presented independently by Salam in lectures he gave at Imperial College in the autumn of 1967 — he called it the electroweak theory. (I was not present because I was in the United States, but I have had accounts from others who were.) Salam did not publish his ideas until the following year, when he spoke at a Nobel Symposium [25], largely perhaps because his attention was concentrated on the development in its crucial early years of his International Centre for Theoretical Physics in Trieste. Weinberg and Salam both speculated that their theory was renormalizable, but they could not prove it. An important step was the working out by Faddeev and Popov of a technique for applying Feynman diagrams to gauge theories [26]. Renormalizability was finally proved by a young student, Gerard ’t Hooft [27], in 1971, a real tour de force using methods developed by his supervisor, Martinus Veltman, especially the computer algebra programme Schoonschip.
In 1973, the key prediction of the electroweak theory, the existence of the neutral current interactions — those mediated by the Z0 — was confirmed at CERN [28]... The next major step was the discovery of the W and Z particles at CERN in 1983 [29, 30]...
In 1964, or 1967, the existence of a massive scalar boson had been a rather minor and unimportant feature. The important thing was the mechanism for giving masses to gauge bosons and avoiding the appearance of massless Nambu–Goldstone bosons. But after 1983, the Higgs boson began to assume a key importance as the only remaining undiscovered piece of the standard-model jigsaw — apart that is from the last of the six quarks, the top quark. The standard model worked so well that the Higgs boson, or something else doing the same job, more or less had to be present. Finding the boson was one of the main motivations for building the Large Hadron Collider (LHC) at CERN. Over a period of more than twenty years, the two great collaborations, ATLAS and CMS, have designed, built and operated their two huge and massively impressive detectors. As is by now well known, their efforts were rewarded in 2012 by the unambiguous discovery of the Higgs boson by each of the two detectors [31, 32].
T.W.B. Kibble, History of electroweak symmetry breaking, 2015
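As a compact reminder of the mechanism Kibble describes, the Goldstone model and its gauged version can be sketched in a few lines (standard textbook material, added here for orientation, not part of the quoted text):

```latex
\mathcal{L} = \partial_\mu \phi^{*}\,\partial^{\mu}\phi
  - \lambda\!\left(|\phi|^{2} - \tfrac{v^{2}}{2}\right)^{\!2},
\qquad
\phi = \tfrac{1}{\sqrt{2}}\,\bigl(v + h + i\chi\bigr).
```

Expanding the potential around the vacuum gives $m_h^{2} = 2\lambda v^{2}$ for the radial mode $h$, while $\chi$ stays exactly massless: it is the Nambu–Goldstone boson of the broken U(1). Gauging the symmetry instead, with $D_\mu = \partial_\mu - iqA_\mu$, the field $\chi$ can be gauged away and reappears as the longitudinal polarization of the vector boson, which acquires the mass $m_A = qv$; the surviving massive scalar $h$ is the Higgs boson.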

I think it is fair to complete the experimental success story of electroweak symmetry breaking theory with the following facts:
... in computing the theoretical predictions [of the Standard Model], one should include also the strong interactions, so the model is really the gauge theory of the group U(1)×SU(2)×SU(3). Here we shall present only a list of the most spectacular successes in the electroweak sector:
...
• The discovery of charmed particles at SLAC in 1974–1976. Their characteristic property is to decay predominantly into strange particles.
• A necessary condition for the consistency of the Model is that ∑_i Q_i = 0 inside each family. When the τ lepton was discovered, the b and t quarks were predicted with the right electric charges.
...
• The t-quark was seen at LEP through its effects in radiative corrections before its actual discovery at Fermilab.
• An impressive series of experiments has tested the Model at a level such that the weak interaction radiative corrections are important.
John Iliopoulos, 2016
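The charge-sum condition quoted above, ∑_i Q_i = 0 within each family, can be checked directly. Here is a minimal sketch for one family; the factor of 3 for quark colors is essential:

```python
from fractions import Fraction as F

# Electric charges in one fermion family, in units of e.
quark_charges = [F(2, 3), F(-1, 3)]   # up-type, down-type
lepton_charges = [F(0), F(-1)]        # neutrino, charged lepton
N_COLORS = 3                          # each quark flavor comes in 3 colors

total = N_COLORS * sum(quark_charges) + sum(lepton_charges)
print(total)  # → 0
```

Dropping the color factor would give −2/3 instead of zero, which is one way to see that quarks and leptons must conspire within each family for the anomalies to cancel.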



And now, for a nice outlook on the 125 GeV Higgs boson discovery, let us read an eminent supervisor of TeV-scale physics exploration using hadron colliders:

The most succinct summary we can give is that the data from the ATLAS and CMS experiments are developing as if electroweak symmetry is broken spontaneously through the work of elementary scalars, and that the emblem of that mechanism is the standard-model Higgs boson... 
As one measure of the progress the discovery of the Higgs boson represents, let us consider some of the questions I posed before the LHC experiments ... 
1. What is the agent that hides the electroweak symmetry? Specifically, is there a Higgs boson? Might there be several? 
To the best of our knowledge, H(125) displays the characteristics of a standard model Higgs boson, an elementary scalar. Searches will continue for other particles that may play a role in electroweak symmetry breaking. 
2. Is the “Higgs boson” elementary or composite? How does the Higgs boson interact with itself? What triggers electroweak symmetry breaking? 
We have not yet seen any evidence that H(125) is other than an elementary scalar. Searches for a composite component will continue. The Higgs-boson self-interaction is almost certainly out of the reach of the LHC; it is a very challenging target for future, very-high-energy, accelerators. We don’t yet know what triggers electroweak symmetry breaking. 
3. Does the Higgs boson give mass to fermions, or only to the weak bosons? What sets the masses and mixings of the quarks and leptons? 
The experimental evidence suggests that H(125) couples to tt̄, bb̄, and τ⁺τ⁻, so the answer is probably yes. All these are third-generation fermions, so even if the evidence for these couplings becomes increasingly robust, we will want to see evidence that H couples to lighter fermions. The most likely candidate, perhaps in High-Luminosity LHC running, is the Hµµ coupling, which would already show that the third generation is not unique in its relation to H. Ultimately, to show that spontaneous symmetry breaking accounts for the electron mass, and thus enables compact atoms, we will want to establish the Hee coupling. That is extraordinarily challenging because of the minute branching fraction...
10. What lessons does electroweak symmetry breaking hold for unified theories of the strong, weak, and electromagnetic interactions? 
Establishing that scalar fields drive electroweak symmetry breaking will encourage the already standard practice of using auxiliary scalars to hide the symmetries that underlie unified theories. 
To close, I offer a revised list of questions to build on what our first look at the Higgs boson has taught us.
Issues Sharpened by the Discovery of H(125)
1. How closely does H(125) hew to the expectations for a standard-model Higgs boson? Does H have any partners that contribute appreciably to electroweak symmetry breaking? 
2. Do the HZZ and HWW couplings indicate that H(125) is solely responsible for electroweak symmetry breaking, or is it only part of the story? 
3. Does the Higgs field give mass to fermions beyond the third generation? Does H(125) account quantitatively for the quark and lepton masses? What sets the masses and mixings of the quarks and leptons? 
4. What stabilizes the Higgs-boson mass below 1 TeV? 
5. Does the Higgs boson decay to new particles, or via new forces? 
6. What will be the next symmetry recognized in Nature? Is Nature supersymmetric? Is the electroweak theory part of some larger edifice? 
7. Are all the production mechanisms as expected? 
8. Is there any role for strong dynamics? Is electroweak symmetry breaking related to gravity through extra spacetime dimensions? 
9. What lessons does electroweak symmetry breaking hold for unified theories of the strong, weak, and electromagnetic interactions? 
10. What implications does the value of the H(125) mass have for speculations that go beyond the standard model?...for the range of applicability of the electroweak theory? 
In the realms of refined measurements, searches, and theoretical analysis and imagination, great opportunities lie before us! 
Chris Quigg, Electroweak Symmetry Breaking in Historical Perspective, 2015
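As a rough quantitative gloss on the "minute branching fraction" Quigg mentions for the Hee coupling: the tree-level width into a lepton pair scales as the lepton mass squared, so it can be estimated from the muon channel. The numerical inputs below are approximate values, not precise Standard Model predictions:

```python
# Γ(H→ℓ⁺ℓ⁻) ∝ m_ℓ² at tree level, so BR(H→ee) ≈ BR(H→μμ) · (m_e/m_μ)².
M_E, M_MU = 0.000511, 0.10566   # lepton masses in GeV (approximate)
BR_MUMU = 2.2e-4                # approximate SM value for BR(H→μμ)

br_ee = BR_MUMU * (M_E / M_MU) ** 2
print(f"BR(H→ee) ≈ {br_ee:.1e}")
```

A branching fraction at the few-times-10⁻⁹ level is what makes establishing the Hee coupling so extraordinarily challenging, even at a high-luminosity machine.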


Now, what about the role geometry plays in the game? It may be worth returning once more to the historical review by Iliopoulos:

The construction of the Standard Model, which became gradually the Standard Theory of elementary particle physics, is, probably, the most remarkable achievement of modern theoretical physics.... as we intend to show, the initial motivation was not really phenomenological. It is one of these rare cases in which a revolution in physics came from theorists trying to go beyond a simple phenomenological model, not from an experimental result which forced them to do so. This search led to the introduction of novel symmetry concepts which brought geometry into physics...
At the beginning of the twentieth century the development of the General Theory of Relativity offered a new paradigm for a gauge theory. The fact that it can be written as the theory invariant under local translations was certainly known to Hilbert, hence the name of Einstein–Hilbert action. The two fundamental forces known at that time, namely electromagnetism and gravitation, were thus found to obey a gauge principle. It was, therefore, tempting to look for a unified theory... 
The transformations of the vector potential in classical electrodynamics are the first example of an internal symmetry transformation, namely one which does not change the space–time point x. However, the concept, as we know it today, belongs really to quantum mechanics. It is the phase of the wave function, or that of the quantum fields, which is not an observable quantity and produces the internal symmetry transformations. The local version of these symmetries are the gauge theories we study here. The first person who realised that the invariance under local transformations of the phase of the wave function in the Schrödinger theory implies the introduction of an electromagnetic field was Vladimir Aleksandrovich Fock in 1926, just after Schrödinger wrote his equation... 
In 1929 Hermann Klaus Hugo Weyl extended this work to the Dirac equation. In this work he introduced many concepts which have become classic, such as the Weyl two-component spinors and the vierbein and spin-connection formalism. Although the theory is no longer scale invariant, he still used the term gauge invariance, a term which has survived ever since.
Naturally, one would expect non-Abelian gauge theories to be constructed following the same principle immediately after Heisenberg introduced the concept of isospin in 1932. But here history took a totally unexpected route.  
The first person who tried to construct the gauge theory for SU(2) is Oskar Klein who, at an obscure conference in 1938, presented a paper with the title: On the theory of charged fields. The most amazing part of this work is that he follows an incredibly circuitous road: he considers general relativity in a five-dimensional space and compactifies à la Kaluza–Klein. Then he takes the limit in which gravitation is decoupled. In spite of some confused notation, he finds the correct expression for the field strength tensor of SU(2). He wanted to apply this theory to nuclear forces by identifying the gauge bosons with the new particles which had just been discovered (in fact the muons), misinterpreted as the Yukawa mesons in the old Yukawa theory in which the mesons were assumed to be vector particles. He considered massive vector bosons and it is not clear whether he worried about the resulting breaking of gauge invariance.
The second work in the same spirit is due to Wolfgang Pauli who, in 1953, in a letter to Abraham Pais, developed precisely this approach: the construction of the SU(2) gauge theory as the flat space limit of a compactified higher-dimensional theory of general relativity...  
It seems that the fascination which general relativity had exerted on this generation of physicists was such that, for many years, local transformations could not be conceived independently of general coordinate transformations. Yang and Mills were the first to understand that the gauge theory of an internal symmetry takes place in a fixed background space which can be chosen to be flat, in which case general relativity plays no role...
In particle physics we put the birth of non-Abelian gauge theories in 1954, with the fundamental paper of Chen Ning Yang and Robert Laurence Mills. It is the paper which introduced the SU(2) gauge theory and, although it took some years before interesting physical theories could be built, it is since that date that non-Abelian gauge theories became part of high energy physics. It is not surprising that they were immediately named Yang–Mills theories. Although the initial motivation was a theory of the strong interactions, the first semi-realistic models aimed at describing the weak and electromagnetic interactions. In fact, following the line of thought initiated by Fermi, the theory of electromagnetism has always been the guide to describe the weak interactions... 
Gauge invariance requires the conservation of the corresponding currents and a zero mass for the Yang–Mills vector bosons. None of these properties seemed to be satisfied for the weak interactions. People were aware of the difficulty, but had no means to bypass it. The mechanism of spontaneous symmetry breaking was invented a few years later in 1964... The synthesis of Glashow’s 1961 model with the mechanism of spontaneous symmetry breaking was made in 1967 by Steven Weinberg, followed a year later by Abdus Salam... Many novel ideas were introduced in this paper, mostly connected with the use of spontaneous symmetry breaking, which became the central point of the theory.
Gauge theories contain three independent worlds. The world of radiation with the gauge bosons, the world of matter with the fermions and the world of BEH scalars. In the framework of gauge theories these worlds are essentially unrelated to each other. Given a group G the world of radiation is completely determined, but we have no way to know a priori which and how many fermion representations should be introduced; the world of matter is, to a great extent, arbitrary.  
This arbitrariness is even more disturbing if one considers the world of BEH scalars. Not only are their number and their representations undetermined, but their mere presence introduces a large number of arbitrary parameters into the theory. Notice that this is independent of our computational ability, since these are parameters which appear in our fundamental Lagrangian. What makes things worse is that these arbitrary parameters take a wild range of values. From the theoretical point of view, an attractive possibility would be to connect the three worlds with some sort of symmetry principle. Then the knowledge of the vector bosons will determine the fermions and the scalars, and the absence of quadratically divergent counterterms in the fermion masses will forbid their appearance in the scalar masses. We shall call such transformations supersymmetry transformations and we see that a given irreducible representation will contain both fermions and bosons. It is not a priori obvious that such supersymmetries can be implemented consistently, but in fact they can.
... supersymmetric field theories have remarkable renormalisation properties [57] which make them unique. In particular, they offer the only field theory solution of the hierarchy problem. Another attractive feature refers to grand unification. The presence of the supersymmetric particles modifies the renormalisation group equations and the effective coupling constants meet at high scales...   
An interesting extension consists of considering gauge supersymmetry transformations, i.e. transformations whose infinitesimal parameters — which are anticommuting spinors — are also functions of the space–time point x... 
The miraculous cancellation of divergences we find in supersymmetry theories raises the hope that the supersymmetric extension of general relativity will give a consistent quantum field theory. In fact local supersymmetry, or “supergravity”, is the only field theoretic extension of the Standard Model which addresses the issue of quantum gravity...

N=8 supergravity promised to give us a truly unified theory of all interactions, including gravitation, and a description of the world in terms of a single fundamental multiplet. The main question is whether it defines a consistent field theory. At the moment we have no clear answer to this question, although it sounds rather unlikely. In some sense N=8 supergravity can be viewed as the end of a road, the road of local quantum field theory. The usual response of physicists whenever faced with a new problem was to seek the solution in an increase of the symmetry. This quest for larger and larger symmetry led us to the standard model, to grand unified theories and then to supersymmetry, to supergravity and, finally, to the largest possible supergravity, that with N=8. In the traditional framework in which we are working, that of local quantum field theory, there exists no known larger symmetry scheme...
Iliopoulos (Id.)

I let the reader compare Iliopoulos's claims above about supergravity with the following statement by Connes about the potential bonus offered by his geometric perspective, in order to judge which adheres more closely to the two guidelines that helped make the Standard Theory: i) a phenomenological approach, in which the introduction of every new concept is motivated by the search for a consistent theory that agrees with experiment, and ii) mathematical consistency.
... the point of view adopted in this essay is to try to understand from a mathematical perspective, how the perplexing combination of the Einstein-Hilbert action coupled with matter, with all the subtleties such as the Brout-Englert-Higgs sector, the V-A and the see-saw mechanisms etc. can emerge from a simple geometric model. The new tool is the spectral paradigm and the new outcome is that geometry does emerge from purely Hilbert space and operator considerations, i.e. on the stage where Quantum Mechanics happens. The idea that group representations as operators in Hilbert space are relevant to physics is of course very familiar to every particle theorist since the work of Wigner and Bargmann. That the formalism of operators in Hilbert space encompasses the variable geometries which underlie gravity is the leitmotiv of our approach. In order to estimate the potential relevance of this approach to Quantum Gravity, one first needs to understand the physics underlying the problem of Quantum Gravity.... Quoting from [40]: “Quantization of gravity is inevitable because part of the metric depends upon the other fields whose quantum nature has been well established”. Two main points are that the presence of the other fields forces one, due to renormalization, to add higher derivative terms of the metric to the Lagrangian and this in turn introduces at the quantum level an inherent instability that would make the universe blow up. This instability is instantly fatal to an interacting quantum field theory. Moreover primordial inflation prevents one from fixing the problem by discretizing space at a very small length scale. What our approach permits is to develop a “particle picture” for geometry and a careful reading of this paper should hopefully convince the reader that this particle picture stays very close to the inner workings of the Standard Model coupled to gravity.
For now the picture is limited to the “one-particle” description and there are deep purely mathematical reasons to develop the many particles picture.
Alain Connes
(still draft version February 21, 2017)

Beyond the somewhat vain* comparison of the respective merits of the two approaches to unifying the Standard Model interactions with gravitation at the Planck scale, one cannot help noticing how different their geometrical premises are [*as long as no experimental result makes the decision]. On one side stands supergravity, the boldest symmetric extension of local quantum gauge field theories on traditional but higher-dimensional spacetimes, with the hope of quantizing gravity. On the other side, one contemplates an original reformulation and slight but radical extension of spacetime, in a framework derived from quantum mechanics, with the full Standard Model theory emerging from an action principle inspired by general relativity.

As a consequence, the grand unification scheme present in both approaches nevertheless follows quite distinct paths. In the evocative words of some bold pioneers of spectral noncommutative phenomenology:

... at the higher [unification scale Λ]... it is not the particle spectrum that changes, but the geometry of spacetime itself. We shall assume that the (commutative) Riemannian geometry of spacetime is only a low energy approximation of a – not yet known – noncommutative geometry. Being noncommutative, this geometry has radically different short distance properties and is expected to produce quite a different renormalisation flow... At energies below Λ, this noncommutativity manifests itself only in its mild, almost commutative version through the gauge- and Higgs-fields of the standard model, which are magnetic-like fields accompanying the gravitational field
Marc Knecht and Thomas Schucker, Spectral action and big desert, 2006

To insist now on the foresights, one also finds two very different landscapes. Roughly speaking:

- Focussing on a solution to the naturalness problem of the Brout-Englert-Higgs scalar boson, supersymmetry predicts a new superparticle spectrum: from the knowledge of the vector bosons it determines the fermions and the scalars, and the absence of quadratically divergent counterterms in the fermion masses forbids their appearance in the scalar masses. One can then hopefully envision a supergravity theory amenable to quantizing gravitation.
- Looking for a geometric understanding of the electroweak symmetry breaking, the spectral noncommutative framework distills the full scalar and vector boson spectra from the knowledge of the spin one-half fermion spectrum of the current Standard Model, minimally completed with three right-handed Majorana neutrinos (required to explain neutrino oscillations through a type I seesaw mechanism). Its operator-theoretic formalism develops a “particle picture” for geometry that stays very close to the inner workings of the Standard Model coupled to gravity, and it already makes it possible to describe a volume-quantized 4D spacetime with Euclidean signature, translating phenomenologically into mimetic dark energy and dark matter models.


Considering the fact that no experimental evidence for supersymmetric particles has been found yet, one may then appreciate, from a heuristic point of view, the potential relevance of the spectral noncommutative geometrization of the Standard Model leading to a minimal Pati-Salam extension. The latter indeed provides a unification of the electroweak and strong gauge interactions whose particle spectrum is quite close to that of the non-supersymmetric minimal SO(10) models currently consistent with neutrino oscillation data; such models go beyond the Standard Model (and thus fall outside the scope of Iliopoulos's review) and also accommodate a leptogenesis scenario able to explain the asymmetry between matter and antimatter.

Finally, one may add the following from a more consistent* effective field theory perspective.
The spectral Standard Model post-diction of the 125 GeV mass for the Higgs boson that breaks the electroweak symmetry requires a very small mixing with a "big" Higgs brother responsible for a Pati-Salam symmetry breaking at around 10^12 GeV, consistent with a seesaw mechanism able to explain the known data on left-handed neutrinos. Even if the naturalness problem is not settled here, it is phenomenologically encouraging that the already discovered Higgs boson may talk with a very high seesaw scale, well motivated as a natural effective field theory explanation of the very low masses of active neutrinos. The ultra-heavy singlet scalar could also help to unitarize the theory in the sub-Planckian regime where inflation happens. Last but not least, one may recall that, provided the arbitrary mass scale in the spectral action is made dynamical by introducing a dilaton field, the resulting action is almost identical to the one proposed for making the Standard Model scale invariant; it has the same low-energy limit as the Randall-Sundrum model and, remarkably, all desirable features with correct signs for the relevant terms are obtained uniquely and without any fine-tuning.
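The compatibility invoked above between a Pati-Salam breaking scale near 10^12 GeV and the observed lightness of active neutrinos can be checked against the standard type I seesaw estimate (illustrative numbers, not a fit to data):

```latex
m_\nu \simeq \frac{m_D^{2}}{M_R},
\qquad
m_D \sim 1\text{--}100\ \text{GeV},\quad M_R \sim 10^{12}\ \text{GeV}
\;\Longrightarrow\;
m_\nu \sim 10^{-3}\text{--}10\ \text{eV}.
```

For Dirac masses somewhat below the electroweak scale, this bracket comfortably contains the sub-eV mass range indicated by oscillation data, which is the sense in which such a high seesaw scale is said to be well motivated.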

Whatever path space-time-matter-radiation has taken to cool down to today's cosmological background temperature, one may conclude that the spectrum of particles required for an electroweak symmetry breaking theory consistent with energies beyond the TeV scale has not been fully probed yet. Whether this search will bring a novel symmetry concept to tame the feared quantum instabilities of the Higgs scalar, and whether it will require bringing noncommutative geometry into physics to do so, only the future will tell; but maybe the past, lying in the dark sky, already knows...




* About the role of consistency in theory choice, I would like to offer the following thoughts, which seem to me particularly relevant at the present time for obvious reasons:

One of the most interesting questions in philosophy of science is how to determine the quality of a theory. Given the data, how can we infer a “best explanation” for the data? This often goes by the name “Inference to Best Explanation” (IBE) [1, 2, 3]. The wide variety of claims for important criteria is a measure of how difficult it is to come up with a clear and general algorithm for choosing between theories. Some even claim that it is intrinsically not possible to come up with a methodology for deciding.

... in our discussion of IBE criteria... we must first ask ourselves what is non-negotiable. Falsifiability is clearly something that can be haggled over. Simplicity is subject to definitional uncertainty, and furthermore has no universally accepted claim to preeminence. Naturalness, calculability, unifying ability, predictivity, etc. are also subject to preeminence doubts

What is non-negotiable is consistency. A theory shown definitively to be inconsistent does not live another day. It might have its utility, such as Newton’s theory of gravity for crude approximate calculations, but nobody would ever say it is a better theory than Einstein’s theory of General Relativity.

Consistency has two key parts to it. The first is that what can and has been computed must be consistent with all known observational facts. As Murray Gell-Mann said about his early graduate student years, “Suddenly, I understood the main function of the theoretician: not to impress the professors in the front row but to agree with observation [10].” Experimentalists of course would not disagree with this non-negotiable requirement of observational consistency. If you cannot match the data what are you doing, they would say?


However, theorists have a more nuanced approach to establishing observational consistency. They often do not spend the time to investigate all the consequences of their theories. Others do not want to “mop up” someone else’s theory, so they are not going to investigate it either. We often get into a situation of a new theory being proposed that solves one problem, but looks like it might create dozens of other incompatibilities with the data but nobody wants to be bothered to compute it. Furthermore, the implications might be extremely difficult to compute.

Sometimes judgment must be suspended in the competition between excellent theories and observational consequences. Lord Kelvin claimed Darwin’s ideas on evolution could not be right because the sun could not burn long enough to enable the long-term evolution over millions of years that Darwin knew was needed. Darwin rightly ignored such arguments, siding with the geologists who said the earth appeared to be millions of years old [11]. Of course we now know that Kelvin made a bad inference because he did not know about the fusion source of burning within the sun, which can sustain its heat output for billions of years.

The second part of consistency is mathematical consistency. There are numerous examples in the literature of subtle mathematical consistency issues that need to be understood in a theory. Massive gauge theories looked inconsistent for years until the Higgs mechanism was understood. Some gauge theories one can dream up are “anomalous” and inconsistent. Some forms of string theory are inconsistent unless there are extra spatial dimensions. Extra time dimensions appear to violate causality, even when one tries to demand it from the outset, thereby rendering the theory inconsistent. Theories with ghosts, which may not be obvious upon first inspection, give negative probabilities of scattering.
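To illustrate the ghost problem schematically, here is a textbook example (not a model discussed in the text above): a free scalar with a higher-derivative kinetic term, where the scale Λ is an arbitrary assumption of the sketch.

```latex
% Higher-derivative scalar, schematic Lagrangian with cutoff scale \Lambda:
%   L = -\tfrac{1}{2}\,\phi\left(\Box + \Box^2/\Lambda^2\right)\phi
% The momentum-space propagator splits by partial fractions:
\frac{1}{p^2 - p^4/\Lambda^2}
  \;=\; \frac{1}{p^2} \;-\; \frac{1}{p^2 - \Lambda^2}
% The relative minus sign means the heavy pole at p^2 = \Lambda^2 has
% negative residue: a negative-norm "ghost" state, which is exactly what
% produces negative scattering probabilities when it appears in amplitudes.
```

The point of the sketch is that nothing looks wrong in the Lagrangian at first inspection; the ghost only shows itself once the propagator is decomposed.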
Mathematical consistency is subtle and hard at times, and, as with observational consistency, there is no theorem saying that it can be established to comfortable levels by theorists on time scales convenient to humans. Sometimes the inconsistency is too subtle for the scientists to see right off. Other times the mathematical consistency question is too difficult to compute for a definitive answer, and it is a “coin flip” whether the theory is ultimately consistent or not. For example, pseudomoduli potentials that could cause a runaway problem are incalculable in some interesting dynamically broken supersymmetric theories [12].

It is not controversial that observational consistency and mathematical consistency are non-negotiable; however, the due diligence given to them in theory choice is often lacking. The establishment of observational consistency or mathematical consistency can remain in an embryonic state for years while research dollars flow and other IBE criteria become more motivational factors in research and inquiry, and the consistency issues become taken for granted.

This is one of the themes of Gerard ‘t Hooft’s essay “Can there be physics without experiments?”. He reminds the reader that some of the grandest theories are investigations of the nature of spacetime at the Planck scale, which is many orders of magnitude beyond where we currently have direct experimental probes. If this is to continue as a physics enterprise it “may imply that we should insist on much higher demands of logical and mathematical rigour than before.” Despite the weakness of the verb tense employed, it is an incontestable point. It is in these Planckian theories, such as string theory and loop quantum gravity, that the lack of consistency rigor is so plainly unacceptable. However, the cancer of lax attention to consistency can spread fast in an environment where theories and theorists are feted before vetted.

(2012)

//Added on February 28:


This long retrospective analysis of the now fifty-year-old story of the electroweak symmetry breaking mechanism has been carried out in the light of the experimental discovery of the 125 GeV resonance at LHC Run 1, and through the prism of its geometrization with a tentative noncommutative bias, to uncover a new spectrum of bright colours entangled in the pale glow of beyond-the-Standard-Model physics.


As reported above, Iliopoulos explains nicely in his review how Yang and Mills succeeded in providing the first geometric setting to describe quantum non-abelian gauge fields, focusing on the interpretation of the latter as internal symmetries in a fixed background space where general relativity plays no role (even if it inspired them). It’s hard to miss the reversal, and the more extensive move, operated by the spectral noncommutative paradigm of Connes and Chamseddine, who have patiently built and polished a new mathematically and experimentally coherent geometric spectral Standard Model in which the internal symmetries appear in a natural manner as a slight refinement of the algebraic rules of coordinates (different from supersymmetry).

Yang–Mills theories were first criticized by Pauli, as their quanta had to be massless in order to maintain gauge invariance. The theory was thus set aside for a while, until the discovery that particles can acquire mass through symmetry breaking in massless theories triggered a significant restart of Yang–Mills theory studies.

As far as spectral geometric models are concerned, they are at best marginally quoted in reviews and rarely considered seriously. What major advance will prompt a significant interest in the physics community is hard to anticipate. One can hope the already established connection of some mimetic gravity models with a possible quantization of the volume of spacetime will light the fire for a new kind of investigation into the dark sector of the cosmological standard model…

To come back to the ground, another obstacle to a more extensive study of spectral models is the emptiness of their expected spectrum of new fundamental particles to discover with man-made accelerators; but then, this is also the perspective sketched by the study of minimal but realistic grand unified SO(10) models, or the recent SMASH models, all accommodating the full spectrum of low-energy phenomenology (with the exception of a very light axion).

Hopefully there is more to search for with nuclear reactors and hadron or lepton colliders than new elementary particles! A lot of physicists are involved in flavor mixing, for instance. It could be that noncommutative geometry gives a fresh look here too.

For the theorist, a criticism of spectral noncommutative geometry might come from the prejudice against models that do not provide a solution to the naturalness problem. Maybe this requirement could be suspended for a while, waiting for a more extensive study of the fine-tuned "parameters" (coming from new degrees of freedom like a singlet scalar and right-handed neutrinos) computable from the spectral action principle or required to make it mathematically coherent. Indeed, these parameters, involved in the renormalisation flow, would have values constrained over the full energy spectrum: from the low-energy scale to the unification one, in order to tame the quantum mass corrections to the Higgs boson, and also at the intermediate seesaw scale, to accommodate left-handed neutrino masses and the leptogenesis cosmological scenario. If such a scenario were miraculously possible, it could help to uncover some new hidden symmetry from possible accidental cancellations in the quadratic divergence of some extended versions of the Standard Model Higgs sector ...
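The quantum mass corrections to the Higgs boson mentioned above can be made concrete with the well-known one-loop Standard Model estimate (Veltman's condition); the momentum cutoff Λ below is a schematic regulator, not a prediction of the spectral models discussed here.

```latex
% One-loop quadratically divergent correction to the Higgs mass,
% with momentum cutoff \Lambda (Veltman's estimate):
\delta m_H^2 \;\simeq\; \frac{3\Lambda^2}{8\pi^2 v^2}
  \left( m_H^2 + 2 m_W^2 + m_Z^2 - 4 m_t^2 \right)
% The top-quark term dominates and makes the bracket large and negative;
% "taming" the correction means new degrees of freedom (e.g. a singlet
% scalar or right-handed neutrinos) whose loop contributions push the
% bracket toward an accidental cancellation.
```

This is the sense in which new degrees of freedom constrained along the renormalisation flow could conspire to soften the fine-tuning.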


//Third fragment of a tentative "Fragments of a lover's dictionary on spectral physics".
