Thursday, March 16, 2017

The cħoreography of Alice and Bob on the horizon of a black hole

While awaiting the hypothetical rendezvous of the quanta of spacetime with the entangled pairs of Hawking particles ...

Physicists, listen, this verse is for you
The black hole is a star of fickle size
That will not let itself be quantized by you
Its quanta hide on the horizon facing us
When we think it full, its heart, empty for us,
vibrates, vibrates while there travel
 and dance, paired up, its quanta in a blurry ballet
Look for them, they are a little bit everywhere...
to a famous tune by the "fou chantant"


Just as the Sun has a rendezvous with the Moon in the famous French song by Charles Trenet, general relativity and quantum mechanics must meet somewhere, for instance on the horizon of black holes. The modalities of this meeting have not yet been observed, nor probably even completely understood, despite the "thermodynamic" theory initiated in the 1970s and popularized in the 1990s by, among others, Stephen Hawking. That theory does bring elements of information that convince the majority of physicists, by postulating an emission process in which particles emerge from the black hole via fluctuations of the quantum vacuum and lead to a slow evaporation of the dark star because of its gravitational coupling to the rest of the universe ... Yet there exists a theory, if not dissident then at least far less popularized, developed with patience and obstinacy by Gerard 't Hooft, a Dutch theoretician and quantum hero recognized by his peers but unknown to the general public, which does not give up on a unitary quantum description of the evolution of black holes. His reflections, begun in the eighties, have been enriched with new ideas and with a hypothesis, if not original then at least almost forgotten and recently brought back to the fore, allowing a significant advance already discussed on this blog here and ...


Today's post looks at the work of a trio of young physicists, educated like the vast majority of their contemporaries at the generous breast of string theory, who explore the path opened by their illustrious elder with their own conceptual tools. This makes it possible to appreciate 't Hooft's work from a different angle and gives us the occasion to present a fairly clear pictorial tableau of the ballet choreographed by quantum mechanics and the gravitational interaction between Alice and Bob, or more explicitly between a matter wave packet "crossing" the horizon of a black hole and a pair of Hawking particles quantum-entangled antipodally on that horizon:

We revisit the old black hole S-Matrix construction and its new partial wave expansion of ’t Hooft. Inspired by old ideas from non-critical string theory & Matrix Quantum Mechanics, we reformulate the scattering in terms of a quantum mechanical model—of waves scattering off inverted harmonic oscillator potentials—that exactly reproduces the unitary black hole S-Matrix for all spherical harmonics; each partial wave corresponds to an inverted harmonic oscillator with ground state energy that is shifted relative to the s-wave oscillator. Identifying a connection to 2d string theory allows us to show that there is an exponential degeneracy in how a given total initial energy may be distributed among many partial waves of the 4d black hole...

Antipodal entanglement 
Unitarity of the S-Matrix demands that both the left and right exteriors in the two-sided Penrose diagram need to be accounted for; they capture the transmitted and reflected pieces of the wave-function, respectively. In the quantum mechanics model, there appears to be an ambiguity of how to associate the two regions I and II of the scattering diagram in Fig. 1 to the two exteriors of the Penrose diagram. We saw, in the previous section, that the quantum mechanical model appears to support the creation of physical black holes by exciting appropriate oscillators. Therefore, in this picture there is necessarily only one physical exterior. To resolve the issue of two exteriors, it was proposed that one must make an antipodal identification on the Penrose diagram [19]; see figure 3. Unitarity is arguably a better physical consistency condition than a demand of the maximal analytic extension. The precise identification is given by x → Jx with  
J : (u⁺, u⁻, θ, φ) ←→ (−u⁺, −u⁻, π − θ, π + φ).                  (5.1)
[... the simpler mapping of identifying points in I, II via (u⁺, u⁻, θ, φ) ↔ (−u⁺, −u⁻, θ, φ) is singular on the axis u⁺ = u⁻ = 0]. Note that J has no fixed points and is also an involution, in that J² = 1. Such an identification implies that spheres on antipodal points in the Penrose diagram are identified with each other. In particular, this means 
u±(θ, φ) = − u±(π − θ, π + φ) and p±(θ, φ) = − p±(π − θ, π + φ).     (5.2) 
Therefore, noting that the spherical harmonics then obey Y_{l,m}(π−θ, π+φ) = (−1)^l Y_{l,m}(θ, φ), we see that only those modes with an l that is odd contribute. However, owing to the validity of the S-Matrix only in the region of space-time that is near the horizon, this identification is presumably valid only in this region. 
Global identifications of the two exteriors have been considered in the past [56–58]. The physics of the scattering, with this identification, is now clear. In-going wave-packets move towards the horizon where gravitational back-reaction is strongest according to an asymptotic observer. Most of the information then passes through the antipodal region and a small fraction is reflected back. Turning on quantum mechanics implies that ingoing position is imprinted on outgoing momenta and consequently, a highly localised ingoing wave-packet transforms into two outgoing pieces—transmitted and reflected ones—but both having highly localised momenta. Their positions, however, are highly delocalised. This is how large wavelength Hawking particles are produced out of short wavelength wave-packets and an IR-UV connection seems to be at play. Interestingly, the maximal entanglement between the antipodal out-going modes suggests a wormhole connecting each pair [59]; the geometric wormhole connects the reflected and transmitted Hilbert spaces. Furthermore, as the study of the Wigner time-delay showed, the reflected and transmitted pieces arrive with a time-delay that scales logarithmically in the energy of the in-going wave. This behaviour appears to be very closely related to scrambling time (not the lifetime of the black hole) and we leave a more detailed investigation of this feature to the future. One may also wonder why transmitted pieces dominate the reflected ones. It may be that the attractive nature of gravity is the actor behind the scene. 
Approximate thermality
We now turn to the issue of thermality of the radiated spectrum. Given a number density, say Nin(k) as a function of the energy k, we know that there is a unitary matrix that relates it to radiated spectrum. This unitary matrix is precisely the S-Matrix of the theory...
In our context, since we do not yet have a first principles construction of the appropriate second quantised theory, this in-state may be chosen. For instance, a simple pulse with a wide-rectangular shape would suffice. One may hope to create such a pulse microscopically, by going to the second quantised description and creating a coherent state. Alternatively, one may hope to realize a matrix quantum mechanics model that realizes a field theory in the limit of large number of particles. After all, we know that each oscillator in our model really corresponds to a partial wave and not a single particle in the four dimensional black hole picture.

Second Quantization v/s Matrix Quantum Mechanics

Given the quantum mechanical model we have studied in this article, we may naively promote the wave-functions ψlm into fields to obtain a second quantized Lagrangian: ...
The form of the Lagrangian being first order in derivatives indicates that the Rindler fields are naturally fermionic. In this description we have a collection of different species of fermionic fields labelled by the {l, m} indices. And the interaction between different harmonics would correspond to interacting fermions of the kind above. The conceptual trouble with this approach is that each “particle” to be promoted to a field is in reality a partial wave as can be seen from the four-dimensional picture. Therefore, second quantizing this model may not be straight-forward [20]. It appears to be more appealing to think of each partial wave as actually arising from an N-particle matrix quantum mechanics model which in the large-N limit yields a second quantized description. Since N counts the number of degrees of freedom, it is naturally related to c via 
1/N² ∼ c = 8πG/R² ∼ l_P²/R².                   (5.8) 
Therefore, N appears to count the truly microscopic Planckian degrees of freedom that the black hole is composed of. The collection of partial waves describing the Schwarzschild black hole would then be a collection of such N-particle matrix quantum mechanics models. Another possibility is to describe the total system in terms of a single matrix model but including higher representations/non-singlet states to describe the higher l modes. This seems promising because if one fixes the ground state energy of the lowest l=0 (or l=1 after antipodal) oscillator, the higher l oscillators have missing poles in their density of states compared to the l=0, much similar to what was found for the adjoint and higher representations in [60]...
To sharpen any microscopic statements about the S-matrix, one might first need to derive an MQM model that regulates Planckian effects.

(Submitted on 26 Jul 2016 (v1), last revised 23 Nov 2016 (this version, v2))
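
A compact way to see the odd-l selection rule quoted above is to note that the antipodal map (5.1) acts on the sphere as (θ, φ) → (π − θ, π + φ), under which the spherical harmonics pick up a factor (−1)^l. The following lines are only a reading aid of my own, not taken from the paper, spelling out the step from (5.2) to the statement that even-l partial waves drop out:

```latex
% Expand the light-cone coordinates of the horizon in partial waves:
%   u^{\pm}(\theta,\varphi) = \sum_{l,m} u^{\pm}_{lm}\, Y_{lm}(\theta,\varphi).
% The antipodal identification (5.2) then gives
\begin{align*}
u^{\pm}(\theta,\varphi) &= -\,u^{\pm}(\pi-\theta,\,\pi+\varphi)
  = -\sum_{l,m} u^{\pm}_{lm}\,(-1)^{l}\,Y_{lm}(\theta,\varphi) \\
\Rightarrow\quad u^{\pm}_{lm}\,\bigl(1+(-1)^{l}\bigr) &= 0
  \qquad\Longrightarrow\qquad u^{\pm}_{lm}=0 \ \text{for even } l,
\end{align*}
% so only odd-l modes survive, which is why the text speaks of "l = 1 after
% antipodal" as the lowest oscillator.  J itself is an involution with no fixed
% point: J^2 shifts \varphi by 2\pi (the identity map), while a fixed point
% would require \varphi = \pi + \varphi \ (\mathrm{mod}\ 2\pi), which is impossible.
```

For orientation on the last quoted relation, (5.8) reads 1/N² ∼ l_P²/R², i.e. N ∼ R/l_P, which is roughly 10³⁸ for a solar-mass horizon radius of about 3 km; N² then counts the horizon area in Planckian units.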


May other physicists, whose ears are this time familiar with the hieratic noncommutative spectral chant, follow the preceding example and enrich in their turn, with their own repertoire, the mysterious music of the quantum cℏoir of black holes revealed by G. 't Hooft, by finding how to tune his pairs of antipodally entangled particles on the black hole horizon with the two kinds of volume quanta underlying the four-dimensional spinorial spacetime solutions of the Chamseddine-Connes-Mukhanov equation.


//Title of the post and body of the French text modified on March 25, 2017

Wednesday, March 8, 2017

The birth of (an intuition about) at(t)oms of spacetime

Having been thinking for some time now about how to conceptually grasp and empirically catch attoms of spacetime, I could not fail to share this interesting and accessible lecture, taking the opportunity today to demonstrate the progress made by the physics community in putting women's contributions in the forefront!

Fay Dowker Public Lecture - 
Spacetime Atoms and the Unity of Physics (Perimeter Public Lecture)


One will find below a reading proposal to experience once more the very pedagogical qualities of Fay Dowker (I am not competent to pass judgment on her research) as she writes about the dynamics of spacetime in general relativity:

... I wish to suggest, our inability to reach consensus on the passage of time could be a consequence of our not yet having made the necessary scientific progress. We do not have a successful theory of spacetime that coherently incorporates the quantum nature of the physical world, so we do not yet know the nature of the deep structure of spacetime. Some of the observational facts on which the new theory will be built may, therefore, now be only roughly communicable and our sense-experience of the passage of time may be an example of such a fact. In the last decades, however, progress on one approach to finding a theory of quantum spacetime – or quantum gravity as it is usually called – affords us a forward look at how the passage of time may eventually find a place in science. The approach, causal set theory, is based on the hypothesis that spacetime is fundamentally granular or atomic at the Planck scale and this atomicity opens the door to new dynamical possibilities for spacetime and, hence, to a new perspective on the dichotomy of Being and Becoming. In this article I will describe this progress and will expand upon R. D. Sorkin’s claim that it gives us scientific purchase on the concept of passage [2].
 Spacetime is a continuous, smooth, four-dimensional material that bends, warps and ripples according to dynamical law as specified by the Einstein equations. Even when there is no matter present, “empty” spacetime itself can carry energy in the form of gravitational waves. Indeed this is the explanation for the variation in the rotation rate of the Hulse-Taylor binary pulsar system, which can be accurately modelled as a system losing energy via this gravitational radiation. The spacetime material is, however, very different from those substances that populated pre-relativistic physical theory in that it is intrinsically four-dimensional. It cannot be understood as a three-dimensional entity – “space” – evolving in time because that would imply a global time in which the evolution occurs and there is no such global, physical time in General Relativity (GR). The notion that at one moment of time there is space, a 3-d geometry, and at the next moment space has evolved to another 3-d geometry is wrong in GR. There is no such physically meaningful entity as 3-d space, no physically meaningful slicing of spacetime into space-changing-in-time. 
Having focussed on what spacetime in GR is not, we can ask what it is. The structure of spacetime that takes centre stage in understanding the physics of GR is its causal structure. This causal structure is a partial order on the points of spacetime.1 Given two points of spacetime, call them A and B, they will either be ordered or unordered. If they are ordered then one, let’s say A without loss of generality, precedes – or, is to the past of – the other, B. This ordering is transitive: if A precedes B and B precedes C then A precedes C. The order is partial because there are pairs of spacetime points such that there is no physical sense in which one of the pair precedes the other, they are simply unordered. This lack of ordering does not mean the points of the pair are simultaneous because that would imply they occur at the same “time” and require the existence of a global time for them to be simultaneous in. Again: global time does not exist in GR. 
This partial ordering of the points of spacetime is referred to as the causal structure of spacetime because it coincides with the potential for physical effects to propagate. Physical effects can propagate from A to B in spacetime if and only if A precedes B in the causal structure. If two spacetime points are unordered then no physical effect can propagate from one to the other because to do so would require something physical to travel faster than light. Causal structure plays a central role in GR and indeed the epitome of the theory, a black hole, is defined in terms of the causal structure: it is a region of spacetime such that nothing that happens in that region can affect anything outside the region. It is only by thinking of a black hole in terms of causal structure that its physics can be understood. [As an example of this, it is very difficult to answer the question, “Does someone falling feet first into a black hole lose sight of their feet as their feet cross the horizon?” without drawing the conformal “Penrose” diagram that depicts the causal structure.] 
If there is no global, universal time, where do we find within GR the concept of physical time at all? Physical time in GR is associated, locally and “personally,” with every localised physical system, in a manner that more closely reflects our intimate experience of time than the global time of pre-relativistic Newtonian mechanics. Each person or object traces out a trajectory or worldline through spacetime, a continuous line in spacetime that is totally ordered by the causal order: for any 2 points on the worldline, one precedes the other. GR also provides a quantitatively precise concept of proper time that elapses along each timelike worldline. A clock carried by a person following a worldline through spacetime will measure this proper time as it elapses, locally along the worldline. Viewed from this perspective, the famous “twin paradox” is no longer a paradox: two people who meet once, then follow different worldlines in spacetime and meet a second time in the future will in general have experienced different amounts of proper time – real, physical time – elapsing along their different worldlines between the meetings. Clocks are “odometers for time” along worldlines through spacetime. The remarkable thing, from this perspective, is that we get by in everyday life quite well under the assumption that there is a global Now, a universal global time, and that we can synchronise our watches when we meet, then do different things and when we meet again our watches will still be synchronised. GR explains this because it predicts that as long as the radius of curvature of spacetime is large compared to the physical scale of the system and the relative velocities of the subsystems involved are small compared with the speed of light, the differences in proper time that elapse along our different worldlines will be negligible. We can behave as if there is a global time, a global moment of Now, because for practical everyday purposes our clocks will remain synchronised to very high precision.    
In addition to being the key to understanding GR, the causal structure of spacetime is a unifying concept. Theorems by Kronheimer and Penrose [4], Hawking [5] and Malament [6] establish that the causal order unifies within itself topology, including dimension, differentiable structure (smoothness) and almost all the spacetime geometry. The only geometrical information that the causal structure lacks is local physical scale.[Technically, the result states that if two distinguishing spacetimes are causally isomorphic then they are conformally isometric. In 4 dimensions this implies that the causal structure provides 9/10 of the metrical information as the metric is given by a symmetric 4 × 4 matrix field of 10 spacetime functions, 9 of which can be fixed in terms of the 10th.]  This local scale information can be furnished by providing the spacetime volume of every region of spacetime or, alternatively, the amount of proper time that elapses – the duration – along every timelike worldline. In the continuum, the causal structure and local scale information complement each other to give the full geometry of spacetime, the complete spacetime fabric... 
There are strong, physical arguments that the smooth manifold structure of spacetime must break down at the Planck scale where quantum fluctuations in the structure of spacetime cannot be ignored. The most convincing evidence that spacetime cannot remain smooth and continuous at arbitrarily small scales and that the scale at which the continuum breaks down is the Planck scale is the finite value of the entropy of a Black Hole [7]. Fundamental spacetime discreteness is a simple proposal that realises the widely held expectation that there must be a physical, Planck scale cutoff in nature. According to this proposal, spacetime is comprised of discrete “spacetime atoms” at the Planck scale. The causal set programme for quantum gravity [8, 9, 10] is based on the observation that such atomicity is exactly what is needed in order to conceive of spacetime as pure causal order since in a discrete spacetime, physical scale – missing in the continuum – can be provided by counting. For example, a worldline in a discrete spacetime would consist of a chain of ordered spacetime atoms and its proper time duration, in fundamental Planckian scale units of time of roughly 10⁻⁴³ seconds, would be simply the number of spacetime atoms that comprise the worldline.
Causal set theory thus postulates that underlying our seemingly smooth continuous spacetime there is an atomic spacetime taking the form of a discrete, partially ordered set or causal set, whose elements are the spacetime atoms. The order relation on the set gives rise to the spacetime causal order in the approximating continuum spacetime and the number of causal set elements comprising a region gives the spacetime volume of that region in fundamental units. The Planckian scale of the atomicity means that there would be roughly 10²⁴⁰ spacetime atoms comprising our observable universe. 
According to causal set theory, spacetime is a material comprised of spacetime atoms which are, in themselves, featureless, with no internal structure and are therefore identical. Each atom acquires individuality as an element of a discrete spacetime, a causal set, in view of its order relations with the other elements of the set. Let me stress here a crucial point: the elements of the causal set, the discrete spacetime, are atoms of 4-d spacetime, not atoms of 3-d space. An atomic theory of space would run counter to the physics of GR in which 3-d space is not a physically meaningful concept. An atom of spacetime is an idealisation of a click of the fingers, an explosion of a firecracker, a here-and-now.
(Submitted on 14 May 2014)
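
Since the lecture's two kinematic postulates (order plus counting) lend themselves to a very concrete picture, here is a small toy sketch of my own, not part of the quoted text: it sprinkles "atoms" by a Poisson process into a causal diamond of 1+1 dimensional Minkowski space and reads off a duration purely by counting atoms along the longest chain.

```python
import bisect
import numpy as np

# Toy illustration (mine, not Dowker's): a causal set obtained by Poisson
# sprinkling into a causal diamond of 1+1 Minkowski space.  In light-cone
# coordinates u = t - x, v = t + x the diamond is a square, and atom y lies
# to the causal future of atom x iff u_x <= u_y and v_x <= v_y (a partial order).

rng = np.random.default_rng(7)

def sprinkle_and_count(volume=4000.0):
    # "geometry = order + number": the number of atoms tracks the volume,
    # with Poisson fluctuations N ~ V +/- sqrt(V).
    n = rng.poisson(volume)
    uv = rng.random((n, 2))
    uv = uv[np.argsort(uv[:, 0])]          # sort by u
    # The longest chain is the longest increasing subsequence of the v's;
    # its length grows in proportion to the proper time between the diamond's
    # tips, so duration is obtained by *counting* atoms along a chain.
    tails = []
    for v in uv[:, 1]:
        i = bisect.bisect_left(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return n, len(tails)

print(sprinkle_and_count())   # N stays close to V; chain length ~ sqrt(N) ~ proper time

# Order-of-magnitude count quoted above: Planck-scale atoms in the observable
# universe, roughly (t_Hubble / t_Planck)^4, i.e. about 10^240 or so.
t_hubble, t_planck = 4.4e17, 5.4e-44      # seconds, rough values
print(f"~10^{4 * np.log10(t_hubble / t_planck):.0f} spacetime atoms")
```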

More about causal set theory and quantum gravity in the former post...

I remind the reader that my choice of the words attoms of spacetime has many motivations; the most important one, conceptually, can now be formulated in the following way:
the Higgs boson discovery, which is a new fundamental piece of local information at the attometer scale, if understood in a spectral noncommutative geometric perspective, confirms the global piece of information provided by dark matter and possibly dark energy, interpreted as mimetic-gravity aspects of the quantisation of spacetime as described by a higher Heisenberg equation, proposed by Chamseddine, Connes and Mukhanov, that fixes the volume form of 4-dimensional spacetimes.


Next station: naturalness, terminus cosmos

What if the cosmological constant were a nonlocal quantum residue of the discreteness of spacetime ... just as mimetic dark matter is a nonlocal noncommutative consequence of the quantisation of space-time volume?


Lieber Ehrenfest!  

... Ich habe auch wieder etwas verbrochen in der Gravitationstheorie, was mich ein wenig in Gefahr setzt, in einem Tollhaus interniert zu werden. [I have again perpetrated something in the theory of gravitation which puts me in some danger of being committed to a madhouse.]
Einstein's letter to Ehrenfest, February 4th, 1917



The former post provides an(other) opportunity for the readers of this blog to watch a presentation of the new conceptual framework envisioned by the geometer Alain Connes and his closest physicist collaborator Ali Chamseddine, with the help of the cosmologist Sacha Mukhanov, to show how the standard model of quantum matter-radiation interactions emerges from a discreteness of spacetime formulated with a spectral noncommutative geometric equation. While the physical consequences of this breakthrough, which leads to understanding dark matter as a form of mimetic gravity, are analysed in detail by Chamseddine in a very recent review article, the impact on the cosmological constant is more elusive, not to speak of the quantisation of spacetime dynamics.

Looking for more insight into what could be a heuristic hypothesis toward the discreteness of spacetime volume, I could not fail to talk about, or rather quote, the causal set approach to quantum gravity:

The evidence ... points to a cosmological constant of magnitude Λ ≈ 10⁻¹²⁰ κ⁻², and this raises two puzzles: [I prefer the word puzzle or riddle to the word problem, which suggests an inconsistency, rather than merely an unexplained feature of our theoretical picture.] Why is Λ so small without vanishing entirely, and Why is it so near to the critical density ρ_critical = 3H²... 
Is the latter just a momentary occurrence in the history of the universe (which we are lucky enough to witness), or has it a deeper meaning? Clearly both puzzles would be resolved if we had reason to believe that Λ ≈ H² always. In that case, the smallness of Λ today would merely reflect the large age of the cosmos. But such a Λ would conflict with our present understanding of nucleosynthesis in the early universe and of “structure formation” more recently. (In the first case, the problem is that the cosmic expansion rate influences the speed with which the temperature falls through the “window” for synthesizing the light nuclei, and thereby affects their abundances. According to {the Friedmann equations} a positive Λ at that time would have increased the expansion rate, which however is already somewhat too big to match the observed abundances. In the second case, the problem is that a more rapid expansion during the time of structure formation would tend to oppose the enhancement of density perturbations due to gravitational attraction, making it difficult for galaxies to form.) But neither of these reasons excludes a fluctuating Λ with typical magnitude |Λ| ∼ H² but mean value ⟨Λ⟩ = 0. The point now is that such fluctuations can arise as a residual, nonlocal quantum effect of discreteness, and specifically of the type of discreteness embodied in the causal set... 
In order to explain this claim, I will need to review some basic aspects of causet theory. [5] According to the causal set hypothesis, the smooth manifold of general relativity dissolves, near the Planck scale, into a discrete structure whose elements can be thought of as the “atoms of spacetime”. These atoms can in turn be thought of as representing “births”, and as such, they carry a relation of ancestry that mathematically defines a partial order, x ≺ y. Moreover, in our best dynamical models [6], the births happen sequentially in such a way that the number n of elements plays the role of an auxiliary time-parameter. (In symbols, n ∼ t.)[It is an important constraint on the theory that this auxiliary time-label n should be “pure gauge” to the extent that it fails to be determined by the physical order-relation ≺. That is, it must not influence the dynamics, this being the discrete analog of general covariance] Two basic assumptions complete the kinematic part of the story by letting us connect up a causet with a continuum spacetime. One posits first, that the underlying microscopic order ≺ corresponds to the macroscopic relation of before and after, and second, that the number of elements N comprising a region of spacetime equals the volume of that region in fundamental (i.e. Planckian) units. (In slogan form: geometry = order + number.) The equality between number N and volume V is not precise however, but subject to Poisson fluctuations, whence instead of N=V, we can write only 
N∼V±√V.                     (5) 
(These fluctuations express a “kinematical randomness” that seems to be forced on the theory by the noncompact character of the Lorentz group.) To complete the causet story, one must provide a “dynamical law” governing the birth process by which the causet “grows” (the discrete counterpart of {the Einstein} equation...). This we still lack in its quantum form, but for heuristic purposes we can be guided by the classical sequential growth (CSG) models referred to above; and this is what I have done in identifying n as a kind of time-parameter...   
We can now appreciate why one might expect a theory of quantum gravity based on causal sets to lead to a fluctuating cosmological constant. Let us assume that at sufficiently large scales the effective theory of spacetime structure is governed by a gravitational path-integral, which at a deeper level will of course be a sum over causets. That n plays the role of time in this sum suggests that it must be held fixed, which according to (5) corresponds to holding V fixed in the integral over 4-geometries. If we were to fix V exactly, we’d be doing “unimodular gravity”, in which setting it is easy to see that V and Λ are conjugate to each other in the same sense as energy and time are conjugate in nonrelativistic quantum mechanics. [This conjugacy shows up most obviously in the Λ-term in the gravitational action-integral, which is simply 
−Λ ∫ √−g d⁴x = −ΛV .  (6) 
It can also be recognized in canonical formulations of unimodular gravity [7], and in the fact that (owing to (6)) the “wave function” Ψ(³g; Λ) produced by the unrestricted path-integral with parameter Λ is just the Fourier transform of the wave function Ψ(³g; V) produced at fixed V.] In analogy to the ∆E∆t uncertainty relation, we thus expect in quantum gravity to obtain 
∆Λ ∆V ∼ ℏ  (7) 
Remember now, that even with N held exactly constant, V still fluctuates, following (5), between N + √ N and N − √ N; that is, we have N ∼ V ± √N ⇒ V ∼ N ± √V , or ∆V∼√V . In combination with (7), this yields for the fluctuations in Λ the central result 
∆Λ ∼ V^(−1/2)   (8) 
Finally, let us assume that, for reasons still to be discovered, the value about which Λ fluctuates is strictly zero: ⟨Λ⟩ = 0. (This is the part of the Λ puzzle we are not trying to solve...) A rough and ready estimate identifying spacetime volume with the Hubble scale H⁻¹ then yields V ∼ (H⁻¹)⁴ ⇒ Λ ∼ V^(−1/2) ∼ H² ∼ ρ_critical (where I’ve used that ∆Λ = Λ − ⟨Λ⟩ = Λ since ⟨Λ⟩ = 0). In other words, Λ would be “ever-present” (at least in 3+1 dimensions)... 
In trying to develop (8) into a more comprehensive model, we not only have to decide exactly which spacetime volume ‘V’ refers to, we also need to interpret the idea of a varying Λ itself. Ultimately the phenomenological significance of V and Λ would have to be deduced from a fully developed theory of quantum causets, but until such a theory is available, the best we can hope for is a reasonably plausible scheme which realizes (8) in some recognizable form.  
As far as V is concerned, it pretty clearly wants to be the volume to the past of some hypersurface, but which one? If the local notion of “effective Λ at x” makes sense, and if we can identify it with the Λ that occurs in (8), then it seems natural to interpret V as the volume of the past of x, or equivalently (up to Poisson fluctuations) as the number of causet elements which are ancestors of x: 
V = volume(past(x)).
One could imagine other interpretations... but this seems as simple and direct as any... 
As far as Λ is concerned, the problems begin with Einstein's equation itself, whose divergence implies (at least naively... ) that Λ = constant. The model of [2] and [3] addresses this difficulty... we are forced to modify the Friedmann equations... The most straightforward way of doing so is to retain only one of them, or possibly some other linear combination... Then our dynamical scheme is just 
3(ȧ/a)² = ρ + ρ_Λ                (9a)
 2ä/a + (ȧ/a)² = −(p + p_Λ)  (9b) 
with ρ_Λ = Λ and p_Λ = −Λ − Λ̇/3H. Finally, to complete our model and obtain a closed system of equations, we need to specify Λ as a (stochastic) function of V, and we need to choose it so that ∆Λ ∼ V^(−1/2). But this is actually easy to accomplish, if we begin by observing that (with κ = ℏ = 1) Λ = S/V ≈ S/N can be interpreted as the action per causet element that is present even when the spacetime curvature vanishes. (As one might say, it is the action that an element contributes just by virtue of its existence.) Now imagine that each element contributes (say) ±ℏ to S, with a random sign. Then S is just the sum of N independent random variables, and we have S/ℏ ∼ ±√N ∼ ±√(V/ℓ⁴), where ℓ ∼ √(ℏκ) is the fundamental time/length of the underlying theory, which thereby enters our model as a free phenomenological parameter. This in turn implies, as desired, that 
Λ = S/V ∼ ±(ℏ/ℓ²)/√V       (10) 
We have thus arrived at an ansatz that, while it might not be unique, succeeds in producing the kind of fluctuations we were seeking. Moreover, it lends itself nicely to simulation by computer...    
An extensive discussion of the simulations can be found in [3] and [2]. The most important finding was that... the absolute value of Λ follows ρ_radiation very closely during the era of radiation dominance, and then follows ρ_matter when the latter dominates. Secondly, the simulations confirmed that Λ fluctuates with a “coherence time” which is O(1) relative to the Hubble scale. Thirdly, a range of present-day values of Ω_Λ is produced, and these are O(1) when ℓ² = O(ℏκ). (Notice in this connection that the variable Λ of our model cannot simply be equated to the observational parameter Λ_obs that gets reported on the basis of supernova observations, for example, because Λ_obs results from a fit to the data that presupposes a constant Λ, or if not constant then a deterministically evolving Λ with a simple “equation of state”. It turns out that correcting for this tends to make large values of Ω_Λ more likely [3].) Fourthly, the Λ-fluctuations affect the age of the cosmos (and the horizon size), but not too dramatically. In fact they tend to increase it more often than not. Finally, the choice of (9a) for our specific model seems to be “structurally stable” in the sense that the results remain qualitatively unchanged if one replaces (9a) by some linear combination thereof with (9b), as discussed above...
Heuristic reasoning rooted in the basic hypotheses of causal set theory predicted Λ ∼ ±1/√V, in agreement with current data. But a fuller understanding of this prediction awaits the ... new ... “quantum causet dynamics”... Meanwhile, a reasonably coherent phenomenological model exists, based on simple general arguments. It is broadly consistent with observations but a fuller comparison is needed. It solves the “why now” problem: Λ is “ever-present”. It predicts further that p_Λ ≠ −ρ_Λ (w ≠ −1) and that Λ has probably changed its sign many times in the past. [It also tends to favor the existence of something, say a “sterile neutrino”, to supplement the energy density at nucleosynthesis time. Otherwise, we might have to assume that Ω_Λ had fluctuated to an unusually small value at that time. It also carries the implication that “large extra dimensions” will not be observed at the LHC...] The model contains a single free parameter of order unity that must be neither too big nor too small. [Unless we want to try to make sense of imaginary time (= quantum tunneling?) or to introduce new effects to keep the right hand side of (9) positive (production of gravitational waves? onset of large-scale spatial curvature or “buckling”?).] In principle the value of this parameter is calculable, but for now it can only be set by hand.   
In this connection, it’s intriguing that there exists an analog condensed matter system the “fluid membrane”, whose analogous parameter is not only calculable in principle from known physics, but might also be measurable in the laboratory! [9]...
In itself the smallness of Λ is a riddle and not a problem. But in a fundamentally discrete theory, recovery of the continuum is a problem, and I think that the solution of this problem will also explain the smallness of Λ. (The reason is that if Λ were to take its “natural”, Planckian value, the radius of curvature of spacetime would also be Planckian, but in a discrete theory such a spacetime could no more make sense than a sound wave with a wavelength smaller than the size of an atom. Therefore the only kind of spacetime that can emerge from a causet or other discrete structure is one with Λ≪1.) One can also give good reasons why the emergence of a manifold from a causet must rely on some form of nonlocality. The size of Λ should also be determined nonlocally then, and this is precisely the kind of idea realized in the above model. 
One pretty consequence of this kind of nonlocality is a certain restoration of symmetry between the very small and the very big. Normally, we think of G (gravity) as important on large scales, with h (quantum) important on small ones. But we also expect that on still smaller scales G regains its importance once again and shares it with ℏ  (quantum gravity). If the concept of an ever-present Λ is correct then symmetry is restored, because ℏ rejoins G on the largest scales in connection with the cosmological constant. 
Finally, let me mention a “fine tuning” that our model has not done away with, namely the tuning of the spacetime dimension to d=4. In any other dimension but 4, Λ could not be “ever-present”, or rather it could not remain in balance with matter. Instead, the same crude estimates that above led us to expect Λ ∼ H², lead us in other dimensions to expect either matter dominance (d>4) or Λ-dominance (d<4). Could this be a dynamical reason favoring 3+1 as the number of noncompact dimensions?...[10]...  
A last word
The cosmological constant is just as constant as Hubble’s constant.

Rafael D. Sorkin (Perimeter Institute and Syracuse University)
(Submitted on 9 Oct 2007)
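
Because the quoted model is explicitly advertised as lending itself to computer simulation, here is a rough numerical sketch of my own (not Sorkin's code) of the ansatz behind (9a) and (10): the bare action performs a random walk over the N ≈ V atoms in the past, so Λ = S/V fluctuates with magnitude ∼ 1/√V around zero while the toy universe expands.

```python
import numpy as np

# Crude toy of the "everpresent Lambda" ansatz quoted above (my sketch, with
# hbar = ell = 1 and a unit comoving box): each spacetime atom contributes a
# random +1 or -1 to the action S, so over the past 4-volume V the sum does a
# random walk, S ~ +/- sqrt(N) ~ +/- sqrt(V), and Lambda = S/V ~ +/- 1/sqrt(V).
# Only the Friedmann-like equation (9a), 3 (adot/a)^2 = rho + rho_Lambda,
# is kept, for a flat universe filled with matter and radiation.

rng = np.random.default_rng(3)

def run(n_steps=20000, dt=1e-3, a0=1e-2, rho_m0=1.0, rho_r0=1e-2):
    a, t = a0, 0.0
    V, S = 1e-6, 0.0                       # past 4-volume and accumulated action
    history = []
    for _ in range(n_steps):
        rho = rho_m0 / a**3 + rho_r0 / a**4
        lam = S / V                        # fluctuating cosmological "constant"
        rhs = max(rho + lam, 1e-12)        # crude positivity fix, cf. the remark
                                           # about keeping the rhs of (9) positive
        a += a * np.sqrt(rhs / 3.0) * dt
        t += dt
        dV = a**3 * dt                     # growth of the past 4-volume
        S += rng.normal(0.0, np.sqrt(dV))  # ~ dV new atoms, each contributing +/-1
        V += dV
        history.append((t, a, rho, lam))
    return np.array(history)

h = run()
# One can now compare |Lambda| ~ 1/sqrt(V) with the ambient density rho along
# the run; the simulations discussed in the quoted text report that the two
# track each other in order of magnitude while Lambda keeps flipping sign.
print(h[-1])
```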

The spring is coming...

... as a possible relevant quantum geometry of space-time-matter-radiation springs ...






... out of the Heisenberg-like Connes-Chamseddine-Mukhanov equation and the spectral action principle


....expressed in Noncommutative Geometry as a Framework for Unification of all Fundamental Interactions including Gravity. 




Here is the part II video follow-up of Part I article 


Wednesday, March 1, 2017

Consequences of volume quantisation of space-time

Here is the fourth (and, for a while, last) fragment of my Lover's Dictionary of Spectral Physics:


Quantisation of space-time (and a heuristic point of view towards a spectral unification of fundamental interactions, based on the noncommutative Heisenberg-like Chamseddine-Connes-Mukhanov equation, to be developed*...)


... when the intellectual despotism of the Church, which had been maintained through the Middle Ages, had crumbled, and a wave of scepticism threatened to sweep away all that had seemed most fixed, those who believed in Truth clung to Geometry as to a rock, and it was the highest ideal of every scientist to carry on his science "more geometrico".

Hermann Weyl, 1917

Nature is locally quantum and globally noncommutative
Folklore 



As the latest echo of the quantisation of the matter-radiation interaction begun with Planck in 1900, the recent completion of the standard model unifying the strong and electroweak interactions has advanced our ideas of the subatomic world a step further. It is not even unreasonable to conceive that the next scale of particle physics lies beyond 10¹⁰ GeV! Now wider expanses and greater depths are thus exposed to the searching eye of knowledge (to quote Hermann Weyl in a hundred-year-old reflexive text about the foundations of differential geometry and general relativity). All the more so as tremendous progress in astrophysics has seen the emergence of a testable cosmological standard model that offers a window on energy scales we had hardly any hope of probing a decade ago. Both standard models have already brought us much nearer to grasping the plan that underlies all physical happening.
Maybe it is time now to repeat the saga initiated by Planck and Einstein (from the experimental data collected by an army of spectroscopists raised by Newton) and to envision a quantisation of space-time. This is what is reported below through a spectral ride on the loop where the micro and macro worlds meet to uncover regions of which we had not even a presentiment. To put it briefly, the consequence of the heuristic hypothesis of space-time quantisation, based on a Heisenberg-like equation found by Chamseddine, Connes and Mukhanov, might be a fairly unique, spectrally unified and globally noncommutative framework for space-time-matter-radiation as we know it here (13 TeV) and now (2.7 K).

... by starting from a quantization condition on the volume of the noncommutative space, all fields and their interactions are predicted and given by a Pati-Salam model which has three special cases one of which is the Standard Model with neutrino masses and a singlet field. The spectral Standard Model predicts unification of gauge couplings and the correct mass for the top quark and is consistent with a low Higgs mass of 125 GeV. The unification model is assumed to hold at the unification scale and when the gauge, Yukawa and Higgs couplings relations are taken as initial conditions on the RGE, one finds complete agreement with experiment, except for the meeting of the gauge couplings which are off by 4%. This suggests that a Pati-Salam model defines the physics beyond the Standard Model, and where we have shown [16] that it allows for unification of gauge couplings, consistent with experimental data. 
The assumption of volume quantization has consequences on the structure of General Relativity. Equations of motion agree with Einstein equations except for the trace condition, which now determines the Lagrange multiplier enforcing volume quantization. The cosmological constant, although not included in the action, is now an integration constant... To have a physical picture of time we have also considered a four-manifold formed with the topology of R × Σ₃, where Σ₃ is a three dimensional hypersurface, to allow for space-times with Lorentzian signature. The quantization condition is modified to have two mappings from Σ₃ → S³ and a mapping X : R → R. The resulting algebra of the noncommutative space is unchanged, and the three dimensional volume is quantized provided that the mapping field X is constrained to have unit gradient. This field X modifies only the longitudinal part of the graviton and plays the role of mimetic dust. It thus solves, without extra cost, the dark matter problem [33]. Recently, we have shown that this field X can be used to build realistic cosmological models [34]. In addition, and under certain conditions, it could be used to avoid singularities in General relativity for Friedmann, Kasner [35] and Black hole solutions [36]. This is possible because this scalar field modifies the longitudinal sector in GR...   
We have presented enough evidence that a framework where space-time assumed to be governed by noncommutative geometry results in a unified picture of all particles and their interactions. The axioms could be minimized by starting with a volume quantization condition, which is the Chern character formula of the noncommutative space and a special case of the orientability condition. This condition determines uniquely the structure of the noncommutative space. Remarkably, the same structure was also derived, in slightly less unique way, by classifying all finite noncommutative spaces [10].
The picture is very compelling, in contrast to other constructions, such as grand unification, supersymmetry or string theory, where there is no limit on the number of possible models that could be constructed. The picture, however, is ... incomplete as there are still many unanswered questions and we now list a few of them. Further studies are needed to determine the structure and hierarchy of the Yukawa couplings, the number of generations, the form of the spectral function and the physics at unification scale, quantizing the fields appearing in the spectral action and in particular the gravitational field. To conclude, noncommutative geometry as a basis for unification is a predictive and exciting field with very appealing features and many promising new directions for research.
Contribution to the special issue of IJGMMP celebrating the one-century anniversary of the program announced in 1916 by Hilbert, entitled "Foundations of Mathematics and Physics" (Submitted on 27 Feb 2017)
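
The quoted remark that the Standard Model gauge couplings fail to meet by a few percent is easy to reproduce with the textbook one-loop running; the sketch below is my own back-of-the-envelope check using standard SM beta coefficients and rough electroweak-scale inputs, not a computation from the paper.

```python
import numpy as np

# One-loop running of the inverse gauge couplings in the Standard Model,
#   alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2 pi) * ln(mu / M_Z),
# with the usual GUT normalisation for hypercharge and b = (41/10, -19/6, -7).
# Inputs below are rough textbook values, good enough for an order-of-magnitude check.

M_Z = 91.19                                   # GeV
b = np.array([41/10, -19/6, -7.0])

alpha_em_inv, s2w, alpha_s = 127.9, 0.231, 0.118
alpha_inv_MZ = np.array([
    (3/5) * (1 - s2w) * alpha_em_inv,         # alpha_1^-1 (GUT-normalised U(1)_Y)
    s2w * alpha_em_inv,                       # alpha_2^-1 (SU(2)_L)
    1.0 / alpha_s,                            # alpha_3^-1 (SU(3)_c)
])

def alpha_inv(mu):
    return alpha_inv_MZ - b / (2 * np.pi) * np.log(mu / M_Z)

print("alpha_i^-1 at 10^16 GeV:", alpha_inv(1e16).round(1))

# Scale at which each pair of couplings would meet:
for i, j in [(0, 1), (1, 2), (0, 2)]:
    ln_mu = 2 * np.pi * (alpha_inv_MZ[i] - alpha_inv_MZ[j]) / (b[i] - b[j]) + np.log(M_Z)
    print(f"alpha_{i+1} = alpha_{j+1} near 10^{ln_mu / np.log(10):.1f} GeV")
# The three crossings land at different scales (roughly 10^13 to 10^17 GeV):
# the near miss referred to above.  Extra fields, as in the spectral Pati-Salam
# extension, change the b_i and can make the three lines meet.
```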


Remark: the reader is warmly invited to have a careful look at the above article, particularly the twenty pages starting from section 9, Consequences of volume quantization (for astrophysics and cosmology), where the technical details are thoroughly discussed by Ali Chamseddine in the perfectly classical differential-geometric language used for general relativity, but with a new conformal, nondynamical, so-called mimetic degree of freedom.


*A heuristic point of view... in progress (?)

An accidental conceptual distinction exists between the theoretical concepts that physicists have forged in building the quantum gauge interactions between fundamental spinor fermions and vector bosons on a flat space-time, and the Einstein theory of gravitational processes on a curved space-time. While we can consider the energetic state of matter-radiation in the universe to be completely determined by a very large, yet finite, number of quanta, we make use of continuous spatial functions to describe the geometrodynamical state of a given volume of spacetime, and a finite number of parameters cannot be regarded as sufficient for the complete determination of such a state.

Classical differential geometry, which operates with a commutative algebra of coordinates, has worked quite well up to now as the handmaiden of general relativity, allowing us to infer the matter content of our local universe from the matching between radiation observations and general-relativistic predictions, and it will continue to provide invaluable services in scrutinizing dark compact objects with the advent of gravitational-wave detectors. It should be kept in mind, however, that the current astrophysical inferences from galactic to cosmological scales suffer from huge discrepancies when one confronts the global matter-radiation content of the universe with its local spectrum observed on Earth (thanks to telescopes, particle accelerators/detectors and both the Standard and ΛCDM models).
In spite of the complete experimental confirmations of Einstein's general relativity based on two degrees of freedom (inferred from commutative geometric insight), as applied to the dynamics of the dilute solar system, to denser pairs of neutron stars, not to mention more compact black holes, it is now conceivable that the standard cosmological model may lead to contradictions with experience when its dark matter phenomenological parameterisation and the current cosmological acceleration are confronted with the particle spectrum inferred from sub-attoscale experiments and our understanding of the quantum vacuum.

It seems to me that the observations associated with dark matter, and possibly dark energy, both connected with correlations of matter-radiation in spectral data collected on very large and very small scales, are more readily understood if one assumes a spacetime volume quantization, provided by spectral noncommutative geometric foresight and supported by the 125 GeV Higgs boson hindsight.
Walking in the footsteps of a giant

//last edit March 2, 2017
//new edit only in the heuristic point of view part on March 4 8

Physicists: dinner is served! (an invitation to a spectral banquet)


Do you find noncommutative geometry a bit much to swallow for a particle physics specialist, and are you left hungry contemplating the LHC's menu of the day? DON'T PANIC: trust the Higgs boson and spectral traveller's guide, and follow the example of the gauge bosons that eat Goldstone bosons at the feast with which the Sun treats us daily* ;-)


The last issue (draft version) of the Spectral-hiker's Guide to noncommutative geometry for particle physics is:

Noncommutative Geometry and Particle Physics  by Walter D. van Suijlekom (2015)
(published version here)


For gourmets only, looking for The Restaurant at the Planck Scale, I recommend the great Per Non Commutativa Prisma, Quantum Ratio Quoris Encyclopædia one can find in a nice hypertext version below:

Noncommutative Geometry, Quantum Fields and Motives by Alain Connes and Matilde Marcolli (2008)

*thanks to the survival of the photon, which did not get gobbled up by melting into the local relativistic quantum scenery, but instead flew off on a tangent into relativistic space-time...
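
For readers who want the bookkeeping behind the image of gauge bosons eating Goldstone bosons, here is the standard counting for the electroweak case (a reminder of textbook facts, not a claim from the guide above):

```latex
% Electroweak symmetry breaking SU(2)_L x U(1)_Y -> U(1)_em: the complex Higgs
% doublet carries 4 real scalar degrees of freedom.
\underbrace{4}_{\text{Higgs doublet}}
  \;=\; \underbrace{3}_{\text{Goldstone modes eaten by } W^{+},\,W^{-},\,Z^{0}}
  \;+\; \underbrace{1}_{\text{physical Higgs boson}}
% Each eaten Goldstone mode supplies the longitudinal polarisation that turns a
% massless gauge boson (2 polarisations) into a massive one (3 polarisations);
% the photon, associated with the unbroken U(1)_em, eats nothing and stays massless.
```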

Monday, February 27, 2017

(Celebrating fifty years of) electroweak symmetry breaking theory

This is the third fragment of my Lover's dictionary of Spectral Physics, for the new entry:


Electroweak symmetry breaking theory 

The Standard Model wins all the battles 

Yes, but only those requiring limited weapon finesse ;-)
a grand unified theory building gamer troll



This important part of the Standard Model turns fifty this year (2017), and it is still undefeated experimentally by LHC Run 2. Yet it is far from having been thoroughly tested in its full Standard Model version, as you will read below:

Spontaneous symmetry breaking occurs when the ground state or vacuum, or equilibrium state of a system does not share the underlying symmetries of the theory. It is ubiquitous in condensed matter physics, associated with phase transitions. Often, there is a high-temperature symmetric phase and a critical temperature below which the symmetry breaks spontaneously. A simple example is crystallization. If we place a round bowl of water on a table, it looks the same from every direction, but when it freezes the ice crystals form in specific orientations, breaking the full rotational symmetry. The breaking is spontaneous in the sense that, unless we have extra information, we cannot predict in which directions the crystals will line up... In 1960, Nambu [12] pointed out that gauge symmetry is broken in a superconductor when it goes through the transition from normal to superconducting, and that this gives a mass to the plasmon, although this view was still quite controversial in the superconductivity community (see also Anderson [13]). Nambu suggested that a similar mechanism might give masses to elementary particles... The next year, with Jona-Lasinio [14], he proposed a specific model, though not a gauge theory... 
The model had a significant feature, a massless pseudoscalar particle, which Nambu and Jona-Lasinio tentatively identified with the pion. To account for the non-zero (though small) pion mass, they suggested that the chiral symmetry was not quite exact even before the spontaneous symmetry breaking. Attempts to apply this idea to symmetry breaking of fundamental gauge theories however ran into a severe obstacle, the Goldstone theorem... the spontaneous breaking of a continuous symmetry often leads to the appearance of massless spin-0 particles. The simplest model that illustrates this is the Goldstone model [15]... 
The appearance of the... massless spin-zero Nambu–Goldstone bosons was believed to be an inevitable consequence of spontaneous symmetry breaking in a relativistic theory; this is the content of the Goldstone theorem. That is a problem because such massless particles, if they had any reasonable interaction strength, should have been easy to see, but none had been seen...  
This problem was obviously of great concern to all those who were trying to build a viable gauge theory of weak interactions. When Steven Weinberg came to spend a sabbatical at Imperial College in 1961, he and Salam spent a great deal of time discussing the obstacles. They developed a proof of the Goldstone theorem, published jointly with Goldstone [16]...
Spontaneous symmetry breaking implied massless spin-zero bosons, which should have been easy to see but had not been seen. On the other hand adding explicit symmetry-breaking terms led to non-renormalizable theories predicting infinite results. Weinberg commented ‘Nothing will come of nothing; speak again’, a quotation from King Lear. Fortunately, however, our community was able to speak again...
The {Goldstone theorem} argument fails in the case of a gauge theory, for quite subtle reasons ... {its} proof is valid, but there is a hidden assumption which, though seemingly natural, is violated by gauge theories. This was discovered independently by three groups, first Englert and Brout from Brussels [19], then Higgs from Edinburgh [20, 21] and finally Guralnik, Hagen and myself from Imperial College [22]. All three groups published papers in Physical Review Letters during the summer and autumn of 1964... 
The 1964 papers from the three groups attracted very little attention at the time. Talks on the subject were often greeted with scepticism. By the end of that year, the mechanism was known, and Glashow’s (and Salam and Ward’s) SU(2) × U(1) model was known. But, surprisingly perhaps, it still took three more years for anyone to put the two together. This may have been in part at least because many of us were still thinking primarily of a gauge theory of strong interactions, not weak 
In early 1967, I did some further work on the detailed application of the mechanism to models with larger symmetries than U(1), in particular on how the symmetry breaking pattern determines the numbers of massive and massless particles [23]. I had some lengthy discussions with Salam on this subject, which I believed helped to renew his interest in the subject. A unified gauge theory of weak and electromagnetic interactions of leptons was first proposed by Weinberg later that year [24]. Essentially the same model was presented independently by Salam in lectures he gave at Imperial College in the autumn of 1967 — he called it the electroweak theory. (I was not present because I was in the United States, but I have had accounts from others who were.) Salam did not publish his ideas until the following year, when he spoke at a Nobel Symposium [25], largely perhaps because his attention was concentrated on the development in its crucial early years of his International Centre for Theoretical Physics in Trieste. Weinberg and Salam both speculated that their theory was renormalizable, but they could not prove it. An important step was the working out by Faddeev and Popov of a technique for applying Feynman diagrams to gauge theories [26]. Renormalizability was finally proved by a young student, Gerard ’t Hooft [27], in 1971, a real tour de force using methods developed by his supervisor, Martinus Veltman, especially the computer algebra programme Schoonship. 
In 1973, the key prediction of the electroweak theory, the existence of the neutral current interactions — those mediated by Z0 — was confirmed at CERN [28]...The next major step was the discovery of the W and Z particles at CERN in 1983 [29, 30]... 
In 1964, or 1967, the existence of a massive scalar boson had been a rather minor and unimportant feature. The important thing was the mechanism for giving masses to gauge bosons and avoiding the appearance of massless Nambu–Goldstone bosons. But after 1983, the Higgs boson began to assume a key importance as the only remaining undiscovered piece of the standard-model jigsaw — apart that is from the last of the six quarks, the top quark. The standard model worked so well that the Higgs boson, or something else doing the same job, more or less had to be present. Finding the boson was one of the main motivations for building the Large Hadron Collider (LHC) at CERN. Over a period of more than twenty years, the two great collaborations, ATLAS and CMS, have designed, built and operated their two huge and massively impressive detectors. As is by now well known, their efforts were rewarded in 2012 by the unambiguous discovery of the Higgs boson by each of the two detectors [31, 32].
History of electroweak symmetry breaking T.W.B. Kibble
2015

I think it is fair to complete the previous experimental success story of the electroweak symmetry breaking theory with the following facts:
... in computing the theoretical predictions [of the Standard Model], one should include also the strong interactions, so the model is really the gauge theory of the group U(1)×SU(2)×SU(3). Here we shall present only a list of the most spectacular successes in the electroweak sector:
...
• The discovery of charmed particles at SLAC in 1974–1976. Their characteristic property is to decay predominantly into strange particles. 
• A necessary condition for the consistency of the Model is that ∑_i Q_i = 0 inside each family. When the τ lepton was discovered the b and t quarks were predicted with the right electric charges.
...
• The t-quark was seen at LEP through its effects in radiative corrections before its actual discovery at Fermilab.
• An impressive series of experiments have tested the Model at a level such that the weak interaction radiative corrections are important.
John Iliopoulos, 2016
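
As a reading aid (my own, not part of the quoted list), the condition ∑_i Q_i = 0 is the electric-charge sum over one complete family, with each quark counted three times for colour; this is what forced the prediction of the b and t quarks, with their usual charges, once the τ lepton was found:

```latex
\sum_i Q_i \;=\; \underbrace{Q_\nu + Q_\tau}_{\text{leptons}}
 \;+\; 3\,\underbrace{\bigl(Q_t + Q_b\bigr)}_{\text{quarks},\ 3\ \text{colours}}
 \;=\; 0 \;+\; (-1) \;+\; 3\left(\tfrac{2}{3} - \tfrac{1}{3}\right) \;=\; 0 .
```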



And now, for a nice outlook on the 125 GeV Higgs boson discovery, let us read an eminent supervisor of the TeV-scale physics exploration using hadron colliders:

The most succinct summary we can give is that the data from the ATLAS and CMS experiments are developing as if electroweak symmetry is broken spontaneously through the work of elementary scalars, and that the emblem of that mechanism is the standard-model Higgs boson... 
As one measure of the progress the discovery of the Higgs boson represents, let us consider some of the questions I posed before the LHC experiments ... 
1. What is the agent that hides the electroweak symmetry? Specifically, is there a Higgs boson? Might there be several? 
To the best of our knowledge, H(125) displays the characteristics of a standard model Higgs boson, an elementary scalar. Searches will continue for other particles that may play a role in electroweak symmetry breaking. 
2. Is the “Higgs boson” elementary or composite? How does the Higgs boson interact with itself? What triggers electroweak symmetry breaking? 
We have not yet seen any evidence that H(125) is other than an elementary scalar. Searches for a composite component will continue. The Higgs-boson self-interaction is almost certainly out of the reach of the LHC; it is a very challenging target for future, very-high-energy, accelerators. We don’t yet know what triggers electroweak symmetry breaking. 
3. Does the Higgs boson give mass to fermions, or only to the weak bosons? What sets the masses and mixings of the quarks and leptons? 
The experimental evidence suggests that H(125) couples to tt, bb, and τ+τ−, so the answer is probably yes. All these are third-generation fermions, so even if the evidence for these couplings becomes increasingly robust, we will want to see evidence that H couples to lighter fermions. The most likely candidate, perhaps in High-Luminosity LHC running, is for the Hµµ coupling, which would already show that the third generation is not unique in its relation to H. Ultimately, to show that spontaneous symmetry breaking accounts for electron mass, and thus enables compact atoms, we will want to establish the Hee coupling. That is extraordinarily challenging because of the minute branching fraction
10. What lessons does electroweak symmetry breaking hold for unified theories of the strong, weak, and electromagnetic interactions? 
Establishing that scalar fields drive electroweak symmetry breaking will encourage the already standard practice of using auxiliary scalars to hide the symmetries that underlie unified theories. 
To close, I offer a revised list of questions to build on what our first look at the Higgs boson has taught us. Issues Sharpened by the Discovery of H (125) 
1. How closely does H(125) hew to the expectations for a standard-model Higgs boson? Does H have any partners that contribute appreciably to electroweak symmetry breaking? 
2. Do the HZZ and HWW couplings indicate that H(125) is solely responsible for electroweak symmetry breaking, or is it only part of the story? 
3. Does the Higgs field give mass to fermions beyond the third generation? Does H(125) account quantitatively for the quark and lepton masses? What sets the masses and mixings of the quarks and leptons? 
4. What stabilizes the Higgs-boson mass below 1 TeV? 
5. Does the Higgs boson decay to new particles, or via new forces? 
6. What will be the next symmetry recognized in Nature? Is Nature supersymmetric? Is the electroweak theory part of some larger edifice? 
7. Are all the production mechanisms as expected? 
8. Is there any role for strong dynamics? Is electroweak symmetry breaking related to gravity through extra spacetime dimensions? 
9. What lessons does electroweak symmetry breaking hold for unified theories of the strong, weak, and electromagnetic interactions? 
10. What implications does the value of the H(125) mass have for speculations that go beyond the standard model?...for the range of applicability of the electroweak theory? 
In the realms of refined measurements, searches, and theoretical analysis and imagination, great opportunities lie before us! 
Electroweak Symmetry Breaking in Historical Perspective Chris Quigg 2015
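To put a rough number on why Quigg calls the Hee coupling "extraordinarily challenging" in point 3 above, here is a back-of-the-envelope sketch (mine, not his), assuming only the tree-level Standard Model scaling in which the partial width Γ(H→ff̄) is proportional to the squared fermion mass:

# Tree-level Standard Model scaling: Gamma(H -> f fbar) ~ m_f^2, so the
# electron-to-muon ratio of branching fractions is essentially (m_e / m_mu)^2.
m_e, m_mu = 0.000511, 0.1057  # GeV (rounded PDG values)

ratio = (m_e / m_mu) ** 2
print(f"BR(H->ee) / BR(H->mumu) ~ {ratio:.1e}")  # ~ 2.3e-5

With BR(H→μμ) itself already at the few per ten thousand level in the Standard Model, the electron channel sits many orders of magnitude below anything foreseeable at the LHC.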


Now, what about the role geometry plays in the game? It may be relevant to turn once more to the historical review by Iliopoulos:

The construction of the Standard Model, which became gradually the Standard Theory of elementary particle physics, is, probably, the most remarkable achievement of modern theoretical physics.... as we intend to show, the initial motivation was not really phenomenological. It is one of these rare cases in which a revolution in physics came from theorists trying to go beyond a simple phenomenological model, not from an experimental result which forced them to do so. This search led to the introduction of novel symmetry concepts which brought geometry into physics...
At the beginning of the twentieth century the development of the General Theory of Relativity offered a new paradigm for a gauge theory. The fact that it can be written as the theory invariant under local translations was certainly known to Hilbert, hence the name of Einstein–Hilbert action. The two fundamental forces known at that time, namely electromagnetism and gravitation, were thus found to obey a gauge principle. It was, therefore, tempting to look for a unified theory... 
The transformations of the vector potential in classical electrodynamics are the first example of an internal symmetry transformation, namely one which does not change the space–time point x. However, the concept, as we know it today, belongs really to quantum mechanics. It is the phase of the wave function, or that of the quantum fields, which is not an observable quantity and produces the internal symmetry transformations. The local version of these symmetries are the gauge theories we study here. The first person who realised that the invariance under local transformations of the phase of the wave function in the Schrödinger theory implies the introduction of an electromagnetic field was Vladimir Aleksandrovich Fock in 1926, just after Schrödinger wrote his equation... 
In 1929 Hermann Klaus Hugo Weyl extended this work to the Dirac equation. In this work he introduced many concepts which have become classic, such as the Weyl two-component spinors and the vierbein and spin-connection formalism. Although the theory is no more scale invariant, he still used the term gauge invariance, a term which has survived ever since.
Naturally, one would expect non-Abelian gauge theories to be constructed following the same principle immediately after Heisenberg introduced the concept of isospin in 1932. But here history took a totally unexpected route.  
The first person who tried to construct the gauge theory for SU(2) is Oskar Klein who, at an obscure conference in 1938, presented a paper with the title: On the theory of charged fields. The most amazing part of this work is that he follows an incredibly circuitous road: He considers general relativity in a five-dimensional space and compactifies à la Kaluza–Klein. Then he takes the limit in which gravitation is decoupled. In spite of some confused notation, he finds the correct expression for the field strength tensor of SU(2). He wanted to apply this theory to nuclear forces by identifying the gauge bosons with the new particles which had just been discovered (in fact the muons), misinterpreted as the Yukawa mesons in the old Yukawa theory in which the mesons were assumed to be vector particles. He considered massive vector bosons and it is not clear whether he worried about the resulting breaking of gauge invariance.
The second work in the same spirit is due to Wolfgang Pauli who, in 1953, in a letter to Abraham Pais, developed precisely this approach: the construction of the SU(2) gauge theory as the flat space limit of a compactified higher-dimensional theory of general relativity...  
It seems that the fascination which general relativity had exerted on this generation of physicists was such that, for many years, local transformations could not be conceived independently of general coordinate transformations. Yang and Mills were the first to understand that the gauge theory of an internal symmetry takes place in a fixed background space which can be chosen to be flat, in which case general relativity plays no role...
In particle physics we put the birth of non-Abelian gauge theories in 1954, with the fundamental paper of Chen Ning Yang and Robert Laurence Mills. It is the paper which introduced the SU(2) gauge theory and, although it took some years before interesting physical theories could be built, it is since that date that non-Abelian gauge theories became part of high energy physics. It is not surprising that they were immediately named Yang–Mills theories. Although the initial motivation was a theory of the strong interactions, the first semi-realistic models aimed at describing the weak and electromagnetic interactions. In fact, following the line of thought initiated by Fermi, the theory of electromagnetism has always been the guide to describe the weak interactions... 
Gauge invariance requires the conservation of the corresponding currents and a zero mass for the Yang–Mills vector bosons. None of these properties seemed to be satisfied for the weak interactions. People were aware of the difficulty, but had no means to bypass it. The mechanism of spontaneous symmetry breaking was invented a few years later in 1964... The synthesis of Glashow’s 1961 model with the mechanism of spontaneous symmetry breaking was made in 1967 by Steven Weinberg, followed a year later by Abdus Salam... Many novel ideas have been introduced in this paper, mostly connected with the use of the spontaneous symmetry breaking which became the central point of the theory.
Gauge theories contain three independent worlds. The world of radiation with the gauge bosons, the world of matter with the fermions and the world of BEH scalars. In the framework of gauge theories these worlds are essentially unrelated to each other. Given a group G the world of radiation is completely determined, but we have no way to know a priori which and how many fermion representations should be introduced; the world of matter is, to a great extent, arbitrary.  
This arbitrariness is even more disturbing if one considers the world of BEH scalars. Not only their number and their representations are undetermined, but their mere presence introduces a large number of arbitrary parameters into the theory. Notice that this is independent of our computational ability, since these are parameters which appear in our fundamental Lagrangian. What makes things worse, is that these arbitrary parameters appear with a wild range of values. From the theoretical point of view, an attractive possibility would be to connect the three worlds with some sort of symmetry principle. Then the knowledge of the vector bosons will determine the fermions and the scalars and the absence of quadratically divergent counterterms in the fermion masses will forbid their appearance in the scalar masses. We shall call such transformations supersymmetry transformations and we see that a given irreducible representation will contain both fermions and bosons. It is not a priori obvious that such supersymmetries can be implemented consistently, but in fact they can.  
... supersymmetric field theories have remarkable renormalisation properties [57] which make them unique. In particular, they offer the only field theory solution of the hierarchy problem. Another attractive feature refers to grand unification. The presence of the supersymmetric particles modifies the renormalisation group equations and the effective coupling constants meet at high scales...   
An interesting extension consists of considering gauge supersymmetry transformations, i.e. transformations whose infinitesimal parameters — which are anticommuting spinors — are also functions of the space–time point x... 
The miraculous cancellation of divergences we find in supersymmetry theories raises the hope that the supersymmetric extension of general relativity will give a consistent quantum field theory. In fact local supersymmetry, or “supergravity”, is the only field theoretic extension of the Standard Model which addresses the issue of quantum gravity...

N=8 supergravity promised to give us a truly unified theory of all interactions, including gravitation and a description of the world in terms of a single fundamental multiplet. The main question is whether it defines a consistent field theory. At the moment we have no clear answer to this question, although it sounds rather unlikely. In some sense N = 8 supergravity can be viewed as the end of a road, the road of local quantum field theory. The usual response of physicists whenever faced with a new problem was to seek the solution in an increase of the symmetry. This quest for larger and larger symmetry led us to the standard model, to grand unified theories and then to supersymmetry, to supergravity and, finally, to the largest possible supergravity, that with N=8. In the traditional framework we are working, that of local quantum field theory, there exists no known larger symmetry scheme
Id.
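To make concrete Iliopoulos's remark that relating fermions and scalars forbids quadratically divergent counterterms in the scalar masses, one can recall the textbook one-loop estimate (a standard result recalled here for illustration, not a passage from his review): a Dirac fermion with Yukawa coupling y contributes −y²Λ²/(8π²) to the Higgs mass-squared, while each complex scalar with quartic coupling λ_S contributes +λ_S Λ²/(16π²). Supersymmetry pairs the fermion with two complex scalars and imposes λ_S = y², so the Λ² pieces cancel:

# One-loop quadratically divergent pieces of the Higgs mass correction
# (standard textbook coefficients): a Dirac fermion with Yukawa coupling y
# gives -y**2 * Lambda**2 / (8 pi^2); each complex scalar with quartic
# coupling lam_S = y**2 (the supersymmetric relation) gives
# +lam_S * Lambda**2 / (16 pi^2).
import math

Lambda = 1.0e16      # GeV, illustrative cutoff near a unification scale
y = 1.0              # top-like Yukawa coupling, illustrative

fermion_loop = -y**2 / (8 * math.pi**2) * Lambda**2
scalar_loops = 2 * (y**2) / (16 * math.pi**2) * Lambda**2  # two complex scalar partners

print(f"fermion loop: {fermion_loop:.3e} GeV^2")
print(f"scalar loops: {scalar_loops:.3e} GeV^2")
print(f"sum of Lambda^2 pieces: {fermion_loop + scalar_loops:.1e} GeV^2")  # exactly zero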

I leave the reader to compare Iliopoulos's last claims above about supergravity with the following statement by Connes about the potential bonus offered by his geometric perspective, in order to appreciate who sticks most closely to the two guidelines that both helped in making the Standard Theory: i) a phenomenological approach in which the introduction of every new concept is motivated by the search for a consistent theory that agrees with experiment, and ii) mathematical consistency.
... the point of view adopted in this essay is to try to understand from a mathematical perspective, how the perplexing combination of the Einstein-Hilbert action coupled with matter, with all the subtleties such as the Brout-Englert-Higgs sector, the V-A and the see-saw mechanisms etc. can emerge from a simple geometric model. The new tool is the spectral paradigm and the new outcome is that geometry does emerge from purely Hilbert space and operator considerations, i.e. on the stage where Quantum Mechanics happens. The idea that group representations as operators in Hilbert space are relevant to physics is of course very familiar to every particle theorist since the work of Wigner and Bargmann. That the formalism of operators in Hilbert space encompasses the variable geometries which underlie gravity is the leitmotiv of our approach. In order to estimate the potential relevance of this approach to Quantum Gravity, one first needs to understand the physics underlying the problem of Quantum Gravity.... Quoting from [40]: “Quantization of gravity is inevitable because part of the metric depends upon the other fields whose quantum nature has been well established”. Two main points are that the presence of the other fields forces one, due to renormalization, to add higher derivative terms of the metric to the Lagrangian and this in turn introduces at the quantum level an inherent instability that would make the universe blow up. This instability is instantly fatal to an interacting quantum field theory. Moreover primordial inflation prevents one from fixing the problem by discretizing space at a very small length scale. What our approach permits is to develop a “particle picture” for geometry and a careful reading of this paper should hopefully convince the reader that this particle picture stays very close to the inner workings of the Standard Model coupled to gravity. For now the picture is limited to the “one-particle” description and there are deep purely mathematical reasons to develop the many particles picture.
Alain Connes
(still draft version February 21, 2017)

Beyond this somewhat vain comparison of the respective merits of the two approaches to unifying the Standard Model interactions with gravitation at the Planck scale, one can't help noticing how different their geometrical premises are. On one side, there is supergravity, the boldest symmetric extension of local quantum gauge field theories on traditional but higher-dimensional spacetimes, with the hope of quantizing gravity. On the other side, one contemplates an original reformulation and slight but radical extension of spacetime in a framework derived from quantum mechanics, with the full Standard Model theory emerging from an action principle inspired by general relativity.

As a consequence, the grand unification scheme present in both approaches nevertheless follows quite distinct paths. In the evocative words of some bold pioneers of spectral noncommutative phenomenology:

... at the higher [unification scale Λ]... it is not the particle spectrum that changes, but the geometry of spacetime itself. We shall assume that the (commutative) Riemannian geometry of spacetime is only a low energy approximation of a – not yet known – noncommutative geometry. Being noncommutative, this geometry has radically different short distance properties and is expected to produce quite a different renormalisation flow... At energies below Λ, this noncommutativity manifests itself only in its mild, almost commutative version through the gauge- and Higgs-fields of the standard model, which are magnetic-like fields accompanying the gravitational field
Spectral action and big desert  Marc Knecht, Thomas Schucker (2006)

Turning now to their foresights, one also finds two very different landscapes. Roughly speaking:

- Focusing on a solution to the naturalness problem of the Brout-Englert-Higgs scalar boson, supersymmetry predicts a new spectrum of superparticles. Knowledge of the vector bosons then determines the fermions and the scalars, and the absence of quadratically divergent counterterms in the fermion masses forbids their appearance in the scalar masses. One can then hope for a supergravity theory amenable to quantizing gravitation.
- Looking for a geometric understanding of the electroweak symmetry breaking, the spectral noncommutative framework distills the full scalar and vector boson spectra from the knowledge of the spin one-half fermion spectrum of the current Standard Model, minimally completed with three right-handed Majorana neutrinos (required to explain neutrino oscillations through a type I seesaw mechanism; see the numerical sketch below). Its operator-theoretic formalism develops a “particle picture” for geometry that stays very close to the inner workings of the Standard Model coupled to gravity, and it already makes it possible to describe a volume-quantized 4D spacetime with Euclidean signature, translating phenomenologically into mimetic dark energy and dark matter models.
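As announced, here is a hedged order-of-magnitude sketch of the type I seesaw (the Dirac mass value below is purely illustrative, not an output of the spectral model): the light neutrino mass scale follows from m_ν ≈ m_D²/M_R.

# Type I seesaw order-of-magnitude estimate: m_nu ~ m_D**2 / M_R.
# m_D is an illustrative Dirac mass; M_R is a heavy right-handed Majorana
# scale of the kind discussed in the text (~1e12 GeV).
m_D = 1.0        # GeV, illustrative
M_R = 1.0e12     # GeV, heavy Majorana scale

m_nu_eV = (m_D**2 / M_R) * 1.0e9   # convert GeV to eV
print(f"m_nu ~ {m_nu_eV:.1e} eV")  # ~ 1e-3 eV, in the ballpark suggested by oscillation data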


Considering that no experimental evidence for supersymmetric particles has been found yet, one may appreciate, from a heuristic point of view, the potential relevance of the spectral noncommutative geometrization of the Standard Model, which leads to a minimal Pati-Salam extension. The latter indeed provides a unification of the electroweak and strong gauge interactions whose particle spectrum is quite close to that of the non-supersymmetric minimal SO(10) models currently consistent with neutrino oscillation data, which go beyond the Standard Model (and thus fall outside the scope of Iliopoulos's review), together with a leptogenesis scenario able to explain the asymmetry between matter and antimatter.

Finally, one may add the following from a more consistent* effective field theory perspective.
The spectral standard model post-diction of the 125 GeV mass of the Higgs boson that breaks the electroweak symmetry requires a very small mixing with a "big Higgs" brother responsible for a Pati-Salam symmetry breaking at around 10^12 GeV, consistent with a see-saw mechanism able to explain the known data on left-handed neutrinos. Even if the naturalness problem is not settled here, it is phenomenologically encouraging that the Higgs boson already discovered may talk with a very high seesaw scale, well motivated as a natural effective field theory explanation of the very low mass of active neutrinos. The ultra-heavy singlet scalar could also help to unitarise the theory in the sub-Planckian regime where inflation happens. Last but not least, one may recall that, provided the arbitrary mass scale in the spectral action is made dynamical by introducing a dilaton field, the resulting action is almost identical to the one proposed for making the Standard Model scale invariant; it has the same low-energy limit as the Randall-Sundrum model and, remarkably, all desirable features with correct signs for the relevant terms are obtained uniquely and without any fine tuning.
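For the sake of illustration, here is a minimal sketch of what such a very small mixing means in practice (illustrative numbers only, not the actual spectral-action computation): diagonalizing a 2×2 mass-squared matrix with a hierarchical spectrum barely moves the light eigenvalue.

# A light Higgs-like state mixing weakly with a very heavy singlet:
# for the mass-squared matrix [[m_h^2, delta], [delta, M_S^2]] with
# delta << m_h * M_S, the light eigenvalue shifts by about -delta^2 / M_S^2
# and the mixing angle is about delta / M_S^2.
import math

m_h, M_S = 125.0, 1.0e12          # GeV, illustrative
delta = 1.0e-3 * m_h * M_S        # small off-diagonal mass^2 term (GeV^2)

shift = delta**2 / M_S**2         # GeV^2 taken away from the light eigenvalue
theta = delta / M_S**2            # mixing angle (radians)
m_light = math.sqrt(m_h**2 - shift)

print(f"light mass ~ {m_light:.6f} GeV, shift ~ {shift:.3e} GeV^2, mixing ~ {theta:.1e}")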

Whatever path space-time-matter-radiation chose to cool down to today's cosmological background temperature, one may conclude that the spectrum of particles required for an electroweak symmetry breaking theory consistent with energies beyond the TeV scale has not been fully probed yet. Whether this search will bring a novel symmetry concept to tame the feared quantum instabilities of the Higgs scalar, and whether it will require bringing noncommutative geometry into physics to do so, only the future will tell; but maybe the past lying in the dark sky already knows...



* About the role of consistency in theory choice, I would like to offer the following thoughts, which seem to me particularly relevant at the present time for obvious reasons:

One of the most interesting questions in philosophy of science is how to determine the quality of a theory. Given the data, how can we infer a “best explanation” for the data. This often goes by the name “Inference to Best Explanation” (IBE) [1, 2, 3]. The wide variety of claims for important criteria are a measure of how difficult it is to come up with a clear and general algorithm for choosing between theories. Some claim even that it is intrinsically not possible to come up with a methodology of deciding.

... in our discussion of IBE criteria... we must first ask ourselves what is non-negotiable. Falsifiability is clearly something that can be haggled over. Simplicity is subject to definitional uncertainty, and furthermore has no universally accepted claim to preeminence. Naturalness, calculability, unifying ability, predictivity, etc. are also subject to preeminence doubts

What is non-negotiable is consistency. A theory shown definitively to be inconsistent does not live another day. It might have its utility, such as Newton’s theory of gravity for crude approximate calculations, but nobody would ever say it is a better theory than Einstein’s theory of General Relativity.

Consistency has two key parts to it. The first is that what can and has been computed must be consistent with all known observational facts. As Murray Gell-Mann said about his early graduate student years, “Suddenly, I understood the main function of the theoretician: not to impress the professors in the front row but to agree with observation [10].” Experimentalists of course would not disagree with this non-negotiable requirement of observational consistency. If you cannot match the data what are you doing, they would say?


However, theorists have a more nuanced approach to establishing observational consistency. They often do not spend the time to investigate all the consequences of their theories. Others do not want to “mop up” someone else’s theory, so they are not going to investigate it either. We often get into a situation of a new theory being proposed that solves one problem, but looks like it might create dozens of other incompatibilities with the data but nobody wants to be bothered to compute it. Furthermore, the implications might be extremely difficult to compute.

Sometimes there must be suspended judgment in the competition between excellent theories and observational consequences. Lord Kelvin claimed Darwin’s evolution ideas could not be right because the sun could not burn long enough to enable long-term evolution over millions of years that Darwin knew was needed. Darwin rightly ignored such arguments, deciding to stay on the side of geologists who said the earth appeared to be millions of years old [11]. Of course we know now that Kelvin made a bad inference because he did not know about the fusion source of burning within the sun that could sustain its heat output for billions of years.

A second part to consistency is mathematical consistency. There are numerous examples in the literature of subtle mathematical consistency issues that need to be understood in a theory. Massive gauge theories looked inconsistent for years until the Higgs mechanism was understood. Some gauge theories you can dream up are “anomalous” and inconsistent. Some forms of string theory are inconsistent unless there are extra spatial dimensions. Extra time dimensions appear to violate causality, even when one tries to demand it from the outset, thereby rendering the theory inconsistent. Theories with ghosts, which may not be obvious upon first inspection, give negative probabilities of scattering
Mathematical consistency is subtle and hard at times, and like observational consistency there is no theorem that says that it can be established to comfortable levels by theorists on time scales convenient to humans. Sometimes the inconsistency is too subtle for the scientists to see right off. Other times the calculability of the mathematical consistency question is too difficult to give definitive answer and it is a “coin flip” whether the theory is ultimately consistent or not. For example, pseudomoduli potentials that could cause a runaway problem are incalculable in some interesting dynamically broken supersymmetric theories [12].

It is not controversial that observational consistency and mathematical consistency are non-negotiable; however, the due diligence given to them in theory choice is often lacking. The establishment of observational consistency or mathematical consistency can remain in an embryonic state for years while research dollars flow and other IBE criteria become more motivational factors in research and inquiry, and the consistency issues become taken for granted.

This is one of the themes of Gerard ‘t Hooft’s essay “Can there be physics without experiments?”. He reminds the reader that some of the grandest theories are investigations of the nature of spacetime at the Planck scale, which is many orders of magnitude beyond where we currently have direct experimental probes. If this is to continue as a physics enterprise it “may imply that we should insist on much higher demands of logical and mathematical rigour than before.” Despite the weakness of verb tense employed, it is an incontestable point. It is in these Planckian theories, such as string theory and loop quantum gravity, where the lack of consistency rigor is so plainly unacceptable. However, the cancer of lax attention to consistency can spread fast in an environment where theories and theorists are feted before vetted.

(2012)
Added on February 28


This long retrospective analysis of the already 50-year-old story of the electroweak symmetry breaking mechanism has been carried out in the light of the experimental discovery of the 125 GeV resonance at LHC Run 1, and through the prism of its geometrization with a tentative noncommutative bias, to uncover a new spectrum of bright colours entangled in the pale glow of beyond-the-Standard-Model physics.

As reported above, Iliopoulos explains nicely in his review how Yang and Mills succeeded in providing the first geometric setting to describe quantum non-abelian gauge fields, interpreting the latter as internal symmetries in a fixed background space where general relativity plays no role (even if it inspired them). It is hard to miss the reversal and more extensive move operated by the spectral noncommutative paradigm of Connes and Chamseddine, who have patiently built and polished a new, mathematically and experimentally coherent, geometric spectral standard model in which the internal symmetries appear in a natural manner as a slight refinement of the algebraic rules of coordinates (different from supersymmetry).

Yang-Mills theories were first criticized by Pauli, as their quanta had to be massless in order to maintain gauge invariance. The theory was thus set aside for a while, before the discovery that particles can acquire mass through symmetry breaking in massless theories triggered a significant restart of Yang–Mills theory studies.
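For completeness, here is the textbook numerical content of that mechanism (a standard estimate, not a quotation from the reviews above): with a Higgs vacuum expectation value v ≈ 246 GeV and SU(2)×U(1) couplings g ≈ 0.65 and g' ≈ 0.36, the gauge bosons pick up masses m_W = gv/2 and m_Z = (v/2)√(g² + g'²).

# Tree-level gauge boson masses from electroweak symmetry breaking:
# m_W = g v / 2 and m_Z = (v / 2) * sqrt(g^2 + g'^2).
import math

v = 246.0                # GeV, Higgs vacuum expectation value
g, g_prime = 0.65, 0.36  # SU(2) and U(1) couplings, rounded illustrative values

m_W = g * v / 2
m_Z = v / 2 * math.sqrt(g**2 + g_prime**2)
print(f"m_W ~ {m_W:.0f} GeV, m_Z ~ {m_Z:.0f} GeV")  # ~ 80 GeV and ~ 91 GeV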

As far as spectral geometric models are concerned, they are at best marginally quoted in reviews and rarely considered seriously. Which major advance will prompt significant interest from the physics community is hard to anticipate. One can hope that the already established connection of some mimetic gravity models with a possible quantization of the volume of spacetime will light the fire for a new kind of investigation of the cosmological standard model dark sector…

To come back to the ground, another obstacle to a more extensive study of spectral models is the emptiness of their expected spectrum of new fundamental particles to be discovered with man-made accelerators; but then, this is also the perspective sketched by the study of minimal yet realistic grand unified SO(10) models or the recent SMASH models, all of which accommodate the full spectrum of low energy phenomenology (with the exception of a very light axion).

Hopefully there is more to search for with nuclear reactors and hadron or lepton colliders than new elementary particles! A lot of physicists are involved in flavour mixing studies, for instance. It could be that noncommutative geometry gives a fresh look here too.

For the theorist, a criticism of spectral noncommutative geometry might come from the prejudice against models that do not provide a solution to the naturalness problem. Maybe this requirement could be suspended for a while, waiting for a more extensive study of the fine-tuned "parameters" (coming from new degrees of freedom like a singlet scalar and right-handed neutrinos) computable from the spectral action principle or required to make it mathematically coherent. Indeed, these parameters, involved in the renormalisation flow, would have values constrained over the full energy range: from the low energy scale up to the unification one, in order to tame the quantum mass corrections to the Higgs boson, and also at the intermediate seesaw scale, to accommodate left-handed neutrino masses and a leptogenesis cosmological scenario. If such a scenario were miraculously possible, it could help to uncover some new hidden symmetry behind possible accidental corrections to the quadratic divergence of the Higgs sector in some extended versions of the Standard Model...
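To give a flavour of the kind of quadratic corrections at stake (a standard one-loop estimate in the plain Standard Model, often written through the Veltman combination, with rounded masses; nothing here is specific to the spectral action or its extensions):

# One-loop quadratically divergent correction to the Higgs mass parameter,
# delta m_H^2 ~ (3 Lambda^2 / (8 pi^2 v^2)) * C, with the Veltman combination
# C = m_H^2 + 2 m_W^2 + m_Z^2 - 4 m_t^2; C would have to vanish (or be
# cancelled by new states) for the quadratic sensitivity to disappear.
import math

v, m_H, m_W, m_Z, m_t = 246.0, 125.0, 80.4, 91.2, 173.0  # GeV, rounded
Lambda = 1.0e4                                            # GeV, illustrative 10 TeV cutoff

C = m_H**2 + 2 * m_W**2 + m_Z**2 - 4 * m_t**2
delta_mH2 = 3 * Lambda**2 / (8 * math.pi**2 * v**2) * C
print(f"Veltman combination C ~ {C:.0f} GeV^2")
print(f"delta m_H^2 ~ {delta_mH2:.2e} GeV^2, to be compared with m_H^2 ~ 1.6e4 GeV^2")

The mismatch by more than two orders of magnitude, already for a 10 TeV cutoff, is the quantitative face of the "feared quantum instabilities" mentioned above.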