dimanche 19 juillet 2015

A tale of two attoms (new hypotheses should not be half-hearted but doubled)

This post is a continuation of one written in French more than a year ago. To reach a wider audience I have since switched to the lingua franca of contemporary science literature.

Quantum History starts with Atoms of matter confirmed by Perrin...
Following Wikipedia, let's recall that Jean Perrin (30 September 1870 – 17 April 1942) was a French physicist who, in his studies of the Brownian motion of minute particles suspended in liquids, verified Albert Einstein's explanation of this phenomenon and thereby confirmed the atomic nature of matter. He was awarded the Nobel Prize in Physics in 1926 for having 'put a definite end to the long struggle regarding the real existence of molecules'. Perrin reviewed and explained in a very pedagogical way his (and earlier) studies establishing the reality of atoms and molecules in a book entitled Atoms, published in French in 1913 and in English in 1916. Here is an extract from the foreword:

Atoms Jean Perrin 1916

Quantum Higgstory leads to the two "attoms" of spacetime uncovered by Chamseddine, Connes and Mukhanov 
Following lines parallel to the former paragraph, one could say that the periphysicist that I am is a French blogger (born 15 November 1972) who, in posts about the zitterbewegung of chiral fermions through the Higgs boson vacuum, tries to understand and publicize the so-called Connes construction of the standard model, which leads to a spectral noncommutative unification of the four fundamental forces. This ambitious program has recently inspired a bold hypothesis: the existence of quanta of geometry, which happen to come in two kinds. This might be a first step towards a theory of quantum gravity, and it appears to be connected to the mimetic dark matter proposed by Chamseddine and Mukhanov. Unlike Perrin, I have only a modest talent for popularization, so I will just propose that the two quanta of geometry be dubbed attoms, and provide serious, interesting bibliography and relevant extracts to support my moonshine talk. Needless to say, the experimental confirmation of attoms of spacetime will be difficult. So I do not pretend to be able to provide any proof of the attoms hypothesis today, but simply indulge in imagining a possible narrative describing this fascinating hypothesis of theoretical and mathematical physics, rooted in already validated experiments (I have Higgs bosons and the cosmic microwave background in mind).

So the tale would start following the explanation of the "gauging" of the Higgs boson thanks to noncommutative geometry, as explained below by none other than R. Brout, one of the fathers of the (LA)BEH(GHKW) mechanism:  

R. Brout Aug 1999

Personally I would like to parallel the role of the Brownian motion in the proof of atoms with the unloved (or just naive enough to be dangerous) zitterbewegung interpretation of the BEH mechanism and its possible connection with attoms. To explain very briefly how the noncommutative geometrization of physics leads to them, let's quote their fathers:

Motivated by the construction of spectral manifolds in noncommutative geometry, we introduce a higher degree Heisenberg commutation relation involving the Dirac operator and the Feynman slash of scalar fields. This commutation relation appears in two versions, one sided and two sided. It implies the quantization of the volume. In the one-sided case it implies that the manifold decomposes into a disconnected sum of spheres which will represent quanta of geometry. The two sided version in dimension 4 predicts the two algebras M_2(H) and M_4(C) which are the algebraic constituents of the Standard Model of particle physics. This taken together with the non-commutative algebra of functions allows one to reconstruct, using the spectral action, the Lagrangian of gravity coupled with the Standard Model. We show that any connected Riemannian Spin 4-manifold with quantized volume >4 (in suitable units) appears as an irreducible representation of the two-sided commutation relations in dimension 4 and that these representations give a seductive model of the "particle picture" for a theory of quantum gravity in which both the Einstein geometric standpoint and the Standard Model emerge from Quantum Mechanics. Physical applications of this quantization scheme will follow in a separate publication.
(Submitted on 4 Nov 2014)

The separate publication will have to meet great expectations. Said differently, Chamseddine, Connes and Mukhanov and their coworkers and followers are still writing the story. Here is an excerpt from the latest development:
... we conclude that in noncommutative geometry the volume of the compact manifold is quantized in terms of Planck units. This solves a basic difficulty of the spectral action [1] whose huge cosmological term is now quantized and no longer contributes to the field equations... 
One immediate application is that, in the path integration formulation of gravity, and in light of having only the traceless Einstein equation ..., integration over the scale factor is now replaced by a sum of the winding numbers with an appropriate weight factor. We note that for the present universe, the winding number equal to the number of Planck quanta would be ∼ 10^61 [9]
(Submitted on 8 Sep 2014 (v1), last revised 11 Feb 2015 (this version, v4))

Time will tell if this 10^61 number of Planck quanta will play for the attoms hypothesis the role the Avogadro constant played for the demonstration of atoms. But even if it doesn't, its name is easily found: the Eddington-Dirac-Jordan constant of course! 
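For readers wondering where such a number could come from, here is a back-of-the-envelope sketch. It assumes (my guess, not a statement by the authors) that the ~10^61 figure tracks the present Hubble radius expressed in Planck lengths:

```python
# Back-of-the-envelope check: the Hubble radius in Planck lengths is ~10^61.
import math

c   = 2.998e8                   # speed of light, m/s
H0  = 67.7 * 1000 / 3.086e22    # Hubble constant, 1/s (67.7 km/s/Mpc)
l_P = 1.616e-35                 # Planck length, m

R_H = c / H0                    # Hubble radius, m
ratio = R_H / l_P
print(f"R_H / l_P ≈ 10^{math.log10(ratio):.1f}")
```

With these standard values the ratio indeed lands within a factor of a few of 10^61.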

//addendum 31 october 2015
With hindsight I propose a better name for the number of spacetime at(t)oms in the present universe: the Archimedes observable! Reading the last words of the famous Sand Reckoner by one of the oldest known physicist-mathematicians indeed shows an approximate coincidence too beautiful to go unquoted:
 ... it is obvious that the multitude of sand having a magnitude equal to the sphere of the fixed stars which Aristarchus supposes is smaller than 1000 myriads of the eighth numbers*.
King Gelon, to the many who have not also had a share of mathematics I suppose that these will not appear readily believable, but to those who have partaken of them and have thought deeply about the distances and sizes of the earth and sun and moon and the whole world this will be believable on the basis of demonstration. Hence, I thought that it is not inappropriate for you too to contemplate these things.
3rd century B.C
1000 myriads of the eighth numbers = 10^63
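For the curious, the asterisked conversion can be checked in a couple of lines, using the standard reading of Archimedes' octad system (a myriad is 10^4, and the unit of the eighth "numbers" is (10^8)^7):

```python
# Unpacking Archimedes' "1000 myriads of the eighth numbers".
# A myriad is 10^4; the k-th "numbers" (octads) start at (10^8)^(k-1).
myriad = 10**4
octad  = 10**8
unit_of_eighth_numbers = octad**7          # (10^8)^7 = 10^56
grains = 1000 * myriad * unit_of_eighth_numbers
print(grains == 10**63)   # True
```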

From the standard theory attoscopy to spectral models exploration (summer homework ;-)

Riding on the momentum gained from reading the seminal work of Jean Iliopoulos on the standard model (penultimate post), I would be tempted to jump into an enthusiastic discussion of the prospects for (and some difficulties of) a program of geometric unification of the four fundamental interactions thanks to the spectral action principle and the noncommutative geometry framework.

Having no expertise, only an epistemological acquaintance with this theoretical program, everything that is not a quotation would of course be just moonshine talk...

... but it's late and tomorrow is time for summer vacation so more about this another day. Happy summer days to everyone!

vendredi 17 juillet 2015

From standard model premise to standard theory completion (in two pics and two comments)

A follow-up to the former post, much shorter ;-)

Two pics to illustrate the beginning and the end of the experimental evidence of the standard model/theory of particle physics

The first sign of a neutral current was an antineutrino-electron scattering event observed in December 1972 (figure above) in the Gargamelle bubble chamber. For the total exposure of 1.4×10^6 pictures, 5 to 30 events were expected depending ... finally 3 events were seen.

Higgs event candidate in the ATLAS detector containing a pair of muons and a pair of electrons (recorded on June 30th 2011)
//Added on August 1st 2015:
A phenomenologist's and an experimentalist's points of view on this fantastic endeavour
The 1973 discovery of weak neutral currents in neutrino scattering by the Gargamelle bubble-chamber collaboration at CERN [9] made experimentalists, and the world at large, aware of Yang-Mills theories, much as the 1971 work of ’t Hooft [7] immediately attracted attention from (field) theorists to the same subject. Both lines of work [7,8,9] were monumental in their difficulty, and ran against deep prejudices. In the case of neutral currents, the prejudices had various sources, among them [I have refreshed my memory on these issues by re-reading the talk by Bernard Aubert in [10], an interestingly uneven book; and their rendering by Peter Galison in [11], a thoroughly documented report, that reads like a good novel. I quote them freely in this section]: 
• The very strong limits on their strangeness-changing variant, which in the 70's were at the branching-ratio level of 10^-6 (10^-9) for K^± (K^0_L) decays [12]. 
• The perception that the measurement of neutrino-induced weak neutral currents —at least in the semileptonic channels with larger cross-sections than the purely leptonic ones— was nearly impossible in its practical difficulty [13]. 
• The existence of severe (and incorrect) upper limits on strangeness-conserving neutral-current processes, such as the one by Ben Lee, stating that [The results of W. Lee [14]] rule out the existence of the neutral current predicted by Weinberg’s model... [15], or the one by John Bell, J. Løvseth and Tini Veltman: Thus the ratio of neutral-current “elastic events” is less than 3% [16]. 
• The fact that neutrino experiments at the time were primarily designed to look for sequential heavy leptons and for the Lee–Yang process [17] —νµ+N → W+µ+N— for light W’s, but not for neutral currents. 
Naturally, the neutral-current processes favoured by theorists were the ones whose cross sections could be calculated with confidence in the standard model: ν_l (ν̄_l) elastic scattering on electrons, whose standard cross-sections were worked out by ’t Hooft as early as 1971 [18] (he may also have been the first to emphasize a trivial but important fact: a measurement of the weak-mixing angle, θW, would imply a prediction for the —then— enormous masses of the W and Z). By January 1973, the Aachen group of the Gargamelle collaboration had found a “picture book” event, with a single recoiling electron [19]. But it was just one event and —while it immediately had the effect of putting Gargamellers even harder at work on neutral currents— various cautions and dangerous backgrounds kept the team from publishing this result until July, right before they published their work on semileptonic neutral currents [9].
(Submitted on 23 Apr 2004 (v1), last revised 15 Jun 2004 (this version, v4))

... discovery does not only mean signal identification but also background analysis and cross-checks of the information, on the experimental as well as the theoretical side. In the neutral-current history, the possibility to check the neutron background and also to measure the rate in so many different channels (elastic, semi-elastic, purely leptonic) was a capital piece in the process of building evidence. 
... More and more, high-energy particle physics calls for large collaborations. The Gargamelle experiment was the first large collaboration in modern physics. Announcing a discovery must first imply convincing evidence for the members of the collaboration. Each member has his own history and personal background. This is especially sensitive in what we call "evidence": for some, the statistical analysis is more convincing; for others, it will be the presence of "gold-plated" events. Clearly, the more active the collaboration in an experiment, the more solid the evidence must be before calling it a discovery.  
NEUTRAL CURRENTS, Bernard Aubert.  

From the "standard" model (programmatic construction) to the standard theory (accomplishments)

Returning to the roots of the standard model
1974: in a summary talk for a conference, John Iliopoulos presents, for the first time in a single report, the view of physics now called the Standard Model. Starting from this basic fact, let us wander through the scientific literature available online to learn more (and find out when the expression Standard Model was used for the first time ;-).
We start with the transcription of Iliopoulos' talk at the 17th conference of the Rochester series, held in London: 

The idea of unifying the weak and electromagnetic interactions is as old as the weak interactions themselves and already by the late 1950's several models were proposed (10) which incorporated most of the features that we find in present day theories. In particular the Yang-Mills (11) couplings were used with the photon, as one of the neutral gauge bosons. However at that time the gauge symmetry had to be explicitly broken by the vector meson mass terms and, as a result, these theories were not renormalizable. The last ingredient was discovered in 1964 with the study of spontaneously broken gauge symmetries (12a, 12b). 
It is remarkable that these two ingredients, namely Yang-Mills gauge invariance and spontaneously broken symmetries, each one taken separately, seem useless for weak interactions, both being hopelessly inflicted with zero mass bosons. However, when combined together in a spontaneously broken gauge symmetry, the two diseases cure each other, and the resulting theory, although still gauge invariant, can be made to have the correct spectrum of states. The synthesis of the previous models with the Higgs mechanism was first proposed (13) by S. Weinberg in 1967 and, although this model is the simplest one which seems to fit all the existing data today, it attracted little attention at the time it was published. The reason is that renormalizability (13) of such a theory was suggested in the original papers of Weinberg and Salam, but no proof was presented, so this model looked like one more, in a seemingly endless series of attempts to go beyond the Fermi theory of weak interactions.
It has become customary for any speaker who discusses this subject, to complain about the absence of the ultimate model of the world which the gauge theories were supposed to give us. I consider this attitude a very negative one. We have already forgotten that just three years ago we were still struggling to get all the divergencies of the Fermi theory under some sort of control, and we are complaining today because we do not yet know why the muon is so heavy and the pion so light, or why CP is violated in one place and isospin is not in another. Here I would like to adopt the opposite view and, taking for granted that with gauge theories we are on the right track, to make a list of the desired properties of this ultimate model, for which the gauge theories have given at least the glimmer of a possibility. I think that all these properties can be deduced from the requirement of maximum predictive power (33).
(i) Universality...
(ii) The lepton spectrum...
(iii) 1+γ5 structure...
(iv) The |ΔI| = 1/2...
(v) CP violation...
(vi) Strong interaction symmetries...
(vii) Baryon number conservation...
(viii) Asymptotic freedom...
(ix) A general picture: Hierarchy of interactions...
Finally let me remark that the order of magnitude of the superheavy masses may sound enormous; however, if one accepts the initial idea, namely that in some range all coupling constants are equal, there is no other possibility. In fact the dependence of the coupling constants on the momentum range predicted by the renormalization group is logarithmic ... and therefore one needs these orders of magnitude in order to explain the observed differences of strength among the interactions at ordinary energies. Furthermore these are precisely the energies at which the gravitational interactions are expected to become important, which suggests some possible connection between the superheavy breaking of G and gravitation. 
 I could continue my list of desired properties for the ultimate model of the world, but it must be obvious by now that no existing model satisfies all of them. However, continuing my optimistic view, I believe that all these conditions are so restrictive that the model will be quite unique. The reason why it has escaped discovery up to now, must certainly be attributed to insufficiency of experimental data (after all until some months ago there were people who did not believe in the existence of neutral currents). In any case, and I am sure everybody agrees, the fact that we can seriously discuss all these properties outside the science fiction conventions, is a tremendous progress which is solely due to the adventure of gauge theories.
Jean Iliopoulos 1974

This report can definitely be seen as the birth certificate of the standard model program. But it does not contain the minimal standard model as it is generally called today. One particular reason is that at the time the spectrum of elementary fermions was far from complete; the third generation of fermions, for instance, was completely unknown. More anecdotally, the colours of quarks were not settled either, as one can see in the following pedagogical presentation by the same author three years later at the 1977 CERN School of Physics:

The "Standard" model (C. Bouchiat, J. Iliopoulos and Ph. Meyer) 
This is the extension of the Weinberg-Salam model to the hadronic world. This extension is completely straightforward. We have four quartets of elementary fermions; the leptons 

and the three-coloured quarks, and all are treated identically. Thus we form doublets:
 ... and for each doublet (there are eight of them) we repeat the construction of Section 5. Notice that in SU(4) all quarks participate in weak interactions, while in SU(3) the λc quark does not.  
We can try to go beyond this standard model and give real meaning to the "lepton-hadron symmetry" by putting leptons and quarks in the same representation. There exist already several such models, but there is no time to go into them now. They usually tend to violate the separate conservation of baryon and lepton numbers, and we need super-heavy W's — sometimes as heavy as one gramme! — in order to suppress unwanted transitions such as proton decays. Their attractive feature is that, sometimes, they suggest an intimate connection between gauge theories and gravity.

Jean Iliopoulos is a Greek physicist but most of his career was spent in France, so let us appreciate his vision of the modèle standard in French:

The title of my course at the 1974 Gif summer school was "gauge theories of the weak and electromagnetic interactions". But, given that I had already treated the same subject last year, I decided this time to put the emphasis rather on recent developments, such as asymptotic freedom, the search for charm and colour, or speculations on the general unification of all interactions... 
Last year we laid out the general strategy for the construction of renormalizable models of the weak and electromagnetic interactions. This strategy has proved very fruitful, producing a large number of more or less realistic models. There is no question of examining them all here, but there are a few general properties that we find in all the models presented so far. 
As we saw last year, all the models contain either neutral currents, or heavy leptons, or both. At first physicists did not believe in neutral currents, and this distrust was reflected in the proposed models which, most often, were specially designed to avoid processes of the type ν+N→ν+X. Today the experimentalists have finally agreed on the existence of these currents, and this already eliminates most of the older models. On the other hand the presence of heavy leptons is by no means excluded and, in fact, it is necessary to the formulation of almost all the models, with the exception of the original Weinberg-Salam model. The problem with these leptons is, first, that they become heavier and heavier as the experiments advance and, second, that there seems to be no rule to predict their masses and properties. Nevertheless, given that we do not yet understand the problem of the masses of the known leptons (neutrinos, electron, muon), we cannot exclude the possibility of heavy leptons.   
The second characteristic common to all the models is the enlargement of the symmetry of the strong interactions from SU(3) to a larger group (SU(4) etc.). This entails the existence of other quantum numbers, conserved by the strong interactions, and, consequently, of new particles which have not been observed up to now. We shall come back to this in detail in the following chapters. 
Apart from these properties, which are imposed on us by the rules of the game, the theorist enjoys great freedom in the construction of models. If he is not too embarrassed to introduce large numbers of new, and often rather exotic, particles, he can build models with practically any desired property. This partly explains the great popularity of gauge theories.
... I would first like to point out that within the SU(4) model presented above, the three coloured triplets become three quartets...
The natural group of the strong interactions is now SU(4)×SU(3)', where SU(3)' is "the colour group", that is the group of transformations which mix the three columns of the quark matrix without touching the four rows. In this model the quantum numbers of the three quartets, other than colour, are assumed to be the same... and all physical states are singlets of SU(3)'. It follows that, if all interactions conserve SU(3)', the world is "colour-blind", i.e. the "coloured" states (non-singlets of SU(3)') cannot be produced. This would explain the absence of free quarks or of two-quark states. 
This hadronic world of twelve quarks is also suggested by considerations from renormalization theory. One shows indeed that the Weinberg-Salam model for leptons is, strictly speaking, non-renormalizable, because of the anomalies in the Ward identities of the axial currents known under the name of "Adler anomalies". Now, the introduction of three quartets of coloured quarks eliminates these anomalies and makes the theory truly renormalizable. 
Finally, I would like to mention already that the introduction of colour allows us to construct gauge theories for the strong interactions. Indeed, as we shall explain in the next chapter, we now have good reasons to believe that the strong interactions are also described by non-abelian gauge theories, since they are the only ones to have the property of "asymptotic freedom".
The Pati-Salam model is not the most economical, since the lepton-quark symmetry requires the use of four-component spinors for the neutrinos, hence the introduction of "right-handed" neutrinos. One may ask what is the smallest group describing only the observed leptons and the twelve quarks. The answer is obviously SU(2)×U(1)×SU(3)', where SU(2)×U(1) is the Glashow-Weinberg-Salam-Ward group and SU(3)' is the colour group. However this group is very complicated and is not satisfactory from the aesthetic point of view. Is it possible to describe all the interactions, including the strong ones, with a simple group? The answer was given by Georgi and Glashow. It is simple but again leads to a violation of baryon number. The smallest group is SU(5). The fermions (leptons and quarks) are placed in two representations, of dimensions five and ten. The baryon number violation is "strong", i.e. it occurs at first order in the interactions. The effective coupling constant must therefore be very small, and this leads to vector boson masses M ≥ 10^15 GeV ... This model is very interesting because it introduces the idea of a hierarchy of interactions:  
At the beginning we have the SU(5) symmetry. Leptons and quarks are treated there in a symmetric way. SU(5) is spontaneously broken down to SU(2)×U(1)×SU(3)'. This breaking is assumed to be super-strong, producing vector boson masses of the order of 10^15 - 10^19 GeV. This is necessary in order to suppress the decays of the proton or of other baryons. However, the order of magnitude of these gigantic masses recalls "the Planck mass", which is 10^19 GeV, and suggests a possible connection between this super-strong breaking and the gravitational interactions.  
The super-strong breaking leaves SU(2)×U(1)×SU(3)' exact. The vector bosons of the weak interactions still have zero masses. Then a second breaking intervenes, which produces the masses MW and MZ, of the order of 50 to 100 GeV, and leaves exact electromagnetism and the group of the strong interactions. 
One of the advantages of these totally unified models, apart from their aesthetic value, is that all the interactions are described with a single coupling constant...

We have tried to combine the experimental results of all the processes involving the currents, both at low energy and in the scaling region, and we have seen that a coherent picture can be obtained if we postulate that gauge theories describe all the interactions between elementary particles. The arguments in favour of this postulate are: 
(i) For the strong interactions: non-abelian gauge theories are the only ones to be asymptotically free, and thus provide a theoretical justification of the intuitive parton model.
(ii) For the electromagnetic and weak interactions: because they give a renormalizable theory.
(iii) Finally, it is known that the theory of gravitation is also a gauge theory.
The prices we are called upon to pay are the following:
(i) Existence of intermediate vector bosons. 
(ii) Existence of weak neutral currents or of heavy leptons. The former have already been observed, so the latter are no longer necessary. 
(iii) Existence of charmed particles, to explain the absence of neutral currents with ΔS = 1 or of processes with ΔS = 2. 
(iv) Existence of the colour group. 

Each of these prices opens a vast field of experimental research. Apart from point (i), which seems more out of reach at present, the following are possible: 
We have already stressed the capital importance of the detailed study of the properties of the neutral currents. We do not yet know whether they share the properties of the charged currents, such as maximal parity violation, universality, etc. So many questions for the experimentalists. 
The interest of the discovery of charmed particles needs no emphasis. Apart from the fact that it constitutes the crucial test of these gauge theories, it is obvious that we want to know whether the group of the strong interactions really is SU(3), or something larger... 
The same remarks apply to the discovery of the colour group. As we have already mentioned, its experimental study requires the precise determination (to within 1 to 5%) of the structure functions for different values of x and q^2, of the order of 50 to 130 GeV.

It seems that the SPS, with its better neutrino and especially muon beams, will be the ideal place for this kind of experiment.

In any case the questions raised here go beyond the framework of gauge theories alone, and rank among the most urgent and most interesting problems facing high-energy experimentalists today.
Récents progrès en théorie de jauge / Recent progress in gauge theory

Jean Iliopoulos 1974
The charmed quark would be discovered in November 1974, and since then the standard model has never ceased being confirmed and completed by experimental physics.

"What is past, is prologue" 
What better way to end this post than by sharing some slides from the theory summary talk by Iliopoulos (from whom I borrow the former quote) at the Rencontres de Moriond 2014:

//The title has been slightly edited on August 8, 2015.

mercredi 15 juillet 2015

Another sunny day without susy but no moonshine without dark matter

Looking for discrete beauties
Today the blogger proposes a reading not of the hottest paper about the latest hype in particle physics but of a ten-year-old work that deals with one (almost sleeping*) or two beauties living at very high energy scales: non-supersymmetric SO(10) theories and the Peccei-Quinn [12] symmetry. 

SO(10) grand unified theory [1] is probably the best motivated candidate for the unification of the strong and electro-weak interactions. It unifies the family of fermions; it includes the SU(4)_C quark-lepton symmetry [2] and the left-right (LR) symmetry [3] in the form of charge conjugation as a finite gauge symmetry; it predicts the existence of right-handed neutrinos and through the see-saw mechanism [4] offers an appealing explanation for the smallness of neutrino masses. Due to the success of supersymmetric unification, and the use of supersymmetry in controlling the gauge hierarchy, most of the attention in recent years has focused on the supersymmetric version of SO(10). However, supersymmetry may not be there. After all, it controls the Higgs gauge hierarchy, but not the cosmological constant. The long standing failure of understanding the smallness of the cosmological constant suggests that the unwelcome fine-tuning may be necessary. 

What about grand unification without supersymmetry? At first glance, one may worry about the unification of gauge couplings in this case. Certainly, in the minimal SU(5) theory, the gauge couplings do not unify without low-energy supersymmetry. What happens is the following: the colour and weak gauge couplings meet at around 10^16 GeV, an ideal scale from the point of view of proton stability and perturbativity (i.e., sufficiently below M_Planck). The problem is the U(1) coupling. Without supersymmetry it meets the SU(2)_L coupling at around 10^13 GeV [6]; with low-energy supersymmetry the one-step unification works, as is well known [7].  

On the other hand, the fact that neutrinos are massive indicates strongly that SU(5) is not the right grand unified theory: it simply requires too many disconnected parameters in the Yukawa sector [8, 9]. The SO(10) theory is favored by the neutrino oscillation data. Most interestingly, SO(10) needs no supersymmetry for a successful unification of gauge couplings. On the contrary, the failure of ordinary SU(5) tells us that in the absence of supersymmetry there is necessarily an intermediate scale such as the left-right symmetry breaking scale M_R. Namely, in this case the SU(2)_L and SU(3)_C couplings run as in the standard model or with a tiny change depending on whether or not there are additional Higgs multiplets at M_R (recall that the Higgs contribution to the running is small). However, the U(1) coupling is strongly affected by the embedding in SU(2)_R above M_R. The large contributions of the right-handed gauge bosons make the U(1) coupling increase much more slowly and help it meet the other two couplings at the same point. The scale M_R typically lies between 10^10 GeV and 10^14 GeV (see for example [10, 11] and references therein), which fits very nicely with the see-saw mechanism. Now, having no supersymmetry implies the loss of a dark matter candidate. One may be even willing to introduce an additional symmetry in order to achieve this. In this case it should be stressed that SO(10) provides a framework for axionic dark matter: all one needs is a Peccei-Quinn [12] symmetry U(1)_PQ which simultaneously solves the strong CP problem 
This seems to us more than sufficient motivation to carefully study ordinary non-supersymmetric SO(10). What is missing in this program is the construction of a well defined predictive theory with the realistic fermionic spectrum. This paper is devoted precisely to this task. 
(Submitted on 11 Oct 2005 (v1), last revised 1 Nov 2005 (this version, v2))

Asking burning questions waiting for realistic answers
In particular, the search for the minimal realistic Yukawa sector is a burning question. In the absence of higher dimensional operators at least two Higgs multiplets with the corresponding Yukawa matrices are needed, otherwise there would be no mixings. The Yukawa Higgs sector can contain 10H, 120H and 126H representations, since 
 16×16 = 10+120+126 . (1)  
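A quick dimension-counting check of Eq. (1), just arithmetic:

```python
# Dimension bookkeeping for Eq. (1): two SO(10) spinor 16's give 16*16 = 256
# states, which must match 10 + 120 + 126.
dims = {"10": 10, "120": 120, "126": 126}
assert 16 * 16 == sum(dims.values()) == 256

# Symmetric vs antisymmetric split: 10 and 126 couple symmetrically in
# generation space, 120 antisymmetrically.
sym, antisym = 16 * 17 // 2, 16 * 15 // 2
assert sym == dims["10"] + dims["126"]      # 136 = 10 + 126
assert antisym == dims["120"]               # 120
print("16 x 16 = 10 + 120 + 126 checks out")
```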
One version of the theory with only 10H and 126H was studied in great detail in the case of low-energy supersymmetry [13, 14, 15]. In spite of having a small number of parameters it seems to be consistent with all the data [16, 17, 18, 19, 20]. For the type II seesaw it predicts furthermore the 1−3 leptonic mixing angle not far from its experimental limit: |Ue3| > 0.1 [18, 20] and it offers an interesting connection between b−τ unification and the large atmospheric mixing angle [21, 22]... 
Thus, a first obvious possibility in ordinary SO(10) is to address the model with 10H + 126H, and to see whether or not it can continue to be realistic... 
In this work, we stick to the renormalizable version of the see-saw mechanism (for alternatives using a radiatively-induced see-saw, see[25]), which makes the representation 126H indispensable, since it breaks the SU(2)R group and gives a see-saw neutrino mass. By itself it gives no fermionic mixing, so it does not suffice. The realistic fermionic spectrum requires adding either 10H or 120H... 
Before starting out, it is convenient to decompose the Higgs fields under the SU(2)L× SU(2)R× SU(4)C Pati-Salam (PS) group:
10 = (2, 2, 1) + (1, 1, 6)  
126 = (1, 3, 10) + (3, 1, 10) + (2, 2, 15) + (1, 1, 6) 
120 = (1, 3, 6) + (3, 1, 6) + (2, 2, 15) + (2, 2, 1) + (1, 1, 20) 
As is well known, the 126H provides mass terms for right-handed and left-handed neutrinos: 
MνR = <1, 3, 10>Y126,  MνL = <3, 1, 10> Y126   
which means that one has both type I and type II seesaw: 
MN = − MνD^T MνR^−1 MνD + MνL 
In the type I case it is the large vev of (1, 3, 10) that provides the masses of the right-handed neutrinos, whereas in the type II case the left-handed triplet provides light neutrino masses directly through a small vev [26, 27]. Disentangling the two contributions is in general hard.
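As a toy numerical illustration of the seesaw formula above, here is a pure type I example with made-up (entirely hypothetical) diagonal textures:

```python
import numpy as np

# Toy illustration of the seesaw formula M_N = -M_D M_R^{-1} M_D^T + M_L,
# with made-up (hypothetical) diagonal textures and pure type I (M_L = 0).
mD = np.diag([0.002, 0.6, 170.0])        # Dirac masses in GeV, ~ up-type quarks
MR = np.diag([1e10, 1e12, 1e14])         # heavy right-handed Majorana masses, GeV
ML = np.zeros((3, 3))                    # no type II contribution here

MN = -mD @ np.linalg.inv(MR) @ mD.T + ML
light = np.sort(np.abs(np.linalg.eigvals(MN)))   # light masses in GeV, ascending
print([f"{m * 1e9:.2g} eV" for m in light])      # heaviest comes out ~0.3 eV
```

The point of the toy numbers: a top-like Dirac mass suppressed by a 10^14 GeV Majorana scale naturally lands in the sub-eV range.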
With the minimal fine-tuning the light Higgs is in general a mixture of, among others, the (2,2,1) of 10H and the (2,2,15) of 126H. This happens at least due to the large (1,3,10) vev in the term (126H)2126H10H. In any case, their mixing requires the breaking of the SU(4)C symmetry at a scale MPS, and is thus controlled by the ratio MPS/M, where M corresponds to the mass of the heavy doublets. Thus, if M≃MGUT and MPS≪MGUT, this would not work; we come to the conclusion that one needs to lower M somewhat... 
If the model with a real 10H does eventually fail, one could simply complexify it. This of course introduces new Yukawa couplings, which make the theory less predictive. Certain predictions may remain, though, such as the automatic connection between b−τ unification and a large atmospheric mixing angle in the type II seesaw. This is true independently of the number of 10-dimensional Higgs representations, since 10H cannot distinguish down quarks from charged leptons... It is a simple exercise to establish the above-mentioned connection between |mb|≈|mτ| and large θatm... In the non-supersymmetric theory, b−τ unification fails badly, mτ∼2mb [29]. The realistic theory will require a type I seesaw, or an admixture of both possibilities.

Taking the axion as a dark matter solution
A complex 10H means, as we said, an extra set of Yukawa couplings. At the same time this non-supersymmetric theory cannot account for the dark matter of the universe, since there are no cosmologically stable neutral particles and, as is well known, light neutrinos cannot play this role either. It is then rather suggestive to profit from the complex 10H and impose the U(1)PQ Peccei-Quinn symmetry: 
16 → e^iα 16 , 10 → e^−2iα 10 , 126 → e^−2iα 126 , (10) 
with all other fields neutral. The Yukawa structure has the form (5) with 10H now complex. This resolves the inconsistency in fermion masses and mixings discussed above, and gives the axion as a dark matter candidate as a bonus [30]. The neutrality of the other Higgs fields under U(1)PQ emerges from the requirement of minimality of the Higgs sector that we wish to stick to. Namely, 126H is a complex representation and 10H had to be complexified in order to achieve realistic fermion mass matrices and to have U(1)PQ. It is desirable that the U(1)PQ be broken by a nonzero <126H>, i.e. at the scale of SU(2)R breaking and right-handed neutrino masses [31]; otherwise 10H would do it and give MPQ ≃ MW, which is ruled out by experiment. Actually, astrophysical and cosmological limits prefer MPQ in the window 10^10 − 10^13 GeV [32]. Now, a single 126H just trades the original Peccei-Quinn charge for a linear combination of U(1)PQ, T3R and B−L [31, 33]. Thus, in order to break this combination and provide the Goldstone boson, an additional Higgs multiplet is needed. One choice is to add another 126H and decouple it from fermions, since it must necessarily have a different PQ charge [31]. An alternative is to use a (complex) GUT scale Higgs as considered for SU(5) by [34], with MPQ≃MGUT, which however implies too much dark matter or some amount of fine-tuning. Of course, the Peccei-Quinn symmetry does more than just provide the dark matter candidate: it solves the strong CP problem and predicts vanishing θ. The reader may object to worrying about the strong CP problem and not the Higgs mass hierarchy problem; after all, they are both problems of fine-tuning. Actually, the strong CP problem is not even a problem in the standard model, at least not in the technical sense [35]. Namely, although divergent, in the standard model θ is much smaller than the experimental limit: θ ≪ 10^−10 for any reasonable value of the cutoff Λ, e.g. θ ≈ 10^−19 for Λ=MPlanck.
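For orientation, the quoted MPQ window translates into an axion mass range via the standard relation m_a ≈ 5.7 µeV × (10^12 GeV / f_a), identifying f_a with MPQ; this is a rough sketch that ignores model-dependent O(1) factors:

```python
# Rough axion mass across the quoted M_PQ window, using the standard relation
# m_a ~ 5.7 µeV x (1e12 GeV / f_a) and identifying f_a with M_PQ (O(1)
# model-dependent factors ignored).
def axion_mass_eV(f_a_GeV):
    return 5.7e-6 * (1e12 / f_a_GeV)

for f_a in (1e10, 1e12, 1e13):
    print(f"f_a = {f_a:.0e} GeV  ->  m_a ~ {axion_mass_eV(f_a):.1e} eV")
```

So the preferred window corresponds to axion masses roughly between a µeV and a meV, the range targeted by haloscope searches.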
The physical question is really the value of θ. Peccei-Quinn symmetry fixes this arbitrary parameter of the SM. The situation with supersymmetry and the Higgs mass is the opposite. Low energy SUSY helps keep the Higgs mass small in perturbation theory, but fails completely at predicting it. If we do not worry about naturalness we can do without supersymmetry. On the other hand, if we wish to predict the electric dipole moment of the neutron, U(1)PQ is a must, unless we employ the spontaneous breaking of P or CP in order to control θ [36, 37, 38].

Scrutinizing the patterns of symmetry breaking and neutrino mass
In the over-constrained models discussed in this paper, the Dirac neutrino Yukawa couplings are not arbitrary. Thus one must make sure that the pattern of intermediate mass scales is consistent with a see-saw mechanism for neutrino masses. More precisely, the B−L-breaking scale responsible for right-handed neutrino masses cannot be too low. On the other hand, this scale, strictly speaking, cannot be predicted by the renormalization group study of the unification constraints. The problem is that the right-handed neutrinos and the Higgs scalar responsible for B−L breaking are Standard Model singlets, and thus have almost no impact (zero impact at one loop) on the running. Fortunately, we know that the B−L breaking scale must be below the SU(5) breaking scale, since the couplings do not unify in the Standard Model. Put differently, 
MB−L ≤ MR, the scale of SU(2)R breaking, and hence one must make sure that MR is large enough. This, together with proton decay constraints, will allow us to select between a large number of possible patterns of symmetry breaking. Our task is simplified by the exhaustive study of symmetry breaking in the literature, in particular the careful two-loop level calculations of Ref. [10]. Recall, though, that the (2, 2, 15) field must lie below the GUT scale ...and although its impact on the running is very tiny, it must be included. The lower limit on MR stems from the heaviest neutrino mass mν ≥ mt^2 / MR , (26) which gives MR ≥ 10^13 GeV or so. One can now turn to the useful table of Ref. [10], where the most general patterns of SO(10) symmetry breaking with two intermediate scales consistent with proton decay limits are presented... 
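The bound from Eq. (26) is a one-liner to check numerically; taking ~0.3 eV as an illustrative heaviest light-neutrino mass:

```python
# Numerical reading of Eq. (26), m_nu >~ m_t^2 / M_R: demanding the heaviest
# light neutrino stay below ~0.3 eV (an illustrative value) bounds M_R.
m_t = 173.0          # GeV
m_nu_max = 0.3e-9    # GeV, i.e. 0.3 eV
M_R_min = m_t**2 / m_nu_max
print(f"M_R >~ {M_R_min:.1e} GeV")   # ~1e14 GeV, same ballpark as the text
```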
The above limit on MR immediately rules out a number of the remaining possibilities; the most promising candidates are those with an intermediate SU(2)L×SU(2)R×SU(4)C×P symmetry breaking scale (that is, the PS group with unbroken parity). This is the case in which the breaking at the large scale is achieved by a Pati-Salam parity even singlet, for example contained in 54H. In searching for a realistic symmetry breaking pattern one does not need to stick to the global minimum of the potential as in [11]; a local metastable minimum with a long enough lifetime will do the job as well. It has to be stressed, however, that a big uncertainty is implicit in all models with a complicated or unspecified Higgs sector, due to possibly large and uncontrolled threshold corrections [45]. In any case, the nature of the GUT Higgs and the pattern of symmetry breaking will also enter into the fitting of fermion masses, since they determine the decomposition of the light (fine-tuned) Higgs doublet... This point is often overlooked but it is essential in the final test of the theory. At this point, for us it is reassuring that both the pattern of symmetry breaking and the nature of the Yukawa interactions allow for a possibly realistic, predictive minimal model of non-supersymmetric SO(10).
Coming to a conclusion 

Since we know nothing about the existence of supersymmetry or the nature of its breaking, it is mandatory to study the non-supersymmetric version, as a part of the search for the SO(10) GUT. This was the scope of our paper. We have identified two potentially realistic, predictive Yukawa structures for the case of the renormalizable see-saw mechanism, based on a 126H. This choice is motivated by the fact that the alternative radiative seesaw seems to favor split supersymmetry [25]. We have focused on the renormalizable version simply in order to be predictive, without invoking unknown physics. 
The models require adding 10H or 120H fields. The latter is particularly interesting, due to the small number of Yukawa couplings. Both models seem to require adding U(1)PQ. While this may be appealing since it provides the axion as a dark matter candidate, it is against the spirit of sticking to pure grand unification...

* This article on non-SUSY SO(10) models has not gone unnoticed, as its citation record shows, but the number of publications on SUSY SO(10) is still much larger today, even though the empirical motivations to explore non-SUSY avenues are blatant. Writing about sleeping beauties, the reader familiar with the Quantum Ostinato blog can guess I had another speculative idea in the back of my mind: the Pati-Salam extension of the spectral Standard Model. It follows an almost parallel philosophy and shares a quite convergent phenomenology** with the one described in the article studied today, at least up to the intermediate Pati-Salam scale. Quite interestingly, beyond this partial unification scale the most recent advances in the spectral noncommutative geometrization of physics call into question the very notion of spacetime as we know it, so that GUT models and SUSY might be the gauge groups and the symmetry too far ...

**Contrary to most of the recent non-SUSY SO(10) publications, the article by Bajc and coworkers focuses on the minimal fermion content, namely the standard model (SM) fermions plus one right-handed Majorana neutrino for each of the three SM generations, just like in the full spectral Standard Model where the number of fermions is constrained by axioms.
Of course the need for more fermions comes from requirements to model dark matter*** in a different way than axions, or to provide physics beyond the SM (neutrinoless double beta decay, neutron-antineutron oscillations, light right-handed neutrinos...) testable at energy scales accessible to current technology. It is then not unexpected that theoretical surveys of SO(10) models with a Pati-Salam intermediate scale and a minimal fermion content have been very few... as the prospect of testing them directly is pretty challenging! 

***Talking about dark matter, it is worth reminding the reader that other solutions requiring no new particle but a modification of gravitation have been proposed. It just so happens that one of them, called mimetic dark matter (mentioned recently for the first time in a post at Quantum diaries ;-) could be provided naturally by non-commutative geometry

lundi 13 juillet 2015

Summer program 2015 : riding the W'ave .... (LHC season 2, episode 1)

... to reach new Heig(g)hts ?
I carry on reporting some news about the hint of a signal for a new weak gauge boson W' at LHC: a potential discovery classified as "not unexpected" in a recent review by Chris Quigg. In this post I chose to follow the work from authors of a specific model reported previously in this blog just for the sake of obstinacy ;-). 
The reader will encounter a detailed discussion of the scalar sector proposed to accommodate the spontaneous breaking of a left-right symmetric extension of the standard model at the TeV scale, and she will get more information about the potential falsification of such a phenomenological construction, expected by the end of this year thanks to the 13 TeV collision energy now available for LHC Run 2! 

Using LHC data at √s=8 TeV, the ATLAS and CMS Collaborations have reported deviations from the Standard Model (SM) of statistical significance between 2 and 3σ in several final states, indicating mass peaks in the 1.8–2 TeV range [1]-[5]. The cross sections required for producing these mass peaks are consistent with the properties of a W' boson in an SU(2)L×SU(2)R×U(1)B-L gauge theory with right-handed neutrinos that have Dirac masses at the TeV scale [6]. 
The spontaneous breaking of the SU(2)×SU(2)×U(1) gauge groups requires an extended Higgs sector. For large regions of parameter space, the W' boson has large branching fractions into heavy scalars from the Higgs sector [7][8]. We show here that the W' boson hinted at by the LHC data is likely to decay into H+A0 and H+H0, where H+, A0 and H0 are heavy spin-0 particles present in Two-Higgs-Doublet models. We compute the branching fractions for these decays and present evidence that signals for the W'→ H+A0/H0 processes may already be visible in the 8 TeV LHC data.  
There are numerous and diverse studies of SU(2)L×SU(2)R×U(1)B-L models, spanning four decades [10]. An interesting aspect of the left-right symmetric models is that they can be embedded in the minimal SO(10) grand unified theory. This scenario must be significantly modified due to the presence of Dirac masses for right-handed neutrinos required by the CMS e+e−jj events [unless the right-handed neutrinos have TeV-scale masses with the split between two of them at the MeV scale [9]]. The theory introduced in [6] involves at least one vectorlike fermion transforming as a doublet under SU(2)R. This may be part of an additional SO(10) multiplet, but it may also be associated with completely different UV completions... 
The Higgs sector of the SU(2)L×SU(2)R×U(1)B-L gauge theory discussed in [6] consists of two complex scalar fields: an SU(2)R triplet T of B−L charge +2, and an SU(2)L× SU(2)R bidoublet Σ of B−L charge 0.  
The renormalizable Higgs potential is given by V(T)+V(T,Σ)+V(Σ)... The bidoublet-only potential V(Σ) is chosen such that by itself it does not generate a VEV for Σ ... the T scalar acquires a VEV ... this breaks SU(2)R×U(1)B-L down to the SM hypercharge gauge group, U(1)Y, leading to large masses for the W' and Z' bosons. The value of the T VEV is related to the parameters of the W' boson. In the next section we will show that the parameters indicated by the LHC mass peaks near 2 TeV imply uT ≈ 3−4 TeV. 
The triplet field includes 6 degrees of freedom, and can be written as T = (T1, T2, T3), with Ti (i = 1, 2, 3) complex scalars... The fields of definite electric charge, which are combinations of the Ti components, include three Nambu-Goldstone bosons (GR±, GR0). These become the longitudinal degrees of freedom of the W'± and Z' bosons. The three remaining fields are a real scalar T0, a doubly-charged scalar T++ and its charge conjugate state T--... For quartic couplings in the 0.1–1 range and in the absence of fine-tuning, the T0 and T++ particles have masses comparable to, or heavier than, the W' 
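The degree-of-freedom bookkeeping in this paragraph can be spelled out explicitly:

```python
# Degree-of-freedom bookkeeping for the complex SU(2)_R triplet T:
# 3 complex fields = 6 real dof; 3 are eaten, 3 remain physical.
complex_triplet_dof = 3 * 2
eaten = 3                            # G_R^+, G_R^-, G_R^0 -> longitudinal W'+-, Z'
physical = {"T0": 1, "T++/T--": 2}   # real scalar + complex doubly-charged pair
assert complex_triplet_dof == eaten + sum(physical.values())
print("6 = 3 (Goldstones) + 1 (T0) + 2 (T++/T--)")
```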
The mixed terms, which involve both the T and Σ scalars, will induce a nonzero VEV {proportional to vH} where vH=174 GeV is the electroweak scale. We are interested in the case where vH/uT∼1/20. The effect of the Σ VEV on the T0 and T++ masses and couplings is thus negligible. At energy scales below the T0 and T++ masses, the scalar sector consists only of Σ, which is the same as two Higgs doublets.
.... Besides h0 and the longitudinal W± and Z, the bidoublet field Σ includes the heavy scalars H±, H0 and A0. The range of allowed masses for these particles spans more than an order of magnitude, from the weak scale to the SU(2)R breaking scale. If they are lighter than the W' boson, then W' decays may provide the main mechanisms for production of these scalar particles at hadron colliders. 
...For MW' ≈ 1.9 TeV and gR ≈ 0.45–0.6 (as determined in [6], by comparing the W' production cross section to the CMS dijet excess [4]), we find the SU(2)R breaking scale uT ≈ 3–4 TeV.
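One can crudely invert these numbers, assuming MW' ~ gR·uT up to an O(1) factor (the exact coefficient depends on the triplet normalization, so this is an order-of-magnitude sketch, not the paper's formula):

```python
# Crude inversion of the quoted numbers, assuming M_W' ~ g_R * u_T up to an
# O(1) factor (the exact coefficient depends on the triplet normalization).
M_Wp = 1.9e3  # GeV, the hinted W' mass
u_T = {g_R: M_Wp / g_R for g_R in (0.45, 0.6)}
for g_R, u in u_T.items():
    print(f"g_R = {g_R}: u_T ~ {u / 1e3:.1f} TeV")
```

The quoted gR range indeed maps onto uT in the 3–4 TeV ballpark.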
...The main decay modes for the heavy Higgs bosons are A0/H0→tt and H+→tb. If their masses are below MW'/2, then the cascade decay W'→H±A0/H0→3t+b→3W+4b has a branching fraction of up to 3% and provides a promising way of discovering all these particles. An excess of events, with a statistical significance of about 3σ, has been reported by the ATLAS Collaboration [17] in the final state with two leptons of the same charge and two or more b jets. We have shown that this can be explained by the cascade decay of the W' if the heavy Higgs bosons have masses in the 400–700 GeV range. 
The SU(2)L×SU(2)R×U(1)B-L gauge theory presented here depends on only a few parameters, whose ranges are already determined by accounting for the deviations from the SM mentioned above. The various phenomena predicted by this gauge theory can thus be confirmed or ruled out in the near future. 
In Run 2 of the LHC, the W' production cross section is large, in the 1–2 pb range at √s=13 TeV. Besides resonant production of WZ, Wh0, jj and tb, there are several W' discovery modes: W'→ τNτ → ττjj, ττtb, eτjj, eτtb and W'→eNτ→eejj, eetb would test the existence of the heavy right-handed neutrino Nτ {with a benchmark mass of 1 TeV}, while W'→H±A0/H0→3t+b, W'→WA0/H0→Wtt, W'→H±h0→tbh0 and others would test the existence of the heavy Higgs bosons. 
Another promising search channel for the heavy neutral Higgs bosons, independent of the W', follows from production in association with a top-quark pair, which has a cross section of the order of 10 fb at √s=13 TeV. With more data, the Z' boson analyzed in [6] will also be accessible in a variety of channels.
(Submitted on 7 Jul 2015)

Summertime is a good opportunity to build wonderful sandcastles but few withstand waves ... 
No offence meant to the hard work of phenomenologists of course, but all due respect to these wizards and their fascinating ingenuity as quantum "mechanics" ;-)

samedi 4 juillet 2015

Circumnavigating the Great Grand Loop of Physics (a hypothetical narrative)

Today I have decided to celebrate the third anniversary of the Higgs boson discovery and the 30 (35?) years of noncommutative geometry by musing on the famous "Glashow's snake", which symbolizes the eventual merging of empirical high energy particle physics with observational cosmology, and by following the Ariadne thread woven with insights from the spectral noncommutative geometrization of physics. 
Summer heat obliges, I will indulge myself in a watery metaphor.

the original Glashow's Snake
The unity of the forces on the inside and the characteristic structure sizes on the snake, with the succession of scale sizes on the outside. The snake devours its tail, where the physics of the very smallest (the Planck scale, 10^−30 cm) is visible by peering back in time to the outer reaches of the universe (10^25 cm).

Driven by the wind of spectral action...
Let's begin our navigation in the gravity sector, setting a heading from macro to micro scales and thus sailing clockwise around the Glashow's snake. Our caravel will be christened "the MCtheory", which is my nickname for the spectral noncommutative geometrization of physics (more explanation below). Here is an already old but still relevant and above all pedagogical review of this endeavour that fits nicely with the nautical metaphor chosen for this post:
Einstein was a passionate sailor. We speculate that this was no accident. The subtle harmony between geometries and forces becomes palpable to the sailor; he sees the curvature of the sail and feels the force that it produces. Before Einstein, it was generally admitted that forces are vector fields in a Euclidean space, R3, the scalar product being necessary to define work and energy. Einstein generalized Euclidean to Minkowskian and Riemannian geometry and we have two dreisätze or règles de trois. Take Coulomb’s static law for the electric field with coupling constant ε0 and add Minkowskian geometry with its scale c, the speed of light: you obtain Maxwell’s theory. In particular, there appears the magnetic field with feeble coupling constant µ0=1/(c2ε0). Maxwell’s theory is celebrated today as Abelian or should we say, commutative Yang-Mills theory. The second dreisatz starts from Newton’s (static) universal law of gravitation, adds Riemannian geometry to obtain general relativity with new feeble, gravito-magnetic forces. 
Connes proposes two more dreisätze. Take a certain Yang-Mills theory with coupling constant g, coupled to a Dirac spinor of mass m. Add noncommutative geometry [1] with an energy scale Λ: you obtain a Yang-Mills-Higgs theory [2],[3]. The symmetry breaking scalar becomes a magnetic field of the Yang-Mills field and its mass and self-coupling λ are constrained in terms of g, m and Λ. His second dreisatz starts from general relativity, adds noncommutative geometry to obtain the Einstein-Hilbert action plus the Yang-Mills-Higgs action [4][5]. Now the Yang-Mills and the Higgs fields are magnetic fields of the gravitational field. Again there are constraints on λ, but they are different. 
Let us call noncommutative Yang-Mills the third and noncommutative relativity the fourth dreisatz. Note however that — unlike with supersymmetry — you cannot take any Yang-Mills theory and put ’noncommutative’ in front [6][7][8]. Note also that, behind noncommutative relativity, there stands a genuine noncommutative extension of Einstein’s principle of general relativity, the spectral {action} principle. One of the attractive features of noncommutative geometry is to unify gauge couplings with scalar self-couplings and Yukawa couplings. 
(Submitted on 13 Jun 1997)

Since the writing of this paper, some progress has been made in the noncommutative geometrization of the standard model or, put differently, in the improved spectral modelisation of spacetime at the very smallest scales. The most important one is probably the tentative quantization of spacetime through a new kind of Heisenberg-like relation due to A. Connes and Ali Chamseddine (it would probably deserve to be called a third Connes dreisatz). As time is short and as a more thorough discussion of the possible physical consequences is still awaited, I will not dwell further on this subject. To tease the reader I will just add that the noncommutative Yang-Mills and relativity theories could indeed be a bridge over less dark matter, thanks to the collaboration of Sacha Mukhanov who recently joined Chamseddine and Connes.

...tracking a new Higgs σ scalar
The standard model of particle physics with its now confirmed Brout-Englert-Higgs mechanism, which has been checked up to the TeV scale, can be completely computed from the spectral action principle, a generalisation of the equivalence principle applied to an ordinary 4D manifold dressed with a fine structure coordinatized by two Clifford algebras (it's hard then to resist the temptation to name this work by Connes, Chamseddine and Mukhanov the MCtheory). Conceptually this new model of spacetime with "discrete" dimensions, so to speak, leads us from the electroweak sector to the Grand Unified Theory one. It would be more appropriate to talk about partial instead of grand unification, as the most recent quantitative guess from noncommutative geometry posits the existence of a left-right symmetric Pati-Salam type model at an energy scale around 10^14 GeV if one wants to fit the mass of the Higgs boson. Such an extension of gauge symmetries requires a new Higgs-like scalar that naturally lives at the same high scale. Here is what can be said about it:
Connes’ non-commutative geometry (NCG) [1,2] is a generalization of Riemannian geometry which also provides a particularly apt framework for expressing and geometrically reinterpreting the action for the standard model of particle physics, coupled to Einstein gravity [3-12] (for an introduction, see [13,14]). In a recent paper [15], we suggested a simple reformulation of the NCG framework, and pointed out three key advantages of this reformulation: (i) it unifies many of the traditional NCG axioms into a single, simpler axiom; (ii) it immediately yields a further generalization, from non-commutative to non-associative geometry [16]; and (iii) it resolves a key problem with the traditional NCG construction of the standard model, thereby making the NCG construction tighter and more explanatory than the traditional one based on effective field theory [17]. 
Here we report the discovery of three crucial and unexpected consequences of the reformulation in [15]. (i) First, it yields a new notion of the natural symmetry associated to any non-commutative space, and the action functional that lives on that space. (ii) Second, when we work out the realization of this symmetry for the non-commutative geometry used to describe the standard model of particle physics, we find that the usual SU(3)C×SU(2)L×U(1)Y gauge symmetry is augmented by an extra U(1)B-L factor. (iii) Third, as a consequence of this additional gauge symmetry, we find the standard model field content must be augmented by the following two fields: a U(1)B-L gauge boson Cµ, and a single complex scalar field σ which is {a standard model} singlet and has charge B−L=2. 
The scalar field σ has important phenomenological implications. (i) First, although the traditional NCG construction of the standard model predicted an incorrect Higgs mass (mh≈170 GeV), several recent works [18-21] have explained that an additional real singlet scalar field σ can resolve this problem, and also restore the stability of the Higgs vacuum. Our σ field, although somewhat different (since it is complex, and charged under B−L), solves these same two problems for exactly the same reasons (as may be seen in the U(1)B-L gauge where σ is real). (ii) Furthermore, precisely this field content (the standard model, extended by a right-handed neutrino in each generation of fermions, plus a U(1)B-L gauge boson Cµ, and a complex scalar field σ that is a singlet under SU(3)C×SU(2)L×U(1)Y but carries B−L=2) has been previously considered [22,23] because it provides a minimal extension of the standard model that can account for several cosmological phenomena that may not be accounted for by the standard model alone: namely, the existence of dark matter, the cosmological matter-antimatter asymmetry, and the scale invariant spectrum of primordial curvature perturbations.
(Submitted on 22 Aug 2014 (v1), last revised 14 Jan 2015 (this version, v2))

At any rate, we have here a clear advantage over grand unified theories, which suffer from having arbitrary and complicated Higgs representations. In the noncommutative geometric setting, this problem is now solved by having minimal representations of the Higgs fields. Remarkably, we note that a model very close to the one deduced here is the one considered by Marshak and Mohapatra, where the U(1) of the left-right model is identified with the B−L symmetry. They proposed the same Higgs fields that would result starting with a generic initial Dirac operator not satisfying the first order condition. Although the broken generators of the SU(4) gauge fields can mediate lepto-quark interactions leading to proton decay, it was shown that in all such types of models with partial unification the proton is stable. In addition this type of model arises in the first phase of the breaking of SO(10) to SU(2)R×SU(2)L×SU(4), and these have been extensively studied [1].
(Submitted on 30 Apr 2013 (v1), last revised 25 Sep 2014 (this version, v4))

We can now envision a comprehensive picture of physics from macroscale to microscale. Beyond the electroweak scale one would expect to recover a left-right symmetry of interactions between chiral subatomic particles. A minimal set of new particles is required, consisting of right-handed WR and ZR gauge bosons and three Majorana neutrinos NR, plus a new Higgs-like scalar boson σ. The theory does not precisely predict their masses and it could be that all of them will remain inaccessible to any man-made collider experiment. But now we can take advantage of the seesaw mechanism - a theoretical mechanism naturally implemented in the partial unification model - that helps to probe experimentally, but indirectly, the physics of inaccessibly small scales with accessible left-handed neutrino physics for instance. More on this another day; now let's consider another indirect observational test of the ultra small (10^−25 to 10^−30 cm): namely cosmology. 

... course to steer : seesaw inflation?
We are now focusing on the last and most important part of the Glashow's snake, where tail and head literally meet each other. It is quite enjoyable that a theoretical mechanism called the seesaw might help to merge ultra-small-scale speculations with possible observational phenomena at the cosmological scale. 
... a new scalar field can play the role of inflaton in the presence of a non-minimal gravitational coupling. For example, one may introduce a SM singlet scalar to drive inflation and yield the inflationary predictions consistent with the observations [12a,12b,13], with a lower bound r>0.002 for ns≥0.96 when possible quantum corrections are taken into account [13]. This scalar may be identified as a B-L Higgs field in the minimal B-L model [14]. Furthermore, the Higgs portal scalar dark matter can play the role of inflaton, leading to a unification of inflaton and dark matter particle [15a,15b]. For a scenario relating inflation, seesaw physics and Majoron dark matter, see Ref. [16].
(Submitted on 22 Jan 2015)
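For comparison, the simplest single-field benchmark, V ∝ φ², gives ns = 1 − 2/N and r = 8/N after N e-folds; this is just a textbook reference point, not the non-minimally coupled models discussed in the quote:

```python
# Slow-roll benchmark for the simplest single-field potential V ~ phi^2:
# ns = 1 - 2/N and r = 8/N after N e-folds (a reference point only, not the
# non-minimally coupled singlet models of the quote).
predictions = {N: (1 - 2 / N, 8 / N) for N in (50, 60)}
for N, (ns, r) in predictions.items():
    print(f"N = {N}: ns = {ns:.3f}, r = {r:.3f}")
```

Both numbers clear the quoted thresholds ns ≥ 0.96 (at N = 60) and r > 0.002, though such a large r is disfavored by current B-mode limits, which is precisely why the non-minimal couplings above matter.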

A recent twist along these lines was the proposal that inflation and dark matter have a common origin (a similar idea was suggested by Smoot in arXiv:1405.2776 [astro-ph]), with the inflaton identified with the real part of the complex singlet containing the majoron and breaking lepton number through its vev [30]. The resulting inflationary scenario is consistent with the recent CMB observations, including the B-mode observation by the BICEP2 experiment re-analyzed jointly with the Planck data, as illustrated in the {figure below}. The upper (red) contours correspond to the BICEP2 results, while the lower ones (green) follow from the new analysis released jointly with PLANCK [34]. The lines correspond to 68 and 95% CL contours. Further restrictions on the majoron dark matter scenario should follow from structure formation considerations.

(Submitted on 8 Apr 2015)

We have a natural See-Saw Inflationary scenario based upon having heavy (10^14 GeV) right-handed neutrinos to explain the observed light left-handed neutrinos. There needs to be a scalar field to produce the heavy right-handed neutrino mass and it is a natural source for inflation - See-Saw Inflation. 
This leads to plausible and generally physically possible, though perhaps fine-tuned, mechanisms to tie the neutrino sector to the four major fundamental issues in cosmology: Inflation, Dark Matter, Dark Energy, and Baryogenesis. 
The weakest argument presented here is for Dark Energy. It is not unreasonable to find that this inflation gets back into action when it is perturbed by the later symmetry breaking allowing the left-handed neutrinos to gain a very low Majorana mass, and then make a very slow roll and reasonably rapid decay to the new minimum producing the apparent Dark Energy accelerating the universe. Thus the seesaw mechanism completes its work. 
In one simple incarnation there are three right-handed neutrinos and related fields that correspond to energy levels of two GUT symmetry breakings, e.g. SO(10), and the last big Inflation. The lightest right-handed neutrino acting with its field produces the last high-scale inflation period and then the lightest left-handed neutrino acting with its field produces the late-time accelerating universe. 
It is interesting to note that the lowly left-handed neutrino and its high-born right-handed neutrino partner appear to have a big role in the destiny of the universe and that measurements of the neutrino properties reflect on parameters both at the GUT scale and at the lowest energy scale. In particular, it is important to determine: 
  • 1) Are these Majorana neutrinos? So I say to my Cuore colleagues go to it, as I now have a personal interest beyond being involved via (former) graduate student Michele Dolinski, postdoc Tom Gutier in Cuoricino and my colleagues in Berkeley. Let's see some good neutrinoless double-beta decay. 
  • 2) Measuring the neutrino mass spectrum, as these feed directly into the fits for the right-handed neutrino mass and the inflaton potential. There will undoubtedly be joint fits between the neutrino data and the large-scale structure and CMB data to make a global fit to the right-handed neutrino mass and the inflaton self-coupling. 
The large-scale structure observations - e.g. galaxy and quasar-Lyman alpha surveys - may have interesting things to say not only about levels and coefficients but also about any structure in the potential or any splitting of right-handed neutrino masses. Theorists have their work cut out in continuing the resurrection of SO(10) and making a more coherent and exhaustive treatment of the neutrino and scalar sectors and the links to observables.
(Submitted on 12 May 2014 (v1), last revised 19 May 2014 (this version, v2))

Even if this last article by the Nobel prize laureate astrophysicist G. Smoot is still, as he says himself, a bricolage, it provides fascinating expectations and seems to me to fit pretty well into the noncommutative geometric perspective...
I leave the mimetic dark matter sector discussion for another day...

 The great grand loop of physics / la grande boucle de la physique ;-)  
a new version of Glashow's snake 
(L-R SYM stands for left-right symmetry and NCG for NonCommutative Geometry)

//last edit 25 August 2015: "great loop" replaced by "grand loop" to make an explicit connection to the historical grand unification theories from the past inspired by Glashow in particular.