Sunday, November 29, 2015

Life without SUSY = La vie sans soucis? (a French pun: "a life without worries")

Proton lifetime estimates out of the limbo of theoretical uncertainties?
I continue the wandering through the valley of non-supersymmetric SO(10) models started in this post; or, if you prefer, I invite you to contemplate our quantum phenomenological world from what is probably the most conservative heuristic point of view ;-) 
The continuous efforts to bring to life a new generation of large neutrino detectors such as DUNE [1] (a 40 kt LAr TPC detector to be built in the Homestake mine in South Dakota), Hyper-K [2] (an about 500 kt fiducial water-Cherenkov detector proposed to supersede the “smaller” Super-K [3] in Japan), or perhaps even some variant of a liquid-scintillator machine like the European LENA, bring back the question of a possible fundamental instability of baryonic matter. It is namely the close complementarity of the planned super-rich neutrino physics programme (with a common belief that not only will the neutrino mass hierarchy be determined but signals of CP violation in the lepton sector may also be observed) with the nucleon decay searches that fuels the hope that processes like p → π0e+ or p → π+ν̄ (or perhaps even p → K+ν̄, favoured by low-energy supersymmetry) may finally be seen. To this end, the projected sensitivity of these new facilities should, in practically all channels, exceed that of the previous generation (dominated by Super-K) by at least one order of magnitude, thus touching the “psychological” proton lifetime boundary of 10^35 years.
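To see where the 10^35-year figure comes from, and why it is so sensitive to the underlying theory, one can run the standard dimensional estimate τ ∼ MX^4/(αG^2 mp^5) in a few lines of Python; the mediator mass MX and unified coupling αG below are illustrative round numbers, not the output of any particular model fit:

```python
# Back-of-the-envelope proton lifetime from the dimensional formula
# tau ~ M_X^4 / (alpha_G^2 * m_p^5); all inputs are illustrative assumptions.

M_X = 1.0e16         # gauge leptoquark mass [GeV] (assumed GUT scale)
alpha_G = 1.0 / 40   # unified gauge coupling at M_X (assumed)
m_p = 0.938          # proton mass [GeV]

HBAR_GEV_S = 6.58e-25   # hbar in GeV*s, to convert GeV^-1 to seconds
SEC_PER_YR = 3.156e7

tau_gev = M_X**4 / (alpha_G**2 * m_p**5)          # lifetime in GeV^-1
tau_years = tau_gev * HBAR_GEV_S / SEC_PER_YR
print(f"tau(p -> e+ pi0) ~ {tau_years:.1e} yr")   # ~ 5e35 yr
```

Since τ scales as the fourth power of MX, even a modest uncertainty in the mediator mass translates into orders of magnitude in the predicted lifetime, which is the crux of the accuracy problem discussed below.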
Unfortunately, the steady progress on the experimental side has hardly been complemented by any significant improvement in the accuracy of the theoretical proton lifetime predictions. This, of course, would not be a true concern on day 1 after a proton decay discovery; however, whenever it came to more delicate (i.e., quantitative) questions beyond the obvious “Is the proton absolutely stable?”, like, e.g., “Which models can now be ruled out?” or “What did we really learn about the (presumably) unified dynamics behind these processes?”, theory would have a difficult time pulling any robust answer out of its sleeve.
This has to do, namely, with the enormous (and often irreducible) theoretical uncertainties plaguing virtually all proton lifetime estimates in the current literature (see, e.g., [4, 5] and the references therein), which by far (often by many orders of magnitude) exceed the relatively small, yet fantastic, factor-of-ten “improvement window” the new generation of facilities may open. The typical reason behind this is either a disparity between the amount of input information available and that needed for a potentially accurate calculation, with supersymmetric grand unified theories (SUSY GUTs) as a canonical example, or the lack of a fully consistent treatment beyond the leading order (LO). While the first issue is more a matter of fashion and, as such, can be expected to resolve itself in the future (especially if the LHC sees no hints of SUSY at the TeV scale), the latter is much more difficult to deal with due to the parametrically higher level of complication a next-to-leading-order (NLO) analysis represents...
There are actually many sources of theoretical uncertainty of very different origins that have to be taken into account when the ambition is to get the total uncertainty at least within the ballpark of the aforementioned experimental “improvement window”. In the framework of the classical GUTs, these are, traditionally, i) the limited accuracy of the existing estimates of the relevant hadronic matrix elements, ii) the limited accuracy of the determination of the masses of the mediators of the d=6 baryon and lepton number violating (BLNV) operators ... and iii) the insufficiency of the information about their flavour structure available at low energy. On top of that, the proximity of the GUT scale MG, typically in the 10^16 GeV ballpark, to the (reduced) Planck scale MPl ∼ 10^18 GeV should make one worried about the size of the possible gravity-induced effects which, typically, are not under good control...
The typical “internal structure” of the gauge-mediated (left) and scalar-mediated (right) d=6 baryon and lepton number violating operators in the Standard Model. Xµ stands for the vector leptoquarks transforming, usually, as (3, 2, +5/6) + h.c. or (3, 2, −1/6) + h.c. under the SM gauge group, while ∆ is a generic symbol for scalar mediators transforming typically as (3, 1, +1/3) or (3, 1, +4/3).
From what has been said one may conclude that it is virtually impossible to provide a theoretically robust NLO prediction for the proton lifetime in the classical GUT framework as the proximity of the Planck scale and the size of its effects on the vital ingredients of these calculations makes the theoretical uncertainties way larger than the desired order-of-magnitude ballpark. Nevertheless, there is a very special setting in which the leading order gravity effects may remain under control to such a degree that it is at least worth trying to perform an NLO analysis within.
Indeed, the renormalizable non-supersymmetric SO(10) model in which the GUT-scale gauge symmetry is broken by the 45-dimensional adjoint scalar representation and the Yukawa couplings are governed by the SO(10) vector(s) and 5-index antisymmetric tensor(s) (i.e., 10 and 126), see, e.g., [9, 25], may overcome the main issues discussed above. Due to the antisymmetry of the 45, the leading-order gravity smearing effects in the gauge matching are absent and, hence, the masses of the BLNV mediators may, in principle, be determined with sufficient accuracy. At the same time, the symmetric nature of all the Yukawa couplings at play may justify the use of the formulae (7) and thus also overcome the issue of the limited amount of flavour information available at the low scale.
Interestingly enough, this very special and beautiful model was ignored for more than two decades due to the peculiar tachyonic instabilities revealed in its scalar sector back in the 1980s [26–28]; only recently has it been shown [10] that these are a mere artefact of the tree-level approximations used therein and that the model makes perfect sense as a truly quantum theory. Since then, it has been the subject of several dedicated analyses [6, 7, 29] which will hopefully culminate in a complete and robust NLO prediction in the not-so-distant future.
Theoretical Uncertainties in Proton Lifetime Estimates
Helena Kolešová, Michal Malinský, Timon Mede (Submitted on 19 Oct 2015)

A handful of testable WIMPs?
One of the most promising classes of candidates for DM is the so-called weakly-interacting massive particle (WIMP). These are electrically neutral and colorless particles which have masses of O(10^2–10^3) GeV and couple to SM particles via weak-scale interactions. Their thermal relic abundance can explain the current energy density of DM. Such particles are predicted in many new-physics models; for example, the lightest neutralino in the supersymmetric (SUSY) SM is a well-known candidate for WIMP DM [2].
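{blogger's comment: the statement that a weak-scale thermal relic naturally matches the observed dark matter density is simple dimensional analysis; a minimal numerical sketch, with an assumed O(TeV) mass and weak-strength coupling:}

```python
# "WIMP miracle" estimate: <sigma v> ~ alpha^2 / m^2 for a weak-scale
# particle lands near the canonical thermal value ~ 3e-26 cm^3/s.

alpha_w = 1.0 / 30   # weak-strength coupling (assumed)
m_dm = 1000.0        # WIMP mass in GeV (assumed)

GEV2_TO_CM2 = 3.89e-28   # 1 GeV^-2 expressed in cm^2
C_CM_S = 3.0e10          # speed of light [cm/s]

sigma_v = (alpha_w**2 / m_dm**2) * GEV2_TO_CM2 * C_CM_S
omega_h2 = 0.12 * (3.0e-26 / sigma_v)   # relic density scales as 1/<sigma v>
print(f"<sigma v> ~ {sigma_v:.1e} cm^3/s  ->  Omega h^2 ~ {omega_h2:.2f}")
```

The result lands within a factor of a few of the measured Ωh² ≈ 0.12, which is precisely the coincidence that makes WIMPs attractive.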
For a WIMP to be DM, it should be stable or have a sufficiently long lifetime compared to the age of the Universe. To assure that, it is usually assumed that there is a symmetry which stabilizes the DM particle. For instance, in the minimal SUSY SM (MSSM), R-parity makes the lightest SUSY particle stable and thus a candidate for DM in the Universe [2]. Similarly, Kaluza-Klein parity in universal extra dimensional models [3] and T-parity in the Littlest Higgs model [4] yield stable particles, which can also be promising DM candidates. The ultraviolet (UV) origin of such a symmetry is, however, often obscure; thus it would be quite interesting if a theory which offers a DM candidate and simultaneously explains its stability can be realized as a UV completion rather than introducing the additional symmetry by hand.  
In fact, grand unified theories (GUTs) can provide such a framework. Suppose that the rank of a GUT gauge group is larger than four. In this case, the GUT symmetry contains extra symmetries beyond the SM gauge symmetry. These extra symmetries should be spontaneously broken at a high-energy scale by a vacuum expectation value (VEV) of a Higgs field. Then, if we choose the proper representation for the Higgs field, there remain discrete symmetries, which can be used for DM stabilization [5–10]. The discrete charge of each representation is uniquely determined, and thus we can systematically identify possible DM candidates for each symmetry.
In this work, we discuss the concrete realization of this scenario in non-SUSY SO(10) GUT models. It is widely known that SO(10) GUTs [11–13] have a lot of attractive features. Firstly, all of the SM quarks and leptons, as well as right-handed neutrinos, can be embedded into 16 representations of SO(10). Secondly, the anomaly cancellation in the SM is naturally explained since SO(10) is free from anomalies. Thirdly, one obtains improved gauge coupling unification [13–20] and improved fermion mass ratios [13, 21] if partial unification is achieved at an intermediate mass scale. In addition, since right-handed neutrinos have masses of the order of the intermediate scale, small neutrino masses can be explained via the seesaw mechanism [22] if the intermediate scale is sufficiently high. SO(10) includes an additional U(1) symmetry, which is assumed to be broken at the intermediate scale. If the Higgs field that breaks this additional U(1) symmetry belongs to a 126-dimensional representation, then a discrete Z2 symmetry is preserved at low energies {equivalent to matter parity PM = (−1)^3(B−L)}. One also finds that, as long as we focus on relatively small representations (≤ 210), the 126 Higgs field leaving a Z2 symmetry is the only possibility for a discrete symmetry [23, 24]. We focus on this case in the following discussion.
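{blogger's comment: to make the stabilization bookkeeping concrete, here is the matter parity assignment in a toy snippet; the field list is illustrative, and the key point is that the SM-singlet direction of the 126 carries B−L = 2 and therefore leaves PM unbroken:}

```python
# Matter parity P_M = (-1)^{3(B-L)} surviving the 126-breaking of U(1)_{B-L}.

def matter_parity(b_minus_l):
    n = round(3 * b_minus_l)   # 3(B-L) is an integer for all SO(10) states
    return -1 if n % 2 else +1

examples = [
    ("quark (B-L = +1/3)",           +1 / 3),
    ("lepton (B-L = -1)",            -1),
    ("SM Higgs doublet (B-L = 0)",    0),
    ("126 VEV direction (B-L = 2)",   2),
]
for name, bl in examples:
    print(f"{name:30s} P_M = {matter_parity(bl):+d}")
```

SM fermions come out PM-odd and the Higgs PM-even, while the 126 VEV itself is PM-even; a new particle whose combination of PM and spin cannot be matched by any set of SM decay products is then exactly stable, in close analogy with R-parity in the MSSM.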
DM candidates appearing in such models can be classified into two types: one class of DM particles has effectively weak-scale interactions with the SM particles so that they are thermalized in the early universe, while the other class contains SM singlets which are never brought into thermal equilibrium. In the latter case, DM particles are produced out of equilibrium via thermal scattering involving heavy (intermediate-scale) particle exchange processes {see figure below from arxiv.org/abs/1510.03509}. This type of DM is called Non-Equilibrium Thermal DM (NETDM) [25], whose realization in SO(10) GUTs was thoroughly discussed in Ref. [24]. NETDM is necessarily fermionic, as scalar DM would naturally couple to the SM Higgs bosons. Depending on the choice of the intermediate-scale gauge group, candidates for NETDM may originate from several different SO(10) representations such as 45, 54, 126 or 210. Although the NETDM candidate itself does not affect the running of the gauge couplings from the weak scale to the intermediate scale, part of the original SO(10) multiplet has a mass at the intermediate scale and does affect the running up to the GUT scale. Demanding gauge coupling unification with a GUT scale above 10^15 GeV leaves us with a limited set of potential NETDM candidates...
The non-equilibrium thermal mechanism for dark matter production

Stable SO(10) scalar (fermion) DM candidates must be odd (even) under the Z2 symmetry. Therefore fermions must originate in either a 10, 45, 54, 120, 126, 210 or 210' representation, while scalars are restricted to either a 16 or 144 of SO(10). These multiplets must be split, and we gave explicit examples of fine-tuning mechanisms in order to retain a 1 TeV WIMP candidate which may be an SU(2)L singlet, doublet, triplet, or quartet, with or without hypercharge. Fermions which are SU(2)L singlets with no hypercharge are not good WIMP candidates but are NETDM candidates, and these were considered elsewhere [24]. Our criteria for a viable dark matter model required: gauge coupling unification at a sufficiently high scale to ensure proton stability compatible with experiment; a unification scale greater than the intermediate scale; and elastic cross sections compatible with direct detection experiments. The latter criterion often requires additional Higgs representations to split the degeneracy of the fermionic intermediate-scale representations if DM is hypercharged.
Despite the potentially very long list of candidates (when one combines the possible different SO(10) representations and intermediate gauge groups), we found only a handful of models which satisfied all constraints. Among the scalar candidates, the Y=0 singlet and Y=1/2 doublet (often referred to as an inert Higgs doublet [50]) are possible candidates for SU(4)C⊗SU(2)L⊗SU(2)R and SU(3)C⊗SU(2)L⊗SU(2)R⊗U(1)B-L (with or without a left-right symmetry) intermediate gauge groups. These originate from either the 16 or 144 of SO(10). The latter group (without the left-right symmetry) is also consistent with a state originating from the 144 being a triplet under SU(2)R. To avoid immediate exclusion from direct detection experiments, a mass splitting of order 100 keV implies that the intermediate scale must be larger than about 3×10^-6 MGUT {i.e., ≳ 3×10^10 GeV for MGUT ∼ 10^16 GeV} for a nominal 1 TeV hypercharged scalar DM particle. Some of these models imply proton lifetimes short enough to be testable in ongoing and future proton decay experiments.
The fermion candidates were even more restrictive. Models with Y=0 must come from an SU(2)L triplet (singlets are not WIMPs). In this case only one model was found, using the SU(4)C⊗SU(2)L⊗SU(2)R intermediate gauge group and requiring additional Higgses ... at the intermediate scale. Models with Y=1/2 doublets were found for SU(4)C⊗SU(2)L⊗U(1)R with a singlet fermion required for mixing, and SU(4)C⊗SU(2)L⊗SU(2)R with a triplet fermion for mixing. In both cases, additional Higgses ... are required at the intermediate scale...

(Submitted on 2 Sep 2015)


One route from GUT scale to the SM with a break at Pati-Salam?
The discovery of the Higgs boson at the Large Hadron Collider (LHC) at CERN is a major milestone in the success of the standard model (SM) of particle physics. Indeed, with all the quarks and leptons and force carriers of the SM now detected and the source of spontaneous symmetry breaking identified, there is a well-deserved sense of satisfaction. Nonetheless, there is a widely shared expectation that there is new physics which may be around the corner and within striking range of the LHC. The shortcomings of the standard model are well-known. There is no candidate for dark matter in the SM. The neutrino is massless in the model but experiments indicate otherwise. At the same time the utter smallness of this mass is itself a mystery. Neither is there any explanation of the matter-antimatter asymmetry seen in the Universe. Besides, the lightness of the Higgs boson remains an enigma if there is no physics between the electroweak and Planck scales.
Of the several alternatives of beyond-the-standard-model extensions, the one on which we focus in this work is the left-right symmetric (LRS) model [1, 2] and its embedding within a grand unified theory (GUT). Here parity is a symmetry of the theory which is spontaneously broken, resulting in the observed left-handed weak interactions. The left-right symmetric model is based on the gauge group SU(2)L⊗SU(2)R⊗U(1)B-L and has a natural embedding in the SU(4)C⊗SU(2)L⊗SU(2)R Pati-Salam model [3] which unifies quarks and leptons in an SU(4)C symmetry. The Pati-Salam symmetry is a subgroup of SO(10) [4, 5]. These extensions of the standard model provide avenues for the amelioration of several of its shortcomings...
In a left-right symmetric model emerging from a grand unified theory, such as SO(10), one has a discrete symmetry SU(2)L ↔ SU(2)R – referred to as D-parity [15] – which sets gL = gR. Both D-parity and SU(2)R are broken during the descent of the GUT to the standard model, the first making the coupling constants unequal and the second resulting in a massive WR. The possibility that the energy scale of breaking of D-parity is different from that of SU(2)R breaking is admissible and well-examined. The difference between these scales and the particle content of the theory controls the extent to which gL ≠ gR.
In this work we consider the different options of {non-supersymmetric} SO(10) symmetry breaking. It is shown that a light WR goes hand-in-hand with the breaking of D-parity at a high scale, immediately excluding the possibility of gL = gR. Breaking of D-parity above the scale of inflation is, in fact, usually considered a good feature for getting rid of unwanted topological defects such as domain walls [16, 17]. The other symmetries that are broken in the passage to the standard model are the SU(4)C and SU(2)R of the Pati-Salam (PS) model. The stepwise breaking of these symmetries and the order of their energy scales have many variants. There are also a variety of options for the scalar multiplets which are used to trigger the spontaneous symmetry breaking at the different stages. We take a minimalist position of (a) not including any scalar fields beyond the ones that are essential for symmetry breaking, and (b) imposing the Extended Survival Hypothesis (ESH), corresponding to minimal fine-tuning, to keep no light extra scalars. With these twin requirements we find that only a single symmetry-breaking route – the one in which the order of symmetry breaking is first D-parity, then SU(4)C, and finally SU(2)R – can accommodate a light WR ...

Symmetry breaking routes of SO(10) distinguished by the order of breaking of SU(2)R, SU(4)C, and D-parity. The SO(10) scalar multiplets responsible for symmetry breaking at every stage have been indicated. Only the DCR (red solid) route can accommodate the light WR scenario...


Scalar fields considered when the ordering of symmetry-breaking scales is MD > MC > MR. The submultiplets contributing to the RG evolution at different stages according to the Extended Survival Hypothesis are shown. D-parity (±) is indicated as a subscript.

... Before turning to SO(10) we briefly remark on Pati-Salam partial unification within this route. Because there are four steps of symmetry breaking, this is an underdetermined system. For this work, MR is restricted to be in the O(TeV) range. The scale MC is taken as the other input in the analysis. At the one-loop level the results can be calculated analytically using the beta-function coefficients in eq. (23). The steps can be identified from eqs. (25) and (26). The latter determines MD once MC is chosen; η is then fixed using eq. (25). For example, for MC = 10^6 GeV one gets η = 0.63 when MR = 5 TeV. Within the Pati-Salam model the upper limit on MD is set by the Planck mass MPlanck. We find that in such a limit one has MC = 10^17.6 GeV and η = 0.87 for MR = 5 TeV...
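{blogger's comment: to see the mechanics behind numbers like η = 0.63, recall that at one loop each coupling runs as α_i^-1(μ') = α_i^-1(μ) − (b_i/2π) ln(μ'/μ), applied piecewise between the breaking scales. The sketch below uses placeholder beta coefficients and an assumed common coupling at MD (the real values sit in the paper's eq. (23) and depend on the ESH-selected scalar content), so it illustrates the procedure rather than reproducing the fit:}

```python
import math

def run_inv_alpha(inv_alpha, b, mu_from, mu_to):
    """One-loop RG: alpha^-1(mu_to) = alpha^-1(mu_from) - (b/2pi) ln(mu_to/mu_from)."""
    return inv_alpha - (b / (2 * math.pi)) * math.log(mu_to / mu_from)

# gL = gR holds at the D-parity scale M_D; below it the SU(2)_L and SU(2)_R
# couplings drift apart because the surviving (ESH) scalars are not L-R symmetric.
M_D, M_R = 1.0e17, 5.0e3    # breaking scales in GeV (illustrative)
inv_alpha_D = 40.0          # common alpha^-1 at M_D (assumed)
b_L, b_R = -3.0, -2.0       # placeholder one-loop beta coefficients

inv_aL = run_inv_alpha(inv_alpha_D, b_L, M_D, M_R)
inv_aR = run_inv_alpha(inv_alpha_D, b_R, M_D, M_R)
eta = math.sqrt(inv_aL / inv_aR)   # eta = gR/gL = sqrt(alpha_R/alpha_L)
print(f"eta = gR/gL at M_R ~ {eta:.2f}")   # < 1 whenever b_L < b_R
```

With the paper's actual coefficients and the intermediate SU(4)C step included, this is the computation that returns η = 0.63 for MC = 10^6 GeV and η = 0.87 at the Planck-mass end.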
The observation by the CMS collaboration of a 2.8σ excess in the (2e)(2j) channel around 2.1 TeV can be interpreted as a preliminary indication of the production of a right-handed gauge boson WR. Within the left-right symmetric model the excess identifies specific values of η = gR/gL, r = MNe/MWR, and VNe {Ne is the electronic right-handed neutrino and V parametrizes the mixing between the electron and its right-handed neutrino}. We stress that even with gL = gR and VNe = 1 the data can be accommodated by an appropriate choice of r. We explore what the CMS result implies if the left-right symmetric model is embedded in an SO(10) GUT. η ≠ 1 is a consequence of the breaking of left-right D-parity. We find that a WR in the few-TeV range very tightly restricts the possible routes of descent of the GUT to the standard model. The only sequence of symmetry breaking which is permitted is MD > MC > MR with a D-parity breaking scale ≥ 10^16 GeV. All other orderings of symmetry breaking are excluded. Breaking of the left-right discrete parity at such a high scale pushes gL and gR apart and one finds 0.64 ≤ η ≤ 0.78. The unification scale, MU, has to be as high as ∼10^18 GeV, so that it is very unlikely that proton decay will be seen in the ongoing experiments. The SU(4)C-breaking scale, MC, can be as low as 10^6 GeV, which may be probed by rare decays such as KL → µe and Bd,s → µe or n−n̄ oscillations...
 The ATLAS collaboration has also presented evidence [32] for an enhancement around 2 TeV in the di-boson – ZZ and WZ – channels in their 8 TeV data. Our interpretation of the excess in the (ee)(jj) channel in terms of a WR by itself fails to provide an explanation of the above. If the WR production is normalised to the former then it falls an order of magnitude short of the di-boson rates. It has been shown that interpretation of the di-boson observations as well as the (ee)(jj) data is possible if the LRS model is embellished with the addition of some other fermionic states [33, 34].
Implications of the CMS search for W_R on Grand Unification  
Triparno Bandyopadhyay (Calcutta Univ), Biswajoy Brahmachari (Vidyasagar Evening Coll), Amitava Raychaudhuri (Calcutta Univ)
(Submitted on 10 Sep 2015)
//Updated with two figures on December 1 2015.
//Title slightly edited on 29 December 2016

Wednesday, November 25, 2015

Celebrating the 100th anniversary of the general theory of relativity in the quantum (or at least noncommutative) way

Taking up Dyson's challenge?
The most glaring incompatibility of concepts in contemporary physics is that between Einstein's principle of general coordinate invariance and all the modern schemes for a quantum-mechanical description of nature. Einstein based his theory of general relativity [28] on the principle that God did not attach any preferred labels to the points of space-time. This principle requires that the laws of physics should be invariant under the Einstein group E, which consists of all one-to-one and twice-differentiable transformations of the coordinates. By making full use of the invariance under E, Einstein was able to deduce the precise form of his law of gravitation from general requirements of mathematical simplicity without any arbitrariness. He was also able to reformulate the whole of classical physics (electromagnetism and hydrodynamics) in E-invariant fashion, and so determine unambiguously the mutual interactions of matter, radiation and gravitation within the classical domain. There is no part of physics more coherent mathematically and more satisfying aesthetically than this classical theory of Einstein based upon E-invariance. 
On the other hand, all the currently viable formalisms for describing nature quantum-mechanically use a much smaller invariance group. The analysis of Bacry and Lévy-Leblond [21] indicates the extreme range of quantum-mechanical kinematical groups that have been contemplated. In practice all serious quantum-mechanical theories are based either on the Poincaré group P or the Galilei group G. This means that a class of preferred inertial coordinate-systems is postulated a priori, in flat contradiction to Einstein's principle. The contradiction is particularly uncomfortable, because Einstein's principle of general coordinate invariance has such an attractive quality of absoluteness. A physicist's intuition tells him that, if Einstein's principle is valid at all, it ought to be valid for the whole of physics, quantum-mechanical as well as classical. If the principle were not universally valid, it is difficult to understand why Einstein achieved such deeply coherent insights into nature by assuming it to be so.
To make the mathematical incompatibility more definite, I will focus attention on one of the competing schemes for describing a quantum-mechanical universe. I choose the scheme which is most carefully based on rigorous mathematical definitions and which is also general enough to encompass a wide variety of physical systems. This scheme is the "Algebra of Local Observables" of Haag and Kastler [29]... 
These axioms, taken together with the axioms defining a C*-algebra [30], are a distillation into abstract mathematical language of all the general truths that we have learned about the physics of microscopic systems during the last 50 years. They describe a mathematical structure of great elegance whose properties correspond in many respects to the facts of experimental physics. In some sense, the axioms represent the most serious attempt that has yet been made to define precisely what physicists mean by the words "observability, causality, locality, relativistic invariance," which they are constantly using or abusing in their everyday speech.  
If we look at the axioms in detail, we see that (1), (2), (3) and (6) are consistent with Einstein's general coordinate invariance, but (4) and (5) are inconsistent with it. Axioms (4) and (5), the axioms of Poincaré invariance and local commutativity, require the Poincaré group to be built into the structure of space-time. If we try to replace the Poincaré group P by the Einstein group E, we have no way to define a space-like relationship between two regions, and axiom (5) becomes meaningless. I therefore propose as an outstanding opportunity still open to the pure mathematicians, to create a mathematical structure preserving the main features of the Haag-Kastler axioms but possessing E-invariance instead of P-invariance. 
I had better warn any mathematician who intends to respond to my challenge that his task will not be easy. No merely formal rearrangement of the Haag-Kastler axioms can possibly be sufficient. For we know that Einstein could construct his E-invariant classical theory of 1916 only by bringing in the full resources of Riemannian differential geometry. He needed a metric tensor to give his space-time a structure independent of coordinate-systems. Therefore an E-invariant axiom of local commutativity to replace axiom (5) will require at least some quantum-mechanical analog of Riemannian geometry. Some analog of a metric tensor must be introduced in order to give a meaning to space-like separation. The answer to my challenge will necessarily involve a delicate weaving together of concepts from differential geometry, functional analysis, and abstract algebra. With these words of warning I leave the problem to you.
Freeman J. Dyson (September 1972)
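For the record, the two axioms Dyson singles out are usually written schematically as follows (a standard rendering of the Haag-Kastler postulates, not a quotation from Dyson's text):

```latex
% Axiom (4): Poincaré covariance. There is a representation
% g -> \alpha_g of the Poincaré group \mathcal{P} by automorphisms
% of the algebra of local observables such that
\alpha_g\big(\mathcal{A}(O)\big) = \mathcal{A}(gO), \qquad g \in \mathcal{P}.

% Axiom (5): local commutativity. Observables attached to
% spacelike-separated regions commute:
\big[\,\mathcal{A}(O_1),\,\mathcal{A}(O_2)\,\big] = 0
\quad \text{whenever } O_1 \text{ and } O_2 \text{ are spacelike separated.}
```

Once the Poincaré group is traded for the Einstein group, there is no invariant notion of “spacelike separated” left to put into axiom (5), which is exactly the obstruction Dyson describes.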

Extending the Riemann manifold paradigm...
Riemann was well aware of the limits of his own point of view, as is clearly expressed on the last page of his inaugural lecture ([26]):
Questions about the immeasurably large are idle questions for the explanation of Nature. But the situation is quite different with questions about the immeasurably small. Upon the exactness with which we pursue phenomena into the infinitely small, does our knowledge of their causal connections essentially depend. The progress of recent centuries in understanding the mechanisms of Nature depends almost entirely on the exactness of construction which has become possible through the invention of the analysis of the infinite and through the simple principles discovered by Archimedes, Galileo and Newton, which modern physics makes use of. By contrast, in the natural sciences where the simple principles for such constructions are still lacking, to discover causal connections one pursues phenomena into the spatially small, just so far as the microscope permits. Questions about the metric relations of Space in the immeasurably small are thus not idle ones.  
If one assumes that bodies exist independently of position, then the curvature is everywhere constant, and it then follows from astronomical measurements that it cannot be different from zero; or at any rate its reciprocal must be an area in comparison with which the range of our telescopes can be neglected. But if such an independence of bodies from position does not exist, then one cannot draw conclusions about metric relations in the infinitely small from those in the large; at every point the curvature can have arbitrary values in three directions, provided only that the total curvature of every measurable portion of Space is not perceptibly different from zero. Still more complicated relations can occur if the line element cannot be represented, as was presupposed, by the square root of a differential expression of the second degree. Now it seems that the empirical notions on which the metric determinations of Space are based, the concept of a solid body and that of a light ray, lose their validity in the infinitely small; it is therefore quite definitely conceivable that the metric relations of Space in the infinitely small do not conform to the hypotheses of geometry; and in fact one ought to assume this as soon as it permits a simpler way of explaining phenomena.  
The question of the validity of the hypotheses of geometry in the infinitely small is connected with the question of the basis for the metric relations of space. In connection with this question, which may indeed still be ranked as part of the study of Space, the above remark is applicable, that in a discrete manifold the principle of metric relations is already contained in the concept of the manifold, but in a continuous one it must come from something else. Therefore, either the reality underlying Space must form a discrete manifold, or the basis for the metric relations must be sought outside it, in binding forces acting upon it.  
An answer to these questions can be found only by starting from that conception of phenomena which has hitherto been approved by experience, for which Newton laid the foundation, and gradually modifying it under the compulsion of facts which cannot be explained by it. Investigations like the one just made, which begin from general concepts, can serve only to insure that this work is not hindered by too restricted concepts, and that progress in comprehending the connection of things is not obstructed by traditional prejudices.  
This leads us away into the domain of another science, the realm of physics...”.
(Submitted on 23 Nov 2000)
to be continued...
Updated on November 29 2015
Distilling a quantum mechanical analog from the harvest of particle physics

Alain Connes and Matilde Marcolli
At the time Dyson was writing about these "missed opportunities" (1972) the Standard Model of particle physics had not yet been completed. I wonder what his thoughts are, or could be, about the spectral noncommutative geometric program as an endeavor to fulfill the requirements set out in the roadmap he formulated forty-three years ago...


//Last update on December 1
Last words to Dirac as a safeguard
We ... heard an anecdote from Roger Penrose, in response to the first question from the audience, which was along the lines of 'which came first, quantum mechanics or general relativity?'. Penrose replied by telling of a time he had listened to a wonderfully animated lecture by John Wheeler, and at the end there came a similar question from the audience: which came first, general relativity or the quantum principle? Penrose said that a small voice in the front of the audience piped up and asked 'what is the quantum principle?' The small voice belonged to Dirac.


Quantization is still a mystery indeed...


Saturday, November 21, 2015

A particularly significant void

Thank you very much, Matt Strassler, for having shared with us your expertise, thoughts, time and passion in writing about physics! Looking forward to reading from your blog again soon.

Matt has done fantastic work shedding light on the intricacies of high energy physics, focusing particularly on the Higgs boson discovery of course and on what could come next. He has been very generous in giving time to answer the many questions of his numerous readers. Exactly two years ago he offered me his answer and commentary on a question I asked on his blog Of Particular Significance. Since he has closed his comment section, is now employed outside of science and has announced that "other bloggers will have to tell th[e] tales", I have decided to write these supportive words here.

Wednesday, November 18, 2015

If a contemporary physicist could calculate with the same genius and luck as Einstein...

... then the Higgs naturalness problem would have to capitulate in the face of the proper quantum geometric model, and at the same time the cosmological constant would have to offer its excuses for the fact that it does not weigh.
The blogger, waiting for the physical applications of the spectral geometric quantization scheme proposed by Chamseddine, Connes and Mukhanov, celebrates today the 100th anniversary of Einstein's first two post-Newtonian results (the precession of the perihelion of Mercury and the bending of light), inspired by the following quotation:
If I could calculate as quickly as you, then the electron would have to capitulate in the face of my equations and at the same time the hydrogen atom would have to offer its excuses for the fact that it does not radiate. 
Hilbert writes to Einstein on November 19, 1915 to congratulate him for having mastered the perihelion problem

Give the neo-Archimedes a non-minimal coupling to spacetime curvature and he could...

...  protect the Higgs from quantum fluctuations...
The present paper will point out an exception to {the} inevitable destabilization {of the electroweak scale by additive power-law quantum corrections [4]} noting that the Higgs field, being a doublet of fundamental scalars, necessarily develops the non-minimal Higgs-curvature interaction [7]
 ∆V(H, R) = ζRH†H   ...
with which the Higgs vacuum expectation value (VEV) {v^2 = −mH^2/λH} changes to
v^2 = (−mH^2 − 4ζV0/MPl^2) / (λH + ζmH^2/MPl^2)  ...
and this new VEV can be stabilized by fine-tuning ζ to counterbalance the quadratic divergences δmH^2 ∝ ΛUV^2 with the quartic divergences δV0 ∝ ΛUV^4. Quantum corrections to the SM parameters are independent of ζ if gravity is classical, and thus ζ acts as a gyroscope that stabilizes the electroweak scale against violent UV contributions. This novel fine-tuning scheme is in accord with Sakharov’s induced gravity approach, and continues to hold also in extensions of the SM involving extra Higgs fields (additional Higgs doublets or singlet scalars or scalar multiplets belonging to larger gauge groups)...

 The workings of the fine-tuning ... are best exemplified by the special value of ζ
 ζ0 = 1/(nF−nB) × (6ht^2 − 6λH − 3gY^2/4 − 9g2^2/4) × (MPl^2/ΛUV^2)  ...
{where ht is the top quark Yukawa coupling, gY (g2) is the hypercharge (isospin) gauge coupling, and nF (nB) is the total number of fermions (bosons) in the SM. The}... numerical value is ζ0 ≈ 1/15 for ΛUV ≈ MPl. It is smaller than the conformal value 1/6 [7] and much, much smaller than the Higgs inflation value ∼10^4 [11]. As a function of ΛUV, ζ0 completely eradicates the power-law UV contribution ... and the concealed logarithmic corrections give the usual renormalization properties of the Higgs VEV. Obviously, the smaller ΛUV the larger ζ0, though there remains less and less need for fine-tuning as ΛUV gets closer and closer to the Fermi scale...
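{blogger's comment: one can check the quoted ballpark numerically with rough weak-scale SM inputs; these are tree-level couplings and the standard SM degree-of-freedom count, whereas a real analysis would evaluate everything at a consistent renormalization scale, so factor-of-order-one agreement is all one should expect:}

```python
# zeta_0 = (6 h_t^2 - 6 lambda_H - 3 g_Y^2/4 - 9 g_2^2/4) / (n_F - n_B)
#          * (M_Pl / Lambda_UV)^2, here evaluated for Lambda_UV ~ M_Pl.

h_t, lam_H = 0.94, 0.13   # approximate top Yukawa and Higgs quartic couplings
g_Y, g_2 = 0.36, 0.65     # hypercharge and isospin gauge couplings
n_F, n_B = 90, 28         # SM fermionic / bosonic degrees of freedom

zeta_0 = (6*h_t**2 - 6*lam_H - 3*g_Y**2/4 - 9*g_2**2/4) / (n_F - n_B)
print(f"zeta_0 ~ {zeta_0:.3f} ~ 1/{1/zeta_0:.0f}")   # ~ 1/18, vs 1/15 quoted
```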
To discuss further, we state that ζ fine-tuning can have a variety of implications for model building and phenomenology. Below we highlight some of them briefly: ...
  • The matter sector does not have to be precisely the SM. The fine-tuning mechanism here works also in extensions of the SM which include extra scalar fields, provided that each scalar assumes a non-minimal coupling to curvature... The scalar fields can be additional Higgs doublets, singlet scalars or multiplets of scalars belonging to larger gauge groups. The VEV of each scalar is of the form in (4), and can be fine-tuned individually without interfering with the VEVs of the remaining scalars.
  • The classical gravity assumption in the present work can be lifted to include quantum gravitational effects [22]. In this case, non-minimal coupling spreads into the SM parameters through graviton loops. Moreover, this quantum gravitational setup is inherently non-renormalizable [15]. These factors can obscure the process of fine-tuning ζ.
  • There have been various attempts [23] to nullify the quadratic divergence in Higgs VEV by introducing singlet scalars. This is now known to be not possible at all, even when vector-like fermions are included [24]. Nevertheless, non-minimal coupling between curvature scalar and some scalar fields can help stabilize both electroweak and hidden scales ... and then masses of the particles in the SM and hidden sector get automatically stabilized.
(Submitted on 1 May 2014 (v1), last revised 10 May 2014 (this version, v2))


... and connect the electroweak scale with the cosmological inflation one
Recently, it has been shown by Demir [10] that the one-loop quadratic divergences can be suppressed completely if the Higgs coupling to spacetime curvature is finely tuned. The most interesting aspect of this fine-tuning is that it is phantasmal if gravity is classical. The reason for this phantom behaviour is that the Higgs-curvature coupling does not appear in quantum corrections to the SM parameters. Moreover, particle masses are sensitive only to the Higgs vacuum expectation value (VEV); they are completely immune to whatever mechanism has set the Higgs VEV to that specific value appropriate for electroweak interactions. In this sense, one is able to stabilize the Higgs boson through a “soft fine-tuning” that does not interfere with the workings of the SM [10] (see Refs. [11, 12] for quantum corrections in curved backgrounds).
In the present work, we discuss the implications of the quartic divergences. More specifically, we show that the quartic divergences induce an enormous vacuum energy which can inflate the Universe. The scale of inflation sets the UV scale and determines the degree of soft fine-tuning. In fact, quartic contributions give the plateau section of the slow-roll inflaton potential and fully govern the inflationary epoch for parameter ranges preferred by the softly fine-tuned Higgs mass. As a matter of fact, an analysis of the inflationary phase is rather timely, since recent measurements of the tensor-to-scalar ratio r in CMB polarization have the potential to fix the scale of inflation...
This paper exposes one such model, in which inflationary dynamics and electroweak stability are directly correlated. Extended SM scenarios keeping the Higgs vacuum stable while yielding successful high-scale inflation exist in the literature, incorporating either an additional U(1)B−L symmetry [22], a non-minimal coupling of the Higgs kinetic term with the Higgs field [23], with the Einstein tensor [24], or with both the Higgs field and the Einstein tensor [25]...
As a matter of fact, quantum corrections shift the scalar curvature by δR = (ΛUV^4/MPl^2) × (nF−nB)/(4π)^2... after neglecting subleading quadratic and logarithmic corrections... This curvature correction, proportional to nF−nB, takes enormous values if there is no fermion-boson degeneracy in the theory. The SM exhibits no such symmetry... the background de Sitter spacetime is found to have the Hubble constant H^2 = δR/12... This gives numerically H ≈ 0.18 ΛUV^2/MPl ... for the SM spectrum, where the Hubble parameter acts like a moment arm in a see-saw, creating a balance between ΛUV and MPl. CMB observations such as BICEP2 and Planck can measure H when the foreground is small. This then fixes ΛUV directly. For H = 10^16 GeV, as reported by BICEP2, one finds ΛUV = 3.7×10^17 GeV. This means that a measurement of the Hubble constant H determines the upper validity limit ΛUV of the SM and, in general, the smaller H, the smaller ΛUV. The ongoing and upcoming experiments on CMB polarization place upper limits on the tensor-to-scalar ratio r...
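{blogger's comment: both quoted numbers follow from the two displayed relations and the SM degree-of-freedom count alone; a minimal check, assuming the reduced Planck mass MPl = 2.4×10^18 GeV:}

```python
import math

# H^2 = delta_R/12 with delta_R = (Lambda_UV^4/M_Pl^2) * (n_F - n_B)/(4 pi)^2
n_F, n_B = 90, 28
M_PL = 2.4e18   # reduced Planck mass [GeV] (assumed)

coeff = math.sqrt((n_F - n_B) / (12.0 * (4 * math.pi) ** 2))
print(f"H ~ {coeff:.2f} * Lambda_UV^2 / M_Pl")   # -> 0.18, as quoted

H = 1.0e16      # inflation scale reported by BICEP2 [GeV]
lam_uv = math.sqrt(H * M_PL / coeff)
print(f"Lambda_UV ~ {lam_uv:.1e} GeV")           # -> ~3.6e17 GeV
```

(the small difference from the quoted 3.7×10^17 GeV is just rounding of the inputs).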
In consequence, quartic quantum corrections from matter loops inflate the Universe with a Hubble constant determined by the UV scale ΛUV. The flatness, homogeneity and isotropy of the observable Universe can be understood by some 60 e-foldings in a rather short time interval. The crucial question concerns the exit of the Universe from this exponential expansion phase. This is not possible with a constant vacuum energy. The resolution comes from the fact that the vacuum energy does actually change in time, due to phase transitions occurring as the Universe expands. In fact, a decaying cosmological constant was proposed decades ago by Dolgov [28]. In this sense, as an inherent assumption in inflationary cosmology, the vacuum energy can be described as the energy density of a slowly-varying real scalar field. It could be modelled in various ways, and slow-roll of the scalar field along the model potential can give a graceful exit from the inflationary epoch, such that the vacuum energy at the beginning has effectively decayed into matter and radiation during reheating.
Given the regularized action in Ref. [10], there arise {two viable}... scenarios to be considered as emerging from a sliding cutoff scale: ... {one is} chaotic inflation with non-minimal coupling ζφ = 1/[6(4π)^2 √(nF−nB)]... previously studied in Refs. [30, 31], where it is found that slow-roll conditions ... cannot be realized unless ζφ ≤ 10^-3. As ζφ ≈ 10^-4 in our scenario, inflationary dynamics can be successfully driven by the inflaton field...
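{blogger's comment: the same field count fixes the non-minimal coupling quoted just above; a one-line check of the reconstructed formula:}

```python
import math

# zeta_phi = 1 / [6 (4 pi)^2 sqrt(n_F - n_B)] with the SM field content
n_F, n_B = 90, 28
zeta_phi = 1.0 / (6 * (4 * math.pi) ** 2 * math.sqrt(n_F - n_B))
print(f"zeta_phi ~ {zeta_phi:.1e}")   # ~ 1.3e-4, i.e. the ~1e-4 in the text
```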

(Submitted on 29 Oct 2015)

Sunday, November 8, 2015

One SM gauge singlet scalar for a non-minimal quartic or quadratic inflaton? Ask COrE+, PIXIE and LiteBIRD!

//This post has been updated and some figures corrected on 10 November 2015

Is an inflaton non-minimally coupled with gravity the simplest inflation model compatible with experiment?
... we briefly review and update the results of five closely related, well motivated and previously studied inflationary models which are consistent with values of the tensor-to-scalar ratio r around 0.05, a signal level which will soon be probed. The first two models employ the very well known quadratic (φ^2) and quartic (φ^4) potentials [5], supplemented in our case by additional couplings of the inflaton φ to fermions and/or scalars {such as (1/2)hφNN or (1/2)g^2φ^2χ^2 to a Majorana fermion N and a scalar χ, respectively}, so that reheating becomes possible. These new interactions have previously been shown [6, 7, 8] to significantly modify the predictions for the scalar spectral index ns and r relative to those obtained in the absence of these new interactions. The next two models exploit respectively the Higgs potential [7–11] and the Coleman-Weinberg potential [10, 12, 13, 14]. With the SM electroweak symmetry presumably broken by a Higgs potential, it seems natural to think that nature may have utilized the latter (or the closely related Coleman-Weinberg potential) to also implement inflation, albeit with an SM singlet scalar field. Finally, we consider a class of models [15, 16] which invokes a quartic potential for the inflaton field, supplemented by an additional non-minimal coupling of the inflaton field to gravity [8, 17]. Our results show that the predictions for ns and r from these models are generally in good agreement with the BICEP2, Planck and WMAP9 measurements, except for the radiatively corrected quartic potential, which is ruled out by the current data. We display the range of r values allowed in these models that are consistent with ns being close to 0.96. Finally, we present the predictions for {the running of the scalar spectral index α = |dns/dlnk|}, which turn out to be of order 10^-4–10^-3.
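{blogger's comment: the “textbook” quadratic and quartic points shown in the figures below follow from the standard slow-roll formulas for a monomial potential V ∝ φ^n, namely ns = 1 − (n+2)/(2N), r = 4n/N and α ≈ (n+2)/(2N^2), with the |dns/dlnk| sign convention used above; a short script evaluating them for the e-folding choices used in the plots:}

```python
# Tree-level slow-roll predictions for V ~ phi^n (no radiative corrections).

def monomial_predictions(n, N):
    n_s = 1 - (n + 2) / (2 * N)      # scalar spectral index
    r = 4 * n / N                    # tensor-to-scalar ratio
    alpha = (n + 2) / (2 * N ** 2)   # |running| of n_s
    return n_s, r, alpha

for n in (2, 4):            # quadratic and quartic potentials
    for N in (50, 60):      # number of e-foldings
        n_s, r, alpha = monomial_predictions(n, N)
        print(f"phi^{n}, N={N}: n_s = {n_s:.3f}, r = {r:.3f}, alpha ~ {alpha:.0e}")
```

The output makes it plain why the quartic potential (r ≈ 0.27) is excluded while the quadratic one sits at the edge of the allowed region, and it reproduces the quoted 10^-4–10^-3 order of magnitude for the running.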
Radiatively corrected φ2 potential:
ns vs. r (left panel) and ns vs. α (right panel) for various values of the radiative-correction parameter κ {a combination of the new couplings, ∝ 2h^4 − g^4}, along with the ns vs. r contours (at the confidence levels of 68% and 95%) given by the Planck collaboration (Planck TT+lowP) [3]. The black points and triangles are the predictions of the textbook quartic and quadratic potential models, respectively. The dashed portions are for κ < 0 {with suppressed r for larger |κ|}. The number of e-foldings is taken as 50 (left curves) and 60 (right curves). {blogger's comment: the r = 0.028 value indicated by the cross and the smallest ns vs. r contour are updates thanks to BICEP2/Keck data through the end of the 2014 season, including new 95 GHz maps [arxiv.org/abs/1510.09217]}


Radiatively corrected φ4 potential:...



Higgs potential:
...The dashed portions are for the initial inflaton VEV larger than its VEV at the potential minimum.




Coleman-Weinberg potential: 

...The dashed portions are for the initial inflaton VEV larger than its VEV at the potential minimum.


φ4 potential with non-minimal gravitational coupling: 
... for various ξ values along each curve, from 0 (top left) to ξ ≫ 1 (bottom right)

(Submitted on 25 Mar 2014 (v1), last revised 13 May 2015 (this version, v4))


Getting the mass of a big boson from a testable inflation model?
... the first attempts to make a joint analysis of Planck and BICEP2 data have been presented [11, 13], concluding that quadratic chaotic inflation (CI) is disfavored at more than 95% confidence level... it was shown several years ago [14] that a quadratic (or quartic) potential can, at best, function as an approximation within a more realistic inflationary cosmology. The end of CI is followed by a reheating phase which is implemented through couplings involving the inflaton and some additional suitably selected fields. The presence of these additional couplings can significantly modify, through radiative corrections (RCs), the tree-level inflationary potential. For instance, for a quadratic potential supplemented by a coupling of the inflaton field to, say, right-handed neutrinos, r can be reduced to values close to 0.05 [5] or so, at the cost of a (less efficient) reduction of ns, though. In this paper we briefly review this idea, taking into account the recent refinements of Ref. [15], according to which an unavoidable dependence of the results on the renormalization scale arises.

Another mechanism for reducing r at an acceptable level within models of quadratic CI is the introduction of a strong, linear non-minimal coupling of the inflaton to gravity [16,17]. The aforementioned mechanism, that we mainly pursue here, can be applied either within a supersymmetric (SUSY) [16] or a non-SUSY [17] framework. The resulting inflationary scenario, named non-minimal CI (nMI), belongs to a class of universal “attractor” models [18], in which an appropriate choice of the non-minimal coupling to gravity suitably flattens the inflationary potential, such that r is heavily reduced but ns stays close to the currently preferred value of 0.96. 
In this work we reexamine the realization of nMI based on the quadratic potential, implementing the following improvements:
• As regards the non-SUSY case, we also consider RCs to the tree-level potential which arise due to Yukawa interactions of the inflaton – cf. Refs. [20, 21]. We show that the presence of RCs can affect the ns values of nMI – in contrast to minimal CI, where RCs influence both ns and r. For subplanckian values of the inflaton field, though, r remains well suppressed and may be observable only in the next generation of experiments such as COrE+ [22], PIXIE [23] and LiteBIRD [24], which may bring the sensitivity down to 10^-3. [For ns = 0.96 the model favors a fermionic coupling of the inflaton with strength in the range (0.01−3.5) and predicts r ≃ 0.003 and inflaton mass m ≃ 3×10^13 GeV.]
• As regards the SUSY case, following Ref. [25], we generalize the embedding of the model in SUGRA allowing for a variation of the numerical prefactor encountered in the adopted Kähler potential... 

We finally show that, in both of the above cases, the ultraviolet (UV) cut-off scale [28, 29] of the theory can be identified with the Planck scale and, thus, concerns regarding the naturalness of this kind of nMI can be safely evaded. It is worth emphasizing that this nice feature of these models was only recently noticed in Ref. [30] and was not recognized in the original papers [16, 17].



(Submitted on 11 Dec 2014 (v1), last revised 27 Mar 2015 (this version, v3))


 Let us wait for future cosmological probes
The value of the tensor-to-scalar ratio r in the region allowed by the latest Planck 2015 measurements can be associated with a large variety of inflationary models. We discuss here the potential of future Cosmic Microwave Background cosmological observations in disentangling among the possible theoretical scenarios allowed by our analyses of current Planck temperature and polarization data. Rather than focusing only on r, we focus as well on the running of the primordial power spectrum, αs, and the running thereof, βs. Our Fisher matrix method benefits from a detailed and realistic appraisal of the expected foregrounds. Future cosmological probes, such as the COrE mission, may be able to reach an unprecedented accuracy in the extraction of βs and rule out the most favoured inflationary models...


Notice, from the {figure above}, that the trajectories in the (ns, r) plane for the non-minimally coupled case (ξRφ^2) always start at the point corresponding to the φ^2 model predictions; then, as the coupling ξ takes positive values, the tensor contribution is reduced and the scalar spectral index ns is pushed below scale invariance, see Ref. [24].


(Submitted on 17 Sep 2015)


The first model of inflation, proposed by Starobinsky [1], is based on a conformal anomaly in quantum gravity. The Lagrangian density f(R) = R + R^2/(6M^2), where R is the Ricci scalar and M is a mass scale of the order of 10^13 GeV, can lead to a sufficient amount of inflation with successful reheating [7]. Moreover, the Starobinsky model is favored by the first-year Planck observations [6]. The “old inflation” [2], which is based on the theory of supercooling during a cosmological phase transition, turned out to be unviable, because the Universe becomes inhomogeneous as a result of bubble collisions after inflation. The revised version dubbed “new inflation” [8, 9], where a second-order transition to the true vacuum is responsible for cosmic acceleration, is plagued by a fine-tuning problem of spending enough time in the false vacuum. However, these pioneering ideas opened up a new paradigm for the construction of workable inflationary models based on theories beyond the Standard Model of particle physics (see e.g., Refs. [10–13]). Most inflationary models, including chaotic inflation [14], are based on a slowly rolling scalar field with a sufficiently flat potential. One can discriminate between a host of inflaton potentials by comparing theoretical predictions of the scalar spectral index ns and the tensor-to-scalar ratio r with the CMB temperature anisotropies (see, e.g., [15–18])...

In Starobinsky inflation the scalar spectral index and the tensor-to-scalar ratio are given by ns = 1 − 2/N and r = 12/N^2 respectively, in which case the model is well within the 68% CL region. In Higgs inflation, described by the potential V(φ) = (λ4/4)(φ^2 − v^2)^2 (v ∼ 10^2 GeV), the presence of non-minimal couplings −ξφ^2R/2 with |ξ| ≫ 1 gives rise to an Einstein-frame potential similar to that of Starobinsky inflation, so that ns and r are the same in both models as long as quantum corrections to the tree-level Higgs potential are suppressed. It is possible to realize a self-coupling λ4 of the order of 0.1 at the expense of having a large negative non-minimal coupling ξ ∼ −10^4...
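{blogger's comment: plugging in the e-folding numbers makes the statement explicit; a two-line check of the attractor point shared by the Starobinsky and non-minimal Higgs models:}

```python
# Starobinsky / non-minimal Higgs attractor: n_s = 1 - 2/N, r = 12/N^2
for N in (50, 60):
    n_s, r = 1 - 2 / N, 12 / N ** 2
    print(f"N = {N}: n_s = {n_s:.3f}, r = {r:.4f}")
# N = 60 gives n_s ~ 0.967 and r ~ 0.0033, comfortably inside the 68% CL region.
```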
It is expected that future observations of CMB polarization such as LiteBIRD will provide further tight constraints on the amplitude of gravitational waves. We hope that we can approach the best model of inflation in the foreseeable future.

(Submitted on 19 Jan 2014 (v1), last revised 12 Jun 2014 (this version, v2))