mardi 30 juin 2015

The Good left-right symmetry, the bad correlated statistical fluke and the ugly systematic error

The Good
I reported almost one year ago a putative sign of new physics beyond the Standard Model (SM) at the TeV scale. Since then a few other connected anomalies have shown up, providing some momentum for phenomenological investigations. The physicist and excellent blogger Jon Butterworth recently spoke of a definite, intriguing and exciting case of “watch this space”. In this post I would like to share some readings that provide an overview of the current situation:

Recently... a few deviations from the SM predictions have been reported by the ATLAS and CMS Collaborations in invariant mass distributions near 2 TeV:
  • 1) a 3.4σ excess at ∼2 TeV in the ATLAS search [1] for a W' boson decaying into WZ→JJ, where J stands for a wide jet formed by the two nearly collinear jets produced in the decays of a boosted W or Z boson. The mass range with significance above 2σ is ∼1.9–2.1 TeV; the global significance is 2.5σ. A CMS search [2] for JJ resonances, without distinguishing between the W- and Z-tagged jets, has a 1.4σ excess at ∼1.9 TeV. 
  • 2) a 2.8σ excess in the 1.8–2.2 TeV bin in the CMS search [3] for a W' and a heavy “right-handed” neutrino, NR, through the W'→NRe→eejj process. 
  • 3) a 2.2σ excess in the 1.8–1.9 TeV bin in the CMS search [4] for W'→Wh0, where the SM Higgs boson, h0, is highly boosted and decays into bb̅, while W→ℓν. 
  • 4) a ∼2σ excess at ∼1.8 TeV in the CMS dijet resonance search [5]. The ATLAS search [6] in the same channel has yielded only a 1σ excess at 1.8 TeV. 

Although none of these deviations is significant enough to indicate a new phenomenon, it behooves us to inquire whether a self-consistent theory may explain all of them. Here we construct a renormalizable theory that explains quantitatively these deviations, and derive its predictions for signals that can be probed in Run 2 of the LHC. 

The deviations showed up in searches for a W' boson but there are several theoretical and experimental hurdles that need to be overcome before a particle of mass near 2 TeV can be inferred. The eejj excess suggests that the W' boson couples to the right-handed quarks, as in the popular left-right symmetric models [7]. However, those models predict that NR has a Majorana mass, so that the number of events with same-sign lepton pairs should be approximately equal to that for opposite-sign lepton pairs [8]. This is ruled out by the CMS excess, which consists almost entirely of e+e- pairs. Thus, we need to modify the left-right symmetric models in order to allow a TeV-scale Dirac mass for NR (alternatively, NR’s with CP violating mixing must be highly {non}* degenerate [9]). 
 Another issue is that all gauge extensions of the SM that include a W' also include a Z' boson. If that Z' couples to the SM leptons, as in left-right symmetric models, then the dilepton resonance searches force the Z' to be significantly heavier than the W'. This constrains the extended Higgs sector responsible for their masses...  
The W' considered here does not directly couple to left-handed leptons, implying highly suppressed W' decays into SM ν pairs (due to the small W−W' mixing). In order to fit the CMS eejj excess, and to avoid large flavor-changing effects, we assume W' coupling to leptons... with the heavy right-handed neutrinos (NeR,NµR,NτR) being part of three vector-like fermions with Dirac masses. Since the CMS µµjj search [3] has not yielded deviations from the SM, the Nµ mass must satisfy mNµ >MW'  ... 
In order for NτR to acquire a Dirac mass we introduce a vector-like fermion ψ=(ψNτ)T transforming as (2,+1) under SU(2)R×U(1)B-L. Its ψNL component can become the Dirac partner of NτR. To see that, let us first describe a simple Higgs sector: an SU(2)R triplet scalar T breaks SU(2)R×U(1)B-L to U(1)Y giving the bulk of MW' and MZ', and a bi-doublet scalar Σ breaks SU(2)L×U(1)Y→U(1)Q inducing a small mixing between the charged gauge bosons. For MW' ≫ MW, Σ consists of two SU(2) Higgs doublets, which break the electroweak symmetry. The SM Higgs does not mix with other scalars in the alignment limit, and the other charged and neutral scalars could be at the TeV scale as they are also charged under SU(2)R. 
Conclusions.—The W' model presented here appears to be a viable description of the small mass peaks near 2 TeV observed in at least five channels at the LHC. Definitive tests of this model will be performed in several W' decay channels in Run 2 of the LHC. Assuming an SU(2)L×SU(2)R×U(1)B-L gauge origin of the W' , we predict the existence of a Z' boson of mass below 4.5 TeV with production rates shown in Fig.2 {below}. Our renormalizable theory includes Dirac masses for the right-handed neutrinos.

(Submitted on 22 Jun 2015 (v1), last revised 25 Jun 2015 (this version, v2))

* I propose the {non} correction above to a statement that seems to me wrong according to my reading of the quoted work.


The Bad
The statistical significance of the signal for new physics is still pretty low. Here is a brief reminder about the general issue of nσ claims by an expert in the field:

It has become the convention in Particle Physics that in order to claim a discovery of some form of New Physics, the chance of just the background having a statistical fluctuation at least as large as the observed effect is equivalent to the area beyond 5σ in one tail of a normalised Gaussian distribution, or smaller...
The traditional arguments for the 5σ criterion are: 
  • History: In the past there have been many ‘phenomena’ that corresponded to 3 or 4σ effects that have gone away when more data were collected. The 5σ criterion is designed to reduce the number of such false claims...
  • Look Elsewhere Effect (LEE): The significance of an interesting peak in a mass spectrum is defined in terms of the probability of a background fluctuation producing an effect at least as large as the one actually seen. If the background fluctuation is required to be at the observed mass, this probability is called the local p-value. If, however, the restriction on the location of the background fluctuation is removed, this is instead the global p-value, and is larger than the local one because of the LEE...
  • Subconscious Bayes’ Factor: When searching for a discovery, the data statistic that is used to discriminate between just background (known as the null hypothesis H0) and ‘background plus signal’ (H1) is often the likelihood ratio L1/L0 for the two hypotheses; and the 5σ criterion is applied to the observed value of this ratio, as compared with its expected distribution assuming just background. However, more relevant to the discovery claim is the ratio of probabilities for the two hypotheses... The above argument is clearly a Bayesian application of Bayes’ Theorem, while analyses in Particle Physics usually have a more frequentist flavour. Nevertheless, this type of reasoning does and should play a role in requiring a high standard of evidence before we reject well-established theories. There is sense to the oft-quoted maxim ‘Extraordinary claims require extraordinary evidence’...
  • Systematics: It is in general more difficult to estimate systematic uncertainties than statistical ones. Thus a nσ effect in an analysis where the statistical errors are dominant may be more convincing than one where the nσ claim is dominated by systematic errors. In the latter case, a 5σ claim should be reduced to merely 2.5σ if the systematic uncertainties had been underestimated by a factor of 2; this corresponds to the p-value increasing from 3×10-7 by a dramatic factor of 2×104 {see the numerical check below}. The current 5σ criterion is partially motivated by a cautious approach to somewhat vague estimates of systematic uncertainties...
There are several reasons why it is not sensible to use a uniform criterion of 5σ for all searches for new physics. These include most of the features that we included as supporting the use of the 5σ criterion. 
The look-elsewhere effect, for instance, can vary enormously from search to search. Some experiments are specifically designed to measure one parameter, which is sensitive to new physics, while others use general purpose detectors which can produce a whole range of potentially interesting results, and hence have a larger danger of a statistical fluctuation somewhere in the background. An example of an experiment with an enormous LEE is the search for gravitational waves; these can have a variety of different signatures, occur at any time and with a wide range of frequencies, durations, etc...
(Submitted on 4 Oct 2013)
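As an aside, the p-value arithmetic in the systematics bullet above is easy to check with a one-tailed Gaussian. A minimal sketch of my own using scipy (nothing here comes from the quoted paper):

```python
from scipy.stats import norm

def one_tailed_p(n_sigma):
    """Area beyond n_sigma in one tail of a normalised Gaussian."""
    return norm.sf(n_sigma)

p_5sigma = one_tailed_p(5.0)    # ~2.9e-7, the conventional discovery threshold
p_2p5sigma = one_tailed_p(2.5)  # ~6.2e-3, what 5 sigma becomes if the systematic
                                # uncertainty was underestimated by a factor of 2
print(p_5sigma, p_2p5sigma, p_2p5sigma / p_5sigma)  # the ratio is indeed ~2e4
```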

Of course the former article does not deal with the specific issue of how many σ should be required for the discovery of W', Z' gauge bosons or NR right-handed neutrinos. Insofar as these particles are simply embedded in a pretty minimal gauge extension of the standard model, without leaving the non-susy renormalizable relativistic quantum field theory on 4D Minkowski space-time - the successful paradigm of high energy physics for fifty years - one can expect the usual 5σ rule to apply... 


The Ugly
To finish, let's have a look at an intriguing possible systematic error that could jeopardize the exciting interpretation of some of these reported LHC Run 1 anomalies:

The ATLAS Collaboration has recently observed a localised excess in the invariant mass distribution of pairs of fat jets, hereafter denoted by J, around mJJ≃2 TeV [1]. Fat jets can be produced in the hadronic decay of boosted bosons V=W,Z, where the two quarks from the boson decay merge into a single jet. Using jet substructure analyses, the fat jets are tagged as resulting from a boson decay. In addition, in ref. [1] the jets J are identified as W or Z bosons if the jet mass mJ satisfies |mJ−MW | ≤ 13 GeV or |mJ−MZ | ≤ 13 GeV. The excess in the mJJ spectrum appears for W Z, ZZ and WW selections, with statistical significances of 3.4 σ, 2.9 σ and 2.6 σ, respectively [Notice that J can be simultaneously tagged as W and Z with these criteria, as the mass windows for W and Z tagging partially overlap. This indicates, in particular, that a W Z signal can yield significant excesses in the WW and ZZ selections too]. These three channels are not independent and some events fall into two or even all three of the above categories. A statistical combination of the three channels must take this fact into account, and has not yet been performed.
... it is very unlikely that [HH, WH or ZH resonance signals] could contribute significantly to the JJ excess with a fat dijet selection optimised for VV production [V=W or Z] . Additionally, one expects relations between VV and VH decay fractions of heavy resonances in definite models [13]. All this overwhelming set of related SM-like measurements has motivated the caution by the ATLAS Collaboration regarding this excess, but it has not discouraged early interpretations as new diboson resonances of technicolour models [14,15]. While ref. [14] only takes into account the limit on the production of WZ resonances from the fully leptonic channel (the weakest one), ref. [15] attributes the tension among the searches in different W, Z decay channels to statistics. Other W′/Z′ interpretations [16] only focus on the JJ excess overlooking the null results obtained in the other decay modes of the gauge boson pair 
Statistical fluctuations aside, experimental data seem to disfavour the possibility that the ATLAS JJ excess results from a diboson resonance. We are then led to consider that, if this excess is real, it might be due to something different that looks like a diboson peak due to the kinematical selection applied to reduce SM backgrounds. As we will show in this paper, a requirement on transverse momenta applied in ref. [1] shapes certain resonant VVX signals, with X an extra particle, making them look like a VV resonance. Such a requirement is not used by the corresponding analysis of the JJ final state by the CMS Collaboration [5], nor in the analysis of semi-leptonic final states... 
A heavy resonance decaying into two massive gauge bosons plus an extra particle might explain the peak-shaped excess in the ATLAS diboson resonance search [1] and the absence of such peaks in semi-leptonic channels [2,3,4] and in the CMS dijet analysis [5]. Simple tests of this hypothesis could be performed by removing the transverse momentum balance requirement in the ATLAS dijet analysis—which would make the excess adopt a broader shape—or, conversely, by introducing this requirement in the rest of the searches, especially in the CMS fat dijet analysis. 
Among more exotic candidates, the possibility that X is simply the Higgs boson is quite intriguing. If a WZH resonance R is produced with the above estimated cross section, a 12 fb WH signal will result when the Z boson decays invisibly. In the W leptonic decay mode the invariant mass distribution of the WH pair mℓνJ will concentrate around MR, since the invisible Z still contributes to mℓνJ . For the hadronic channel there are two possibilities that correspond to the two topologies in figure 4 {below}. For the cascade decay R→YZ→WZH, the WH invariant mass mJJ  will peak at the Y mass MY < MR, while for R→YW→WZH, mJJ will be broadly distributed below MR... Therefore, for the topology in figure 4 (b) {below}, a peak should manifest in the WH invariant mass distribution in the semi-leptonic channel but not in the fully hadronic one. This is precisely the behaviour suggested by the CMS semi-leptonic [11] and fully hadronic [10] searches for WH resonances: the former does have a 2.2 σ deviation of ∼ 20 fb at 1.8 TeV whereas the latter, more sensitive, only has an excess at the 1 σ level for this mass. Still, one should bear in mind that statistics are not enough to draw any conclusion. 
The possibly common origin of the ATLAS VV and CMS WH excesses—where the slight mass differences can be attributed to the energy resolution—certainly deserves a more detailed study of the boosted jet tagging and mass reconstruction of W ZH signals. Also, one should bear in mind another 2.8 σ excess in final states with two leptons and two jets at an invariant mass of 2 TeV [22], already interpreted as resulting from new W′ or Z′ vector bosons [23,24,25]. Provided the current excesses are confirmed in 13 TeV data, the higher statistics available will allow for exhaustive tests of the various hypotheses of new resonance production
(Submitted on 22 Jun 2015 (v1), last revised 29 Jun 2015 (this version, v2))
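A side note on the jet-tagging criteria quoted at the top of this excerpt: with the PDG masses MW ≈ 80.4 GeV and MZ ≈ 91.2 GeV, the two ±13 GeV windows do indeed overlap, as a quick back-of-the-envelope check shows (my own aside, not from the paper):

```python
MW, MZ, width = 80.4, 91.2, 13.0          # GeV; PDG masses, ATLAS tagging window

w_window = (MW - width, MW + width)       # (67.4, 93.4) GeV
z_window = (MZ - width, MZ + width)       # (78.2, 104.2) GeV
overlap = (max(w_window[0], z_window[0]), min(w_window[1], z_window[1]))

# any fat jet with 78.2 < mJ < 93.4 GeV is tagged both as a W and as a Z
print(w_window, z_window, overlap)
```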


//Last edit 1 July 2015.

lundi 29 juin 2015

Our God particle accelerators who art in Heaven...

...Give us our daily cosmic rays
Cosmic rays were discovered about one century ago by Victor Hess. Hess was awarded the Nobel prize in 1936 for his studies, but he was never able to perform a direct detection of the cosmic rays, due to the technological limitations of balloon flights and of the detectors of his time... It took several years to understand that the main component of cosmic rays is made of protons with a steeply falling flux as a function of energy. The study of cosmic radiation and its interaction with the Earth atmosphere led to the discovery of new particles and set the basis for the experimental particle physics that is carried out today at accelerators. 
The origin, acceleration and propagation mechanisms of charged particles traveling in space have been the main topics in the studies of cosmic radiation since its discovery. With the aim of solving these puzzling issues, in the 80s and 90s a massive campaign of experiments was carried out on stratospheric balloon flights and small satellites. With increasing knowledge of cosmic rays, it became clear that it is very difficult to provide a satisfactory and self-consistent global model.  
The cosmic ray all-particle spectrum is shown in figure 1 [1]. Most experiments agree, at least qualitatively, that the spectrum consists of at least three regions. At the lowest energies, from tens of MeV to tens of GeV (yellow band in figure), the particles coming from the interstellar space are deflected and influenced by the magnetic region generated by the Sun, the heliosphere. As a consequence, the observed spectrum is flattening. At higher energies, instead, the direct measurements of cosmic rays represent the interstellar flux and composition. For energies between tens of GeV and about 1015eV, the spectrum can be fit with a power law with slope ∼2.7. Due to such a steep spectrum, with current technology, a direct measurement (cyan band in figure) is possible only up to about 1015eV, the so called “knee”. Beyond the knee (purple band), the slope grows to ∼3.1. Only indirect measurements are possible by exploiting the atmosphere as a large calorimeter and by making use of ground based detectors. At the highest energies particles have energies comparable to the Greisen-Zatsepin-Kuzmin limit (GZK cutoff), which occurs at about 5×1019 eV. 

At the end of the 90s, experimental cosmic ray direct measurements were limited to a few hundred GeV for protons and helium nuclei (the major components of cosmic radiation) and to a few tens of GeV for antiparticles. Due to the limited statistics and to quite large systematic uncertainties, it was still not possible to answer many fundamental questions concerning cosmic rays. As a consequence, the experimental study of cosmic rays took three paths that are still being pursued. The first research line aims to push the direct measurements to the highest energies, possibly reaching the knee, in order to study sources and acceleration mechanisms. The second research line is dedicated to the study of the chemical composition of cosmic rays, measuring highly charged nuclei spectra, with the aim of understanding the source material, dust and gas, the nucleosynthesis and the propagation of cosmic rays in the interstellar medium. The third path, finally, is dedicated to the study of the rare antiparticle and anti-matter component, trying to search for signals of the elusive dark matter, set anti-matter limits and understand the matter-antimatter asymmetry in the Universe... 
(Submitted on 4 Jul 2014)

Lead us not to the wrong standard model, 


Depending on the research line, different platforms and detection techniques have been adopted. In the following, I will describe the latest missions conducted on stratospheric balloons, satellites and on the International Space Station (ISS) while discussing the main physics results obtained in recent years. I will categorize the results by type of particles and their role in the cosmic ray “standard model”. By “standard model” I refer to the overall picture summarized in the figure [below]. 
The latest generation of cosmic ray particle detectors has brought and is bringing many exciting results. Proton, helium and, possibly, highly charged nuclei spectra seem to harden at similar rigidities. Moreover, there is a strong indication that proton and helium nuclei indeed have different spectral indices. 
The measurement of antiparticles in cosmic rays has been very popular in the last years. A possible indication of dark matter detection in the positron fraction has to take into account not only the “missing signal” in the antiproton measurement but also the possibility of nearby astrophysical sources capable of accelerating electrons and positrons. 
All these measurements are challenging the cosmic ray standard model. New results from current and future experiments will probably contribute in developing a more precise description of the sources, acceleration and propagation of cosmic rays. 
Special care, however, must be taken when interpreting experimental data: thanks to larger acceptances and acquisition times, it is likely that systematic uncertainties will dominate over a large part of the detected energy window. In such cases, it is always important to carefully describe the sources of these uncertainties and their effects on the measurements. Unlike statistical errors, systematic uncertainties are estimated; they are strongly related to the experimental apparatus, and their effect can introduce not only normalization problems but also distortions in the flux measurements.
Id.
But deliver us from darkness
Recently, several groups including Daylan et al. [7], Calore et al. [8], and the Fermi Collaboration [9] re-analyzed data from the Fermi-LAT [17] and concluded that the 1–3 GeV gamma ray signal is statistically significant and appears to originate from dark matter particles annihilating rather than standard astrophysical sources. The peak in the energy distribution is broadly consistent with gamma rays originating from self-annihilation of dark matter particles [7, 18–23]. The intensity of the signal suggests a dark matter annihilation cross section at thermal freeze out [24–29]. The diffuse nature and morphology of the gamma ray excess is consistent with a Navarro-Frenk-White-like Galactic distribution of dark matter [8]. This gamma ray excess thus drew the attention of a number of particle model builders and phenomenologists [10, 14, 24, 30–32]. The conclusion that we have discovered dark matter particles, however, cannot be drawn yet. First, we have to be able to exclude the possibility of a standard astrophysical explanation. Second, we need to demonstrate that a dark matter particle that explains the gamma ray excess (with a given mass, spin, and interaction strength to the standard sector) is consistent with a large number of other observations. The latter concerns our paper. We aim to determine the microscopic properties of the dark matter particle from the gamma ray excess and check that these properties comply with limits from other experiments. We use dark matter abundance and direct detection data, measurements of the gamma ray flux from the Galactic Center, near Earth positron and anti-proton flux data, Cosmic Microwave Background (CMB) observations, and measurements of galactic radio emission as experimental constraints. 
In this work we perform a comprehensive statistical analysis of the gamma ray excess from the Galactic Center in a simplified dark matter model framework. According to our previous study, Majorana fermion dark matter interacting with standard model fermions via a scalar mediator is the most favoured explanation of the galactic center excess when characterised by Bayesian evidence. We locate the most plausible parameter regions of this theoretical hypothesis using experimental data on the dark matter abundance and direct detection interactions, the gamma ray flux from the Galactic center, near Earth positron and anti-proton fluxes, the Cosmic Microwave Background, and galactic radio emission. We find that the radio data excludes the model if we include synchrotron radiation as the only energy loss channel. Since it was shown that inclusion of other types of energy losses lifts this exclusion we discard the single radio data point from our combined likelihood [34]. The rest of the data prefers a dark matter (mediator) mass in the 10–100 (3–1000) GeV region and weakly correlated couplings to bottom quarks and tau leptons with values of 10−3–1 at the 68% credibility level.
(Submitted on 25 May 2015)

Which disunites us?
The origin of the neutrino background radiation (NBR) above 35 TeV discovered with [1] IceCube, of the sub-TeV Galactic cosmic ray (CR) positrons measured recently with [2] PAMELA, Fermi-LAT and AMS and of the sub TeV gamma ray background (GBR) measured with [3] Fermi-LAT are still among the unsolved major cosmic puzzles. High energy particle physics offers three main mechanisms, which can produce simultaneously high energy neutrinos (ν’s), gamma rays (γ’s) and positrons (e+’s): 
  • (1) meson production in hadronic collisions of high energy CRs with diffuse matter in the interstellar medium of galaxies [4] , in the intergalactic medium (IGM) of Galaxy clusters [5], or inside the cosmic ray sources [6a,6b,6c], 
  • (2) photo production of mesons in CR collisions with radiation in/near gamma ray sources [7], and 
  • (3) decay of massive dark matter particle [8] relics from the Big Bang. But, so far, no connection has been found [9a,9b] between the NBR, GBR and CR positrons, and their origin and observed properties are still unsolved cosmic puzzles [1,2,3].  
In this letter, however, using only priors and no adjustable parameters, we show that if the high energy cosmic ν’s, γ’s, and e+’s are mainly produced in hadronic collisions of the CRs inside their main accelerators [the highly relativistic jets ejected in supernova explosions and by active Galactic nuclei [12][6a,6b,6c]], then the NBR discovered ... with IceCube at energies above 35 TeV is that expected from the sub-TeV GBR measured ... with Fermi-LAT and that the Galactic GBR itself is that expected from the flux of sub-TeV CR positrons measured with AMS2. Moreover, the sky distributions of the NBR and GBR are predicted to be similar, while the predicted large spectral index of the NBR and low statistics make the Glashow resonance [10] undetectable in the current IceCube data... 
In the energy range between several GeV and PeV, the total flux of primary nucleons is well described by [15] 
Φp(E) ≈ 1.8 (E/GeV)^−β fu    (2) 
where β ≈ 2.70 and fu = (GeV cm² s sr)^−1 is the flux unit... 
When in-source CR production of mesons dominates the Galactic production of high energy γ-rays and e+’s, their fluxes, which are given by Eq. (1), are simply related by
Φγ(E) ≈ (Fγ/Fe+) × (h/(c τe)) Φe+ ,    (11)
where Fγ/Fe+ = 4.0 for βs = 1.367, and h ≈ 3.5 kpc is the typical distance from the center of the Galaxy at which the supernova remnants and pulsars, the tracers of its main CR accelerators (supernova explosions and gamma ray bursts), are located [20]...  
The precise measurements with AMS of the spectrum of the sub-TeV CR positrons [2] can be used as an additional test of their proposed origin. In a steady state, the expected Galactic flux of CR e+’s is a sum Φe+(CR)=Φe+(source)+Φe+(ISM) where the in-source flux is given by Eq. (3), and the interstellar medium (ISM) flux is  
Φe+(ISM)≈Fe+×σin×nISM ×c×τe+×Φp/(β−1) (13) 
with Fe+ ≈ 0.007 for β = 2.70, a mean ISM density in the Galactic CR halo nISM ≈ 0.05 cm−3 and Φp given by Eq. (2). In order to test the inside-source production hypothesis, Fig. 3 compares the predicted Φe+(source)+Φe+(ISM) for βs = 2.33 and the Φe+(CR) measured with AMS2. The normalization of Φe+(source) at 100 GeV was adjusted to reproduce the Galactic contribution to the GBR at 100 GeV, as given by Eq. (11). For completeness, in Fig. 3 we have included a phenomenological heliosphere modulation of the spectrum, which affects the spectrum only below 10 GeV. In order to demonstrate the possible effect of pair production in e+ collisions with ∼eV photons of a dense radiation field inside the source, we have also included a cutoff in the e+ spectrum [9a,9b], which affects its behavior only above 800 GeV. Such a cutoff is expected for cosmic accelerators such as the highly relativistic jets of supernova explosions of Type Ic, which produce the long duration highly beamed gamma ray bursts, most of which do not point in our direction [12]. As can be seen from Fig. 3, the agreement between the predicted and observed Φe+(CR) is quite satisfactory...

A critical test of the inside-source production hypothesis is whether the sky distributions of the NBR and GBR are nearly equal, which, however, requires much larger statistics than currently available from the IceCube [1].
(Submitted on 19 May 2015 (v1), last revised 28 May 2015 (this version, v3))
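Taking Eq. (2) of the quote at face value gives a feel for how steeply the primary flux falls with energy. A minimal sketch of my own (only the normalization 1.8 fu and β ≈ 2.70 quoted above are used):

```python
def phi_p(E_GeV, beta=2.70, norm=1.8):
    """Primary nucleon flux of Eq. (2), in particles / (GeV cm^2 s sr)."""
    return norm * E_GeV ** (-beta)

# the flux spans many decades between the direct (balloon/satellite) and
# indirect (air shower) measurement regimes
for E in (1e2, 1e3, 1e6, 1e9):   # GeV, i.e. 100 GeV up to 10^18 eV
    print(f"E = {E:.0e} GeV : Phi_p = {phi_p(E):.2e} per GeV cm^2 s sr")
```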

samedi 27 juin 2015

Prospects for a B-L Higgs boson discovery at LHC: wait for high luminosity Run 6 (2035 or 3000 fb-1)!

Let's call it the B(rout)-(Eng)L(ert) scalar...
It is well known that the SM cannot be the final theory of nature. The successful explanation of the hierarchy problem requires some new physics (NP) near the TeV scale. In addition, the observation of small neutrino masses and their very particular mixing indicates the presence of physics beyond the standard model (BSM)... neither ATLAS nor CMS have yet conclusively discovered any particle that serves as proof for BSM physics. Now, with the discovery of the Higgs boson, effects of new physics can be searched for in its coupling measurements [437]. In this paper, we consider the simplest manifestation of a BSM extension through an extra singlet scalar. As a first step, we would like to see how the addition of just an additional neutral Higgs boson fares with the discovery prospects at the high-luminosity run at LHC (HL-LHC) with a final integrated luminosity of 3000 fb-1. 
The presence of a heavy Higgs-like neutral scalar is innate in various models, such as the minimal supersymmetric standard model (MSSM), two Higgs doublet models (2HDMs), models with extra spatial dimensions, etc. However, the simplest among these models is the SM augmented with a gauge singlet. This can originate very naturally from a U(1)B-L model with an extra U(1) local gauge symmetry, where B and L represent the baryon number and lepton number respectively. In particular, we focus on a TeV scale B−L model, which can further be embedded in a TeV scale Left-Right symmetric model [38–41]. The B−L symmetry group is a part of a Grand Unified Theory (GUT) as described by an SO(10) group [42]. Besides, the B−L symmetry breaking scale is related to the masses of the heavy right-handed Majorana neutrinos, which participate in the celebrated seesaw mechanism [43–46] and generate the light neutrino masses. 
Another important theoretical motivation of this model is that the right-handed neutrinos, which are an essential ingredient of this model, participate in generating the baryon asymmetry of the universe via leptogenesis [47]. Hence, the B−L breaking scale is strongly linked to leptogenesis via sphaleron interactions that preserve B−L. It is important to note that in the U(1)B-L model, the symmetry breaking can take place at scales much lower than any GUT scale, e.g. the electroweak (EW) scale or TeV scale. Because the B+L symmetry is broken due to sphaleron interactions, baryogenesis or leptogenesis cannot occur above the B−L breaking scale. Hence, B−L breaking around the TeV scale naturally implies TeV scale baryogenesis. 
The presence of heavy neutrinos, a TeV scale extra neutral gauge boson and an additional heavy neutral Higgs makes the model phenomenologically rich, testable at the LHC as well as at future e+e− colliders [48,49,50,51,52,53,54,55,56]. The Majorana nature of the heavy neutrinos can be probed for example through same-sign dileptonic signatures at the LHC [57]. On the other hand, the extra gauge boson Z′ in this model interacts with SM leptons and quarks. Non-observation of an excess in dilepton and di-jet signatures by ATLAS and CMS has placed stringent constraints on the Z' mass [58–63]. 
In this work, we examine in detail the discovery prospects of the second Higgs at the HL-LHC for a TeV scale U(1)B−L model. The vacuum expectation value (vev) of the gauge singlet Higgs breaks the U(1)B-L symmetry and generates the masses of the right handed neutrinos. We consider the B−L breaking scale to be of the order of a few TeV, for which the right handed neutrino masses can naturally be in the TeV range. The physical second Higgs state mixes with the SM Higgs boson with a mixing angle θ, constrained by electroweak precision measurements from LEP [64, 65, 66], as well as from Higgs coupling measurements at LHC [67, 68]. The second Higgs is dominantly produced by gluon fusion with subsequent decay into heavy particles. The largest branching ratios are into W, Z and Higgs bosons. We discuss in detail the different channels through which the second Higgs state can be probed at the HL-LHC... 

We studied the discovery prospect of a heavy Higgs H2 in the 4l, 2l2j and ljj+ETmiss channels at the LHC (with ∫Ldt=100 fb-1) and HL-LHC (∫Ldt=3000 fb-1), where we employed a boosted decision tree to separate signal from background... 

The channel with four leptons was found to be the cleanest. The signal and background cross-sections for these processes are σS≃0.1 fb and σB≃42 pb, respectively. Using cuts on i) the invariant mass of the 4l system and of the reconstructed Z bosons, and ii) the pT of the four leptons as well as of the reconstructed Z bosons, we found that for a mass MH2≤ 500 GeV, the H2 can be discovered with a significance of ∼5σ at the HL-LHC with 3000 fb-1.
(Submitted on 21 Jun 2015)
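For orientation, the way such a significance scales with integrated luminosity in a simple counting experiment can be sketched with the standard Asimov formula. The post-selection rates below are purely illustrative placeholders of my own, not the output of the paper's boosted decision tree analysis:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for s expected signal and b background events."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# hypothetical effective cross sections after all cuts, in fb (NOT from the paper)
eff_sig_fb, eff_bkg_fb = 0.01, 0.004

for lumi_fb in (100.0, 3000.0):          # LHC vs HL-LHC integrated luminosities
    s, b = eff_sig_fb * lumi_fb, eff_bkg_fb * lumi_fb
    print(f"{lumi_fb:6.0f} fb-1 : Z = {asimov_significance(s, b):.1f} sigma")
```

The point is simply that the significance grows roughly like the square root of the luminosity, which is why a marginal signal at 100 fb-1 can become a ∼5σ discovery at 3000 fb-1.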

An LHC time schedule

Higgs boson at LHC Run 2 : our workhorse for probing effective dimension-6 operators of the Standard Model fields

The particle physicist without low-energy SUSY but with a Higgs boson is a modern Sisyphus
If one supposes, following Adam Falkowski (aka Jester), that not the slightest hint of a particular scenario beyond the Standard Model has emerged so far (I will venture to nuance this statement another day), then any good physicist has to resort to effective field theory. This is what is reported below, in a model-independent framework. 

The standard model (SM) of particle physics was proposed back in the 60s as a theory of quarks and leptons interacting via strong, weak, and electromagnetic forces [1]. It is built on the following principles:
  • #1 The basic framework is that of a relativistic quantum field theory, with interactions between particles described by a local Lagrangian. 
  • #2 The Lagrangian is invariant under the linearly realized local SU(3)×SU(2)×U(1) symmetry.
  • #3 The vacuum state of the theory preserves only SU(3)×U(1) local symmetry, as a result of the Brout-Englert-Higgs mechanism [2, 3, 4]. The spontaneous breaking of the SU(2)×U(1) symmetry down to U(1) arises due to a vacuum expectation value (VEV) of a scalar field transforming as (1, 2)1/2 under the local symmetry
  • #4 Interactions are renormalizable, which means that only interactions up to the canonical mass dimension 4 are allowed in the Lagrangian...
The SM passed an incredible number of experimental tests. It correctly describes the rates and differential distributions of particles produced in high-energy collisions; a robust deviation from the SM predictions has never been observed. It predicts very accurately many properties of elementary particles, such as the magnetic and electric dipole moments, as well as certain properties of simple enough composite particles, such as atomic energy levels. The discovery of a 125 GeV boson at the Large Hadron Collider (LHC) [89] nails down the last propagating degree of freedom predicted by the SM. Measurements of its production and decay rates vindicate the simplest realization of the Brout-Englert-Higgs mechanism, in which a VEV of a single SU(2) doublet field spontaneously breaks the electroweak symmetry. Last but not least, the SM is a consistent quantum theory, whose validity range extends to energies all the way up to the Planck scale (at which point the gravitational interactions become strong and can no longer be neglected).  
Yet we know that the SM is not the ultimate theory. It cannot account for dark matter, neutrino masses, matter/anti-matter asymmetry, and cosmic inflation, which are all experimental facts. In addition, some theoretical or esthetic arguments (the strong CP problem, flavor hierarchies, unification, the naturalness problem) suggest that the SM should be extended. This justifies the ongoing searches for new physics, that is particles or interactions not predicted by the SM. 
In spite of good arguments for the existence of new physics, a growing body of evidence suggests that, at least up to energies of a few hundred GeV, the fundamental degrees of freedom are those of the SM. Given the absence of any direct or indirect collider signal of new physics, it is reasonable to assume that new particles from beyond the SM are much heavier than the SM particles. If that is correct, physics at the weak scale can be adequately described using effective field theory (EFT) methods. 
In the EFT framework adopted here the assumptions #1 . . . #3 above continue to be valid... Thus, much as in the SM, the Lagrangian is constructed from gauge invariant operators involving the SM fermion, gauge, and Higgs fields. The difference is that the assumption #4 is dropped and interactions with arbitrary large mass dimension D are allowed. These interactions can be organized in a systematic expansion in D. The leading order term in this expansion is the SM Lagrangian with operators up to D = 4. All possible effects of heavy new physics are encoded in operators with D > 4, which are suppressed in the Lagrangian by appropriate powers of the mass scale Λ. Since all D=5 operators violate lepton number and are thus stringently constrained by experiment, the leading corrections to the Higgs observables are expected from D = 6 operators suppressed by Λ2 [14]. I will assume that the operators with D > 6 can be ignored, which is always true for v≪Λ... 
Using the dependence of the signal strength on EFT parameters worked out in Section 4 and the LHC data in Table 2 one can constrain all CP-even independent Higgs couplings in Eq. (3.2)... In the Gaussian approximation near the best fit point I find the following constraints:
 
 where the uncertainties correspond to 1σ...
The Higgs boson has been discovered, and for the remainder of this century we will study its properties. Precision measurements of Higgs couplings and determination of their tensor structure is an important part of the physics program at the LHC and future colliders... it is important to (also) perform these studies in a model-independent framework. The EFT approach described here, with the SM extended by dimension six operators, provides a perfect tool to this end. One should be aware that Higgs precision measurements cannot probe new physics at very high scales. For example, LHC Higgs measurements are sensitive to new physics at Λ∼1TeV at the most. This is not too impressive, especially compared to the new physics reach of flavor observables or even electroweak precision tests. However, Higgs physics probes a subset of operators that are often not accessible by other searches. For example, for most of the 9 parameters in Eq. (5.1) the only experimental constraints come from Higgs physics. It is certainly conceivable that new physics talks to the SM via the Higgs portal, and it will first manifest itself within this particular class of D = 6 operators. If this is the case, we must not miss it.
(Submitted on 30 Apr 2015 (v1), last revised 12 May 2015 (this version, v2))
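Schematically, the EFT expansion referred to in the quote can be written as follows (standard notation; the c_i are dimensionless Wilson coefficients and v ≈ 246 GeV is the electroweak VEV):

```latex
\mathcal{L}_{\rm eff}
  = \mathcal{L}_{\rm SM}^{(D\le 4)}
  + \frac{1}{\Lambda}\sum_i c_i^{(5)}\,\mathcal{O}_i^{(5)}
  + \frac{1}{\Lambda^{2}}\sum_i c_i^{(6)}\,\mathcal{O}_i^{(6)}
  + \mathcal{O}\!\left(\Lambda^{-3}\right)
```

The D=5 operators violate lepton number and are irrelevant for Higgs observables, so the leading corrections to Higgs couplings come from the D=6 terms and are of relative size ∼ c v²/Λ², which is why LHC Higgs data probe Λ of order 1 TeV at most.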

In the following I chose to focus on two specific model-dependent frameworks for the sake of argument.

Scalar fields are the latest hype in high-energy physics...
As a start, we assume that new physics does not violate known gauge and Lorentz symmetries in the SM so that the higher dimensional operators obtained by integrating out the heavy degrees of freedom also satisfy the same symmetries. There is only one dimension-5 operator (for one family of fermions) consistent with this, i.e., the Weinberg operator that gives rise to a Majorana mass for neutrinos [1]. This operator violates the lepton number by two units. In the case of dimension-6 operators, the original attempt to compile a complete basis [2] was later found to be redundant [3, 4, 5], leaving 64 independent operators (also for one family of fermions) [6], with five of them violating either baryon or lepton number [1, 7, 8]...
There are some attractive motivations to consider models with an extended scalar sector. For example, new scalar bosons in these models may facilitate a strong first-order phase transition for successful electroweak baryogenesis, provide Majorana mass for neutrinos, and/or have a connection with a hidden sector that houses dark matter candidates. Even though it may not be possible to directly probe this sector due to the heavy masses of new scalar bosons and/or their feeble interactions with SM particles, they can nevertheless leave imprints in some electroweak precision observables...  
Although it is widely believed that the standard model (SM) is at best a good effective theory at low energies, the fact that the observed 125-GeV Higgs boson has properties very close to that in the SM suggests that the new physics scale is high and the new degrees of freedom are likely to be in the decoupling limit. Therefore, it is useful to work out an effective field theory (EFT) in terms of operators up to dimension 6 and composed of only the SM fields...
In this paper, we have analysed the EFT of the SM Higgs field for a wide class of weakly coupled renormalizable new physics models extended by one type of scalar fields and respecting CP symmetry, concentrating on the dimension-6 operators that have corrections to the electroweak oblique parameters and current Higgs observables. We have shown that for the new scalar field of specific representations (SU(2)L singlet, doublet, and triplet), there are “accidental” interactions between the scalar and the SM Higgs fields that lead to dimension-6 operators at both tree and one-loop level. For the scalar field of a general representation under the SM gauge groups, we have pointed out that there are only two generic quartic interactions that will lead to dimension-6 operators only at one-loop level... 
(Submitted on 23 May 2015)
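For reference, the unique dimension-5 (Weinberg) operator mentioned in the quote can be written, for a lepton doublet L and the Higgs doublet H (with H̃ = iσ²H*), in the standard form below; exact factors depend on conventions:

```latex
\mathcal{L}_{5}
  = \frac{c_{5}}{\Lambda}\,
    \bigl(\overline{L^{c}}\,\tilde{H}^{*}\bigr)\bigl(\tilde{H}^{\dagger}L\bigr)
  + \text{h.c.}
  \qquad\Longrightarrow\qquad
  m_{\nu} \sim \frac{c_{5}\,v^{2}}{\Lambda}
```

It violates lepton number by two units, and sub-eV neutrino masses follow for c_5 of order one with Λ near a high seesaw scale, or for a tiny c_5 if Λ is at the TeV scale.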

The spectral action principle could make the phenomenologist's life easier as it talks naturally to scalars
The coupling constants of the three gauge interactions run with energy [1]. The ones relating to the non-abelian symmetries are relatively strong at low energy, but decrease, while the abelian interaction increases. At an energy between 10^13 and 10^17 GeV their values are very similar, around 0.52, but, in view of present data, and in the absence of new physics, they fail to meet at a single scale. Here by absence of new physics we mean extra terms in the Lagrangian of the model. The extra terms may be due for example to the presence of new particles, or new interactions. A possibility could be supersymmetric models which can alter the running and cause the presence of a unification point [2].
The standard model of particle interactions coupled with gravity may be explained to some extent as a particular form of noncommutative, or spectral, geometry, see for example [3] for a recent introduction. The principles of noncommutative geometry are rigid enough to restrict gauge groups and their fermionic representations, as well as to produce a lot of relations between bosonic couplings when applied to (almost) commutative spaces. All these restrictions and relations are surprisingly well compatible with the Standard Model, except that the Higgs field comes out too heavy, and that the unification point of gauge couplings is not exactly found. We have nothing new to say about the first problem, which has been solved in [4, 5, 6, 7, 8] with the introduction of a new scalar field σ suitably coupled to the Higgs field, but we shall address the second one.
Some years ago the data were compatible with the presence of a single unification point Λ. This was one of the motivations behind the building of grand unified theories. Such a feature is however desirable even without the presence of a larger gauge symmetry group which breaks to the standard model with the usual mechanisms. In particular, the approach to field theory based on noncommutative geometry and spectral physics [10] needs a scale to regularize the theory. In this respect, the finite mode regularization [11, 12, 13] is ideally suited. In this case Λ is also the field theory cutoff. In fact, using this regularization it is possible to generate the bosonic action starting from the fermionic one [14, 15, 16], or to describe induced gravity on an equal footing with the anomaly-induced effective action [17].
The aim of this paper is to investigate whether the presence of higher dimensional terms in the standard model action − dimension six in particular − may cause the unification of the coupling constants. The paper may be read in two contexts: as an application of the spectral action, or independently of it, from a purely phenomenological point of view. 
From the spectral point of view, the spectral action [10] is solved as a heat kernel expansion in powers of the inverse of an energy scale. The terms up to dimension four reproduce the standard model qualitatively, but the theory is valid at a scale at which the couplings are equal. The expansion gives, however, also higher dimensional terms, suppressed by powers of the scale, and depending on the details of the cutoff. This fixes relations among the coefficients of the new terms. The analysis of this paper gives the conditions under which the spectral action can predict the unification of the three gauge coupling constants...  
In this paper we have calculated the sixth order terms appearing in the spectral action Lagrangian. We have then verified that the presence of these terms, with a proper choice of the free parameters, could cause the unification of the three constants at a high energy scale. Although the motivation for this investigation lies in the spectral noncommutative geometry approach to the standard model, the result can be read independently of it, showing that if the current Lagrangian describes an effective theory valid below the unification point, then the dimension six operators would play the proper role of facilitating the unification. In order for the new terms to have an effect it is however necessary to introduce a scale of the order of the TeV, which for the spectral action results in a very large second momentum of the cutoff function.
We note that we did not require a modification of the standard model spectral triple, although such a modification, and in particular the presence of the scalar field σ, could actually improve the analysis. From the spectral action point of view the next challenge is to include the ideas currently coming from the extensions of the standard model being investigated.
(Submitted on 24 Oct 2014)
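To visualize the near miss of gauge coupling unification alluded to in the quote, here is a rough one-loop running of the inverse couplings in the plain SM (GUT-normalized hypercharge, standard one-loop coefficients, approximate inputs at MZ); this is only an illustration of the statement, not the spectral-action computation of the paper:

```python
import math

# One-loop SM beta coefficients for (g1, g2, g3), with GUT-normalized hypercharge
b = (41.0 / 10.0, -19.0 / 6.0, -7.0)
# Approximate inverse couplings alpha_i^-1 at the Z mass
alpha_inv_mz = (59.0, 29.6, 8.5)
MZ = 91.19  # GeV

def alpha_inv(i, mu_GeV):
    """One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2 pi) ln(mu/MZ)."""
    return alpha_inv_mz[i] - b[i] / (2.0 * math.pi) * math.log(mu_GeV / MZ)

for mu in (1e3, 1e13, 1e15, 1e17):   # GeV
    print(f"mu = {mu:.0e} GeV :", [round(alpha_inv(i, mu), 1) for i in range(3)])
```

The three inverse couplings come close to each other between roughly 10^13 and 10^17 GeV (pairwise crossings, corresponding to g ≈ 0.5) but do not meet at a single point, which is the mismatch the dimension-six terms are invoked to cure.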

The phenomenological requirement of new TeV scale physics is exciting for the experimentalist. Could it be related to expectations from some noncommutative geometry compatible TeV-scale Left-Right models with superconnections? Anyhow, the extremely large value required for the second momentum of the cutoff function in the spectral action functional seems awkward from the perspective of the canonically educated noncommutative geometer physicist. The future will tell whether further studies involving an effective field theory survey of the full spectral standard model with its Majorana right-handed neutrinos and real or complex SM-singlet scalar fields (not to mention a Pati-Salam extension) will show the same kind of trend or follow the orthodox motto up to high energy partial unification scales ...

last edit 27 June

vendredi 26 juin 2015

(Disentangling how) can time emerge from quantum randomness?

Connes' intuition about the emergence of time through the notion of quantum variability
Here is the first oral presentation by Alain Connes of one of his most personal philosophical ideas "backed-up by an intuition which comes from many years of work" as he says : 
At the philosophical level there is something quite satisfactory in the variability of the quantum mechanical observables. Usually when pressed to explain what is the cause of the variability in the external world, the answer that comes naturally to the mind is just : the passing of time.

But precisely the quantum world provides a more subtle answer since the reduction of the wave packet which happens in any quantum measurement is nothing else but the replacement of a “q-number” by an actual number which is chosen among the elements in its spectrum. Thus there is an intrinsic variability in the quantum world which is so far not reducible to anything classical. The results of observations are intrinsically variable quantities, and this to the point that their values cannot be reproduced from one experiment to the next, but which, when taken altogether, form a q-number.
How can time emerge from quantum variability ? As we shall see the study of subsystems as initiated by von Neumann leads to a potential answer...

IHES, Jeudi 9 avril 2015
(presentation in English)

A lecture in French on the same subject (also filmed), shorter and aimed at a broader audience, was given a little later at the Académie des sciences:
I will present a reflection on the notion of "variability", starting with the notion of a "real variable" in mathematics, then with the "variability" inherent in quantum phenomena, which precludes the reproducibility of certain experimental results while predicting the probability of the outcomes. I will then state a conjecture on the passing of time as an emergent phenomenon of thermodynamical nature, governed by a mathematical equation whose origin is the non-commutativity of the observables of quantum mechanics.
Alain CONNES
Mardi 19 mai 2015 16:30 - 17:00


Rovelli's conjecture: understanding the arrow of time as perspectival
In his presentation, Connes explains the connection between his work and Carlo Rovelli's speculations about time. Here are the most recent developments in the philosophical reflections of this theoretical physicist:
An imposing aspect of the Cosmos is the mighty daily rotation of Sun, Moon, planets, stars and all galaxies around us. Why does the Cosmos so rotate? Well, it is not really the Cosmos that rotates, it is us. The rotation of the sky is a perspectival phenomenon: we understand it better as due to the peculiarity of our own moving point of view, rather than a global feature of all celestial objects.... 
The list of conspicuous phenomena that have turned out to be perspectival is long; recognising them has been a persistent aspect of the progress of science. A vivid aspect of reality is the flow of time; more precisely: the fact that the past is different from the future. Most observed phenomena violate time reversal invariance strongly. Could this be a perspectival phenomenon as well? Here I suggest that this is a likely possibility. 
Boltzmann’s H-theorem and its modern versions show that for most microstates away from equilibrium, entropy increases in both time directions [1, 2, 3]. Why then do we observe lower entropy in the past? For this to be possible, most microstates around us must be very non-generic. This is the problem of the arrow of time, or the problem of the source of the second law of thermodynamics [3, 4]. The common solution is to believe that the universe was born in an extremely non-generic microstate [5]... 
Here I point out that there is a different possibility: past low entropy might be a perspectival phenomenon, like the rotation of the sky. 

This is possible because entropy depends on the system’s microstate but also on the coarse graining under which the system is described. In turn, the relevant coarse graining is determined by the concrete existing interactions with the system. The entropy we assign to the systems around us depends on the way we interact with them —as the apparent motion of the sky depends on our own motion. A subsystem of the universe that happens to couple to the rest of the universe via macroscopic variables determining an entropy that happens to be low in the past, is a system to which the universe appears strongly time oriented. As it appears to us. Past entropy may appear low because of our own perspective on the universe... 
Quantum phenomena provide a source of entropy distinct from the classical one generated by coarse graining: entanglement entropy. The state space of any quantum system is described by a Hilbert space H, with a linear structure that plays a major role for physics. If the system can be split into two components, its state space splits into the tensor product of two Hilbert spaces: H=H1⊗H2, each carrying a subset of observables. Because of the linearity, a generic state is not a tensor product of component states; that is, in general ψ ≠ ψ1⊗ψ2. This is entanglement. Restricting the observables to those of a subsystem, say system 1, determines a quantum entropy over and above classical statistical entropy. This is measured by the von Neumann entropy S=-tr[ρlogρ] of the density matrix ρ=trH2|ψ><ψ|. Coarse graining is given by the restriction to the observables of a single subsystem. The conjecture presented in this paper can then be extended to the quantum context. Consider a “sufficiently complex” quantum system [This means: with a sufficiently complex algebra of observables and a Hamiltonian which is suitably “ergodic” with respect to it. A quantum system is not determined uniquely by its Hilbert space, Hamiltonian and state. All separable Hilbert spaces are isomorphic, and the spectrum of the Hamiltonian, which is the only remaining invariant quantity, is not sufficient to characterise the system.].  Then: 
Conjecture: Given a generic state evolving in time as ψ(t), there exists splits of the system into subsystems such that the von Neumann entropy is low at initial time and increases in time. 
This conjecture, in fact, is not hard to prove. A separable Hilbert space admits many discrete bases |n>. Given any ψ∈H, we can always choose a basis |n> where ψ=|1>. Then we can consider two Hilbert spaces, H1 and H2, with bases |k> and |m>, and map their tensor product to H by identifying |k>⊗|m> with the state |n> where (k, m) appears, say, in the n-th position of the Cantor ordering of the (k, m) couples ((1,1),(1,2),(2,1),(1,3),(2,2),(3,1),(1,4)...). Then, ψ = |1>⊗|1> is a tensor state and has vanishing von Neumann entropy. On the other hand, recent results show that entanglement entropy generically evolves towards the maximal entropy for a fixed tensor split (see [9]). 
Therefore for any time evolution ψ(t) there is a split of the system into subsystems such that the initial state has zero entropy and then entropy grows. Growing and decreasing of (entanglement) entropy is an issue about how we split the universe into subsystems, not a feature of the overall state of things (on this, see [10]). Notice that in quantum field theory there is no single natural tensor decomposition of the Fock space.  
Finally, let me get to general relativity. In all examples above, I have considered non-relativistic systems where the notion of a single time variable is clearly defined. I have therefore discussed the direction of time, but not the choice of the time variable. In special relativity, there is a different time variable for each Lorentz frame. In general relativity, the notion of time further breaks into related but distinct notions, such as proper time along worldlines, coordinate time, clock time, asymptotic time, cosmological time... Entropy increase becomes a far more subtle notion, especially if we take into account the possibility that thermal energy leaks to the degrees of freedom of the gravitational field and therefore macrostates can include microstates with different spacetime geometries. In this context, a formulation of the second law of thermodynamics requires identifying not only a direction for the time variable, but also the choice of the time variable itself in terms of which the law can hold [11]. In this context, a split of the whole system into subsystems is even more essential than in the non-relativistic case, in order to understand thermodynamics [11]. The observation made in this paper therefore applies naturally to the non-relativistic case 
The perspectival origin of many aspects of our physical world has been recently emphasised by some of the philosophers most sensitive to modern physics [12, 13]. I believe that the arrow of time is not going to escape the same fate.  
The reason for the entropic peculiarity of the past should not be sought in the cosmos at large. The place to look for them is in the split, and therefore in the macroscopic observables that are relevant to us. Time asymmetry, and therefore “time flow”, might be a feature of a subsystem to which we belong, features needed for information gathering creatures like us to exist, not a feature of the universe at large.
(Submitted on 4 May 2015 (v1), last revised 10 May 2015 (this version, v2))
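To make the entanglement entropy discussion above concrete, here is a toy numerical check using only the standard definitions quoted in the text (my own sketch, not from Rovelli's paper): the von Neumann entropy of one qubit of a two-qubit pure state vanishes for a product state and equals log 2 for a Bell state, so how much entropy one sees indeed depends on how the split is chosen relative to the state.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -tr(rho log rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

def reduced_density_matrix(psi, dim1, dim2):
    """rho_1 = tr_H2 |psi><psi| for a pure state psi in H1 (x) H2."""
    m = psi.reshape(dim1, dim2)
    return m @ m.conj().T

# |0>|0> is a tensor (product) state; (|00> + |11>)/sqrt(2) is maximally entangled
product = np.kron([1.0, 0.0], [1.0, 0.0])
bell = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

for name, psi in (("product state", product), ("Bell state", bell)):
    print(name, von_neumann_entropy(reduced_density_matrix(psi, 2, 2)))
    # -> 0.0 and log(2) ~ 0.693
```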

The Connes and Rovelli joint work
The hypothesis that we have put forward in this paper is that physical time has a thermodynamical origin. In a quantum generally covariant context, the physical time is determined by the thermal state of the system, as its modular flow... The main indications in support of this hypothesis are the following: 
 • Non-relativistic limit. In the regime in which we may disregard the effect of the relativistic gravitational field, and thus the general covariance of the fundamental theory, physics is well described by small excitations of a quantum field theory around a thermal state |ω>. Since |ω> is a KMS state of the conventional hamiltonian time evolution, it follows that the thermodynamical time defined by the modular flow of |ω> is precisely the physical time of non relativistic physics. 
 • Statistical mechanics of gravity. The statistical mechanics of full general relativity is a surprisingly unexplored area of theoretical physics [4]. In reference [4] it is shown that the classical limit of the thermal time hypothesis allows one to define a general covariant statistical theory, and thus a theoretical framework for the statistical mechanics of the gravitational field. 
 • Classical limit; Gibbs states. The Hamilton equations, and the Gibbs postulate follow immediately from the modular flow relation ...
 • Classical limit; Cosmology. We refer to [11], where it was shown that (the classical limit of) the thermodynamical time hypothesis implies that the thermal time defined by the cosmic background radiation is precisely the conventional Friedmann-Robertson-Walker time. 
 • Unruh and Hawking effects. Certain puzzling aspects of the relation between quantum field theory, accelerated coordinates and thermodynamics, such as the Unruh and Hawking effects, find a natural justification within the scheme presented here. 
 • Time–Thermodynamics relation. Finally, the intimate intertwining between the notion of time and thermodynamics has been explored from innumerable points of view [1], and need not be expanded upon in this context.
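As announced in the first item of the list above, here is a minimal numerical sketch of the statement that the modular flow of a thermal (Gibbs) state reproduces the ordinary Hamiltonian time evolution up to a rescaling by the inverse temperature β. The finite-dimensional system, the random Hamiltonian and the chosen values of β and s are illustrative assumptions of the blogger, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)); H = (H + H.T) / 2      # an arbitrary finite-dimensional Hamiltonian (illustrative)
A = rng.normal(size=(4, 4)) + 0j                    # a generic observable
beta, s = 2.0, 0.7                                  # inverse temperature and thermal (modular) time

rho = expm(-beta * H)
rho /= np.trace(rho)                                # Gibbs state as a density matrix

# modular flow of the Gibbs state: sigma_s(A) = rho^{is} A rho^{-is}
K = logm(rho)                                       # modular Hamiltonian, K = -beta*H - log Z
sigma = expm(1j * s * K) @ A @ expm(-1j * s * K)

# ordinary Heisenberg evolution at the rescaled physical time t = -beta*s
t = -beta * s
heisenberg = expm(1j * t * H) @ A @ expm(-1j * t * H)

print(np.allclose(sigma, heisenberg))               # True: modular time is Hamiltonian time up to the factor beta
```

The scalar part of K (the log of the partition function) cancels in the conjugation, which is why only the rescaling by β survives.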

...
We leave a large number of issues open. It is not clear to us, for instance, whether one should consider all the states of a general covariant quantum system on the same ground, or whether some kind of maximal entropy mechanism able to select among states may make sense physically.
(Submitted on 14 Jun 1994)
//edit 30 June 2015
The most synthetic review at the technical level of this idea of the passing of time in noncommutative von Neumann algebras can probably be found in the appendix of Noncommutative Geometry, Quantum Fields and Motives by Alain Connes and Matilde Marcolli.



jeudi 25 juin 2015

The Universe also Grows ...

... but never decelerates? 
The blogger would like to present here an example of an attempt to develop cosmology in the context of the noncommutative geometry model of spacetime, taking advantage of the new dynamics coming from the hidden conformal longitudinal mode of the gravitational field, surprisingly brought to light in the recently proposed spectral quantization of spacetime.
Quintessential inflation models have been introduced extensively in the literature in an attempt to link inflation to the later stages of the universe’s evolution [4, 5, 6, 7, 8]. The key element in this unification is the fact that the inflaton, the field describing inflation, and quintessence, the field describing dark energy, are both dynamical scalar fields describing an accelerated expansion of the Universe...  

Instead of introducing scalar fields from outside into the Lagrangian, Chamseddine and Mukhanov wrote the physical metric in the following way:

gµν = (~g^αβ ∂αφ ∂βφ) ~gµν
where ~gµν is an auxiliary metric, φ is (for the moment) an arbitrary scalar field and ∂α denotes the partial derivative with respect to xα. In this way, one might say that the conformal mode of the metric has been isolated, for the physical metric is invariant under a conformal transformation of the auxiliary metric. It was shown in [1] that the resulting equations of motion can "mimic" the behavior of dark matter. Therefore, one is not obliged to introduce any type of matter from outside to explain the phenomena attributed to dark matter; rather, one now has to extract hidden fields from the metric to explain these phenomena... 
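The conformal invariance claimed in the quoted paragraph is easy to check numerically. The sketch below assumes the standard mimetic parametrization gµν = (~g^αβ ∂αφ ∂βφ) ~gµν written above; the random auxiliary metric, the gradient of φ and the conformal factor are arbitrary illustrative inputs chosen by the blogger:

```python
import numpy as np

rng = np.random.default_rng(1)
gt = rng.normal(size=(4, 4)); gt = gt + gt.T + 8 * np.eye(4)   # a generic invertible auxiliary metric ~g_{mu nu}
dphi = rng.normal(size=4)                                       # a generic gradient d_alpha phi

def physical_metric(gt, dphi):
    gt_inv = np.linalg.inv(gt)          # ~g^{alpha beta}
    P = dphi @ gt_inv @ dphi            # ~g^{ab} d_a phi d_b phi
    return P * gt                       # g_{mu nu} = P * ~g_{mu nu}

omega2 = 3.7                             # an arbitrary conformal factor Omega^2
print(np.allclose(physical_metric(gt, dphi),
                  physical_metric(omega2 * gt, dphi)))   # True: g_{mu nu} is unchanged
```

The factor Omega^2 from the rescaled ~gµν cancels against the Omega^-2 from its inverse, which is the whole content of the invariance statement.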

In this chapter, we will consider a quintessential inflation model for mimetic dark matter (MDM). We start by showing where the inspiration for the model came from, using the dynamics of a slow-rolling field. We will then go on to consider the appropriate potential in MDM that would produce almost the same effect... the final result for the potential that would produce a quintessential inflation model in MDM is [separated... into two parts, one before inflation (t ≤ t0) and the other after inflation (t > t0)]: 
[two-part expression for the potential, not reproduced here]
while the energy density becomes:
[expression for the energy density, followed by plots of the scale factor and of the energy density, not reproduced here]
... to determine β, we have to use the number of e-folds of inflation. If inflation is to last for 70 e-folds, then... we get: β ≈ 7×10^32...  On the other hand, β' is determined by matching the value of the energy density at infinity to that of the cosmological constant [13]. This will result in β' ≈ √3×10^-23 ...

These equations result in the plots above for the scale factor and the energy density (we have used ε = 0.01). From the first plot, it is clear that the scale factor is increasing with ä > 0, which implies an accelerated expansion of the universe. The expansion during inflation is much steeper than that after it, which is exactly what is needed, since the universe cannot keep on accelerating at the same rate as during inflation. Furthermore, concerning the energy density plot, the graph shows a constant energy density during inflation, which is a characteristic of inflation. In addition, the energy density reaches an asymptote as t → ∞, which is nothing but Quintessence...


In this paper... the potential used to produce [a Quintessential inflation scenario from the Mimetic Dark Matter model [1]] is defined on two time intervals, one during inflation (t = 10^-36 − 10^-32 s) and the other after inflation. The parameters of the potential were set so as to produce 70 e-folds of inflation and to yield an energy density corresponding to the one measured today, representing Dark Energy. The scale factor after inflation is that of an accelerating universe, in contrast to the decelerating de Sitter universe presented in the literature [10]. However, in non-modified General Relativity an energy density of matter/radiation is accompanied by a decelerating universe, which is why one usually requires a decelerating universe after inflation. But in this case, from the equations of MDM, one obtains the energy density of a matter/radiation dominated universe. The important thing is the energy density rather than the scale factor. Since in this model we obtain the required energy density but not the "usual" scale factor, we can avoid the problem of explaining why the universe should decelerate after inflation. Rather, at the end of inflation, the universe loses enough energy to remain accelerating, but at a slower rate than during inflation. Of course we still have to check whether this will still result in the required nucleosynthesis, and we have to see whether the temperature perturbations that arise match those of the CMB. These will be handled in future work.
(Submitted on 20 Jun 2015 (v1), last revised 23 Jun 2015 (this version, v2))

As far as the blogger can tell, Ali R. Khalifeh is a recent graduate of the American University of Beirut, Lebanon. He worked under the supervision of Ali Chamseddine.