New Dark Matter Generation Mechanism and Its Implications for the Cosmological Background

Henk Defloor

Independent Researcher, Gent, Belgium.

**DOI: **10.4236/jhepgc.2017.34058

A new concept is proposed and elaborated to account for recent observations deviating from ΛCDM and ΛWDM. Using an intermediate energy conversion mechanism in the expanding universe and assuming three neutrino families with identical mass, a neutrino mass of $1.19\pm 0.19$ eV/c^{2} is found, as well as a natural explanation for the difference in Hubble constant as measured by WMAP/Planck and obtained from LSS data. The value for the effective number of neutrinos at the time of decoupling is compatible with the Planck result. The age of the universe is slightly younger at $13.5\pm 0.5$ Gyr. At late times, the closure parameter for the neutrino radiation drastically increases but still remains well below that of the baryons, among other energy contributions.

Cite as:

Defloor, H. (2017) New Dark Matter Generation Mechanism and Its Implications for the Cosmological Background. *Journal of High Energy Physics, Gravitation and Cosmology*, **3**, 791-807. doi: 10.4236/jhepgc.2017.34058.

1. Introduction

The six parameter ΛCDM model [1] , where CDM stands for Cold Dark Matter and Λ refers to the cosmological constant, has long been regarded as the standard model of modern cosmology. Despite its success in explaining many features of the Cosmic Microwave Background (CMB) power spectrum [2] among others, increasingly precise data from observations are more and more at odds with it.

In the ΛCDM scenario, an important role is played by cold dark matter. Many interesting scenarios have been proposed to explain the true nature of CDM, including extended theories of gravity [3] , though some of them are experiencing difficulties explaining recent experiments and observations. Weakly Interacting Massive Particles (WIMPs) are very good candidates for cold dark matter. They are quite massive (about 100 GeV/c^{2}) and are thought to originate from thermal equilibrium processes in the very early universe. Despite a huge experimental effort, no convincing evidence for them could be found [4] .

Other popular candidates are axions [4] , which might be generated in the Coulomb field of the nucleon. In contrast to WIMPs, their masses are expected to be very low (in the 10^{−6} - 1 eV range) and, to allow for structure formation, they cannot be in thermal equilibrium with the other particles. Experiments indicate that the original axion scenario [5] does not describe how nature works.

With the currently accepted amount of CDM, N-body simulations [6] result in various phenomena including higher central densities in galaxies (cusp-core problem) [7] and more satellites (missing satellite problem) [8] than observed. One possible explanation comes from sterile neutrinos [9] in the keV range, referred to as Warm Dark Matter (WDM). They are assumed not to interact via any of the known interactions. Boltzmann codes reveal, on the other hand, that there are a number of deviations in the CMB power spectrum, such as the quadrupole anomaly, a lower Temperature-Temperature (TT) power spectrum at $l=20\text{-}30$ and a slightly enhanced first acoustic peak [2] [10] . From oscillation experiments, we also know that at least two of the three neutrino families must be massive [11] .

In this paper, dark matter is examined in a different cosmological model, using light dark matter particles in the eV range, i.e. Hot Dark Matter (HDM), while still allowing for structure formation as observed.

First, a new concept is proposed for dark matter generation, starting from well-known physics and a single main additional assumption. Subsequently, the formulas used in the simulations to calculate the background are presented. In the third part, the simulations and results are dealt with, with the emphasis on the approach rather than the details. Finally, the predictions of these simulations are discussed, and three tests are proposed to compare theoretical results with observations.

2. Concept and Assumptions

The first Friedmann equation [12] describes how the scale factor evolves in a homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) universe with matter, radiation and dark energy (cosmological constant) and is essentially an energy equation [4] . As energy is conserved locally in general relativity, one could imagine a mechanism converting kinetic and potential energy into mass. In the whole approach almost everything is based on well-known physics and there are very few assumptions. The main assumption is to take the proton with its characteristic radius of ${r}_{p}=0.88\text{\hspace{0.17em}}\text{fm}$ and typical string constant $R=1$ GeV/fm [13] as representative for an underlying mechanism which is unknown. See Section 4 for a justification of the value of R . This mechanism is thought to be capable of following the expansion of the universe temporarily and in doing so building up some small amount of energy, which is eventually released by generating neutral weakly interacting massive particle/antiparticle pairs with their initial kinetic energies depending on cosmological time.

Note that their center of mass is that of the nucleon they originate from, an indispensable feature allowing for structure formation in a universe with light HDM particles, which is otherwise not possible [4] . This concept not only allows for calculating the evolution of the characteristics of the universe but also results in specific predictions of, among others, the masses of the dark matter particles themselves. In addition to this main assumption, the universe is considered flat today with the possibility to vary towards the past, and the Hubble constant, as obtained from measurements in the local universe, is taken as an input.

3. Calculation of Background

The evolution of the scale factor $a\left(t\right)$ of the expanding universe is described by the Friedmann equation [14] which can be derived from the 00-component of the Einstein field equations of general relativity. Accounting for curvature it reads [12]

${H}^{2}\left(t\right)\equiv {\left(\frac{\dot{a}}{a}\right)}^{2}=\frac{8\pi G}{3}{\rho}_{tot}\left(t\right)-\frac{\kappa {c}^{2}}{{a}^{2}}$ (1)

where H is the Hubble parameter, ${\rho}_{tot}$ stands for the total energy density, c is the speed of light, G denotes Newton’s constant and $\kappa $ is a parameter representing curvature, which is negative in case of an open universe with sub-critical density.

The curvature term can also be represented by a fictive density [12] . The various energy contributions considered in this paper are the radiation from photons (subscript g ), baryons (b ), Standard Model (SM) neutrino radiation (nr ) and neutrino mass (nm ), generated dark matter radiation (dmr ) and dark matter mass (dmm ) as well as curvature (k ) and dark energy (de ). The SM neutrinos and the generated dark matter particles will turn out to be the same particles but their abundance and time and means of generation (non-thermal versus thermal) differ.

Taking into account the dependencies of the various species on the scale factor (see for example [4] ) and introducing the closure parameter defined as ${\Omega}_{i}\equiv {\rho}_{i}/{\rho}_{cr}$ with ${\rho}_{cr}\equiv 3{H}_{0}^{2}/8\text{\pi}G$ [14] the critical density, Equation (1), written in full, becomes

${\left(\frac{\dot{a}}{a}\right)}^{2}={H}_{0}^{2}\left[\left({\Omega}_{g}+{\Omega}_{nr}\right){a}^{-4}+\left({\Omega}_{b}+{\Omega}_{nm}+{\Omega}_{dmm,0}\left(z\right)+{\Omega}_{dmr,0}\left(z\right)\right){a}^{-3}+{\Omega}_{k,0}\left(z\right){a}^{-2}+{\Omega}_{de}\right]$ (2)

The subscript 0 refers to the current cosmological time and all z dependent quantities are calculated through to the present time. It should be noted already that ${\Omega}_{k,0}\left(z\right)$ depends on the redshift and is only zero at the present cosmological time ${t}_{0}$ . Similarly, the contributions with dark matter have a z dependence as well. The amount of dark matter radiation can be inferred from the total amount of energy generated per decay minus the energy associated with the mass of the corresponding dark matter particles. In view of the conservation of the “center of energy” [15] , it is considered as a matter contribution.
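
As an illustration, the right-hand side of Equation (2) can be evaluated numerically. The minimal Python sketch below (Python being one of the languages used for the simulations in Section 4) freezes the z dependent dark matter and curvature terms at their present-day values; the closure parameters are the illustrative input values quoted in Section 4, except ${\Omega}_{nr}$ and ${\Omega}_{dmr}$, which are assumed placeholder numbers.

```python
import math

# Illustrative present-day closure parameters (Section 4); Omega_nr and
# Omega_dmr are assumed placeholders, not values quoted in the paper.
H0 = 73.24                              # km/s/Mpc, local Hubble constant [16]
Omega_g, Omega_nr = 4.61e-5, 2.9e-5     # photon and SM-neutrino radiation
Omega_b, Omega_nm = 0.0463, 0.0715      # baryons and neutrino mass
Omega_dmm, Omega_dmr = 0.162, 0.0       # generated dark matter (mass, radiation)
Omega_k, Omega_de = 0.0, 0.721          # curvature (flat today) and dark energy

def hubble(a):
    """Hubble parameter H(a) in km/s/Mpc from Equation (2), with the
    z dependent terms frozen at their a = 1 values."""
    E2 = ((Omega_g + Omega_nr) * a**-4
          + (Omega_b + Omega_nm + Omega_dmm + Omega_dmr) * a**-3
          + Omega_k * a**-2
          + Omega_de)
    return H0 * math.sqrt(E2)
```

With these inputs `hubble(1.0)` reproduces ${H}_{0}$ up to the per-mille deviation of the closure parameters from exact flatness; in the actual simulation the dark matter and curvature terms are updated each integration loop instead of being frozen.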

The photon ( ${n}_{\gamma}=411\text{\hspace{0.17em}}{\text{cm}}^{-3}$ ) and standard neutrino ( ${n}_{\nu}=336\text{\hspace{0.17em}}{\text{cm}}^{-3}$ ) abundances at ${t}_{0}$ follow directly from the equilibrium conditions in the early universe (see for example [4] ). Relating the corresponding photon density to the critical density for a Hubble constant ${H}_{0}=73.24\pm 1.74$ km/s/Mpc [16] and in combination with the photon temperature ${T}_{CMB}=2.72548\pm 0.00057$ K today [17] , results directly in the closure parameter value ${\Omega}_{g}=\left(4.61\pm 0.22\right)\times {10}^{-5}$ for the photons. See further in this section for the calculation of ${\rho}_{g}$ . Similarly, as far as the closure parameter for neutrinos (and antineutrinos) today is concerned, there is a simple rule relating it to the neutrino mass. It obeys the relation [18]

${\Omega}_{nm}=\frac{3{m}_{\nu}}{93.14{h}^{2}\text{\hspace{0.17em}}\text{eV}}$ (3)

where ${m}_{\nu}$ [eV] is the neutrino mass, h is the reduced Hubble constant ( $\equiv {H}_{0}/100$ , dimensionless) and the expression accounts for non-instantaneous decoupling. The fact that there are three families has been considered by inclusion of a factor of three.
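
Equation (3) is a one-liner; a quick check with the values obtained later in Section 4 ( ${m}_{\nu}=1.19$ eV/c^{2}, $h=0.7324$ ) recovers the quoted ${\Omega}_{\nu ,nm}=0.0715$ :

```python
def omega_nm(m_nu_eV, h):
    """Closure parameter for three degenerate-mass neutrino families,
    Equation (3): Omega_nm = 3 m_nu / (93.14 h^2 eV)."""
    return 3.0 * m_nu_eV / (93.14 * h**2)

# omega_nm(1.19, 0.7324) is approximately 0.0715, as quoted in Section 4.
```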

For the generated dark matter and related curvature considered here, the situation is different and some care is required as the number densities and curvature value are dynamical and deviate from the common scaling. ${\Omega}_{dmm,0}\left(z\right)$ is readily obtained by taking the ratio of the actual dark matter density ${\rho}_{dmm}\left(z\right)$ to ${\rho}_{nm}\left(z\right)$ ( ${a}^{-3}$ scaling from ${t}_{0}$ ), multiplied by the neutrino mass fraction ${\Omega}_{nm}$ today. Or written mathematically

${\Omega}_{dmm,0}\left(z\right)=\frac{{\rho}_{dmm}\left(z\right)}{{\rho}_{nm,0}{a}^{-3}}{\Omega}_{nm}.$ (4)

For the dark matter radiation, on the other hand, ${\rho}_{dmr}\left(z\right)$ has to be compared to the photon density ${\rho}_{g}\left(z\right)$ which evolves as ${a}^{-4}$ , resulting in

${\Omega}_{dmr,0}\left(z\right)=\frac{{\rho}_{dmr}\left(z\right)}{{\rho}_{g,0}{a}^{-4}}{\Omega}_{g}.$ (5)

Quantized packets of dark matter and radiation are calculated for logarithmically spaced intervals. At a specific redshift z the total amount of non-electromagnetically interacting matter and radiation consists of all previous contributions properly scaled to the redshift of interest as well as that of the new packet generated at that scale. These quantities can be converted then with Equation (4) and (5) in the desired z dependent closure parameters for the generated dark matter and radiation at ${t}_{0}$ . The closure parameter for the curvature is such that, when considered as a density, it makes the universe flat at that redshift, i.e.

${\Omega}_{k,0}\left(z\right)=1-{\Omega}_{other,0}\left(z\right)$ (6)

where ${\Omega}_{other,0}\left(z\right)$ stands for the sum of the contributions of the photons ${\Omega}_{g}$ , baryons ${\Omega}_{b}$ , neutrinos ${\Omega}_{nr}\left(z\right)$ and ${\Omega}_{nm}$ , generated dark matter ${\Omega}_{dmr,0}\left(z\right)$ and ${\Omega}_{dmm,0}\left(z\right)$ and dark energy ${\Omega}_{de}$ . Note that the universe is assumed to be critical only at ${t}_{0}$ by requiring that the total amount of dark matter today is reached, thereby setting the curvature at the current cosmological time to zero.

^{1}Note already that the predictions and results obtained are quite robust against this value.

The dark matter particles are gradually generated during the expansion history of the universe, starting at the moment of baryogenesis (taken here as $t={10}^{-4}$ s)^{1}. In the initial stage enough energy has to be accumulated to account for the particle/antiparticle masses. After that, some additional energy build-up is required to get the system to decay. The average time to do so is crucial in order to calculate the evolution of the dark matter in the universe according to the concept presented here.

In case of the weak interaction, combining the decay width of a (virtual) ${Z}^{*}$ boson with energy ${m}_{Z}^{*}$ which is converted into a neutrino/antineutrino ( ${C}_{V}={C}_{A}=1/2$ ) [19] with the correction for the fermion masses ${m}_{\nu}$ and ${m}_{\stackrel{\xaf}{\nu}}$ as obtained from [20] , the decay width ${\Gamma}^{{Z}^{*}\nu \stackrel{\xaf}{\nu}}$ obeys the relation

${\Gamma}^{{Z}^{*}\nu \bar{\nu}}=\frac{3{G}_{F}{\left({m}_{Z}^{*}\right)}^{3}}{12\sqrt{2}\pi}{\left(1-\frac{4{m}_{\nu}^{2}}{{\left({m}_{Z}^{*}\right)}^{2}}\right)}^{1/2}\left(1-\frac{{m}_{\nu}^{2}}{{\left({m}_{Z}^{*}\right)}^{2}}\right).$ (7)

The factor of three stems from the assumption of three neutrino families with degenerate masses and ${G}_{F}$ is Fermi’s constant. The corresponding decay time directly follows from the relation $\tau \equiv \hslash /\Gamma $ with $\hslash $ the reduced Planck constant. The peculiarity here is that ${m}_{Z}^{*}$ , which represents the average energy available to a particular decay is not a constant but is steadily increasing. Its time dependence is given by ${m}_{Z}^{*}=H{r}_{p}R\Delta {t}_{tot}$ , with $\Delta {t}_{tot}$ the average total elapsed time since the previous decay. Note also from Equation (7) that decays can only occur when ${m}_{Z}^{*}>2{m}_{\nu}$ .
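
Equation (7) and the relation $\tau \equiv \hslash /\Gamma $ can be sketched as follows (natural units, ${G}_{F}$ in GeV^{−2}, $\hslash $ in GeV·s; as a sanity check, for ${m}_{Z}^{*}={m}_{Z}\approx 91.2$ GeV and negligible ${m}_{\nu}$ the formula reproduces the familiar invisible Z width of about 0.5 GeV):

```python
import math

G_F = 1.1663787e-5      # Fermi constant, GeV^-2
HBAR = 6.582119569e-25  # reduced Planck constant, GeV*s

def gamma_z_nunu(m_z_star, m_nu):
    """Decay width (GeV) of Equation (7) for a virtual Z* of energy
    m_z_star (GeV) into nu/nubar pairs of three degenerate families.
    Below the threshold m_z_star = 2 m_nu no decay is possible."""
    if m_z_star <= 2.0 * m_nu:
        return 0.0
    x = (m_nu / m_z_star) ** 2
    return (3.0 * G_F * m_z_star**3 / (12.0 * math.sqrt(2.0) * math.pi)
            * math.sqrt(1.0 - 4.0 * x) * (1.0 - x))

def decay_time(m_z_star, m_nu):
    """Corresponding decay time tau = hbar / Gamma, in seconds."""
    g = gamma_z_nunu(m_z_star, m_nu)
    return math.inf if g == 0.0 else HBAR / g
```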

To quantify the time required for reaching this threshold of mass generation, it is important to remember that the rate of energy production $\text{d}E/\text{d}t$ can be obtained from the expansion rate H applied to the proton radius ${r}_{p}$ with its characteristic field strength R . The production rate therefore is $\text{d}E/\text{d}t=H{r}_{p}R$ . Taking into account that the energy difference $\Delta {E}_{m}=H{r}_{p}R\Delta {t}_{m}$ corresponds to twice the dark matter particle rest energy ${m}_{dm}{c}^{2}$ , this leads to

$\text{\Delta}{t}_{m}=\frac{2{m}_{dm}}{H{r}_{p}R{c}^{2}}$ (8)

where $\Delta {t}_{m}$ is the time for mass generation of a particle and antiparticle. Note that the Hubble parameter $H\equiv \dot{a}/a$ is used and not the derivative with respect to the scale factor a ; see for example [12] .
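
A worked number may help fix the scale of Equation (8). Evaluating $\Delta {t}_{m}=2{m}_{dm}{c}^{2}/\left(H{r}_{p}R\right)$ (the rest energy $2{m}_{dm}{c}^{2}$ expressed directly in GeV) for ${m}_{dm}=1.19$ eV/c^{2} at today's expansion rate gives roughly ${10}^{9}$ s, i.e. of the order of decades:

```python
MPC_M = 3.0857e22          # metres per megaparsec

def dt_mass(m_dm_eV, H_km_s_Mpc, r_p_fm=0.88, R_GeV_fm=1.0):
    """Mass-generation time of Equation (8), in seconds.
    dE/dt = H * r_p * R with H converted to s^-1; working directly with
    the rest energy in GeV absorbs the factor c^2."""
    H = H_km_s_Mpc * 1.0e3 / MPC_M          # s^-1
    dE_dt = H * r_p_fm * R_GeV_fm           # GeV/s
    return 2.0 * m_dm_eV * 1.0e-9 / dE_dt   # s

# dt_mass(1.19, 73.24) is about 1.1e9 s, i.e. a few decades.
```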

Finding the time for decay $\Delta {t}_{r}$ is a little more elaborate. Starting from the general expression for the decay probability density $P\left(t\right)=\frac{1}{\tau}{\text{e}}^{-t/\tau}$ , where $\tau $ stands for the decay constant, the likelihood that a decay occurs in the interval $\left[t,t+\text{d}t\right]$ is given by the probability that it does not happen in the first n intervals $\text{d}t$ multiplied by the probability that it occurs in the next interval $\text{d}t$ . The probability $P\left(t\right)$ then becomes

$P\left(t\right)={\displaystyle \prod _{i=1}^{n}}\left(1-\frac{1}{{\tau}_{i}}{\text{e}}^{-\frac{t}{{\tau}_{i}n}}\frac{t}{n}\right)\frac{1}{{\tau}_{n+1}}{\text{e}}^{-\frac{t}{{\tau}_{n+1}n}}$ (9)

where $\text{d}t=t/n$ has been substituted and t is the total elapsed time for decay. This approximation becomes quite accurate for n sufficiently large, e.g. $n=1000$ . The value for ${\tau}_{i}$ follows directly from its definition $\tau \equiv \hslash /\Gamma $ and Equation (7) with ${m}_{Z}^{*}=H{r}_{p}Rt{c}^{2}$ . The Hubble parameter is that of the background at that time and is given by the Friedmann equation, but applied to the concept of gradually generated dark matter. The value of $\Delta {t}_{tot}$ is calculated as the average over the distribution $P\left(t\right)$ and almost coincides with its maximum. The resulting time for the decay process itself then amounts to $\Delta {t}_{r}=\Delta {t}_{tot}-\Delta {t}_{m}$ and the corresponding radiation energy is $\Delta {E}_{r}=H{r}_{p}R\Delta {t}_{r}$ .
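
The discretization of Equation (9) can be coded directly. In the sketch below, `tau_of_t` stands in for the time-dependent decay constant ${\tau}_{i}$ derived from Equation (7); as a consistency check, for a constant $\tau $ and large n the expression should reduce to the ordinary exponential density ${\text{e}}^{-t/\tau}/\tau $ .

```python
import math

def decay_prob(t, tau_of_t, n=1000):
    """Discretized decay probability density of Equation (9): survival
    over n intervals dt = t/n times decay in the next interval.
    tau_of_t maps elapsed time to the instantaneous decay constant."""
    dt = t / n
    surv = 1.0
    for i in range(1, n + 1):
        tau_i = tau_of_t(i * dt)
        surv *= 1.0 - math.exp(-t / (tau_i * n)) * dt / tau_i
    tau_next = tau_of_t(t)      # tau_{n+1}
    return surv * math.exp(-t / (tau_next * n)) / tau_next
```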

In summary, the total elapsed time for a decay results in an energy $\Delta {E}_{tot}\equiv \Delta {E}_{m}+\Delta {E}_{r}=H{r}_{p}R\left(\Delta {t}_{m}+\Delta {t}_{r}\right)$ . The mass energy generated is always the same but the amount of radiation for a specific decay varies and depends on the random decay character. Comparing the distribution for $\Delta {E}_{tot}$ with the Einstein relation $\Delta {E}_{tot}=\sqrt{{\left(2{m}_{\nu}{c}^{2}\right)}^{2}+{\left(2{p}_{\nu}c\right)}^{2}}$ yields the instantaneous momentum distribution. It will turn out to be fairly sharply peaked and can be replaced by its average momentum. This simplifies the computational effort considerably.

It is well known that, as the universe expands, particles cool down and eventually become non-relativistic if they are massive. However, the type of distribution is preserved. The key point is that the momentum is inversely proportional to the scale factor a [12] so that the total energy ${E}_{tot}$ of a particle with mass m scales as [1]

${E}_{tot}=\sqrt{{m}^{2}{c}^{4}+\frac{{a}^{2}\left({t}_{em}\right)}{{a}^{2}\left(t\right)}{p}_{em}^{2}{c}^{2}}$ (10)

where the subscript em stands for emission. Subtracting the mass energy $\Delta {E}_{m}$ from it, the radiation energy $\Delta {E}_{r}$ of the neutrino or antineutrino is readily obtained. To find the evolution of the radiation density ${\rho}_{dmr}$ of the neutrino population generated up to that time, all particle kinetic energies have to be rescaled correspondingly. A non-thermal distribution results, with a total radiation density ${\rho}_{dmr}$ and an ${N}_{eff,dmr}\left(z\right)$ .
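
The redshift rescaling of Equation (10) is a one-line function; for a massless particle it reduces to $E\propto {a}^{-1}$ , and for $p\to 0$ to the rest energy:

```python
import math

def total_energy(m_c2, p_em_c, a_em, a):
    """Total energy of Equation (10), with all arguments in the same
    energy units: the emission momentum redshifts by a_em / a."""
    p_c = p_em_c * a_em / a
    return math.sqrt(m_c2**2 + p_c**2)

# Example: a particle with m c^2 = 3, emitted with p c = 8 at a_em = 0.5,
# has p c = 4 at a = 1, so E_tot = sqrt(9 + 16) = 5.
```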

An important parameter often used as an indicator for new physics beyond the ΛCDM model is the effective number of relativistic degrees of freedom ${N}_{eff}$ . It is positively correlated with the Hubble constant [2] . As a rule of thumb, ${N}_{eff}\propto {h}^{2}$ has been adopted when applied to the CMB data. As we will see in the next section the results obtained here might give a natural explanation for the difference in Hubble constant as derived from CMB and Large Scale Structure (LSS) data.

The content of the universe can be considered as a fluid obeying the equation of state $p=w\rho {c}^{2}$ with w the equation of state (e.o.s.) parameter. It applies to a single component as well as to all species together. In the latter case it follows from the weighted average as

${w}_{x}=\frac{{\sum}_{i}{w}_{i}{\Omega}_{i,0}{a}^{-3\left(1+{w}_{i}\right)}}{{\sum}_{i}{\Omega}_{i,0}{a}^{-3\left(1+{w}_{i}\right)}}$ (11)

where ${w}_{x}$ is the resulting averaged value and the ${w}_{i}$ s are the e.o.s. parameters for the individual components. All types of radiation obey ${w}_{r}=1/3$ while ${w}_{m}\approx 0$ for cold dark matter, curvature has ${w}_{k}=-1/3$ and dark energy is specified here by ${w}_{de}=-1$ , i.e. a cosmological constant Λ. In view of conservation of the center of energy, both the generated dark matter and corresponding radiation are classified as ${w}_{m}=0$ , as are the baryons they originate from.
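
The density-weighted average above is straightforward to implement; deep in the radiation era the result approaches $1/3$ and in a Λ-dominated future it approaches $-1$ :

```python
def w_total(a, components):
    """Density-weighted equation-of-state parameter, Equation (11).
    components: iterable of (w_i, Omega_i0) pairs; each density scales
    as a^(-3 (1 + w_i))."""
    num = sum(w * om * a ** (-3.0 * (1.0 + w)) for w, om in components)
    den = sum(om * a ** (-3.0 * (1.0 + w)) for w, om in components)
    return num / den

# Example mix: radiation (w = 1/3), matter (w = 0), Lambda (w = -1).
mix = [(1.0 / 3.0, 1e-4), (0.0, 0.3), (-1.0, 0.7)]
```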

The amount of radiation from relativistic particles is often expressed as an effective number of relativistic neutrinos. It is implicitly defined as [18]

${\rho}_{r}={\rho}_{g}+{\rho}_{\nu}=\left[1+\frac{7}{8}{\left(\frac{4}{11}\right)}^{\frac{4}{3}}{N}_{eff}\right]{\rho}_{g}$ (12)

where ${\rho}_{\nu}$ stands for all types of radiation, whether ${\rho}_{dmr}$ or ${\rho}_{nr}$ , from relativistic particles and ${\rho}_{g}$ is the photon density. For the SM neutrinos, the radiation density ${\rho}_{nr}$ is obtained from [14]

${\rho}_{nr}={g}_{\nu}{\displaystyle {\int}_{0}^{\infty}\frac{4\pi {\left(pc\right)}^{2}\text{d}\left(pc\right)}{{\left(2\pi \hslash \right)}^{3}{c}^{5}}\frac{E\left(p\right)-m{c}^{2}}{\mathrm{exp}\left(\frac{pc}{{k}_{B}{T}_{\nu}}\right)+1}}$ (13)

with $E\left(p\right)=\sqrt{{m}^{2}{c}^{4}+{p}^{2}{c}^{2}}$ , ${p}_{\nu}$ and ${T}_{\nu}={T}_{\nu ,0}/a$ the momentum and neutrino temperature respectively and ${k}_{B}$ Boltzmann’s constant. ${\rho}_{g}$ can be derived directly from Boltzmann thermodynamics and obeys the relation [4]

${\rho}_{g}=\frac{{g}_{\gamma}}{2}\frac{{\pi}^{2}{\left({k}_{B}T\right)}^{4}}{15{\hslash}^{3}{c}^{5}}.$ (14)

${g}_{\nu}={g}_{\gamma}=2$ are the spin degrees of freedom for the neutrinos and photons respectively. Note that the Fermi-Dirac distribution of the neutrinos is independent of their mass. Strictly speaking Equation (12) applies to particles with a thermal distribution but it is also used here to determine ${N}_{eff}$ for the generated neutrinos. Their total radiation density is obtained as described above.
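
Equations (12)-(14) can be cross-checked numerically. In dimensionless form ( $x=pc/{k}_{B}T$ ) the kinetic-energy integral of Equation (13) for massless particles equals $7{\pi}^{4}/120$ , i.e. $7/8$ of the photon integral ${\pi}^{4}/15$ of Equation (14); three massless families at ${T}_{\nu}={\left(4/11\right)}^{1/3}{T}_{\gamma}$ then reproduce ${N}_{eff}=3$ through Equation (12). A simple Riemann-sum sketch:

```python
import math

def fd_kinetic_integral(m_over_kT=0.0, xmax=50.0, n=20000):
    """Dimensionless kinetic-energy integral of Equation (13) in
    x = pc/(k_B T): integral of x^2 (E - mc^2)/(k_B T) / (e^x + 1) dx,
    evaluated with a simple Riemann sum."""
    dx = xmax / n
    total = 0.0
    for i in range(1, n + 1):
        x = i * dx
        e = math.sqrt(m_over_kT**2 + x**2)   # E / (k_B T)
        total += x**2 * (e - m_over_kT) / (math.exp(x) + 1.0) * dx
    return total

def n_eff(rho_nu_over_rho_g):
    """Effective number of relativistic neutrinos, solving Equation (12)
    for N_eff given the density ratio rho_nu / rho_g."""
    return rho_nu_over_rho_g / ((7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0))

# Three massless families at T_nu = (4/11)^(1/3) T_gamma: the density
# ratio carries a factor (4/11)^(4/3) relative to the photons.
ratio = (3.0 * (4.0 / 11.0) ** (4.0 / 3.0)
         * fd_kinetic_integral(0.0) / (math.pi**4 / 15.0))
```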

Finally, a least squares method has been applied to astrophysical data to arrive at a justification of the value of R . To account for the magnitude of the error bars, the following fitting formula has been used [21]

${\chi}_{A}^{2}={\displaystyle {\sum}_{i}\frac{{\left|{A}_{i}^{th}-{A}_{i}^{obs}\right|}^{2}}{{\sigma}_{A,i}^{2}}}$ (15)

where ${\chi}_{A}^{2}$ represents the least squares value, ${A}_{i}^{th}$ and ${A}_{i}^{obs}$ stand for the theoretical (th) and measured (obs) values of a particular data point respectively, ${\sigma}_{A,i}$ accounts for the standard deviation and the sum is over all measurements i .
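
Equation (15) is the standard variance-weighted least squares sum:

```python
def chi_squared(theory, observed, sigma):
    """Least-squares value of Equation (15): sum over data points of
    (theory - observed)^2 / sigma^2."""
    return sum((t - o) ** 2 / s ** 2
               for t, o, s in zip(theory, observed, sigma))

# Example: a perfect first point and a 2-sigma-off second point
# contribute 0 + 4 = 4 to chi^2.
```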

4. Simulations and Results

As in the previous section, we restrict ourselves to the evolution of the background parameters. The main task is to solve the Friedmann equation―Equation (2)―numerically with the approach and formulas presented in Section 3. All (other) background quantities can then be derived from it. For this purpose numerical codes have been written in C++ and Python. What happens in the very early universe is essentially irrelevant for our purposes here. Therefore, the simulations start at ${t}_{s}={10}^{-4}$ s, which is sufficiently early as the amount of generated dark matter at that time is close to zero; the exact value of ${t}_{s}$ is non-critical and does not influence the results.

In the Friedmann equation―Equation (2)―the evolution of the densities of the standard components is implemented in the conventional way with their typical scaling. However, the dark matter fraction generated starts at zero and there is some curvature related to this deviation from “flatness”. From an expansion point of view, the latter components are irrelevant as the universe is completely radiation dominated at that time. Integration is forward in time and, as time evolves, dark matter and radiation are generated and curvature diminishes correspondingly.

After each integration loop, the z dependent dynamical contributions are updated in Equation (2) before the next cycle starts. This is iterated until $a=1$ . The input parameters are ${\Omega}_{g}=\left(4.61\pm 0.22\right)\times {10}^{-5}$ (calculated), ${\Omega}_{b}=0.0463\pm 0.0024$ , ${\Omega}_{m,0}=0.2793\pm 0.024$ for the total matter contribution at ${t}_{0}$ , ${\Omega}_{de}=0.721\pm 0.025$ [10] as well as ${H}_{0}=73.24\pm 1.74$ km/s/Mpc as obtained from LSS data [16] . Best results are found for a nucleon potential with $R=1$ GeV/fm. These values apply throughout this paper. Requiring that the total amount of matter must be generated by $a=1$ and that the universe must be flat today, the mass of the dark matter particles can be determined and the evolution of all background quantities and parameters is completely fixed. Error bars are found with a Monte Carlo method, starting from Gaussian distributions for the input parameters ${\Omega}_{b}$ , ${\Omega}_{m,0}$ and ${H}_{0}$ .
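
The Monte Carlo error propagation can be sketched generically: draw the input parameters from Gaussian distributions and collect the spread of the derived quantity. The full simulation pipeline is not reproduced here; as a trivial stand-in, the helper below propagates ${H}_{0}=73.24\pm 1.74$ km/s/Mpc into the reduced Hubble constant $h={H}_{0}/100$ .

```python
import random
import statistics

def mc_error(f, means_sigmas, n=20000, seed=42):
    """Monte Carlo error propagation: draw each input from a Gaussian
    (mean, sigma) and return the mean and standard deviation of
    f(*inputs) over n samples. 'f' stands in for the full background
    simulation used in the paper."""
    rng = random.Random(seed)
    samples = [f(*[rng.gauss(m, s) for m, s in means_sigmas])
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

# Trivial illustration: h = H0 / 100 for H0 = 73.24 +/- 1.74 km/s/Mpc [16].
h_mean, h_sigma = mc_error(lambda H0: H0 / 100.0, [(73.24, 1.74)])
```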

The result turns out to be compatible with the observed energy contributions in the universe, with the dark matter particles consisting of the standard model neutrinos (three families) with a (nearly) degenerate mass of $1.19\pm 0.19$ eV/c^{2} . An effective number of neutrinos of ${N}_{eff,CMB}=3.81\pm 0.28$ is found at the CMB ( ${z}_{CMB}=1089$ [2] ), which is still within the matter dominated era. It should be noted, however, that ${N}_{eff,CMB}$ is composed differently here and consists of a contribution from the “SM neutrinos” ( ${N}_{eff,nr,CMB}=0.84\pm 0.11$ ) and that of the generated dark matter neutrinos ( ${N}_{eff,dmr,CMB}=2.97\pm 0.26$ ). In the recent universe ${N}_{eff}$ increases ( ${N}_{eff,{t}_{0}}=356\pm 111$ ) but the corresponding closure parameter still remains well below that of the baryons, among other energy contributions.

The age of the universe follows directly from the simulations and, at $13.5\pm 0.5$ Gyr, turns out to be slightly younger than in the SM of cosmology [2] . This value might still be compatible with the age of the oldest globular clusters and stars. The redshift of matter-radiation equality is now ${z}_{eq}=2122\pm 214$ and dark energy takes over at ${z}_{de}=0.39\pm 0.06$ .

Figure 1 illustrates how the various contributions to the energy budget evolve for the concept presented here, with the well-known epochs of radiation domination at early times (small scale factor), followed by matter domination and finally, very recently, a universe dominated by dark energy. A distinction is made between the neutrinos created from equilibrium reactions (SM) in the early universe (thermal) and those gradually generated during the expansion history (non-thermal). Note that, from an energy point of view, the deviation from flatness, represented by the curvature contribution, peaks around $a\approx 0.25$ and never exceeds a few percent.

With the expansion history determined by the (dynamical) evolution of the different components, the energy contributions of dark radiation and matter as generated by the concept used here can be plotted as a function of a . Figure 2 shows the result. The instantaneous situation is plotted; cooling due to expansion has not been considered in this figure. At early times the amount of radiation exceeds the matter contribution by far. However, the impact remains limited as the dark matter quantity is very low at this stage and radiation quickly decays as time evolves. A major part of the dark matter is generated in the last decade and in absolute value the total dark matter radiation also continuously increases. See also Figure 4.

Figure 1. Evolution of the various energy components as a function of the scale factor a for the concept presented here. Contributions are as labelled and for the input values specified in the text. Total density is normalized to 1.

In the concept introduced, dark matter particles are thought to be representatively generated by an energy conversion mechanism where the nucleon potential plays a central role. Decays occur statistically and, even for a fixed cosmological time, the momenta of the decay products (neutrinos and antineutrinos) are distributions. Figure 3 illustrates what these look like at certain specific moments in time, without cooling down due to the expansion of the universe. All shapes are sharply peaked. Distributions are replaced by single momentum values as far as the calculation of the background is concerned. Note that the

Figure 2. Relative importance of mass and radiation, instantaneously generated by the approach taken, as a function of a . The mass fraction is continuously increasing.

Figure 3. Momentum distributions (probability) as a function of the Hubble parameter as they are at their moment of creation. Note also the direction of time.

peak values increase and distributions sharpen as time evolves, as might be expected. Equation (9) has been used with $n=800$ as a good approximation everywhere.

The amount of radiation energy is often presented as an effective number of neutrinos ${N}_{eff}$ , as if they were massless. Figure 4 shows this evolution, once again as a function of the scale factor, as well as that of the neutrino number densities rescaled for today. For many decades in a , but a rather short period in time, the thermal neutrinos provide the main contribution to the total amount of dark matter. From Equation (3), the corresponding closure parameter is ${\Omega}_{\nu ,nm}=0.0715\pm 0.012$ . At late times, generated dark matter constitutes the main contribution. Simulations have been performed so that, at $a=1\left(\equiv {t}_{0}\right)$ , the total amount of dark matter is the one required, here ${\Omega}_{dm,0}=0.162\pm 0.027$ .

Additional structure formation power comes primarily from the fact that the generated dark matter originates from the baryons and inherits their instantaneous density perturbations due to the center of energy principle of the (relativistic) particle/antiparticle configuration. Consequently, baryons and dark matter are more strongly related than usual. The fact that dark matter is thought to originate from the baryon overdensities might also explain the mysterious conspiracy of both components as observed [22] .

In ΛCDM the evolution of the universe is determined by one or the other species for long times and transitions are rather short. If dark matter is considered to be continuously generated and the neutrinos contribute considerably, the story is different. Figure 5 shows the evolution of the equation of state parameter for all species together ( ${w}_{tot}$ ) and the neutrinos (thermal and non-thermal) specifically ( ${w}_{dm+n}$ ). Equation (11) is used where the dynamical z dependent closure parameters are obtained from the simulations. Note the rather large transition regions.

Figure 4. Effective numbers of neutrinos ${N}_{eff}$ and neutrino number densities for thermal and generated neutrinos as a function of the scale factor.

Figure 5. Evolution of the equation of state parameters for all species together and the neutrinos (dark matter and standard neutrinos) specifically.

Up to this point, the amount of dark matter and baryons, constituting the total matter content, have been considered as fixed. It is a promising sign that the outcome is so good with a single additional assumption and without fine tuning. Simulations allow us to vary the matter contribution and quantify the impact on the main quantities of interest here. Figure 6 illustrates how the simulated neutrino mass ${m}_{\nu}$ , the effective number of neutrinos at the CMB and ${t}_{0}$ as well as the age of the universe ${t}_{0}$ evolve, if the total amount of matter ${\Omega}_{m,0}$ is varied.

According to the simulations a large matter fraction above 0.35 is not possible in view of the resulting young age of the universe, the small value for ${N}_{eff,CMB}$ and the large neutrino mass which would be in conflict with neutrino mass experiments.

Finally, one could go a step further and try to fit the main model input parameters, the baryon content ${\Omega}_{b}$ , the matter contribution ${\Omega}_{m,0}$ , the local Hubble constant ${H}_{0}$ as well as the potential R to astrophysical data.

A key parameter in the model, and the only new one here, is the string constant R which is monotonically related to the neutrino mass ${m}_{\nu}$ . Higher values of R result in lower mass bounds. To investigate the influence of the value adopted here, the proposed model has been fitted to both $H\left(z\right)$ measurements [23] and the Union2.1 compilation [24] simultaneously, using a least squares method.

Allowing the field strength to vary in the range 0.1 - 2.2 GeV/fm in steps of 0.1 GeV/fm, the baryon content between 0.042 and 0.049 (steps of 0.001), the total amount of matter in the range 0.20 - 0.32 (steps of 0.01) and ${H}_{0}$ from 67.5 to 75 km/s/Mpc (steps of 0.5 km/s/Mpc), the parameter space has been scanned to arrive at the best fit. A least squares method has been used, supplemented by the criterion ${N}_{eff}=3.84$ [10] . As might be expected, the result is fairly independent of ${\Omega}_{b}$ . The best input parameters are then found to be

Figure 6. Simulated neutrino mass ${m}_{\nu}$ , predicted age of the universe ${t}_{0}$ and effective number of neutrinos at the CMB and at ${t}_{0}$ , as a function of the total matter content ${\Omega}_{m,0}$ .

${\Omega}_{b}=0.049$ , ${\Omega}_{m,0}=0.28$ , ${H}_{0}=70.0$ km/s/Mpc and $R=1.0$ GeV/fm. With those input parameters the main output values amount to ${m}_{\nu}=1.08$ eV/c^{2} , ${N}_{eff,CMB}=3.85$ , ${t}_{0}=14.1$ Gyr, ${N}_{eff,{t}_{0}}=408$ , ${z}_{eq}=1957$ and ${z}_{de}=0.38$ .
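The parameter scan described above can be sketched as a plain grid search. The $H\left(z\right)$ points below are hypothetical placeholders (the paper uses the compilation of [23]), and the flat ΛCDM-like expansion rate stands in for the paper's own background, which adds curvature towards the past via Equation (15) and the ${N}_{eff}$ criterion:

```python
import numpy as np

# Illustrative H(z) points (hypothetical values; the paper fits the
# compilation of Farooq et al. [23] and the Union2.1 SN1a sample [24]).
z_obs = np.array([0.07, 0.35, 0.60, 0.90, 1.30, 1.75])
H_obs = np.array([69.0, 83.0, 87.9, 117.1, 168.0, 202.0])
sig   = np.array([19.6, 8.3, 6.1, 11.2, 17.0, 40.0])

def H_model(z, H0, Om):
    # Flat LCDM-like placeholder for the expansion rate; the paper's
    # model modifies this through Equation (15).
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

# Scan the same H0 and Omega_m grids as in the text and keep the
# smallest chi-square.
best = (np.inf, None, None)
for H0 in np.arange(67.5, 75.01, 0.5):          # km/s/Mpc
    for Om in np.arange(0.20, 0.3201, 0.01):    # total matter fraction
        chi2 = np.sum(((H_obs - H_model(z_obs, H0, Om)) / sig)**2)
        if chi2 < best[0]:
            best = (chi2, H0, Om)

chi2, H0, Om = best
print(f"chi2 = {chi2:.1f} at H0 = {H0:.1f} km/s/Mpc, Omega_m = {Om:.2f}")
```

In the paper the scan additionally loops over R and ${\Omega}_{b}$ and fits the SN1a distance moduli at the same time; the structure of the search is the same.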

Starting from the Planck results [2] of ${H}_{0}=67.4$ km/s/Mpc and ${N}_{eff}={3.36}_{-0.64}^{+0.68}$ (95% CL), a value of ${H}_{0}=73.24\pm 1.74$ km/s/Mpc [16] would imply (see the rule of thumb in the previous section) ${N}_{eff}=3.97\pm 0.47$ . This is compatible with the prerequisite of ${N}_{eff}=3.89\pm 0.67$ [10] , justifies our choice of the string constant R, and is in excellent agreement with ${N}_{eff}=3.81\pm 0.28$ as found with the model proposed here for ${\Omega}_{b}=0.0463\pm 0.0024$ , ${\Omega}_{m,0}=0.2793\pm 0.024$ [10] and ${H}_{0}=73.24\pm 1.74$ km/s/Mpc [16] . Imposing ${N}_{eff}=3.36$ from [2] instead yields $R=0.9$ GeV/fm.
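The rule of thumb itself is stated in the previous section and is not reproduced here; purely as a numerical illustration, a straight line through the two quoted $\left({H}_{0},{N}_{eff}\right)$ pairs reproduces the implied scaling:

```python
# Linear relation through the two (H0, N_eff) pairs quoted in the text
# (Planck [2] and the local measurement of Riess et al. [16]).  This is
# an illustration of the implied scaling, not the paper's own rule of
# thumb.
H0_planck, Neff_planck = 67.4, 3.36
H0_local,  Neff_local  = 73.24, 3.97

slope = (Neff_local - Neff_planck) / (H0_local - H0_planck)

def neff_of_H0(H0):
    """Illustrative linear N_eff(H0) relation."""
    return Neff_planck + slope * (H0 - H0_planck)

print(f"slope ~ {slope:.3f} per km/s/Mpc")
print(f"N_eff(70.0) ~ {neff_of_H0(70.0):.2f}")
```

The slope of roughly 0.1 per km/s/Mpc makes explicit how strongly an upward shift in ${H}_{0}$ pulls ${N}_{eff}$ above its standard value.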

The result of the best fit is shown in Figure 7 for ΛCDM (red line) and the proposed concept (green line). The best fit input parameters used are ${\Omega}_{m,0}=0.28$ , i.e. ${\Omega}_{de}=0.72$ , and ${H}_{0}=70.0$ km/s/Mpc (both models), supplemented by ${\Omega}_{b}=0.049$ and $R=1$ GeV/fm for the model proposed here. In addition, the model considered is assumed to be flat only today and open towards the past, but with fixed ${\Omega}_{de}$ ; Equation (15) has been used. Note that the two fitted lines are indistinguishable in the lower plot. Least squares values are ${\chi}_{H\left(z\right),model}^{2}=21.0$ and ${\chi}_{SN1a,model}^{2}=562.4$ , as compared to ${\chi}_{H\left(z\right),\Lambda CDM}^{2}=24.8$ and ${\chi}_{SN1a,\Lambda CDM}^{2}=562.3$ for ΛCDM. In other words, the proposed model yields a better fit to the observed $H\left(z\right)$ data and a similar fit to the SN1a distance moduli compared to the ΛCDM model.

Figure 7. Best fit for ΛCDM (red) and proposed model (green) for the $H\left(z\right)$ and SN1a data respectively. Error bars are 1σ. Input parameter values are ${\Omega}_{m,0}=0.28$ and ${H}_{0}=70.0$ km/s/Mpc (both models), supplemented by ${\Omega}_{b}=0.049$ and $R=1$ GeV/fm for the model proposed here.
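For the SN1a part of the comparison, the observed distance moduli are confronted with the model prediction. A minimal sketch of the standard distance-modulus computation, assuming a flat ΛCDM-like background as a stand-in for the paper's Equation (15):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, Om):
    # Dimensionless flat-LCDM expansion rate; placeholder for the
    # paper's background, which is open towards the past (Eq. (15)).
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def distance_modulus(z, H0=70.0, Om=0.28, n=2000):
    """mu = 5 log10(d_L / 10 pc) for a flat background (sketch)."""
    zs = np.linspace(0.0, z, n)
    integrand = 1.0 / E(zs, Om)
    # comoving distance via the trapezoidal rule [Mpc]
    dc = (C_KM_S / H0) * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                * np.diff(zs))
    dl = (1 + z) * dc            # luminosity distance [Mpc]
    return 5 * np.log10(dl) + 25  # zero point at 10 pc

print(f"mu(z=0.5) ~ {distance_modulus(0.5):.2f}")
```

The SN1a chi-square quoted above is then the usual sum of squared, error-weighted residuals between these moduli and the Union2.1 values.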

5. Predictions

A first firm prediction of the analytical work and simulations presented here is the mass of the dark matter particle, ${m}_{\nu}=1.19\pm 0.19$ eV/c^{2} , and the fact that these particles are generated by the neutral current weak interaction and are therefore thought to be the standard model neutrinos. Note that the cosmology presented here is quite different from ΛCDM, which impacts the predictions considerably. With a value of ${m}_{\nu}=1.19$ eV/c^{2} , the predicted neutrino mass is just outside the range of the experiments performed by Troitsk [25] and Mainz [26] some years ago, but the KATRIN experiment [27] , starting up at this moment, should be able to confirm or falsify this prediction in the very near future.

A second prediction is related to the effective number of neutrinos ${N}_{eff}$ , for which much experimental effort is being made to determine its value at various stages of the cosmological evolution. As can be inferred from Figure 4, ${N}_{eff}$ starts at the SM value of 3.046, becomes marginally larger at the CMB ( ${N}_{eff,CMB}=3.81\pm 0.28$ ) and increases dramatically only in the last decade of the scale factor ( ${N}_{eff,{t}_{0}}=356\pm 111$ ). As explained in Section 4, the value at the CMB consists of contributions from the thermal and non-thermal neutrinos. Note that the kinetic energies of the neutrinos are very low (eV range at most) and therefore difficult to detect directly anyway. Comparison with results such as those from Planck ( ${N}_{eff,CMB}=3.30\pm 0.27$ ) [2] might indicate whether the presented model is potentially right.
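Even such a large late-time ${N}_{eff}$ remains cosmologically modest, as the abstract notes. A quick consistency check, using the standard ${N}_{eff}$ normalization and the best-fit values quoted in the previous section (the photon density ${\Omega}_{\gamma}{h}^{2}=2.47\times {10}^{-5}$ is the standard value for the measured CMB temperature [17]):

```python
# Check that the large late-time N_eff still leaves the neutrino
# *radiation* closure parameter well below the baryon one.
h = 0.70                              # H0 / (100 km/s/Mpc), best fit
Omega_gamma = 2.47e-5 / h**2          # photon closure parameter today
per_unit = (7 / 8) * (4 / 11)**(4 / 3)  # energy density per N_eff unit
                                        # (T_nu/T_gamma = (4/11)^(1/3))

N_eff_t0 = 408                        # best-fit late-time value
Omega_nu_rad = N_eff_t0 * per_unit * Omega_gamma

print(f"Omega_nu,rad ~ {Omega_nu_rad:.1e}")  # vs Omega_b = 0.049
```

The result, a few times $10^{-3}$, confirms that the neutrino radiation stays well below the baryon contribution of 0.049.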

It is well known that there is some tension between the value of the Hubble constant as obtained from CMB data and that inferred from observations in the recent universe, see for example [16] . In addition, as explained in [2] , a positive correlation exists between ${N}_{eff}$ and ${H}_{0}$ , though ${N}_{eff}$ is composed differently here. Compatible values for the effective number of relativistic degrees of freedom were found around ${N}_{eff}\approx 3.9$ , which, starting from the Planck results [2] , would imply a Hubble constant around 73 km/s/Mpc. This is a third prediction and consequence of the model presented here. In view of the progress in cosmological measurements, upcoming results might reveal the true nature of the Hubble constant discrepancy and tell us more about the validity of our results.

6. Summary

A new concept has been proposed for the generation of dark matter, based on the nucleon with its characteristic radius and potential, which is thought to be representative of an underlying mechanism converting energy from the expanding universe into dark matter and radiation. The energy build-up first accounts for the mass to be generated and is then subject to a decay, assumed to be mediated by the weak interaction. Furthermore, the universe has been taken to be flat today while allowing for a limited amount of curvature (open universe) towards the past, corresponding to the deficiency of dark matter at that redshift.

The background has been calculated/simulated from the first Friedmann equation with specific contributions for the standard thermal neutrinos and the generated dark matter.
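As a sketch of such a background calculation, the age of the universe follows from the first Friedmann equation as ${t}_{0}={\int }_{0}^{1}\mathrm{d}a/\left(aH\left(a\right)\right)$. The version below keeps only the matter and dark-energy terms, whereas the paper's background adds the thermal and generated neutrino contributions:

```python
import numpy as np

def age_gyr(H0=70.0, Om=0.28, Ode=0.72, n=200_000):
    """t0 = integral of da / (a H(a)) for a matter + Lambda background.

    Standard flat-LCDM sketch; the paper's model includes additional
    neutrino terms in H(a).
    """
    h_inv_gyr = 977.8 / H0  # 1/H0 in Gyr for H0 in km/s/Mpc
    a = np.linspace(1e-8, 1.0, n)
    integrand = 1.0 / (a * np.sqrt(Om / a**3 + Ode))
    # trapezoidal rule over the scale factor
    return h_inv_gyr * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                              * np.diff(a))

print(f"t0 ~ {age_gyr():.2f} Gyr")
```

With the best-fit ${\Omega}_{m,0}=0.28$ and ${H}_{0}=70.0$ km/s/Mpc this matter-plus-Lambda integral gives about 13.7 Gyr; the extra radiation-like terms in the paper's model shift the quoted ages from this baseline.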

Using commonly accepted input parameter values and the constraints required by the concept, the dark matter was found to be composed of the standard neutrinos, but with a mass of $1.19\pm 0.19$ eV/c^{2} . Other results were a somewhat younger universe with ${t}_{0}=13.5\pm 0.5$ Gyr and ${N}_{eff,CMB}=3.81\pm 0.28$ , with a contribution of $0.84\pm 0.11$ from the thermal neutrinos and $2.97\pm 0.26$ from the generated ones. All these results are compatible with currently observed values or, at least, not excluded by them as far as the neutrino mass is concerned. The model hints at a possible natural explanation for the difference in Hubble constant from CMB ( ${H}_{0}=67.3\pm 1.2$ km/s/Mpc) [2] and LSS ( ${H}_{0}=73.24\pm 1.74$ km/s/Mpc) [16] data.

Allowing the input values of ${\Omega}_{b}$ , ${\Omega}_{m,0}$ , ${H}_{0}$ and R to vary, the best fit of the model to the $H\left(z\right)$ and SN1a data has been found for ${\Omega}_{b}=0.049$ , ${\Omega}_{m,0}=0.28$ , ${H}_{0}=70.0$ km/s/Mpc and $R=1.0$ GeV/fm, resulting in ${\chi}^{2}$ values of 21.0 and 562.4 respectively. Compared to those of ΛCDM, using the same parameter inputs, the proposed model has yielded a better fit to the $H\left(z\right)$ data and an equivalently good one to the SN1a data.

Certain future experiments may validate or falsify the proposed model. Once an accurate neutrino mass has been found, its value can be compared with the model prediction. Smaller error bars on measurements of the effective number of neutrinos may also confirm or rule out the calculated value of ${N}_{eff}$ .

A peculiarity of the approach taken is that the generated dark matter depends by definition on the baryons. This might explain the mysterious conspiracy observed between those components.

Conflicts of Interest

The author declares no conflicts of interest.

[1] Lesgourgues, J., Mangano, G., Miele, G. and Pastor, S. (2013) Neutrino Cosmology. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9781139012874

[2] Planck Collaboration (2014) Planck 2013 Results. XVI. Cosmological Parameters. Astronomy & Astrophysics, 571, A16. arXiv:1303.5076

[3] Corda, C. (2009) Interferometric Detection of Gravitational Waves: The Definitive Test for General Relativity. International Journal of Modern Physics D, 18, 2275-2282. https://doi.org/10.1142/S0218271809015904

[4] Perkins, D. (2009) Particle Astrophysics. 2nd Edition, Oxford University Press, Oxford.

[5] Peccei, R.D. and Quinn, H.R. (1977) CP Conservation in the Presence of Pseudoparticles. Physical Review Letters, 38, 1440-1443. https://doi.org/10.1103/PhysRevLett.38.1440

[6] Bagla, J.S. and Padmanabhan, T. (2004) Cosmological N-Body Simulations. arXiv:astro-ph/0411730

[7] Swaters, R., et al. (2003) The Central Mass Distribution in Dwarf and Low Surface Brightness Galaxies. The Astrophysical Journal, 583, 732-751. https://doi.org/10.1086/345426

[8] Klypin, A., Kravtsov, A., Valenzuela, O. and Prada, F. (1999) Where Are the Missing Galactic Satellites? The Astrophysical Journal, 522, 82-92. arXiv:astro-ph/9901240. https://doi.org/10.1086/307643

[9] Markovič, K. and Viel, M. (2014) Lyman-α Forest and Cosmic Weak Lensing in a Warm Dark Matter Universe. Publications of the Astronomical Society of Australia, 31, e006. arXiv:1311.5223

[10] Hinshaw, G., et al. (2013) Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results. The Astrophysical Journal Supplement Series, 208, 19. arXiv:1212.5226

[11] Bettini, A. (2008) An Introduction to Elementary Particle Physics. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511809019

[12] Carroll, S. (2014) Spacetime and Geometry: An Introduction to General Relativity. International Edition, Pearson, Harlow.

[13] Povh, B., Rith, K., Scholz, C. and Zetsche, F. (2008) Particles and Nuclei. 6th Edition, Springer, Berlin-Heidelberg.

[14] Dodelson, S. (2003) Modern Cosmology. Academic Press, Chicago.

[15] Schröder, U. (1990) Special Relativity. World Scientific Lecture Notes in Physics, Vol. 33, World Scientific, Singapore.

[16] Riess, A., et al. (2016) A 2.4% Determination of the Local Value of the Hubble Constant. The Astrophysical Journal, 826, 56.

[17] Fixsen, D. (2009) The Temperature of the Cosmic Microwave Background. The Astrophysical Journal, 707, 916-920. https://doi.org/10.1088/0004-637X/707/2/916

[18] Lesgourgues, J. and Pastor, S. (2014) Neutrino Cosmology and Planck. New Journal of Physics, 16, 065002.

[19] Thomson, M. (2013) Modern Particle Physics. Cambridge University Press, Cambridge.

[20] Huerta, H. and Pérez, N.A. (1992) High Energy Phenomenology. World Scientific, Singapore. https://doi.org/10.1142/1597

[21] Chen, Y., Kumar, S. and Ratra, B. (2016) Determining the Hubble Constant from Hubble Parameter Measurements.

[22] Dolgov, A.D. (1999) Dark Matter in the Universe.

[23] Farooq, O., Madiyar, F., Crandall, S. and Ratra, B. (2016) Hubble Parameter Measurement Constraints on the Redshift of the Deceleration-Acceleration Transition, Dynamical Dark Energy, and Space Curvature.

[24] Suzuki, N., et al. (2012) The Hubble Space Telescope Cluster Supernova Survey: V. Improving the Dark-Energy Constraints above z > 1 and Building an Early-Type-Hosted Supernova Sample. The Astrophysical Journal, 746, 85.

[25] Aseev, V., et al. (2011) An Upper Limit on Electron Antineutrino Mass from Troitsk Experiment. Physical Review D, 84, 112003.

[26] Kraus, C., et al. (2004) Final Results from Phase II of the Mainz Neutrino Mass Search in Tritium β Decay.

[27] Drexlin, G., Hannen, V., Mertens, S. and Weinheimer, C. (2013) Current Direct Neutrino Mass Experiments. Advances in High Energy Physics, 2013, 293986.

