
A new concept has been proposed and elaborated to account for recent observations deviating from ΛCDM and ΛWDM. Using an intermediate energy conversion mechanism in the expanding universe and assuming three neutrino families with identical mass, a neutrino mass of 1.19 ± 0.19 eV/c^{2} has been found, as well as a natural explanation for the difference in Hubble constant as measured by WMAP/Planck and obtained from LSS data. The value for the effective number of neutrinos at the time of decoupling is compatible with the Planck result. The age of the universe is slightly younger at 13.5 ± 0.5 Gyr. At late times, the closure parameter for the neutrino radiation drastically increases but still remains well below that of the baryons, among other energy contributions.

The six parameter ΛCDM model [

In the ΛCDM scenario, an important role is played by cold dark matter. Many interesting scenarios have been proposed to explain the true nature of CDM, including extended theories of gravity [. A popular class of candidates are weakly interacting massive particles (with masses at the GeV/c^{2} scale), which are thought to originate from thermal equilibrium processes in the very early universe. Despite a huge experimental effort, no convincing evidence for them could be found [

Another popular candidate is axions [, very light particles (in the 10^{−6} - 1 eV range); to allow for structure formation they cannot be in thermal equilibrium with the other particles. Experiments indicate that the original axion scenario [

With the currently accepted amount of CDM, N-body simulations [

In this paper, dark matter is examined in a different cosmological model, using light dark matter particles in the eV range i.e. Hot Dark Matter (HDM), but still allowing for structure formation as observed.

First, a new concept is proposed for dark matter generation, starting from well-known physics and a single main additional assumption. Subsequently, the formulas used in the simulations to calculate the background are presented. In the third part, the simulations and results are dealt with, with the emphasis on the approach rather than the details. Finally, the predictions of these simulations are discussed, and three tests are proposed to compare theoretical results with observations.

The first Friedmann equation [

Note that their center of mass is that of the nucleon they originate from, an indispensable feature to allow for structure formation in the universe with light HDM particles which is otherwise not possible [

The evolution of the scale factor a ( t ) of the expanding universe is described by the Friedmann equation [

$$H^2(t) \equiv \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\,\rho_{tot}(t) - \frac{\kappa c^2}{a^2} \tag{1}$$

where H is the Hubble parameter, ρ t o t stands for the total energy density, c is the speed of light, G denotes Newton’s constant and κ is a parameter representing curvature, which is negative in case of an open universe with sub-critical density.

The curvature term can also be represented by a fictive density [

Taking into account the dependencies of the various species on the scale factor (see for example [

$$\left(\frac{\dot{a}}{a}\right)^2 = H_0^2\left[\left(\Omega_g + \Omega_{nr}\right)a^{-4} + \left(\Omega_b + \Omega_{nm} + \Omega_{dmm,0}(z) + \Omega_{dmr,0}(z)\right)a^{-3} + \Omega_{k,0}(z)\,a^{-2} + \Omega_{de}\,a^{0}\right] \tag{2}$$

The subscript 0 refers to the current cosmological time and all z dependent quantities are calculated through to the present time. It should be noted already that Ω k , 0 ( z ) depends on the redshift and is only zero at the present cosmological time t 0 . Similarly, the contributions with dark matter have a z dependence as well. The amount of dark matter radiation can be inferred from the total amount of energy generated per decay minus the energy associated with the mass of the corresponding dark matter particles. In view of the conservation of the “center of energy” [
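As an illustration, Equation (2) can be evaluated directly once the closure parameters are known. The sketch below freezes the z dependent dark matter and curvature contributions at their t_0 values for simplicity; the split of the matter fraction between baryons and generated dark matter is assumed here for illustration only, and all names are ours.

```python
import math

# Illustrative closure parameters (z-dependent terms frozen at their t0 values).
OMEGA_G   = 4.61e-5   # photons
OMEGA_NR  = 3.13e-5   # relativistic neutrino radiation (assumed value)
OMEGA_B   = 0.0463    # baryons
OMEGA_NM  = 0.0       # massive-neutrino matter term (set to zero here)
OMEGA_DMM = 0.233     # generated dark matter, matter-like (assumed split)
OMEGA_DMR = 0.0       # generated dark matter radiation (assumed)
OMEGA_K   = 0.0       # curvature: zero today by construction
OMEGA_DE  = 0.721     # dark energy
H0 = 70.0             # km/s/Mpc (assumed)

def hubble(a):
    """Hubble parameter H(a) from Equation (2), km/s/Mpc."""
    return H0 * math.sqrt(
        (OMEGA_G + OMEGA_NR) * a**-4
        + (OMEGA_B + OMEGA_NM + OMEGA_DMM + OMEGA_DMR) * a**-3
        + OMEGA_K * a**-2
        + OMEGA_DE
    )
```

At a = 1 this reduces to H_0 up to the (small) excess of the total over unity.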

The photon ( n_γ = 411 cm^{−3} ) and standard neutrino ( n_ν = 336 cm^{−3} ) abundances at t_0 follow directly from the equilibrium conditions in the early universe (see for example [

$$\Omega_{nm} = \frac{3\,m_\nu}{93.14\,h^2\ \mathrm{eV}} \tag{3}$$

where m_ν [eV] is the neutrino mass, h is the reduced Hubble constant ( ≡ H_0/100, dimensionless) and the expression accounts for non-instantaneous decoupling. The three neutrino families are accounted for by the explicit factor of three.
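Equation (3) is straightforward to evaluate; a minimal transcription (function name is ours):

```python
def omega_nm(m_nu_eV, h):
    """Equation (3): massive-neutrino closure parameter for three
    degenerate families; m_nu_eV in eV, h dimensionless."""
    return 3.0 * m_nu_eV / (93.14 * h**2)
```

For the values found later in the paper, omega_nm(1.19, 0.70) gives roughly 0.078.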

For the generated dark matter and related curvature considered here, the situation is different and some care is required as the number densities and curvature value are dynamical and deviate from the common scaling. Ω d m m , 0 ( z ) is readily obtained by taking the ratio of the actual dark matter density ρ d m m ( z ) to ρ n m ( z ) ( a − 3 scaling from t 0 ), multiplied by the neutrino mass fraction Ω n m today. Or written mathematically

$$\Omega_{dmm,0}(z) = \frac{\rho_{dmm}(z)}{\rho_{nm,0}\,a^{-3}}\,\Omega_{nm}. \tag{4}$$

For the dark matter radiation, on the other hand, ρ d m r ( z ) has to be compared to the photon density ρ g ( z ) which evolves as a − 4 , resulting in

$$\Omega_{dmr,0}(z) = \frac{\rho_{dmr}(z)}{\rho_{g,0}\,a^{-4}}\,\Omega_{g}. \tag{5}$$

Quantized packets of dark matter and radiation are calculated for logarithmically spaced intervals. At a specific redshift z, the total amount of non-electromagnetically interacting matter and radiation consists of all previous contributions, properly scaled to the redshift of interest, as well as that of the new packet generated at that scale. These quantities can then be converted with Equations (4) and (5) into the desired z dependent closure parameters for the generated dark matter and radiation at t_0. The closure parameter for the curvature is such that, when considered as a density, it makes the universe flat at that redshift, i.e.

$$\Omega_{k,0}(z) = 1 - \Omega_{other,0}(z) \tag{6}$$

where Ω o t h e r , 0 ( z ) stands for the sum of the contributions of the photons Ω g , baryons Ω b , neutrinos Ω n r ( z ) and Ω n m , generated dark matter Ω d m r , 0 ( z ) and Ω d m m , 0 ( z ) and dark energy Ω d e . Note that the universe is assumed to be critical only at t 0 by requiring that the total amount of dark matter today is reached, thereby setting the curvature at the current cosmological time to zero.
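Equations (4)-(6) translate the accumulated packet densities into the z dependent closure parameters; a direct transcription (function and argument names are ours):

```python
def omega_dmm_0(rho_dmm_z, rho_nm_0, a, omega_nm):
    """Equation (4): matter-like dark matter closure parameter at t0,
    from the actual density relative to an a^-3 scaling reference."""
    return rho_dmm_z / (rho_nm_0 * a**-3) * omega_nm

def omega_dmr_0(rho_dmr_z, rho_g_0, a, omega_g):
    """Equation (5): dark matter radiation compared to the photon
    density, which evolves as a^-4."""
    return rho_dmr_z / (rho_g_0 * a**-4) * omega_g

def omega_k_0(omega_other):
    """Equation (6): curvature term making the universe flat at that
    redshift when considered as a density."""
    return 1.0 - omega_other
```

By construction omega_k_0 vanishes when the other contributions sum to unity, i.e. at t_0.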

The dark matter particles are gradually generated during the expansion history of the universe, starting at the moment of baryogenesis (taken here as t = 10^{−4} s)^{1}. In the initial stage enough energy has to be accumulated to account for the particle/antiparticle masses. After that, some additional energy build-up is required to get the system to decay. The average time to do so is crucial in order to calculate the evolution of the dark matter in the universe according to the concept presented here.

^{1}Note already that the predictions and results obtained are quite robust against this value.

In the case of the weak interaction, combining the decay width of a (virtual) Z* boson with energy m_{Z*} which is converted into a neutrino/antineutrino ( C_V = C_A = 1/2 ) [

$$\Gamma_{Z^*\to\nu\bar{\nu}} = \frac{3\,G_F\,(m_{Z^*})^3}{12\sqrt{2}\,\pi}\left(1 - \frac{4m_\nu^2}{(m_{Z^*})^2}\right)^{1/2}\left(1 - \frac{m_\nu^2}{(m_{Z^*})^2}\right). \tag{7}$$

The factor of three stems from the assumption of three neutrino families with degenerate masses and G_F is Fermi's constant. The corresponding decay time directly follows from the relation τ ≡ ℏ/Γ with ℏ the reduced Planck constant. The peculiarity here is that m_{Z*}, which represents the average energy available to a particular decay, is not a constant but is steadily increasing. Its time dependence is given by $m_{Z^*} = H r_p R\,\Delta t_{tot}/c^2$, with Δt_tot the average total elapsed time since the previous decay. Note also from Equation (7) that decays can only occur when m_{Z*} > 2m_ν.

To quantify the time required for reaching this threshold of mass generation, it is important to remember that the rate of energy production dE/dt can be obtained from the expansion rate H applied to the proton radius r_p with its characteristic field strength R. The production rate therefore is dE/dt = H r_p R. Taking into account that the energy difference ΔE_m = H r_p R Δt_m has to equal twice the rest energy m_dm c^2 of a dark matter particle, this leads to

$$\Delta t_m = \frac{2\,m_{dm}\,c^2}{H\,r_p\,R} \tag{8}$$

where Δ t m is the time for mass generation of a particle and antiparticle. Note that the Hubble parameter H ≡ a ˙ / a is used and not the derivative with respect to the scale factor a . See for example [
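Equations (7) and (8) can be transcribed directly. In the sketch below the constants are the standard values of G_F and ℏ in natural units, the below-threshold case returns a vanishing width as noted above, and delta_t_m takes the combined production rate H r_p R as a single argument (unit bookkeeping is left to the caller); all names are ours.

```python
import math

G_F  = 1.1663787e-5     # Fermi constant, GeV^-2 (natural units)
HBAR = 6.582119569e-25  # reduced Planck constant, GeV * s

def gamma_z_nunu(m_zstar, m_nu):
    """Equation (7): width (GeV) of a virtual Z* of energy m_zstar (GeV)
    into nu/nubar, summed over three degenerate families."""
    if m_zstar <= 2.0 * m_nu:
        return 0.0  # below threshold no decay can occur
    x = (m_nu / m_zstar)**2
    return (3.0 * G_F * m_zstar**3 / (12.0 * math.sqrt(2.0) * math.pi)
            * math.sqrt(1.0 - 4.0 * x) * (1.0 - x))

def decay_time(m_zstar, m_nu):
    """tau = hbar / Gamma, in seconds."""
    g = gamma_z_nunu(m_zstar, m_nu)
    return math.inf if g == 0.0 else HBAR / g

def delta_t_m(m_dm_c2, h_rp_R):
    """Equation (8): time to accumulate the particle/antiparticle rest
    energy, given the production rate dE/dt = H * r_p * R."""
    return 2.0 * m_dm_c2 / h_rp_R
```

As a sanity check, at the physical Z mass (91.19 GeV) with negligible neutrino mass the formula reproduces the familiar invisible width of about 0.5 GeV.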

Finding the time for decay Δt_r is a little more elaborate. Starting from the general expression for the probability of a decay, $P(t) = \frac{1}{\tau}\exp(-t/\tau)$, where τ stands for the decay constant, the likelihood that a decay occurs in an interval t + dt is given by the probability that it does not happen in the first n intervals dt multiplied by the probability that it occurs in the next interval dt. The probability P(t) then becomes

$$P(t) = \prod_{i=1}^{n}\left(1 - \frac{1}{\tau_i}\exp\!\left(-\frac{t}{\tau_i\,n}\right)\frac{t}{n}\right)\frac{1}{\tau_{n+1}}\exp\!\left(-\frac{t}{\tau_{n+1}\,n}\right) \tag{9}$$

where dt = t/n has been substituted and t is the total elapsed time for decay. This approximation becomes quite accurate for sufficiently large n, e.g. n = 1000. The value for τ_i follows directly from its definition τ ≡ ℏ/Γ and Equation (7) with $m_{Z^*} = H r_p R\,t/c^2$. The Hubble parameter is that of the background at that time and is given by the Friedmann equation, but applied to the concept of gradually generated dark matter. The value of Δt_tot is calculated as the average over the distribution P(t) and almost coincides with its maximum. The resulting time for the decay process itself then amounts to Δt_r = Δt_tot − Δt_m and the corresponding radiation energy is ΔE_r = H r_p R Δt_r.
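A minimal numerical sketch of Equation (9) and the averaging over P(t). Here tau_of is a stand-in callable for the model-dependent τ(t) obtained from Equation (7); for a constant τ the construction reduces to the usual exponential distribution with mean τ, which serves as a sanity check.

```python
import math

def decay_probability(t, tau_of, n=1000):
    """Equation (9): probability density that the decay occurs after a
    total elapsed time t, split into n sub-intervals dt = t/n.
    tau_of(t') returns the decay constant at elapsed time t'."""
    dt = t / n
    survive = 1.0
    for i in range(1, n + 1):
        tau_i = tau_of(i * dt)
        # probability of NO decay in the i-th interval dt
        survive *= 1.0 - (dt / tau_i) * math.exp(-dt / tau_i)
    tau_next = tau_of(t)
    # ... times the probability density of a decay in the next interval
    return survive * math.exp(-dt / tau_next) / tau_next

def mean_decay_time(tau_of, t_max=20.0, steps=400):
    """Average Delta t_tot over the distribution P(t) (midpoint rule)."""
    ts = [(k + 0.5) * t_max / steps for k in range(steps)]
    ps = [decay_probability(t, tau_of, n=200) for t in ts]
    return sum(t * p for t, p in zip(ts, ps)) / sum(ps)
```

With tau_of = lambda t: 1.0 the mean comes out close to 1, as expected for an exponential decay law.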

In summary, the total elapsed time for a decay results in an energy $\Delta E_{tot} \equiv \Delta E_m + \Delta E_r = H r_p R\,(\Delta t_m + \Delta t_r)$. The mass energy generated is always the same, but the amount of radiation for a specific decay varies and depends on the random decay character. Comparing the distribution for Δt_tot with the Einstein relation $H r_p R\,\Delta t_{tot} = \sqrt{(2 m_\nu c^2)^2 + (2 p_\nu c)^2}$ yields the instantaneous momentum distribution. It will turn out to be fairly sharply peaked and can be replaced by its average momentum. This simplifies the computational effort considerably.

It is well known that, as the universe expands, particles cool down and eventually become non-relativistic if they are massive. However, the type of distribution remains preserved. The key point is that the momentum is inversely proportional to the scale factor a [

$$E_{tot} = \sqrt{m^2c^4 + \frac{a(t_{em})^2}{a(t)^2}\,p_{em}^2\,c^2} \tag{10}$$

where the subscript em stands for emission. Subtracting the mass energy Δ E m from it, the radiation energy Δ E r of the neutrino or antineutrino is readily obtained. To find the evolution of the radiation density ρ d m r of the neutrino population generated up to that time, all particle kinetic energies have to be rescaled correspondingly. A non-thermal distribution results, with a total radiation density ρ d m r and an N e f f , d m r ( z ) .
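Equation (10) and the subtraction of the rest energy can be sketched as follows (function and argument names are ours; energies and momenta in consistent units, e.g. eV):

```python
import math

def total_energy(m_c2, p_em_c, a_em, a):
    """Equation (10): total energy of a particle emitted with momentum
    p_em at scale factor a_em, observed at scale factor a; the momentum
    redshifts as 1/a."""
    return math.sqrt(m_c2**2 + (a_em / a)**2 * p_em_c**2)

def kinetic_energy(m_c2, p_em_c, a_em, a):
    """Radiation (kinetic) part: total energy minus rest energy."""
    return total_energy(m_c2, p_em_c, a_em, a) - m_c2
```

As the universe expands (a grows) the kinetic part decays away and the total energy approaches the rest energy, as described in the text.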

An important parameter often used as an indicator for new physics beyond the ΛCDM model is the effective number of relativistic degrees of freedom N e f f . It is positively correlated with the Hubble constant [

The content of the universe can be considered as a fluid obeying the equation of state p = w ρ c 2 with w the equation of state (e.o.s.) parameter. It applies to a single component as well as to all species together. In the latter case it follows from the weighted average as

$$w_x = \frac{\sum_i w_i\,\Omega_{i,0}\,a^{-3(1+w_i)}}{\sum_i \Omega_{i,0}\,a^{-3(1+w_i)}} \tag{11}$$

where w x is the resulting averaged value and the w i s are the e.o.s. parameters for the individual components. All types of radiation obey w r = 1 / 3 while w m ≈ 0 for cold dark matter, curvature has w k = − 1 / 3 and dark energy is specified here by w d e = − 1 , i.e. a cosmological constant Λ. In view of conservation of the center of energy, both the generated dark matter and corresponding radiation are classified as w m = 0 , as are the baryons they originate from.
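Equation (11) as code; components is a list of (w_i, Ω_i,0) pairs and each density scales as a^(−3(1+w_i)):

```python
def w_average(components, a):
    """Equation (11): density-weighted equation-of-state parameter
    for a mixture of components at scale factor a."""
    num = sum(w * om * a**(-3.0 * (1.0 + w)) for w, om in components)
    den = sum(om * a**(-3.0 * (1.0 + w)) for w, om in components)
    return num / den
```

For a matter plus cosmological constant mix, w_x tends to 0 at early times (matter dominates) and to −1 at late times, as expected.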

The amount of radiation from relativistic particles is often expressed as an effective number of relativistic neutrinos. It is implicitly defined as [

$$\rho_r = \rho_g + \rho_\nu = \left[1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{eff}\right]\rho_g \tag{12}$$

where ρ ν stands for all types of radiation, whether ρ d m r or ρ n r , from relativistic particles and ρ g is the photon density. For the SM neutrinos, the radiation density ρ n r is obtained from [

$$\rho_{nr} = g_\nu \int_0^\infty \frac{4\pi\,(pc)^2\,\mathrm{d}(pc)}{(2\pi\hbar)^3\,c^5}\;\frac{E(p) - mc^2}{\exp\!\left(\dfrac{p_\nu c}{k_B T_\nu}\right) + 1} \tag{13}$$

with $E(p) = \sqrt{m^2c^4 + p^2c^2}$, p_ν and T_ν = T_ν,0/a the momentum and neutrino temperature respectively, and k_B Boltzmann's constant. ρ_g can be derived directly from Boltzmann thermodynamics and obeys the relation [

$$\rho_g = \frac{\pi^4\,(k_B T)^4\,g_\gamma/2}{15\,\pi^2\,\hbar^3 c^5}. \tag{14}$$

g ν = g γ = 2 are the spin degrees of freedom for the neutrinos and photons respectively. Note that the Fermi-Dirac distribution of the neutrinos is independent of their mass. Strictly speaking Equation (12) applies to particles with a thermal distribution but it is also used here to determine N e f f for the generated neutrinos. Their total radiation density is obtained as described above.
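Given a computed ratio ρ_ν/ρ_g, inverting Equation (12) yields N_eff; for three massless thermal neutrino species, each contributing (7/8)(4/11)^{4/3} of the photon density, the standard value of 3 is recovered. A minimal sketch (function name is ours):

```python
def n_eff(rho_nu_over_rho_g):
    """Invert Equation (12): effective number of relativistic neutrinos
    from the ratio of neutrino-like radiation to photon density."""
    return rho_nu_over_rho_g * (8.0 / 7.0) * (11.0 / 4.0)**(4.0 / 3.0)
```

The same inversion is applied in the text to the non-thermal generated neutrino population.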

Finally, a least squares method has been applied to astrophysical data to arrive at a justification of the value of R . To account for the magnitude of the error bars, the following fitting formula has been used [

$$\chi_A^2 = \sum_i \frac{\left|A_i^{th} - A_i^{obs}\right|^2}{\sigma_{A,i}^2} \tag{15}$$

where χ A 2 represents the least squares value, A i t h and A i o b s stand for the theoretical (th) and measured (obs) values of a particular data point respectively, σ A , i accounts for the standard deviation and the sum is over all measurements i .
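Equation (15) as a one-liner (function name is ours):

```python
def chi_squared(theory, observed, sigma):
    """Equation (15): least-squares statistic summed over all
    measurements, each weighted by its standard deviation."""
    return sum((t - o)**2 / s**2 for t, o, s in zip(theory, observed, sigma))
```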

As in the previous section, we restrict ourselves to the evolution of the background parameters. The main task is to solve the Friedmann equation, Equation (2), numerically with the approach and formulas as presented in Section 3. All (other) background quantities can then be derived from it. For this purpose numerical codes have been written in C++ and Python. What happens in the very early universe is essentially irrelevant for our purposes here. Therefore, the simulations start at t_s = 10^{−4} s, which is sufficiently early as the amount of generated dark matter at that time is close to zero; the exact value of t_s is non-critical and does not influence the results.

In the Friedmann equation, Equation (2), the evolution of the densities of the standard components is implemented in the conventional way with their typical scaling. However, the dark matter fraction generated starts at zero and there is some curvature related to this deviation from “flatness”. From an expansion point of view, the latter components are irrelevant as the universe is completely radiation dominated at that time. Integration is forward in time and, as time evolves, dark matter and radiation are generated and curvature diminishes correspondingly.

After each integration loop, the z dependent dynamical contributions are updated in Equation (2) before the next cycle starts. This is iterated until a = 1. The input parameters are Ω_g = (4.61 ± 0.22) × 10^{−5} (calculated), Ω_b = 0.0463 ± 0.0024, Ω_m,0 = 0.2793 ± 0.024 for the total matter contribution at t_0, Ω_de = 0.721 ± 0.025 [
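A stripped-down version of the integration loop described above: the dynamical dark matter and curvature updates are omitted, so the sketch reduces to a flat background with the standard a-scalings only. It nevertheless shows the structure of the forward integration and, with the quoted input values, reproduces an age near 13.7 Gyr; all constants are placeholders.

```python
import math

# Illustrative inputs (flat background; dynamical terms omitted).
OMEGA_R, OMEGA_M, OMEGA_DE = 9.0e-5, 0.2793, 0.7206
H0 = 70.0 / 977.8  # 70 km/s/Mpc expressed in 1/Gyr

def age_of_universe(steps=20000, a_start=1e-8):
    """t0 = integral of d(ln a)/H(a) from a_start to a = 1 (midpoint rule)."""
    la0 = math.log(a_start)
    dla = -la0 / steps
    t = 0.0
    for k in range(steps):
        a = math.exp(la0 + (k + 0.5) * dla)
        h = H0 * math.sqrt(OMEGA_R * a**-4 + OMEGA_M * a**-3 + OMEGA_DE)
        t += dla / h  # dt = d(ln a) / H
    return t  # in Gyr
```

In the full simulation the closure parameters inside the square root would be re-evaluated each cycle from the packet bookkeeping of Equations (4)-(6).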

The result turns out to be compatible with the observed energy contributions in the universe, with the dark matter particles consisting of the standard model neutrinos (three families) with a (nearly) degenerate mass of 1.19 ± 0.19 eV/c^{2}. An effective number of neutrinos of N_eff,CMB = 3.81 ± 0.28 is found at the CMB ( z_CMB = 1089, [

The age of the universe follows directly from the simulations and, at 13.5 ± 0.5 Gyr, turns out to be slightly younger than in the SM of cosmology [

With the expansion history determined by the (dynamical) evolution of the different components, the energy contributions of dark radiation and matter as generated by the concept used here, can be plotted as a function of a .

In the concept introduced, dark matter particles are thought to be representatively generated by an energy conversion mechanism where the nucleon potential plays a central role. Decays occur statistically and, even for a fixed cosmological time, the momenta of the decay products (neutrinos and antineutrinos) are distributions.

Peak values increase and distributions sharpen as time evolves, as might be expected. Equation (9) has been used with n = 800 as a good approximation everywhere.

The amount of radiation energy is often presented as an effective number of neutrinos N e f f , as if they were massless.

Additional structure formation power comes primarily from the fact that the generated dark matter originates from the baryons and inherits their instantaneous density perturbations due to the center of energy principle of the (relativistic) particle/antiparticle configuration. Consequently, baryons and dark matter are more strongly related than usual. The fact that dark matter is thought to originate from the baryon overdensities might also explain the mysterious conspiracy of both components as observed [

In ΛCDM the evolution of the universe is determined by one or the other species for long times and transitions are rather short. If dark matter is considered to be continuously generated and the neutrinos contribute considerably, the story is different.

Up to this point, the amount of dark matter and baryons, constituting the total matter content, have been considered as fixed. It is a promising sign that the outcome is so good with a single additional assumption and without fine tuning. Simulations allow us to vary the matter contribution and quantify the impact on the main quantities of interest here.

According to the simulations a large matter fraction above 0.35 is not possible in view of the resulting young age of the universe, the small value for N e f f , C M B and the large neutrino mass which would be in conflict with neutrino mass experiments.

Finally, one could go a step further and try to fit the main model input parameters, the baryon content Ω b , the matter contribution Ω m , 0 , the local Hubble constant H 0 as well as the potential R to astrophysical data.

A key parameter in the model, and the only new one here, is the string constant R which is monotonically related to the neutrino mass m ν . Higher values of R result in lower mass bounds. To investigate the influence of the value adopted here, the proposed model has been fitted to both H ( z ) measurements [

Allowing the field strength to vary in the range 0.1 - 2.2 GeV/fm in steps of 0.1 GeV/fm and considering the baryon content in between 0.042 and 0.049 (steps of 0.001), the total amount of matter in the range 0.20 - 0.32 (steps of 0.01) and H_0 varying from 67.5 to 75 km/s/Mpc (steps of 0.5 km/s/Mpc), the parameter space has been scanned to arrive at the best fit. A least squares method has been used, supplemented by the criterion N_eff = 3.84 [

The best fit is obtained for Ω_b = 0.049, Ω_m,0 = 0.28, H_0 = 70.0 km/s/Mpc and R = 1.0 GeV/fm. With those input parameters the main output values would amount to m_ν = 1.08 eV/c^{2}, N_eff,CMB = 3.85, t_0 = 14.1 Gyr, N_eff,t0 = 408, z_eq = 1957 and z_de = 0.38.
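The brute-force scan over the quoted grids can be sketched as follows. Here model_chi2 is a hypothetical stand-in for the full simulation pipeline; it is a toy function minimized at the quoted best-fit point, for illustration of the scan structure only.

```python
import itertools

def model_chi2(ob, om, h0, r):
    """Hypothetical stand-in for chi^2 from the full simulation; a toy
    bowl centred on the quoted best-fit values (illustration only)."""
    return (((ob - 0.049) / 0.001)**2 + ((om - 0.28) / 0.01)**2
            + ((h0 - 70.0) / 0.5)**2 + ((r - 1.0) / 0.1)**2)

def grid_scan():
    """Scan the grids quoted in the text and return the best-fit tuple
    (Omega_b, Omega_m0, H0, R)."""
    obs = [0.042 + 0.001 * i for i in range(8)]   # 0.042 .. 0.049
    oms = [0.20 + 0.01 * i for i in range(13)]    # 0.20 .. 0.32
    h0s = [67.5 + 0.5 * i for i in range(16)]     # 67.5 .. 75.0 km/s/Mpc
    rs  = [0.1 + 0.1 * i for i in range(22)]      # 0.1 .. 2.2 GeV/fm
    return min(itertools.product(obs, oms, h0s, rs),
               key=lambda p: model_chi2(*p))
```

In the real analysis each grid point would require a full background simulation plus the N_eff criterion before the chi-squared comparison to the H(z) and SN1a data.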

Starting from the Planck results [

The result of the best fit has been shown in the corresponding figure: a better fit to the observed H(z) data and a similar fit to the SN1a distance moduli compared to the ΛCDM model.

A first firm prediction of the analytical work and simulations presented here is the mass of the dark matter particle ( m ν = 1.19 ± 0.19 eV/c^{2} ) and the fact that they are generated by the neutral current weak interaction and therefore are thought to be the standard model neutrinos. Note that the cosmology presented here is quite different from ΛCDM, which impacts the predictions considerably. With a value of m ν = 1.19 eV/c^{2} the predicted neutrino mass is just outside the range of experiments performed by Troitsk [

A second prediction is related to the effective number of neutrinos N e f f and much effort is done to experimentally determine its value at various stages of the cosmological evolution. As can be inferred from

It is well known that there is some tension between the value of the Hubble constant as obtained from CMB data and that inferred from observations in the recent universe, see for example [

A new concept has been proposed for the generation of dark matter, based on the nucleon with its characteristic radius and potential, which is thought to be representative of an underlying mechanism converting energy from the expanding universe into dark matter and radiation. Energy build-up first accounted for the mass to be generated and was then subject to a decay which has been assumed to be mediated by the weak interaction. Furthermore, the universe has been taken to be flat today but allowing for a limited amount of curvature (open universe) towards the past, corresponding to the deficiency of dark matter at that redshift.

The background has been calculated/simulated from the first Friedmann equation with specific contributions for the standard thermal neutrinos and the generated dark matter.

Using commonly accepted input parameter values and the constraints required by the concept, the dark matter was found to be composed of the standard neutrinos but with a mass of 1.19 ± 0.19 eV/c^{2} . Other results were a somewhat younger universe with t 0 = 13.5 ± 0.5 Gyr and N e f f , C M B = 3.81 ± 0.28 with a contribution of 0.84 ± 0.11 from the thermal neutrinos and 2.97 ± 0.26 from the generated ones. All these results are compatible with currently observed values or, at least, not excluded by them as far as the neutrino mass is concerned. The model hints at a possible natural explanation for the difference in Hubble constant from CMB ( H 0 = 67.3 ± 1.2 km/s/Mpc) [

Allowing the input values of Ω b , Ω m , 0 , H 0 and R to vary, the best fit of the model to the H ( z ) and SN1a data has been found for Ω b = 0.049 , Ω m , 0 = 0.28 , H 0 = 70.0 km/s/Mpc and R = 1.0 GeV/fm, resulting in χ 2 values of 21.0 and 562.4 respectively. Compared to those of ΛCDM, using the same parameter inputs, the proposed model has yielded a better fit to the H ( z ) data and an equivalently good one to the SN1a data.

Certain future experiments may validate or falsify the proposed model. Once an accurate neutrino mass has been found, its value can be compared with the model prediction. Smaller error bars on measurements of the effective number of neutrinos may also confirm or rule out the calculated value of N e f f .

A peculiarity of the approach taken is that the generated dark matter depends by definition on the baryons. This might explain the mysterious conspiracy observed between those components.

Defloor, H. (2017) New Dark Matter Generation Mechanism and Its Implications for the Cosmological Background. Journal of High Energy Physics, Gravitation and Cosmology, 3, 791-807. https://doi.org/10.4236/jhepgc.2017.34058